- Activation Function
- Confusion Matrix
- Convolutional Neural Networks
- Forward Propagation
- Generative Adversarial Network
- Gradient Descent
- Linear Regression
- Logistic Regression
- Machine Learning Algorithms
- Multilayer Perceptron
- Naive Bayes
- Neural Networking and Deep Learning
- RuleFit
- Stack Ensemble
- Word2Vec
- XGBoost

- Attention Mechanism
- BERT
- Binary Classification
- Classify Token ([CLS])
- Conversational Response Generation
- GLUE (General Language Understanding Evaluation)
- GPT (Generative Pre-Trained Transformers)
- Language Modeling
- Layer Normalization
- Mask Token ([MASK])
- Probability Distribution
- Probing Classifiers
- SQuAD (Stanford Question Answering Dataset)
- Self-attention
- Separate token ([SEP])
- Sequence-to-sequence Language Generation
- Sequential Text Spans
- Text Classification
- Text Generation
- Transformer Architecture
- WordPiece

- AUC-ROC
- Analytical Review
- Autoencoders
- Bias-Variance Tradeoff
- Decision Optimization
- Explanatory Variables
- Exponential Smoothing
- Level of Granularity
- Long Short-Term Memory
- Loss Function
- Model Management
- Precision and Recall
- Predictive Learning
- ROC Curve
- Recommendation system
- Stochastic Gradient Descent
- Target Leakage
- Target Variable
- Underwriting


The architecture of a neural network is made up of an input layer, an output layer, and one or more hidden layers. Neural networks themselves, or artificial neural networks (ANNs), are a subset of machine learning designed to mimic the processing power of the human brain. They work by passing data through layers of interconnected artificial neurons.

There are many components to a neural network architecture, but every network has a few in common:

Input - The data fed into the model for learning and training purposes.

Weight - Weights rank the input variables by importance, controlling how much each one contributes to the result.

Transfer function - The transfer function sums all of the weighted inputs into a single value.

Activation function - The activation function decides whether or not a specific neuron should be activated, based on whether that neuron's input is important to the prediction process.

Bias - Bias is a constant that shifts the input to the activation function, moving the point at which a neuron activates.
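The components above can be combined into a single artificial neuron. The sketch below is a minimal, illustrative example in plain Python; the input, weight, and bias values are arbitrary, and sigmoid is just one common choice of activation function.

```python
import math

def neuron(inputs, weights, bias):
    """Forward pass of a single artificial neuron."""
    # Transfer function: weighted sum of the inputs, shifted by the bias
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation function: sigmoid squashes the result into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Example: two inputs, the first weighted far more heavily than the second
output = neuron(inputs=[0.5, 0.2], weights=[0.8, 0.1], bias=-0.3)
```

Real networks stack many such neurons per layer and learn the weights and bias from data rather than fixing them by hand.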

Neural networks are an efficient way to solve machine learning problems and can be used in many situations, offering both precision and accuracy. Finding the correct architecture for each project can increase efficiency. Common types include:

Perceptron - The simplest neural network: a single neuron that applies a mathematical operation to its input values to produce an output value.

Feed-Forward Networks - A multi-layered neural network in which information moves in only one direction: forward from the input layer, through a series of hidden layers, to the output layer.

Residual Networks (ResNet) - A deep feed-forward network, often hundreds of layers deep, that uses skip connections to keep such deep stacks trainable.
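A feed-forward pass, and the skip connection that ResNet adds to it, can be sketched in a few lines. This is a minimal illustration rather than a production implementation: the 2-3-1 layer sizes, the weights, and the sigmoid activation are all arbitrary choices.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense_layer(inputs, weights, biases):
    """One layer: each neuron takes a weighted sum plus bias, then activates."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def feed_forward(x, layers):
    """Information flows one way: input -> hidden layers -> output."""
    for weights, biases in layers:
        x = dense_layer(x, weights, biases)
    return x

def residual_block(x, transform):
    """ResNet's skip connection: add the input back to the block's output,
    y = f(x) + x, which keeps gradients flowing through very deep stacks."""
    return [fi + xi for fi, xi in zip(transform(x), x)]

# Hypothetical 2-3-1 network: 2 inputs, one hidden layer of 3 neurons, 1 output
net = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[0.6, -0.4, 0.2]], [0.05]),                                # output layer
]
prediction = feed_forward([1.0, 0.5], net)
```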

Recurrent neural networks (RNNs) retain a memory of earlier inputs and use it to make accurate predictions about what comes next. Variants include:

Long short-term memory network (LSTM) - LSTM adds extra structures, or gates, to an RNN to improve its memory capabilities.

Echo state network (ESN) - A type of RNN whose hidden layers are sparsely connected.
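The recurrence that gives RNNs their memory can be sketched as a single cell applied step by step. The weights below are arbitrary illustrative values; a real RNN learns them from data and carries vectors, not single numbers.

```python
import math

def rnn_step(x, h, w_in, w_rec, bias):
    """The new hidden state mixes the current input with the remembered state."""
    return math.tanh(w_in * x + w_rec * h + bias)

def run_rnn(sequence, w_in=0.7, w_rec=0.9, bias=0.0):
    h = 0.0  # the "memory" carried forward from step to step
    for x in sequence:
        h = rnn_step(x, h, w_in, w_rec, bias)
    return h

# Because the state carries history, the order of inputs matters:
a = run_rnn([1.0, 0.0])
b = run_rnn([0.0, 1.0])  # same inputs, different order, different result
```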

Convolutional neural networks (CNNs) are a type of feed-forward network used for image analysis and language processing. Their hidden convolutional layers detect patterns using features such as edges, shapes, and textures. Examples of CNNs include:

AlexNet - Contains multiple convolutional layers designed for image recognition.

Visual geometry group (VGG) - VGG is similar to AlexNet, but deeper, with more layers of narrow (small-filter) convolutions.

Capsule networks - Contain nested capsules (groups of neurons) to create a more powerful CNN.
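The pattern-detecting convolution at the heart of a CNN can be sketched in one dimension. The two-element kernel below is a hypothetical edge detector: it responds only where neighbouring values differ.

```python
def conv1d(signal, kernel):
    """Slide the kernel across the signal; each output is a local weighted sum."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# The [-1, 1] kernel fires at the step from 0 to 1 and nowhere else
edges = conv1d([0, 0, 0, 1, 1, 1], [-1, 1])  # → [0, 0, 1, 0, 0]
```

Image CNNs do the same thing in two dimensions, and learn the kernel values instead of hand-picking them.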

Generative adversarial networks (GANs) are a form of unsupervised learning in which new data is generated from patterns discovered in the input data. GANs have two main parts that compete against one another:

Generator - Creates synthetic data using what the model learned during training; it takes random noise as input and generates a transformed output, such as an image.

Discriminator - Decides whether the samples produced by the generator are genuine or fake.

GANs are used for tasks such as predicting the next frame of a video, text-to-image generation, and image-to-image translation.
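The competition between the two parts comes down to their loss functions. The sketch below shows the standard GAN objectives (binary cross-entropy on the discriminator's scores); the score values are made up purely for illustration.

```python
import math

def discriminator_loss(d_real, d_fake):
    """The discriminator wants real samples scored near 1 and fakes near 0."""
    return -math.log(d_real) - math.log(1.0 - d_fake)

def generator_loss(d_fake):
    """The generator wants its fakes to be scored as real."""
    return -math.log(d_fake)

# A confident, correct discriminator has low loss...
good = discriminator_loss(d_real=0.95, d_fake=0.05)
# ...while one fooled into scoring everything 50/50 has high loss.
fooled = discriminator_loss(d_real=0.5, d_fake=0.5)
```

Training alternates between the two: the discriminator's loss falls as it tells real from fake, and the generator's loss falls as its fakes fool the discriminator.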

Unlike RNNs, transformer neural networks have no built-in concept of timestamps. This lets them process all of the inputs in a sequence at once, making them a more efficient way to process data.
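The mechanism that lets a transformer look at every position at once is self-attention. Below is a minimal scaled dot-product attention sketch in plain Python; the three 2-dimensional input vectors are arbitrary, and real transformers also apply learned projection matrices to produce the queries, keys, and values.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product attention: every position attends to every other
    position simultaneously, instead of step by step as in an RNN."""
    d = len(queries[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(dimension)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is a weighted average of all the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three positions, 2-dimensional vectors; Q = K = V for plain self-attention
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = self_attention(x, x, x)
```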

Deep learning is a continually developing area of study and neural networks are at the core of it. With the main objective being to replicate the processing power of a human brain, neural network architecture has many more advancements to make. A few applications of neural network development are image compression, stock market prediction, banking, and computer security.
