- Activation Function
- Confusion Matrix
- Convolutional Neural Networks
- Forward Propagation
- Generative Adversarial Network
- Gradient Descent
- Linear Regression
- Logistic Regression
- Machine Learning Algorithms
- Multilayer Perceptron
- Naive Bayes
- Neural Networking and Deep Learning
- RuleFit
- Stack Ensemble
- Word2Vec
- XGBoost

- Attention Mechanism
- BERT
- Binary Classification
- Classify Token ([CLS])
- Conversational Response Generation
- GLUE (General Language Understanding Evaluation)
- GPT (Generative Pre-Trained Transformers)
- Language Modeling
- Layer Normalization
- Mask Token ([MASK])
- Probability Distribution
- Probing Classifiers
- SQuAD (Stanford Question Answering Dataset)
- Self-attention
- Separate token ([SEP])
- Sequence-to-sequence Language Generation
- Sequential Text Spans
- Text Classification
- Text Generation
- Transformer Architecture
- WordPiece

- AUC-ROC
- Analytical Review
- Autoencoders
- Bias-Variance Tradeoff
- Decision Optimization
- Explanatory Variables
- Exponential Smoothing
- Level of Granularity
- Long Short-Term Memory
- Loss Function
- Model Management
- Precision and Recall
- Predictive Learning
- ROC Curve
- Recommendation system
- Stochastic Gradient Descent
- Target Leakage
- Target Variable
- Underwriting

Generative adversarial networks (GANs) are machine learning (ML) models in which two neural networks compete to produce more accurate predictions. Normally, GANs run unsupervised and learn through a competitive zero-sum game framework.
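In the standard formulation (Goodfellow et al., 2014), this game is a minimax problem: the discriminator D maximizes, and the generator G minimizes, the value function

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

where $p_{\text{data}}$ is the real data distribution and $p_z$ is the noise distribution the generator samples from.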

Generative adversarial networks are widely used for image generation, video generation, and voice generation.

A generative adversarial network (GAN) has two parts:

- The generator learns to produce plausible data; the generated instances serve as negative examples for the discriminator.
- The discriminator learns to distinguish the generator's fake data from real data, and it penalizes the generator for producing implausible results.

During training, the generator at first produces obviously fake data, and the discriminator quickly learns to identify it as such. As training progresses, the generator gets closer to producing output that can fool the discriminator.

If generator training goes well, the discriminator becomes worse at telling the difference between real and fake. Eventually, it starts to classify fake data as real, and its accuracy decreases.
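To make the two roles concrete, here is a minimal training-step sketch in PyTorch. The network shapes, `LATENT_DIM`, `DATA_DIM`, and learning rates are illustrative assumptions, not values from this article:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration: 64-dim noise, 28x28 flattened images.
LATENT_DIM, DATA_DIM = 64, 784

# Generator: low-dimensional noise -> plausible "fake" data.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: data -> probability that the sample is real.
D = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator update: reward correct real/fake classification.
    fake = G(torch.randn(n, LATENT_DIM)).detach()  # detach: don't backprop into G here
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator update: the generator is penalized exactly when
    #    the discriminator refuses to label its output as real.
    loss_g = bce(D(G(torch.randn(n, LATENT_DIM))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The alternating updates mirror the game described above: each step first sharpens the discriminator, then nudges the generator toward output the sharpened discriminator can no longer reject.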

Common applications of GANs include:

- Text-to-image translation
- Image editing and manipulation
- Creating (2-dimensional) images
- Recreating images at higher resolution (super-resolution)
- Creating 3-dimensional objects

A wide variety of applied math and engineering domains work with high-dimensional probability distributions. Training a GAN and sampling from it is an excellent test of our ability to represent and manipulate such distributions.

Generative models can be incorporated into reinforcement learning in several ways; for example, time-series generative models can simulate possible futures for an agent to plan against.

Generative models can be trained with incomplete data and provide predictions from incomplete inputs. For example, in semi-supervised learning, many or even most training examples are missing labels.

Generative adversarial networks enable machine learning to work with multi-modal outputs. For many tasks, a single input may correspond to many different correct answers, each of which is acceptable.

Many tasks intrinsically require generating samples from a distribution.

Variational autoencoders explicitly learn a likelihood distribution through their loss function; generative adversarial networks do not. A GAN's generator learns only to produce samples that can fool the discriminator.
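The contrast is visible in the loss functions themselves. Below is a hedged sketch in PyTorch; `x_hat`, `mu`, `logvar`, and `d_of_fake` are assumed to come from hypothetical encoder, decoder, and discriminator networks not shown here:

```python
import torch
import torch.nn.functional as F

# VAE: the loss is an explicit likelihood bound (the ELBO) on the data,
# combining a reconstruction term with a KL-divergence regularizer.
def vae_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")                    # reconstruction
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())     # KL to N(0, I)
    return recon + kl

# GAN generator: no likelihood term anywhere. The only training signal is
# whether d_of_fake = D(G(z)) can be fooled into saying "real".
def gan_generator_loss(d_of_fake):
    return F.binary_cross_entropy(d_of_fake, torch.ones_like(d_of_fake))
```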

Generative adversarial networks consist of two deep neural networks that act as adversaries against each other. In reinforcement learning, by contrast, a single agent learns to take sequences of actions within a complex environment so as to maximize its cumulative reward.

Generative adversarial networks and autoencoders are both generative models, which means they learn a given data distribution rather than its explicit density. The critical difference is how they do it.

An autoencoder compresses its input down to a vector with far fewer dimensions than the input data, then transforms it back into a tensor with the same shape as the input through several neural network layers. It is trained to reproduce its input, much like learning a compression algorithm for a specific dataset.
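A minimal sketch of that bottleneck structure, again in PyTorch; the 784/32 dimensions and layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical sizes: compress a 784-dim input to a 32-dim code.
DATA_DIM, CODE_DIM = 784, 32

encoder = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(),
                        nn.Linear(128, CODE_DIM))
decoder = nn.Sequential(nn.Linear(CODE_DIM, 128), nn.ReLU(),
                        nn.Linear(128, DATA_DIM))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3)
mse = nn.MSELoss()

x = torch.rand(16, DATA_DIM)      # stand-in batch of input data
code = encoder(x)                 # low-dimensional bottleneck vector
reconstruction = decoder(code)    # back to the input's shape
loss = mse(reconstruction, x)     # trained to reproduce its own input
opt.zero_grad(); loss.backward(); opt.step()
```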

A generative adversarial network looks like an inside-out autoencoder. Instead of compressing high-dimensional data down to a low-dimensional code in the middle, it takes low-dimensional vectors as input and produces the high-dimensional data in the middle of its pipeline, at the generator's output, which the discriminator then consumes.