What is Transfer Learning?

Transfer learning is an approach used to transfer knowledge from one machine learning task to another. Practically speaking, a model pre-trained on one task is repurposed as the starting point for a new task, which can save a great deal of time and resources.

Creating complex models from scratch requires vast amounts of computing resources, data, and time. Transfer learning accelerates the process by leveraging commonalities between tasks (such as detecting edges in images) and applying what has already been learned to a new task. Training time for a model can drop from weeks to hours, making machine learning commercially viable for many more businesses.

Transfer learning is especially popular in domains like computer vision and natural language processing (NLP), where large amounts of data are needed to produce accurate models.

How does Transfer Learning work?

In computer vision, for example, neural networks typically detect edges in the earliest layers, shapes in the middle layers, and task-specific features in the later layers. Transfer learning reuses the early and middle layers and retrains only the later layers, so the new model still benefits from the labeled data on which the original model was trained.
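
As a minimal sketch of this pattern, assuming PyTorch and torchvision (the ten-class head is an arbitrary placeholder for the new task), the early layers are frozen and only a fresh final layer is trained:

```python
import torch.nn as nn
import torchvision.models as models

# Load a network pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the early and middle layers, which capture generic
# features such as edges and shapes.
for param in model.parameters():
    param.requires_grad = False

# Replace the final, task-specific layer with a fresh head for the
# new task; only this layer's weights will be updated in training.
model.fc = nn.Linear(model.fc.in_features, 10)
```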

What are the types of Transfer Learning?

Below are the five different types of transfer learning:

1. Domain adaptation

The concept of domain adaptation applies when the marginal probability distributions of a source domain and a target domain are different: the data the model is deployed on is drawn from a shifted version of the distribution it was trained on.
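
One classic technique for this setting is CORAL (correlation alignment), which transforms the source features so that their second-order statistics match the target domain's. A minimal sketch, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def coral(source, target, reg=1.0):
    """Align source features to the target domain's covariance."""
    d = source.shape[1]
    # Regularized covariance matrices of the two domains.
    cs = np.cov(source, rowvar=False) + reg * np.eye(d)
    ct = np.cov(target, rowvar=False) + reg * np.eye(d)
    # Whiten the source features, then re-color them with the
    # target domain's statistics.
    a = fractional_matrix_power(cs, -0.5) @ fractional_matrix_power(ct, 0.5)
    return source @ a
```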

2. Domain confusion

In a deep learning network, different layers capture different features. Domain confusion exploits this fact: we identify domain-invariant features and improve their transferability between domains by nudging the model to learn representations that are as similar as possible across the source and target domains.
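
A common way to achieve this is the gradient reversal layer from domain-adversarial training (DANN): a domain classifier learns to tell the domains apart, while the reversed gradient pushes the feature extractor toward features that confuse it. A minimal sketch, assuming PyTorch:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the sign of the gradient
    (scaled by lambd) on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient trains the feature extractor to make
        # the source and target domains indistinguishable.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```

Features pass through grad_reverse on their way to the domain classifier, so minimizing the domain classifier's loss simultaneously maximizes domain confusion in the shared features.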

3. Multitask learning

Multitask learning is a slightly different flavor of transfer learning. It involves learning several tasks simultaneously, without distinguishing between source and target. Unlike transfer learning, where the learner initially knows nothing about the target task, in multitask learning the learner receives information about multiple tasks at once.
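
A minimal sketch of the idea, assuming PyTorch (the layer sizes and the choice of tasks are arbitrary placeholders): a shared trunk learns one common representation while separate heads are trained on their respective tasks at the same time.

```python
import torch.nn as nn

class MultitaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared representation used by every task.
        self.trunk = nn.Sequential(nn.Linear(64, 128), nn.ReLU())
        # One task-specific head per task.
        self.classify = nn.Linear(128, 10)  # e.g., a classification task
        self.regress = nn.Linear(128, 1)    # e.g., a regression task

    def forward(self, x):
        shared = self.trunk(x)
        return self.classify(shared), self.regress(shared)
```

Training then minimizes a combined loss (for example, the sum of the per-task losses), so every task shapes the shared trunk simultaneously.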

4. One-shot learning

The one-shot learning approach is a variant of transfer learning in which we try to infer the required output from just one or a few training examples. It is useful in real-world scenarios where it is not feasible to label data for every possible category (if it is a classification task) or in situations where new categories can be added frequently.
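
One simple realization is nearest-neighbor classification in an embedding space: each new class is represented by the embedding of its single labeled example, and a query is assigned to the class of the most similar one. A minimal sketch, assuming NumPy and embeddings produced by some pre-trained encoder:

```python
import numpy as np

def one_shot_classify(query_emb, support_embs, labels):
    """Assign the query to the class of its most similar support
    example (one labeled embedding per class)."""
    support = np.stack(support_embs)
    # Cosine similarity between the query and each support example.
    sims = support @ query_emb / (
        np.linalg.norm(support, axis=1) * np.linalg.norm(query_emb)
    )
    return labels[int(np.argmax(sims))]
```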

5. Zero-shot learning

Zero-shot learning is another extreme variant of transfer learning, which relies on no labeled examples of the target classes at all; the model must instead lean on auxiliary information, such as textual descriptions or attribute vectors, to recognize categories it has never seen.
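
A toy sketch of the attribute-based flavor, assuming NumPy (the data, attribute vectors, and class names are all placeholders): a mapping from features to attributes is learned on seen classes only, and unseen classes are then recognized purely from their attribute descriptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unseen classes are described by attribute vectors (e.g., "striped",
# "four-legged") rather than by labeled examples.
class_attributes = {"zebra": np.array([1.0, 1.0]),
                    "snake": np.array([1.0, 0.0])}

# Learn a feature-to-attribute mapping on *seen* classes only
# (here a least-squares fit on toy data).
X_seen = rng.normal(size=(100, 5))
A_seen = X_seen @ rng.normal(size=(5, 2))  # toy attribute targets
W, *_ = np.linalg.lstsq(X_seen, A_seen, rcond=None)

def zero_shot_classify(x):
    # Predict the attributes of x, then pick the closest description.
    attrs = x @ W
    return min(class_attributes,
               key=lambda c: np.linalg.norm(class_attributes[c] - attrs))
```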

What is the importance of Transfer Learning?

Transfer learning saves time and effort and provides the advantage of building on tried-and-tested models. It also cuts costs by reducing the need for expensive GPU time to retrain models from scratch. More broadly, the goal is to make machine learning more human: able to carry knowledge from one task over to the next.

What are the three theories of Transfer Learning?

The three theories of transfer learning are:

1. Analogy

2. Knowledge compilation 

3. Constraint violation 

Each theory aims to predict human performance in distinct and identifiable ways on various transfer tasks.

What is Transfer Learning in Machine Learning?

Transfer learning is a machine learning method that uses a pre-trained model as the basis for a new model. Essentially, a model trained on one task is repurposed for a second related task to allow rapid progress when modeling the second task.

Examples of Transfer Learning

Transfer learning is used in various ways to strengthen machine learning models for natural language processing. For example, pre-trained embedding layers that already capture a language's vocabulary (or even specific dialects) can be plugged into a new model, which can then learn to detect multiple elements of language without starting from scratch.
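
A minimal sketch of this pattern, assuming PyTorch and the Hugging Face transformers library (the model name and the two-class head are placeholders): the pre-trained encoder is frozen and only a small task-specific head is trained on top of it.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

# Freeze the pre-trained layers that already capture the language.
for param in encoder.parameters():
    param.requires_grad = False

# A small trainable head on top of the frozen representations.
classifier = nn.Linear(encoder.config.hidden_size, 2)

inputs = tokenizer("transfer learning saves time", return_tensors="pt")
features = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] token
logits = classifier(features)
```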

Transfer Learning vs. Other Technologies & Methodologies

Transfer learning vs. fine-tuning

In transfer learning, a model developed for one task is reused for another. Fine-tuning is one way of doing this: the model's output layers are adapted to fit the new task, and only those layers are retrained while the rest of the network is kept as-is.

Transfer learning vs. domain adaptation

Domain adaptation is a subcategory of transfer learning. Domain adaptation assumes the same feature space (but different distributions) across all domains; transfer learning also covers cases where the target domain's feature space differs from the source feature space.

Transfer learning vs. meta-learning

Meta-learning is about learning how to learn, for example, speeding up training or optimizing hyperparameters for networks that have not yet been trained. In contrast, transfer learning takes a network that has already been trained and reuses part of it to train on a new, relatively similar task.

Transfer learning vs. reinforcement learning

Transfer learning involves fine-tuning a model trained on one dataset and task so that it can be applied to a different dataset and task. Reinforcement learning, by contrast, trains an agent to respond to conditions in its environment in ways that maximize its reward.