What are Weights and Biases?


Weights and biases are neural network parameters that determine how the network identifies patterns in machine learning data.

Weights and biases shape how a neural network moves data forward through its layers; this is called forward propagation. Once forward propagation is complete, the network uses the errors that emerged to refine its connections: the flow reverses back through the layers to identify the nodes and connections that require adjusting; this is referred to as backward propagation.
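
As a minimal sketch (all values here are hypothetical, chosen only for illustration), forward propagation for a single neuron is a weighted sum of its inputs plus a bias:

    # A minimal sketch of forward propagation for one neuron (hypothetical values).
    # Each incoming connection contributes its input scaled by its weight.

    inputs  = [0.5, -1.0, 2.0]      # signals from the previous layer
    weights = [0.8, 0.2, -0.4]      # one weight per incoming connection
    bias    = 0.1                   # shifts the result regardless of the inputs

    signal = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(signal)                   # 0.4 - 0.2 - 0.8 + 0.1 = -0.5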

Weights are values that govern the connection between two basic units within a neural network.

To train the network, the weights applied to unit signals are increased or decreased. The resulting connections are then tested, the flow is reversed through the network to identify errors, and the process repeats until it produces the optimal results.
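
A hedged sketch of that adjust-and-repeat cycle, using a hypothetical one-weight model and learning rate:

    # A toy training loop (all values hypothetical): test the output,
    # measure the error, nudge the weight, and repeat.

    x, target = 2.0, 1.0            # input and the expected output
    w, learning_rate = 0.9, 0.1     # initial weight and step size

    for step in range(3):
        prediction = w * x                  # forward pass
        error = prediction - target        # how far off the output is
        w -= learning_rate * error * x     # reverse pass: adjust the weight
        print(step, round(prediction, 3), round(w, 3))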

Biases in neural networks are additional crucial units that help send data to the correct end unit. A bias is a unit separate from the units already in place within the network: it is attached to the middle (hidden) units to help influence the end product, but it cannot be attached to the initial input units. Like weights, biases are adjusted by reversing the neural network flow in order to produce the most accurate end result. Because a bias is added regardless of the input, it will activate a signal and push the data forward even if the previous unit has a value of zero.
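
The zero-input case is easy to see in a short sketch (values hypothetical):

    # A bias lets a neuron emit a signal even when its input is zero.

    def neuron(x, w, b):
        return w * x + b

    print(neuron(0.0, 0.7, 0.0))   # 0.0 -- with no bias, a zero input stays silent
    print(neuron(0.0, 0.7, 0.3))   # 0.3 -- the bias still pushes a signal forward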

Terms Related to Weights and Biases

Neuron - A basic unit of the neural network that holds an individual data feature.

Layers - Collections of connected neurons. A first set of neurons (the input layer) connects to another set of neurons, and this continues until the final layer of neurons (the output layer) is reached. Weights affect the neuron connections between layers.

Hidden Layers - Layers between the input layer and the output layer. This is where artificial neurons take in a set of weighted inputs and produce an output through an activation function.


Activation and Loss Functions

Activation Function - A function that turns a neuron's combined input into its output. The weighted data from neurons in the previous layer determines the value the neuron passes to the next layer. (A common activation function is sketched after these definitions.)

Loss Function - A function that measures the difference between the expected algorithm output and the actual output. The loss, or error, quantifies how far off the algorithm's predictions are.

Regularization - A technique that keeps connection weights small. Weight values can grow too large to be usable; regularization brings them back down to a manageable range, which improves the model's behavior.

Parameters - The weights and biases of neuron connections.

Value - A bias is typically implemented as an extra unit with a constant value of 1 attached to the hidden layers. Feeding that constant into the activation function shifts its output, steering the neural network toward a more optimal product. Adding this value to hidden layers helps the machine learning model learn how data produced by the input layer should be used.
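
To make these definitions concrete, here is a minimal sketch assuming a sigmoid activation and a squared-error loss, two common (but not the only) choices:

    import math

    def sigmoid(z):
        # Activation function: squashes the weighted sum into a value between 0 and 1.
        return 1.0 / (1.0 + math.exp(-z))

    def squared_error(prediction, target):
        # Loss function: how far the output is from the expected value.
        return (prediction - target) ** 2

    output = sigmoid(0.5)                          # neuron output for a weighted sum of 0.5
    print(round(output, 3))                        # ~0.622
    print(round(squared_error(output, 1.0), 3))    # ~0.143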


Why are Weights and Biases Important?

Weights and biases are crucial components of a neural network. The neural network processes the characteristics of a data subject (like an image or audio clip) and produces an identification of the subject.

  • Weights set the strength of each neuron's signal. This value determines how much influence the input data has on the output product.

  • Biases supply extra input, with a constant value of 1, that the neural network did not previously have. The network needs that extra information to propagate forward efficiently.

  • Together, weights and biases sharpen the neurons and their connections so that the network gives an accurate output.


Examples of Weights and Biases

Neural networks were designed to mimic how the human brain differentiates and organizes inputs.

For example, to train an AI model to identify the letters A, B, and C, the neural network needs to learn the shapes that make up each letter. For the letter C, the model must detect three shapes: a top curve, a slightly bent line on the left, and a bottom curve. If a top curve is detected, the signal propagates forward to the next layer. However, neurons can be triggered by mistake when the data is miscalculated: the model might see the top curve of a B or the left line of an A and misclassify those letters as C.

Weights and biases further define the importance of signals and of data features that have not yet been identified, and these adjustments help eliminate neural network errors. Adding a bias to a hidden layer supplies a data characteristic that was missed in a previous iteration, while shifting a signal's weight helps the machine learning model decide how much importance to give the calculated data.


Weights and Biases FAQs

What are weights and biases used for?

Weights and biases give models the information necessary to propagate data forward and produce adequate output.

What is a neural network?

A neural network is an algorithm built to work like a human brain. It is composed of multiple layers of neurons. It starts with an input layer of independent neurons that do not rely on any weighted signal; this layer introduces the primary data. The input layer then feeds into one or more hidden layers, whose neurons and biases assign value to the data and sort it toward the output layer. The output layer expresses the data identification for the machine learning model.
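
As a hedged sketch of that structure (all weights and biases below are hypothetical), a forward pass through an input layer, one hidden layer, and an output layer might look like:

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def layer(inputs, weights, biases):
        # One neuron per weight row: weighted sum, plus bias, through the activation.
        return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
                for row, b in zip(weights, biases)]

    inputs = [0.5, -0.2]                               # input layer: primary data
    hidden = layer(inputs, [[0.4, 0.9], [-0.7, 0.3]], [0.1, 0.0])
    output = layer(hidden, [[1.2, -0.8]], [0.05])      # output layer: identification
    print(output)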

Can weights and biases be overused?

Weights should be used as needed by the neural network. Occasionally, a neural network is overtrained and produces large, unmanageable weights that clutter the neuron signals. This clutter makes the model too complex and leads to a machine learning problem called overfitting, in which the model picks up unnecessary data, or noise, that distorts its output predictions. When overfitting occurs, a weight regularization method needs to be implemented. Regularization keeps connection weights small through learning algorithm updates, which improves the model's generalization, meaning its ability to adapt to new data.
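
One common form is L2 regularization, often applied as weight decay; a minimal sketch (with a hypothetical decay rate) looks like:

    # Weight decay: shrink every weight slightly on each update so values
    # stay small and manageable. The decay rate here is hypothetical.

    weights = [3.2, -4.1, 0.7]     # weights that grew large during training
    decay = 0.01                   # regularization strength

    weights = [w * (1 - decay) for w in weights]
    print(weights)                 # each weight pulled a little toward zero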

Biases are added features meant to help specify what a correct answer should look like. There can only be one bias per neuron layer.


Weights and Biases Resources

H2O.ai takes a more in-depth look at neural networks in the Neural Network wiki page.

For more information on how to implement this tool in a machine learning model, check out H2O.ai's Deep Learning Booklet.