# Activation Function

## What Is an Activation Function?

An activation function is a function applied to a neuron's weighted input that determines the neuron's output. Activation functions are used inside neural networks, which are series of algorithms that function like a human brain to find patterns and relationships in sets of data. Different activation functions are used depending on the desired impact and performance of the neural network. A neural network, in turn, is made up of three kinds of layers: input layers, hidden layers, and output layers.

## Why Are Activation Functions Important?

Activation functions are important because they add non-linearity to a neural network; without them, even a deep network collapses into a single linear transformation. Activation functions allow information to be represented in a way that patterns and relationships in the data can be extracted. Since real-world data is not all linear, non-linear activation functions let users find patterns in multidimensional information, which makes the analysis of images, audio, and video possible.

## What are the Main Types of Activation Functions?

### Linear

Linear activation functions are represented by f(x) = x. A linear function only produces an output proportional to its input and cannot model complex data, which means that complex patterns and relationships cannot be found using linear activation functions alone. Linear functions are good for simple sets of data that can be easily interpreted.
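As a minimal sketch, the linear (identity) activation f(x) = x can be written in a few lines of Python:

```python
def linear(x):
    """Identity activation: the output equals the input, f(x) = x."""
    return x

# The activation is directly proportional to the input.
print(linear(-2.0), linear(0.0), linear(3.5))  # -2.0 0.0 3.5
```

Because the output never bends or saturates, stacking layers of this function still yields a straight-line mapping overall.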

### Binary Step

Binary step activation functions activate a neuron only when its input crosses a threshold, producing an on/off (0 or 1) output. They are useful for simple yes/no decisions, but they cannot be used for problems with multiple output classes.
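A short sketch of the binary step function, assuming a threshold of 0 (the threshold value is a free parameter, not fixed by the definition):

```python
def binary_step(x, threshold=0.0):
    """Fire (return 1) if the input exceeds the threshold, otherwise return 0."""
    return 1 if x > threshold else 0

print(binary_step(0.7))   # 1
print(binary_step(-0.3))  # 0
```

Because the output jumps straight from 0 to 1, the function can only express two classes, which is why it cannot handle multi-class problems.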

### Non-Linear

Non-linear functions are the most widely used and make it possible for a neural network to separate data that a straight line cannot. There are several different kinds of non-linear functions, chosen depending on the results needed. The most common non-linear functions are Sigmoid, Tanh, and ReLU.
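The three common non-linear functions named above can be sketched directly from their standard definitions:

```python
import math

def sigmoid(x):
    """Squashes any input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Squashes any input into the range (-1, 1), centered at zero."""
    return math.tanh(x)

def relu(x):
    """Passes positive inputs through unchanged; clips negatives to 0."""
    return max(0.0, x)

print(sigmoid(0.0))            # 0.5
print(tanh(0.0))               # 0.0
print(relu(-2.0), relu(2.0))   # 0.0 2.0
```

Each of these curves bends, which is exactly what lets a stack of layers model relationships that a straight line cannot.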

## When to Use an Activation Function?

Binary step activation functions are used to decide whether a neuron should be activated: the neuron fires only if the input is greater than a threshold. Linear activation functions are used when the activation should be proportional to the input; they suit simple tasks with easy interpretability, while more complex patterns require one of the other kinds of activation functions. Non-linear functions are used when a complex set of data needs to be interpreted; they can handle multi-class classification problems and present information in a way that patterns and relationships can be determined.
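The guidance above can be illustrated with a tiny forward pass: a hedged sketch, not a trained model, using made-up weights for a hypothetical 2-input, 2-hidden-neuron, 1-output network. ReLU adds non-linearity in the hidden layer, and sigmoid squashes the final output into a probability-like score.

```python
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical weights, chosen only for illustration.
hidden_weights = [[0.5, -0.2], [0.3, 0.8]]
output_weights = [0.7, -0.4]

def forward(inputs):
    # Hidden layer: weighted sums passed through ReLU (the non-linear step).
    hidden = [relu(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # Output layer: sigmoid maps the result into (0, 1).
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

print(0.0 < forward([1.0, 2.0]) < 1.0)  # True
```

Swapping the hidden-layer ReLU for the identity function would make the whole network linear again, which shows concretely why the non-linear choice matters.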

## Activation Function Terms

Neural Network: A neural network is a series of algorithms that function similarly to a human brain to find patterns and relationships in sets of data.

Linearity (Linear): Information that follows the pattern of a straight line; it is not complex.

Input Layer: Provides information from outside into the network; no computation is performed in this layer.

Hidden Layer: All computation is performed in the hidden layers; once finished, the result is passed to the output layer.

Output Layer: The information that was learned by the network is shared with the outside world.