What is a neural network
A neural network is a collection of neurons connected by layers. Each neuron is a small computing unit that performs simple calculations; together, the neurons collectively solve a problem. Neurons are organized into three types of layers: an input layer, one or more hidden layers, and an output layer. Each layer contains a number of neurons. Neural networks mimic the way a human brain processes information.
Components of a neural network
- Activation function determines whether a neuron should be activated or not. The computations that happen in a neural network include applying an activation function. If a neuron activates, it means the input is important. There are different kinds of activation functions, and the choice of which one to use depends on what you want the output to be. Another important role of an activation function is to add non-linearity to the model.
  - Binary is used to set an output node to 1 if the function result is positive and 0 if the function result is negative. $f(x) = \begin{cases} 0, & \text{if } x < 0 \\ 1, & \text{if } x \geq 0 \end{cases}$
  - Sigmoid is used to predict the probability of an output node being between 0 and 1. $f(x) = \frac{1}{1 + e^{-x}}$
  - Tanh is used to predict if an output node is between 1 and -1. Used in classification use cases. $f(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$
  - ReLU is used to set the output node to 0 if the function result is negative, and keeps the result value if the result is a positive value. $f(x) = \begin{cases} 0, & \text{if } x < 0 \\ x, & \text{if } x \geq 0 \end{cases}$
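The four activation functions above can be sketched in plain Python as a minimal illustration (the function names are my own; in practice a library such as PyTorch provides these):

```python
import math

def binary_step(x):
    # 0 if the input is negative, 1 otherwise
    return 0 if x < 0 else 1

def sigmoid(x):
    # squashes any input into the range (0, 1)
    return 1 / (1 + math.exp(-x))

def tanh(x):
    # squashes any input into the range (-1, 1)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def relu(x):
    # 0 for negative inputs, identity for non-negative inputs
    return 0 if x < 0 else x
```

Note that sigmoid and tanh saturate for large inputs, while ReLU does not, which is one reason ReLU is a common default in hidden layers.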
- Weights influence how close the output of our network comes to the expected output value. As an input enters the neuron, it is multiplied by a weight value, and the resulting output is either observed or passed to the next layer in the neural network. Weights for all neurons in a layer are organized into one tensor.
- Bias makes up the difference between the activation function's output and its intended output. A low bias suggests that the network is making more assumptions about the form of the output, whereas a high bias value makes fewer assumptions about the form of the output.
We can say that the output $y$ of a neural network layer with weights $W$ and bias $b$ is computed as the summation of the inputs multiplied by the weights plus the bias, $x = \sum (weights \times inputs) + bias$, with $y = f(x)$, where $f$ is the activation function.
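The layer computation above can be sketched as a single neuron in plain Python. This is a minimal illustration; the input, weight, and bias values are made up, and sigmoid is chosen here as the activation function:

```python
import math

def linear(inputs, weights, bias):
    # x = sum(weights * inputs) + bias
    return sum(w * i for w, i in zip(weights, inputs)) + bias

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# One neuron with two inputs (illustrative values only)
x = linear(inputs=[0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
# 0.5*0.8 + (-1.0)*0.2 + 0.1 = 0.3
y = sigmoid(x)  # y = f(x), the neuron's activated output
```

A full layer repeats this computation once per neuron, which is why the weights for a layer are stored together in one tensor.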