Introduction
This article is Part 1 of a planned three-part series. The proposed content is as follows:
- Part 1: This one, an introduction to Perceptron networks (single-layer neural networks)
- Part 2: Will cover multi-layer neural networks and the backpropagation training method, used to solve non-linear classification problems such as the logic of an XOR gate. This is something that a Perceptron can't do, as is explained further within this article
- Part 3: Will cover how to use a genetic algorithm (GA) to train a multi-layer neural network to solve some logic problem
Let's start with some biology
Nerve cells in the brain are called neurons. There are an estimated 10^10 to 10^13 neurons in the human brain. Each neuron can make contact with several thousand other neurons. Neurons are the units the brain uses to process information.
So what does a neuron look like?
A neuron consists of a cell body with various extensions from it. Most of these are branches called dendrites. There is one much longer process (possibly also branching) called the axon. The dashed line shows the axon hillock, where transmission of signals starts.
The following diagram illustrates this.
Figure 1 Neuron
The boundary of the neuron is known as the cell membrane. There is a voltage difference (the membrane potential) between the inside and outside of the membrane.
If the input is large enough, an action potential is then generated. The action potential (neuronal spike) then travels down the axon, away from the cell body.
Figure 2 Neuron Spiking
Synapses
The connections between one neuron and another are called synapses. Information always leaves a neuron via its axon (see Figure 1 above), and is then transmitted across a synapse to the receiving neuron.
Neuron Firing
Neurons only fire when the input is bigger than some threshold. It should, however, be noted that the firing doesn't get bigger as the stimulus increases; it's an all-or-nothing response. Either the neuron fires, or it doesn't.
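This all-or-nothing thresholding is exactly the behaviour the Perceptron borrows from biology, and it can be sketched in a few lines of Python. This is a minimal illustration of my own (the function name, weights, and threshold values are made up for the example, not taken from any particular library): the artificial neuron sums its weighted inputs and outputs 1 only if that sum exceeds the threshold.

```python
# A minimal sketch of all-or-nothing firing: sum the weighted inputs
# and compare the result against a fixed threshold. The names and
# numbers here are illustrative assumptions, not part of the article.
def fires(inputs, weights, threshold):
    # Weighted sum of the inputs (the "stimulus" reaching the neuron).
    activation = sum(i * w for i, w in zip(inputs, weights))
    # All-or-nothing: the output is 1 or 0, never "a bigger spike".
    return 1 if activation > threshold else 0

# The output is the same whether the stimulus barely exceeds the
# threshold or far exceeds it:
print(fires([1.0, 1.0], [0.6, 0.6], 1.0))  # just over threshold -> 1
print(fires([5.0, 5.0], [0.6, 0.6], 1.0))  # far over threshold -> still 1
print(fires([1.0, 0.0], [0.6, 0.6], 1.0))  # under threshold -> 0
```

Note that a much stronger stimulus (the second call) produces exactly the same output as a weak one that just clears the threshold, mirroring the biological behaviour described above.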