Network architectures
1. single-layer feed-forward
2. multi-layer feed-forward
3. recurrent
Learning Algorithms
Depend on the network architecture:
•Error correcting learning (perceptron)
•Delta rule (AdaLine, Backprop)
•Competitive Learning (Self Organizing Maps)
The (McCulloch-Pitts) perceptron is a single-layer NN with a non-linear activation function φ, the sign function.
•The perceptron is used for binary classification.
•Given training examples of classes C1, C2, train the perceptron so that it correctly classifies the training examples:
–If the output of the perceptron is +1, the input is assigned to class C1
–If the output is -1, the input is assigned to class C2
•We try to find suitable values for the weights such that the training examples are correctly classified.
•Geometrically, we try to find a hyperplane that separates the examples of the two classes.
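The error-correcting rule above can be sketched as follows. This is a minimal illustration with hypothetical 2-D data for two linearly separable classes; the learning rate and epoch count are arbitrary choices.

```python
import numpy as np

# Hypothetical toy data: two linearly separable classes in 2-D.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])  # +1 -> class C1, -1 -> class C2

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Error-correcting learning: update weights only on misclassified examples."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1 if w @ xi + b >= 0 else -1  # sign activation
            if pred != yi:
                # Move the separating hyperplane toward the misclassified point.
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:  # all training examples correctly classified
            break
    return w, b

w, b = train_perceptron(X, y)
preds = [1 if w @ xi + b >= 0 else -1 for xi in X]
```

For linearly separable data like this, the loop terminates once every example lies on the correct side of the learned hyperplane w·x + b = 0.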
Multi-layer feed-forward NN (FFNN)
We consider a more general network architecture: between the input and output
layers there are hidden layers, as illustrated below.
Hidden nodes do not directly send outputs to the external environment.
FFNNs overcome the limitation of single-layer NN: they can handle non-linearly
separable learning tasks.
All of these are trained by differentiating the error with respect to the weights and using the derivative to update the weights.
Radial-basis function (RBF) networks
RBF = radial-basis function: a function which depends only on the radial distance from a vector
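For concreteness, a common choice is the Gaussian RBF; the function name and width parameter below are illustrative assumptions, not from the source.

```python
import numpy as np

def gaussian_rbf(x, center, width=1.0):
    """Gaussian RBF: phi(r) = exp(-r^2 / (2*width^2)), where r = ||x - center||.

    The output depends only on the radial distance r from the center vector.
    """
    r = np.linalg.norm(np.asarray(x) - np.asarray(center))
    return np.exp(-r ** 2 / (2 * width ** 2))

# Two inputs at the same distance from the center give the same activation.
a = gaussian_rbf([1.0, 0.0], [0.0, 0.0])
b = gaussian_rbf([0.0, 1.0], [0.0, 0.0])
```

At the center itself the activation peaks at 1 and decays radially outward.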
Hopfield network for clustering