A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs.
An MLP is trained with a supervised learning technique called backpropagation.
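As an illustration of the two ideas above, here is a minimal sketch (our own, not from the sources) of one forward and backward pass through a tiny 2-2-1 MLP with sigmoid units and squared-error loss. The weight layout and names are assumptions made for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    """w = (w_hidden, w_out): 2x2 hidden-layer weights and 2 output weights."""
    w_h, w_o = w
    h = [sigmoid(sum(w_h[j][i] * x[i] for i in range(2))) for j in range(2)]
    y = sigmoid(sum(w_o[j] * h[j] for j in range(2)))
    return h, y

def backprop(w, x, target):
    """Gradients of the squared error 0.5*(y - target)^2 w.r.t. all weights."""
    w_h, w_o = w
    h, y = forward(w, x)
    delta_out = (y - target) * y * (1 - y)        # output-layer error term
    grad_o = [delta_out * h[j] for j in range(2)]
    # propagate the error term back through the output weights
    grad_h = [[delta_out * w_o[j] * h[j] * (1 - h[j]) * x[i]
               for i in range(2)] for j in range(2)]
    return grad_h, grad_o
```

A gradient descent step then subtracts a learning rate times these gradients from the corresponding weights.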
Single-step (online) learning is faster: the weights are updated after every training example, so each pass through the data produces many updates and progress is quick at first.
Batch learning yields a lower residual error: it works with the true gradient, averaged over the full training set, so the final residual error is often smaller than with single-step learning. However, since one batch-learning step is performed only after the full set of training data has been presented, the weight-update frequency is low.
Combination of both: start with single-step learning to get a fast initial improvement, then switch to batch learning to get a better final result.
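The difference between the two update schedules can be sketched on a toy problem. This is our own illustration (names and data are assumptions): fitting y = 2x with squared error, once with a weight update per example and once with one update per pass using the averaged gradient.

```python
# Toy data following y = 2x, so both schemes should drive w toward 2.0.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
lr = 0.01

def single_step_epoch(w):
    """Single-step (online): update w after every (x, y) pair."""
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def batch_epoch(w):
    """Batch: one update per pass, using the averaged (true) gradient."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

w_online, w_batch = 0.0, 0.0
for _ in range(200):
    w_online = single_step_epoch(w_online)
    w_batch = batch_epoch(w_batch)
```

Note that after 200 passes the online run has performed 600 weight updates to the batch run's 200, which is the frequency difference described above.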
The question of when to stop training is complicated. Some of the possibilities are:
- Stop when the average error function for the training set becomes small.
- Stop when the gradient of the average error function for the training set becomes small.
- Stop when the average error function for the validation set starts to go up, and use the weights from the step that yielded the smallest validation error.
- Stop when your boredom level is no longer tolerable.
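The validation-based rule above can be sketched as follows. This is an illustrative sketch, not code from the sources: `step_fn`, `val_error_fn`, and the `patience` parameter are names we introduce here, and stopping after a few consecutive non-improving steps is one common way to decide that the validation error "starts to go up".

```python
def train_with_early_stopping(step_fn, val_error_fn, weights,
                              patience=3, max_steps=1000):
    """Keep the weights from the step with the smallest validation error;
    stop once the validation error fails to improve `patience` steps in a row."""
    best_weights, best_err = weights, val_error_fn(weights)
    bad_steps = 0
    for _ in range(max_steps):
        weights = step_fn(weights)      # one training update (online or batch)
        err = val_error_fn(weights)
        if err < best_err:
            best_weights, best_err = weights, err
            bad_steps = 0
        else:
            bad_steps += 1              # validation error went up (or stalled)
            if bad_steps >= patience:
                break
    # return the best-so-far weights, not the final (overfitted) ones
    return best_weights, best_err

# Toy usage: each "training step" moves w by 0.5, validation error is (w - 2)^2,
# so training keeps improving until w passes 2 and then starts to hurt.
best_w, best_e = train_with_early_stopping(
    step_fn=lambda w: w + 0.5,
    val_error_fn=lambda w: (w - 2.0) ** 2,
    weights=0.0)
```

The key point is that the weights returned are those from the best validation step, not from the step at which training happened to stop.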
From:
http://www.researchgate.net/post/Which_one_is_better_between_online_and_offline_trained_neural_network
ftp://ftp.sas.com/pub/neural/FAQ2.html#A_functions
http://neuralnetworksanddeeplearning.com/chap1.html
http://en.wikipedia.org/wiki/Multilayer_perceptron