Neural network learning: multilayer perceptron, batch learning, and when to stop training

A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs.

An MLP is trained with a supervised learning technique called backpropagation.
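As a concrete illustration, here is a minimal pure-Python MLP with one hidden layer, trained by backpropagation on the classic XOR problem. The hidden-layer size, learning rate, and epoch count are illustrative choices, not from the original post:

```python
import math
import random

random.seed(0)

# XOR: the classic example an MLP can learn but a single-layer perceptron cannot.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 1, 0]

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 4  # hidden units (illustrative size)
# Input->hidden weights (last entry of each row is the bias),
# and hidden->output weights (last entry is the bias).
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
W2 = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    o = sig(sum(W2[j] * h[j] for j in range(H)) + W2[H])
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, T)) / len(X)

lr = 0.5
start = loss()
for epoch in range(2000):
    for x, t in zip(X, T):          # single-step (online) backpropagation
        h, o = forward(x)
        do = (o - t) * o * (1 - o)  # output delta (constant factor folded into lr)
        for j in range(H):
            dh = do * W2[j] * h[j] * (1 - h[j])  # hidden delta (uses old W2[j])
            W2[j] -= lr * do * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            W1[j][2] -= lr * dh
        W2[H] -= lr * do
```

The error is propagated backward through the chain rule: each hidden unit's delta is its downstream (output) delta scaled by the connecting weight and the sigmoid derivative.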


Single-step learning is faster. Single-step (online) learning updates the weights after every training pattern, so updates are frequent and the error starts dropping immediately; the cost is that each step follows only a noisy, single-sample estimate of the gradient.

Batch learning yields a lower residual error. Batch learning works with the true gradient, averaged over the whole training set, so the residual error is often smaller than with single-step learning. But since one batch-learning step is performed only after the full set of training data has been presented, the weight-update frequency is rather low.

A combination of both: start with single-step learning to get faster initial improvement, then switch to batch learning to get a better final result.
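A minimal sketch of this two-phase schedule, fitting a single linear neuron with squared error. The data, learning rate, and switch point are illustrative, not from the original post:

```python
# Toy data: y = 2x + 1, noiseless for simplicity.
data = [(k / 10.0, 2 * (k / 10.0) + 1) for k in range(20)]

w, b, lr = 0.0, 0.0, 0.1

def grad(w, b, x, y):
    """Gradient of 0.5*(w*x + b - y)^2 with respect to (w, b)."""
    err = w * x + b - y
    return err * x, err

# Phase 1: single-step (online) learning -- update after every pattern,
# for fast initial improvement.
for epoch in range(50):
    for x, y in data:
        gw, gb = grad(w, b, x, y)
        w -= lr * gw
        b -= lr * gb

# Phase 2: batch learning -- one update per full pass, using the true
# (averaged) gradient, to polish the final result.
for epoch in range(200):
    gw_sum = gb_sum = 0.0
    for x, y in data:
        gw, gb = grad(w, b, x, y)
        gw_sum += gw
        gb_sum += gb
    w -= lr * gw_sum / len(data)
    b -= lr * gb_sum / len(data)
```

After both phases, (w, b) should sit close to the true values (2, 1); in practice the switch point would be chosen by monitoring the training error rather than fixed in advance.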


The question of when to stop training is complicated. Some of the possibilities are:

  • Stop when the average error function for the training set becomes small.
  • Stop when the gradient of the average error function for the training set becomes small.
  • Stop when the average error function for the validation set starts to go up, and use the weights from the step that yielded the smallest validation error.
  • Stop when your boredom level is no longer tolerable.
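The validation-based criterion above (early stopping) can be sketched as follows; the model, data, and tracking scheme here are illustrative, not from the original post:

```python
import random

random.seed(1)

# Illustrative setup: fit y = 3x with a single weight.
# Noisy training set, clean validation set.
train = [(k / 10.0, 3 * (k / 10.0) + random.gauss(0, 0.3)) for k in range(10)]
valid = [(k / 10.0 + 0.05, 3 * (k / 10.0 + 0.05)) for k in range(10)]

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w, lr = 0.0, 0.05
best_w, best_val = w, mse(w, valid)

for epoch in range(500):
    # One batch gradient step on the training set.
    g = sum(2 * (w * x - y) * x for x, y in train) / len(train)
    w -= lr * g
    val = mse(w, valid)
    if val < best_val:
        # Remember the weights that did best on the validation set.
        best_val, best_w = val, w

# "Stop" by restoring the best-validation-error weights, as the
# criterion above prescribes.
w = best_w
```

Tracking the best validation error (rather than halting at the first uptick) makes the criterion robust to a validation curve that fluctuates before rising for good; many implementations add a patience counter on top of this.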


From:

http://www.researchgate.net/post/Which_one_is_better_between_online_and_offline_trained_neural_network

ftp://ftp.sas.com/pub/neural/FAQ2.html#A_functions

http://neuralnetworksanddeeplearning.com/chap1.html

http://en.wikipedia.org/wiki/Multilayer_perceptron
