Excerpted from caffe.proto:
1. fixed: always return base_lr
2. step: return base_lr * gamma ^ (floor(iter / step))
3. exp: return base_lr * gamma ^ iter
4. inv: return base_lr * (1 + gamma * iter) ^ (-power)
5. multistep: similar to step, but allows non-uniform step boundaries defined by stepvalue
6. poly: the effective learning rate follows a polynomial decay, reaching zero at max_iter. return base_lr * (1 - iter / max_iter) ^ (power)
7. sigmoid: the effective learning rate follows a sigmoid decay. return base_lr * (1 / (1 + exp(-gamma * (iter - stepsize))))
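The seven policies above can be sketched as a single Python function. This is an illustrative re-implementation, not Caffe's own code; parameter names follow caffe.proto, and the default values here are arbitrary placeholders:

```python
import math

def caffe_lr(policy, base_lr, it, gamma=0.1, power=0.75,
             step=1000, stepvalue=(), max_iter=10000, stepsize=1000):
    """Compute the effective learning rate at iteration `it`
    for each lr_policy listed in caffe.proto (sketch)."""
    if policy == "fixed":
        return base_lr
    if policy == "step":
        return base_lr * gamma ** (it // step)
    if policy == "exp":
        return base_lr * gamma ** it
    if policy == "inv":
        return base_lr * (1 + gamma * it) ** (-power)
    if policy == "multistep":
        # count how many stepvalue boundaries have already been passed
        current_step = sum(1 for s in stepvalue if it >= s)
        return base_lr * gamma ** current_step
    if policy == "poly":
        return base_lr * (1 - it / max_iter) ** power
    if policy == "sigmoid":
        return base_lr * (1.0 / (1.0 + math.exp(-gamma * (it - stepsize))))
    raise ValueError("unknown lr_policy: %s" % policy)

# Example: "step" with gamma=0.1 and step=1000 drops the rate
# by a factor of 10 every 1000 iterations.
print(caffe_lr("step", 0.01, 2000, gamma=0.1, step=1000))  # -> 0.0001
```

Note that "poly" is the only policy guaranteed to reach exactly zero (at iter = max_iter), while "sigmoid" decays smoothly around iter = stepsize.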
Learning rate adjustment policies in Caffe