caffe.proto contains a description of the learning rate decay policy; the currently implemented strategies are documented as follows:
// The learning rate decay policy. The currently implemented learning rate
// policies are as follows:
// - fixed: always return base_lr. (constant learning rate)
// - step: return base_lr * gamma ^ (floor(iter / step))
//   (decay the learning rate every fixed number of iterations)
// - exp: return base_lr * gamma ^ iter
//   (decay the learning rate exponentially with the iteration count)
// - inv: return base_lr * (1 + gamma * iter) ^ (-power) (inverse decay)
// - multistep: similar to step, but allows non-uniform steps defined by
//   stepvalue (decay at arbitrary user-defined iterations)
// - poly: the effective learning rate follows a polynomial decay, reaching
//   zero at max_iter. return base_lr * (1 - iter / max_iter) ^ power
// - sigmoid: the effective learning rate follows a sigmoid decay.
//   return base_lr * (1 / (1 + exp(-gamma * (iter - stepsize))))
//
// where base_lr, max_iter, gamma, step, stepvalue and power are defined
// in the solver parameter protocol buffer, and iter is the current iteration.

In other words, base_lr, max_iter, gamma, step, stepvalue and power can all be set in solver.prototxt.
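To make the formulas concrete, here is a minimal Python sketch that evaluates each policy at a given iteration. It simply mirrors the formulas quoted above; the function name caffe_lr and its keyword arguments are illustrative, not part of Caffe itself (Caffe computes this in C++, in SGDSolver::GetLearningRate), and the multistep branch assumes the stepvalue list is sorted in increasing order.

import math

def caffe_lr(policy, it, base_lr, gamma=None, power=None,
             stepsize=None, stepvalues=(), max_iter=None):
    """Effective learning rate at iteration `it` (the `iter` in the
    formulas above). A sketch, not Caffe's actual implementation."""
    if policy == "fixed":
        return base_lr
    if policy == "step":
        return base_lr * gamma ** (it // stepsize)
    if policy == "exp":
        return base_lr * gamma ** it
    if policy == "inv":
        return base_lr * (1 + gamma * it) ** (-power)
    if policy == "multistep":
        # like step, but the decay points are listed explicitly;
        # count how many stepvalues have already been passed
        current_step = sum(1 for sv in stepvalues if it >= sv)
        return base_lr * gamma ** current_step
    if policy == "poly":
        return base_lr * (1 - it / max_iter) ** power
    if policy == "sigmoid":
        return base_lr / (1 + math.exp(-gamma * (it - stepsize)))
    raise ValueError("unknown lr_policy: %s" % policy)

# Example: lr_policy "step" with base_lr 0.01, gamma 0.1, stepsize 1000
# drops the rate by 10x every 1000 iterations.
for it in (0, 999, 1000, 1999, 2000):
    print(it, caffe_lr("step", it, 0.01, gamma=0.1, stepsize=1000))

In solver.prototxt these hyperparameters map onto fields such as lr_policy, gamma, power, stepsize, stepvalue and max_iter; for example, lr_policy: "step" together with gamma: 0.1 and stepsize: 1000 reproduces the schedule printed by the loop above.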