Understanding learning rate and weight decay in Caffe

This post explains the concepts of learning rate and weight decay in Caffe. The learning rate determines how strongly each update step changes the weights, while weight decay is an additional term in the weight-update rule that makes the weights decay exponentially toward zero, to prevent overfitting. A common choice is an initial learning rate of 0.01, adjusted during training according to how the loss evolves. Weight decay limits the model's effective degrees of freedom by penalizing large weights, which helps avoid overfitting.

caffe.proto documents in detail each of the parameters that appear in a Caffe network and solver.

1. About the learning rate

  optional float base_lr = 5; // The base learning rate 
 // The learning rate decay policy. The currently implemented learning rate
  // policies are as follows:
  //    - fixed: always return base_lr.
  //    - step: return base_lr * gamma ^ (floor(iter / step))
  //    - exp: return base_lr * gamma ^ iter
  //    - inv: return base_lr * (1 + gamma * iter) ^ (- power)
  //    - multistep: similar to step but it allows non uniform steps defined by
  //      stepvalue
  //    - poly: the effective learning rate follows a polynomial decay, to be
  //      zero by the max_iter. return base_lr (1 - iter/max_iter) ^ (power)
  //    - sigmoid: the effective learning rate follows a sigmoid decay
  //      return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))
  //
  // where base_lr, max_iter, gamma, step, stepvalue and power are defined
  // in the solver parameter protocol buffer, and iter is the current iteration.
  optional string lr_policy = 8;
  optional float gamma = 9; // The parameter to compute the learning rate.
  optional float power = 10; // The parameter to compute the learning rate.
  optional float momentum = 11; // The momentum value.
  optional float weight_decay = 12; // The weight decay.
  // regularization types supported: L1 and L2
  // controlled by weight_decay
  optional string regularization_type = 29 [default = "L2"];
  // the stepsize for learning rate policy "step"
  optional int32 stepsize = 13;
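
To make these formulas concrete, here is a minimal re-implementation of the policies in Python (a sketch of the formulas in the comments above, not Caffe's actual C++ code; the function name and default values are my own):

    import math

    def caffe_lr(policy, itr, base_lr, gamma=0.1, power=1.0,
                 stepsize=1000, stepvalues=(), max_iter=10000):
        if policy == "fixed":
            return base_lr
        if policy == "step":
            return base_lr * gamma ** (itr // stepsize)
        if policy == "exp":
            return base_lr * gamma ** itr
        if policy == "inv":
            return base_lr * (1 + gamma * itr) ** (-power)
        if policy == "multistep":
            # like "step", but drops at the explicit stepvalue list
            return base_lr * gamma ** sum(itr >= sv for sv in stepvalues)
        if policy == "poly":
            return base_lr * (1 - itr / max_iter) ** power
        if policy == "sigmoid":
            return base_lr / (1 + math.exp(-gamma * (itr - stepsize)))
        raise ValueError("unknown lr_policy: " + policy)

    # e.g. "step" with base_lr=0.01, gamma=0.1, stepsize=1000 gives
    # 0.01 for iterations 0-999, 0.001 for 1000-1999, and so on.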

2. About weight decay

    In machine learning and pattern recognition, overfitting is a common problem, and as a network starts to overfit, its weights tend to grow large. To avoid overfitting, a penalty term is therefore added to the error function. The usual penalty is the sum of the squares of all the weights multiplied by a decay constant, which penalizes large weights.

As the caffe.proto comments above note, this regularization is controlled by weight_decay.

    The weight-decay penalty drives the weights toward small absolute values by penalizing large ones, since large weights make the system overfit and degrade its generalization.

The weight_decay parameter governs the regularization term of the neural net.

During training, a regularization term is added to the network's loss to compute the backprop gradient. The weight_decay value determines how dominant this regularization term will be in the gradient computation.

As a rule of thumb, the more training examples you have, the weaker this term should be. The more parameters you have (e.g., a deeper net, larger filters, larger InnerProduct layers, etc.), the higher this term should be.

Caffe also allows you to choose between L2 regularization (the default) and L1 regularization, by setting

regularization_type: "L1"

While learning rate may (and usually does) change during training, the regularization weight is fixed throughout.
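
Concretely, the L2 term added to the loss is (weight_decay/2)·‖W‖², whose contribution to the gradient is weight_decay·W; for L1 the term is weight_decay·‖W‖₁, contributing weight_decay·sign(W). A small NumPy sketch of that gradient contribution (a paraphrase of what the solver does, not Caffe's own code):

    import numpy as np

    def regularized_gradient(grad, W, weight_decay, regularization_type="L2"):
        # add the regularization term's gradient to the data gradient
        if regularization_type == "L2":
            return grad + weight_decay * W           # d/dW of (wd/2)*||W||^2
        if regularization_type == "L1":
            return grad + weight_decay * np.sign(W)  # d/dW of wd*||W||_1
        raise ValueError(regularization_type)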


3. SGD
Stochastic gradient descent (solver_type: SGD) updates the weights W by a linear combination of the negative gradient ∇L(W) and the previous weight update V_t.
The learning rate α is the weight of the negative gradient. The momentum μ is the weight of the previous update.

Formally, we have the following formulas to compute the update value V_{t+1} and the updated weights W_{t+1} at iteration t+1, given the previous weight update V_t and the current weights W_t:

V_{t+1} = μ V_t − α ∇L(W_t)

W_{t+1} = W_t + V_{t+1}
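
Read literally, the two formulas are just a few lines of code. A NumPy sketch with a toy quadratic loss (the helper function and its defaults are illustrative, not Caffe's API):

    import numpy as np

    def sgd_momentum_step(W, V, grad, lr=0.01, momentum=0.9):
        # V_{t+1} = mu * V_t - alpha * grad L(W_t);  W_{t+1} = W_t + V_{t+1}
        V = momentum * V - lr * grad
        return W + V, V

    # toy example: minimize L(W) = 0.5 * ||W||^2, so grad L(W) = W
    W, V = np.array([1.0, -2.0]), np.zeros(2)
    for _ in range(100):
        W, V = sgd_momentum_step(W, V, grad=W)
    print(W)  # near the minimum at [0, 0]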

The learning "hyperparameters" (α and μ) might require a bit of tuning for best results. If you're not sure where to start, take a look at the "Rules of thumb" below, and for further information you might refer to Leon Bottou's Stochastic Gradient Descent Tricks [1].

[1] L. Bottou. Stochastic Gradient Descent Tricks. Neural Networks: Tricks of the Trade. Springer, 2012.

3.1 Rules of thumb for setting the learning rate α and momentum μ

A good strategy for deep learning with SGD is to initialize the learning rate α to a value around α ≈ 0.01 = 10^{-2}, dropping it by a constant factor (e.g., 10) throughout training when the loss begins to reach an apparent "plateau", and repeating this several times. Generally, you probably want to use a momentum μ = 0.9 or a similar value. By smoothing the weight updates across iterations, momentum tends to make deep learning with SGD both stabler and faster.

Here μ = momentum and α = base_lr in the solver settings.
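
In Caffe this rule of thumb maps naturally onto the multistep policy: put a stepvalue near each observed plateau and use gamma = 0.1. A sketch with made-up plateau iterations:

    def multistep_lr(base_lr, gamma, stepvalues, itr):
        # base_lr * gamma ^ (number of stepvalues already passed)
        return base_lr * gamma ** sum(itr >= sv for sv in stepvalues)

    # hypothetical plateaus around iterations 10000 and 50000:
    for itr in (0, 9999, 10000, 50000):
        print(itr, multistep_lr(0.01, 0.1, (10000, 50000), itr))
    # -> 0.01, 0.01, 0.001, then 0.0001 (up to float rounding)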

Difference between neural net weight decay and learning rate

The learning rate is a parameter that determines how much an update step influences the current value of the weights, while weight decay is an additional term in the weight-update rule that causes the weights to decay exponentially toward zero if no other update is scheduled.

So let's say that we have a cost or error function E(w) that we want to minimize. Plain gradient descent updates each weight by

w_i ← w_i − η ∂E/∂w_i,

where η is the learning rate. Adding the weight-decay penalty changes the objective to Ẽ(w) = E(w) + (λ/2) Σ_i w_i², and the update becomes

w_i ← w_i − η ∂E/∂w_i − η λ w_i.

The new term −η λ w_i shrinks each weight in proportion to its own size; λ is what Caffe exposes as weight_decay.
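
And with no data gradient at all (∂E/∂w_i = 0), the update reduces to w ← (1 − ηλ)w, i.e. geometric decay toward zero, which is where the name comes from. A two-line check in Python (the η and λ values are arbitrary):

    eta, lam, w = 0.1, 0.01, 1.0
    for _ in range(1000):
        w *= 1 - eta * lam    # weight-decay-only update
    print(w)  # (1 - 0.001)**1000 ≈ 0.37, heading exponentially to 0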
