An explanation of the Xavier and MSRA weight-initialization schemes in Caffe

If you work through the Caffe MNIST tutorial, you’ll come across this curious line

weight_filler { type: "xavier" }

and the accompanying explanation

For the weight filler, we will use the xavier algorithm that automatically determines the scale of initialization based on the number of input and output neurons.

Unfortunately, as of the time this post was written, Google hasn’t heard much about “the xavier algorithm”. To work out what it is, you need to poke around the Caffe source until you find the right docstring and then read the referenced paper, Xavier Glorot & Yoshua Bengio’s Understanding the difficulty of training deep feedforward neural networks.

Why’s Xavier initialization important?

In short, it helps signals reach deep into the network.

  • If the weights in a network start too small, then the signal shrinks as it passes through each layer until it’s too tiny to be useful.
  • If the weights in a network start too large, then the signal grows as it passes through each layer until it’s too massive to be useful.

Xavier initialization makes sure the weights are ‘just right’, keeping the signal in a reasonable range of values through many layers.
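
To see those two failure modes - and the ‘just right’ case - concretely, here’s a small numerical sketch. It’s my own illustration, not anything from Caffe: the layer width, depth, and the ‘too small’/‘too large’ scales are arbitrary choices, and the layers are purely linear.

import numpy as np

rng = np.random.default_rng(0)
n, depth = 256, 50                 # arbitrary layer width and depth
x = rng.standard_normal(n)         # unit-variance input signal

for name, std in [("too small", 0.01),
                  ("too large", 0.2),
                  ("xavier", np.sqrt(1.0 / n))]:
    h = x.copy()
    for _ in range(depth):
        W = rng.standard_normal((n, n)) * std   # zero-mean weights, Var = std**2
        h = W @ h                               # purely linear layer
    print(f"{name:9s}: signal std after {depth} layers = {h.std():.3e}")

With the tiny weights the signal’s standard deviation collapses towards zero, with the large ones it blows up by dozens of orders of magnitude, and with the Xavier scale it stays around 1.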

To go any further than this, you’re going to need a small amount of statistics - specifically you need to know about random distributions and their variance.

Okay, hit me with it. What’s Xavier initialization?

In Caffe, it’s initializing the weights in your network by drawing them from a distribution with zero mean and a specific variance,

$$\mathrm{Var}(W) = \frac{1}{n_{\text{in}}}$$

where $W$ is the initialization distribution for the neuron in question, and $n_{\text{in}}$ is the number of neurons feeding into it. The distribution used is typically Gaussian or uniform.
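
As a rough sketch of what that looks like in NumPy (this is my own illustration, not Caffe’s XavierFiller code): a uniform distribution on $[-a, a]$ has variance $a^2/3$, so $a = \sqrt{3/n_{\text{in}}}$ gives the same variance as the Gaussian version.

import numpy as np

def xavier_fill(n_in, n_out, dist="uniform", rng=None):
    """Zero-mean weights with Var(W) = 1/n_in, drawn uniform or Gaussian."""
    if rng is None:
        rng = np.random.default_rng()
    if dist == "gaussian":
        return rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_out, n_in))
    a = np.sqrt(3.0 / n_in)           # uniform on [-a, a] has variance a**2 / 3
    return rng.uniform(-a, a, size=(n_out, n_in))

W = xavier_fill(n_in=784, n_out=500)
print(W.var())                        # ≈ 1/784 ≈ 0.00128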

It’s worth mentioning that Glorot & Bengio’s paper originally recommended using

$$\mathrm{Var}(W) = \frac{2}{n_{\text{in}} + n_{\text{out}}}$$

where $n_{\text{out}}$ is the number of neurons the result is fed to. We’ll come to why Caffe’s scheme might be different in a bit.
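
For a concrete sense of how the two recipes differ (the layer sizes here are just made-up examples): they only agree when $n_{\text{in}} = n_{\text{out}}$.

n_in, n_out = 784, 500                 # made-up layer sizes
print(1.0 / n_in)                      # Caffe-style Xavier:  ~0.00128
print(2.0 / (n_in + n_out))            # Glorot & Bengio:     ~0.00156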

And where did those formulas come from?

Suppose we have an input $X$ with $n$ components and a linear neuron with random weights $W$ that spits out a number $Y$. What’s the variance of $Y$? Well, we can write

$$Y = W_1 X_1 + W_2 X_2 + \cdots + W_n X_n$$

And from Wikipedia we can work out that $W_i X_i$ is going to have variance

$$\mathrm{Var}(W_i X_i) = E[X_i]^2\,\mathrm{Var}(W_i) + E[W_i]^2\,\mathrm{Var}(X_i) + \mathrm{Var}(W_i)\,\mathrm{Var}(X_i)$$

Now if our inputs and weights both have mean $0$, that simplifies to

$$\mathrm{Var}(W_i X_i) = \mathrm{Var}(W_i)\,\mathrm{Var}(X_i)$$

Then if we make a further assumption that the $X_i$ and $W_i$ are all independent and identically distributed, we can work out that the variance of $Y$ is

$$\mathrm{Var}(Y) = \mathrm{Var}(W_1 X_1 + W_2 X_2 + \cdots + W_n X_n) = n\,\mathrm{Var}(W_i)\,\mathrm{Var}(X_i)$$

Or in words: the variance of the output is the variance of the input, but scaled by $n\,\mathrm{Var}(W_i)$. So if we want the variance of the input and output to be the same, that means $n\,\mathrm{Var}(W_i)$ should be $1$. Which means the variance of the weights should be

$$\mathrm{Var}(W_i) = \frac{1}{n} = \frac{1}{n_{\text{in}}}$$

Voila. There’s your Caffe-style Xavier initialization.
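
If you’d rather not take the algebra on faith, a quick Monte Carlo check bears it out. This is my own sketch, assuming standard-normal inputs and Gaussian weights with variance $1/n$:

import numpy as np

rng = np.random.default_rng(1)
n, trials = 1000, 10_000

X = rng.standard_normal((trials, n))                   # Var(X_i) = 1
W = rng.normal(0.0, np.sqrt(1.0 / n), (trials, n))     # Var(W_i) = 1/n
Y = (W * X).sum(axis=1)                                # one linear neuron per trial

print(Y.var())                                         # ≈ 1.0 = n * (1/n) * 1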

Glorot & Bengio’s formula needs a tiny bit more work. If you go through the same steps for the backpropagated signal, you find that you need

$$\mathrm{Var}(W_i) = \frac{1}{n_{\text{out}}}$$

to keep the variance of the input gradient & the output gradient the same. These two constraints can only be satisfied simultaneously if $n_{\text{in}} = n_{\text{out}}$, so as a compromise, Glorot & Bengio take the average of the two:

$$\mathrm{Var}(W_i) = \frac{2}{n_{\text{in}} + n_{\text{out}}}$$
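
And here’s a matching sketch for the backward pass (again my own illustration): the gradient coming back through a linear layer is multiplied by $W^\top$, so its variance picks up a factor of $n_{\text{out}}\,\mathrm{Var}(W_i)$, and $\mathrm{Var}(W_i) = 1/n_{\text{out}}$ keeps it steady.

import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, trials = 1000, 400, 20_000

g_out = rng.standard_normal((trials, n_out))               # incoming gradient, Var = 1
W = rng.normal(0.0, np.sqrt(1.0 / n_out), (n_out, n_in))   # Var(W_i) = 1/n_out
g_in = g_out @ W                                           # gradient w.r.t. the layer input

print(g_in.var())                                          # ≈ 1.0 = n_out * (1/n_out) * 1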

I’m not sure why the Caffe authors used the $n_{\text{in}}$-only variant. The two possibilities that come to mind are

  • that preserving the forward-propagated signal is much more important than preserving the back-propagated one.
  • that for implementation reasons, it’s a pain to find out how many neurons in the next layer consume the output of the current one.

That seems like an awful lot of assumptions.

It is. But it works. Xavier initialization was one of the big enablers of the move away from per-layer generative pre-training.

The assumption most worth talking about is the “linear neuron” bit. This is justified in Glorot & Bengio’s paper because immediately after initialization, the parts of the traditional nonlinearities - $\tanh$, $\mathrm{sigm}$ - that are being explored are the bits close to zero, where the gradient is close to $1$. For the more recent rectifying nonlinearities that doesn’t hold, and in a recent paper by He, Zhang, Ren and Sun they build on Glorot & Bengio and suggest using

$$\mathrm{Var}(W) = \frac{2}{n_{\text{in}}}$$

instead. Which makes sense: a rectifying linear unit is zero for half of its input, so you need to double the weight variance to keep the signal’s variance constant.
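
Here’s a rough numerical illustration of that factor of two (my own sketch; the width and depth are arbitrary):

import numpy as np

rng = np.random.default_rng(3)
n, depth = 256, 50                      # arbitrary width and depth
h = rng.standard_normal(n)

for name, var_w in [("xavier 1/n", 1.0 / n), ("he 2/n", 2.0 / n)]:
    y = h.copy()
    for _ in range(depth):
        W = rng.normal(0.0, np.sqrt(var_w), (n, n))
        y = np.maximum(W @ y, 0.0)                      # ReLU
    rms = np.sqrt((y ** 2).mean())
    print(f"{name:10s}: signal rms after {depth} ReLU layers = {rms:.3e}")

With $1/n$ the signal withers away over fifty ReLU layers; with $2/n$ its scale holds roughly steady.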
