A neural network stacks many layers, and the distribution of the data changes as it passes through each one, which makes training the following layers harder. This phenomenon is called Internal Covariate Shift. Before batch normalization, it was typically handled by lowering the learning rate, careful weight initialization, and carefully designed training strategies.
BatchNormalization
BatchNormalization pulls the data at each layer back to a normal distribution with mean 0 and variance 1. To preserve what the layer has learned, the normalized data is then scaled and shifted by learnable parameters. Given an input $x$ of shape (N, D), the algorithm is:
- Compute the mean of each of the D features over the batch: $\mu_{\mathcal{B}} = \frac{1}{N} \sum_{i=1}^{N} x_i$
- Compute the variance of each feature over the batch: $\sigma_{\mathcal{B}}^2 = \frac{1}{N} \sum_{i=1}^{N} \left( x_i - \mu_{\mathcal{B}} \right)^2$
- Normalize the data to zero mean and unit variance: $\widehat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}}$
- Scale and shift: $y_i = \gamma \widehat{x}_i + \beta$
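Plugging a tiny batch through the four steps makes the effect concrete (a minimal check with one feature, taking $\gamma = 1$, $\beta = 0$ so the scale/shift step is the identity):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])           # one feature, batch of 3
mu = x.mean()                            # mu = 2.0
var = x.var()                            # var = 2/3
x_hat = (x - mu) / np.sqrt(var + 1e-5)   # x_hat ≈ [-1.2247, 0., 1.2247]
y = 1.0 * x_hat + 0.0                    # gamma=1, beta=0 leaves x_hat unchanged
```

The normalized values have zero mean and (up to $\epsilon$) unit variance, whatever the input statistics were.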
During training, the forward pass of batchnorm is as shown above. At test time, however, the batch may contain only a single example, so the batch statistics are meaningless; instead, the mean and variance are replaced by exponential moving averages accumulated during training. The code is below.
```python
import numpy as np

def batchnorm_forward(x, gamma, beta, bn_param):
    """
    Input:
    - x: Data of shape (N, D)
    - gamma: Scale parameter of shape (D,)
    - beta: Shift parameter of shape (D,)
    - bn_param: Dictionary with the following keys:
      - mode: 'train' or 'test'; required
      - eps: Constant for numeric stability
      - momentum: Constant for running mean / variance
      - running_mean: Array of shape (D,) giving running mean of features
      - running_var: Array of shape (D,) giving running variance of features

    Returns a tuple of:
    - out: of shape (N, D)
    - cache: A tuple of values needed in the backward pass
    """
    mode = bn_param['mode']
    eps = bn_param.get('eps', 1e-5)
    momentum = bn_param.get('momentum', 0.9)

    N, D = x.shape
    running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
    running_var = bn_param.get('running_var', np.zeros(D, dtype=x.dtype))

    out, cache = None, None
    if mode == 'train':
        # Per-feature statistics over the batch dimension; shape (D,).
        sample_mean = np.mean(x, axis=0)
        sample_var = np.var(x, axis=0)
        sample_sqrtvar = np.sqrt(sample_var + eps)
        x_norm = (x - sample_mean) / sample_sqrtvar
        out = x_norm * gamma + beta
        cache = (x, x_norm, gamma, beta, eps,
                 sample_mean, sample_var, sample_sqrtvar)
        # Exponential moving averages, used at test time.
        running_mean = momentum * running_mean + (1 - momentum) * sample_mean
        running_var = momentum * running_var + (1 - momentum) * sample_var
    elif mode == 'test':
        # Normalize with the statistics accumulated during training.
        x_norm = (x - running_mean) / np.sqrt(running_var + eps)
        out = x_norm * gamma + beta
    else:
        raise ValueError('Invalid forward batchnorm mode "%s"' % mode)

    # Store the updated running mean and variance back in bn_param.
    bn_param['running_mean'] = running_mean
    bn_param['running_var'] = running_var

    return out, cache
```
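The running-average update rule can be sanity-checked in isolation (a standalone sketch that re-derives the running statistics inline rather than calling `batchnorm_forward`; the constants and the synthetic data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
momentum, eps = 0.9, 1e-5
D = 4
running_mean = np.zeros(D)
running_var = np.zeros(D)

# Simulate many training batches; the running statistics converge
# toward the true feature statistics (mean 3, variance 4 here).
for _ in range(200):
    x = 3.0 + 2.0 * rng.standard_normal((64, D))
    running_mean = momentum * running_mean + (1 - momentum) * x.mean(axis=0)
    running_var = momentum * running_var + (1 - momentum) * x.var(axis=0)

# At test time a single example is normalized with the running stats,
# so an example at the true mean maps to (approximately) zero.
x_test = np.full((1, D), 3.0)
out = (x_test - running_mean) / np.sqrt(running_var + eps)
```

This is why `bn_param` must be carried over between calls: the test-time behavior depends entirely on the state accumulated during training.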
In a convolutional network the input x may have shape (N, H, W, C), so the mean and variance are computed per channel, over the batch and both spatial dimensions:

```python
sample_mean = np.mean(x, axis=(0, 1, 2), keepdims=True)
sample_var = np.var(x, axis=(0, 1, 2), keepdims=True)
```
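Putting those two lines into a full forward pass gives spatial batchnorm. Below is a minimal sketch for channels-last (N, H, W, C) input with per-channel `gamma`/`beta` of shape (C,); the function name is illustrative, not from the original code:

```python
import numpy as np

def spatial_batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x: (N, H, W, C). Statistics are shared across the batch and both
    # spatial dimensions, so each of the C channels gets one mean/variance.
    mean = np.mean(x, axis=(0, 1, 2), keepdims=True)  # shape (1, 1, 1, C)
    var = np.var(x, axis=(0, 1, 2), keepdims=True)
    x_norm = (x - mean) / np.sqrt(var + eps)
    return gamma * x_norm + beta                      # broadcasts over (C,)

x = np.random.default_rng(1).standard_normal((2, 4, 4, 3))
out = spatial_batchnorm_forward(x, np.ones(3), np.zeros(3))
```

With `keepdims=True` the statistics keep shape (1, 1, 1, C), so the normalization broadcasts cleanly against the 4-D input.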
Backward Pass
One of the harder parts of cs231n is the backward pass for batchnorm. Below is the computation graph for batchnorm. We need to compute $\frac{\partial L}{\partial \gamma}$, $\frac{\partial L}{\partial \beta}$, and $\frac{\partial L}{\partial x}$.
Start with the easier two, $\frac{\partial L}{\partial \gamma}$ and $\frac{\partial L}{\partial \beta}$:
- $\frac{\partial L}{\partial \gamma} = \sum_{i=1}^{N} \frac{\partial L}{\partial y_i}\, \widehat{x}_i$
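The rest of the derivation is cut off here, but the resulting gradients can be sketched directly in code. Below is a standard chain-rule implementation of the backward pass for the (N, D) case, paired with a small forward helper so it runs standalone; the helper name `batchnorm_forward_train` is mine, and the cache layout matches the forward code above:

```python
import numpy as np

def batchnorm_forward_train(x, gamma, beta, eps=1e-5):
    # Training-mode forward pass; cache layout matches the code above.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    sqrtvar = np.sqrt(var + eps)
    x_norm = (x - mean) / sqrtvar
    out = gamma * x_norm + beta
    cache = (x, x_norm, gamma, beta, eps, mean, var, sqrtvar)
    return out, cache

def batchnorm_backward(dout, cache):
    x, x_norm, gamma, beta, eps, mean, var, sqrtvar = cache
    N = x.shape[0]
    # Gradients of the scale/shift step y = gamma * x_norm + beta.
    dbeta = dout.sum(axis=0)
    dgamma = (dout * x_norm).sum(axis=0)
    # Chain rule back through the normalization step: the mean and
    # variance both depend on every x_i, so each contributes a term.
    dx_norm = dout * gamma
    dvar = np.sum(dx_norm * (x - mean) * -0.5 * (var + eps) ** -1.5, axis=0)
    dmean = (np.sum(-dx_norm / sqrtvar, axis=0)
             + dvar * np.mean(-2.0 * (x - mean), axis=0))
    dx = dx_norm / sqrtvar + dvar * 2.0 * (x - mean) / N + dmean / N
    return dx, dgamma, dbeta
```

A useful sanity check: because the output is invariant to adding a constant to every input (the mean absorbs it), `dx` sums to zero along the batch dimension.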