[cs231n] BatchNorm and Its Backward Pass


A neural network stacks many layers, and the distribution of the data changes after passing through each layer, which makes training the following layers harder. This phenomenon is called Internal Covariate Shift. Before batch normalization, it was usually mitigated by lowering the learning rate, initializing the weights carefully, and using carefully tuned training strategies.

BatchNormalization

BatchNormalization pulls the data of each layer back to a distribution with zero mean and unit variance (a standard normal distribution). To preserve the features the layer has learned, the normalized data is then scaled and shifted. Assume the input is $x$ of shape (N, D); the algorithm is:

  1. Compute the mean of each of the D features over the batch: $\mu_{\mathcal{B}} = \frac{1}{N} \sum_{i=1}^{N} x_i$
  2. Compute the variance of each feature over the batch: $\sigma_{\mathcal{B}}^{2} = \frac{1}{N} \sum_{i=1}^{N} \left( x_i - \mu_{\mathcal{B}} \right)^{2}$
  3. Normalize the data to zero mean and unit variance: $\widehat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2} + \epsilon}}$
  4. Scale and shift: $y_i = \gamma \widehat{x}_i + \beta$
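Concretely, the four steps map one-to-one onto a few lines of NumPy. The snippet below is only a minimal sketch on a toy batch (all values and names are illustrative), showing that the output ends up with roughly zero mean and unit variance per feature:

import numpy as np

# Toy batch: N = 4 samples, D = 3 features (illustrative values)
x = np.random.randn(4, 3) * 5.0 + 2.0
gamma, beta, eps = np.ones(3), np.zeros(3), 1e-5

mu = x.mean(axis=0)                    # step 1: per-feature mean over the batch
var = x.var(axis=0)                    # step 2: per-feature variance over the batch
x_hat = (x - mu) / np.sqrt(var + eps)  # step 3: normalize
y = gamma * x_hat + beta               # step 4: scale and shift

print(y.mean(axis=0), y.var(axis=0))   # approximately 0 and 1 for each feature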

During training, the forward pass of batchnorm is exactly as shown above. At test time, however, the batch size may be as small as 1, so the mean and variance used at test time are instead taken from running (exponential moving) averages of the statistics computed during training. The code is below.

def batchnorm_forward(x, gamma, beta, bn_param):
    """
    Input:
    - x: Data of shape (N, D)
    - gamma: Scale parameter of shape (D,)
    - beta: Shift parameter of shape (D,)
    - bn_param: Dictionary with the following keys:
      - mode: 'train' or 'test'; required
      - eps: Constant for numeric stability
      - momentum: Constant for running mean / variance.
      - running_mean: Array of shape (D,) giving running mean of features
      - running_var: Array of shape (D,) giving running variance of features
    Returns a tuple of:
    - out: of shape (N, D)
    - cache: A tuple of values needed in the backward pass
    """
    mode = bn_param['mode']
    eps = bn_param.get('eps', 1e-5)
    momentum = bn_param.get('momentum', 0.9)

    N, D = x.shape
    running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
    running_var = bn_param.get('running_var', np.zeros(D, dtype=x.dtype))

    out, cache = None, None
    if mode == 'train':
        # Per-feature statistics of the current mini-batch, shape (D,)
        sample_mean = np.mean(x, axis=0)
        sample_var = np.var(x, axis=0)
        sample_sqrtvar = np.sqrt(sample_var + eps)
        # Normalize, then scale and shift
        x_norm = (x - sample_mean) / sample_sqrtvar
        out = x_norm * gamma + beta
        cache = (x, x_norm, gamma, beta, eps, sample_mean, sample_var, sample_sqrtvar)
        # Update the running averages used at test time
        running_mean = momentum * running_mean + (1 - momentum) * sample_mean
        running_var = momentum * running_var + (1 - momentum) * sample_var

    elif mode == 'test':
        # Normalize with the running statistics accumulated during training
        x_norm = (x - running_mean) / np.sqrt(running_var + eps)
        out = x_norm * gamma + beta

    else:
        raise ValueError('Invalid forward batchnorm mode "%s"' % mode)

    # Save / update the running mean and variance in bn_param
    bn_param['running_mean'] = running_mean
    bn_param['running_var'] = running_var

    return out, cache
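A quick way to check the function (an illustrative usage sketch with made-up numbers, not part of the assignment code): in 'train' mode the output should have per-feature mean close to beta and standard deviation close to gamma, and bn_param is updated in place so the same dictionary can be reused in 'test' mode.

np.random.seed(0)
x = 10.0 * np.random.randn(100, 5) + 3.0          # 100 samples, 5 features
gamma, beta = np.ones(5), np.zeros(5)
bn_param = {'mode': 'train'}

out, cache = batchnorm_forward(x, gamma, beta, bn_param)
print(out.mean(axis=0))                           # close to 0 (beta)
print(out.std(axis=0))                            # close to 1 (gamma)

bn_param['mode'] = 'test'                         # reuse the accumulated running statistics
out_test, _ = batchnorm_forward(x, gamma, beta, bn_param)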

In a convolutional network the input X may have shape (N, W, H, C), so the mean and variance are computed over every axis except the channel axis:

sample_mean = np.mean(x,axis=(0,1,2), keepdims=True)
sample_var = np.var(x,axis=(0,1,2), keepdims=True)
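Equivalently, the 4-D input can be reshaped so that every spatial position of every sample becomes one row, and the plain batchnorm_forward above can be reused unchanged. A minimal sketch, assuming the channels-last (N, W, H, C) layout used here:

def spatial_batchnorm_forward(x, gamma, beta, bn_param):
    # Fold the batch and spatial axes together so that batchnorm_forward
    # normalizes each of the C channels over all N*W*H positions.
    N, W, H, C = x.shape
    out_flat, cache = batchnorm_forward(x.reshape(-1, C), gamma, beta, bn_param)
    return out_flat.reshape(N, W, H, C), cache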

Backward Pass

A trickier part of the cs231n assignment is the backward pass for batchnorm. Below is the computational graph; we need to compute $\frac{\partial L}{\partial \gamma}$, $\frac{\partial L}{\partial \beta}$, and $\frac{\partial L}{\partial x}$.
(Figure: computational graph of the batchnorm forward pass)
Start with the easier gradients $\frac{\partial L}{\partial \gamma}$ and $\frac{\partial L}{\partial \beta}$. Since $y_i = \gamma \widehat{x}_i + \beta$, each one is a sum over the batch:

  • $\frac{\partial L}{\partial \gamma} = \sum_{i=1}^{N} \frac{\partial L}{\partial y_i} \, \widehat{x}_i$
  • $\frac{\partial L}{\partial \beta} = \sum_{i=1}^{N} \frac{\partial L}{\partial y_i}$
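Propagating $\frac{\partial L}{\partial x}$ back through the normalization, variance, and mean nodes of the graph takes a few more chain-rule steps. The function below is a minimal sketch of such a staged backward pass, written to match the cache produced by batchnorm_forward above (a standard decomposition, not necessarily the course's reference solution):

def batchnorm_backward(dout, cache):
    # dout: upstream gradient of shape (N, D); cache comes from batchnorm_forward in 'train' mode
    x, x_norm, gamma, beta, eps, mean, var, sqrtvar = cache
    N = x.shape[0]

    # Gradients of the scale and shift parameters (the two sums above)
    dgamma = np.sum(dout * x_norm, axis=0)
    dbeta = np.sum(dout, axis=0)

    # Back through y = gamma * x_norm + beta
    dx_norm = dout * gamma

    # Back through x_norm = (x - mean) / sqrt(var + eps)
    dvar = np.sum(dx_norm * (x - mean) * -0.5 * (var + eps) ** -1.5, axis=0)
    dmean = np.sum(-dx_norm / sqrtvar, axis=0) + dvar * np.mean(-2.0 * (x - mean), axis=0)
    dx = dx_norm / sqrtvar + dvar * 2.0 * (x - mean) / N + dmean / N

    return dx, dgamma, dbeta

Each stage mirrors one node of the computational graph, so the result can be checked node by node against a numerical finite-difference approximation of the gradients.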