Network Design Optimizations in Deep Learning: Dropout and Batch Normalization (with Python implementations)

1. Dropout

Dropout (random deactivation) is a simple but very effective trick for improving neural network training. Roughly speaking, it prevents the network from relying too heavily on particular co-adapted feature combinations, which would otherwise hurt training (i.e., it reduces overfitting).

During the forward pass a random subset of neurons is deactivated; during the backward pass gradients flow only through the neurons that were kept.

Figure 1: Schematic of a network with Dropout applied (the original image is not reproduced here)

Note:

At test time you must compensate for the neurons dropped during training. For example, with vanilla dropout that drops half of the neurons (P = 0.5), the test-time activations have to be scaled by 0.5; otherwise they would be roughly twice as large as what the network saw during training, and the test results would be far off. Inverted dropout (used in the code below) instead divides by the keep probability already at training time, so the test-time forward pass needs no scaling at all.

import numpy as np


def dropout_forward(x, dropout_param):
    """
    Inputs:
    - x: Input data, of any shape
    - dropout_param: A dictionary with the following keys:
      - p: Dropout parameter. We keep each neuron output with probability p.
      - mode: 'test' or 'train'. If the mode is train, then perform dropout;
        if the mode is test, then just return the input.
      - seed: Seed for the random number generator. Passing seed makes this
        function deterministic, which is needed for gradient checking but not
        in real networks.

    Outputs:
    - out: Array of the same shape as x.
    - cache: tuple (dropout_param, mask). In training mode, mask is the dropout
      mask that was used to multiply the input; in test mode, mask is None.

    NOTE: Please implement **inverted** dropout, not the vanilla version of dropout.
    See http://cs231n.github.io/neural-networks-2/#reg for more details.

    NOTE 2: Keep in mind that p is the probability of **keeping** a neuron
    output; this is the opposite convention to some sources, where p refers
    to the probability of dropping a neuron output.
    """
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])

    mask = None
    out = None

    if mode == 'train':
        # Build the mask: keep each unit with probability p, then rescale by
        # 1/p so that the expected value of the activations (and hence of the
        # downstream x.W products) is unchanged -- this is inverted dropout.
        mask = (np.random.rand(*x.shape) < p) / p
        out = mask * x
    elif mode == 'test':
        # Inverted dropout needs no scaling at test time.
        out = x
    cache = (dropout_param, mask)
    out = out.astype(x.dtype, copy=False)

    return out, cache


def dropout_backward(dout, cache):
    """
    Backward pass for (inverted) dropout.

    Inputs:
    - dout: Upstream derivatives, of any shape
    - cache: (dropout_param, mask) from dropout_forward.

    Returns:
    - dx: Gradient with respect to x, of the same shape as dout.
    """
    dropout_param, mask = cache
    mode = dropout_param['mode']

    dx = None
    if mode == 'train':
        dx = dout * mask
    elif mode == 'test':
        dx = dout
    return dx
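
A quick sanity check of the two functions above (a minimal sketch; the array shape, the seed, and the keep probability are arbitrary illustration values, not from the assignment): with inverted dropout the training-time output has roughly the same mean as the input, test mode is simply the identity, and the backward pass routes gradients only through the kept neurons.

import numpy as np

np.random.seed(231)
x = np.random.randn(500, 500) + 10
p = 0.7  # probability of keeping a neuron

out_train, cache = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})

print('mean of x:              ', x.mean())
print('mean of train-time out: ', out_train.mean())         # ~ x.mean(), thanks to the 1/p rescaling
print('mean of test-time out:  ', out_test.mean())          # exactly x.mean()
print('fraction kept at train: ', (out_train != 0).mean())  # ~ p

# The backward pass only lets gradients flow through the kept neurons.
dout = np.random.randn(*x.shape)
dx = dropout_backward(dout, cache)
print('gradients zero where dropped:', np.all(dx[out_train == 0] == 0))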

2. Batch Normalization

A common idea in machine learning is that decorrelated input data with zero mean and unit variance tends to give better results. For a neural network, even if the input X satisfies this condition, the condition is destroyed after a few layers, or even after a single layer. This is where Batch Normalization (BN) comes in: it re-normalizes the activations inside the network.

BN improves neural network performance considerably: it speeds up convergence and raises accuracy, and once BN is used, Dropout is generally no longer needed. The small example below shows how quickly the activation statistics drift inside a network.
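
To make the drift concrete, here is a tiny illustration (a sketch only; the layer width, weight scale, and choice of ReLU are arbitrary): a standardized input loses its zero-mean / unit-variance statistics after a single random fully connected layer.

import numpy as np

# Toy illustration: activation statistics drift after one layer.
np.random.seed(0)
X = np.random.randn(1000, 50)          # standardized input: mean ~ 0, std ~ 1 per feature
W = 0.5 * np.random.randn(50, 50)      # one random fully connected layer
H = np.maximum(0, X.dot(W))            # ReLU activations

print('input  mean / std: %.3f / %.3f' % (X.mean(), X.std()))
print('hidden mean / std: %.3f / %.3f' % (H.mean(), H.std()))
# The hidden activations are no longer zero-mean / unit-variance;
# this is exactly the drift that a BN layer corrects at every layer.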

Figure 2: Forward-pass formulas of the BN layer (the original image is not reproduced here; the formulas are written out below)

Everything is done per column (per feature): the first two lines compute the mini-batch mean and variance, the third line normalizes the data (the expression resembles the exponent of a Gaussian density), and the final step scales and shifts the normalized data.
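
Written out in the standard batch-normalization notation (mini-batch of size N, applied independently to each feature), the four steps are:

\mu_B = \frac{1}{N} \sum_{i=1}^{N} x_i

\sigma_B^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \mu_B)^2

\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}

y_i = \gamma \hat{x}_i + \beta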

Figure 3: Backward-pass gradient formulas (the original image is not reproduced here; the gradients are written out below)

Differentiating each step of Figure 2 and chaining the pieces together gives dx.
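
For reference (and since the image is missing), these are the standard gradients from the batch-normalization paper that the code below follows, with L denoting the loss:

\frac{\partial L}{\partial \gamma} = \sum_{i=1}^{N} \frac{\partial L}{\partial y_i}\,\hat{x}_i , \qquad \frac{\partial L}{\partial \beta} = \sum_{i=1}^{N} \frac{\partial L}{\partial y_i} , \qquad \frac{\partial L}{\partial \hat{x}_i} = \frac{\partial L}{\partial y_i}\,\gamma

\frac{\partial L}{\partial \sigma_B^2} = -\frac{1}{2} \sum_{i=1}^{N} \frac{\partial L}{\partial \hat{x}_i}\,(x_i - \mu_B)\,(\sigma_B^2 + \epsilon)^{-3/2}

\frac{\partial L}{\partial \mu_B} = -\frac{1}{\sqrt{\sigma_B^2 + \epsilon}} \sum_{i=1}^{N} \frac{\partial L}{\partial \hat{x}_i} + \frac{\partial L}{\partial \sigma_B^2} \cdot \frac{-2}{N} \sum_{i=1}^{N} (x_i - \mu_B)

\frac{\partial L}{\partial x_i} = \frac{1}{\sqrt{\sigma_B^2 + \epsilon}}\,\frac{\partial L}{\partial \hat{x}_i} + \frac{2 (x_i - \mu_B)}{N}\,\frac{\partial L}{\partial \sigma_B^2} + \frac{1}{N}\,\frac{\partial L}{\partial \mu_B}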

def batchnorm_forward(x, gamma, beta, bn_param):
    """
    Forward pass for batch normalization.

    During training the sample mean and (uncorrected) sample variance are
    computed from minibatch statistics and used to normalize the incoming data.
    During training we also keep an exponentially decaying running mean of the
    mean and variance of each feature, and these averages are used to normalize
    data at test-time.

    At each timestep we update the running averages for mean and variance using
    an exponential decay based on the momentum parameter:

    running_mean = momentum * running_mean + (1 - momentum) * sample_mean
    running_var = momentum * running_var + (1 - momentum) * sample_var

    Note that the batch normalization paper suggests a different test-time
    behavior: they compute sample mean and variance for each feature using a
    large number of training images rather than using a running average. For
    this implementation we have chosen to use running averages instead since
    they do not require an additional estimation step; the torch7
    implementation of batch normalization also uses running averages.

    Input:
    - x: Data of shape (N, D)
    - gamma: Scale parameter of shape (D,)
    - beta: Shift parameter of shape (D,)
    - bn_param: Dictionary with the following keys:
      - mode: 'train' or 'test'; required
      - eps: Constant for numeric stability
      - momentum: Constant for running mean / variance.
      - running_mean: Array of shape (D,) giving running mean of features
      - running_var: Array of shape (D,) giving running variance of features

    Returns a tuple of:
    - out: of shape (N, D)
    - cache: A tuple of values needed in the backward pass
    """
    mode = bn_param['mode']
    eps = bn_param.get('eps', 1e-5)
    momentum = bn_param.get('momentum', 0.9)

    N, D = x.shape
    running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
    running_var = bn_param.get('running_var', np.zeros(D, dtype=x.dtype))

    out, cache = None, None
    if mode == 'train':
        sample_mean = np.mean(x, axis=0)
        sample_var = np.var(x, axis=0)
        x_norm = (x - sample_mean) / np.sqrt(sample_var + eps)
        out = gamma * x_norm + beta  # batch normalization for this mini-batch is done here

        # Update the running mean / variance estimates used at test time
        running_mean = momentum * running_mean + (1 - momentum) * sample_mean
        running_var = momentum * running_var + (1 - momentum) * sample_var
        bn_param['running_mean'] = running_mean
        bn_param['running_var'] = running_var

        # Cache intermediate values needed by the backward pass
        cache = {
            'x_minus_mean': (x - sample_mean),
            'x_norm': x_norm,
            'gamma': gamma,
            'i_var': 1 / np.sqrt(sample_var + eps),
            'sqrt_var': np.sqrt(sample_var + eps),
        }
    elif mode == 'test':
        # Normalize with the running mean / variance, then scale and shift
        # with gamma and beta.
        out = gamma * ((x - running_mean) / np.sqrt(running_var + eps)) + beta
    else:
        raise ValueError('Invalid forward batchnorm mode "%s"' % mode)

    # Store the updated running means back into bn_param
    bn_param['running_mean'] = running_mean
    bn_param['running_var'] = running_var

    return out, cache
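
A quick check of batchnorm_forward (a minimal sketch; the batch size, feature dimension, seed, and input statistics are arbitrary illustration values): in training mode with gamma = 1 and beta = 0, every feature comes out with mean close to 0 and variance close to 1; once the running statistics have settled, test mode normalizes with them instead of the batch statistics.

import numpy as np

np.random.seed(231)
N, D = 200, 3
x = 5 * np.random.randn(N, D) + 12
gamma, beta = np.ones(D), np.zeros(D)
bn_param = {'mode': 'train'}

out, _ = batchnorm_forward(x, gamma, beta, bn_param)
print('train-mode means:', out.mean(axis=0))   # close to [0, 0, 0]
print('train-mode stds: ', out.std(axis=0))    # close to [1, 1, 1]

# Let the running statistics settle over a few batches, then switch to test mode.
for _ in range(50):
    batchnorm_forward(5 * np.random.randn(N, D) + 12, gamma, beta, bn_param)
bn_param['mode'] = 'test'
out_test, _ = batchnorm_forward(x, gamma, beta, bn_param)
print('test-mode means: ', out_test.mean(axis=0))  # also close to 0
print('test-mode stds:  ', out_test.std(axis=0))   # also close to 1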

 

def batchnorm_backward(dout, cache):
    """
    Backward pass for batch normalization.

    For this implementation, you should write out a computation graph for
    batch normalization on paper and propagate gradients backward through
    intermediate nodes.

    Inputs:
    - dout: Upstream derivatives, of shape (N, D)
    - cache: Variable of intermediates from batchnorm_forward.

    Returns a tuple of:
    - dx: Gradient with respect to inputs x, of shape (N, D)
    - dgamma: Gradient with respect to scale parameter gamma, of shape (D,)
    - dbeta: Gradient with respect to shift parameter beta, of shape (D,)
    """
    dx, dgamma, dbeta = None, None, None

    x_minus_mean = cache.get('x_minus_mean')
    x_norm = cache.get('x_norm')
    i_var = cache.get('i_var')
    sqrt_var = cache.get('sqrt_var')
    gamma = cache.get('gamma')

    dbeta = np.sum(dout, axis=0)
    dgamma = np.sum(dout * x_norm, axis=0)

    N, D = dout.shape
    dx_norm = dout * gamma                            # backprop through the scale by gamma
    dxmu1 = dx_norm * i_var                           # branch 1: x_norm = (x - mean) * i_var, w.r.t. (x - mean)
    di_var = np.sum(dx_norm * x_minus_mean, axis=0)   # gradient w.r.t. i_var = 1/sqrt(var + eps), per feature
    dsqrtvar = di_var * (-1 / sqrt_var ** 2)          # through i_var = 1 / sqrt_var
    dvar = dsqrtvar * 0.5 * (1 / sqrt_var)            # through sqrt_var = sqrt(var + eps)
    dsq = dvar * np.ones_like(dout) / N               # distribute over the N squared deviations
    dxmu2 = dsq * 2 * x_minus_mean                    # branch 2: through (x - mean)^2
    dx1 = dxmu1 + dxmu2                               # combine both branches w.r.t. (x - mean)
    dmu = -np.sum(dxmu1 + dxmu2, axis=0)              # gradient w.r.t. the mean
    dx2 = dmu * np.ones_like(dout) / N                # distribute the mean gradient over the batch
    dx = dx1 + dx2                                    # total gradient w.r.t. x

    return dx, dgamma, dbeta

 

def batchnorm_backward_alt(dout, cache):
    """
    Alternative backward pass for batch normalization.

    For this implementation you should work out the derivatives for the batch
    normalizaton backward pass on paper and simplify as much as possible. You
    should be able to derive a simple expression for the backward pass.
    See the jupyter notebook for more hints.

    Note: This implementation should expect to receive the same cache variable
    as batchnorm_backward, but might not use all of the values in the cache.

    Inputs / outputs: Same as batchnorm_backward
    """
    dx, dgamma, dbeta = None, None, None
    N, D = dout.shape
    x_norm = cache.get('x_norm')
    gamma = cache.get('gamma')
    i_var = cache.get('i_var')
    x_minus_mean = cache.get('x_minus_mean')
    sqrt_var = cache.get('sqrt_var')

    # Backprop dout to calculate dbeta and dgamma.
    dbeta = np.sum(dout, axis=0)
    dgamma = np.sum(dout * x_norm, axis=0)

    # Simplified closed-form expression for dx.
    # ref: http://cthorey.github.io./backpropagation/
    dx = (1 / N) * gamma * i_var * (
        N * dout
        - np.sum(dout, axis=0)
        - x_minus_mean * np.square(i_var) * np.sum(dout * x_minus_mean, axis=0)
    )

    return dx, dgamma, dbeta
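
Finally, a consistency check between the two backward passes (again a sketch with arbitrary sizes and seed): the graph-based batchnorm_backward and the closed-form batchnorm_backward_alt should produce the same gradients up to numerical precision.

import numpy as np

np.random.seed(231)
N, D = 100, 5
x = np.random.randn(N, D)
gamma, beta = np.random.randn(D), np.random.randn(D)
dout = np.random.randn(N, D)

_, cache = batchnorm_forward(x, gamma, beta, {'mode': 'train'})
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)

print('dx match:    ', np.allclose(dx1, dx2))
print('dgamma match:', np.allclose(dgamma1, dgamma2))
print('dbeta match: ', np.allclose(dbeta1, dbeta2))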

 

 
