CS231n Assignment Notes 2.5: Implementing and Using Dropout

About CS231n

For background, see CS231n Course Notes 1: Introduction.
Everything here is the author's own thinking and has not been independently verified; corrections are welcome.

Assignment Notes

The only subtlety in dropout is keeping train and test activations on the same scale: during training each unit is kept with probability p, so the masked activations are divided by p (inverted dropout) and their expected value matches the untouched test-time activations.

1. Forward pass

  p, mode = dropout_param['p'], dropout_param['mode']
  if mode == 'train':
    # Inverted dropout: keep each unit with probability p, then divide by p
    # so the expected activation matches the untouched test-time output.
    mask = (np.random.rand(*x.shape) < p)
    out = x * mask / p
  elif mode == 'test':
    out = x
    mask = np.ones_like(x)
  cache = (dropout_param, mask)
  return out, cache

2. Backward pass

  dropout_param, mask = cache
  mode = dropout_param['mode']
  if mode == 'train':
    # Gradient flows only through the kept units, with the same 1/p scaling
    # as in the forward pass.
    dx = dout * mask / dropout_param['p']
  elif mode == 'test':
    dx = dout
  return dx
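
A quick sanity check (a minimal sketch, not part of the assignment scaffold; the +10 shift and p = 0.7 are arbitrary choices): with inverted dropout, E[mask / p] = 1, so the mean train-time activation should match the mean test-time activation.

  import numpy as np

  np.random.seed(0)
  x = np.random.randn(500, 500) + 10.0    # shift the mean away from zero
  p = 0.7                                 # keep probability

  mask = (np.random.rand(*x.shape) < p)   # train-time inverted dropout
  out_train = x * mask / p
  out_test = x                            # test time: identity

  print(out_train.mean())  # ~10.0, matches test time in expectation
  print(out_test.mean())   # ~10.0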

3. Application: a multi-layer neural network with dropout

Simply insert a dropout layer after each ReLU. For the implementation of the multi-layer network itself, see CS231n Assignment Notes 2.4: Implementing and Using Batchnorm.

    cache = {}
    hidden_value = None
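    # Forward pass: affine -> [batchnorm] -> ReLU -> [dropout] for each hidden layer.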
    hidden_value,cache['fc1'] = affine_forward(X,self.params['W1'],self.params['b1'])
    if self.use_batchnorm:
        hidden_value,cache['bn1'] = batchnorm_forward(hidden_value, self.params['gamma1'], self.params['beta1'], self.bn_params[0])
    hidden_value,cache['relu1'] = relu_forward(hidden_value)
    if self.use_dropout:
        hidden_value, cache['drop1'] = dropout_forward(hidden_value,self.dropout_param)
    for index in range(2,self.num_layers):
        hidden_value,cache['fc'+str(index)] = affine_forward(hidden_value,self.params['W'+str(index)],self.params['b'+str(index)])
        if self.use_batchnorm:
            hidden_value,cache['bn'+str(index)] = batchnorm_forward(hidden_value,  self.params['gamma'+str(index)], self.params['beta'+str(index)], self.bn_params[index-1])
        hidden_value,cache['relu'+str(index)] = relu_forward(hidden_value)
        if self.use_dropout:
            hidden_value, cache['drop'+str(index)] = dropout_forward(hidden_value,self.dropout_param)

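    # Final affine layer produces the class scores; no ReLU or dropout on top.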
    scores,cache['score'] = affine_forward(hidden_value,self.params['W'+str(self.num_layers)],self.params['b'+str(self.num_layers)])

    # If test mode return early
    if mode == 'test':
      return scores

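    # Softmax loss plus L2 regularization over every weight matrix.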
    loss, grads = 0.0, {}
    loss,dscores = softmax_loss(scores,y)
    for index in range(1,self.num_layers+1):
        loss += 0.5*self.reg*np.sum(self.params['W'+str(index)]**2)

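    # Backward pass: start from the scores layer, then unwind each hidden
    # layer in reverse order (dropout -> ReLU -> batchnorm -> affine).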
    dhidden_value,grads['W'+str(self.num_layers)],grads['b'+str(self.num_layers)] = affine_backward(dscores,cache['score'])
    for index in range(self.num_layers-1,1,-1):
        if (self.use_dropout):
            dhidden_value = dropout_backward(dhidden_value, cache['drop'+str(index)])
        dhidden_value = relu_backward(dhidden_value,cache['relu'+str(index)])
        if self.use_batchnorm:
            dhidden_value, grads['gamma'+str(index)], grads['beta'+str(index)] = batchnorm_backward(dhidden_value, cache['bn'+str(index)])
        dhidden_value,grads['W'+str(index)],grads['b'+str(index)] = affine_backward(dhidden_value,cache['fc'+str(index)])
    if (self.use_dropout):
        dhidden_value = dropout_backward(dhidden_value, cache['drop1'])
    dhidden_value = relu_backward(dhidden_value,cache['relu1'])
    if self.use_batchnorm:
        dhidden_value, grads['gamma1'], grads['beta1'] = batchnorm_backward(dhidden_value, cache['bn1'])
    dhidden_value,grads['W1'],grads['b1'] = affine_backward(dhidden_value,cache['fc1'])

    # Add the gradient of the L2 regularization term to each weight gradient.
    for index in range(1,self.num_layers+1):
        grads['W'+str(index)] += self.reg * self.params['W'+str(index)]

    return loss, grads
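
For reference, the `mode` flag and `self.dropout_param` used above are set up outside this snippet. A rough sketch of how the CS231n scaffold wires them (from memory, not verbatim from the assignment; here `dropout` is assumed to be the keep probability passed to the constructor, matching the forward pass above):

    # In __init__:
    self.dropout_param = {}
    if self.use_dropout:
        self.dropout_param = {'mode': 'train', 'p': dropout}

    # At the top of loss(): train vs. test is decided by whether labels y are given.
    mode = 'test' if y is None else 'train'
    if self.use_dropout:
        self.dropout_param['mode'] = mode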