
# Assignment Notes

Batch normalization is conceptually simple and easy to implement, yet it has many desirable properties; see the course notes (CS231n Lecture Notes 5.3: Batch Normalization) for details on what batchnorm does and why it helps. Two points worth keeping in mind:

1. The final step shifts and scales the normalized data, and these shift/scale parameters are learnable.
2. These parameters exist per dimension of x: if X.shape = [N,D], then gamma.shape = [D,] (and likewise beta.shape = [D,]).
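
The per-dimension shapes can be seen in a small standalone sketch (toy sizes, not the assignment code); with X.shape = (N, D), the batch statistics and gamma/beta all have shape (D,):

```python
import numpy as np

N, D = 4, 3
X = np.random.randn(N, D)

mean = X.mean(axis=0)        # shape (D,): one mean per dimension
var = X.var(axis=0)          # shape (D,): one variance per dimension
gamma = np.ones(D)           # learnable scale, shape (D,)
beta = np.zeros(D)           # learnable shift, shape (D,)

x_hat = (X - mean) / np.sqrt(var + 1e-5)
# Standard parameterization; the code below uses the equivalent
# (x_hat + beta) * gamma, which differs only by a reparameterization of beta.
out = gamma * x_hat + beta

print(mean.shape, gamma.shape, out.shape)
```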

## 1. Forward pass

1. The intermediate out_media is kept (and named) separately to simplify the backward pass.
2. var is used rather than std; this both matches the formulas in the notes and simplifies the backward pass.

    if mode == 'train':
        mean = np.mean(x, axis=0)
        var = np.var(x, axis=0)
        # Exponential moving averages, used at test time.
        running_mean = running_mean * momentum + (1 - momentum) * mean
        running_var = running_var * momentum + (1 - momentum) * var
        out_media = (x - mean) / np.sqrt(var + eps)
        # Note: this uses (x_hat + beta) * gamma rather than the usual
        # gamma * x_hat + beta; the two differ only by a reparameterization
        # of beta, and the backward pass below matches this choice.
        out = (out_media + beta) * gamma
        cache = (out_media, x, mean, var, beta, gamma, eps)
    elif mode == 'test':
        out = (x - running_mean) / np.sqrt(running_var + eps)
        out = (out + beta) * gamma
        cache = (out, x, running_mean, running_var, beta, gamma, eps)
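
As a quick sanity check, the train-mode branch can be exercised standalone (this is a sketch with assumed eps and identity gamma/beta, not the assignment's API): after normalization, each dimension has mean close to 0 and variance close to 1.

```python
import numpy as np

np.random.seed(0)
x = 5.0 + 2.0 * np.random.randn(200, 3)   # deliberately non-normalized input
eps = 1e-5
gamma, beta = np.ones(3), np.zeros(3)

mean = np.mean(x, axis=0)
var = np.var(x, axis=0)
out_media = (x - mean) / np.sqrt(var + eps)
out = (out_media + beta) * gamma          # same parameterization as the notes

# Per-dimension statistics after normalization.
print(out.mean(axis=0), out.var(axis=0))
```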

## 2. Backward pass

1. dvar also propagates into dmean, so decompose dvar first.
2. Deriving dvar directly is too messy; go through dstd as an intermediate.
3. Remember to multiply by the upstream gradient at every step.
4. To decompose dvar, go through (x - mean)^2 and then (x - mean) as intermediates.
    dout_media = dout * gamma
    dgamma = np.sum(dout * (out_media + beta), axis=0)
    dbeta = np.sum(dout * gamma, axis=0)
    # Direct path through the numerator of x_hat.
    dx = dout_media / np.sqrt(var + eps)
    dmean = -np.sum(dout_media / np.sqrt(var + eps), axis=0)
    # Path through std = sqrt(var + eps).
    dstd = np.sum(-dout_media * (x - mean) / (var + eps), axis=0)
    dvar = 1. / 2. / np.sqrt(var + eps) * dstd
    # var is the mean over the batch of (x - mean)^2.
    dx_minus_mean_square = dvar / x.shape[0]
    dx_minus_mean = 2 * (x - mean) * dx_minus_mean_square
    dx += dx_minus_mean
    dmean += np.sum(-dx_minus_mean, axis=0)
    dx += dmean / x.shape[0]
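
The dx derivation above can be verified numerically. The following standalone sketch (assumed toy sizes, loss = sum(out)) replays the analytic steps and compares against a centered-difference gradient:

```python
import numpy as np

np.random.seed(1)
N, D = 6, 4
x = np.random.randn(N, D)
gamma = np.random.randn(D)
beta = np.random.randn(D)
eps = 1e-5

def bn_forward(x):
    mean = np.mean(x, axis=0)
    var = np.var(x, axis=0)
    out_media = (x - mean) / np.sqrt(var + eps)
    return (out_media + beta) * gamma, out_media, mean, var

out, out_media, mean, var = bn_forward(x)
dout = np.ones_like(out)                  # upstream gradient of loss = sum(out)

# Analytic backward pass, following the steps in the notes.
dout_media = dout * gamma
dx = dout_media / np.sqrt(var + eps)
dmean = -np.sum(dout_media / np.sqrt(var + eps), axis=0)
dstd = np.sum(-dout_media * (x - mean) / (var + eps), axis=0)
dvar = 1. / 2. / np.sqrt(var + eps) * dstd
dx_minus_mean = 2 * (x - mean) * (dvar / N)
dx += dx_minus_mean
dmean += np.sum(-dx_minus_mean, axis=0)
dx += dmean / N

# Centered-difference numerical gradient of the same loss.
h = 1e-5
dx_num = np.zeros_like(x)
for i in range(N):
    for j in range(D):
        xp, xm = x.copy(), x.copy()
        xp[i, j] += h
        xm[i, j] -= h
        dx_num[i, j] = (bn_forward(xp)[0].sum() - bn_forward(xm)[0].sum()) / (2 * h)

print(np.max(np.abs(dx - dx_num)))
```

If the derivation is right, the maximum absolute difference should be on the order of floating-point noise.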

## 3. Application: a multi-layer network with batchnorm

### 3.1. Initialization code

    self.bn_params = []
    if self.use_batchnorm:
        self.bn_params = [{'mode': 'train'} for i in range(self.num_layers - 1)]
        for i in range(self.num_layers - 1):
            # One learnable shift/scale pair per hidden layer.
            self.params['beta' + str(i + 1)] = np.zeros(hidden_dims[i])
            self.params['gamma' + str(i + 1)] = np.ones(hidden_dims[i])
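
To see what this loop produces, here is a standalone illustration for a hypothetical 3-layer net (num_layers = 3, hidden_dims = [10, 5]); only the batchnorm parameters are shown, the W/b initialization is elsewhere:

```python
import numpy as np

num_layers = 3
hidden_dims = [10, 5]

params = {}
bn_params = [{'mode': 'train'} for i in range(num_layers - 1)]
for i in range(num_layers - 1):
    # beta starts at zero (no shift), gamma at one (no scaling).
    params['beta' + str(i + 1)] = np.zeros(hidden_dims[i])
    params['gamma' + str(i + 1)] = np.ones(hidden_dims[i])

print(sorted(params))   # ['beta1', 'beta2', 'gamma1', 'gamma2']
```

No pair is created for the output layer, since batchnorm is only applied to the hidden layers.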

### 3.2. Forward-pass code

    cache = {}
    # First hidden layer: affine -> (batchnorm) -> relu.
    hidden_value, cache['fc1'] = affine_forward(X, self.params['W1'], self.params['b1'])
    if self.use_batchnorm:
        hidden_value, cache['bn1'] = batchnorm_forward(hidden_value, self.params['gamma1'], self.params['beta1'], self.bn_params[0])
    hidden_value, cache['relu1'] = relu_forward(hidden_value)
    # Remaining hidden layers.
    for index in range(2, self.num_layers):
        hidden_value, cache['fc' + str(index)] = affine_forward(hidden_value, self.params['W' + str(index)], self.params['b' + str(index)])
        if self.use_batchnorm:
            hidden_value, cache['bn' + str(index)] = batchnorm_forward(hidden_value, self.params['gamma' + str(index)], self.params['beta' + str(index)], self.bn_params[index - 1])
        hidden_value, cache['relu' + str(index)] = relu_forward(hidden_value)

    # Final affine layer produces the class scores.
    scores, cache['score'] = affine_forward(hidden_value, self.params['W' + str(self.num_layers)], self.params['b' + str(self.num_layers)])

### 3.3. Backward-pass code

    loss, grads = 0.0, {}
    loss, dscores = softmax_loss(scores, y)
    for index in range(1, self.num_layers + 1):
        loss += 0.5 * self.reg * np.sum(self.params['W' + str(index)] ** 2)

    # Backprop through the final affine layer.
    dhidden_value, grads['W' + str(self.num_layers)], grads['b' + str(self.num_layers)] = \
        affine_backward(dscores, cache['score'])

    # Hidden layers in reverse order: relu -> (batchnorm) -> affine.
    for index in range(self.num_layers - 1, 0, -1):
        dhidden_value = relu_backward(dhidden_value, cache['relu' + str(index)])
        if self.use_batchnorm:
            dhidden_value, grads['gamma' + str(index)], grads['beta' + str(index)] = \
                batchnorm_backward(dhidden_value, cache['bn' + str(index)])
        dhidden_value, grads['W' + str(index)], grads['b' + str(index)] = \
            affine_backward(dhidden_value, cache['fc' + str(index)])

    # Regularization gradient for every weight matrix.
    for index in range(1, self.num_layers + 1):
        grads['W' + str(index)] += self.reg * self.params['W' + str(index)]
