cs231-assignment2-summary
Tips
- Start with dx as an all-zeros matrix with the same shape as x, then set the entries of dx where x > 0 to 1
dx = np.zeros_like(x, dtype=float)
dx[x > 0] = 1
- Reshape dx back to the shape of x
dx = np.reshape(dx, x.shape)
- Batch norm normalizes each feature, not each image
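A small sketch of what that means (shapes are illustrative): for an (N, D) input, batch norm computes one mean and one variance per feature, i.e. along axis 0.
import numpy as np
x = np.random.randn(4, 3)                  # N = 4 samples, D = 3 features
mu = x.mean(axis=0)                        # shape (3,): one mean per feature
var = x.var(axis=0)                        # shape (3,): one variance per feature
x_norm = (x - mu) / np.sqrt(var + 1e-5)    # each feature column now has ~zero mean and unit variance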
- Simplify the derived gradient expressions on paper before turning them into code, to cut down the amount of computation
x_norm, gamma, beta, sample_mean, sample_var, x, eps = cache
dnorm = gamma * dout
dvar = -0.5 * np.sum(dnorm * (x - sample_mean), axis=0) * np.power(sample_var + eps, -3/2)
dmean = -1 * np.sum(dnorm * np.power(sample_var + eps, -1/2), axis=0) - 2 * dvar * np.mean(x - sample_mean, axis=0)
dgamma = np.sum(dout * x_norm, axis=0)
dbeta = np.sum(dout, axis=0)
- Dropout mask
mask = np.random.rand(*x.shape) < p
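For context, a minimal inverted-dropout sketch built around that mask (here p is the keep probability, the division by p is the usual inverted-dropout rescaling, and numpy is assumed to be imported as np):
mask = (np.random.rand(*x.shape) < p) / p   # keep each unit with probability p, rescale so E[out] = x
out = x * mask                              # forward pass (train time)
dx = dout * mask                            # backward pass: gradient only flows through kept units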
- Zero-padding in NumPy
x_padded = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
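A quick shape check for that call (sizes chosen for illustration): only the two spatial axes are padded.
import numpy as np
pad = 1
x = np.zeros((2, 3, 4, 4))     # (N, C, H, W)
x_padded = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
print(x_padded.shape)          # (2, 3, 6, 6): H and W each grow by 2 * pad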
- The input to a convolutional layer contains a batch of N examples, so its output also contains N
Input:
- x: Input data of shape (N, C, H, W)
- w: Filter weights of shape (F, C, HH, WW)
- b: Biases, of shape (F,)
Output:
- out: Output data, of shape (N, F, H', W') where H' and W' are given by
H' = 1 + (H + 2 * pad - HH) / stride
W' = 1 + (W + 2 * pad - WW) / stride
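For example, plugging illustrative numbers into those formulas (7x7 filters with pad 3 and stride 2 on a 32x32 input):
H, W, HH, WW, pad, stride = 32, 32, 7, 7, 3, 2
H_out = 1 + (H + 2 * pad - HH) // stride    # 1 + (32 + 6 - 7) // 2 = 16
W_out = 1 + (W + 2 * pad - WW) // stride    # 16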
- Make the maximum value map to 1 and everything else to 0
m = np.max(win)
(m == win)
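That boolean mask is what routes the upstream gradient in max-pool backward; a self-contained toy example (values are made up):
import numpy as np
win = np.array([[1., 3.], [2., 0.]])    # one 2x2 pooling window
dout_val = 5.0                          # upstream gradient for this window's max output
m = np.max(win)
dwin = (m == win) * dout_val            # only the argmax position receives gradient
# dwin == [[0., 5.], [0., 0.]]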
- The effect of list concatenation
layers_dims = [input_dim] + hidden_dims + [num_classes]
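For example, with the usual CIFAR-10 sizes (input_dim = 3 * 32 * 32, two hidden layers of 100, 10 classes), this yields the full list of layer sizes:
layers_dims = [3 * 32 * 32] + [100, 100] + [10]   # -> [3072, 100, 100, 10]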
- Why use a running (exponentially decaying) average: in assignment 1, both training and testing normalize with statistics computed over the whole training set; here batch norm keeps a running average instead. See https://www.zhihu.com/question/55621104
- Layer ordering: affine (wx + b) -> batch norm -> ReLU -> dropout
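A sketch of one hidden layer in that order, using the layer functions from layers.py below (shapes and the keep probability are illustrative; dropout is written inline with the mask trick above rather than the assignment's dropout_forward, and numpy is assumed to be imported as np):
N, D, H = 4, 5, 3                                                    # illustrative sizes
x = np.random.randn(N, D)
w, b = np.random.randn(D, H), np.zeros(H)
gamma, beta = np.ones(H), np.zeros(H)
a, fc_cache = affine_forward(x, w, b)                                # wx + b
a, bn_cache = batchnorm_forward(a, gamma, beta, {'mode': 'train'})   # batch norm over the H features
a, relu_cache = relu_forward(a)                                      # ReLU
mask = (np.random.rand(*a.shape) < 0.5) / 0.5                        # inverted dropout, keep prob 0.5
out = a * mask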
code:
layers.py
import numpy as np
def affine_forward(x, w, b):
    # x: input of shape (N, d_1, ..., d_k); w: weights (D, M); b: biases (M,)
    N = x.shape[0]
    x_reshape = x.reshape(N, -1)      # flatten each example into a row of length D
    out = np.dot(x_reshape, w) + b    # out has shape (N, M)
    cache = (x, w, b)
    return out, cache
def affine_backward(dout, cache):
"""
Computes the backward pass for an affine layer.
Inputs:
- dout: Upstream derivative, of shape (N, M)
- cache: Tuple of:
- x: Input data, of shape (N, d_1, ... d_k)
- w: Weights, of shape (D, M)
Returns a tuple of:
- dx: Gradient with respect to x, of shape (N, d1, ..., d_k)
- dw: Gradient with respect to w, of shape (D, M)
- db: Gradient with respect to b, of shape (M,)
"""
    x, w, b = cache
    N = dout.shape[0]
    x_reshape = x.reshape(N, -1)             # (N, D)
    dw = x_reshape.T.dot(dout)               # (D, M)
    dx = dout.dot(w.T).reshape(x.shape)      # gradient reshaped back to the input's shape
    db = np.sum(dout, axis=0)                # (M,)
    return dx, dw, db
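# A quick way to sanity-check affine_backward is a centered finite-difference
# gradient check. This is a minimal sketch written from scratch, not the
# assignment's eval_numerical_gradient helper, and it is not part of the
# original layers.py.
def _num_grad(f, x, h=1e-5):
    # numerically estimate df/dx one coordinate at a time
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + h
        fp = f(x)
        x[idx] = old - h
        fm = f(x)
        x[idx] = old
        grad[idx] = (fp - fm) / (2 * h)
        it.iternext()
    return grad

# Example check (random shapes chosen for illustration):
# x, w, b = np.random.randn(3, 4), np.random.randn(4, 2), np.random.randn(2)
# dout = np.random.randn(3, 2)
# _, cache = affine_forward(x, w, b)
# dx, dw, db = affine_backward(dout, cache)
# dx_num = _num_grad(lambda v: np.sum(affine_forward(v, w, b)[0] * dout), x)
# print(np.max(np.abs(dx - dx_num)))   # should be tiny, on the order of 1e-9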
def relu_forward(x):
"""
Computes the forward pass for a layer of rectified linear units (ReLUs).
Input:
- x: Inputs, of any shape
Returns a tuple of:
- out: Output, of the same shape as x
- cache: x
"""
out = None
out = np.maximum(x, 0)
cache = x
return out, cache
def relu_backward(dout, cache):
"""
Computes the backward pass for a layer of rectified linear units (ReLUs).
Input:
- dout: Upstream derivatives, of any shape
- cache: Input x, of same shape as dout
Returns:
- dx: Gradient with respect to x
"""
    x = cache
    dx = dout * (x > 0)    # pass the upstream gradient only where the input was positive
    return dx
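# Tiny sanity check for the ReLU pair (values chosen for illustration,
# not part of the original layers.py); the zero entry gets no gradient:
# x = np.array([[-1.0, 2.0], [0.0, 3.0]])
# out, cache = relu_forward(x)                    # [[0., 2.], [0., 3.]]
# dx = relu_backward(np.ones_like(x), cache)      # [[0., 1.], [0., 1.]]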
def batchnorm_forward(x, gamma, beta, bn_param):
"""
Forward pass for batch normalization.
During training the sample mean and (uncorrected) sample variance are
computed from minibatch statistics and used to normalize the incoming data.
During training we also keep an exponentially decaying running mean of the mean
and variance of each feature, and these averages are used to normalize data
at test-time.
At each timestep we update the running averages for mean and variance using
an exponential decay based on the momentum parameter:
running_mean = momentum * running_mean + (1 - momentum) * sample_mean
running_var = momentum * running_var + (1 - momentum) * sample_var
Note that the batch normalization paper suggests a different test-time
behavior: they compute sample mean and variance for each feature using a
large number of training images rather than using a running average. For
this implementation we have chosen to use running averages instead since
they do not require an additional estimation step; the torch7 implementation
of batch normalization also uses running averages.
Input:
- x: Data of shape (N, D)
- gamma: Scale parameter of shape (D,)
- beta: Shift parameter of shape (D,)
- bn_param: Dictionary with the following keys:
- mode: 'train' or 'test'; required
- eps: Constant for numeric stability
- momentum: Constant for running mean / variance.
- running_mean: Array of shape (D,) giving running mean of features
- running_var: Array of shape (D,) giving running variance of features
Returns a tuple of:
- out: of shape (N, D)
- cache: A tuple of values needed in the backward pass
"""
mode = bn_param['mode']
eps = bn_param.get('eps', 1e-5)
momentum = bn_param.get('momentum', 0.9)
N, D = x.shape
running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
running_var = bn_param.get('running_var', np.zeros(D, dtype=x.dtype))
out, cache = None, None
    if mode == 'train':
        sample_mean = np.mean(x, axis=0)                        # per-feature mean, shape (D,)
        sample_var = np.var(x, axis=0)                          # per-feature (uncorrected) variance, shape (D,)
        x_norm = (x - sample_mean) / np.sqrt(sample_var + eps)
        out = gamma * x_norm + beta
        cache = (x_norm, gamma, beta, sample_mean, sample_var, x, eps)
        running_mean = momentum * running_mean + (1 - momentum) * sample_mean
        running_var = momentum * running_var + (1 - momentum) * sample_var
    elif mode == 'test':
        # At test time, normalize with the running statistics accumulated during training
        x_norm = (x - running_mean) / np.sqrt(running_var + eps)
        out = gamma * x_norm + beta
else:
raise ValueError('Invalid forward batchnorm mode "%s"' % mode)
# Store the updated running means back into bn_param
bn_param['running_mean'] = running_mean
bn_param['running_var'] = running_var
return out, cache
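# Minimal usage sketch for batchnorm_forward (shapes and values are illustrative,
# not part of the original layers.py); bn_param carries the running statistics
# between calls:
# N, D = 100, 5
# x = np.random.randn(N, D) * 3 + 7
# gamma, beta = np.ones(D), np.zeros(D)
# bn_param = {'mode': 'train'}
# out, _ = batchnorm_forward(x, gamma, beta, bn_param)
# print(out.mean(axis=0), out.std(axis=0))                     # per-feature mean ~0, std ~1
# bn_param['mode'] = 'test'
# out_test, _ = batchnorm_forward(x, gamma, beta, bn_param)    # uses running_mean / running_var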
def batchnorm_backward(dout, cache):
"""
Backward pass for batch normalization.
For this implementation, you should write out a computation graph for
batch normalization on paper and propagate gradients backward through
intermediate nodes.
Inputs:
- dout: Upstream derivatives, of shape (N, D)
- cache: Variable of intermediates from batchnorm_forward.
Returns a tuple of:
- dx: Gradient with respect to inputs x, of shape (N, D)
- dgamma: Gradient with respect to scale parameter gamma, of shape (D,)
- dbeta: Gradient with respect to shift parameter beta, of shape (D,)
"""
dx, dgamma, dbeta = None, None, None
    x_norm, gamma, beta, sample_mean, sample_var, x, eps = cache   # unpack the cache before using x
    N, D = x.shape
    dnorm = gamma * dout                                           # gradient w.r.t. x_norm
    dvar = -0.5 * np.sum(dnorm * (x - sample_mean), axis=0) * np.power(sample_var + eps, -1.5)
    dmean = -np.sum(dnorm * np.power(sample_var + eps, -0.5), axis=0) - 2.0 * dvar * np.mean(x - sample_mean, axis=0)
    dgamma = np.sum(dout * x_norm, axis=0)
    dbeta = np.sum(dout, axis=0)
    dx = dnorm * np.power(sample_var + eps, -0.5) + 2.0 * dvar * (x - sample_mean) / N + dmean / N
return dx, dgamma, dbeta
def batchnorm_backward_alt(dout, cache):
"""
Alternative backward pass for batch normalization.
For this implementation you should work out the derivatives for the batch
normalization backward pass on paper and simplify as much as possible. You
should be able to derive a simple expression for the backward pass.
Note: This implementation should expect to receive the same cache variable
as batchnorm_backward, but might not use all of the values in the cache.
Inputs / outputs: Same as batchnorm_backward