Background
This post walks through the forward and backward propagation of the convolutional (CONV) and pooling (POOL) layers.
Notation used throughout:
Superscript [l] denotes the layer a variable belongs to, i.e., the l-th layer.
- Example: a[4] is the activation of the 4th layer; W[5] and b[5] are the parameters of the 5th layer.
Superscript (i) denotes the i-th training example.
- Example: x(i) is the input of the i-th training example.
Subscript i denotes the i-th entry of a vector.
- Example: a^[l]_i denotes the i-th entry of the activations of layer l, assuming a fully connected (FC) layer.
n_H, n_W and n_C denote the height, width and number of channels of a given layer.
- Example: n_H^[l], n_W^[l], n_C^[l] are the dimensions of layer l.
n_H_prev, n_W_prev and n_C_prev denote the height, width and number of channels of the previous layer, i.e., n_H^[l-1], n_W^[l-1], n_C^[l-1].
1 - Outline
The convolutional layer involves the following steps:
- Zero Padding
- Convolve window
- Convolution forward
- Convolution backward (optional)
The pooling layer involves the following steps:
- Pooling forward
- Create mask
- Distribute value
- Pooling backward (optional)
All of these steps are implemented below with numpy. In a later assignment, the same network is built using TensorFlow functions.
In a standard convolutional network, CONV and POOL layers alternate and feed into fully connected (FC) layers.
Note that every forward step has a corresponding backward step. Intermediate results are therefore stored in a cache during the forward pass, so they can be reused when computing gradients in the backward pass.
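A minimal sketch of this caching pattern, using a hypothetical one-parameter layer (layer_forward/layer_backward are illustrative names, not part of the assignment):
def layer_forward(x, w):
    z = x * w            # forward computation of a toy one-parameter layer
    cache = (x, w)       # store what the backward pass will need
    return z, cache

def layer_backward(dz, cache):
    x, w = cache         # retrieve the stored intermediates
    dx = dz * w          # gradient of the cost w.r.t. the input
    dw = dz * x          # gradient of the cost w.r.t. the parameter
    return dx, dw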
2 - Convolutional Neural Networks
For a three-channel (RGB) volume, the filter must also be three channels deep. The filter is multiplied element-wise against the matching positions of the volume, and the products are summed to produce a single convolution output. We first build two helper functions:
2-1 Zero-Padding
Zero-padding prevents the image from shrinking sharply after each convolution, and avoids losing information at the image borders.
Usage of np.pad:
- The first argument is the array to pad.
- The second argument is the pad width, given per axis in order; e.g., (2, 3) on an axis means 2 values before and 3 after.
- The third argument is the padding mode.
For example:
import numpy as np

# 1-D array
arr1D = np.array([1, 1, 2, 2, 3, 4])
'''different padding modes'''
print('constant: ' + str(np.pad(arr1D, (2, 3), 'constant')))
constant: [0 0 1 1 2 2 3 4 0 0 0]
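Besides 'constant', np.pad supports other modes; for instance 'edge' repeats the border values (a quick illustration):
print('edge: ' + str(np.pad(arr1D, (2, 3), 'edge')))
edge: [1 1 1 1 2 2 3 4 4 4 4]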
arr3D = np.array([[[1, 1, 2, 2, 3, 4], [1, 1, 2, 2, 3, 4], [1, 1, 2, 2, 3, 4]],
[[1, 1, 2, 3, 4, 5], [1, 1, 2, 3, 4, 5], [1, 1, 2, 3, 4, 5]],
[[1, 1, 2, 2, 3, 6], [1, 1, 2, 2, 3, 6], [1, 1, 2, 2, 3, 6]]])
print("init array:\n"+str(arr3D))
print('constant: \n' + str(np.pad(arr3D, ((0, 0), (1, 1), (2, 2)), 'constant')))  # try changing the padding of axis 0
Output:
init array:
[[[1 1 2 2 3 4]
[1 1 2 2 3 4]
[1 1 2 2 3 4]]
[[1 1 2 3 4 5]
[1 1 2 3 4 5]
[1 1 2 3 4 5]]
[[1 1 2 2 3 6]
[1 1 2 2 3 6]
[1 1 2 2 3 6]]]
constant:
[[[0 0 0 0 0 0 0 0 0 0]
[0 0 1 1 2 2 3 4 0 0]
[0 0 1 1 2 2 3 4 0 0]
[0 0 1 1 2 2 3 4 0 0]
[0 0 0 0 0 0 0 0 0 0]]
[[0 0 0 0 0 0 0 0 0 0]
[0 0 1 1 2 3 4 5 0 0]
[0 0 1 1 2 3 4 5 0 0]
[0 0 1 1 2 3 4 5 0 0]
[0 0 0 0 0 0 0 0 0 0]]
[[0 0 0 0 0 0 0 0 0 0]
[0 0 1 1 2 2 3 6 0 0]
[0 0 1 1 2 2 3 6 0 0]
[0 0 1 1 2 2 3 6 0 0]
[0 0 0 0 0 0 0 0 0 0]]]
The zero-padding function is defined as follows:
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
"""
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0, 0),(pad, pad),(pad, pad),(0, 0)), 'constant', constant_values=0)
### END CODE HERE ###
return X_pad
Test code:
import matplotlib.pyplot as plt

np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print("x[1].shape=",x[1].shape)
print("x[1,1].shape=",x[1,1].shape)
print("x[1,1,1].shape=",x[1,1,1].shape)
print("x[1,1,1]=",x[1,1,1])
print ("x[1] =", x[1])
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
Output:
x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1].shape= (3, 3, 2)
x[1,1].shape= (3, 2)
x[1,1,1].shape= (2,)
x[1,1,1]= [-0.12289023 -0.93576943]
x[1] = [[[ 0.04221375 0.58281521]
[-1.10061918 1.14472371]
[ 0.90159072 0.50249434]]
[[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
[[-0.69166075 -0.39675353]
[-0.6871727 -0.84520564]
[-0.67124613 -0.0126646 ]]]
x[1,1] = [[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
x_pad[1,1] = [[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
2-2 Convolution Operation
An image convolution works as follows: the filter slides across the input, and at each position the overlapping values are multiplied element-wise and summed.
Note: the illustration here assumes a stride of 1.
Convolving an input image shrinks the output size (hence the padding above), while extracting image features.
Define the single-step convolution function:
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
"""
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice_prev and W (no bias yet)
s = np.multiply(a_slice_prev, W)
# Sum over all entries of the volume s, then add the scalar bias b once
Z = np.sum(s) + float(b)
### END CODE HERE ###
return Z
We test a single slice against the filter. Other regions of the image are handled the same way, simply by sliding the filter across the input.
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
Output:
Z = -6.99908945068
This value is the output of a single convolution step.
2-3 Forward Propagation in a Convolutional Network
A single filter produces a single output map; to obtain a stack of maps, the input is convolved with multiple filters and the results are stacked, as in the toy illustration below.
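A quick toy illustration of the stacking (np.stack with made-up maps; not from the original notebook):
import numpy as np

np.random.seed(1)
maps = [np.random.randn(4, 4) for _ in range(8)]  # pretend each 4x4 map is one filter's output
stacked = np.stack(maps, axis=-1)                 # stack along a new channel axis (channels-last)
print(stacked.shape)                              # (4, 4, 8)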
The output height and width relate to the previous layer's dimensions as follows:
n_H = floor((n_H_prev - f + 2*pad) / stride) + 1
n_W = floor((n_W_prev - f + 2*pad) / stride) + 1
n_C = number of filters used in the convolution
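Plugging in the dimensions of the test below (n_H_prev = n_W_prev = 4, f = 2, pad = 2, stride = 2) as a quick check:
n_H = floor((4 - 2 + 2*2) / 2) + 1 = floor(6/2) + 1 = 4
so with 8 filters the output volume Z has shape (10, 4, 4, 8).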
Forward propagation of the convolutional layer:
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
"""
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# dimensions of the input volume
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = W.shape
# dimensions of the filters
# Retrieve information from "hparameters" (≈2 lines)
# hyperparameters: padding and stride
stride = hparameters['stride']
pad = hparameters['pad']
# Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
n_H = 1 + int((n_H_prev + 2 * pad - f) / stride)
n_W = 1 + int((n_W_prev + 2 * pad - f) / stride)
# dimensions of the output volume
# Initialize the output volume Z with zeros. (≈1 line)
# initialize the output volume
Z = np.zeros((m, n_H, n_W, n_C))
# Create A_prev_pad by padding A_prev
# pad the input
A_prev_pad = zero_pad(A_prev, pad)
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# a_slice_prev: one filter-sized window of this example; convolve it with filter c
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
#Z[i, h, w, c] = np.sum(np.multiply(a_slice_prev, W[:, :, :, c])) + float(b[:, :, :, c])  # option 1
Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:, :, :, c], b[:, :, :, c])  # option 2
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
Test code:
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 1}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
Output:
Z's mean = 0.0489952035289
cache_conv[0][1][2][3] = [-0.20075807 0.18656139 0.41005165]
Of course, a convolutional layer normally also applies an activation function, in which case the full computation is:
# Convolve the window to get back one output neuron
Z[i, h, w, c] = ...
# Apply activation
A[i, h, w, c] = activation(Z[i, h, w, c])
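For example, with ReLU as the activation (one common choice; a sketch, not part of the graded function):
import numpy as np

def relu(z):
    # element-wise ReLU
    return np.maximum(0, z)

# Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
# A = relu(Z)   # apply the activation to the whole output volume at once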
3 - Pooling Layer
The pooling layer reduces the height and width of its input (dimensionality reduction), while introducing some invariance and retaining the dominant features. There are two common types: max pooling and average pooling.
Note that the pooling layer has no parameters to train.
Pooling layers generally do not use padding, so the output dimensions are:
n_H = floor((n_H_prev - f) / stride) + 1
n_W = floor((n_W_prev - f) / stride) + 1
n_C = n_C_prev
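A quick check with the test values used below (n_H_prev = n_W_prev = 4, f = 4, stride = 1):
n_H = floor((4 - 4) / 1) + 1 = 1
so pooling a (2, 4, 4, 3) input collapses it to (2, 1, 1, 3), as seen in the output below.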
3-1 Pooling Forward
Implementation:
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
"""
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# no padding here; only the filter size f and the stride
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
# the loop structure mirrors the conv layer; only the window operation differs
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
for w in range(n_W): # loop on the horizontal axis of the output volume
for c in range (n_C): # loop over the channels of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
# Compute the pooling operation on the slice. Use an if statement to differentiate the modes. Use np.max/np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.mean(a_prev_slice)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
Test code:
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 1, "f": 4}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
Output:
mode = max
A = [[[[ 1.74481176 1.6924546 2.10025514]]]
[[[ 1.19891788 1.51981682 2.18557541]]]]
mode = average
A = [[[[-0.09498456 0.11180064 -0.14263511]]]
[[[-0.09525108 0.28325018 0.33035185]]]]
4 - Backward Propagation in a Convolutional Network
4-1 Convolutional Layer Backward Pass
Computing dA:
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
Computing dW:
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
Computing db:
db[:,:,:,c] += dZ[i, h, w, c]
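As a cross-check (a sketch with toy shapes, not part of the graded code), the element-by-element db accumulation above is equivalent to summing dZ over the example and spatial axes:
import numpy as np

np.random.seed(0)
dZ = np.random.randn(2, 3, 3, 4)                    # toy (m, n_H, n_W, n_C) gradient
db = dZ.sum(axis=(0, 1, 2)).reshape(1, 1, 1, -1)    # one bias gradient per output channel
print(db.shape)                                     # (1, 1, 1, 4)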
Backward propagation code:
def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
"""
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = cache
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters"
stride = hparameters['stride']
pad = hparameters['pad']
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = dZ.shape
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
dW = np.zeros((f, f, n_C_prev, n_C))
db = np.zeros((1, 1, 1, n_C))
# Pad A_prev and dA_prev
A_prev_pad = zero_pad(A_prev, pad)
dA_prev_pad = zero_pad(dA_prev, pad)
for i in range(m): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = A_prev_pad[i]
da_prev_pad = dA_prev_pad[i]
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the slice from a_prev_pad
a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]  # gradient of the filter weights
db[:,:,:,c] += dZ[i, h, w, c]  # gradient of the bias
# Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = dA_prev_pad[i, pad:-pad, pad:-pad, :]  # gradient of the input, with padding stripped
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
Test code:
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
Output:
dA_mean = 1.45243777754
dW_mean = 1.72699145831
db_mean = 7.83923256462
4-2 Pooling Layer Backward Pass
4-2-1 Max pooling - backward pass
Before working through the pooling backward pass, we first define a helper function, create_mask_from_window(), which records the position of the maximum value.
def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
### START CODE HERE ### (≈1 line)
mask = (x == np.max(x))
### END CODE HERE ###
return mask
Test code:
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
Output:
x = [[ 1.62434536 -0.61175641 -0.52817175]
[-1.07296862 0.86540763 -2.3015387 ]]
mask = [[ True False False]
[False False False]]
We record the position of the maximum because, in the forward pass, only the max entry contributes to the cost. The backward pass computes the gradient of the cost, so only that contributing entry receives a non-zero gradient, as illustrated below.
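A small numeric illustration of this with toy values:
import numpy as np

x = np.array([[1., 3.],
              [2., 0.]])
mask = (x == np.max(x))    # True only at the position of the max
dout = 5.0                 # upstream gradient for this pooled output
print(mask * dout)         # [[0. 5.]
                           #  [0. 0.]] -- the gradient flows only through the max entry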
4-2-2 Average pooling - backward pass
Similarly, for the backward pass of average pooling we construct an analogous "mask".
For a 2×2 filter, the incoming gradient value is split into 2×2 = 4 equal shares and distributed to the 4 cells of the corresponding input window, spreading the value back over the larger region:
def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
"""
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = shape
# Compute the value to distribute on the matrix (≈1 line)
average = dz / (n_H * n_W)
# Create a matrix where every entry is the "average" value (≈1 line)
a = np.ones(shape) * average
### END CODE HERE ###
return a
Test code:
a = distribute_value(2, (2,2))
print('distributed value =', a)
Output:
distributed value = [[ 0.5 0.5]
[ 0.5 0.5]]
As shown, the original 1×1 gradient (value 2) is propagated back as a 2×2 matrix whose entries are all 0.5. This is how each pooled value is handled in the backward pass.
4-2-3 Putting the Pooling Backward Pass Together
Pooling backward code:
def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
"""
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = cache
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = hparameters['stride']
f = hparameters['f']
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
m, n_H, n_W, n_C = dA.shape
# Initialize dA_prev with zeros (≈1 line)
dA_prev = np.zeros_like(A_prev)
# np.zeros_like returns a new all-zero array with the same shape and dtype as A_prev
# (equivalent to np.zeros(A_prev.shape, dtype=A_prev.dtype))
for i in range(m): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = A_prev[i]
for h in range(n_H): # loop on the vertical axis
for w in range(n_W): # loop on the horizontal axis
for c in range(n_C): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
# Create the mask from a_prev_slice (≈1 line)
mask = create_mask_from_window(a_prev_slice)
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask * dA[i, h, w, c]
# dA is indexed by the output position (h, w), not by the input window corner
elif mode == "average":
# Get the value a from dA (≈1 line)
da = dA[i, h, w, c]
# Define the shape of the filter as fxf (≈1 line)
shape = (f, f)
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)
# spread the gradient of this output position evenly over its input window
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
Test code:
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
Output:
mode = max
mean of dA = 0.145713902729
dA_prev[1,1] = [[ 0. 0. ]
[ 5.05844394 -1.68282702]
[ 0. 0. ]]
mode = average
mean of dA = 0.145713902729
dA_prev[1,1] = [[ 0.08485462 0.2787552 ]
[ 1.26461098 -0.25749373]
[ 1.17975636 -0.53624893]]
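Finally, the functions above can be chained into a quick end-to-end smoke test (a sketch with arbitrary shapes, not part of the original assignment):
import numpy as np

np.random.seed(1)
A_prev = np.random.randn(2, 8, 8, 3)
W = np.random.randn(3, 3, 3, 4)
b = np.random.randn(1, 1, 1, 4)
# forward pass: CONV -> POOL
Z, cache_conv = conv_forward(A_prev, W, b, {"pad": 1, "stride": 1})
A, cache_pool = pool_forward(Z, {"f": 2, "stride": 2}, mode="max")
# backward pass: POOL -> CONV
dA = np.random.randn(*A.shape)            # pretend upstream gradient
dZ = pool_backward(dA, cache_pool, mode="max")
dA_prev, dW, db = conv_backward(dZ, cache_conv)
print(dA_prev.shape, dW.shape, db.shape)  # (2, 8, 8, 3) (3, 3, 3, 4) (1, 1, 1, 4)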