This assignment covers:
Building a neural network with a single hidden layer, including forward propagation and backward propagation.
While working through it, one bug took me a long time to track down:
Because this assignment is a binary classification problem, the output layer must use sigmoid as its activation function. I had used tanh() for both the hidden layer and the output layer, which corrupted the final results, and it took quite a while to find.
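A quick way to see the failure mode (hypothetical numbers, not from the assignment): tanh outputs lie in (-1, 1), so the cross-entropy's log() can receive a negative argument and return nan, while sigmoid outputs stay safely in (0, 1):

import numpy as np

z2 = np.array([[-2.0, 0.5, 3.0]])   # hypothetical output-layer pre-activations

a_tanh = np.tanh(z2)                # values in (-1, 1): can be negative
a_sigm = 1 / (1 + np.exp(-z2))      # values in (0, 1): valid probabilities

print(np.log(a_tanh))               # nan for the negative entry -> cost breaks
print(np.log(a_sigm))               # finite everywhere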
First, the architecture of this assignment's neural network:
This is clearly a two-layer network; by convention the input layer is not counted.
The hidden layer has four units, and each example in the input data has two features.
Next, the dimensions of each layer's parameters W and b:
Because layer 1 (the hidden layer) has 4 units and each unit receives 2 inputs,
$W^{[1]}$ is a 4x2 matrix and $b^{[1]}$ is a 4x1 matrix. Likewise, $W^{[2]}$ is a 1x4 matrix and $b^{[2]}$ is a 1x1 matrix.
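For reference, a minimal initialization sketch consistent with these shapes (this mirrors the assignment's parameter-initialization step; the 0.01 scaling is the usual small-random-weights choice, not something this post derives):

import numpy as np

def initialize_parameters(n_x, n_h, n_y):
    """n_x: input features, n_h: hidden units, n_y: output units."""
    W1 = np.random.randn(n_h, n_x) * 0.01   # (4, 2): small random values break symmetry
    b1 = np.zeros((n_h, 1))                 # (4, 1): biases can start at zero
    W2 = np.random.randn(n_y, n_h) * 0.01   # (1, 4)
    b2 = np.zeros((n_y, 1))                 # (1, 1)
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}

parameters = initialize_parameters(n_x=2, n_h=4, n_y=1)
print(parameters["W1"].shape)  # (4, 2)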
Next, the forward-propagation formulas, quoting from the assignment:
4 - Neural Network model
Mathematically:
For one example $x^{(i)}$:

$$z^{[1](i)} = W^{[1]} x^{(i)} + b^{[1](i)}\tag{1}$$

$$a^{[1](i)} = \tanh(z^{[1](i)})\tag{2}$$

$$z^{[2](i)} = W^{[2]} a^{[1](i)} + b^{[2](i)}\tag{3}$$

$$\hat{y}^{(i)} = a^{[2](i)} = \sigma(z^{[2](i)})\tag{4}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \left( y^{(i)}\log\left(a^{[2](i)}\right) + (1-y^{(i)})\log\left(1- a^{[2](i)}\right) \right)\tag{6}$$
Reminder: The general methodology to build a Neural Network is to:
1. Define the neural network structure (# of input units, # of hidden units, etc.).
2. Initialize the model’s parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
The exact meaning of each parameter is not detailed again here.
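The code below vectorizes these per-example formulas by stacking the m examples as columns of X. In matrix form (my own restatement, consistent with equations (1)-(4); each b is broadcast across the columns):

$$Z^{[1]} = W^{[1]} X + b^{[1]}, \qquad A^{[1]} = \tanh(Z^{[1]})$$

$$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}, \qquad A^{[2]} = \sigma(Z^{[2]})$$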
With the formulas in hand, we can implement the forward-propagation function forward_propagation():
# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Argument:
    X -- input data of size (n_x, m)
    parameters -- python dictionary containing your parameters (output of initialization function)

    Returns:
    A2 -- The sigmoid output of the second activation
    cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
    """
    # Retrieve each parameter from the dictionary "parameters"
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    ### END CODE HERE ###

    # Implement Forward Propagation to calculate A2 (probabilities)
    ### START CODE HERE ### (≈ 4 lines of code)
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = sigmoid(Z2)  # binary classification, so the output layer uses sigmoid
    ### END CODE HERE ###

    assert(A2.shape == (1, X.shape[1]))

    cache = {"Z1": Z1,
             "A1": A1,
             "Z2": Z2,
             "A2": A2}

    return A2, cache  # return the predictions plus each unit's activations; backprop needs the cache
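A quick shape sanity check on dummy data (this assumes the initialize_parameters sketch above and a sigmoid helper like the one below; the assignment supplies its own):

import numpy as np

def sigmoid(z):
    """Plain logistic function; the assignment provides its own version."""
    return 1 / (1 + np.exp(-z))

X_demo = np.random.randn(2, 5)   # 2 features, 5 examples
params_demo = initialize_parameters(n_x=2, n_h=4, n_y=1)
A2_demo, cache_demo = forward_propagation(X_demo, params_demo)
print(A2_demo.shape)           # (1, 5): one probability per example
print(cache_demo["A1"].shape)  # (4, 5): hidden-layer activations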
With the predictions in hand, we can evaluate formula (6) by implementing the compute_cost() function:
# GRADED FUNCTION: compute_cost

def compute_cost(A2, Y, parameters):
    """
    Computes the cross-entropy cost given in equation (6)

    Arguments:
    A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
    Y -- "true" labels vector of shape (1, number of examples)
    parameters -- python dictionary containing your parameters W1, b1, W2 and b2

    Returns:
    cost -- cross-entropy cost given equation (6)
    """
    m = Y.shape[1]  # number of examples

    # Compute the cross-entropy cost
    ### START CODE HERE ### (≈ 2 lines of code)
    logprobs = Y * np.log(A2) + (1 - Y) * np.log(1 - A2)
    cost = -1 / m * np.sum(logprobs)
    ### END CODE HERE ###

    cost = np.squeeze(cost)  # makes sure cost is the dimension we expect,
                             # e.g. turns [[17]] into 17
    assert(isinstance(cost, float))

    return cost
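As a quick check (hypothetical values): a model that predicts 0.5 for every example should incur a cost of log 2 ≈ 0.693 regardless of the labels. The parameters argument is unused inside the function, so None suffices here:

import numpy as np

Y_demo = np.array([[1, 0, 1, 1]])
A2_demo = np.full((1, 4), 0.5)               # maximally uninformative predictions
print(compute_cost(A2_demo, Y_demo, None))   # ≈ 0.6931 (= log 2)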
Then run backward propagation with the backward_propagation() function:
# GRADED FUNCTION: backward_propagation

def backward_propagation(parameters, cache, X, Y):
    """
    Implement the backward propagation using the instructions above.

    Arguments:
    parameters -- python dictionary containing our parameters
    cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
    X -- input data of shape (2, number of examples)
    Y -- "true" labels vector of shape (1, number of examples)

    Returns:
    grads -- python dictionary containing your gradients with respect to different parameters
    """
    m = X.shape[1]

    # First, retrieve W1 and W2 from the dictionary "parameters".
    ### START CODE HERE ### (≈ 2 lines of code)
    W1 = parameters["W1"]
    W2 = parameters["W2"]
    ### END CODE HERE ###

    # Retrieve also A1 and A2 from dictionary "cache".
    ### START CODE HERE ### (≈ 2 lines of code)
    A1 = cache["A1"]
    A2 = cache["A2"]
    ### END CODE HERE ###

    # Backward propagation: calculate dW1, db1, dW2, db2.
    ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
    dZ2 = A2 - Y
    dW2 = 1 / m * np.dot(dZ2, A1.T)
    db2 = 1 / m * np.sum(dZ2, axis=1, keepdims=True)
    dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
    dW1 = 1 / m * np.dot(dZ1, X.T)
    db1 = 1 / m * np.sum(dZ1, axis=1, keepdims=True)
    ### END CODE HERE ###

    grads = {"dW1": dW1,
             "db1": db1,
             "dW2": dW2,
             "db2": db2}

    return grads
Backward propagation is the hardest part of a neural network: it relies on the chain rule from calculus, and vectorizing it requires some linear algebra, so it is genuinely difficult to internalize.
A detailed walkthrough of backpropagation that I found easy to follow:
https://blog.csdn.net/qq_29407397/article/details/90599460
The backward_propagation() function here returns the gradient of the cost with respect to each parameter, so backward propagation is really just gradient computation. Because the network is layered and the chain rule touches many parameters, it is worth verifying the derivation by hand.
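For reference, these are the six vectorized gradient formulas that backward_propagation() implements (my restatement of the slide mentioned in the code comment; $*$ denotes elementwise multiplication, and $1 - A^{[1]\,2}$ is the derivative of tanh evaluated at the cached activations):

$$dZ^{[2]} = A^{[2]} - Y, \qquad dW^{[2]} = \frac{1}{m}\, dZ^{[2]} A^{[1]T}, \qquad db^{[2]} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[2](i)}$$

$$dZ^{[1]} = W^{[2]T} dZ^{[2]} * \left(1 - A^{[1]\,2}\right), \qquad dW^{[1]} = \frac{1}{m}\, dZ^{[1]} X^{T}, \qquad db^{[1]} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[1](i)}$$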
Finally, update the parameters with update_parameters():
# GRADED FUNCTION: update_parameters

def update_parameters(parameters, grads, learning_rate = 1.2):
    """
    Updates parameters using the gradient descent update rule given above

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    # Retrieve each parameter from the dictionary "parameters"
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    ### END CODE HERE ###

    # Retrieve each gradient from the dictionary "grads"
    ### START CODE HERE ### (≈ 4 lines of code)
    dW1 = grads["dW1"]
    db1 = grads["db1"]
    dW2 = grads["dW2"]
    db2 = grads["db2"]
    ### END CODE HERE ###

    # Update rule for each parameter
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2
    ### END CODE HERE ###

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}

    return parameters
At this point, one full pass (forward and backward) through the network built in this assignment is complete.
To actually train, just repeat the loop for a chosen number of iterations at a chosen learning rate, as sketched below.
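Putting the four steps together, here is a minimal sketch of the training loop and the prediction step (the assignment wraps these in nn_model() and predict() functions; the exact signatures below are my assumption):

def nn_model(X, Y, n_h, num_iterations=10000, learning_rate=1.2, print_cost=False):
    """Train the 2-layer network by iterating the four steps above."""
    n_x, n_y = X.shape[0], Y.shape[0]
    parameters = initialize_parameters(n_x, n_h, n_y)

    for i in range(num_iterations):
        A2, cache = forward_propagation(X, parameters)          # forward pass
        cost = compute_cost(A2, Y, parameters)                  # loss
        grads = backward_propagation(parameters, cache, X, Y)   # gradients
        parameters = update_parameters(parameters, grads, learning_rate)
        if print_cost and i % 1000 == 0:
            print("Cost after iteration %i: %f" % (i, cost))

    return parameters

def predict(parameters, X):
    """Classify each column of X as 0/1 by thresholding the output probability at 0.5."""
    A2, _ = forward_propagation(X, parameters)
    return (A2 > 0.5)

# Usage (X of shape (2, m), Y of shape (1, m)):
# parameters = nn_model(X, Y, n_h=4, num_iterations=10000, print_cost=True)
# predictions = predict(parameters, X)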
Here are the classification results after training (figures from the original post):
Training set scatter plot.
Classification result using logistic regression.
Classification result of the neural network built in this assignment, after training.
A second training set.
Its classification result after training.
Conclusion:
The comparison above makes the advantage of the neural network over logistic regression obvious.
This assignment was a first step into neural networks and the principles of forward and backward propagation. One remaining shortcoming: it does not yet use regularization to guard against overfitting.