Let's keep playing with cats!!

A preview of the final result on my own test image:

Accuracy: 1.0
y = 1.0, your L-layer model predicts a "cat" picture.
Task

We have already tried classifying cat pictures with a logistic regression model. This time we will go a step further and classify them with a single-hidden-layer neural network and with an L-layer neural network, and see how the results change.

As usual, we start by importing the packages and loading everything we need.
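A minimal sketch of that import cell, assuming the helper functions built in the previous assignment (load_data, initialize_parameters, linear_activation_forward, predict, and so on) live in a module named dnn_app_utils, as in the original assignment:

```python
import numpy as np                 # numerical arrays and linear algebra
import matplotlib.pyplot as plt    # plotting the cost curve and images
from PIL import Image              # resizing our own test images

# Helper functions from the previous assignment (module name is an assumption)
from dnn_app_utils import *

np.random.seed(1)                  # keep results reproducible
plt.rcParams['image.interpolation'] = 'nearest'
```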
Load the labeled data and take a look.

Then inspect the training and test sets: the number of examples, the image dimensions, and so on; see the sketch below.
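A sketch of how this inspection typically looks, assuming load_data() from the helper module returns the train/test splits and the class names as in the original assignment:

```python
# load_data() is assumed to come from the helper module
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()

m_train = train_x_orig.shape[0]   # number of training examples
m_test = test_x_orig.shape[0]     # number of test examples
num_px = train_x_orig.shape[1]    # height/width of each (square) image

print("Number of training examples: " + str(m_train))
print("Number of testing examples: " + str(m_test))
print("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print("train_x_orig shape: " + str(train_x_orig.shape))
print("train_y shape: " + str(train_y.shape))
print("test_x_orig shape: " + str(test_x_orig.shape))
print("test_y shape: " + str(test_y.shape))
```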
As usual, before feeding the images into the neural network, we reshape and standardize them:
```python
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T   # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T

# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten / 255.
test_x = test_x_flatten / 255.

print("train_x's shape: " + str(train_x.shape))
print("test_x's shape: " + str(test_x.shape))
```
12288 is $64 \times 64 \times 3$, the length of one image after it has been flattened into a 1-D vector.
Building the neural network

Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.

You will build two different models:
- A 2-layer neural network
- An L-layer deep neural network

You will then compare the performance of these models and try out different values of $L$.

Let's look at the two architectures.
2-layer neural network

Figure 2: 2-layer neural network.

The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT

Detailed architecture of Figure 2:
- The input is a (64, 64, 3) image, which is flattened to a vector of size $(12288, 1)$.
- The corresponding vector $[x_0, x_1, \ldots, x_{12287}]^T$ is multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
- A bias term is added, and applying the ReLU gives the vector $[a_0^{[1]}, a_1^{[1]}, \ldots, a_{n^{[1]}-1}^{[1]}]^T$.
- The same process is then repeated.
- The resulting vector is multiplied by $W^{[2]}$ and the intercept (bias) is added.
- Finally, you take the sigmoid of the result. If it is greater than 0.5, the picture is classified as a cat. (The full forward pass is written out below.)
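In equations, the forward pass of this 2-layer model is:

$$Z^{[1]} = W^{[1]} x + b^{[1]}, \qquad A^{[1]} = \mathrm{ReLU}(Z^{[1]})$$

$$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}, \qquad \hat{y} = A^{[2]} = \sigma(Z^{[2]})$$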
L-layer deep neural network

It is hard to draw an L-layer deep neural network with the representation above. Here is a simplified network representation:

Figure 3: L-layer neural network.
The model can be summarized as: ***[LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID***
Detailed architecture of Figure 3:
- The input is a (64, 64, 3) image, which is flattened to a vector of size $(12288, 1)$.
- The corresponding vector $[x_0, x_1, \ldots, x_{12287}]^T$ is multiplied by the weight matrix $W^{[1]}$, and then the intercept $b^{[1]}$ is added. The result is called the linear unit.
- Next, the ReLU of the linear unit is taken. This process may be repeated several times, once for each pair $(W^{[l]}, b^{[l]})$, depending on the model architecture.
- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, the picture is classified as a cat. (The general layer-by-layer equations follow below.)
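Written out, each hidden layer $l = 1, \ldots, L-1$ computes

$$Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, \qquad A^{[l]} = \mathrm{ReLU}(Z^{[l]}),$$

with $A^{[0]} = x$, and the output layer computes

$$Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}, \qquad \hat{y} = A^{[L]} = \sigma(Z^{[L]}).$$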
General methodology

As usual, you will follow the deep learning methodology to build the model:

1. Initialize parameters / define hyperparameters
2. Loop for num_iterations times:
   a. Forward propagation
   b. Compute the cost function (given below)
   c. Backward propagation
   d. Update parameters (using the parameters and the gradients from backprop)
3. Use the trained parameters to predict labels
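The cost in step 2b is the cross-entropy cost used throughout this course; its derivative with respect to $A^{[L]}$ is what initializes backpropagation in the code below:

$$J = -\frac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log a^{[L](i)} + \left(1 - y^{(i)}\right) \log\left(1 - a^{[L](i)}\right) \right)$$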
Now let's implement those two models!

Two-layer model
```python
# GRADED FUNCTION: two_layer_model

def two_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False):
    """
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- dimensions of the layers (n_x, n_h, n_y)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- if set to True, this will print the cost every 100 iterations

    Returns:
    parameters -- a dictionary containing W1, W2, b1, and b2
    """

    np.random.seed(1)
    grads = {}
    costs = []                        # to keep track of the cost
    m = X.shape[1]                    # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize parameters dictionary, by calling one of the functions you'd previously implemented
    ### START CODE HERE ### (≈ 1 line of code)
    parameters = initialize_parameters(n_x, n_h, n_y)
    ### END CODE HERE ###

    # Get W1, b1, W2 and b2 from the dictionary parameters.
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1, W2, b2". Output: "A1, cache1, A2, cache2".
        ### START CODE HERE ### (≈ 2 lines of code)
        A1, cache1 = linear_activation_forward(X, W1, b1, activation="relu")
        A2, cache2 = linear_activation_forward(A1, W2, b2, activation="sigmoid")
        ### END CODE HERE ###

        # Compute cost
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(A2, Y)
        ### END CODE HERE ###

        # Initializing backward propagation
        dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))

        # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
        ### START CODE HERE ### (≈ 2 lines of code)
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, activation="sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, activation="relu")
        ### END CODE HERE ###

        # Set grads['dW1'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update parameters.
        ### START CODE HERE ### (approx. 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]

        # Print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters
```
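A sketch of how the model is then trained and evaluated to get the numbers quoted below; the layer sizes and iteration count match the original assignment's constants, and predict() is assumed to come from the helper module:

```python
n_x = 12288      # num_px * num_px * 3
n_h = 7          # hidden layer size used in the original assignment
n_y = 1
layers_dims = (n_x, n_h, n_y)

parameters = two_layer_model(train_x, train_y, layers_dims=layers_dims,
                             num_iterations=2500, print_cost=True)

# Accuracy on the training and test sets
predictions_train = predict(train_x, train_y, parameters)
predictions_test = predict(test_x, test_y, parameters)
```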
Note: you may notice that running the model for fewer iterations (say, 1500) gives better accuracy on the test set. This is called "early stopping", and we will talk about it in the next course. Early stopping is one way to prevent overfitting.

Congratulations! It seems that your 2-layer neural network (72%) outperforms the logistic regression implementation (70%, the Week 2 assignment). Let's see if you can do even better with an $L$-layer model.
L-layer neural network
```python
# GRADED FUNCTION: L_layer_model

def L_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False):  # lr was 0.009
    """
    Implements an L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- input data, numpy array of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """

    np.random.seed(1)
    costs = []                        # keep track of cost

    # Parameters initialization.
    ### START CODE HERE ###
    parameters = initialize_parameters_deep(layers_dims)
    ### END CODE HERE ###

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        ### START CODE HERE ### (≈ 1 line of code)
        AL, caches = L_model_forward(X, parameters)
        ### END CODE HERE ###

        # Compute cost.
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(AL, Y)
        ### END CODE HERE ###

        # Backward propagation.
        ### START CODE HERE ### (≈ 1 line of code)
        grads = L_model_backward(AL, Y, caches)
        ### END CODE HERE ###

        # Update parameters.
        ### START CODE HERE ### (≈ 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters
```
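Training the deeper model works the same way; the layer sizes [12288, 20, 7, 5, 1] are the constants from the original assignment, and predict() is again assumed to come from the helper module:

```python
layers_dims = [12288, 20, 7, 5, 1]   # input layer, three hidden layers, output layer

parameters = L_layer_model(train_x, train_y, layers_dims,
                           num_iterations=2500, print_cost=True)

pred_train = predict(train_x, train_y, parameters)
pred_test = predict(test_x, test_y, parameters)
```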
Good! Your 5-layer neural network seems to perform better (80%) than the 2-layer one on the same test set. Well done!

In the next course, "Improving Deep Neural Networks", you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters: the learning rate, the number of layers, the number of iterations, and others you will learn about in that course.
Results analysis

First, let's look at some images the L-layer model labeled incorrectly; the cell below displays a few of these mislabeled images.
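A minimal sketch of that cell, assuming the helper print_mislabeled_images() from the course utilities and the pred_test array computed above:

```python
# Shows test images where the prediction disagrees with the true label
print_mislabeled_images(classes, test_x, test_y, pred_test)
```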
A few types of images the model tends to do poorly on include:
- Cat body in an unusual position
- Cat appears against a background of a similar color
- Unusual cat color or species
- Camera angle
- Brightness of the picture
- Scale variation (the cat is very large or very small in the image)
Test with your own image
```python
## START CODE HERE ##
my_image = "timg.jpg"   # change this to the name of your image file
my_label_y = [1]        # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##

fname = my_image
image = np.array(plt.imread(fname))
my_image = np.array(Image.fromarray(image).resize(size=(num_px, num_px))).reshape((num_px * num_px * 3, 1))
my_image = my_image / 255.   # standardize the same way as the training data
my_predicted_image = predict(my_image, my_label_y, parameters)

plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \""
      + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
```