Andrew Ng's Machine Learning, Week 5 Assignment: Neural Network Learning
Octave Code
nnCostFunction.m
function [J grad] = nnCostFunction(nn_params, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, ...
                                   X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
% [J grad] = NNCOSTFUNCTION(nn_params, input_layer_size, hidden_layer_size, ...
% num_labels, X, y, lambda) computes the cost and gradient of the neural network. The
% parameters for the neural network are "unrolled" into the vector
% nn_params and need to be converted back into the weight matrices.
%
% The returned parameter grad should be an "unrolled" vector of the
% partial derivatives of the neural network.
%
% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));
% Setup some useful variables
m = size(X, 1);
% You need to return the following variables correctly
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));
% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
% following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
% variable J. After implementing Part 1, you can verify that your
% cost function computation is correct by verifying the cost
% computed in ex4.m
%
% Part 2: Implement the backpropagation algorithm to compute the gradients
% Theta1_grad and Theta2_grad. You should return the partial derivatives of
% the cost function with respect to Theta1 and Theta2 in Theta1_grad and
% Theta2_grad, respectively. After implementing Part 2, you can check
% that your implementation is correct by running checkNNGradients
%
% Note: The vector y passed into the function is a vector of labels
% containing values from 1..K. You need to map this vector into a
% binary vector of 1's and 0's to be used with the neural network
% cost function.
%
% Hint: We recommend implementing backpropagation using a for-loop
% over the training examples if you are implementing it for the
% first time.
%
% Part 3: Implement regularization with the cost function and gradients.
%
% Hint: You can implement this around the code for
% backpropagation. That is, you can compute the gradients for
% the regularization separately and then add them to Theta1_grad
% and Theta2_grad from Part 2.
%
% Part 1: feedforward pass and (regularized) cost
X = [ones(m, 1) X];              % prepend the bias column to the inputs
for i = 1 : m
    z2 = Theta1 * X(i, :)';
    a2 = sigmoid(z2);
    a2 = [1; a2];                % add the hidden-layer bias unit
    z3 = Theta2 * a2;
    a3 = sigmoid(z3);            % output activations, num_labels x 1
    % one-hot encode the label for example i
    y_cls = zeros(num_labels, 1);
    y_cls(y(i)) = 1;
    J = J + (1 / m) * sum(-y_cls .* log(a3) - (1 - y_cls) .* log(1 - a3));
end
% regularization term (the bias columns of Theta1 and Theta2 are excluded)
J = J + (0.5 * lambda / m) * (sum(sum(Theta1(:, 2:end) .^ 2)) + sum(sum(Theta2(:, 2:end) .^ 2)));
% Part 2: backpropagation, looping over the training examples
for t = 1 : m
    % forward pass for example t (X already contains the bias column)
    a1 = X(t, :)';
    z2 = Theta1 * a1;
    a2 = sigmoid(z2);
    a2 = [1; a2];                % add the hidden-layer bias unit
    z3 = Theta2 * a2;
    a3 = sigmoid(z3);
    % one-hot encode the label for example t
    y_cls = zeros(num_labels, 1);
    y_cls(y(t)) = 1;
    % output-layer error, then hidden-layer error
    % (Theta2's bias column is excluded when propagating the error back)
    d3 = a3 - y_cls;
    d2 = (Theta2(:, 2:end)' * d3) .* sigmoidGradient(z2);
    % accumulate the gradient contributions
    Theta1_grad = Theta1_grad + d2 * a1';
    Theta2_grad = Theta2_grad + d3 * a2';
end
% Part 3: regularize the gradients (the bias columns are not regularized,
% so zero them out before adding the penalty term)
Theta1(:, 1) = 0;
Theta2(:, 1) = 0;
Theta1_grad = (1 / m) * Theta1_grad + (lambda / m) * Theta1;
Theta2_grad = (1 / m) * Theta2_grad + (lambda / m) * Theta2;
% -------------------------------------------------------------
% =========================================================================
% Unroll gradients
grad = [Theta1_grad(:) ; Theta2_grad(:)];
end
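For reference, the same J, Theta1_grad, and Theta2_grad can be computed without the per-example loop. The following is a minimal vectorized sketch, not the submitted solution; it assumes X is the raw m x input_layer_size matrix (before the bias column is added) and reuses the sigmoid and sigmoidGradient helpers from the exercise. The Part 3 regularization terms would be added exactly as above.
A1 = [ones(m, 1) X];                    % m x (input_layer_size + 1)
Z2 = A1 * Theta1';                      % m x hidden_layer_size
A2 = [ones(m, 1) sigmoid(Z2)];          % m x (hidden_layer_size + 1)
A3 = sigmoid(A2 * Theta2');             % m x num_labels
Y = eye(num_labels)(y, :);              % one-hot labels, m x num_labels (Octave-style indexing)
J = (1 / m) * sum(sum(-Y .* log(A3) - (1 - Y) .* log(1 - A3)));
% backpropagation over all examples at once
D3 = A3 - Y;                                            % m x num_labels
D2 = (D3 * Theta2(:, 2:end)) .* sigmoidGradient(Z2);    % m x hidden_layer_size
Theta1_grad = (1 / m) * (D2' * A1);
Theta2_grad = (1 / m) * (D3' * A2);
Either version can be verified with checkNNGradients, which compares the analytical gradients against central-difference numerical gradients, (J(theta + e) - J(theta - e)) / (2e) with e = 1e-4, which is exactly the two-column comparison printed in the run results below.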
sigmoidGradient.m
function g = sigmoidGradient(z)
%SIGMOIDGRADIENT returns the gradient of the sigmoid function
%evaluated at z
% g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
% evaluated at z. This should work regardless if z is a matrix or a
% vector. In particular, if z is a vector or matrix, you should return
% the gradient for each element.
g = zeros(size(z));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the gradient of the sigmoid function evaluated at
% each value of z (z can be a matrix, vector or scalar).
% compute sigmoid(z), then its derivative g'(z) = g(z) .* (1 - g(z)), elementwise
g_temp = 1.0 ./ (1.0 + exp(-z));
g = g_temp .* (1 - g_temp);
% =============================================================
end
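A quick sanity check at the Octave prompt: since sigmoid(0) = 0.5, the gradient at z = 0 must be 0.5 * 0.5 = 0.25, and the function should work elementwise over a vector (the same test ex4.m runs below):
sigmoidGradient(0)                  % expected: 0.25
sigmoidGradient([-1 -0.5 0 0.5 1])
% expected (matches the ex4 output below):
%   0.196612  0.235004  0.250000  0.235004  0.196612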
Run Results
Octave command prompt
octave:20> ex4
Loading and Visualizing Data ...
Program paused. Press enter to continue.
Loading Saved Neural Network Parameters ...
Feedforward Using Neural Network ...
Cost at parameters (loaded from ex4weights): 0.287629
(this value should be about 0.287629)
Program paused. Press enter to continue.
Checking Cost Function (w/ Regularization) ...
Cost at parameters (loaded from ex4weights): 0.383770
(this value should be about 0.383770)
Program paused. Press enter to continue.
Evaluating sigmoid gradient...
Sigmoid gradient evaluated at [-1 -0.5 0 0.5 1]:
0.196612 0.235004 0.250000 0.235004 0.196612
Program paused. Press enter to continue.
Initializing Neural Network Parameters ...
Checking Backpropagation...
-9.2783e-03 -9.2783e-03
8.8991e-03 8.8991e-03
-8.3601e-03 -8.3601e-03
7.6281e-03 7.6281e-03
-6.7480e-03 -6.7480e-03
-3.0498e-06 -3.0498e-06
1.4287e-05 1.4287e-05
-2.5938e-05 -2.5938e-05
3.6988e-05 3.6988e-05
-4.6876e-05 -4.6876e-05
-1.7506e-04 -1.7506e-04
2.3315e-04 2.3315e-04
-2.8747e-04 -2.8747e-04
3.3532e-04 3.3532e-04
-3.7622e-04 -3.7622e-04
-9.6266e-05 -9.6266e-05
1.1798e-04 1.1798e-04
-1.3715e-04 -1.3715e-04
1.5325e-04 1.5325e-04
-1.6656e-04 -1.6656e-04
3.1454e-01 3.1454e-01
1.1106e-01 1.1106e-01
9.7401e-02 9.7401e-02
1.6409e-01 1.6409e-01
5.7574e-02 5.7574e-02
5.0458e-02 5.0458e-02
1.6457e-01 1.6457e-01
5.7787e-02 5.7787e-02
5.0753e-02 5.0753e-02
1.5834e-01 1.5834e-01
5.5924e-02 5.5924e-02
4.9162e-02 4.9162e-02
1.5113e-01 1.5113e-01
5.3697e-02 5.3697e-02
4.7146e-02 4.7146e-02
1.4957e-01 1.4957e-01
5.3154e-02 5.3154e-02
4.6560e-02 4.6560e-02
The above two columns you get should be very similar.
(Left-Your Numerical Gradient, Right-Analytical Gradient)
If your backpropagation implementation is correct, then
the relative difference will be small (less than 1e-9).
Relative Difference: 2.08374e-11
Program paused. Press enter to continue.
Checking Backpropagation (w/ Regularization) ...
-9.2783e-03 -9.2783e-03
8.8991e-03 8.8991e-03
-8.3601e-03 -8.3601e-03
7.6281e-03 7.6281e-03
-6.7480e-03 -6.7480e-03
-1.6768e-02 -1.6768e-02
3.9433e-02 3.9433e-02
5.9336e-02 5.9336e-02
2.4764e-02 2.4764e-02
-3.2688e-02 -3.2688e-02
-6.0174e-02 -6.0174e-02
-3.1961e-02 -3.1961e-02
2.4923e-02 2.4923e-02
5.9772e-02 5.9772e-02
3.8641e-02 3.8641e-02
-1.7370e-02 -1.7370e-02
-5.7566e-02 -5.7566e-02
-4.5196e-02 -4.5196e-02
9.1459e-03 9.1459e-03
5.4610e-02 5.4610e-02
3.1454e-01 3.1454e-01
1.1106e-01 1.1106e-01
9.7401e-02 9.7401e-02
1.1868e-01 1.1868e-01
3.8193e-05 3.8193e-05
3.3693e-02 3.3693e-02
2.0399e-01 2.0399e-01
1.1715e-01 1.1715e-01
7.5480e-02 7.5480e-02
1.2570e-01 1.2570e-01
-4.0759e-03 -4.0759e-03
1.6968e-02 1.6968e-02
1.7634e-01 1.7634e-01
1.1313e-01 1.1313e-01
8.6163e-02 8.6163e-02
1.3229e-01 1.3229e-01
-4.5296e-03 -4.5296e-03
1.5005e-03 1.5005e-03
The above two columns you get should be very similar.
(Left-Your Numerical Gradient, Right-Analytical Gradient)
If your backpropagation implementation is correct, then
the relative difference will be small (less than 1e-9).
Relative Difference: 2.01903e-11
Cost at (fixed) debugging parameters (w/ lambda = 3.000000): 0.576051
(for lambda = 3, this value should be about 0.576051)
Program paused. Press enter to continue.
Training Neural Network...
Iteration 50 | Cost: 4.366683e-01
Program paused. Press enter to continue.
Visualizing Neural Network...
Program paused. Press enter to continue.
Training Set Accuracy: 96.060000
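Because ex4.m initializes the weights randomly (randInitializeWeights), the exact training-set accuracy can differ slightly between runs; the exercise text notes a variation of about 1% around roughly 95.3%. The accuracy itself comes from forward-propagating the training set through the learned weights and taking the most probable class per example; the provided predict.m does essentially the following (a sketch, not code written for this assignment):
function p = predict(Theta1, Theta2, X)
% forward propagation, then pick the class with the highest output activation
m = size(X, 1);
h1 = sigmoid([ones(m, 1) X] * Theta1');     % hidden-layer activations
h2 = sigmoid([ones(m, 1) h1] * Theta2');    % output-layer activations
[~, p] = max(h2, [], 2);                    % p(i) = index of the largest output for example i
end
ex4.m then reports mean(double(pred == y)) * 100 as the accuracy percentage.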