Machine Learning Week 5 Programming Exercise: Neural Network Learning

Neural Networks Learning

The data used in this exercise is the same as in the previous one: 5000 training examples, each representing an image of a handwritten digit. Each image is a 20x20 grayscale image, and the grayscale values of its 400 pixels make up one training example.
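Loading and inspecting the data looks roughly like this (a minimal sketch; it assumes the exercise's ex4data1.mat data file, which provides X and y, and the displayData helper that ships with the assignment):

% Load the 5000 x 400 image matrix X and the 5000 x 1 label vector y (labels 1..10)
load('ex4data1.mat');
fprintf('size(X) = %d x %d, size(y) = %d x %d\n', size(X), size(y));

% Show 100 randomly chosen digits with the displayData helper
sel = randperm(size(X, 1));
displayData(X(sel(1:100), :));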



Model representation

Here we use the following model with 3 layers in total: the first (input) layer has 400 features plus a bias unit, the hidden layer has 25 units, and the output layer has 10 units (matching the dimensions of Theta1 and Theta2 in the code below).
Note 1: Andrew has already prepared initial values for the neural network parameters; we only need to load('ex4weights.mat') to obtain them. Training of the network starts from these initial values. If the initialization is poor, training may converge to a bad local optimum, which is one of the easiest mistakes to make when training neural networks.
As an aside, the idea behind the DBN approach is to first run unsupervised pre-training with a DBN to obtain initial network parameters, use those parameters to initialize the neural network, and then perform supervised training. The reason is that initializing the network with parameters obtained from the DBN's unsupervised pre-training helps it avoid converging to a poor local optimum.
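Loading the provided weights and unrolling them into a single parameter vector, as ex4.m does, looks like this:

% Load the pre-trained parameters Theta1 (25 x 401) and Theta2 (10 x 26)
load('ex4weights.mat');
% Unroll both matrices into one column vector, the form nnCostFunction expects
nn_params = [Theta1(:) ; Theta2(:)];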


Feedforward and cost function

Now we need to compute the cost function and the gradient.

Here is the cost function without regularization, where m is the number of training examples and K is the number of outputs; here we have K = 10 outputs, representing the labels 1-10:

J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \Big[ -y_k^{(i)} \log\big( (h_\theta(x^{(i)}))_k \big) - (1 - y_k^{(i)}) \log\big( 1 - (h_\theta(x^{(i)}))_k \big) \Big]
The values of y need to be converted from digit form to vector (one-hot) form: a label y = 1 becomes the vector [1 0 0 0 0 0 0 0 0 0]', a label y = 2 becomes [0 1 0 0 0 0 0 0 0 0]', and so on, so that the first column of the resulting matrix corresponds to 1, the second column to 2, etc.
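A compact way to build this m x num_labels matrix, assuming y holds labels 1..num_labels (a sketch equivalent to the for-loop used inside nnCostFunction below):

% One-hot encode y: row i gets a 1 in column y(i) and zeros elsewhere
I = eye(num_labels);
Y = I(y, :);            % m x num_labels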

Regularized cost function

Below is the regularized cost function. It is simply the ordinary cost function plus the sum of the squares of every theta, with the regularization coefficient in front. Note, however, that not all thetas are included: every theta associated with a bias unit must be excluded.

J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \Big[ -y_k^{(i)} \log\big( (h_\theta(x^{(i)}))_k \big) - (1 - y_k^{(i)}) \log\big( 1 - (h_\theta(x^{(i)}))_k \big) \Big] + \frac{\lambda}{2m} \Big[ \sum_{j=1}^{25} \sum_{k=1}^{400} \big(\Theta_{j,k}^{(1)}\big)^2 + \sum_{j=1}^{10} \sum_{k=1}^{25} \big(\Theta_{j,k}^{(2)}\big)^2 \Big]
The code below is nnCostFunction. It implements only the regularized cost function; to get the unregularized cost, simply set lambda = 0.
function [J grad] = nnCostFunction(nn_params, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, ...
                                   X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
%   [J grad] = NNCOSTFUNCTION(nn_params, hidden_layer_size, num_labels, ...
%   X, y, lambda) computes the cost and gradient of the neural network. The
%   parameters for the neural network are "unrolled" into the vector
%   nn_params and need to be converted back into the weight matrices. 
% 
%   The returned parameter grad should be a "unrolled" vector of the
%   partial derivatives of the neural network.
%

% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

% Setup some useful variables
m = size(X, 1);
         
% You need to return the following variables correctly 
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));

% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
%               following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
%         variable J. After implementing Part 1, you can verify that your
%         cost function computation is correct by verifying the cost
%         computed in ex4.m
%
% Part 2: Implement the backpropagation algorithm to compute the gradients
%         Theta1_grad and Theta2_grad. You should return the partial derivatives of
%         the cost function with respect to Theta1 and Theta2 in Theta1_grad and
%         Theta2_grad, respectively. After implementing Part 2, you can check
%         that your implementation is correct by running checkNNGradients
%
%         Note: The vector y passed into the function is a vector of labels
%               containing values from 1..K. You need to map this vector into a 
%               binary vector of 1's and 0's to be used with the neural network
%               cost function.
%
%         Hint: We recommend implementing backpropagation using a for-loop
%               over the training examples if you are implementing it for the 
%               first time.
%
% Part 3: Implement regularization with the cost function and gradients.
%
%         Hint: You can implement this around the code for
%               backpropagation. That is, you can compute the gradients for
%               the regularization separately and then add them to Theta1_grad
%               and Theta2_grad from Part 2.
%
%% Fully vectorized version
a1 = [ones(m,1) X]; %5000*401
z2 = a1*Theta1';  %5000*25
a2 = [ones(size(z2,1),1) sigmoid(z2)]; %5000*(25+1)
z3 = a2*Theta2'; %5000*10
a3 = sigmoid(z3);
h=a3;
%----------------- Part 1: Compute Cost (Feedforward) --------------------
Y=zeros(m,num_labels);
for i=1:num_labels
    Y(:,i)=(y==i);
end
J = 1/m*sum(sum(-Y.*log(h)-(1-Y).*log(1-h)));
%--------------------------------------------------------------------------
%% Add the regularization term to the cost
J = J + lambda/2/m*( sum(sum(Theta1(:,2:end).^2)) + sum(sum(Theta2(:,2:end).^2)) );
% compute delta
delta3=zeros(m, num_labels);
for k = 1 : num_labels
    delta3(:,k) = a3(:,k) - (y==k); %5000*10
end
delta2 = delta3 * Theta2 .* [ones(size(z2,1),1) sigmoidGradient(z2)]; %5000*26

%compute Delta
Delta1 = delta2(:,2:end)' * a1;  %25*401
Delta2 = delta3' * a2; %10*26

% compute Theta_grad
Theta1_grad = 1/m*Delta1;
Theta2_grad = 1/m*Delta2;

% Regularization terms for the gradients
reg1 = lambda/m*Theta1;
reg2 = lambda/m*Theta2;
reg1(:,1) = 0;
reg2(:,1) = 0;
Theta1_grad = Theta1_grad + reg1;
Theta2_grad = Theta2_grad + reg2;



% -------------------------------------------------------------

% =========================================================================

% Unroll gradients
grad = [Theta1_grad(:) ; Theta2_grad(:)];


end

Some notes on this code:
function [J grad] = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda)
Input arguments: nn_params - the (unrolled) neural network parameters
                 input_layer_size, hidden_layer_size, num_labels - the number of units in each layer
                 X, y - the training data: inputs (X) and labels (y)
                 lambda - the regularization parameter
From the regularized cost expression above, J(θ) is a function of the training data (X, y), the network parameters, and the regularization parameter lambda.
Since nnCostFunction() receives all of these as inputs, it can compute the corresponding cost (and gradient) for that data.
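A minimal usage sketch, assuming the data and pre-trained weights loaded earlier; the expected values are the ones reported by ex4.m:

input_layer_size  = 400;   % 20x20 input images
hidden_layer_size = 25;
num_labels        = 10;

% Unregularized cost (lambda = 0); ex4.m expects about 0.287629
J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, 0);

% Regularized cost with lambda = 1; ex4.m expects about 0.383770
J_reg = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                       num_labels, X, y, 1);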


The neural network computes its gradients, i.e. the derivatives of J(θ), by backpropagation.
During backpropagation we compute an error term, the small delta δ, for every layer except the input layer.
The computation consists of the following 5 steps, where steps 1-4 are performed for each training example (a per-example sketch follows this list):
1. Forward propagation: compute z and the activation for every layer; don't forget to add the bias unit along the way.
2. Compute δ for the third (output) layer: δ^(3) = a^(3) − y.
3. Compute δ for the second (hidden) layer: δ^(2) = (Θ^(2))' * δ^(3) .* g'(z^(2)).
4. Accumulate Δ for each layer: Δ^(l) = Δ^(l) + δ^(l+1) * (a^(l))'. Be careful here: δ has one entry per unit of its layer, and for a hidden layer that includes the bias unit. Δ ends up as the partial derivative with respect to that layer's Theta and has the same dimensions as that Theta; because the bias entry of the next layer's δ is not produced by the current layer's Theta, it must be dropped when accumulating Δ.
5. After looping over all m training examples, compute the final partial derivatives: Theta_grad = (1/m) * Δ (the regularization term is added afterwards).
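If you implement backpropagation with the for-loop the hint recommends, a minimal sketch looks like this (it reuses sigmoid and sigmoidGradient from the exercise and the same variable names as the vectorized code above):

% Per-example backpropagation loop
Delta1 = zeros(size(Theta1));          % 25 x 401
Delta2 = zeros(size(Theta2));          % 10 x 26
for t = 1:m
    % Step 1: forward propagation for example t
    a1 = [1; X(t, :)'];                % 401 x 1 (with bias)
    z2 = Theta1 * a1;                  % 25 x 1
    a2 = [1; sigmoid(z2)];             % 26 x 1 (with bias)
    z3 = Theta2 * a2;                  % 10 x 1
    a3 = sigmoid(z3);                  % 10 x 1

    % Step 2: output-layer error against the one-hot label
    yt = ((1:num_labels)' == y(t));    % 10 x 1
    delta3 = a3 - yt;

    % Step 3: hidden-layer error; drop the bias entry afterwards
    delta2 = (Theta2' * delta3) .* [1; sigmoidGradient(z2)];
    delta2 = delta2(2:end);            % 25 x 1

    % Step 4: accumulate the gradients
    Delta1 = Delta1 + delta2 * a1';    % 25 x 401
    Delta2 = Delta2 + delta3 * a2';    % 10 x 26
end
% Step 5: average over the m examples (regularization is added afterwards)
Theta1_grad = Delta1 / m;
Theta2_grad = Delta2 / m;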


In fact, step 1 can be computed for all m training examples at once with matrix operations. The remaining steps can be implemented with a loop over each training example (as in the sketch above), although the fully vectorized code at the top of this post handles them with matrix operations as well.
Completing step 1: the forward computation
------------------------------------------------------------------------------------------------------------------------------------------------------
1. Forward propagation: compute z and the activation for every layer; don't forget to add the bias unit along the way.
%% Fully vectorized version
a1 = [ones(m,1) X]; %5000*401
z2 = a1*Theta1';  %5000*25
a2 = [ones(size(z2,1),1) sigmoid(z2)]; %5000*(25+1)
z3 = a2*Theta2'; %5000*10
a3 = sigmoid(z3);
h=a3;
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Compute the regularized cost function
J = 1/m*sum(sum(-Y.*log(h)-(1-Y).*log(1-h)));
%--------------------------------------------------------------------------
%% Add the regularization term to the cost
J = J + lambda/2/m*( sum(sum(Theta1(:,2:end).^2)) + sum(sum(Theta2(:,2:end).^2)) );

---------------------------------------------------------------------------------------------------------------------------------------------------------------

Convert the training labels into vector (one-hot) form


Y=zeros(m,num_labels);
for i=1:num_labels
    Y(:,i)=(y==i);
end

-----------------------------------------------------------------------------------------------------------------------------------------------------------------

Compute δ for each layer, accumulate Δ, and obtain the (unregularized) gradients


delta3=zeros(m, num_labels);
for k = 1 : num_labels
    delta3(:,k) = a3(:,k) - (y==k); %5000*10
end
delta2 = delta3 * Theta2 .* [ones(size(z2,1),1) sigmoidGradient(z2)]; %5000*26


%compute Delta
Delta1 = delta2(:,2:end)' * a1;  %25*401
Delta2 = delta3' * a2; %10*26


% compute Theta_grad
Theta1_grad = 1/m*Delta1;
Theta2_grad = 1/m*Delta2;

-------------------------------------------------------------------------------------------------------------------------------------------------

Regularized Neural Networks


When computing the gradients, none of the theta entries corresponding to the bias units should be regularized.
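In equation form (a restatement of the exercise's formulas, where j = 0 indexes the bias column):

\frac{\partial J}{\partial \Theta_{ij}^{(l)}} = \frac{1}{m} \Delta_{ij}^{(l)}                                      for j = 0

\frac{\partial J}{\partial \Theta_{ij}^{(l)}} = \frac{1}{m} \Delta_{ij}^{(l)} + \frac{\lambda}{m} \Theta_{ij}^{(l)}    for j >= 1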

% Regularization terms for the gradients
reg1 = lambda/m*Theta1;
reg2 = lambda/m*Theta2;
reg1(:,1) = 0;
reg2(:,1) = 0;
Theta1_grad = Theta1_grad + reg1;
Theta2_grad = Theta2_grad + reg2;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Random initialization

To break the symmetry (symmetry breaking), when choosing the initial theta we take each layer's Theta in turn and initialize every entry randomly and uniformly in the range [-ε_init, ε_init].
The following rule of thumb can be used to choose a suitable range:

\epsilon_{init} = \frac{\sqrt{6}}{\sqrt{L_{in} + L_{out}}}

where L_in and L_out are the numbers of units in the layers adjacent to Θ. For the first layer here this gives √6 / √(400 + 25) ≈ 0.12, which is the value used below.
Once the range has been determined, the random selection itself is:
% Randomly initialize the weights to small values
epsilon_init = 0.12;
W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
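In ex4.m this is wrapped in a randInitializeWeights function and used roughly like this (a sketch):

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
% Unrolled starting point for training
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];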

Gradient checking

Gradient checking is one of the most valuable techniques in this week's material. The first question is: how do we verify that the gradients (the derivatives of the cost function) computed in our neural network are correct?
First, be clear that we compute the gradients with the backpropagation algorithm, not by evaluating a hand-derived closed-form expression for the derivative of the cost, since no convenient closed-form derivative is available. Recall what the function nnCostFunction() computes: for the network parameters nn_params and the corresponding training data, it returns the cost value and the corresponding derivatives (the gradient), and inside that function the gradient is computed by backpropagation. The cost can be viewed as a function J(θ) of the network parameters θ.
Therefore, we can compute the derivatives of J(θ) numerically and compare them with the gradients obtained by backpropagation. If the cost function is computed correctly and the backpropagation gradients are computed correctly, the numerically computed gradients and the backpropagation gradients should be very close.
The numerical estimate uses the two-sided finite difference

\frac{\partial J(\theta)}{\partial \theta_i} \approx \frac{J(\theta + \epsilon\, e_i) - J(\theta - \epsilon\, e_i)}{2\epsilon}

where e_i is the i-th unit vector and ε is a small constant (1e-4 in the code below).
First, the network's layer parameters Θ^(1), Θ^(2) are unrolled into a single vector θ. The cost function J(θ) is then differentiated with respect to every component of that vector (θ_1, θ_2, θ_3, ..., θ_n), giving the gradient with respect to each component.
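In ex4.m the check is invoked simply as follows, first without and then with regularization:

checkNNGradients;        % lambda defaults to 0
checkNNGradients(3);     % also check the regularized gradients with lambda = 3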
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
function checkNNGradients(lambda)
%CHECKNNGRADIENTS Creates a small neural network to check the
%backpropagation gradients
%   CHECKNNGRADIENTS(lambda) Creates a small neural network to check the
%   backpropagation gradients, it will output the analytical gradients
%   produced by your backprop code and the numerical gradients (computed
%   using computeNumericalGradient). These two gradient computations should
%   result in very similar values.
%

if ~exist('lambda', 'var') || isempty(lambda)
    lambda = 0;
end

input_layer_size = 3;
hidden_layer_size = 5;
num_labels = 3;
m = 5;

% We generate some 'random' test data
Theta1 = debugInitializeWeights(hidden_layer_size, input_layer_size);
Theta2 = debugInitializeWeights(num_labels, hidden_layer_size);
% Reusing debugInitializeWeights to generate X
X  = debugInitializeWeights(m, input_layer_size - 1);
y  = 1 + mod(1:m, num_labels)';

% Unroll parameters
nn_params = [Theta1(:) ; Theta2(:)];

% Short hand for cost function
costFunc = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, ...
                               num_labels, X, y, lambda);

[cost, grad] = costFunc(nn_params);
numgrad = computeNumericalGradient(costFunc, nn_params);

% Visually examine the two gradient computations.  The two columns
% you get should be very similar. 
disp([numgrad grad]);
fprintf(['The above two columns you get should be very similar.\n' ...
         '(Left-Your Numerical Gradient, Right-Analytical Gradient)\n\n']);

% Evaluate the norm of the difference between two solutions.  
% If you have a correct implementation, and assuming you used EPSILON = 0.0001 
% in computeNumericalGradient.m, then diff below should be less than 1e-9
diff = norm(numgrad-grad)/norm(numgrad+grad);

fprintf(['If your backpropagation implementation is correct, then \n' ...
         'the relative difference will be small (less than 1e-9). \n' ...
         '\nRelative Difference: %g\n'], diff);

end


----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
% Get a function handle to nnCostFunction with the corresponding arguments bound
costFunc = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, ...
                               num_labels, X, y, lambda);

% Use nnCostFunction to compute the gradient at the given parameters
[cost, grad] = costFunc(nn_params);
% Use the numerical method computeNumericalGradient to compute the gradient
numgrad = computeNumericalGradient(costFunc, nn_params);

% Visually examine the two gradient computations.  The two columns
% you get should be very similar. 
disp([numgrad grad]);
----------------------------------------------------------------------------------------------------------------------------------------------------------

From the call numgrad = computeNumericalGradient(costFunc, nn_params); we can see that J inside computeNumericalGradient is the function handle to nnCostFunction.


function numgrad = computeNumericalGradient(J, theta)
%COMPUTENUMERICALGRADIENT Computes the gradient using "finite differences"
%and gives us a numerical estimate of the gradient.
%   numgrad = COMPUTENUMERICALGRADIENT(J, theta) computes the numerical
%   gradient of the function J around theta. Calling y = J(theta) should
%   return the function value at theta.

% Notes: The following code implements numerical gradient checking, and 
%        returns the numerical gradient. It sets numgrad(i) to (a numerical 
%        approximation of) the partial derivative of J with respect to the 
%        i-th input argument, evaluated at theta. (i.e., numgrad(i) should 
%        be (approximately) the partial derivative of J with respect 
%        to theta(i).)
%                

numgrad = zeros(size(theta));
perturb = zeros(size(theta));
e = 1e-4;
for p = 1:numel(theta)
    % Set perturbation vector
    perturb(p) = e;
    loss1 = J(theta - perturb);
    loss2 = J(theta + perturb);
    % Compute Numerical Gradient
    numgrad(p) = (loss2 - loss1) / (2*e);
    perturb(p) = 0;
end

end
% The for-loop visits every parameter in turn and approximates the partial derivative with respect to it.

for p = 1:numel(theta)
    % Set perturbation vector
    perturb(p) = e;

    % J is the function handle to nnCostFunction; compute the cost at θ = theta - perturb
    loss1 = J(theta - perturb);       

    % Compute the cost at θ = theta + perturb
    loss2 = J(theta + perturb);
    % Compute Numerical Gradient

    % Finite-difference formula for the numerical gradient
    numgrad(p) = (loss2 - loss1) / (2*e);
    perturb(p) = 0;
end


 

Backpropagation

 

Sigmoid gradient

The sigmoid gradient is one part of the formula that backpropagation uses to compute the gradient: g'(z) = g(z) .* (1 - g(z)), where g is the sigmoid function.

function g = sigmoidGradient(z)
%SIGMOIDGRADIENT returns the gradient of the sigmoid function
%evaluated at z
%   g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
%   evaluated at z. This should work regardless if z is a matrix or a
%   vector. In particular, if z is a vector or matrix, you should return
%   the gradient for each element.

g = zeros(size(z));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the gradient of the sigmoid function evaluated at
%               each value of z (z can be a matrix, vector or scalar).

g = sigmoid(z).*(1-sigmoid(z));

% =============================================================
end
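As a quick sanity check (the exercise notes that the gradient is exactly 0.25 at z = 0 and close to 0 for large positive or negative z):

g = sigmoidGradient([-1 -0.5 0 0.5 1])
% g is approximately 0.1966  0.2350  0.2500  0.2350  0.1966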



