Coursera Machine Learning Week 4 (Neural Networks) Assignment

Before tackling the assignment, it helps to know some Octave syntax so we can read the starter code.
1. randperm: generate a random permutation

>> A=[2,3,4,5]
A =

   2   3   4   5

>> rand_indices=randperm(length(A))
rand_indices =

   1   4   2   3

>> A(:,rand_indices(1:3))
ans =

   2   5   3
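For comparison, the same shuffle-and-select pattern can be sketched in NumPy (a hypothetical translation, not part of the assignment; note that NumPy indices start at 0):

```python
import numpy as np

# np.random.permutation plays the role of Octave's randperm
A = np.array([2, 3, 4, 5])
rand_indices = np.random.permutation(len(A))  # a random ordering of 0..3
first_three = A[rand_indices[:3]]             # three random elements of A
```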

2. Plotting
hist(): draw a histogram;
plot(x, y): draw a 2-D curve;
xlabel()/ylabel(): label the x and y axes;
legend(): add a legend;
title(): add a title to the figure;
axis(): change the axis ranges;
colorbar: add a color scale bar;
hold on: keep drawing into the current figure;
close: close the figure window;
clf: clear the current figure;
figure(1): open figure window number 1;
subplot(1, 2, 1): split the window into two panes and select the first;
print -dpng 'picName.png': save the figure to a file;
colormap: set or query the current colormap.
Usage
A colormap is an m×3 matrix of real numbers between 0 and 1.0; each row is an RGB vector defining one color. Row k of the colormap defines the k-th color, where map(k,:) = [r(k) g(k) b(k)] gives the red, green, and blue intensities.
(1) colormap(map)
Sets the colormap to the matrix map. If any value in map lies outside the interval [0,1], MATLAB returns the error: Colormap must have values in [0,1].
(2) colormap('default')
Resets the current colormap to the default.
(3) cmap = colormap
Returns the current colormap; all returned values lie in [0,1].
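The colormap layout described above can be sketched in NumPy (a minimal example; the 8-step grayscale ramp is an arbitrary choice for illustration):

```python
import numpy as np

# A colormap is an m-by-3 matrix; row k is the [r, g, b] triple of color k,
# with every entry in [0, 1]. Here: an 8-step grayscale ramp.
m = 8
levels = np.linspace(0.0, 1.0, m)
cmap = np.column_stack([levels, levels, levels])  # shape (m, 3)

# MATLAB/Octave reject out-of-range colormaps; the same validation:
if cmap.min() < 0 or cmap.max() > 1:
    raise ValueError("Colormap must have values in [0,1]")
```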

3. Row and column maxima
The max function
When A is a column vector, max(A) returns a single maximum value, so there is little to add.
When A is an m×n matrix, there are several forms:
① C = max(max(A)) returns the largest entry of the whole matrix;
② D = max(A, [], 1) returns the maximum of each column, as a row vector;
③ E = max(A, [], 2) returns the maximum of each row, as a column vector.

>> A=[1,2;3,4]
A =

   1   2
   3   4

>> max(A)   % the max value of each column
ans =

   3   4

>> max(A,[],1) % the max value of each column
ans =

   3   4
>> max(A,[],2) %the max value of each row
ans =

   2
   4
>> max(max(A))
ans =  4
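The three max patterns map directly onto NumPy's axis argument (a side-by-side sketch, not part of the assignment):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

overall = A.max()        # like max(max(A)): the single largest entry
col_max = A.max(axis=0)  # like max(A, [], 1): max of each column
row_max = A.max(axis=1)  # like max(A, [], 2): max of each row
```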

Now let's look at the assignment itself.
The assignment provides a dataset in .mat format: a 5000×400 matrix of handwritten digits, i.e. 5000 examples of 20×20 grayscale images, with each pixel intensity stored as a floating-point number.
The second part of the dataset is a 5000-dimensional label vector y: the digit 0 is labeled 10, and digits 1–9 are labeled 1–9.
The load command imports the file directly into Octave; we then pick 100 random rows and display them.
[figure: 100 randomly selected digit images from the training set]
1. lrCostFunction.m
After regularization, the vectorized cost function and gradient are (writing h = sigmoid(X·θ), and θ_reg for θ with θ₀ set to 0):

J(θ) = (1/m)·(−yᵀ·log(h) − (1−y)ᵀ·log(1−h)) + (λ/(2m))·θ_regᵀ·θ_reg

∂J/∂θ = (1/m)·Xᵀ·(h − y) + (λ/m)·θ_reg    (the bias term θ₀ is not regularized)

function [J, grad] = lrCostFunction(theta, X, y, lambda)
%LRCOSTFUNCTION Compute cost and gradient for logistic regression with 
%regularization
%   J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. to the parameters. 

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly 
J = 0;
grad = zeros(size(theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Hint: The computation of the cost function and gradients can be
%       efficiently vectorized. For example, consider the computation
%
%           sigmoid(X * theta)
%
%       Each row of the resulting matrix will contain the value of the
%       prediction for that example. You can make use of this to vectorize
%       the cost function and gradient computations. 
%
% Hint: When computing the gradient of the regularized cost function, 
%       there're many possible vectorized solutions, but one solution
%       looks like:
%           grad = (unregularized gradient for logistic regression)
%           temp = theta; 
%           temp(1) = 0;   % because we don't add anything for j = 0  
%           grad = grad + YOUR_CODE_HERE (using the temp variable)
%
h = sigmoid(X * theta);                 % predictions for every example
theta_reg = [0; theta(2:end)];          % theta with theta_0 zeroed out
J = (-y' * log(h) - (1 - y)' * log(1 - h)) / m ...
    + lambda * (theta_reg' * theta_reg) / (2 * m);
grad = X' * (h - y) / m + lambda * theta_reg / m;

% =============================================================
% unroll the matrix into a column vector
grad = grad(:);

end
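The same vectorized cost and gradient can be sketched in NumPy (a hypothetical translation of the Octave solution above; `lr_cost_function` and its argument names are chosen here, not course code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lr_cost_function(theta, X, y, lam):
    """Regularized logistic-regression cost and gradient, vectorized."""
    m = len(y)
    h = sigmoid(X @ theta)                          # h_theta(x) for every example
    theta_reg = np.concatenate([[0.0], theta[1:]])  # theta with theta_0 zeroed out
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m \
        + lam * (theta_reg @ theta_reg) / (2 * m)
    grad = X.T @ (h - y) / m + lam * theta_reg / m
    return J, grad
```

With theta = 0 every prediction is 0.5, so the cost is exactly log 2 regardless of the data, which makes a handy sanity check.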

2. oneVsAll(X, y, num_labels, lambda): the parameter num_labels is the number of labels, i.e. the number of logistic regression classifiers; the i-th row of the returned matrix all_theta is the classifier for label i.

function [all_theta] = oneVsAll(X, y, num_labels, lambda)
%ONEVSALL trains multiple logistic regression classifiers and returns all
%the classifiers in a matrix all_theta, where the i-th row of all_theta 
%corresponds to the classifier for label i
%   [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels
%   logistic regression classifiers and returns each of these classifiers
%   in a matrix all_theta, where the i-th row of all_theta corresponds 
%   to the classifier for label i

% Some useful variables
m = size(X, 1);
n = size(X, 2);

% You need to return the following variables correctly 
all_theta = zeros(num_labels, n + 1);

% Add ones to the X data matrix
% ones(m, n) creates an m-by-n matrix of all ones
% [A B] (or [A, B]) concatenates horizontally, placing A's columns before B's;
% [A; B] stacks vertically
X = [ones(m, 1) X];

% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the following code to train num_labels
%               logistic regression classifiers with regularization
%               parameter lambda. 
%
% Hint: theta(:) will return a column vector.
%
% Hint: You can use y == c to obtain a vector of 1's and 0's that tell you
%       whether the ground truth is true/false for this class.
%
% Note: For this assignment, we recommend using fmincg to optimize the cost
%       function. It is okay to use a for-loop (for c = 1:num_labels) to
%       loop over the different classes.
%
%       fmincg works similarly to fminunc, but is more efficient when we
%       are dealing with large number of parameters.
%
% Example Code for fmincg:
%
%     % Set Initial theta
%     initial_theta = zeros(n + 1, 1);
%     
%     % Set options for fminunc
%     options = optimset('GradObj', 'on', 'MaxIter', 50);
% 
%     % Run fmincg to obtain the optimal theta
%     % This function will return theta and the cost 
%     [theta] = ...
%         fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
%                 initial_theta, options);
%
initial_theta = zeros(n + 1, 1);
options = optimset('GradObj', 'on', 'MaxIter', 50);
for c = 1:num_labels
  all_theta(c, :) = fmincg(@(t)(lrCostFunction(t, X, (y == c), lambda)), initial_theta, options);
end

% =========================================================================


end
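The whole one-vs-all training loop can also be sketched in NumPy; plain gradient descent stands in for fmincg here, and `one_vs_all` plus its toy hyperparameters are names chosen for this sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_vs_all(X, y, num_labels, lam, iters=500, alpha=0.5):
    """Train one regularized logistic classifier per label (1..num_labels)."""
    m, n = X.shape
    Xb = np.hstack([np.ones((m, 1)), X])   # add the bias column
    all_theta = np.zeros((num_labels, n + 1))
    for c in range(1, num_labels + 1):
        yc = (y == c).astype(float)        # the y == c trick: 1/0 labels
        theta = np.zeros(n + 1)
        for _ in range(iters):             # gradient descent in place of fmincg
            h = sigmoid(Xb @ theta)
            reg = np.concatenate([[0.0], theta[1:]])
            theta -= alpha * (Xb.T @ (h - yc) / m + lam * reg / m)
        all_theta[c - 1] = theta
    return all_theta
```

fmincg converges far faster on the 5000×401 assignment data; plain gradient descent is used here only to keep the sketch dependency-free.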

3. predictOneVsAll(all_theta, X): make predictions with the one-vs-all logistic regression classifiers.

function p = predictOneVsAll(all_theta, X)
%PREDICT Predict the label for a trained one-vs-all classifier. The labels 
%are in the range 1..K, where K = size(all_theta, 1). 
%  p = PREDICTONEVSALL(all_theta, X) will return a vector of predictions
%  for each example in the matrix X. Note that X contains the examples in
%  rows. all_theta is a matrix where the i-th row is a trained logistic
%  regression theta vector for the i-th class. You should set p to a vector
%  of values from 1..K (e.g., p = [1; 3; 1; 2] predicts classes 1, 3, 1, 2
%  for 4 examples) 

m = size(X, 1);
num_labels = size(all_theta, 1);

% You need to return the following variables correctly 
p = zeros(size(X, 1), 1);

% Add ones to the X data matrix
X = [ones(m, 1) X];

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
%               your learned logistic regression parameters (one-vs-all).
%               You should set p to a vector of predictions (from 1 to
%               num_labels).
%
% Hint: This code can be done all vectorized using the max function.
%       In particular, the max function can also return the index of the 
%       max element, for more information see 'help max'. If your examples 
%       are in rows, then, you can use max(A, [], 2) to obtain the max 
%       for each row.
%       

% one score per class for each example; the prediction is the
% class whose score is largest
scores = X * all_theta';
[~, p] = max(scores, [], 2);
% =========================================================================


end
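The prediction step reduces to one matrix product plus a per-row argmax; a NumPy sketch with made-up two-class parameters:

```python
import numpy as np

all_theta = np.array([[1.0, -2.0],    # made-up classifier for label 1
                      [-1.0, 2.0]])   # made-up classifier for label 2
X = np.array([[0.0], [2.0]])          # two one-feature examples
Xb = np.hstack([np.ones((2, 1)), X])  # add the bias column

scores = Xb @ all_theta.T             # one score per class per example
p = scores.argmax(axis=1) + 1         # +1: labels run 1..K, argmax is 0-based
```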

4. predict.m: use the pre-trained weights to make predictions with the neural network.
The network has 3 layers, with 25 units in the hidden layer,
so Theta1 is 25×401 and Theta2 is 10×26.
Remember to prepend the bias term x0 to x(i) when computing a(2).
The answer is the index of the largest of the K elements of hθ(x).

function p = predict(Theta1, Theta2, X)
%PREDICT Predict the label of an input given a trained neural network
%   p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
%   trained weights of a neural network (Theta1, Theta2)

% Useful values
m = size(X, 1);
num_labels = size(Theta2, 1);

% You need to return the following variables correctly 
p = zeros(size(X, 1), 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
%               your learned neural network. You should set p to a 
%               vector containing labels between 1 to num_labels.
%
% Hint: The max function might come in useful. In particular, the max
%       function can also return the index of the max element, for more
%       information see 'help max'. If your examples are in rows, then, you
%       can use max(A, [], 2) to obtain the max for each row.
%

X = [ones(m, 1) X];                          % add the bias unit x0
layer2 = sigmoid(X * Theta1');               % hidden-layer activations a(2)
layer2 = [ones(size(layer2, 1), 1) layer2];  % add the hidden-layer bias unit
layer3 = sigmoid(layer2 * Theta2');          % output h_theta(x)
[~, p] = max(layer3, [], 2);                 % predicted label = index of largest output

% =========================================================================


end
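The feedforward pass mirrors the Octave code above; a NumPy sketch with hand-picked tiny weights (one input, one hidden unit, two output classes, all values invented for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(Theta1, Theta2, X):
    """Two-layer feedforward prediction; returns labels in 1..K."""
    m = X.shape[0]
    a1 = np.hstack([np.ones((m, 1)), X])   # input layer + bias x0
    a2 = sigmoid(a1 @ Theta1.T)            # hidden-layer activations
    a2 = np.hstack([np.ones((m, 1)), a2])  # hidden layer + bias
    a3 = sigmoid(a2 @ Theta2.T)            # outputs h_theta(x), one column per class
    return a3.argmax(axis=1) + 1           # index of the largest output, 1-based

# Hand-picked weights: the hidden unit fires for positive inputs;
# class 1 fires when the hidden unit does, class 2 when it does not.
Theta1 = np.array([[0.0, 5.0]])            # 1 hidden unit, 1 input + bias
Theta2 = np.array([[-5.0, 10.0],
                   [5.0, -10.0]])          # 2 classes, 1 hidden unit + bias
p = predict(Theta1, Theta2, np.array([[1.0], [-1.0]]))
```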

Notes:
1) Any boolean function can be represented exactly by a network with two layers of units, but the number of hidden units required may grow exponentially with the number of inputs;
2) Any continuous function can be approximated to arbitrary accuracy by a two-layer network, meaning a hidden layer of sigmoid units and an output layer of unthresholded linear units;
3) Any function can be approximated to arbitrary accuracy by a three-layer network, with two hidden layers of sigmoid units and an unthresholded linear output layer.

Reference: https://blog.csdn.net/icecutie/article/details/51046035
