Andrew Ng's Machine Learning Online Course -- [Exercise 3] Completion and Summary -- Including Complete Code

> Link to Andrew Ng's Machine Learning course
> Link to the course summary and notes
The starter code and data for Exercise 3 can be downloaded from the course page, under Lesson 67, Chapter 9 programming assignment.

This exercise reuses the regularized logistic regression parameter optimization from Exercise 2, with the focus on applying it to multi-class classification. The one-vs-all scheme trains a separate binary ("is this class / not this class") classifier for each class and combines their outputs to pick the final label (a small sketch of that decision rule follows below).
Environment: Matlab R2018b / Octave
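As a quick illustration of the one-vs-all decision rule: after a parameter vector has been trained for each class, a new example is assigned to the class whose classifier is most confident. A minimal Octave sketch with hypothetical variable names, assuming all_theta holds one trained parameter row per class and x is a single example with the intercept term prepended:

probs = sigmoid(all_theta * x);      % probability of x belonging to each class
[~, predicted_label] = max(probs);   % pick the most confident classifier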

One-vs-all

Part 1: Loading and Visualizing Data

The training set contains 5000 examples, each with 400 features, and there are 10 classes in total.
Run result: (figure showing a random sample of the training examples)
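The provided ex3.m script performs this step; roughly, the loading and visualization look like the following sketch, assuming the data file ex3data1.mat and the displayData.m helper supplied with the assignment:

load('ex3data1.mat');              % loads X (5000 x 400) and y (5000 x 1)
m = size(X, 1);
rand_indices = randperm(m);        % select 100 random examples to display
displayData(X(rand_indices(1:100), :));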

Part 2a: Vectorize Logistic Regression

lrCostFunction.m
Computes the cost and gradient for logistic regression with a regularization term, the same as in Exercise 2.
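For reference, the regularized cost function and gradient that this routine computes are the standard ones from the course (note that $\theta_0$ is not regularized):

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\log\big(h_\theta(x^{(i)})\big) - \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_j^{(i)} + \frac{\lambda}{m}\theta_j \quad (j \ge 1), \qquad h_\theta(x) = \frac{1}{1+e^{-\theta^{T}x}}$$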

function [J, grad] = lrCostFunction(theta, X, y, lambda)
%LRCOSTFUNCTION Compute cost and gradient for logistic regression with 
%regularization
%   J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. to the parameters. 

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly 
J = 0;
grad = zeros(size(theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Hint: The computation of the cost function and gradients can be
%       efficiently vectorized. For example, consider the computation
%
%           sigmoid(X * theta)
%
%       Each row of the resulting matrix will contain the value of the
%       prediction for that example. You can make use of this to vectorize
%       the cost function and gradient computations. 
%
% Hint: When computing the gradient of the regularized cost function, 
%       there're many possible vectorized solutions, but one solution
%       looks like:
%           grad = (unregularized gradient for logistic regression)
%           temp = theta; 
%           temp(1) = 0;   % because we don't add anything for j = 0  
%           grad = grad + YOUR_CODE_HERE (using the temp variable)
%

% Split the training examples by label so that the two terms of the cost
% can be summed separately (y is a 0/1 vector here).
pos = y == 1;
neg = y == 0;

% Cost contribution of the positive examples: -log(h)
h_pos = sigmoid(X(pos, :) * theta);
J_pos = sum(-log(h_pos));

% Cost contribution of the negative examples: -log(1 - h)
h_neg = sigmoid(X(neg, :) * theta);
J_neg = sum(-log(1 - h_neg));

% Regularization term; theta(1) is not regularized
J_reg = lambda/2 * sum(theta(2:end, :) .^ 2);
J = (J_pos + J_neg + J_reg)/m;

% Unregularized gradient, X' * (h - y) / m, written with implicit expansion
grad = (sum(X .* (sigmoid(X * theta) - y)))' / m;
% Add the regularization term to every component except theta(1)
grad_reg = ((lambda * theta(2:end, :)) / m);
grad(2:end, :) = grad(2:end, :) + grad_reg;

% =============================================================

grad = grad(:);

end
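As the hints in the header suggest, the same computation can be written in a fully vectorized form without splitting positive and negative examples. A small sketch equivalent to the code above (one possible variant, not the author's solution):

h = sigmoid(X * theta);                       % m x 1 vector of predictions
temp = theta;
temp(1) = 0;                                  % theta(1) is not regularized
J = (-y' * log(h) - (1 - y)' * log(1 - h)) / m + lambda / (2 * m) * sum(temp .^ 2);
grad = (X' * (h - y)) / m + (lambda / m) * temp;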

Run result:

Testing lrCostFunction() with regularization
Cost: 2.534819
Expected cost: 2.534819
Gradients:
 0.146561 
 -0.548558 
 0.724722 
 1.398003 
Expected gradients:
 0.146561
 -0.548558
 0.724722
 1.398003
Program paused. Press enter to continue.

Part 2b: One-vs-All Training

oneVsAll.m
The optimal parameters for all ten classes are computed by minimizing the cost function.
The resulting parameter matrix has 10 rows and 401 columns (one row per class; 400 pixel features plus the intercept term).

function [all_theta] = oneVsAll(X, y, num_labels, lambda)
%ONEVSALL trains multiple logistic regression classifiers and returns all
%the classifiers in a matrix all_theta, where the i-th row of all_theta
%corresponds to the classifier for label i
%   [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels
%   logistic regression classifiers and returns each of these classifiers
%   in a matrix all_theta, where the i-th row of all_theta corresponds
%   to the classifier for label i
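
A minimal sketch of how the training loop can be written, assuming the fmincg.m optimizer supplied with the assignment is on the path (one standard way to complete the function, not necessarily the author's exact code):

m = size(X, 1);
n = size(X, 2);
all_theta = zeros(num_labels, n + 1);
X = [ones(m, 1) X];                          % add the intercept column

options = optimset('GradObj', 'on', 'MaxIter', 50);
for c = 1:num_labels
    initial_theta = zeros(n + 1, 1);
    % Train a binary classifier for class c, treating y == c as the positive class
    theta = fmincg(@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
                   initial_theta, options);
    all_theta(c, :) = theta';                % store as the c-th row
end

end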