Andrew Ng Machine Learning ex3

lrCostFunction.m

h = sigmoid(X * theta);
theta(1) = 0;  % Same as costFunctionReg.m from ex2 except this line:
               % the bias parameter theta(1) must not be regularized, so we zero it out
               % here so that it drops out of the regularization terms below.
grad = (X' * (h - y) + lambda * theta) / m;
J = -(y' * log(h) + (1 - y') * log(1 - h)) / m + lambda / (2 * m) * sum(theta .^ 2);
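
Before wiring this into oneVsAll, I find it helpful to call the function once on a tiny hand-made example, just to check that the cost and gradient come back with sensible values and shapes. The numbers below are made up for illustration and are not part of the exercise:

theta_t = [-2; -1; 1; 2];
X_t = [ones(5, 1) reshape(1:15, 5, 3) / 10];   % 5 examples: bias column plus 3 features
y_t = [1; 0; 1; 0; 1];
lambda_t = 3;
[J_t, grad_t] = lrCostFunction(theta_t, X_t, y_t, lambda_t);   % assumes sigmoid.m is on the path
fprintf('J = %f\n', J_t);
fprintf('grad = %f\n', grad_t);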


predictOneVsAll.m

p = X * all_theta';        % no sigmoid needed here: it is monotonic, so the argmax is unchanged
[~, p] = max(p, [], 2);
% max(p, [], 2) returns two outputs: the first is the maximum of each row,
% the second is the column index at which that maximum occurs, i.e. the predicted class label.
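
A quick toy illustration of those two outputs (the matrix is made up, not from the dataset):

A = [0.1 0.9 0.3;
     0.7 0.2 0.6];
[vals, idx] = max(A, [], 2);
% vals is [0.9; 0.7], the row-wise maxima;
% idx is [2; 1], the column of each maximum -- in predictOneVsAll that column
% number is used directly as the predicted class label.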


 


oneVsAll.m

options = optimset('GradObj', 'on', 'MaxIter', 50);
for c = 1:num_labels
  initial_theta = zeros(n + 1, 1);
  [theta, J, exitflag] = ...
    fmincg(@(t)(lrCostFunction(t, X, (y == c), lambda)), ...  % One binary classifier per class: examples of digit c become the positive class, all others negative.
           initial_theta, options);
  all_theta(c, :) = theta';   % Store the parameters learned for class c as row c.
end
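
The (y == c) expression is what turns the single multi-class problem into num_labels separate binary problems. A tiny made-up example of what it produces:

y_demo = [3; 1; 3; 2];   % made-up labels, not from the dataset
c = 3;
y_demo == c              % gives [1; 0; 1; 0]: digit c is the positive class, everything else is 0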


predict.m

X = [ones(m, 1) X];                  % add the bias column to the input layer
a = sigmoid(Theta1 * X');            % hidden-layer activations, one column per example
a = [ones(1, size(a, 2)); a];        % add the bias unit to the hidden layer
[~, p] = max(a' * Theta2', [], 2);   % sigmoid is monotonic, so the argmax of the output-layer z already picks the right class
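
For reference, this is roughly how the function gets exercised, assuming the course files ex3data1.mat and ex3weights.mat are in the working directory (as in ex3_nn.m):

load('ex3data1.mat');    % provides X and y
load('ex3weights.mat');  % provides the pre-trained Theta1 and Theta2
pred = predict(Theta1, Theta2, X);
fprintf('Training set accuracy: %f\n', mean(double(pred == y)) * 100);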

 

These are just the implementations I came up with for the programming assignments of Andrew Ng's Machine Learning course. If you have a better approach, you are welcome to discuss it in the comments.

This post shows only part of the code; the complete code is available at

https://download.csdn.net/download/ti_an_di/10590380

 
