Andrew Ng Machine Learning ex4

sigmoidGradient.m

t = sigmoid(z);
g = t .* (1 - t);   % g'(z) = g(z) .* (1 - g(z)); note `.-` is deprecated in Octave
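The identity g'(z) = g(z)(1 - g(z)) is easy to sanity-check numerically. Here is a minimal NumPy translation of the snippet above (the function names are my own), compared against a central finite difference:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_gradient(z):
    # g'(z) = g(z) * (1 - g(z))
    t = sigmoid(z)
    return t * (1 - t)

# Numerical check against a central finite difference
z = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(np.max(np.abs(numeric - sigmoid_gradient(z))))
```

The maximum discrepancy should be on the order of 1e-10 or smaller, and the gradient peaks at 0.25 when z = 0.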

randInitializeWeights.m

% Randomly initialize the weights to small values
epsilon_init = 0.12;
W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
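The point of random initialization is symmetry breaking: if all weights started at the same value, every hidden unit would compute the same function and receive the same gradient. A NumPy sketch of the same idea (function name is my own):

```python
import numpy as np

def rand_initialize_weights(l_in, l_out, epsilon_init=0.12):
    # Uniform samples in [-epsilon_init, epsilon_init]; small values keep the
    # sigmoid in its responsive range, and randomness breaks symmetry.
    # The "+ 1" column corresponds to the bias unit.
    return np.random.rand(l_out, 1 + l_in) * 2 * epsilon_init - epsilon_init

# For ex4's architecture: 400 inputs, 25 hidden units
W = rand_initialize_weights(400, 25)
print(W.shape)  # (25, 401)
```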

nnCostFunction.m

%=========================== forward propagation ========
X = [ones(m, 1) X];          % add bias column; 5000 x 401
z1 = X * Theta1';
a1 = sigmoid(z1);
a1 = [ones(m, 1) a1];        % add bias column; 5000 x 26
z2 = a1 * Theta2';
h = sigmoid(z2);             % h equals a2, the output-layer activation
%=========================== end of forward propagation ==
log_h = log(h);
log_1_h = log(1 - h);
delta_3 = h; % 5000 x 10; turned into h - Y inside the loop below
% y holds class labels (1..num_labels) rather than one-hot vectors,
% so we do the mapping here while accumulating the cost.
for i = 1:m
  temp = log_1_h(i, :);
  temp(y(i)) = log_h(i, y(i));
  delta_3(i, y(i)) = h(i, y(i)) - 1;
  J += sum(temp);
end
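The per-example loop above can also be vectorized by building a one-hot label matrix Y, after which the cost is a single sum and delta_3 is simply h - Y. A hedged NumPy sketch of the unregularized cost (function name and variables are my own):

```python
import numpy as np

def unregularized_cost(h, y, num_labels):
    # h: (m, num_labels) predicted probabilities in (0, 1)
    # y: (m,) integer class labels in 1..num_labels
    m = h.shape[0]
    Y = np.eye(num_labels)[y - 1]       # one-hot rows, shape (m, num_labels)
    # Cross-entropy over all examples and all output units
    return -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h)) / m

# Tiny example: two samples, three classes
h = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1]])
y = np.array([1, 2])
cost = unregularized_cost(h, y, 3)
```

With Y in hand, `delta_3 = h - Y` replaces the element-wise updates in the loop.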
%The bias units don't take part in the computation of delta
delta_2 = delta_3 * Theta2(:,2 : hidden_layer_size + 1) .* sigmoidGradient(z1); %5000 X 25
Theta1(:, 1) = 0; % zero the bias columns: bias weights are not regularized
Theta2(:, 1) = 0;
J -= (sum(sum(Theta1 .^ 2)) + sum(sum(Theta2 .^ 2))) * lambda / 2;
J = -J / m; % negate and average; the regularization term becomes +lambda/(2m) * sum
%=========================== back propagation ============
% Because the bias columns of Theta1/Theta2 were zeroed above,
% the lambda term automatically skips the bias weights.
Theta2_grad = (delta_3' * a1 + lambda * Theta2) / m;
Theta1_grad = (delta_2' * X + lambda * Theta1) / m;
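Ex4 recommends validating backpropagation with numerical gradient checking: perturb each parameter by a small epsilon and compare the central-difference slope against the analytic gradient. A self-contained NumPy sketch of that check (the helper name is my own, demonstrated on a toy cost rather than the full network):

```python
import numpy as np

def numerical_gradient(cost_fn, theta, eps=1e-4):
    # Central-difference approximation of dJ/dtheta, one component at a time.
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step.flat[i] = eps
        grad.flat[i] = (cost_fn(theta + step) - cost_fn(theta - step)) / (2 * eps)
    return grad

# Toy example: J(theta) = sum(theta^2) has gradient 2*theta
theta = np.array([1.0, -2.0, 3.0])
approx = numerical_gradient(lambda t: np.sum(t ** 2), theta)
print(approx)  # close to [2., -4., 6.]
```

In practice you run this once on a small network to confirm the backprop gradients agree to several decimal places, then disable it, since it is far too slow for training.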

These are my own implementations of the programming assignments from Andrew Ng's machine learning course. If you have a better approach, feel free to discuss it in the comments.

This is only part of the code; the complete code is available at

https://download.csdn.net/download/ti_an_di/10590380
