Deep Learning by Andrew Ng --- Softmax regression

This is the UFLDL programming exercise.

The cost function and gradient after adding weight decay (Softmax regression has an unusual property: its parameter set is "redundant", i.e. over-parameterized):
  • cost function:
    J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{k} 1\{y^{(i)}=j\}\log\frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^T x^{(i)}}} + \frac{\lambda}{2}\sum_{i=1}^{k}\sum_{j=0}^{n}\theta_{ij}^2
  • gradient:

\nabla_{\theta_j} J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[x^{(i)}\left(1\{y^{(i)}=j\} - p(y^{(i)}=j \mid x^{(i)};\theta)\right)\right] + \lambda\theta_j
p(y^{(i)}=j \mid x^{(i)};\theta) is exactly the hypothesis h computed in step 2 of the UFLDL exercise.
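Before running the optimizer it is worth checking the analytic gradient against a numerical one. The sketch below is a generic centered-difference check, not part of the UFLDL starter code; the helper name checkGradient and the step size EPSILON are my own assumptions (the exercise reuses the computeNumericalGradient you wrote in an earlier exercise for this purpose).

% Minimal centered-difference gradient check (a generic sketch, not the
% exercise's computeNumericalGradient). costFunc returns [cost, grad].
function checkGradient(costFunc, theta)
  EPSILON = 1e-4;                       % assumed perturbation size
  [~, grad] = costFunc(theta);          % analytic gradient
  numgrad = zeros(size(theta));
  for i = 1:numel(theta)
    e = zeros(size(theta)); e(i) = EPSILON;
    numgrad(i) = (costFunc(theta + e) - costFunc(theta - e)) / (2 * EPSILON);
  end
  % The two gradients should agree to many significant digits.
  fprintf('max abs difference: %g\n', max(abs(numgrad - grad)));
end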

Using the bsxfun function:
  • to prevent overflow, simply subtract some large constant value from each of the \theta_j^T x^{(i)} terms before computing the exponential:
    % M is the matrix as described in the text
    M = bsxfun(@minus, M, max(M, [], 1));
  • use the following code to compute the hypothesis (a combined demonstration follows this list):
    % M is the matrix as described in the text
    M = bsxfun(@rdivide, M, sum(M));
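Putting the two bsxfun steps together, here is a small self-contained demonstration of the numerically stable softmax over the columns of a matrix (the values are made up for illustration):

% Stable column-wise softmax demo; one column per example.
M = [1000 2; 1001 4; 1002 6];          % raw scores theta_l' * x(i)
M = bsxfun(@minus, M, max(M, [], 1));  % subtracting the column max prevents overflow in exp
h = exp(M);
h = bsxfun(@rdivide, h, sum(h));       % each column now sums to 1
disp(sum(h))                           % prints 1 1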
Exercise solutions (recommended: complete them yourself first, then compare):
  • softmaxCost.m:
M = theta * data;                      % M(l,i) = theta(l)' * x(i)
M = bsxfun(@minus, M, max(M, [], 1));  % subtract each column's max to prevent overflow
h = exp(M);
h = bsxfun(@rdivide, h, sum(h));       % h(l,i) = p(y(i) = l | x(i); theta)
cost = -1/numCases * sum(sum(groundTruth .* log(h))) + lambda/2 * sum(sum(theta.^2));
thetagrad = -1/numCases * ((groundTruth - h) * data') + lambda * theta;
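These lines assume the variables set up by the UFLDL starter code. As a reminder, the surrounding skeleton looks roughly like this (paraphrased from memory; check the actual softmaxCost.m):

% Rough sketch of the starter-code context around the solution lines.
theta = reshape(theta, numClasses, inputSize);     % unroll vector into a k-by-n matrix
numCases = size(data, 2);                          % m, the number of examples
groundTruth = full(sparse(labels, 1:numCases, 1)); % k-by-m 0/1 indicator of y(i) = j
% ... solution lines above go here ...
grad = thetagrad(:);                               % re-roll the gradient for minFunc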
  • softmaxPredict.m:
[~, pred] = max(theta * data, [], 1);  % max's second output is the row index, i.e. the predicted class
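To see whether the predictions are any good, the exercise driver compares them with the test labels; a minimal sketch (labels and pred as above):

% Accuracy on the test set; the UFLDL text reports roughly 92.6% on MNIST.
acc = mean(labels(:) == pred(:));
fprintf('Accuracy: %0.3f%%\n', acc * 100);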