UFLDL Exercise: Sparse Autoencoder

Code for the Sparse Autoencoder exercise in UFLDL:

Exercise page: http://deeplearning.stanford.edu/wiki/index.php/Exercise:Sparse_Autoencoder

Note: the three files below contain only the code sections that need to be filled in, not the complete contents of each file!

sampleIMAGES.m

for i = 1:numpatches
    % pick one of the 10 images at random
    imageNum = ceil(rand()*10);
    % pick the top-left corner of an 8x8 patch; 505 = 512 - 8 + 1
    x = ceil(rand()*505);
    y = ceil(rand()*505);
    patch = IMAGES(x:x+7,y:y+7,imageNum);
    patches(:,i) = patch(:);
end
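
To sanity-check the sampling, the patches can be visualized before training (a quick sketch, assuming the starter code's display_network.m is on the path; the function and the IMAGES data come from the exercise's provided files):

display_network(patches(:, ceil(rand(100,1)*size(patches,2))));   % show 100 random patches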
sparseAutoencoderCost.m

[n, m] = size(data);            % m = number of training examples
Z2 = W1*data+repmat(b1,1,m);    % [hiddenSize*visibleSize] * [visibleSize*m] = [hiddenSize*m]
A2 = sigmoid(Z2);               % activations of the hidden layer
Z3 = W2*A2+repmat(b2,1,m);      % [visibleSize*hiddenSize] * [hiddenSize*m] = [visibleSize*m]
A3 = sigmoid(Z3);               % activations of the output layer [visibleSize*m]

% squared-error term (use err rather than error, which shadows a built-in)
err = A3 - data;
cost = 1/2*sum(err(:).^2)/m;    % (half) squared error, averaged over the m examples
% add weight decay
cost = cost + lambda/2*(sum(W1(:).^2)+sum(W2(:).^2));
% add sparsity
rho = mean(A2,2);
rho0 = sparsityParam;
% accumulated KL divergence between the computed average activations and the target activation
kl = sum(rho0*log(rho0./rho) + (1-rho0)*log((1-rho0)./(1-rho)));
cost = cost + beta*kl;

% backpropagation: delta3 is the output-layer error term for the squared-error
% cost with a sigmoid output; delta2 also carries the sparsity-penalty derivative
delta3 = -1*(data - A3).*A3.*(1-A3);
delta2 = (W2'*delta3+beta*repmat(-rho0./rho+(1-rho0)./(1-rho),1,m)).*A2.*(1-A2);

% gradients from the error term only (averaged over the m examples)
W2grad = delta3*A2'/m;
b2grad = mean(delta3,2);
W1grad = delta2*data'/m;
b1grad = mean(delta2,2);

% add weight decay term
W2grad = W2grad + lambda*W2;
W1grad = W1grad + lambda*W1;
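
For reference, the cost assembled above is the UFLDL sparse-autoencoder objective: the averaged (half) squared-error term, the weight-decay term, and the KL-divergence sparsity penalty. In LaTeX notation:

J_{sparse}(W,b) = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}\left\|a^{(3)}(x^{(i)})-x^{(i)}\right\|^{2} + \frac{\lambda}{2}\left(\sum_{i,j}\bigl(W^{(1)}_{ij}\bigr)^{2}+\sum_{i,j}\bigl(W^{(2)}_{ij}\bigr)^{2}\right) + \beta\sum_{j=1}^{\mathrm{hiddenSize}}\mathrm{KL}\bigl(\rho\,\big\|\,\hat{\rho}_{j}\bigr),
\qquad \mathrm{KL}\bigl(\rho\,\big\|\,\hat{\rho}_{j}\bigr) = \rho\log\frac{\rho}{\hat{\rho}_{j}} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_{j}}

where \hat{\rho}_{j} is the average activation of hidden unit j over the m examples (rho in the code) and \rho is sparsityParam (rho0 in the code).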
computeNumericalGradient.m

EPSILON = 1e-4;
e = zeros(size(theta));         % i-th standard basis vector, built one entry at a time

for i = 1:size(theta,1)
    e(i) = 1;
    theta2 = theta+EPSILON*e;
    theta1 = theta-EPSILON*e;
    % central-difference approximation of the i-th partial derivative of J
    numgrad(i) = (J(theta2)-J(theta1))/(2*EPSILON);
    e(i) = 0;
end
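
As a usage sketch of the gradient check (assuming grad is the analytic gradient returned by sparseAutoencoderCost and numgrad is the output of this function, as in the exercise's checking script), the relative difference between the two should come out very small:

diff = norm(numgrad-grad)/norm(numgrad+grad);
disp(diff);     % should be tiny (the exercise suggests less than 1e-9)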