UFLDL is an early deep learning tutorial written by Andrew Ng's team. Its rhythm of theory followed by exercises is excellent: you always want to finish the theory quickly and get hands-on with the exercise, because the code scaffolding is already built for you, with detailed comments, so we only need to implement a small amount of core code. Easy to pick up!
I couldn't find the Chinese translation of this part in the new version, -_-. This is the second-to-last exercise, so it's almost done!
Section 10: RICA (Reconstruction Independent Component Analysis)
The tutorial says RICA was introduced to overcome ICA's orthogonality constraint; its penalty term is not as "strict" as ICA's. The loss function to minimize becomes:
\min _{W} \quad \lambda\|W x\|_{1}+\frac{1}{2}\left\|W^{T} W x-x\right\|_{2}^{2}
The first term is the sparsity penalty; the second still measures the difference between the reconstruction and the original input.
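To make the objective concrete, here is a minimal NumPy sketch of this cost (the function name `rica_cost`, the toy shapes, and the smoothed L1 with a small `eps` are my own choices; the exercise itself is in MATLAB):

```python
import numpy as np

def rica_cost(W, x, lam=0.1, eps=1e-2):
    # W: (numFeatures, n) weight matrix; x: (n, m), one example per column.
    Wx = W @ x
    # Smoothed L1 penalty: lam * sum(sqrt((Wx)^2 + eps)).
    penalty = lam * np.sum(np.sqrt(Wx ** 2 + eps))
    # Reconstruction term: 0.5 * ||W^T W x - x||^2.
    recon = 0.5 * np.sum((W.T @ Wx - x) ** 2)
    return penalty + recon

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 8))
x = rng.standard_normal((8, 20))
print(rica_cost(W, x))
```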
Now we compute the gradient. The loss function has two terms, so we need the gradient of each with respect to the parameter W. First look at \left\|W^{T} W x-x\right\|_{2}^{2}; from the figure below we can work out \nabla_{W}\left\|W^{T} W x-x\right\|_{2}^{2} step by step:
Here is the derivation from my notes:
Once we have the error \delta and the input to that layer, we can derive the gradient:
\nabla_{W} F =\nabla_{W} F + (\nabla_{W^{T}} F)^{T} = (W)(2(W^{T}Wx -x)) x^{T} + 2(Wx)(W^{T}Wx - x)^{T}
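One can verify this matrix-calculus result numerically against central finite differences. A small NumPy sketch (the helper names are mine; F here is the reconstruction term without the 1/2 factor, matching the formula above):

```python
import numpy as np

def recon_grad(W, x):
    # Analytic gradient of F(W) = ||W^T W x - x||_2^2:
    # 2 W (W^T W x - x) x^T + 2 (W x)(W^T W x - x)^T
    r = W.T @ W @ x - x
    return 2 * W @ r @ x.T + 2 * (W @ x) @ r.T

def numeric_grad(f, W, h=1e-6):
    # Central finite differences, one entry of W at a time.
    g = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += h
            Wm[i, j] -= h
            g[i, j] = (f(Wp) - f(Wm)) / (2 * h)
    return g

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 6))
x = rng.standard_normal((6, 10))
F = lambda W: np.sum((W.T @ W @ x - x) ** 2)
diff = np.max(np.abs(numeric_grad(F, W) - recon_grad(W, x)))
print(diff)  # small, on the order of the finite-difference error
```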
For the term \lambda\|W x\|_{1}, we use f(x) = \sqrt{x^2 + \epsilon} to approximate the 1-norm. Its gradient with respect to W can also be derived; the derivation is in my notes (the conclusion is what is marked in the red box below):
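The gradient of the smoothed penalty works out to \lambda (Wx / \sqrt{(Wx)^2 + \epsilon}) x^{T}, which is exactly what the cost code below uses. A quick NumPy check of that conclusion (function names and toy sizes are mine):

```python
import numpy as np

def smooth_l1_grad(W, x, lam=0.1, eps=1e-2):
    # Gradient of lam * sum(sqrt((Wx)^2 + eps)) with respect to W:
    # lam * (Wx / sqrt((Wx)^2 + eps)) x^T
    Wx = W @ x
    return lam * (Wx / np.sqrt(Wx ** 2 + eps)) @ x.T

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 6))
x = rng.standard_normal((6, 10))
g = lambda W: 0.1 * np.sum(np.sqrt((W @ x) ** 2 + 1e-2))

# Compare against central finite differences over every entry of W.
num = np.zeros_like(W)
h = 1e-6
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += h
        Wm[i, j] -= h
        num[i, j] = (g(Wp) - g(Wm)) / (2 * h)
print(np.max(np.abs(num - smooth_l1_grad(W, x))))
```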
Now for the code:
zca2.m
function [Z] = zca2(x)
epsilon = 1e-4;
% You should be able to use the code from your PCA/ZCA exercise
% Retain all of the components from the ZCA transform (i.e. do not do
% dimensionality reduction)
% x is the input patch data of size 81*10000
% Z is the ZCA-transformed data; the dimension of Z equals that of x.
%%% YOUR CODE HERE %%%
avg = mean(x, 1); % Compute the mean pixel intensity value separately for each patch.
x = x - repmat(avg, size(x, 1), 1);
sigma = x * x' / size(x, 2); % covariance matrix
[U,S,~] = svd(sigma);
xRot = U' * x; % rotated version of the data.
Z = U * diag(1./sqrt(diag(S) + epsilon)) * xRot;
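As a sanity check on the whitening step, here is a NumPy sketch of the same transform (the names are mine). After ZCA with a small epsilon, the covariance of Z should be close to the identity, except along the per-patch-mean direction removed by the centering:

```python
import numpy as np

def zca_whiten(x, epsilon=1e-4):
    # Mirrors zca2.m: x holds one patch per column.
    x = x - x.mean(axis=0)          # remove each patch's mean
    sigma = x @ x.T / x.shape[1]    # feature covariance matrix
    U, S, _ = np.linalg.svd(sigma)
    # Rotate, rescale each component by 1/sqrt(S + eps), rotate back.
    return U @ np.diag(1.0 / np.sqrt(S + epsilon)) @ U.T @ x

rng = np.random.default_rng(3)
x = rng.standard_normal((9, 5000))
z = zca_whiten(x)
cov = z @ z.T / z.shape[1]
evals = np.sort(np.linalg.eigvalsh(cov))
print(evals)  # one eigenvalue ~0 (the removed mean direction), the rest ~1
```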
softICACost.m
%% Your job is to implement the RICA cost and gradient
function [cost,grad] = softICACost(theta, x, params)
% unpack weight matrix
W = reshape(theta, params.numFeatures, params.n); % 50*81
% project weights to norm ball (prevents degenerate bases)
Wold = W;
W = l2rowscaled(W, 1);
%%% YOUR CODE HERE %%%
% Compute the RICA cost and its gradient with respect to W.
lambda = 0.1;   % value that worked well in my experiments
epsilon = 1e-2; % smoothing constant for the L1 approximation
cost = lambda * sum(sum(sqrt((W*x).^2 + epsilon))) + sum(sum((W'*W*x - x).^2)) / 2;
Wgrad = lambda * ((W*x) ./ sqrt((W*x).^2 + epsilon)) * x' ...
        + W * (W'*W*x - x) * x' + (W*x) * (W'*W*x - x)';
% unproject gradient for minFunc
grad = l2rowscaledg(Wold, W, Wgrad, 1);
grad = grad(:);
Results:
Run runSoftICA.m.
Visualizing the weight matrix, you can see that it has learned edge detectors.
If I have misunderstood anything, please point it out; if you have better ideas, feel free to discuss in the comments below!