[Reading Notes 1] [2017] MATLAB Deep Learning - Example: MNIST (4)

The variable y in this code is the final output of the network.

x  = X(:, :, k);          % Input,            28x28
y1 = Conv(x, W1);         % Convolution,   20x20x20
y2 = ReLU(y1);            %
y3 = Pool(y2);            % Pool,          10x10x20
y4 = reshape(y3, [], 1);  %                    2000
v5 = W5*y4;               % ReLU,               360
y5 = ReLU(v5);            %
v  = Wo*y5;               % Softmax,             10
y  = Softmax(v);          %
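Here Conv, ReLU, Pool, and Softmax are helper functions defined in separate files of the book's example code; their implementations are not shown in this excerpt. As a rough reference only, minimal sketches consistent with the dimensions in the comments above (9×9 filters, 2×2 mean pooling), each in its own .m file, could look like this; the book's actual files may differ in detail:

function y = ReLU(x)
    y = max(x, 0);                           % Element-wise rectifier
end

function y = Softmax(x)
    ex = exp(x - max(x));                    % Shift for numerical stability
    y  = ex / sum(ex);
end

function y = Pool(x)
    % 2x2 mean pooling per channel: 20x20x20 -> 10x10x20
    [xrow, xcol, numFilters] = size(x);
    y = zeros(xrow/2, xcol/2, numFilters);
    for k = 1:numFilters
        avg        = conv2(x(:, :, k), ones(2)/(2*2), 'valid');
        y(:, :, k) = avg(1:2:end, 1:2:end);  % Keep one value per 2x2 block
    end
end

function y = Conv(x, W)
    % 'valid' correlation of the input with each of the filters
    [wrow, wcol, numFilters] = size(W);
    [xrow, xcol] = size(x);
    y = zeros(xrow - wrow + 1, xcol - wcol + 1, numFilters);
    for k = 1:numFilters
        y(:, :, k) = conv2(x, rot90(W(:, :, k), 2), 'valid');
    end
end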

Now that we have the output, the error can be calculated.

As the network has 10 output nodes, the correct output should be given as a 10×1 vector in order to calculate the error.

However, the MNIST data gives the correct output as the respective digit.

For example, if the input image represents a 4, the correct output is given as the digit 4.

The following listing converts this numerical correct output into a 10×1 vector.

Further explanation is omitted.

d = zeros(10, 1);
d(sub2ind(size(d), D(k), 1)) = 1;
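For instance, a correct digit stored as 4 yields a 1 in the fourth element of d. Since d is a 10×1 column vector, the sub2ind call reduces to plain linear indexing, so the same encoding can be written more directly:

d = zeros(10, 1);
d(D(k)) = 1;    % e.g. D(k) = 4 gives d = [0 0 0 1 0 0 0 0 0 0]'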

The last part of the process is the back-propagation of the error.

The following listing shows the back-propagation from the output layer, through the subsequent hidden layer, down to the pooling layer.

As this example employs cross entropy and the softmax function, the delta of the output nodes is the same as the network output error.

The next hidden layer employs the ReLU activation function.

There is nothing special there.

The connecting layer between the hidden layer and the pooling layer is just a rearrangement of the signal.

e     = d - y;                % Output error
delta = e;                    % Softmax + cross entropy: delta equals the error

e5     = Wo' * delta;         % Propagate the error to the hidden layer
delta5 = e5 .* (y5 > 0);      % Derivative of ReLU

e4 = W5' * delta5;            % Propagate to the connecting layer
e3 = reshape(e4, size(y3));   % Rearrange back into the pooling-layer shape
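The first two lines use a well-known property of this pairing: with a softmax output layer trained under cross entropy, the derivative of the loss with respect to the output-layer input reduces to the output error itself, so no activation-derivative factor is needed. A quick numerical illustration with made-up values:

v = [0.5; -1.2; 2.0];        % Arbitrary output-layer weighted sums
y = exp(v) / sum(exp(v));    % Softmax output
d = [0; 0; 1];               % One-hot correct output
delta = d - y                % Output delta equals the error, as above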

We have two more layers to go: the pooling and convolution layers.

The following listing shows the back-propagation that passes through the pooling layer, the ReLU function, and the convolution layer.

The explanation of this part is beyond the scope of this book.

Just refer to the code when you need it in the future.

e2 = zeros(size(y2));             % Error at the layer before pooling
W3 = ones(size(y2)) / (2*2);      % Mean pooling: each input carries weight 1/4
for c = 1:20
    e2(:, :, c) = kron(e3(:, :, c), ones([2 2])) .* W3(:, :, c);
end

delta2 = (y2 > 0) .* e2;          % Derivative of ReLU

delta1_x = zeros(size(W1));       % Gradient of the convolution filters
for c = 1:20
    delta1_x(:, :, c) = conv2(x(:, :), rot90(delta2(:, :, c), 2), 'valid');
end
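In brief, kron copies each pooled-layer error back over the 2×2 block it was averaged from, and W3 scales every copy by 1/4 to match the mean-pooling forward pass; the final loop then correlates the input with each layer-2 delta map to obtain the filter gradient. A small illustration of the kron step with made-up values:

e3_demo = [1 2; 3 4];             % Errors at four pooled positions
kron(e3_demo, ones(2)) / (2*2)    % Each value copied over a 2x2 block, scaled by 1/4
% ans =
%     0.2500    0.2500    0.5000    0.5000
%     0.2500    0.2500    0.5000    0.5000
%     0.7500    0.7500    1.0000    1.0000
%     0.7500    0.7500    1.0000    1.0000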

Translated from MATLAB Deep Learning by Phil Kim.
