[Reading Notes 1][2017] MATLAB and Deep Learning: Example MNIST (9)

Figure 6-23. The images after the ReLU function processes the feature maps from the convolution layer

The dark pixels of the previous images are removed, and the current images have mostly white pixels on the letter.

This is a reasonable result when we consider the definition of the ReLU function.
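As a reminder, ReLU passes positive values through unchanged and replaces negative values with zero, so any region of a feature map whose values are all non-positive comes out entirely dark. A minimal MATLAB sketch of the function (the name `ReLU` is illustrative, not necessarily the exact file used in the book's code):

```matlab
function y = ReLU(x)
  % Rectified Linear Unit: keeps positive values unchanged and
  % replaces negative values with zero, element by element.
  % Because it is element-wise, it applies to a whole feature map at once.
  y = max(x, 0);
end
```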

Now, look at Figure 6-22 again.

It is noticeable that the image in the third row, fourth column contains a few bright pixels. (Translator's note: the phrase "bright pixels" here appears questionable.)

After the ReLU operation, this image becomes completely dark.

Actually, this is not a good sign, because it fails to capture any feature of the input image of the 2.

It needs to be improved through more data and more training.

However, the classification still functions, as the other parts of the feature map work properly.

Figure 6-24 shows the fifth result: the images obtained after mean pooling is applied to the output of the ReLU layer.

Figure 6-24. The images after the mean pooling process

Each image inherits the shape of the image before pooling, now in a 10×10 pixel space, which is half the previous size.

This shows how much the pooling layer can reduce the required resources.
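The size reduction comes from averaging each non-overlapping 2×2 block of the ReLU output into a single pixel. A minimal MATLAB sketch of that step, written with illustrative variable names rather than the book's exact code:

```matlab
% 2x2 mean pooling: every output pixel is the average of one 2x2 block,
% so a 20x20 feature map shrinks to a 10x10 one.
featureMap = rand(20, 20);               % stand-in for one ReLU output map
pooled     = zeros(10, 10);
for r = 1:10
  for c = 1:10
    block        = featureMap(2*r-1:2*r, 2*c-1:2*c);
    pooled(r, c) = mean(block(:));
  end
end
```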

Figure 6-24 is the final result of the feature extraction neural network.

These images are transformed into a one-dimensional vector and stored in the classification neural network.
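A minimal sketch of this flattening step in MATLAB; the array name and the number of feature maps are illustrative assumptions, not the book's exact code:

```matlab
% Flatten the pooled feature maps into a single column vector that is
% fed to the fully connected classification network.
% Illustrative sizes: 20 maps of 10x10 pixels give a 2000x1 vector.
pooledMaps = rand(10, 10, 20);            % stand-in for the pooling-layer output
inputVec   = reshape(pooledMaps, [], 1);  % 2000x1 column vector
```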

This completes the explanation of the example code.

Although only one pair of convolution and pooling layers is employed in this example, many such pairs are usually used in most practical applications.

The more small images that contain the main features of the network, the better the recognition performance.

This article is translated from "Matlab Deep Learning" by Phil Kim.
