Convolutional Neural Networks (CNNs / ConvNets)
Convolutional Neural Networks are very similar to ordinary Neural Networks from the previous chapter: they are made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still expresses a single differentiable score function: from the raw image pixels on one end to class scores at the other. And they still have a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer, and all the tips/tricks we developed for learning regular Neural Networks still apply.
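The neuron described above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the notes; the names (x, w, b) and the choice of ReLU as the non-linearity are assumptions for the sake of the example.

```python
import numpy as np

# A single neuron: dot product of inputs and weights, plus a bias,
# optionally followed by a non-linearity (ReLU here; others work too).
np.random.seed(0)
x = np.random.randn(3072)         # e.g. a flattened 32x32x3 image
w = np.random.randn(3072) * 0.01  # learnable weights
b = 0.0                           # learnable bias

pre_activation = w.dot(x) + b       # the dot product
output = max(0.0, pre_activation)   # ReLU non-linearity
```

A full network chains many such neurons into layers; because every operation is differentiable, gradients of the loss flow back to every weight and bias.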
So what does change? ConvNet architectures make the explicit assumption that the inputs are images, which allows us to encode certain properties into the architecture. These then make the forward function more efficient to implement and vastly reduce the amount of parameters in the network.
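The parameter reduction can be made concrete with a back-of-the-envelope count. The numbers below are illustrative assumptions (a 32x32x3 input and a 5x5 receptive field), not figures from the text:

```python
# Fully-connected: one neuron connects to every input value,
# so for a 32x32x3 image it needs one weight per input (+1 bias).
fc_weights_per_neuron = 32 * 32 * 3    # 3072 weights

# Convolutional: one filter connects only to a local 5x5x3 region,
# and those same weights are shared across all spatial positions.
conv_weights_per_filter = 5 * 5 * 3    # 75 weights

print(fc_weights_per_neuron)    # 3072
print(conv_weights_per_filter)  # 75
```

Local connectivity and weight sharing are exactly the image-specific properties the architecture encodes: nearby pixels are strongly related, and a feature useful at one position is useful at every position.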