[Translation] [VGGNet] VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION

VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION

Authors: Karen Simonyan, Andrew Zisserman

Paper: https://arxiv.org/pdf/1409.1556.pdf

ABSTRACT

In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3 × 3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16–19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

1 INTRODUCTION

Convolutional networks (ConvNets) have recently enjoyed a great success in large-scale image and video recognition (Krizhevsky et al., 2012; Zeiler & Fergus, 2013; Sermanet et al., 2014; Simonyan & Zisserman, 2014) which has become possible due to the large public image repositories, such as ImageNet (Deng et al., 2009), and high-performance computing systems, such as GPUs or large-scale distributed clusters (Dean et al., 2012). In particular, an important role in the advance of deep visual recognition architectures has been played by the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) (Russakovsky et al., 2014), which has served as a testbed for a few generations of large-scale image classification systems, from high-dimensional shallow feature encodings (Perronnin et al., 2010) (the winner of ILSVRC-2011) to deep ConvNets (Krizhevsky et al., 2012) (the winner of ILSVRC-2012).
  With ConvNets becoming more of a commodity in the computer vision field, a number of attempts have been made to improve the original architecture of Krizhevsky et al. (2012) in a bid to achieve better accuracy. For instance, the best-performing submissions to the ILSVRC2013 (Zeiler & Fergus, 2013; Sermanet et al., 2014) utilised smaller receptive window size and smaller stride of the first convolutional layer. Another line of improvements dealt with training and testing the networks densely over the whole image and over multiple scales (Sermanet et al., 2014; Howard, 2014). In this paper, we address another important aspect of ConvNet architecture design – its depth. To this end, we fix other parameters of the architecture, and steadily increase the depth of the network by adding more convolutional layers, which is feasible due to the use of very small (3 × 3) convolution filters in all layers.
  As a result, we come up with significantly more accurate ConvNet architectures, which not only achieve the state-of-the-art accuracy on ILSVRC classification and localisation tasks, but are also applicable to other image recognition datasets, where they achieve excellent performance even when used as a part of relatively simple pipelines (e.g. deep features classified by a linear SVM without fine-tuning). We have released our two best-performing models¹ to facilitate further research.
  The rest of the paper is organised as follows. In Sect. 2, we describe our ConvNet configurations. The details of the image classification training and evaluation are then presented in Sect. 3, and the configurations are compared on the ILSVRC classification task in Sect. 4. Sect. 5 concludes the paper. For completeness, we also describe and assess our ILSVRC-2014 object localisation system in Appendix A, and discuss the generalisation of very deep features to other datasets in Appendix B. Finally, Appendix C contains the list of major paper revisions.

2 CONVNET CONFIGURATIONS

To measure the improvement brought by the increased ConvNet depth in a fair setting, all our ConvNet layer configurations are designed using the same principles, inspired by Ciresan et al. (2011); Krizhevsky et al. (2012). In this section, we first describe a generic layout of our ConvNet configurations (Sect. 2.1) and then detail the specific configurations used in the evaluation (Sect. 2.2). Our design choices are then discussed and compared to the prior art in Sect. 2.3.

2.1 ARCHITECTURE

During training, the input to our ConvNets is a fixed-size 224 × 224 RGB image. The only preprocessing we do is subtracting the mean RGB value, computed on the training set, from each pixel. The image is passed through a stack of convolutional (conv.) layers, where we use filters with a very small receptive field: 3 × 3 (which is the smallest size to capture the notion of left/right, up/down, center). In one of the configurations we also utilise 1 × 1 convolution filters, which can be seen as a linear transformation of the input channels (followed by non-linearity). The convolution stride is fixed to 1 pixel; the spatial padding of conv. layer input is such that the spatial resolution is preserved after convolution, i.e. the padding is 1 pixel for 3 × 3 conv. layers. Spatial pooling is carried out by five max-pooling layers, which follow some of the conv. layers (not all the conv. layers are followed by max-pooling). Max-pooling is performed over a 2 × 2 pixel window, with stride 2.
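  The shape arithmetic here is easy to check mechanically. Below is a minimal PyTorch sketch (our illustration; the authors' implementation is Caffe-based, see Sect. 3.3) confirming that a 3 × 3 convolution with stride 1 and 1-pixel padding preserves spatial resolution, while 2 × 2 max-pooling with stride 2 halves it:
```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)  # fixed-size 224 x 224 RGB input
conv = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)
pool = nn.MaxPool2d(kernel_size=2, stride=2)

print(conv(x).shape)        # torch.Size([1, 64, 224, 224]) -- resolution preserved
print(pool(conv(x)).shape)  # torch.Size([1, 64, 112, 112]) -- halved by pooling
```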
  A stack of convolutional layers (which has a different depth in different architectures) is followed by three Fully-Connected (FC) layers: the first two have 4096 channels each, the third performs 1000way ILSVRC classification and thus contains 1000 channels (one for each class). The final layer is the soft-max layer. The configuration of the fully connected layers is the same in all networks.
  All hidden layers are equipped with the rectification (ReLU (Krizhevsky et al., 2012)) non-linearity. We note that none of our networks (except for one) contain Local Response Normalisation (LRN) normalisation (Krizhevsky et al., 2012): as will be shown in Sect. 4, such normalisation does not improve the performance on the ILSVRC dataset, but leads to increased memory consumption and computation time. Where applicable, the parameters for the LRN layer are those of (Krizhevsky et al., 2012).

2.2 CONFIGURATIONS

The ConvNet configurations, evaluated in this paper, are outlined in Table 1, one per column. In the following we will refer to the nets by their names (A–E). All configurations follow the generic design presented in Sect. 2.1, and differ only in the depth: from 11 weight layers in the network A (8 conv. and 3 FC layers) to 19 weight layers in the network E (16 conv. and 3 FC layers). The width of conv. layers (the number of channels) is rather small, starting from 64 in the first layer and then increasing by a factor of 2 after each max-pooling layer, until it reaches 512.
[Table 1: ConvNet configurations (shown in columns), A–E.]
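  For illustration, the column layout of Table 1 maps naturally onto a token-list builder. The following PyTorch sketch (our reconstruction, not the authors' code) builds configurations A, D and E from the table; numbers denote 3 × 3 conv. output channels and "M" a max-pooling layer (configurations B and C, including C's 1 × 1 convolutions, follow the same pattern and are omitted here):
```python
import torch.nn as nn

# Configurations from Table 1: integers are 3x3 conv output channels, "M" is 2x2 max-pool.
cfgs = {
    "A": [64, "M", 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"],  # 8 conv + 3 FC = 11
    "D": [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
          512, 512, 512, "M", 512, 512, 512, "M"],                          # 13 conv + 3 FC = 16
    "E": [64, 64, "M", 128, 128, "M", 256, 256, 256, 256, "M",
          512, 512, 512, 512, "M", 512, 512, 512, 512, "M"],                # 16 conv + 3 FC = 19
}

def make_vgg(cfg):
    layers, in_ch = [], 3
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = v
    classifier = nn.Sequential(                  # identical in all configurations
        nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
        nn.Linear(4096, 1000),                   # 1000-way ILSVRC class scores (soft-max in the loss)
    )
    return nn.Sequential(*layers), classifier
```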
  In Table 2 we report the number of parameters for each configuration. In spite of a large depth, the number of weights in our nets is not greater than the number of weights in a more shallow net with larger conv. layer widths and receptive fields (144M weights in (Sermanet et al., 2014)).
[Table 2: Number of parameters (in millions) for each configuration.]
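  The counts in Table 2 can be reproduced directly from the configuration lists. A rough sketch, reusing the (hypothetical) cfgs dict from the builder above, with biases included:
```python
def count_params(cfg):
    total, in_ch = 0, 3
    for v in cfg:
        if v != "M":
            total += 3 * 3 * in_ch * v + v  # 3x3 kernel weights + biases
            in_ch = v
    total += 512 * 7 * 7 * 4096 + 4096      # FC1: conv output is 512 x 7 x 7 for 224 x 224 input
    total += 4096 * 4096 + 4096             # FC2
    total += 4096 * 1000 + 1000             # FC3
    return total

print(count_params(cfgs["D"]) / 1e6)  # ~138.4, matching the 138M reported for D in Table 2
```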

2.3 DISCUSSION

Our ConvNet configurations are quite different from the ones used in the top-performing entries of the ILSVRC-2012 (Krizhevsky et al., 2012) and ILSVRC-2013 competitions (Zeiler & Fergus, 2013; Sermanet et al., 2014). Rather than using relatively large receptive fields in the first conv. layers (e.g. 11 × 11 with stride 4 in (Krizhevsky et al., 2012), or 7 × 7 with stride 2 in (Zeiler & Fergus, 2013; Sermanet et al., 2014)), we use very small 3 × 3 receptive fields throughout the whole net, which are convolved with the input at every pixel (with stride 1). It is easy to see that a stack of two 3 × 3 conv. layers (without spatial pooling in between) has an effective receptive field of 5 × 5; three such layers have a 7 × 7 effective receptive field. So what have we gained by using, for instance, a stack of three 3 × 3 conv. layers instead of a single 7 × 7 layer? First, we incorporate three non-linear rectification layers instead of a single one, which makes the decision function more discriminative. Second, we decrease the number of parameters: assuming that both the input and the output of a three-layer 3 × 3 convolution stack has C channels, the stack is parametrised by $3(3^2C^2) = 27C^2$ weights; at the same time, a single 7 × 7 conv. layer would require $7^2C^2 = 49C^2$ parameters, i.e. 81% more. This can be seen as imposing a regularisation on the 7 × 7 conv. filters, forcing them to have a decomposition through the 3 × 3 filters (with non-linearity injected in between).
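  Both claims admit a quick numeric check (a tiny Python sketch; C = 256 is just an illustrative channel count, and biases are ignored as in the paper's count):
```python
# Effective receptive field of n stacked 3x3, stride-1 conv. layers: 1 + 2n
for n in (1, 2, 3):
    print(n, 1 + 2 * n)            # 3, 5, 7 -- two layers see 5x5, three see 7x7

C = 256                             # illustrative; the ratio is independent of C
stack_3x3 = 3 * (3 * 3 * C * C)     # three 3x3 layers: 27 C^2 weights
single_7x7 = 7 * 7 * C * C          # one 7x7 layer:    49 C^2 weights
print(single_7x7 / stack_3x3)       # ~1.81, i.e. the 7x7 layer needs 81% more parameters
```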
  The incorporation of 1 × 1 conv. layers (configuration C, Table 1) is a way to increase the nonlinearity of the decision function without affecting the receptive fields of the conv. layers. Even though in our case the 1 × 1 convolution is essentially a linear projection onto the space of the same dimensionality (the number of input and output channels is the same), an additional non-linearity is introduced by the rectification function. It should be noted that 1 × 1 conv. layers have recently been utilised in the “Network in Network” architecture of Lin et al. (2014).
  Small-size convolution filters have been previously used by Ciresan et al. (2011), but their nets are significantly less deep than ours, and they did not evaluate on the large-scale ILSVRC dataset. Goodfellow et al. (2014) applied deep ConvNets (11 weight layers) to the task of street number recognition, and showed that the increased depth led to better performance. GoogLeNet (Szegedy et al., 2014), a top-performing entry of the ILSVRC-2014 classification task, was developed independently of our work, but is similar in that it is based on very deep ConvNets (22 weight layers) and small convolution filters (apart from 3 × 3, they also use 1 × 1 and 5 × 5 convolutions). Their network topology is, however, more complex than ours, and the spatial resolution of the feature maps is reduced more aggressively in the first layers to decrease the amount of computation. As will be shown in Sect. 4.5, our model is outperforming that of Szegedy et al. (2014) in terms of the single-network classification accuracy.

3 CLASSIFICATION FRAMEWORK

In the previous section we presented the details of our network configurations. In this section, we describe the details of classification ConvNet training and evaluation.

3.1 TRAINING

The ConvNet training procedure generally follows Krizhevsky et al. (2012) (except for sampling the input crops from multi-scale training images, as explained later). Namely, the training is carried out by optimising the multinomial logistic regression objective using mini-batch gradient descent (based on back-propagation (LeCun et al., 1989)) with momentum. The batch size was set to 256, momentum to 0.9. The training was regularised by weight decay (the L2 penalty multiplier set to $5 \cdot 10^{-4}$) and dropout regularisation for the first two fully-connected layers (dropout ratio set to 0.5). The learning rate was initially set to $10^{-2}$, and then decreased by a factor of 10 when the validation set accuracy stopped improving. In total, the learning rate was decreased 3 times, and the learning was stopped after 370K iterations (74 epochs). We conjecture that in spite of the larger number of parameters and the greater depth of our nets compared to (Krizhevsky et al., 2012), the nets required fewer epochs to converge due to (a) implicit regularisation imposed by greater depth and smaller conv. filter sizes; (b) pre-initialisation of certain layers.
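  In modern terms, this recipe corresponds to the following PyTorch sketch (our translation of the hyper-parameters; the original used a modified Caffe, and model stands for one of the networks built earlier):
```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2,
                            momentum=0.9, weight_decay=5e-4)  # L2 penalty 5e-4
# "decrease the lr by a factor of 10 when validation accuracy stops improving":
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="max", factor=0.1)
criterion = torch.nn.CrossEntropyLoss()  # multinomial logistic regression objective

# per epoch: train on mini-batches of 256, then call
#   scheduler.step(val_accuracy)
```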
  The initialisation of the network weights is important, since bad initialisation can stall learning due to the instability of gradient in deep nets. To circumvent this problem, we began with training the configuration A (Table 1), shallow enough to be trained with random initialisation. Then, when training deeper architectures, we initialised the first four convolutional layers and the last three fully-connected layers with the layers of net A (the intermediate layers were initialised randomly). We did not decrease the learning rate for the pre-initialised layers, allowing them to change during learning. For random initialisation (where applicable), we sampled the weights from a normal distribution with the zero mean and $10^{-2}$ variance. The biases were initialised with zero. It is worth noting that after the paper submission we found that it is possible to initialise the weights without pre-training by using the random initialisation procedure of Glorot & Bengio (2010).
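  A sketch of the random initialisation described above (note that a variance of $10^{-2}$ corresponds to a standard deviation of 0.1; model is assumed from the earlier sketches):
```python
import torch.nn as nn

def init_weights(m):
    # Random initialisation as in the paper: N(0, 1e-2), zero biases.
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.normal_(m.weight, mean=0.0, std=0.1)  # variance 1e-2
        nn.init.zeros_(m.bias)

model.apply(init_weights)
# The later-found alternative (Glorot & Bengio, 2010) would instead use:
#   nn.init.xavier_uniform_(m.weight)
```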
  To obtain the fixed-size 224×224 ConvNet input images, they were randomly cropped from rescaled training images (one crop per image per SGD iteration). To further augment the training set, the crops underwent random horizontal flipping and random RGB colour shift (Krizhevsky et al., 2012). Training image rescaling is explained below.
  Training image size. Let S be the smallest side of an isotropically-rescaled training image, from which the ConvNet input is cropped (we also refer to S as the training scale). While the crop size is fixed to 224 × 224, in principle S can take on any value not less than 224: for S = 224 the crop will capture whole-image statistics, completely spanning the smallest side of a training image; for S ≫ 224 the crop will correspond to a small part of the image, containing a small object or an object part.
  We consider two approaches for setting the training scale S. The first is to fix S, which corresponds to single-scale training (note that image content within the sampled crops can still represent multiscale image statistics). In our experiments, we evaluated models trained at two fixed scales: $S = 256$ (which has been widely used in the prior art (Krizhevsky et al., 2012; Zeiler & Fergus, 2013; Sermanet et al., 2014)) and $S = 384$. Given a ConvNet configuration, we first trained the network using $S = 256$. To speed-up training of the $S = 384$ network, it was initialised with the weights pre-trained with $S = 256$, and we used a smaller initial learning rate of $10^{-3}$.
  The second approach to setting S is multi-scale training, where each training image is individually rescaled by randomly sampling S from a certain range $[S_{min}, S_{max}]$ (we used $S_{min} = 256$ and $S_{max} = 512$). Since objects in images can be of different size, it is beneficial to take this into account during training. This can also be seen as training set augmentation by scale jittering, where a single model is trained to recognise objects over a wide range of scales. For speed reasons, we trained multi-scale models by fine-tuning all layers of a single-scale model with the same configuration, pre-trained with fixed $S = 384$.
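  The cropping pipeline can be sketched with torchvision (our approximation; RandomScaleJitter is a helper we define, and the paper's AlexNet-style random RGB colour shift is omitted for brevity):
```python
import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

class RandomScaleJitter:
    """Isotropically rescale so the smallest side is S ~ U[S_min, S_max]."""
    def __init__(self, s_min=256, s_max=512):
        self.s_min, self.s_max = s_min, s_max
    def __call__(self, img):
        return TF.resize(img, random.randint(self.s_min, self.s_max))

train_transform = T.Compose([
    RandomScaleJitter(256, 512),  # multi-scale; for single-scale use TF.resize(img, S) with fixed S
    T.RandomCrop(224),            # one crop per image per SGD iteration
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
```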

3.2 TESTING

At test time, given a trained ConvNet and an input image, it is classified in the following way. First, it is isotropically rescaled to a pre-defined smallest image side, denoted as Q (we also refer to it as the test scale). We note that Q is not necessarily equal to the training scale S (as we will show in Sect. 4, using several values of Q for each S leads to improved performance). Then, the network is applied densely over the rescaled test image in a way similar to (Sermanet et al., 2014). Namely, the fully-connected layers are first converted to convolutional layers (the first FC layer to a 7 × 7 conv. layer, the last two FC layers to 1 × 1 conv. layers). The resulting fully-convolutional net is then applied to the whole (uncropped) image. The result is a class score map with the number of channels equal to the number of classes, and a variable spatial resolution, dependent on the input image size. Finally, to obtain a fixed-size vector of class scores for the image, the class score map is spatially averaged (sum-pooled). We also augment the test set by horizontal flipping of the images; the soft-max class posteriors of the original and flipped images are averaged to obtain the final scores for the image.
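  The FC-to-conv conversion can be sketched as follows (assuming fc1, fc2, fc3 are the three trained nn.Linear layers from earlier; the weight tensors are reshaped, not retrained):
```python
import torch
import torch.nn as nn

conv_fc1 = nn.Conv2d(512, 4096, kernel_size=7)   # first FC -> 7x7 conv
conv_fc2 = nn.Conv2d(4096, 4096, kernel_size=1)  # last two FCs -> 1x1 convs
conv_fc3 = nn.Conv2d(4096, 1000, kernel_size=1)
conv_fc1.weight.data.copy_(fc1.weight.view(4096, 512, 7, 7)); conv_fc1.bias.data.copy_(fc1.bias)
conv_fc2.weight.data.copy_(fc2.weight.view(4096, 4096, 1, 1)); conv_fc2.bias.data.copy_(fc2.bias)
conv_fc3.weight.data.copy_(fc3.weight.view(1000, 4096, 1, 1)); conv_fc3.bias.data.copy_(fc3.bias)

# Applied to an uncropped image, the net now yields a class score map;
# spatial averaging gives the fixed-size score vector:
#   scores = score_map.mean(dim=(2, 3))
```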
  Since the fully-convolutional network is applied over the whole image, there is no need to sample multiple crops at test time (Krizhevsky et al., 2012), which is less efficient as it requires network re-computation for each crop. At the same time, using a large set of crops, as done by Szegedy et al. (2014), can lead to improved accuracy, as it results in a finer sampling of the input image compared to the fully-convolutional net. Also, multi-crop evaluation is complementary to dense evaluation due to different convolution boundary conditions: when applying a ConvNet to a crop, the convolved feature maps are padded with zeros, while in the case of dense evaluation the padding for the same crop naturally comes from the neighbouring parts of an image (due to both the convolutions and spatial pooling), which substantially increases the overall network receptive field, so more context is captured. While we believe that in practice the increased computation time of multiple crops does not justify the potential gains in accuracy, for reference we also evaluate our networks using 50 crops per scale (5 × 5 regular grid with 2 flips), for a total of 150 crops over 3 scales, which is comparable to 144 crops over 4 scales used by Szegedy et al. (2014).

3.3 IMPLEMENTATION DETAILS

Our implementation is derived from the publicly available C++ Caffe toolbox (Jia, 2013) (branched out in December 2013), but contains a number of significant modifications, allowing us to perform training and evaluation on multiple GPUs installed in a single system, as well as train and evaluate on full-size (uncropped) images at multiple scales (as described above). Multi-GPU training exploits data parallelism, and is carried out by splitting each batch of training images into several GPU batches, processed in parallel on each GPU. After the GPU batch gradients are computed, they are averaged to obtain the gradient of the full batch. Gradient computation is synchronous across the GPUs, so the result is exactly the same as when training on a single GPU.
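  In today's PyTorch, this synchronous data-parallel scheme (split the batch, average per-GPU gradients, match single-GPU results) is essentially what nn.DataParallel implements; a sketch, not the authors' Caffe modification:
```python
import torch.nn as nn

model = nn.DataParallel(model, device_ids=[0, 1, 2, 3]).cuda()
# forward/backward as usual; each batch is split across the 4 GPUs and
# gradient synchronisation is handled internally
```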
  While more sophisticated methods of speeding up ConvNet training have been recently proposed (Krizhevsky, 2014), which employ model and data parallelism for different layers of the net, we have found that our conceptually much simpler scheme already provides a speedup of 3.75 times on an off-the-shelf 4-GPU system, as compared to using a single GPU. On a system equipped with four NVIDIA Titan Black GPUs, training a single net took 2–3 weeks depending on the architecture.

4 CLASSIFICATION EXPERIMENTS

Dataset. In this section, we present the image classification results achieved by the described ConvNet architectures on the ILSVRC-2012 dataset (which was used for ILSVRC 2012–2014 challenges). The dataset includes images of 1000 classes, and is split into three sets: training (1.3M images), validation (50K images), and testing (100K images with held-out class labels). The classification performance is evaluated using two measures: the top-1 and top-5 error. The former is a multi-class classification error, i.e. the proportion of incorrectly classified images; the latter is the main evaluation criterion used in ILSVRC, and is computed as the proportion of images such that the ground-truth category is outside the top-5 predicted categories.
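  Both measures are easy to compute from the raw class scores; a small sketch (topk_error is our helper name):
```python
import torch

def topk_error(scores, targets, k):
    """Proportion of images whose ground-truth class is outside the top-k predictions."""
    topk = scores.topk(k, dim=1).indices                 # (N, k) predicted classes
    correct = (topk == targets.unsqueeze(1)).any(dim=1)  # ground truth among top k?
    return 1.0 - correct.float().mean().item()

# top-1 error: topk_error(scores, targets, 1); top-5 error: topk_error(scores, targets, 5)
```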
  For the majority of experiments, we used the validation set as the test set. Certain experiments were also carried out on the test set and submitted to the official ILSVRC server as a “VGG” team entry to the ILSVRC-2014 competition (Russakovsky et al., 2014).

4.1 SINGLE SCALE EVALUATION

We begin with evaluating the performance of individual ConvNet models at a single scale, with the layer configurations described in Sect. 2.2. The test image size was set as follows: $Q = S$ for fixed $S$, and $Q = 0.5(S_{min} + S_{max})$ for jittered $S \in [S_{min}, S_{max}]$. The results are shown in Table 3.
[Table 3: ConvNet performance at a single test scale.]
  First, we note that using local response normalisation (A-LRN network) does not improve on the model A without any normalisation layers. We thus do not employ normalisation in the deeper architectures (B–E).
  Second, we observe that the classification error decreases with the increased ConvNet depth: from 11 layers in A to 19 layers in E. Notably, in spite of the same depth, the configuration C (which contains three 1 × 1 conv. layers), performs worse than the configuration D, which uses 3 × 3 conv. layers throughout the network. This indicates that while the additional non-linearity does help (C is better than B), it is also important to capture spatial context by using conv. filters with non-trivial receptive fields (D is better than C). The error rate of our architecture saturates when the depth reaches 19 layers, but even deeper models might be beneficial for larger datasets. We also compared the net B with a shallow net with five 5 × 5 conv. layers, which was derived from B by replacing each pair of 3 × 3 conv. layers with a single 5 × 5 conv. layer (which has the same receptive field as explained in Sect. 2.3). The top-1 error of the shallow net was measured to be 7% higher than that of B (on a center crop), which confirms that a deep net with small filters outperforms a shallow net with larger filters.
  Finally, scale jittering at training time (S ∈ [256; 512]) leads to significantly better results than training on images with fixed smallest side (S = 256 or S = 384), even though a single scale is used at test time. This confirms that training set augmentation by scale jittering is indeed helpful for capturing multi-scale image statistics.

4.2 MULTI-SCALE EVALUATION

Having evaluated the ConvNet models at a single scale, we now assess the effect of scale jittering at test time. It consists of running a model over several rescaled versions of a test image (corresponding to different values of Q), followed by averaging the resulting class posteriors. Considering that a large discrepancy between training and testing scales leads to a drop in performance, the models trained with fixed S were evaluated over three test image sizes, close to the training one: $Q = \{S - 32, S, S + 32\}$. At the same time, scale jittering at training time allows the network to be applied to a wider range of scales at test time, so the model trained with variable $S \in [S_{min}; S_{max}]$ was evaluated over a larger range of sizes $Q = \{S_{min}, 0.5(S_{min} + S_{max}), S_{max}\}$.
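  A small helper (our naming) making the choice of test scales Q explicit:
```python
def test_scales(s_fixed=None, s_min=None, s_max=None):
    """Test scales Q as described: three sizes near S, or {S_min, mid, S_max}."""
    if s_fixed is not None:                        # model trained with fixed S
        return [s_fixed - 32, s_fixed, s_fixed + 32]
    return [s_min, (s_min + s_max) // 2, s_max]    # model trained with jittered S

# the class posteriors from each rescaled version are then averaged, e.g.
#   p = torch.stack([softmax(net(resize(img, q))) for q in test_scales(...)]).mean(0)
```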
  The results, presented in Table 4, indicate that scale jittering at test time leads to better performance (as compared to evaluating the same model at a single scale, shown in Table 3). As before, the deepest configurations (D and E) perform the best, and scale jittering is better than training with a fixed smallest side S. Our best single-network performance on the validation set is 24.8%/7.5% top-1/top-5 error (highlighted in bold in Table 4). On the test set, the configuration E achieves 7.3% top-5 error.
[Table 4: ConvNet performance at multiple test scales.]

4.3 MULTI-CROP EVALUATION

In Table 5 we compare dense ConvNet evaluation with multi-crop evaluation (see Sect. 3.2 for details). We also assess the complementarity of the two evaluation techniques by averaging their softmax outputs. As can be seen, using multiple crops performs slightly better than dense evaluation, and the two approaches are indeed complementary, as their combination outperforms each of them. As noted above, we hypothesize that this is due to a different treatment of convolution boundary conditions.
[Table 5: ConvNet evaluation techniques comparison (dense vs. multi-crop vs. both).]

4.4 CONVNET FUSION

Up until now, we evaluated the performance of individual ConvNet models. In this part of the experiments, we combine the outputs of several models by averaging their soft-max class posteriors. This improves the performance due to complementarity of the models, and was used in the top ILSVRC submissions in 2012 (Krizhevsky et al., 2012) and 2013 (Zeiler & Fergus, 2013; Sermanet et al., 2014).
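  Fusion by averaging soft-max class posteriors is a one-liner; a minimal sketch (fuse is our helper name):
```python
import torch

def fuse(models, image_batch):
    """Average the soft-max class posteriors of several trained models."""
    posteriors = [torch.softmax(m(image_batch), dim=1) for m in models]
    return torch.stack(posteriors).mean(dim=0)  # averaged (N, 1000) posteriors
```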
  The results are shown in Table 6. By the time of ILSVRC submission we had only trained the single-scale networks, as well as a multi-scale model D (by fine-tuning only the fully-connected layers rather than all layers). The resulting ensemble of 7 networks has 7.3% ILSVRC test error. After the submission, we considered an ensemble of only two best-performing multi-scale models (configurations D and E), which reduced the test error to 7.0% using dense evaluation and 6.8% using combined dense and multi-crop evaluation. For reference, our best-performing single model achieves 7.1% error (model E, Table 5).
[Table 6: Multiple ConvNet fusion results.]

4.5 COMPARISON WITH THE STATE OF THE ART

Finally, we compare our results with the state of the art in Table 7. In the classification task of ILSVRC-2014 challenge (Russakovsky et al., 2014), our “VGG” team secured the 2nd place with 7.3% test error using an ensemble of 7 models. After the submission, we decreased the error rate to 6.8% using an ensemble of 2 models.
[Table 7: Comparison with the state of the art in ILSVRC classification.]
  As can be seen from Table 7, our very deep ConvNets significantly outperform the previous generation of models, which achieved the best results in the ILSVRC-2012 and ILSVRC-2013 competitions. Our result is also competitive with respect to the classification task winner (GoogLeNet with 6.7% error) and substantially outperforms the ILSVRC-2013 winning submission Clarifai, which achieved 11.2% with outside training data and 11.7% without it. This is remarkable, considering that our best result is achieved by combining just two models – significantly less than used in most ILSVRC submissions. In terms of the single-net performance, our architecture achieves the best result (7.0% test error), outperforming a single GoogLeNet by 0.9%. Notably, we did not depart from the classical ConvNet architecture of LeCun et al. (1989), but improved it by substantially increasing the depth.

5 CONCLUSION

In this work we evaluated very deep convolutional networks (up to 19 weight layers) for large-scale image classification. It was demonstrated that the representation depth is beneficial for the classification accuracy, and that state-of-the-art performance on the ImageNet challenge dataset can be achieved using a conventional ConvNet architecture (LeCun et al., 1989; Krizhevsky et al., 2012) with substantially increased depth. In the appendix, we also show that our models generalise well to a wide range of tasks and datasets, matching or outperforming more complex recognition pipelines built around less deep image representations. Our results yet again confirm the importance of depth in visual representations.


  ¹ https://www.robots.ox.ac.uk/~vgg/research/very_deep/
