【论文翻译】Deep Residual Learning for Image Recognition

Deep Residual Learning for Image Recognition

Kaiming He Xiangyu Zhang Shaoqing Ren Jian Sun
Microsoft Research*

用于图像识别的深度残差学习

Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

摘要: 更深的神经网络更难训练。我们提出一种残差学习框架,用以简化比以往深得多的网络的训练。我们显式地将层重新表述为参照层输入学习残差函数,而不是学习无参照的函数。我们提供了全面的实验证据,表明这些残差网络更容易优化,并且可以从大大增加的深度中获得精度提升。在ImageNet数据集上,我们评估了深度高达152层的残差网络,其深度是VGG网络[40]的8倍,但复杂度仍然更低。这些残差网络的集成在ImageNet测试集上达到3.57%的误差,这一结果赢得了ILSVRC 2015分类任务的第一名。我们还在CIFAR-10上给出了100层和1000层网络的分析。

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

    表示的深度对于许多视觉识别任务至关重要。仅凭我们极深的表示,我们就在COCO目标检测数据集上获得了28%的相对提升。深度残差网络是我们提交ILSVRC和COCO 2015竞赛的基础,我们还借此赢得了ImageNet检测、ImageNet定位、COCO检测和COCO分割任务的第一名。

1. Introduction

    Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 49, 39]. Deep networks naturally integrate low/mid/high-level features [49] and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [40, 43] reveals that network depth is of crucial importance, and the leading results [40, 43, 12, 16] on the challenging ImageNet dataset [35] all exploit “very deep” [40] models, with a depth of sixteen [40] to thirty [16]. Many other non-trivial visual recognition tasks [7, 11, 6, 32, 27] have also greatly benefited from very deep models.
    深度卷积神经网络[22,21]带来了图像分类的一系列突破[21,49,39]。深度网络以端到端的多层方式自然地集成了低/中/高层特征[49]和分类器,而特征的“层次”可以通过堆叠层的数量(深度)来丰富。最近的证据[40,43]表明,网络深度至关重要,在具有挑战性的ImageNet数据集[35]上的领先结果[40,43,12,16]都采用了“非常深”[40]的模型,深度为16[40]到30[16]层。许多其他非平凡的视觉识别任务[7,11,6,32,27]也从非常深的模型中受益匪浅。

    Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [14, 1, 8], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 8, 36, 12] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22].
    在深度重要性的驱动下,一个问题随之出现:学习更好的网络是否就像堆叠更多的层一样容易?回答这个问题的一个障碍是臭名昭著的梯度消失/爆炸问题[14,1,8],它从一开始就阻碍收敛。然而,这一问题在很大程度上已经被归一化初始化[23,8,36,12]和中间的归一化层[16]所解决,这使得数十层的网络能够在带反向传播的随机梯度下降(SGD)[22]下开始收敛。

    When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [10, 41] and thoroughly verified by our experiments. Fig. 1 shows a typical example.
    当更深的网络能够开始收敛时,一个退化问题暴露了出来:随着网络深度的增加,精度趋于饱和(这也许并不令人意外),然后迅速下降。出乎意料的是,这种退化并不是由过拟合造成的,在一个合适的深度模型上添加更多的层反而会导致更高的训练误差,如[10,41]所报道,并被我们的实验彻底验证。图1显示了一个典型的例子。
    The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution(or unable to do so in feasible time).
    (训练精度的)退化表明,并非所有系统都同样容易优化。让我们考虑一个较浅的架构,以及在其上添加更多层而得到的较深架构。对于这个较深的模型,存在一个构造出的解:添加的层是恒等映射,其余各层则从已学好的较浅模型中复制而来。这个构造解的存在表明,较深的模型不应产生比其较浅对应模型更高的训练误差。但实验表明,我们现有的求解器无法找到与该构造解相当或更好的解(或无法在可行时间内做到)。

    In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F(x) := H(x)−x. The original mapping is recast into F(x)+x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.
    本文通过引入深度残差学习框架来解决退化问题。我们不再期望每几个堆叠层直接拟合所需的底层映射,而是显式地让这些层拟合一个残差映射。形式化地,将所需的底层映射记为H(x),我们让堆叠的非线性层去拟合另一个映射F(x) := H(x) − x,于是原始映射被改写为F(x) + x。我们假设优化残差映射比优化原始的、无参照的映射更容易。在极端情况下,如果恒等映射是最优的,那么将残差推至零要比用一堆非线性层去拟合一个恒等映射容易得多。

    The formulation of F(x) +x can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 33, 48] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19])without modifying the solvers.
    F(x) + x的形式可以通过带有“快捷连接”的前馈神经网络来实现(图2)。快捷连接[2,33,48]是指跳过一层或多层的连接。在我们的情形中,快捷连接只执行恒等映射,其输出与堆叠层的输出相加(图2)。恒等快捷连接既不增加额外参数,也不增加计算复杂度。整个网络仍然可以通过带反向传播的SGD进行端到端训练,并且可以用常见的库(例如Caffe[19])轻松实现,而无需修改求解器。

    We present comprehensive experiments on ImageNet [35] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.
    我们在ImageNet[35]上进行了全面的实验,以展示退化问题并评估我们的方法。我们表明:1)我们极深的残差网络很容易优化,而对应的“普通”网络(即简单堆叠层的网络)在深度增加时表现出更高的训练误差;2)我们的深度残差网络可以很容易地从大大增加的深度中获得精度增益,产生的结果大大优于以前的网络。

    Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.
    在CIFAR-10数据集[20]上也显示出类似的现象,这表明优化困难以及我们方法的效果并不局限于某个特定的数据集。我们在这个数据集上展示了成功训练的超过100层的模型,并探索了超过1000层的模型。

    On the ImageNet classification dataset [35], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [40]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.
    在ImageNet分类数据集[35]上,我们通过极深的残差网络获得了优异的结果。我们的152层残差网络是迄今在ImageNet上出现过的最深的网络,但其复杂度仍低于VGG网络[40]。我们的集成模型在ImageNet测试集上取得了3.57%的top-5错误率,并在ILSVRC 2015分类竞赛中获得第一名。极深的表示在其他识别任务上也有出色的泛化性能,并带领我们在ILSVRC和COCO 2015竞赛中进一步赢得了ImageNet检测、ImageNet定位、COCO检测和COCO分割的第一名。这一有力证据表明,残差学习原理是通用的,我们期望它也适用于其他视觉和非视觉问题。

2. Related Work

    Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 47]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.
    残差表示。在图像识别中,VLAD[18]是一种相对于字典用残差向量进行编码的表示,而Fisher向量[30]可以表述为VLAD的概率版本[18]。它们都是用于图像检索和分类[4,47]的强大的浅层表示。对于矢量量化,编码残差向量[17]被证明比编码原始向量更有效。

    In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [44, 45], which relies on variables that represent residual vectors between two scales. It has been shown [3, 44, 45] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.
    在低层视觉和计算机图形学中,为了求解偏微分方程(PDE),广泛使用的多重网格(Multigrid)方法[3]将系统重构为多个尺度上的子问题,每个子问题负责较粗尺度与较细尺度之间的残差解。多重网格的一种替代方法是分层基预处理[44,45],它依赖于表示两个尺度之间残差向量的变量。[3,44,45]已经表明,这些求解器的收敛速度远快于不了解解的残差性质的标准求解器。这些方法表明,良好的重构或预处理可以简化优化。

    Shortcut Connections. Practices and theories that lead to shortcut connections [2, 33, 48] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [33, 48]. In [43, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [38, 37, 31, 46] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [43], an “inception” layer is composed of a shortcut branch and a few deeper branches.
    快捷连接。导致快捷连接[2,33,48]的实践和理论已经被研究了很长时间。训练多层感知机(MLP)的一个早期实践是添加一个从网络输入连接到输出的线性层[33,48]。在[43,24]中,一些中间层被直接连接到辅助分类器,用于解决梯度消失/爆炸问题。[38,37,31,46]等论文提出了通过快捷连接实现的对层响应、梯度和传播误差进行中心化的方法。在[43]中,“inception”层由一个快捷分支和几个更深的分支组成。

    Concurrent with our work, “highway networks” [41, 42] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).
    与我们的工作同时,“highway networks”[41,42]提出了带有门控函数[15]的快捷连接。这些门是依赖于数据的,并且带有参数;相比之下,我们的恒等快捷连接是无参数的。当一个门控快捷连接“关闭”(趋近于零)时,highway networks中的层表示的是非残差函数。相反,我们的公式总是学习残差函数;我们的恒等快捷连接永远不会关闭,所有信息总是被传递,同时还要学习额外的残差函数。此外,highway networks并没有展示出在深度极大增加时(例如超过100层)的精度增益。

3. Deep Residual Learning

3.1. Residual Learning
    Let us consider H(x) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net),with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions2, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., H(x) − x (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function F(x) := H(x) − x. The original function thus becomes F(x)+x. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.
3.1 残差学习
    让我们把H(x)视为由几个堆叠层(不一定是整个网络)来拟合的底层映射,其中x表示这些层中第一层的输入。如果假设多个非线性层可以渐近地逼近复杂函数,那么这等价于假设它们可以渐近地逼近残差函数,即H(x) − x(假设输入和输出维数相同)。因此,我们不是期望堆叠层去逼近H(x),而是显式地让这些层逼近残差函数F(x) := H(x) − x,原始函数因此变为F(x) + x。虽然两种形式都应该能够渐近地逼近所需的函数(如假设的那样),但学习的难易程度可能不同。

    This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.
    这种重构的动机来自于退化问题中违反直觉的现象(图1,左)。正如我们在引言中讨论的,如果添加的层可以被构造为恒等映射,那么更深的模型的训练误差不应大于其较浅的对应模型。退化问题表明,求解器可能难以用多个非线性层来逼近恒等映射。而有了残差学习的重构,如果恒等映射是最优的,求解器只需简单地把多个非线性层的权重推向零,就能逼近恒等映射。

    In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.
    在实际情况下,恒等映射不太可能是最优的,但我们的重构可能有助于对问题进行预处理。如果最优函数更接近于恒等映射而不是零映射,那么求解器参照恒等映射去寻找扰动,应该比把该函数当作一个全新的函数来学习更容易。我们通过实验(图7)表明,学习到的残差函数通常具有较小的响应,这说明恒等映射提供了合理的预处理。
3.2. Identity Mapping by Shortcuts
    We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:
y = F(x, {Wi}) + x. (1)
    Here x and y are the input and output vectors of the layers considered. The function F(x, {Wi}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, F = W2σ(W1x) in which σ denotes ReLU [29] and the biases are omitted for simplifying notations. The operation F + x is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ(y), see Fig. 2).

3.2 通过快捷连接进行恒等映射
    我们对每几个堆叠层采用残差学习。一个构建块(building block)如图2所示。形式化地,本文考虑的构建块定义为:
y = F(x, {Wi}) + x. (1)
    这里x和y是所考虑的层的输入和输出向量。函数F(x, {Wi})表示要学习的残差映射。对于图2中具有两层的例子,F = W2σ(W1x),其中σ表示ReLU[29],为了简化符号省略了偏置项。运算F + x通过快捷连接和逐元素相加来实现。我们在相加之后采用第二个非线性(即σ(y),见图2)。
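
    下面给出式(1)所描述的两层残差块的一个最小示意实现,假设使用PyTorch(论文原实现基于Caffe),并按3.4节的做法在每次卷积之后、激活之前加入BN;`BasicResidualBlock`等名称只是示意,并非论文官方代码:

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """式(1)的两层残差块草图:y = F(x, {Wi}) + x,相加之后再做第二个非线性σ(y)。"""
    def __init__(self, channels):
        super().__init__()
        # F(x) 由两个3×3卷积组成;按3.4节的做法,在每次卷积之后、激活之前使用BN
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                               # 恒等快捷连接,不引入任何参数
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                       # 逐元素相加:F(x) + x
        return self.relu(out)                      # 相加之后的第二个非线性

# 用法示意:BasicResidualBlock(64)(torch.randn(1, 64, 56, 56)).shape -> (1, 64, 56, 56)
```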

    The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).
    式(1)中的快捷连接既不引入额外参数,也不增加计算复杂度。这不仅在实践中很有吸引力,而且在我们比较普通网络和残差网络时也很重要:我们可以公平地比较同时具有相同参数量、深度、宽度和计算成本的普通/残差网络(除了可忽略的逐元素加法)。

    The dimensions of x and F must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Ws by the shortcut connections to match the dimensions:
y = F(x, {Wi}) + Wsx. (2)
    We can also use a square matrix Ws in Eqn.(1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus Ws is only used when matching dimensions.

    在式(1)中,x和F的维数必须相等。如果不是这种情况(例如,当输入/输出通道数发生变化时),我们可以通过快捷连接执行一个线性投影Ws来匹配维数:
y = F(x,{Wi}) + Wsx. (2)
    我们也可以在式(1)中使用一个方阵Ws。但我们将通过实验表明,恒等映射足以解决退化问题并且更经济,因此Ws仅在匹配维数时使用。
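
    作为参考,下面给出式(2)中线性投影Ws的一种常见实现草图,假设使用PyTorch,用1×1卷积实现(对应3.3节的选项B);快捷分支上是否加BN论文并未规定,此处仅为假设:

```python
import torch.nn as nn

def projection_shortcut(in_channels, out_channels, stride=1):
    """式(2)中线性投影Ws的一种常见实现草图:用1×1卷积(可带步长)匹配维数。"""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
        nn.BatchNorm2d(out_channels),  # 假设:与主分支一致地加BN,论文在式(2)中并未规定
    )
```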

    The form of the residual function F is flexible. Experiments in this paper involve a function F that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: y = W1x + x, for which we have not observed advantages.
    残差函数F的形式是灵活的。本文的实验涉及具有两层或三层的函数F(图5),更多层也是可以的。但如果F只有一层,式(1)就类似于一个线性层:y = W1x + x,对此我们没有观察到优势。

    We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function F(x, {Wi}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.
    我们还注意到,虽然为了简洁起见,上述记号是针对全连接层的,但它们同样适用于卷积层。函数F(x, {Wi})可以表示多个卷积层。逐元素加法在两个特征图上逐通道执行。

3.3. Network Architectures
    We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.
3.3 网络架构
    我们测试了各种普通/残差网络,并观察到了一致的现象。为了提供讨论的实例,我们对ImageNet描述如下两个模型。

    Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [40] (Fig. 3,left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).
    普通网络。我们的普通基线(图3,中)主要受VGG网络[40]设计理念的启发(图3,左)。卷积层大多采用3×3滤波器,并遵循两条简单的设计规则:(i)对于相同尺寸的输出特征图,各层具有相同数量的滤波器;(ii)如果特征图尺寸减半,则滤波器数量加倍,以保持每层的时间复杂度。我们直接用步长为2的卷积层进行下采样。网络以一个全局平均池化层和一个带softmax的1000路全连接层结束。图3(中)中加权层的总数为34。
(图3:左为VGG-19,中为34层普通网络,右为对应的34层残差网络。)
    It is worth noticing that our model has fewer filters and lower complexity than VGG nets [40] (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).
    值得注意的是,我们的模型比VGG网络[40](图3,左)具有更少的滤波器和更低的复杂度。我们的34层基线有36亿FLOPs(乘加运算),仅为VGG-19(196亿FLOPs)的18%。

    Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.
    残差网络。基于上述普通网络,我们插入快捷连接(图3,右),将网络转换为对应的残差版本。当输入和输出维数相同时(图3中的实线快捷连接),可以直接使用恒等快捷连接(式(1))。当维数增加时(图3中的虚线快捷连接),我们考虑两种选项:(A)快捷连接仍然执行恒等映射,并为增加的维数填充零,这种选项不引入额外参数;(B)使用式(2)中的投影快捷连接来匹配维数(通过1×1卷积实现)。对于这两种选项,当快捷连接跨越两种尺寸的特征图时,都以2为步长执行。
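
    下面是选项A(恒等映射加补零,零额外参数)的一个示意草图,假设使用PyTorch;`option_a_shortcut`为假设的函数名,补零的具体位置属于实现上的假设:

```python
import torch
import torch.nn.functional as F

def option_a_shortcut(x, out_channels, stride=2):
    """选项A的示意实现:恒等映射 + 为增加的维数补零,不引入任何参数。"""
    if stride > 1:
        x = x[:, :, ::stride, ::stride]            # 跨尺寸时以2为步长取样,空间尺寸减半
    pad_channels = out_channels - x.size(1)
    # 在通道维度补零使维数匹配(补零位置是实现上的假设,这里全部补在末尾)
    return F.pad(x, (0, 0, 0, 0, 0, pad_channels))

# 用法示意:option_a_shortcut(torch.randn(1, 64, 56, 56), 128).shape -> (1, 128, 28, 28)
```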

3.4. Implementation
    Our implementation for ImageNet follows the practice in [21, 40]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [40]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [12] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60 × 10^4 iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [13], following the practice in [16].
3.4 实现
    我们在ImageNet上的实现遵循[21,40]中的做法。图像调整大小时,其较短边在[256, 480]范围内随机采样,以进行尺度增强[40]。从图像或其水平翻转中随机裁剪出一个224×224的图像块,并减去每像素均值[21]。使用[21]中的标准颜色增强。遵循[16],我们在每次卷积之后、激活之前采用批归一化(BN)[16]。我们按照[12]初始化权重,并从零开始训练所有普通/残差网络。我们使用SGD,mini-batch大小为256。学习率从0.1开始,当误差进入平台期时除以10,模型最多训练60×10^4次迭代。我们使用0.0001的权重衰减和0.9的动量。遵循[16]中的做法,我们不使用dropout[13]。
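
    按照本节给出的超参数,训练配置可以用如下草图表示(假设使用PyTorch;论文是在误差进入平台期时手动把学习率除以10,这里用ReduceLROnPlateau近似,patience的取值是假设):

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)   # 占位模型,仅为示意;实际应为上文的普通/残差网络
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.1,              # 初始学习率 0.1
                            momentum=0.9,        # 动量 0.9
                            weight_decay=1e-4)   # 权重衰减 0.0001
# 论文做法:当误差进入平台期时把学习率除以10,mini-batch为256,最多训练60×10^4次迭代
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                       factor=0.1, patience=5)
```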

    In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [40, 12], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).
    在测试中,为了进行对比研究,我们采用标准的10-crop测试[21]。为了获得最好的结果,我们采用[40,12]中的全卷积形式,并在多个尺度上对得分取平均(图像被调整大小,使较短边位于{224, 256, 384, 480, 640}之中)。
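
    标准10-crop测试可以用如下草图示意(假设使用torchvision;多尺度全卷积测试此处从略,`pil_image`与`model`为假设的名称):

```python
import torch
from torchvision import transforms

# 10-crop:四个角与中心裁剪,连同它们的水平翻转,共10个224×224裁剪,推理时对得分取平均
ten_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),
    transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(c) for c in crops])),
])
# 用法示意(pil_image、model 均为假设的名称):
#   crops = ten_crop(pil_image)                       # (10, 3, 224, 224)
#   score = model(crops).softmax(dim=-1).mean(dim=0)  # 10个裁剪的平均得分
```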

4. Experiments

4.1. ImageNet Classification
    We evaluate our method on the ImageNet 2012 classification dataset [35] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.
4 实验
4.1 ImageNet分类
    我们在包含1000个类别的ImageNet 2012分类数据集[35]上评估我们的方法。模型在128万张训练图像上训练,并在5万张验证图像上评估。我们还获得了由测试服务器报告的在10万张测试图像上的最终结果。我们评估top-1和top-5错误率。
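
    top-1/top-5错误率的含义是:若真实类别不在预测得分最高的k个类别之中,则记为一次错误。下面是一个简单的计算草图(假设使用PyTorch,数据为随机生成,仅作说明):

```python
import torch

def topk_error(logits, targets, k):
    """top-k错误率:若真实类别不在得分最高的k个预测之中,则记为一次错误。"""
    topk = logits.topk(k, dim=1).indices                   # (N, k)
    correct = (topk == targets.unsqueeze(1)).any(dim=1)
    return 1.0 - correct.float().mean().item()

# 随机数据示例,仅用于说明计算方式
logits = torch.randn(8, 1000)
targets = torch.randint(0, 1000, (8,))
print(topk_error(logits, targets, 1), topk_error(logits, targets, 5))
```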

    Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.
    普通网络。我们首先评估18层和34层的普通网络。34层普通网络如图3(中)所示,18层普通网络具有类似的形式。详细架构见表1。

    The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.
    表2中的结果表明,较深的34层普通网络比较浅的18层普通网络具有更高的验证误差。为了揭示原因,在图4(左)中我们比较了它们在训练过程中的训练/验证误差。我们观察到了退化问题:在整个训练过程中,34层普通网络的训练误差都更高,尽管18层普通网络的解空间是34层普通网络解空间的子空间。

    We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error3. The reason for such optimization difficulties will be studied in the future.
    我们认为这种优化困难不太可能是由梯度消失引起的。这些普通网络使用BN[16]训练,这保证了前向传播的信号具有非零方差。我们还验证了反向传播的梯度在BN下表现出健康的范数。因此,前向和反向的信号都没有消失。事实上,34层普通网络仍然能够达到有竞争力的精度(表3),这表明求解器在一定程度上是有效的。我们猜测深度普通网络的收敛速度可能呈指数级降低,从而影响训练误差的下降。这种优化困难的原因将在未来研究。

    Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3×3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.
    残差网络。接下来,我们评估18层和34层残差网络(ResNet)。基线结构与上面的普通网络相同,只是在每对3×3滤波器之间添加了快捷连接,如图3(右)所示。在第一次比较中(表2和图4右),我们对所有快捷连接都使用恒等映射,并用零填充来增加维数(选项A)。因此,与对应的普通网络相比,它们没有额外的参数。

    We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.
    从表2和图4中,我们有三个主要的观察。首先,这种情况在残差学习下发生了逆转:34层ResNet优于18层ResNet(高出2.8%)。更重要的是,34层ResNet表现出明显更低的训练误差,并且可以推广到验证数据。这表明退化问题在这种设置下得到了很好的解决,我们成功地从增加的深度中获得了精度增益。

    Second, compared to its plain counterpart, the 34-layerResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.
    其次,与对应的普通网络相比,34层ResNet将top-1错误率降低了3.5%(表2),这得益于训练误差的成功降低(图4右 vs 左)。这一比较验证了残差学习在极深系统上的有效性。

    Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.
    最后,我们还注意到,18层普通网络和残差网络的准确率相当(表2),但18层ResNet收敛得更快(图4右 vs 左)。当网络“不太深”(这里是18层)时,当前的SGD求解器仍然能够为普通网络找到好的解。在这种情况下,ResNet通过在早期提供更快的收敛来简化优化。

    Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.
    恒等快捷连接 vs. 投影快捷连接。我们已经表明,无参数的恒等快捷连接有助于训练。接下来我们研究投影快捷连接(式(2))。在表3中,我们比较三种选项:(A)零填充快捷连接用于增加维数,且所有快捷连接都是无参数的(与表2和图4右相同);(B)投影快捷连接用于增加维数,其余快捷连接为恒等映射;(C)所有快捷连接都是投影。

    Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.
    表3显示,这三种选项都明显优于对应的普通网络。B略好于A,我们认为这是因为A中零填充的维度确实没有进行残差学习。C比B略好,我们将其归因于许多(13个)投影快捷连接引入的额外参数。但A/B/C之间的微小差异表明,投影快捷连接对于解决退化问题并不是必不可少的。因此,在本文其余部分我们不使用选项C,以降低内存/时间复杂度和模型规模。恒等快捷连接对于不增加下面介绍的瓶颈结构的复杂度尤为重要。

    Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design4. For each residual function F, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.
    更深的瓶颈架构。接下来,我们描述用于ImageNet的更深的网络。出于对可承受训练时间的考虑,我们将构建块修改为瓶颈设计。对于每个残差函数F,我们使用3层堆叠而不是2层(图5)。这三层分别是1×1、3×3和1×1卷积,其中1×1层负责先降维再升维(恢复维度),使3×3层成为一个输入/输出维数较小的瓶颈。图5给出了一个例子,两种设计具有相似的时间复杂度。
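
    下面给出该瓶颈块的一个示意实现,假设使用PyTorch;`BottleneckBlock`等名称仅为示意,并非论文官方代码:

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """瓶颈块草图:1×1降维 -> 3×3 -> 1×1升维(恢复),恒等快捷连接绕过这三层。"""
    def __init__(self, channels, bottleneck_channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, bottleneck_channels, 1, bias=False),       # 1×1 降维
            nn.BatchNorm2d(bottleneck_channels), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck_channels, bottleneck_channels, 3,
                      padding=1, bias=False),                              # 3×3 瓶颈层
            nn.BatchNorm2d(bottleneck_channels), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck_channels, channels, 1, bias=False),       # 1×1 恢复维数
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.f(x) + x)   # 无参数的恒等快捷连接连接两个高维端

# 用法示意:BottleneckBlock(256, 64) 对应论文图5右侧 256维输入、64维瓶颈的例子
```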

    The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.
    无参数的恒等快捷连接对瓶颈架构尤为重要。如果把图5(右)中的恒等快捷连接替换为投影,可以看出时间复杂度和模型大小都会加倍,因为快捷连接连接到两个高维端。因此,恒等快捷连接为瓶颈设计带来了更高效的模型。

    50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.
    50层ResNet:我们将34层网络中的每个2层块替换为这种3层瓶颈块,得到一个50层的ResNet(表1)。我们使用选项B来增加维数。该模型的计算量为38亿FLOPs。

    101-layer and 152-layer ResNets: We construct 101- layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).
    101层和152层ResNet:我们通过使用更多的3层瓶颈块来构造101层和152层ResNet(表1)。值得注意的是,虽然深度显著增加,但152层ResNet(113亿FLOPs)的复杂度仍然低于VGG-16/19网络(153/196亿FLOPs)。
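
    作为参考,各深度对应的每阶段瓶颈块数量取自论文表1(正文未转载该表),层数关系可按如下草图核对:

```python
# 各深度ResNet每个阶段使用的瓶颈块数量(来自论文表1,此处正文未转载该表)
stage_blocks = {
    50:  [3, 4, 6, 3],
    101: [3, 4, 23, 3],
    152: [3, 8, 36, 3],
}
for depth, blocks in stage_blocks.items():
    # 每个瓶颈块包含3个卷积层,再加上最前面的7×7卷积层和最后的全连接层
    n_layers = 3 * sum(blocks) + 2
    assert n_layers == depth, (depth, n_layers)
```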

    The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).
    50/101/152层ResNet比34层ResNet精确得多,优势相当明显(表3和表4)。我们没有观察到退化问题,因此从大大增加的深度中获得了显著的精度增益。所有评价指标都见证了深度带来的好处(表3和表4)。

    Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.
    与最先进方法的比较。在表4中,我们与以前最好的单模型结果进行了比较。我们的基线34层ResNet已经达到了非常有竞争力的精度。我们的152层ResNet的单模型top-5验证误差为4.49%,这个单模型结果优于所有以前的集成结果(表5)。我们将六个不同深度的模型组合成一个集成(提交时其中只有两个152层的模型),在测试集上取得了3.57%的top-5错误率(表5)。该参赛条目赢得了ILSVRC 2015的第一名。

4.2. CIFAR-10 and Analysis
    We conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.
4.2 CIFAR-10和分析
    我们在CIFAR-10数据集[20]上进行了更多研究,该数据集包含10个类别的5万张训练图像和1万张测试图像。我们展示在训练集上训练、在测试集上评估的实验。我们关注的是极深网络的行为,而不是追求最先进的结果,因此刻意使用如下的简单架构。

    The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture:
    普通/残差架构遵循图3(中/右)中的形式。网络的输入是32×32的图像,并减去每像素均值。第一层是3×3卷积。然后我们使用一个6n层的堆叠,分别在尺寸为{32, 16, 8}的特征图上做3×3卷积,每种特征图尺寸对应2n层,滤波器数量分别为{16, 32, 64}。下采样通过步长为2的卷积来执行。网络以一个全局平均池化层、一个10路全连接层和softmax结束。总共有6n+2个堆叠的加权层。下表总结了该架构:
    输出特征图尺寸:32×32 | 16×16 | 8×8
    层数:1+2n | 2n | 2n
    滤波器数量:16 | 32 | 64
    When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.
    当使用快捷连接时,它们连接到成对的3×3层上(总共3n个快捷连接)。在这个数据集上,我们在所有情况下都使用恒等快捷连接(即选项A),因此我们的残差模型与对应的普通模型具有完全相同的深度、宽度和参数量。
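
    根据上述描述,这一6n+2层架构可以用如下草图示意(假设使用PyTorch;快捷连接按选项A实现,补零位置等细节是实现上的假设,并非论文官方代码):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CifarResNet(nn.Module):
    """6n+2层CIFAR-10残差网络草图:特征图尺寸{32,16,8},滤波器数{16,32,64},
    每种尺寸2n层;快捷连接全部采用选项A(恒等映射+补零,无额外参数)。"""
    def __init__(self, n):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1, bias=False)   # 第一层:3×3卷积
        self.bn1 = nn.BatchNorm2d(16)
        blocks, in_ch = [], 16
        for out_ch in (16, 32, 64):
            for i in range(n):
                stride = 2 if (i == 0 and out_ch != 16) else 1    # 用步长为2的卷积做下采样
                blocks.append(self._two_layer_f(in_ch, out_ch, stride))
                in_ch = out_ch
        self.blocks = nn.ModuleList(blocks)                       # 共3n个块,即6n个加权层
        self.fc = nn.Linear(64, 10)                               # 10路全连接(softmax含在损失中)

    @staticmethod
    def _two_layer_f(in_ch, out_ch, stride):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        for f in self.blocks:
            out = f(x)
            if out.shape != x.shape:                  # 维数增加:选项A,下采样并在通道上补零
                x = x[:, :, ::2, ::2]
                x = F.pad(x, (0, 0, 0, 0, 0, out.size(1) - x.size(1)))
            x = F.relu(out + x)                       # 残差相加之后再做非线性
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)    # 全局平均池化
        return self.fc(x)

# 例如 n=3 得到20层网络(6n+2),n=9 得到56层,n=18 得到110层
print(CifarResNet(n=3)(torch.randn(2, 3, 32, 32)).shape)   # torch.Size([2, 10])
```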

    We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [12] and BN [16] but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image.
    我们使用0.0001的权重衰减和0.9的动量,并采用[12]中的权重初始化和BN[16],但不使用dropout。这些模型在两块GPU上以128的mini-batch大小进行训练。我们从0.1的学习率开始,在第32k和48k次迭代时除以10,并在64k次迭代时终止训练,该设置是在45k/5k的训练/验证划分上确定的。我们遵循[24]中的简单数据增强进行训练:每边填充4个像素,并从填充后的图像或其水平翻转中随机裁剪出32×32的图像块。测试时,我们只评估原始32×32图像的单一视图。
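
    该训练设置(数据增强与学习率安排)的一个草图如下,假设使用PyTorch/torchvision,并按迭代次数调用scheduler.step();`model`此处用占位模型代替:

```python
import torch
import torch.nn as nn
from torchvision import transforms

# 论文中CIFAR-10的简单数据增强:每边填充4个像素,随机裁剪32×32,并随机水平翻转
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

model = nn.Linear(8, 8)   # 占位模型,仅为示意;实际应为上面的CifarResNet(n)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
# 学习率在第32k和48k次迭代时除以10,共训练64k次迭代;scheduler.step()按迭代而非按epoch调用
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[32000, 48000], gamma=0.1)
```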

    We compare n = {3, 5, 7, 9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [41]), suggesting that such an optimization difficulty is a fundamental problem.
    我们比较了n = {3, 5, 7, 9},得到20、32、44和56层的网络。图6(左)显示了普通网络的表现。深的普通网络受深度增加的影响,越深,训练误差越大。这种现象与ImageNet上(图4,左)和MNIST上(见[41])的现象类似,表明这样的优化困难是一个根本性的问题。

    Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.
    图6(中)显示了ResNet的表现。与ImageNet的情况(图4,右)类似,我们的ResNet成功克服了优化困难,并在深度增加时展示出精度增益。

    We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging5. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet [34] and Highway [41] (Table 6),yet is among the state-of-the-art results (6.43%, Table 6).
    我们进一步探索n = 18,得到一个110层的ResNet。在这种情况下,我们发现0.1的初始学习率稍大,以致无法开始收敛。因此我们用0.01来预热训练,直到训练误差低于80%(约400次迭代),然后回到0.1继续训练。其余的学习率安排与之前相同。这个110层网络收敛良好(图6,中)。它的参数比FitNet[34]和Highway[41]等其他深而窄的网络更少(表6),却仍属于最先进的结果之一(6.43%,表6)。
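
    这一预热策略可以用如下小函数示意(假设使用PyTorch;0.01、0.1与80%的阈值均来自本段描述,`warm_up_lr`为假设的函数名):

```python
def warm_up_lr(optimizer, train_error, warm_lr=0.01, base_lr=0.1):
    """预热策略草图:训练误差高于80%时用0.01的学习率,低于80%后(约400次迭代)切回0.1。"""
    lr = warm_lr if train_error > 0.80 else base_lr
    for group in optimizer.param_groups:
        group['lr'] = lr
```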

    Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less.
    层响应分析。图7显示了层响应的标准差(std)。这些响应是每个3×3层在BN之后、其他非线性(ReLU/加法)之前的输出。对于ResNet,这一分析揭示了残差函数的响应强度。图7表明,ResNet的响应通常比对应的普通网络小。这些结果支持了我们的基本动机(3.1节),即残差函数通常可能比非残差函数更接近于零。我们还注意到,更深的ResNet具有更小的响应幅度,如图7中ResNet-20、56和110之间的比较所示。当层数更多时,ResNet中的单个层倾向于更少地修改信号。
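
    层响应标准差的统计可以用前向hook示意(假设使用PyTorch;这里在BN层输出处取标准差,对应“BN之后、非线性之前”的定义,示例输入为随机数据,仅演示统计方式):

```python
import torch
import torch.nn as nn

def layer_response_std(model, inputs):
    """层响应分析草图:记录每个BN层的输出(即卷积经BN之后、非线性之前)的标准差。"""
    stds, hooks = [], []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(
                lambda module, inp, out: stds.append(out.std().item())))
    with torch.no_grad():
        model(inputs)
    for h in hooks:
        h.remove()
    return stds

# 示意:对上面的CifarResNet在随机输入上统计(未经训练,仅演示统计方式)
# print(layer_response_std(CifarResNet(3), torch.randn(2, 3, 32, 32)))
```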

    Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n = 200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this 10^3-layer network is able to achieve training error <0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6).
    探索超过1000层的网络。我们探索了一个极为激进的超过1000层的深度模型。我们设置n = 200,得到一个1202层的网络,并按上面描述的方式训练。我们的方法没有显示出优化困难,这个10^3层的网络能够达到<0.1%的训练误差(图6,右),其测试误差也仍然相当好(7.93%,表6)。

    But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout [9] or dropout [13] is applied to obtain the best results ([9, 25, 24, 34]) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization.But combining with stronger regularization may improve results, which we will study in the future.
    但是在这种激进的深度模型上仍然存在一些悬而未决的问题。这个1202层网络的测试结果比我们110层网络的要差,尽管两者的训练误差相似。我们认为这是过拟合造成的:对于这个小数据集而言,1202层的网络可能大得没有必要(19.4M参数)。在该数据集上,为获得最佳结果([9,25,24,34])会应用maxout[9]或dropout[13]等强正则化。在本文中,我们不使用maxout/dropout,只是通过设计上深而窄的架构简单地施加正则化,以免偏离对优化困难这一重点的关注。但结合更强的正则化可能会改善结果,我们将在未来研究。

4.3. Object Detection on PASCAL and MS COCO
    Our method has good generalization performance on other recognition tasks. Table 7 and 8 show the object detection baseline results on PASCAL VOC 2007 and 2012 [5] and COCO [26]. We adopt Faster R-CNN [32] as the detection method. Here we are interested in the improvements of replacing VGG-16 [40] with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO’s standard metric (mAP@[.5,.95]), which is a 28% relative improvement. This gain is solely due to the learned representations.

4.3 在PASCAL和MS COCO上的目标检测
    我们的方法在其他识别任务上具有良好的泛化性能。表7和表8显示了在PASCAL VOC 2007和2012[5]以及COCO[26]上的目标检测基线结果。我们采用Faster R-CNN[32]作为检测方法。这里我们感兴趣的是用ResNet-101替换VGG-16[40]所带来的改进。使用这两种模型的检测实现(见附录)是相同的,因此增益只能归因于更好的网络。最值得注意的是,在具有挑战性的COCO数据集上,我们在COCO的标准指标(mAP@[.5, .95])上获得了6.0%的提升,这是28%的相对改进。这一增益完全归功于所学习到的表示。

    Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.
    基于深度残差网络,我们在ILSVRC和COCO 2015竞赛中赢得了多个赛道的第一名:ImageNet检测、ImageNet定位、COCO检测和COCO分割。详细内容见附录。
