Deep Residual Learning for Image Recognition [Paper Translation]

Deep Residual Learning for Image Recognition

Kaiming He & Xiangyu Zhang & Shaoqing Ren & Jian Sun

Abstract

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

1. Introduction

Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer "plain" networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet are presented in Fig. 4.

Deep convolutional neural networks have led to a series of breakthroughs for image classification. Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the "levels" of features can be enriched by the number of stacked layers (depth). Recent evidence reveals that network depth is of crucial importance, and the leading results on the challenging ImageNet dataset all exploit "very deep" models, with a depth of sixteen to thirty. Many other non-trivial visual recognition tasks have also greatly benefited from very deep models.

Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients, which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization and intermediate normalization layers, which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation.

When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in prior work and thoroughly verified by our experiments. Fig. 1 shows a typical example.

The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).

Figure 2. Residual learning: a building block.

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F(x) := H(x) − x. The original mapping is recast into F(x) + x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

The formulation of F(x)+x can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries without modifying the solvers.

We present comprehensive experiments on ImageNet to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart "plain" nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.

Similar phenomena are also shown on the CIFAR-10 set, suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.

On the ImageNet classification dataset, we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.

2. Related Work

Residual Representations. In image recognition, VLAD is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector can be formulated as a probabilistic version of VLAD. Both of them are powerful shallow representations for image retrieval and classification. For vector quantization, encoding residual vectors is shown to be more effective than encoding original vectors.

In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning, which relies on variables that represent residual vectors between two scales. It has been shown that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.

Shortcut Connections. Practices and theories that lead to shortcut connections have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output. In later work, a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. Other papers propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. Elsewhere, an "inception" layer is composed of a shortcut branch and a few deeper branches.

Concurrent with our work, "highway networks" present shortcut connections with gating functions. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is "closed" (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).

3. Deep Residual Learning

3.1. Residual Learning

Let us consider H(x) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., H(x) − x (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function F(x) := H(x) − x. The original function thus becomes F(x) + x. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.

This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.

In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.

3.2. Identity Mapping by Shortcuts

We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:

y = F(x, {W_i}) + x.    (1)

Here x and y are the input and output vectors of the layers considered. The function F(x, {W_i}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, F = W_2 σ(W_1 x), in which σ denotes ReLU and the biases are omitted for simplifying notations. The operation F + x is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ(y), see Fig. 2).

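To make the block concrete, here is a minimal PyTorch sketch of the two-layer building block of Fig. 2 (the class name is ours, and the batch normalization after each convolution follows the practice in Sec. 3.4; this is an illustrative sketch, not the authors' released code):

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 conv layers plus an identity shortcut: y = F(x, {W_i}) + x (Eqn. 1)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                               # parameter-free shortcut
        out = self.relu(self.bn1(self.conv1(x)))   # first weight layer + nonlinearity
        out = self.bn2(self.conv2(out))            # F(x, {W_i})
        out = out + identity                       # element-wise addition
        return self.relu(out)                      # second nonlinearity after the addition

# e.g., a block on 56x56 feature maps with 64 channels:
# y = BasicResidualBlock(64)(torch.randn(1, 64, 56, 56))
```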

The shortcut connections in Eqn. (1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).

The dimensions of x and F must be equal in Eqn. (1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection W_s by the shortcut connections to match the dimensions:

y = F(x, {W_i}) + W_s x.    (2)

We can also use a square matrix W_s in Eqn. (1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus W_s is only used when matching dimensions.

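A sketch of the projection variant (Eqn. (2)), with W_s realized as a 1×1 convolution as described in Sec. 3.3; placing BN on the shortcut path is a common implementation choice we assume here, not something stated in this paragraph:

```python
import torch.nn as nn

class ProjectionResidualBlock(nn.Module):
    """Residual block whose shortcut is a linear projection W_s (Eqn. 2), used when
    the input/output dimensions differ (e.g., halving the map size, doubling channels)."""
    def __init__(self, in_channels, out_channels, stride=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # W_s as a 1x1 convolution with the same stride as the main path
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))   # y = F(x, {W_i}) + W_s x
```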

The form of the residual function F is flexible. Experiments in this paper involve a function F that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn. (1) is similar to a linear layer: y = W_1 x + x, for which we have not observed advantages. We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function F(x, {W_i}) can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.

3.3. Network Architectures

We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.

Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets. The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).

It is worth noticing that our model has fewer filters and lower complexity than VGG nets (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).

Figure 3. Example network architectures for ImageNet.
Left: the VGG-19 model (19.6 billion FLOPs) as a reference.
Middle: a plain network with 34 parameter layers (3.6 billion FLOPs).
Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions.
Table 1 shows more details and other variants.

Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn. (1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn. (2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.

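For option A, the shortcut stays parameter-free: the identity is spatially subsampled with stride 2 and the missing channels are filled with zeros. One possible reading of this, as a sketch (the exact subsampling scheme below is our assumption):

```python
import torch.nn.functional as F

def option_a_shortcut(x, out_channels):
    """Zero-padding identity shortcut for increasing dimensions (option A)."""
    x = x[:, :, ::2, ::2]                    # stride-2 subsampling of the identity
    extra = out_channels - x.size(1)         # number of zero channels to append
    return F.pad(x, (0, 0, 0, 0, 0, extra))  # pad along the channel dimension
```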

3.4. Implementation

Our implementation for ImageNet follows the practice in [21, 40]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [40]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [12] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60×10^4 iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [13], following the practice in [16]. In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [40, 12], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).

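A rough PyTorch sketch of this recipe follows; the tiny stand-in model and the dummy batch are ours, and only the hyperparameters (lr 0.1 divided by 10 on plateaus, weight decay 1e-4, momentum 0.9, mini-batches of 256) come from this section:

```python
import torch
import torch.nn as nn

# stand-in model; in the paper this would be one of the plain or residual nets of Fig. 3
model = nn.Sequential(nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1000))

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
# "divided by 10 when the error plateaus" -- ReduceLROnPlateau is one way to express this
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1)

images = torch.randn(8, 3, 224, 224)       # dummy batch (the paper uses mini-batches of 256)
labels = torch.randint(0, 1000, (8,))
loss = nn.functional.cross_entropy(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
scheduler.step(loss.item())                # in practice, step on the validation error
```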

4. Experiments

4.1. ImageNet Classification

We evaluate our method on the ImageNet 2012 classification dataset [35] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.
Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.

The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem: the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.

Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Downsampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.
Figure 4. Training on ImageNet. Thin curves denote training error, and bold curves denote validation error of the center crops. Left: plain networks of 18 and 34 layers. Right: ResNets of 18 and 34 layers. In this plot, the residual networks have no extra parameter compared to their plain counterparts.
Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.

We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN [16], which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reduction of the training error. The reason for such optimization difficulties will be studied in the future.

Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3×3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4, right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts. We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning: the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.

Table 3. Error rates (%, 10-crop testing) on ImageNet validation. VGG-16 is based on our test. ResNet-50/101/152 are of option B that only uses projections for increasing dimensions.
Table 4. Error rates (%) of single-model results on the ImageNet validation set (except reported on the test set).
Table 5. Error rates (%) of ensembles. The top-5 error is on the test set of ImageNet and reported by the test server.

Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4, right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.
Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4, right vs. left). When the net is "not overly deep" (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.

Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn. (2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4, right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.
Figure 5. A deeper residual function F for ImageNet. Left: a building block (on 56×56 feature maps) as in Fig. 3 for ResNet-34. Right: a "bottleneck" building block for ResNet-50/101/152.
Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.

Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design. For each residual function F, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.

The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.

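A sketch of the bottleneck block of Fig. 5 (right), again with BN after each convolution as in Sec. 3.4 (illustrative only, not the authors' code):

```python
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """1x1 -> 3x3 -> 1x1 bottleneck (Fig. 5, right); the 1x1 layers reduce and then
    restore dimensions, and the identity shortcut connects the high-dimensional ends."""
    def __init__(self, channels, bottleneck_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, bottleneck_channels, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(bottleneck_channels)
        self.conv2 = nn.Conv2d(bottleneck_channels, bottleneck_channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(bottleneck_channels)
        self.conv3 = nn.Conv2d(bottleneck_channels, channels, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))   # 1x1: reduce dimensions
        out = self.relu(self.bn2(self.conv2(out))) # 3x3: the bottleneck
        out = self.bn3(self.conv3(out))            # 1x1: restore dimensions
        return self.relu(out + x)                  # parameter-free identity shortcut

# e.g., the Fig. 5 (right) block goes 256 -> 64 -> 64 -> 256 channels:
# block = BottleneckBlock(256, 64)
```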

50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.

101-layer and 152-layer ResNets: We construct 101-layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).

The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).
Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5). This entry won the 1st place in ILSVRC 2015.

4.2. CIFAR-10 and Analysis

We conducted more studies on the CIFAR-10 dataset [20], which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.

The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture:

output map size | 32×32 | 16×16 | 8×8
# layers        | 1+2n  | 2n    | 2n
# filters       | 16    | 32    | 64

When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.
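
Putting the pieces together, a sketch of the 6n+2-layer CIFAR-10 network, reusing the block sketches from Sec. 3.2 above. Note that at stage transitions this sketch uses the projection block (option B) for brevity, whereas the paper uses the parameter-free option A here, so the parameter counts differ slightly:

```python
import torch.nn as nn

def cifar_resnet(n):
    """6n+2-layer sketch: 3x3 conv stem, then 2n 3x3 layers per feature-map size
    {32, 16, 8} with {16, 32, 64} filters, global average pooling, and a 10-way FC."""
    stages, in_ch = [], 16
    for out_ch, stride in [(16, 1), (32, 2), (64, 2)]:
        blocks = ([BasicResidualBlock(out_ch)] if stride == 1
                  else [ProjectionResidualBlock(in_ch, out_ch, stride=stride)])
        blocks += [BasicResidualBlock(out_ch) for _ in range(n - 1)]
        stages.append(nn.Sequential(*blocks))
        in_ch = out_ch
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1, bias=False), nn.BatchNorm2d(16), nn.ReLU(inplace=True),
        *stages,
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
    )

# n = 3, 5, 7, 9 give the 20/32/44/56-layer networks; n = 18 gives ResNet-110
```
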
Table 6. Classification error on the CIFAR-10 test set. All methods are with data augmentation. For ResNet-110, we run it 5 times and show "best (mean±std)" as in [42].

We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [12] and BN [16] but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image.

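A sketch of this augmentation with torchvision transforms; the per-channel CIFAR-10 means below are the commonly used values and stand in for the per-pixel mean subtraction described above (an approximation, not the paper's exact preprocessing):

```python
import torchvision.transforms as T

cifar_mean = [0.4914, 0.4822, 0.4465]   # common per-channel CIFAR-10 means (approximation)

train_transform = T.Compose([
    T.RandomCrop(32, padding=4),         # pad 4 pixels on each side, sample a 32x32 crop
    T.RandomHorizontalFlip(),            # or its horizontal flip
    T.ToTensor(),
    T.Normalize(mean=cifar_mean, std=[1.0, 1.0, 1.0]),   # mean subtraction only
])

test_transform = T.Compose([             # single view of the original 32x32 image
    T.ToTensor(),
    T.Normalize(mean=cifar_mean, std=[1.0, 1.0, 1.0]),
])
```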

We compare n = {3, 5, 7, 9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [41]), suggesting that such an optimization difficulty is a fundamental problem.

Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.

We further explore n = 18, which leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet [34] and Highway [41] (Table 6), yet is among the state-of-the-art results (6.43%, Table 6).

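The warm-up schedule for the 110-layer net can be sketched as a small helper (the function name and structure are ours):

```python
def resnet110_lr(iteration, train_error):
    """0.01 until the training error drops below 80% (about 400 iterations), then the
    usual schedule: 0.1, divided by 10 at 32k and 48k, terminating at 64k iterations."""
    if train_error > 0.80:
        return 0.01
    if iteration < 32000:
        return 0.1
    if iteration < 48000:
        return 0.01
    return 0.001

# e.g., for g in optimizer.param_groups: g['lr'] = resnet110_lr(step, current_train_error)
```
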
Figure 6. Training on CIFAR-10. Dashed lines denote training error, and bold lines denote testing error. Left: plain networks. The error of plain-110 is higher than 60% and not displayed. Middle: ResNets. Right: ResNets with 110 and 1202 layers.
Figure 7. Standard deviations (std) of layer responses on CIFAR10. The responses are the outputs of each 3×3 layer, after BN and before nonlinearity. Top: the layers are shown in their original order. Bottom: the responses are ranked in descending order.

Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec. 3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less.

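This analysis can be reproduced approximately with forward hooks, recording the standard deviation of each BN output (i.e., each 3×3 layer's response after BN and before the nonlinearity); a sketch, assuming a PyTorch model whose BN layers sit right after the 3×3 convolutions:

```python
import torch
import torch.nn as nn

def layer_response_stds(model, images):
    """Std of each BatchNorm2d output for one batch, mirroring the Fig. 7 analysis."""
    stds, handles = [], []
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            handles.append(module.register_forward_hook(
                lambda m, inp, out: stds.append(out.detach().std().item())))
    model.eval()
    with torch.no_grad():
        model(images)
    for h in handles:
        h.remove()
    return stds
```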

Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n = 200, which leads to a 1202-layer network that is trained as described above. Our method shows no optimization difficulty, and this 10^3-layer network is able to achieve training error <0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6).

But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout [9] or dropout [13] is applied to obtain the best results ([9, 25, 24, 34]) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future.

Table 7. Object detection mAP (%) on the PASCAL VOC 2007/2012 test sets using baseline Faster R-CNN. See also appendix for better results.
Table 8. Object detection mAP (%) on the COCO validation set using baseline Faster R-CNN.See also appendix for better results.

4.3. Object Detection on PASCAL and MS COCO

Our method has good generalization performance on other recognition tasks. Tables 7 and 8 show the object detection baseline results on PASCAL VOC 2007 and 2012 [5] and COCO [26]. We adopt Faster R-CNN [32] as the detection method. Here we are interested in the improvements of replacing VGG-16 [40] with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO's standard metric (mAP@[.5, .95]), which is a 28% relative improvement. This gain is solely due to the learned representations.

Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.
