【论文翻译】Deep Residual Learning for Image Recognition

论文题目:Deep Residual Learning for Image Recognition
论文来源:Deep Residual Learning for Image Recognition
翻译人:BDML@CQUT实验室

Deep Residual Learning for Image Recognition

深度残差学习用于图像识别

Kaiming He Xiangyu Zhang Shaoqing Ren Jian Sun
Microsoft Research
{kahe, v-xiangz, v-shren, jiansun}@microsoft.com

Abstract

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

摘要

更深的神经网络更难训练。我们提出了一种残差学习框架,用于简化比以往所用网络深得多的网络的训练。我们明确地将这些层重新表述为参照层输入来学习残差函数,而不是学习无参照的函数。我们提供了全面的经验证据,表明这些残差网络更容易优化,并且能够从显著增加的深度中获得精度提升。在ImageNet数据集上,我们评估了深度高达152层的残差网络,比VGG网络[40]深8倍,但复杂度仍然更低。这些残差网络的集成在ImageNet测试集上取得了3.57%的错误率。该结果在ILSVRC 2015分类任务中获得第一名。我们还在CIFAR-10上给出了100层和1000层网络的分析。

表征的深度对许多视觉识别任务至关重要。仅凭借极深的表征,我们就在COCO目标检测数据集上获得了28%的相对提升。深度残差网络是我们提交ILSVRC和COCO 2015竞赛的基础,我们还在ImageNet检测、ImageNet定位、COCO检测和COCO分割任务上获得了第一名。

正文

1. Introduction

Deep convolutional neural networks have led to a series of breakthroughs for image classification. Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multilayer fashion, and the "levels" of features can be enriched by the number of stacked layers (depth). Recent evidence reveals that network depth is of crucial importance, and the leading results on the challenging ImageNet dataset all exploit "very deep" models, with a depth of sixteen to thirty. Many other nontrivial visual recognition tasks have also greatly benefited from very deep models.

1. 介绍

深度卷积神经网络为图像分类带来了一系列突破。深度网络以端到端的多层方式自然地整合了低/中/高级特征和分类器,并且特征的"层次"可以通过堆叠层数(深度)来丰富。最近的证据表明网络深度至关重要,在具有挑战性的ImageNet数据集上领先的结果都采用了"非常深"的模型,深度为16到30层。许多其他有难度的视觉识别任务也从非常深的模型中受益匪浅。

Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer "plain" networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet is presented in Fig. 4.

图1. 使用20层和56层“普通”网络的CIFAR-10上的训练错误(左)和测试错误(右)。网络越深,训练误差越大,测试误差越大。图4显示了ImageNet上的类似现象。

Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients, which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization and intermediate normalization layers, which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation.

在深度的重要性的驱使下,一个问题出现了:学习更好的网络是否就像堆叠更多层一样容易?回答这个问题的一个障碍是众所周知的梯度消失/爆炸问题,它从一开始就阻碍收敛。然而,这一问题已在很大程度上通过归一化的初始化和中间的归一化层得到解决,使得数十层的网络能够在带反向传播的随机梯度下降(SGD)下开始收敛。

When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in and thoroughly verified by our experiments. Fig. 1 shows a typical example.

当更深的网络能够开始收敛时,一个退化问题就暴露出来了:随着网络深度的增加,精度达到饱和(这也许并不奇怪),然后迅速退化。出乎意料的是,这种退化并不是由过拟合引起的,向适当深度的模型中添加更多层反而会导致更高的训练误差,这一点在先前工作中已有报告,并被我们的实验充分验证。图1展示了一个典型的例子。

The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).

(训练精度的)退化表明并非所有系统都同样容易优化。让我们考虑一个较浅的结构,以及在其上添加更多层得到的较深的对应结构。对较深的模型而言,存在一个构造解:添加的层是恒等映射,其余的层从已学习的较浅模型中复制而来。这个构造解的存在表明,较深的模型不应产生比其较浅对应模型更高的训练误差。但实验表明,我们现有的求解器无法找到与该构造解相当或更好的解(或无法在可行的时间内找到)。

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}(x) := \mathcal{H}(x) - x$. The original mapping is recast into $\mathcal{F}(x) + x$. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

在本文中,我们通过引入深度残差学习框架来解决退化问题。我们不是寄希望于每几个堆叠的层直接拟合期望的底层映射,而是显式地让这些层拟合残差映射。形式上,将期望的底层映射表示为 $\mathcal{H}(x)$,我们让堆叠的非线性层拟合另一个映射 $\mathcal{F}(x) := \mathcal{H}(x) - x$。原始映射则被改写为 $\mathcal{F}(x) + x$。我们假设优化残差映射比优化原始的无参照映射更容易。在极端情况下,如果恒等映射是最优的,那么将残差推向零要比用一组非线性层拟合恒等映射更容易。

The formulation of $\mathcal{F}(x) + x$ can be realized by feedforward neural networks with "shortcut connections" (Fig. 2). Shortcut connections are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.

公式 $\mathcal{F}(x) + x$ 可以通过带有"快捷连接"的前馈神经网络来实现(图2)。快捷连接是指跳过一个或多个层的连接。在我们的例子中,快捷连接只执行恒等映射,其输出被加到堆叠层的输出上(图2)。恒等快捷连接既不增加额外的参数,也不增加计算复杂度。整个网络仍然可以通过带反向传播的SGD进行端到端训练,并且可以使用常用库(例如Caffe[19])轻松实现,而无需修改求解器。

图2. 残差学习:一个构建块。

We present comprehensive experiments on ImageNet to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.

我们在ImageNet上进行了综合实验以说明退化问题并对我们的方法进行了评价。我们的研究表明:1)我们的极深残差网络易于优化,但当深度增加时,对应的“普通”网(即简单的叠加层)表现出更高的训练误差;2)我们的深残差网络可以很容易地获得深度大幅增加带来的精度提高,产生的结果比以前的网络好得多。

Similar phenomena are also shown on the CIFAR-10 set, suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.

在CIFAR-10数据集上也出现了类似的现象,这表明优化困难以及我们方法的效果并不只局限于某个特定的数据集。我们在该数据集上展示了超过100层的成功训练的模型,并探索了超过1000层的模型。

On the ImageNet classification dataset, we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.

在ImageNet分类数据集上,我们利用极深的残差网络得到了很好的结果。我们的152层残差网络是ImageNet上有史以来最深的网络,但其复杂度仍低于VGG网络。我们的集成模型在ImageNet测试集上的top-5错误率为3.57%,并在ILSVRC 2015分类竞赛中获得第一名。极深的表征在其他识别任务上也具有出色的泛化性能,使我们在ILSVRC & COCO 2015竞赛中进一步获得ImageNet检测、ImageNet定位、COCO检测和COCO分割的第一名。这一有力的证据表明残差学习原理是通用的,我们期望它同样适用于其他视觉和非视觉问题。

2. Related Work

Residual Representations. In image recognition, VLAD is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector can be formulated as a probabilistic version of VLAD. Both of them are powerful shallow representations for image retrieval and classification. For vector quantization, encoding residual vectors is shown to be more effective than encoding original vectors.

2.相关研究

残差表示

在图像识别中,VLAD是一种用相对于字典的残差向量进行编码的表示,Fisher向量可以看作VLAD的概率版本。它们都是用于图像检索和分类的强有力的浅层表示。对于矢量量化,编码残差向量被证明比编码原始向量更有效。

In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning, which relies on variables that represent residual vectors between two scales. It has been shown that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.

在低层视觉和计算机图形学中,为了求解偏微分方程(PDEs),广泛使用的多重网格法将系统重构为多个尺度上的子问题,每个子问题负责较粗尺度与较细尺度之间的残差解。多重网格的另一种选择是分层基预处理,它依赖于表示两个尺度之间残差向量的变量。已有结果表明,这些求解器比不了解解的残差性质的标准求解器收敛得快得多。这些方法表明,好的重构或预处理可以简化优化。

Shortcut Connections. Practices and theories that lead to shortcut connections have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output. In [44, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an "inception" layer is composed of a shortcut branch and a few deeper branches.

快捷连接

导致快捷连接的实践和理论研究由来已久。训练多层感知机(MLPs)的一个早期实践是添加一个从网络输入直接连到输出的线性层。在[44, 24]中,一些中间层被直接连接到辅助分类器,用于处理梯度消失/爆炸问题。有一些论文提出了对层响应、梯度和传播误差进行中心化的方法,并用快捷连接来实现。在[44]中,一个"inception"层由一个快捷分支和几个更深的分支组成。

Concurrent with our work, "highway networks" present shortcut connections with gating functions. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is "closed" (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).

与我们的工作同时,"高速公路网络"(highway networks)提出了带门控函数的快捷连接。与我们不带参数的恒等捷径相反,这些门依赖于数据并且带有参数。当门控捷径"关闭"(趋近于零)时,高速公路网络中的层表示的是非残差函数。相反,我们的公式总是学习残差函数;我们的恒等捷径永远不会关闭,所有信息始终被传递,同时还要学习附加的残差函数。此外,高速公路网络尚未展示出在深度极大增加(例如超过100层)时的精度提升。

3. Deep Residual Learning
3.1. Residual Learning
Let us consider $\mathcal{H}(x)$ as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., $\mathcal{H}(x) - x$ (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate $\mathcal{H}(x)$, we explicitly let these layers approximate a residual function $\mathcal{F}(x) := \mathcal{H}(x) - x$. The original function thus becomes $\mathcal{F}(x) + x$. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.

3.深度残差学习

3.1.残差学习

让我们将 $\mathcal{H}(x)$ 视为由若干堆叠的层(不一定是整个网络)拟合的底层映射,其中 $x$ 表示这些层中第一层的输入。如果假设多个非线性层可以渐近地逼近复杂函数,那就等价于假设它们可以渐近地逼近残差函数,即 $\mathcal{H}(x) - x$(假设输入和输出的维数相同)。因此,我们不是让堆叠的层逼近 $\mathcal{H}(x)$,而是显式地让这些层逼近残差函数 $\mathcal{F}(x) := \mathcal{H}(x) - x$。原始函数因此变为 $\mathcal{F}(x) + x$。尽管两种形式都应能够渐近地逼近期望的函数(如假设的那样),但学习的难易程度可能有所不同。

This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.

关于退化问题的反直觉现象(图1,左)促使了这种重新表述。正如我们在引言中讨论的那样,如果添加的层可以被构造为恒等映射,那么较深的模型的训练误差应不大于其较浅的对应模型。退化问题表明,求解器可能难以用多个非线性层来逼近恒等映射。借助残差学习的重新表述,如果恒等映射是最优的,求解器可以简单地把多个非线性层的权重驱向零来逼近恒等映射。

In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.

在实际情况中,恒等映射不太可能是最优的,但我们的重新表述可能有助于对问题进行预处理。如果最优函数比零映射更接近恒等映射,那么对求解器而言,参照恒等映射去寻找扰动应比把该函数当作全新的函数来学习更容易。我们通过实验(图7)表明,学到的残差函数通常具有较小的响应,这说明恒等映射提供了合理的预处理。

3.2. Identity Mapping by Shortcuts
We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as: $y = \mathcal{F}(x, \{W_i\}) + x$. (1) Here x and y are the input and output vectors of the layers considered. The function $\mathcal{F}(x, \{W_i\})$ represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, $\mathcal{F} = W_2\,\sigma(W_1 x)$ in which σ denotes ReLU and the biases are omitted for simplifying notations. The operation $\mathcal{F} + x$ is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ(y), see Fig. 2).

3.2 快捷连接的恒等映射

我们对每几个堆叠的层采用残差学习。一个构建块如图2所示。形式上,在本文中我们将构建块定义为 $y = \mathcal{F}(x, \{W_i\}) + x$。(1) 这里的 $x$ 和 $y$ 是所考虑层的输入和输出向量。函数 $\mathcal{F}(x, \{W_i\})$ 表示要学习的残差映射。对于图2中具有两层的示例,$\mathcal{F} = W_2\,\sigma(W_1 x)$,其中σ表示ReLU,为简化符号省略了偏置项。$\mathcal{F} + x$ 操作通过快捷连接和逐元素(element-wise)加法来执行。在加法之后我们采用第二个非线性(即σ(y),见图2)。
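
A minimal sketch of Eqn.(1) in PyTorch (an illustration, not the authors' original Caffe implementation): two weight layers form $\mathcal{F}$, the identity shortcut adds $x$, and the second nonlinearity comes after the addition as in Fig. 2. The layer width (64) and the use of `nn.Linear` follow the fully-connected notation and are my own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlockFC(nn.Module):
    """y = sigma(F(x, {W1, W2}) + x) with F = W2 * sigma(W1 * x), cf. Eqn.(1) and Fig. 2."""
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)   # biases omitted, matching the paper's notation
        self.w2 = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        residual = self.w2(F.relu(self.w1(x)))      # F(x, {W_i}): the residual mapping
        return F.relu(residual + x)                 # identity shortcut, then the second ReLU

x = torch.randn(4, 64)                              # a batch of 4 illustrative 64-d inputs
y = ResidualBlockFC(64)(x)                          # y has the same shape as x
```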

The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).

公式(1)中的快捷连接既没有引入额外的参数,也没有增加计算复杂度。这不仅在实践中具有吸引力,而且对我们比较普通网络和残差网络也很重要。我们可以公平地比较同时具有相同参数数量、深度、宽度和计算成本的普通/残差网络(除了可忽略的逐元素加法)。

The dimensions of x and $\mathcal{F}$ must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection $W_s$ by the shortcut connections to match the dimensions: $y = \mathcal{F}(x, \{W_i\}) + W_s x$. (2) We can also use a square matrix $W_s$ in Eqn.(1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus $W_s$ is only used when matching dimensions.

在公式(1)中,$x$ 和 $\mathcal{F}$ 的维数必须相等。如果不是这种情况(例如,当更改输入/输出通道数时),我们可以通过快捷连接执行一个线性投影 $W_s$ 来匹配维度:$y = \mathcal{F}(x, \{W_i\}) + W_s x$。(2) 我们也可以在公式(1)中使用方阵 $W_s$。但我们将通过实验表明,恒等映射足以解决退化问题且更经济,因此 $W_s$ 仅在匹配维度时使用。
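
For the convolutional case, here is a hedged sketch of a two-layer block that falls back to the projection shortcut $W_s$ of Eqn.(2), realized as a 1×1 convolution (with stride 2) when the channel count or spatial size changes. The BatchNorm placement follows Sec. 3.4 and common re-implementations, and is an assumption rather than something stated in this paragraph.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Two 3x3 conv layers; identity shortcut (Eqn.(1)) or 1x1 projection W_s (Eqn.(2)) when dims differ."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.proj = None                                        # W_s is only used when matching dimensions
        if stride != 1 or in_ch != out_ch:
            self.proj = nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                                      nn.BatchNorm2d(out_ch))

    def forward(self, x):
        shortcut = x if self.proj is None else self.proj(x)
        out = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        return F.relu(out + shortcut)                           # element-wise addition, then ReLU

y = BasicBlock(64, 128, stride=2)(torch.randn(2, 64, 56, 56))   # output shape: (2, 128, 28, 28)
```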

The form of the residual function $\mathcal{F}$ is flexible. Experiments in this paper involve a function $\mathcal{F}$ that has two or three layers (Fig. 5), while more layers are possible. But if $\mathcal{F}$ has only a single layer, Eqn.(1) is similar to a linear layer: $y = W_1 x + x$, for which we have not observed advantages.

残差函数 $\mathcal{F}$ 的形式是灵活的。本文中的实验涉及具有两层或三层的函数 $\mathcal{F}$(图5),更多的层也是可能的。但如果 $\mathcal{F}$ 只有一层,公式(1)就类似于一个线性层:$y = W_1 x + x$,对此我们没有观察到优势。

We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function $\mathcal{F}(x, \{W_i\})$ can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.

我们还注意到,尽管为简化起见上述符号是针对全连接层的,但它们同样适用于卷积层。函数 $\mathcal{F}(x, \{W_i\})$ 可以表示多个卷积层。逐元素加法在两个特征图上逐通道执行。

3.3. Network Architectures

We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.

3.3 网络结构

我们测试了各种普通/残差网络,并观察到了一致的现象。为了提供讨论的实例,我们描述了ImageNet的两个模型。

Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets(Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).

普通网络。我们的普通基准网络(图3,中)主要受VGG网络的哲学启发(图3,左)。卷积层大多使用3×3的滤波器,并遵循两条简单的设计规则:(i)对于相同的输出特征图尺寸,各层具有相同数量的滤波器;(ii)如果特征图尺寸减半,则滤波器数量加倍,以保持每层的时间复杂度。我们直接用步长为2的卷积层进行下采样。网络以一个全局平均池化层和一个带softmax的1000路全连接层结束。图3(中)中加权层的总数为34。

It is worth noticing that our model has fewer filters and lower complexity than VGG nets (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).

值得注意的是,与VGG网络(图3,左)相比,我们的模型具有更少的滤波器和更低的复杂度。我们的34层基准网络有36亿次FLOPs(乘加运算),仅为VGG-19(196亿次FLOPs)的18%。

Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model [40] (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.

图3. ImageNet的网络结构示例。左:VGG-19模型[40](196亿次FLOPs)作为参考。中:一个具有34个参数层(36亿次FLOPs)的普通网络。右:一个具有34个参数层(36亿次FLOPs)的残差网络。虚线捷径表示增加维度。表1显示了更多细节和其他变体。

Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.

残差网络。基于上述普通网络,我们插入快捷连接(图3,右),将网络转换为对应的残差版本。当输入和输出具有相同的维度时(图3中的实线捷径),可以直接使用恒等捷径(公式(1))。当维度增加时(图3中的虚线捷径),我们考虑两种策略:(A)捷径仍然执行恒等映射,并为增加的维度填充额外的零项,此策略不引入额外参数;(B)用公式(2)中的投影捷径来匹配维度(通过1×1卷积完成)。对于这两种策略,当捷径跨越两种尺寸的特征图时,以步长2执行。
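
Option (A) can be sketched as a parameter-free shortcut that subsamples spatially with stride 2 and pads the new channels with zeros; how exactly the subsampling is done here (taking every other pixel) is my own assumption for illustration, not a detail given in the text.

```python
import torch
import torch.nn.functional as F

def zero_pad_shortcut(x, out_channels):
    """Option A: parameter-free shortcut across a stride-2 stage; extra channels are zero-padded."""
    x = x[:, :, ::2, ::2]                        # spatial subsampling with stride 2
    extra = out_channels - x.size(1)             # number of channels to add
    return F.pad(x, (0, 0, 0, 0, 0, extra))      # pad zeros along the channel dimension

# e.g. zero_pad_shortcut(torch.randn(2, 16, 32, 32), 32) has shape (2, 32, 16, 16)
```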

3.4. Implementation

Our implementation for ImageNet follows the practice in [21, 40]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [40]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [12] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to $60 \times 10^4$ iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [13], following the practice in [16].

3.4. 实现

我们在ImageNet上的实现遵循[21, 40]中的做法。图像按短边在[256, 480]中随机采样的尺寸进行缩放,以进行尺度增强[40]。从图像或其水平翻转中随机裁剪224×224的区域,并减去每像素均值[21]。我们使用[21]中的标准颜色增强。按照[16],我们在每个卷积之后、激活之前采用批量归一化(BN)[16]。我们按照[12]中的方法初始化权重,并从头开始训练所有普通/残差网络。我们使用SGD,mini-batch大小为256。学习率从0.1开始,当误差趋于平稳时除以10,模型最多训练$60 \times 10^4$次迭代。我们使用0.0001的权重衰减和0.9的动量。按照[16]中的做法,我们不使用dropout[13]。
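
The hyper-parameters above can be written down, for instance, as the following PyTorch sketch; the tiny stand-in model and the use of `ReduceLROnPlateau` (to mimic "divide by 10 when the error plateaus") are my own assumptions, not the authors' original Caffe setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(3, 64, 7, stride=2), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(64, 1000))        # stand-in for a plain/residual net

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,         # lr starts from 0.1
                            momentum=0.9, weight_decay=1e-4)    # momentum 0.9, weight decay 1e-4
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1)  # /10 when error plateaus

images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 1000, (8,))     # dummy mini-batch
loss = F.cross_entropy(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
# after each validation pass: scheduler.step(validation_error)
```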

In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [40, 12], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).

在测试中,为了进行比较研究,我们采用标准的10-crop测试[21]。为了获得最佳结果,我们采用[40, 12]中的全卷积形式,并在多个尺度上对分数取平均(调整图像大小,使较短边取{224, 256, 384, 480, 640})。
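
A possible 10-crop evaluation sketch with torchvision is shown below; the stand-in model and image are placeholders, and the paper's fully-convolutional multi-scale testing is not reproduced here.

```python
import torch
from torchvision import transforms
from PIL import Image

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 1000))  # stand-in net
image = Image.new("RGB", (480, 360))                        # stand-in for a validation image

ten_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),                                # 4 corners + center, and their flips
    transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(c) for c in crops])),
])

with torch.no_grad():
    scores = model(ten_crop(image)).softmax(dim=1).mean(dim=0)   # average the 10 crop predictions
```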

4. Experiments

4.1. ImageNet Classification
We evaluate our method on the ImageNet 2012 classification dataset [35] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.

4. 实验

4.1 ImageNet 分类

我们在包含1000个类别的ImageNet 2012分类数据集[35]上评估我们的方法。各模型在128万张训练图像上训练,并在5万张验证图像上评估。我们还获得了由测试服务器报告的在10万张测试图像上的最终结果。我们评估top-1和top-5错误率。

Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.

普通网络。我们首先评估18层和34层普通网络。图3(中)为34层普通网络。18层的普通网络结构相似。具体结构见表1。

The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem - the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.

表2中的结果表明,较深的34层普通网络比较浅的18层普通网络具有更高的验证误差。为了揭示原因,在图4(左)中,我们比较了它们在训练过程中的训练/验证误差。我们观察到了退化问题:尽管18层普通网络的解空间是34层普通网络解空间的子空间,但在整个训练过程中,34层普通网络的训练误差更高。

Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Downsampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.

表1. ImageNet的网络结构。构建块显示在括号中(另见图5),并标出了堆叠的块数。下采样由步长为2的conv3_1、conv4_1和conv5_1执行。

Figure 4. Training on ImageNet. Thin curves denote training error, and bold curves denote validation error of the center crops. Left: plain networks of 18 and 34 layers. Right: ResNets of 18 and 34 layers. In this plot, the residual networks have no extra parameter compared to their plain counterparts.

图4. 在ImageNet上的训练。细曲线表示训练误差,粗曲线表示中心裁剪的验证误差。左:18层和34层的普通网络。右:18层和34层的残差网络。在该图中,残差网络与其对应的普通网络相比没有额外的参数。


Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.

表2. ImageNet验证集上的top-1错误率(%,10-crop测试)。在这里,残差网络与其对应的普通网络相比没有额外参数。图4展示了训练过程。

We argue that this optimization difficulty is unlikely to be caused by vanishing gradients. These plain networks are trained with BN, which ensures forward propagated signals to have non-zero variances. We also verify that the backward propagated gradients exhibit healthy norms with BN. So neither forward nor backward signals vanish. In fact, the 34-layer plain net is still able to achieve competitive accuracy (Table 3), suggesting that the solver works to some extent. We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error. The reason for such optimization difficulties will be studied in the future.

我们认为这种优化困难不太可能是由梯度消失引起的。这些普通网络使用BN训练,确保前向传播的信号具有非零方差。我们还验证了在使用BN的情况下,反向传播的梯度表现出健康的范数。因此,前向和反向信号都没有消失。事实上,34层普通网络仍然能够达到有竞争力的精度(表3),这表明求解器在一定程度上是有效的。我们推测深层普通网络可能具有指数级低的收敛速度,从而影响了训练误差的降低。这种优化困难的原因将在未来研究。

Residual Networks. Next we evaluate 18-layer and 34-layer residual nets (ResNets). The baseline architectures are the same as the above plain nets, except that a shortcut connection is added to each pair of 3×3 filters as in Fig. 3 (right). In the first comparison (Table 2 and Fig. 4 right), we use identity mapping for all shortcuts and zero-padding for increasing dimensions (option A). So they have no extra parameter compared to the plain counterparts.

残差网络。接下来我们评估18层和34层残差网络(ResNets)。基线架构与上述普通网络相同,只是在每对3×3滤波器上增加了一个快捷连接,如图3(右图)所示。在第一个比较中(表2和图4右图),我们对所有快捷连接使用恒等映射,对增加维度使用零填充(选项A)。因此,与普通网络相比,它们没有额外的参数。

We have three major observations from Table 2 and Fig. 4. First, the situation is reversed with residual learning – the 34-layer ResNet is better than the 18-layer ResNet (by 2.8%). More importantly, the 34-layer ResNet exhibits considerably lower training error and is generalizable to the validation data. This indicates that the degradation problem is well addressed in this setting and we manage to obtain accuracy gains from increased depth.

从表2和图4中我们有三个主要的观察结果。首先,通过残差学习扭转了这种情况——34层残差网络比18层残差网络好(降低了2.8%)。更重要的是,34层ResNet显示出相当低的训练误差,并且可以推广到验证数据中。这表明,在这种情况下,退化问题得到了很好的解决,并且我们设法从增加深度中获得更高的精度。

Second, compared to its plain counterpart, the 34-layer ResNet reduces the top-1 error by 3.5% (Table 2), resulting from the successfully reduced training error (Fig. 4 right vs. left). This comparison verifies the effectiveness of residual learning on extremely deep systems.

第二,与对应的普通网络相比,34层残差网络将top-1错误率降低了3.5%(表2),这得益于训练误差的成功降低(图4右与左)。这一比较验证了残差学习在极深系统上的有效性。

Table 3. Error rates (%, 10-crop testing) on ImageNet validation. VGG-16 is based on our test. ResNet-50/101/152 are of option B that only uses projections for increasing dimensions.

表3. ImageNet验证集上的错误率(%,10-crop测试)。VGG-16基于我们的测试。ResNet-50/101/152使用选项B,即仅使用投影来增加维度。

Table 4. Error rates (%) of single-model results on the ImageNet validation set (except † reported on the test set).

表4. ImageNet验证集上单模型结果的错误率(%)(标†的结果在测试集上报告)。

Table 5. Error rates (%) of ensembles. The top-5 error is on the test set of ImageNet and reported by the test server.

表5. 集成模型的错误率(%)。top-5错误率在ImageNet测试集上,由测试服务器报告。

Last, we also note that the 18-layer plain/residual nets are comparably accurate (Table 2), but the 18-layer ResNet converges faster (Fig. 4 right vs. left). When the net is “not overly deep” (18 layers here), the current SGD solver is still able to find good solutions to the plain net. In this case, the ResNet eases the optimization by providing faster convergence at the early stage.

最后,我们还注意到18层普通/残差网络的精度相当(表2),但18层残差网络收敛更快(图4右与左)。当网络"不太深"时(这里是18层),当前的SGD求解器仍然能够为普通网络找到良好的解。在这种情况下,残差网络通过在早期阶段提供更快的收敛来简化优化。

Figure 5. A deeper residual function F for ImageNet. Left: a building block (on 56×56 feature maps) as in Fig. 3 for ResNet-34. Right: a “bottleneck” building block for ResNet-50/101/152.

图5. 用于ImageNet的更深的残差函数F。左:ResNet-34的一个构建块(作用于56×56的特征图),如图3所示。右:ResNet-50/101/152的"瓶颈"构建块。

Identity vs. Projection Shortcuts. We have shown that parameter-free, identity shortcuts help with training. Next we investigate projection shortcuts (Eqn.(2)). In Table 3 we compare three options: (A) zero-padding shortcuts are used for increasing dimensions, and all shortcuts are parameter-free (the same as Table 2 and Fig. 4 right); (B) projection shortcuts are used for increasing dimensions, and other shortcuts are identity; and (C) all shortcuts are projections.

恒等与投影捷径。我们已经证明了无参数的恒等捷径有助于训练。接下来我们研究投影捷径(公式(2))。在表3中,我们比较了三个选项:(A)零填充捷径用于增加维度,并且所有捷径都是无参数的(与表2和图4右相同);(B)投影捷径用于增加维度,其他捷径都是没有参数的恒等捷径;(C)所有捷径都是投影捷径。

Table 3 shows that all three options are considerably better than the plain counterpart. B is slightly better than A. We argue that this is because the zero-padded dimensions in A indeed have no residual learning. C is marginally better than B, and we attribute this to the extra parameters introduced by many (thirteen) projection shortcuts. But the small differences among A/B/C indicate that projection shortcuts are not essential for addressing the degradation problem. So we do not use option C in the rest of this paper, to reduce memory/time complexity and model sizes. Identity shortcuts are particularly important for not increasing the complexity of the bottleneck architectures that are introduced below.

表3显示,这三种方案都比对应的普通网络好得多。B比A略好,我们认为这是因为A中零填充的维度确实没有进行残差学习。C比B略好,我们将其归因于许多(十三个)投影捷径引入的额外参数。但A/B/C之间的细微差别表明,投影捷径对于解决退化问题并不是必需的。因此,为了降低内存/时间复杂度和模型大小,我们在本文其余部分不使用选项C。恒等捷径对于不增加下面介绍的瓶颈结构的复杂度尤为重要。

Deeper Bottleneck Architectures. Next we describe our deeper nets for ImageNet. Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design. For each residual function F, we use a stack of 3 layers instead of 2 (Fig. 5). The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. Fig. 5 shows an example, where both designs have similar time complexity.

深度瓶颈架构。接下来,我们描述用于ImageNet的更深的网络。出于对我们所能负担的训练时间的考虑,我们将构建块修改为瓶颈设计。对于每个残差函数F,我们使用3层而不是2层的堆叠(图5)。这三层分别是1×1、3×3和1×1卷积,其中1×1层负责先减少再增加(恢复)维度,使3×3层成为具有较小输入/输出维度的瓶颈。图5展示了一个示例,其中两种设计具有相似的时间复杂度。
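
A hedged sketch of the bottleneck block of Fig. 5 (right), using the 256→64→64→256 channel configuration shown there; the BN placement again follows Sec. 3.4 and is an assumption rather than a detail spelled out in this paragraph.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Bottleneck(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 restore, with a parameter-free identity shortcut (Fig. 5, right)."""
    def __init__(self, channels=256, bottleneck=64):
        super().__init__()
        self.reduce = nn.Conv2d(channels, bottleneck, 1, bias=False)    # 256 -> 64
        self.conv = nn.Conv2d(bottleneck, bottleneck, 3, padding=1, bias=False)
        self.restore = nn.Conv2d(bottleneck, channels, 1, bias=False)   # 64 -> 256
        self.bn1 = nn.BatchNorm2d(bottleneck)
        self.bn2 = nn.BatchNorm2d(bottleneck)
        self.bn3 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.reduce(x)))
        out = F.relu(self.bn2(self.conv(out)))
        out = self.bn3(self.restore(out))
        return F.relu(out + x)          # identity shortcut connects the two high-dimensional ends

y = Bottleneck()(torch.randn(2, 256, 28, 28))    # same shape as the input
```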

The parameter-free identity shortcuts are particularly important for the bottleneck architectures. If the identity shortcut in Fig. 5 (right) is replaced with projection, one can show that the time complexity and model size are doubled, as the shortcut is connected to the two high-dimensional ends. So identity shortcuts lead to more efficient models for the bottleneck designs.

无参数恒等捷径对于瓶颈体系结构尤为重要。如果将图5(右)中的恒等捷径替换为投影捷径,则可以显示时间复杂性和模型大小加倍,因为捷径连接到两个高维端。因此,恒等捷径为瓶颈设计提供了更有效的模型。

50-layer ResNet: We replace each 2-layer block in the 34-layer net with this 3-layer bottleneck block, resulting in a 50-layer ResNet (Table 1). We use option B for increasing dimensions. This model has 3.8 billion FLOPs.

50层残差网络:我们用这个3层瓶颈块替换34层网络中的每个2层块,从而得到一个50层残差网络(表1)。我们使用选项B来增加维度。这个模型有38亿次FLOPs。

101-layer and 152-layer ResNets: We construct 101- layer and 152-layer ResNets by using more 3-layer blocks (Table 1). Remarkably, although the depth is significantly increased, the 152-layer ResNet (11.3 billion FLOPs) still has lower complexity than VGG-16/19 nets (15.3/19.6 billion FLOPs).

101层和152层残差网络:我们使用更多的3层块构造101层和152层残差网络(表1)。值得注意的是,尽管深度显著增加,但152层残差网络(113亿个FLOPs)仍然比VGG-16/19网络(153/196亿个FLOPs)的复杂度低。
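
The construction of the three deeper models can be made concrete with the per-stage block counts of Table 1; the helper names below are hypothetical.

```python
# Bottleneck blocks stacked in the four stages (conv2_x ... conv5_x), per Table 1.
STAGE_BLOCKS = {50: (3, 4, 6, 3), 101: (3, 4, 23, 3), 152: (3, 8, 36, 3)}

def weighted_layers(depth):
    """Each bottleneck block has 3 conv layers; add the initial 7x7 conv and the final fc layer."""
    return 3 * sum(STAGE_BLOCKS[depth]) + 2

assert all(weighted_layers(d) == d for d in STAGE_BLOCKS)   # 50, 101 and 152 layers
```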

The 50/101/152-layer ResNets are more accurate than the 34-layer ones by considerable margins (Table 3 and 4). We do not observe the degradation problem and thus enjoy significant accuracy gains from considerably increased depth. The benefits of depth are witnessed for all evaluation metrics (Table 3 and 4).

50/101/152层残差网络比34层残差网络精确得多(表3和表4)。我们没有观察到退化问题,因此能从显著增加的深度中获得显著的精度提升。深度带来的好处在所有评估指标中都有体现(表3和表4)。

Comparisons with State-of-the-art Methods. In Table 4 we compare with the previous best single-model results. Our baseline 34-layer ResNets have achieved very competitive accuracy. Our 152-layer ResNet has a single-model top-5 validation error of 4.49%. This single-model result outperforms all previous ensemble results (Table 5). We combine six models of different depth to form an ensemble (only with two 152-layer ones at the time of submitting). This leads to 3.57% top-5 error on the test set (Table 5).This entry won the 1st place in ILSVRC 2015.

与最先进方法的比较。在表4中,我们与之前最佳的单模型结果进行比较。我们的34层基准残差网络已经达到了非常有竞争力的精度。我们的152层残差网络的单模型top-5验证错误率为4.49%。这一单模型结果优于之前所有的集成结果(表5)。我们将六个不同深度的模型组合成一个集成模型(提交时只包含两个152层模型)。这在测试集上取得了3.57%的top-5错误率(表5)。该参赛结果在ILSVRC 2015中获得第一名。

4.2. CIFAR-10 and Analysis

We conducted more studies on the CIFAR-10 dataset, which consists of 50k training images and 10k testing images in 10 classes. We present experiments trained on the training set and evaluated on the test set. Our focus is on the behaviors of extremely deep networks, but not on pushing the state-of-the-art results, so we intentionally use simple architectures as follows.

4.2 CIFAR-10与分析

我们在CIFAR-10数据集上进行了更多研究,该数据集包括10个类别的5万张训练图像和1万张测试图像。我们给出了在训练集上训练、在测试集上评估的实验。我们关注的是极深网络的行为,而不是追求最先进的结果,因此我们有意使用如下的简单架构。

The plain/residual architectures follow the form in Fig. 3 (middle/right). The network inputs are 32×32 images, with the per-pixel mean subtracted. The first layer is 3×3 convolutions. Then we use a stack of 6n layers with 3×3 convolutions on the feature maps of sizes {32, 16, 8} respectively, with 2n layers for each feature map size. The numbers of filters are {16, 32, 64} respectively. The subsampling is performed by convolutions with a stride of 2. The network ends with a global average pooling, a 10-way fully-connected layer, and softmax. There are totally 6n+2 stacked weighted layers. The following table summarizes the architecture:
output map size: 32×32 | 16×16 | 8×8; # layers: 1+2n | 2n | 2n; # filters: 16 | 32 | 64
When shortcut connections are used, they are connected to the pairs of 3×3 layers (totally 3n shortcuts). On this dataset we use identity shortcuts in all cases (i.e., option A), so our residual models have exactly the same depth, width, and number of parameters as the plain counterparts.

普通/残差结构遵循图3(中/右)中的形式。网络输入为32×32的图像,并减去每像素均值。第一层是3×3卷积。然后我们使用一组共6n个带3×3卷积的层,分别作用于尺寸为{32, 16, 8}的特征图,每种特征图尺寸对应2n层。滤波器的数量分别为{16, 32, 64}。下采样通过步长为2的卷积进行。网络以全局平均池化、10路全连接层和softmax结束。总共有6n+2个堆叠的加权层。下表总结了结构:
输出特征图尺寸: 32×32 | 16×16 | 8×8; 层数: 1+2n | 2n | 2n; 滤波器数: 16 | 32 | 64
当使用快捷连接时,它们连接在成对的3×3层之间(共3n个快捷连接)。在这个数据集上,我们在所有情况下都使用恒等捷径(即选项A),因此我们的残差模型与对应的普通模型具有完全相同的深度、宽度和参数数量。
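
A small helper (hypothetical) makes the 6n+2 counting and the three-stage layout explicit:

```python
def cifar_config(n):
    """6n weighted conv layers in three stages, plus the first 3x3 conv and the final fc layer."""
    depth = 6 * n + 2
    stages = [(16, 32, n), (32, 16, n), (64, 8, n)]   # (filters, feature-map size, residual blocks)
    return depth, stages

for n in (3, 5, 7, 9, 18, 200):
    print(n, cifar_config(n)[0])                      # 20, 32, 44, 56, 110, 1202 layers
```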

Table 6. Classification error on the CIFAR-10 test set. All methods are with data augmentation. For ResNet-110, we run it 5 times and show “best (mean±std)” as in [42].

表6. CIFAR-10测试集上的分类错误率。所有方法都使用了数据增强。对于ResNet-110,我们运行了5次,并按照[42]给出"最佳(均值±标准差)"。

We use a weight decay of 0.0001 and momentum of 0.9, and adopt the weight initialization in [12] and BN but with no dropout. These models are trained with a mini-batch size of 128 on two GPUs. We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image. We compare n = {3, 5, 7, 9}, leading to 20, 32, 44, and 56-layer networks. Fig. 6 (left) shows the behaviors of the plain nets. The deep plain nets suffer from increased depth, and exhibit higher training error when going deeper. This phenomenon is similar to that on ImageNet (Fig. 4, left) and on MNIST (see [41]), suggesting that such an optimization difficulty is a fundamental problem.

我们使用0.0001的权重衰减和0.9的动量,并采用[12]中的权重初始化和BN,但不使用dropout。这些模型在两块GPU上以128的mini-batch大小训练。我们从0.1的学习率开始,在32k和48k次迭代时除以10,并在64k次迭代时终止训练,这是根据45k/5k的训练/验证划分确定的。我们遵循[24]中的简单数据增强进行训练:在每侧填充4个像素,并从填充后的图像或其水平翻转中随机裁剪32×32的区域。测试时,我们只评估原始32×32图像的单一视图。我们比较n={3, 5, 7, 9},得到20、32、44和56层的网络。图6(左)显示了普通网络的表现。深层普通网络受到深度增加的影响,越深训练误差越大。这种现象与ImageNet(图4,左)和MNIST(见[41])上的现象类似,表明这种优化困难是一个根本性的问题。
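
The augmentation and step schedule above can be expressed, for example, with the following sketch; the dummy parameter passed to the optimizer is a placeholder, and per-pixel mean subtraction is omitted.

```python
import torch
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),       # pad 4 pixels per side, random 32x32 crop
    transforms.RandomHorizontalFlip(),          # or its horizontal flip
    transforms.ToTensor(),
])

optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[32000, 48000], gamma=0.1)
# scheduler.step() is called once per iteration; training stops at 64k iterations.
```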

Fig. 6 (middle) shows the behaviors of ResNets. Also similar to the ImageNet cases (Fig. 4, right), our ResNets manage to overcome the optimization difficulty and demonstrate accuracy gains when the depth increases.

图6(中间)显示了残差网络的表现。同样类似于ImageNet的例子(图4,右图),我们的残差网络设法克服了优化的困难,并且证明了随着深度的增加,精度得到了提高。

We further explore n = 18 that leads to a 110-layer ResNet. In this case, we find that the initial learning rate of 0.1 is slightly too large to start converging. So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then go back to 0.1 and continue training. The rest of the learning schedule is as done previously. This 110-layer network converges well (Fig. 6, middle). It has fewer parameters than other deep and thin networks such as FitNet and Highway (Table 6), yet is among the state-of-the-art results (6.43%, Table 6).

我们进一步探索n=18,得到一个110层的残差网络。在这种情况下,我们发现0.1的初始学习率略微偏大,无法开始收敛。因此我们先用0.01来预热训练,直到训练误差低于80%(大约400次迭代),然后回到0.1的学习率继续训练。其余的学习率安排与之前相同。这个110层网络收敛良好(图6,中)。它的参数比FitNet和Highway(表6)等其他又深又窄的网络更少,却属于最先进的结果之一(6.43%,表6)。
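
The warm-up can be sketched as a simple rule for the learning rate; the iteration counts and the 80% threshold are taken from the text, while the function name and form are hypothetical.

```python
def lr_110_layer(iteration, train_error, warmed_up):
    """Return (learning rate, warmed_up) for the 110-layer CIFAR net."""
    if not warmed_up and train_error >= 0.80:
        return 0.01, False                  # warm up (roughly the first ~400 iterations)
    lr = 0.1                                # back to the usual schedule
    if iteration >= 32000: lr /= 10
    if iteration >= 48000: lr /= 10
    return lr, True
```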

Figure 6. Training on CIFAR-10. Dashed lines denote training error, and bold lines denote testing error. Left: plain networks. The error of plain-110 is higher than 60% and not displayed. Middle: ResNets. Right: ResNets with 110 and 1202 layers.

图6. 在CIFAR-10上的训练。虚线表示训练误差,粗线表示测试误差。左:普通网络。plain-110的误差高于60%,未显示。中:残差网络。右:110层和1202层的残差网络。

Figure 7. Standard deviations (std) of layer responses on CIFAR-10. The responses are the outputs of each 3×3 layer, after BN and before nonlinearity. Top: the layers are shown in their original order. Bottom: the responses are ranked in descending order.

图7. CIFAR-10上层响应的标准差(std)。响应是每个3×3层在BN之后、非线性之前的输出。上:各层按原始顺序显示。下:响应按降序排列。

Table 7. Object detection mAP (%) on the PASCAL VOC 2007/2012 test sets using baseline Faster R-CNN. See also appendix for better results.

表7. 使用基准Faster R-CNN在PASCAL VOC 2007/2012测试集上的目标检测mAP(%)。更好的结果参见附录。

Table 8. Object detection mAP (%) on the COCO validation set using baseline Faster R-CNN. See also appendix for better results.

表8. 使用基准Faster R-CNN在COCO验证集上的目标检测mAP(%)。更好的结果参见附录。

Analysis of Layer Responses. Fig. 7 shows the standard deviations (std) of the layer responses. The responses are the outputs of each 3×3 layer, after BN and before other nonlinearity (ReLU/addition). For ResNets, this analysis reveals the response strength of the residual functions. Fig. 7 shows that ResNets have generally smaller responses than their plain counterparts. These results support our basic motivation (Sec.3.1) that the residual functions might be generally closer to zero than the non-residual functions. We also notice that the deeper ResNet has smaller magnitudes of responses, as evidenced by the comparisons among ResNet-20, 56, and 110 in Fig. 7. When there are more layers, an individual layer of ResNets tends to modify the signal less.

网络层响应分析。图7显示了层响应的标准差(std)。响应是每个3×3层在BN之后、其他非线性(ReLU/加法)之前的输出。对于残差网络,这一分析揭示了残差函数的响应强度。图7表明,残差网络通常比对应的普通网络具有更小的响应。这些结果支持我们的基本动机(第3.1节),即残差函数通常可能比非残差函数更接近于零。我们还注意到,更深的残差网络具有更小的响应幅度,如图7中ResNet-20、56和110之间的比较所示。当层数更多时,残差网络中的单个层对信号的修改趋于更少。
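
The Fig. 7 measurement can be approximated with forward hooks that record the standard deviation of every BatchNorm output (i.e., after BN and before the nonlinearity); the toy network at the end is only a stand-in for a trained (Res)Net.

```python
import torch
import torch.nn as nn

def response_stds(model, images):
    """Record the std of every BatchNorm output (after BN, before the nonlinearity)."""
    stds, hooks = {}, []
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            hooks.append(module.register_forward_hook(
                lambda m, inp, out, name=name: stds.__setitem__(name, out.std().item())))
    with torch.no_grad():
        model(images)
    for h in hooks:
        h.remove()
    return stds

# Toy demo: a 3x3 conv followed by BN, evaluated on random 32x32 inputs.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
print(response_stds(net.eval(), torch.randn(8, 3, 32, 32)))
```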

Exploring Over 1000 layers. We explore an aggressively deep model of over 1000 layers. We set n = 200 that leads to a 1202-layer network, which is trained as described above. Our method shows no optimization difficulty, and this $10^3$-layer network is able to achieve training error <0.1% (Fig. 6, right). Its test error is still fairly good (7.93%, Table 6).

探索超过1000层的网络。我们探索了一个超过1000层的非常深的模型。我们将n设为200,得到一个1202层的网络,并按上述方式训练。我们的方法没有表现出优化困难,这个超过$10^3$层的网络能够达到<0.1%的训练误差(图6,右)。它的测试误差仍然相当好(7.93%,表6)。

But there are still open problems on such aggressively deep models. The testing result of this 1202-layer network is worse than that of our 110-layer network, although both have similar training error. We argue that this is because of overfitting. The 1202-layer network may be unnecessarily large (19.4M) for this small dataset. Strong regularization such as maxout or dropout is applied to obtain the best results ([9, 25, 24, 34]) on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future.

但对于这种激进的深层模型,仍然存在一些开放性问题。尽管两者的训练误差相似,这个1202层网络的测试结果比我们110层网络的差。我们认为这是由于过拟合。对于这个小数据集来说,1202层网络可能是不必要的大(19.4M参数)。在该数据集上,通常需要使用maxout或dropout等强正则化来获得最佳结果([9, 25, 24, 34])。在本文中,我们不使用maxout/dropout,只是通过设计上又深又窄的架构来施加正则化,以免分散对优化困难这一核心问题的关注。但结合更强的正则化可能会进一步改善结果,我们将在未来研究。

4.3. Object Detection on PASCAL and MS COCO

Our method has good generalization performance on other recognition tasks. Table 7 and 8 show the object detection baseline results on PASCAL VOC 2007 and 2012 and COCO. We adopt Faster R-CNN as the detection method. Here we are interested in the improvements of replacing VGG-16 [40] with ResNet-101. The detection implementation (see appendix) of using both models is the same, so the gains can only be attributed to better networks. Most remarkably, on the challenging COCO dataset we obtain a 6.0% increase in COCO's standard metric (mAP@[.5, .95]), which is a 28% relative improvement. This gain is solely due to the learned representations.

4.3. 物体检测(PASCAL和MS COCO)

我们的方法在其他识别任务上具有良好的泛化性能。表7和表8给出了在PASCAL VOC 2007、2012和COCO上的目标检测基准结果。我们采用Faster R-CNN作为检测方法。这里我们关注用ResNet-101取代VGG-16[40]所带来的改进。两种模型使用的检测实现(见附录)相同,因此增益只能归因于更好的网络。最值得注意的是,在具有挑战性的COCO数据集上,我们在COCO的标准指标(mAP@[.5, .95])上提升了6.0%,相对提高了28%。这一增益完全归功于所学到的表征。

Based on deep residual nets, we won the 1st places in several tracks in ILSVRC & COCO 2015 competitions: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. The details are in the appendix.

基于深度残差网络,我们在ILSVRC & COCO 2015竞赛的多个赛道中获得了第一名:ImageNet检测、ImageNet定位、COCO检测和COCO分割。详情见附录。
