ImageNet Classification with Deep Convolutional Neural Networks
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state of the art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
1 Introduction

Current approaches to object recognition make essential use of machine learning methods. To improve their performance, we can collect larger datasets, learn more powerful models, and use better techniques for preventing overfitting. Until recently, datasets of labeled images were relatively small — on the order of tens of thousands of images (e.g., NORB, Caltech-101/256 [8, 9], and CIFAR-10/100). Simple recognition tasks can be solved quite well with datasets of this size, especially if they are augmented with label-preserving transformations. For example, the current best error rate on the MNIST digit-recognition task (<0.3%) approaches human performance. But objects in realistic settings exhibit considerable variability, so to learn to recognize them it is necessary to use much larger training sets. And indeed, the shortcomings of small image datasets have been widely recognized (e.g., Pinto et al.), but it has only recently become possible to collect labeled datasets with millions of images. The new larger datasets include LabelMe, which consists of hundreds of thousands of fully-segmented images, and ImageNet, which consists of over 15 million labeled high-resolution images in over 22,000 categories.
To learn about thousands of objects from millions of images, we need a model with a large learning capacity. However, the immense complexity of the object recognition task means that this problem cannot be specified even by a dataset as large as ImageNet, so our model should also have lots of prior knowledge to compensate for all the data we don't have. Convolutional neural networks (CNNs) constitute one such class of models [16, 11, 13, 18, 15, 22, 26]. Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have much fewer connections and parameters and so they are easier to train, while their theoretically-best performance is likely to be only slightly worse.
Despite the attractive qualities of CNNs, and despite the relative efficiency of their local architecture, they have still been prohibitively expensive to apply in large scale to high-resolution images. Luckily, current GPUs, paired with a highly-optimized implementation of 2D convolution, are powerful enough to facilitate the training of interestingly-large CNNs, and recent datasets such as ImageNet contain enough labeled examples to train such models without severe overfitting. The specific contributions of this paper are as follows: we trained one of the largest convolutional neural networks to date on the subsets of ImageNet used in the ILSVRC-2010 and ILSVRC-2012 competitions and achieved by far the best results ever reported on these datasets. We wrote a highly-optimized GPU implementation of 2D convolution and all the other operations inherent in training convolutional neural networks, which we make available publicly. Our network contains a number of new and unusual features which improve its performance and reduce its training time, which are detailed in Section 3.
The size of our network made overfitting a significant problem, even with 1.2 million labeled training examples, so we used several effective techniques for preventing overfitting, which are described in Section 4. Our final network contains five convolutional and three fully-connected layers, and this depth seems to be important: we found that removing any convolutional layer (each of which contains no more than 1% of the model's parameters) resulted in inferior performance.
In the end, the network's size is limited mainly by the amount of memory available on current GPUs and by the amount of training time that we are willing to tolerate. Our network takes between five and six days to train on two GTX 580 3GB GPUs. All of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger datasets to become available.
2 The Dataset
ImageNet is a dataset of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. The images were collected from the web and labeled by human labelers using Amazon's Mechanical Turk crowd-sourcing tool. Starting in 2010, as part of the Pascal Visual Object Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has been held. ILSVRC uses a subset of ImageNet with roughly 1000 images in each of 1000 categories. In all, there are roughly 1.2 million training images, 50,000 validation images, and 150,000 testing images. ILSVRC-2010 is the only version of ILSVRC for which the test set labels are available, so this is the version on which we performed most of our experiments. Since we also entered our model in the ILSVRC-2012 competition, in Section 6 we report our results on this version of the dataset as well, for which test set labels are unavailable. On ImageNet, it is customary to report two error rates: top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model.
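As an aside, the top-k error rate follows directly from the model's class scores. A minimal NumPy sketch (ours, not from the paper; array names and shapes are illustrative):

import numpy as np

def top_k_error(scores, labels, k=5):
    # scores: (num_examples, num_classes) class scores; labels: (num_examples,) true class indices.
    # Returns the fraction of examples whose true label is not among the k highest-scoring classes.
    top_k = np.argsort(scores, axis=1)[:, -k:]
    hit = (top_k == labels[:, None]).any(axis=1)
    return 1.0 - hit.mean()

rng = np.random.default_rng(0)
scores = rng.random((3, 10))          # 3 examples, 10 classes (placeholder data)
labels = np.array([2, 7, 0])
print(top_k_error(scores, labels, k=1), top_k_error(scores, labels, k=5))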
ImageNet consists of variable-resolution images, while our system requires a constant input dimensionality. Therefore, we down-sampled the images to a fixed resolution of 256 × 256. Given a rectangular image, we first rescaled the image such that the shorter side was of length 256, and then cropped out the central 256×256 patch from the resulting image. We did not pre-process the images in any other way, except for subtracting the mean activity over the training set from each pixel. So we trained our network on the (centered) raw RGB values of the pixels.
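A minimal sketch of this preprocessing step using Pillow and NumPy (the function and variable names here are ours, not from the paper's code release):

import numpy as np
from PIL import Image

def rescale_and_center_crop(path, side=256):
    # Resize so the shorter side equals `side`, then crop the central side x side patch.
    img = Image.open(path).convert("RGB")
    w, h = img.size
    scale = side / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    w, h = img.size
    left, top = (w - side) // 2, (h - side) // 2
    return np.asarray(img.crop((left, top, left + side, top + side)), dtype=np.float32)

# Subtract the per-pixel mean over the training set, then train on the centered raw RGB values:
# mean_image = np.mean([rescale_and_center_crop(p) for p in training_paths], axis=0)
# centered = rescale_and_center_crop(some_path) - mean_image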
3 The Architecture
The architecture of our network is summarized in Figure 2. It contains eight learned layers: five convolutional and three fully-connected. Below, we describe some of the novel or unusual features of our network's architecture. Sections 3.1-3.4 are sorted according to our estimation of their importance, with the most important first.
Figure 1: A four-layer convolutional neural network with ReLUs (solid line) reaches a 25% training error rate on CIFAR-10 six times faster than an equivalent network with tanh neurons (dashed line). The learning rates for each network were chosen independently to make training as fast as possible. No regularization of any kind was employed. The magnitude of the effect demonstrated here varies with network architecture, but networks with ReLUs consistently learn several times faster than equivalents with saturating neurons.
3.1 ReLU Nonlinearity
The standard way to model a neuron's output f as a function of its input x is with f(x) = tanh(x) or f(x) = (1 + e^{-x})^{-1}. In terms of training time with gradient descent, these saturating nonlinearities are much slower than the non-saturating nonlinearity f(x) = max(0, x). Following Nair and Hinton, we refer to neurons with this nonlinearity as Rectified Linear Units (ReLUs). Deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units. This is demonstrated in Figure 1, which shows the number of iterations required to reach 25% training error on the CIFAR-10 dataset for a particular four-layer convolutional network. This plot shows that we would not have been able to experiment with such large neural networks for this work if we had used traditional saturating neuron models.
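The saturation argument can be seen numerically; a small sketch (ours) comparing the derivatives of the two nonlinearities:

import numpy as np

def d_tanh(x):
    return 1.0 - np.tanh(x) ** 2        # saturates: tends to 0 as |x| grows

def d_relu(x):
    return float(x > 0)                  # 1 for any positive input, regardless of magnitude

for x in (0.5, 3.0, 10.0):
    print(f"x={x:5.1f}  d_tanh={d_tanh(x):.6f}  d_relu={d_relu(x):.1f}")

# The tanh derivative is essentially zero already at x = 10, so gradient descent barely
# updates the incoming weights there, whereas the ReLU passes the gradient through unchanged.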
We are not the first to consider alternatives to traditional neuron models in CNNs. For example, Jarrett et al. claim that the nonlinearity f(x) = |tanh(x)| works particularly well with their type of contrast normalization followed by local average pooling on the Caltech-101 dataset. However, on this dataset the primary concern is preventing overfitting, so the effect they are observing is different from the accelerated ability to fit the training set which we report when using ReLUs. Faster learning has a great influence on the performance of large models trained on large datasets.
3.2 Training on Multiple GPUs
A single GTX 580 GPU has only 3GB of memory, which limits the maximum size of the networks that can be trained on it. It turns out that 1.2 million training examples are enough to train networks which are too big to fit on one GPU. Therefore we spread the net across two GPUs. Current GPUs are particularly well-suited to cross-GPU parallelization, as they are able to read from and write to one another's memory directly, without going through host machine memory. The parallelization scheme that we employ essentially puts half of the kernels (or neurons) on each GPU, with one additional trick: the GPUs communicate only in certain layers. This means that, for example, the kernels of layer 3 take input from all kernel maps in layer 2. However, kernels in layer 4 take input only from those kernel maps in layer 3 which reside on the same GPU. Choosing the pattern of connectivity is a problem for cross-validation, but this allows us to precisely tune the amount of communication until it is an acceptable fraction of the amount of computation.
The resultant architecture is somewhat similar to that of the "columnar" CNN employed by Cireşan et al., except that our columns are not independent (see Figure 2). This scheme reduces our top-1 and top-5 error rates by 1.7% and 1.2%, respectively, as compared with a net with half as many kernels in each convolutional layer trained on one GPU. The two-GPU net takes slightly less time to train than the one-GPU net.
The one-GPU net actually has the same number of kernels as the two-GPU net in the final convolutional layer. This is because most of the net's parameters are in the first fully-connected layer, which takes the last convolutional layer as input. So to make the two nets have approximately the same number of parameters, we did not halve the size of the final convolutional layer (nor the fully-connected layers which follow). Therefore this comparison is biased in favor of the one-GPU net, since it is bigger than "half the size" of the two-GPU net.
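The restricted connectivity pattern (kernels split in two halves that mix only at chosen layers) can be mimicked on a single device with grouped convolutions; a hedged PyTorch sketch, with channel counts taken from the architecture of Section 3.5 and padding values chosen by us for illustration:

import torch
import torch.nn as nn

# groups=2 splits the kernels into two halves that each see only half of the input maps,
# emulating the "same GPU only" connectivity; groups=1 lets the layer read from both
# halves, emulating the layers where the GPUs communicate.
conv2 = nn.Conv2d(96, 256, kernel_size=5, padding=2, groups=2)   # restricted
conv3 = nn.Conv2d(256, 384, kernel_size=3, padding=1, groups=1)  # communicates across halves
conv4 = nn.Conv2d(384, 384, kernel_size=3, padding=1, groups=2)  # restricted
conv5 = nn.Conv2d(384, 256, kernel_size=3, padding=1, groups=2)  # restricted

x = torch.randn(1, 96, 27, 27)           # e.g. pooled output of the first layer
y = conv5(conv4(conv3(conv2(x))))
print(y.shape)                            # torch.Size([1, 256, 27, 27])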
3.3 Local Response Normalization
ReLUs have the desirable property that they do not require input normalization to prevent them from saturating. If at least some training examples produce a positive input to a ReLU, learning will happen in that neuron. However, we still find that the following local normalization scheme aids generalization. Denoting by a^i_{x,y} the activity of a neuron computed by applying kernel i at position (x, y) and then applying the ReLU nonlinearity, the response-normalized activity b^i_{x,y} is given by

b^i_{x,y} = a^i_{x,y} / ( k + α Σ_{j = max(0, i−n/2)}^{min(N−1, i+n/2)} (a^j_{x,y})^2 )^β

where the sum runs over n "adjacent" kernel maps at the same spatial position, and N is the total number of kernels in the layer. The ordering of the kernel maps is of course arbitrary and determined before training begins. This sort of response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels. The constants k, n, α, and β are hyper-parameters whose values are determined using a validation set; we used k = 2, n = 5, α = 10^{-4}, and β = 0.75.
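A direct NumPy transcription of this normalization (our sketch; `activations` is assumed to be an array of ReLU outputs indexed as kernel × height × width):

import numpy as np

def local_response_norm(activations, k=2.0, n=5, alpha=1e-4, beta=0.75):
    # b[i] = a[i] / (k + alpha * sum_j a[j]**2)**beta, where j runs over the
    # n/2 kernel maps on either side of i at the same spatial position.
    N = activations.shape[0]
    out = np.empty_like(activations)
    for i in range(N):
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = k + alpha * np.sum(activations[lo:hi + 1] ** 2, axis=0)
        out[i] = activations[i] / denom ** beta
    return out

a = np.random.rand(96, 55, 55).astype(np.float32)   # e.g. first-layer ReLU outputs
b = local_response_norm(a)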
We applied this normalization after applying the ReLU nonlinearity in certain layers (see Section 3.5). This scheme bears some resemblance to the local contrast normalization scheme of Jarrett et al., but ours would be more correctly termed "brightness normalization", since we do not subtract the mean activity. Response normalization reduces our top-1 and top-5 error rates by 1.4% and 1.2%, respectively. We also verified the effectiveness of this scheme on the CIFAR-10 dataset: a four-layer CNN achieved a 13% test error rate without normalization and 11% with normalization. (We cannot describe this network in detail due to space constraints, but it is specified precisely by the code and parameter files provided here: http://code.google.com/p/cuda-convnet/.)
3.4 Overlapping Pooling
Pooling layers in CNNs summarize the outputs of neighboring groups of neurons in the same kernel map. Traditionally, the neighborhoods summarized by adjacent pooling units do not overlap (e.g., [17, 11, 4]). To be more precise, a pooling layer can be thought of as consisting of a grid of pooling units spaced s pixels apart, each summarizing a neighborhood of size z × z centered at the location of the pooling unit. If we set s = z, we obtain traditional local pooling as commonly employed in CNNs. If we set s < z, we obtain overlapping pooling. This is what we use throughout our network, with s = 2 and z = 3. This scheme reduces the top-1 and top-5 error rates by 0.4% and 0.3%, respectively, as compared with the non-overlapping scheme s = 2, z = 2, which produces output of equivalent dimensions. We generally observe during training that models with overlapping pooling find it slightly more difficult to overfit.
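A minimal sketch of the pooling grid (ours), showing how s < z makes neighbouring pooling windows overlap while still producing output of the same size on a 7 × 7 map:

import numpy as np

def max_pool_2d(a, z=3, s=2):
    # Max-pool a 2-D map with window z x z and stride s (s < z means overlapping windows).
    h = (a.shape[0] - z) // s + 1
    w = (a.shape[1] - z) // s + 1
    out = np.empty((h, w), dtype=a.dtype)
    for i in range(h):
        for j in range(w):
            out[i, j] = a[i * s:i * s + z, j * s:j * s + z].max()
    return out

a = np.arange(49.0).reshape(7, 7)
print(max_pool_2d(a, z=3, s=2).shape)   # overlapping (z=3, s=2): (3, 3)
print(max_pool_2d(a, z=2, s=2).shape)   # traditional (z=2, s=2): also (3, 3)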
3.5 Overall Architecture
Now we are ready to describe the overall architecture of our CNN. As depicted in Figure 2, the net contains eight layers with weights; the first five are convolutional and the remaining three are fully connected. The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels. Our network maximizes the multinomial logistic regression objective, which is equivalent to maximizing the average across training cases of the log-probability of the correct label under the prediction distribution.
The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer which reside on the same GPU (see Figure 2). The kernels of the third convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully-connected layers are connected to all neurons in the previous layer. Response-normalization layers follow the first and second convolutional layers. Max-pooling layers, of the kind described in Section 3.4, follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer. The first convolutional layer filters the 224×224×3 input image with 96 kernels of size 11×11×3 with a stride of 4 pixels (this is the distance between the receptive field centers of neighboring neurons in a kernel map).
Figure 2: An illustration of the architecture of our CNN, explicitly showing the delineation of responsibilities between the two GPUs. One GPU runs the layer-parts at the top of the figure while the other runs the layer-parts at the bottom. The GPUs communicate only at certain layers. The network's input is 150,528-dimensional, and the number of neurons in the network's remaining layers is given by 253,440–186,624–64,896–64,896–43,264–4096–4096–1000.
The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5 × 5 × 48.
The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3 × 3 × 256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3 × 3 × 192, and the fifth convolutional layer has 256 kernels of size 3 × 3 × 192. The fully-connected layers have 4096 neurons each.
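Putting these layer specifications together, a hedged PyTorch sketch of a single-device equivalent of this architecture (the grouped connectivity of Section 3.2 is expressed with groups=2, and the padding values are our assumption, chosen so the feature-map sizes come out consistent with the figure):

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4, padding=2), nn.ReLU(),            # 224x224x3 -> 55x55x96
    nn.LocalResponseNorm(5, alpha=1e-4, beta=0.75, k=2.0),
    nn.MaxPool2d(3, stride=2),                                        # overlapping pooling, z=3, s=2
    nn.Conv2d(96, 256, 5, padding=2, groups=2), nn.ReLU(),
    nn.LocalResponseNorm(5, alpha=1e-4, beta=0.75, k=2.0),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1, groups=2), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1, groups=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),             # 1000-way softmax applied via the cross-entropy loss
)
print(net(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 1000])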
4 Reducing Overfitting
Our neural network architecture has 60 million parameters. Although the 1000 classes of ILSVRC make each training example impose 10 bits of constraint on the mapping from image to label, this turns out to be insufficient to learn so many parameters without considerable overfitting. Below, we describe the two primary ways in which we combat overfitting.
4.1 Data Augmentation
The easiest and most common method to reduce overfitting on image data is to artificially enlarge the dataset using label-preserving transformations (e.g., [25, 4, 5]). We employ two distinct forms of data augmentation, both of which allow transformed images to be produced from the original images with very little computation, so the transformed images do not need to be stored on disk. In our implementation, the transformed images are generated in Python code on the CPU while the GPU is training on the previous batch of images. So these data augmentation schemes are, in effect, computationally free. The first form of data augmentation consists of generating image translations and horizontal reflections. We do this by extracting random 224 × 224 patches (and their horizontal reflections) from the 256×256 images and training our network on these extracted patches. (This is the reason why the input images in Figure 2 are 224 × 224 × 3-dimensional.) This increases the size of our training set by a factor of 2048, though the resulting training examples are, of course, highly interdependent. Without this scheme, our network suffers from substantial overfitting, which would have forced us to use much smaller networks. At test time, the network makes a prediction by extracting five 224 × 224 patches (the four corner patches and the center patch) as well as their horizontal reflections (hence ten patches in all), and averaging the predictions made by the network's softmax layer on the ten patches.
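A sketch of both the training-time random crops/reflections and the ten-crop extraction used at test time (ours; images are assumed to be NumPy arrays of shape 256 × 256 × 3):

import numpy as np

def random_crop_and_flip(img, out=224):
    # Training: extract a random out x out patch and reflect it horizontally half the time.
    y = np.random.randint(0, img.shape[0] - out + 1)
    x = np.random.randint(0, img.shape[1] - out + 1)
    patch = img[y:y + out, x:x + out]
    return patch[:, ::-1] if np.random.rand() < 0.5 else patch

def ten_crops(img, out=224):
    # Test: the four corner patches, the center patch, and their horizontal reflections.
    h, w = img.shape[:2]
    c = ((h - out) // 2, (w - out) // 2)
    offsets = [(0, 0), (0, w - out), (h - out, 0), (h - out, w - out), c]
    patches = [img[y:y + out, x:x + out] for y, x in offsets]
    return patches + [p[:, ::-1] for p in patches]

# prediction = mean of the softmax outputs over ten_crops(test_image)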
The second form of data augmentation consists of altering the intensities of the RGB channels in training images. Specifically, we perform PCA on the set of RGB pixel values throughout the ImageNet training set. To each training image, we add multiples of the found principal components, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with mean zero and standard deviation 0.1. Therefore to each RGB image pixel I_{xy} = [I^R_{xy}, I^G_{xy}, I^B_{xy}]^T we add the following quantity:

[p_1, p_2, p_3] [α_1 λ_1, α_2 λ_2, α_3 λ_3]^T
where p_i and λ_i are the ith eigenvector and eigenvalue of the 3 × 3 covariance matrix of RGB pixel values, respectively, and α_i is the aforementioned random variable. Each α_i is drawn only once for all the pixels of a particular training image until that image is used for training again, at which point it is re-drawn. This scheme approximately captures an important property of natural images, namely, that object identity is invariant to changes in the intensity and color of the illumination. This scheme reduces the top-1 error rate by over 1%.
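A NumPy sketch of this colour augmentation (ours); `train_pixels` stands in for the RGB values collected from the training set:

import numpy as np

train_pixels = np.random.rand(100_000, 3)            # placeholder for training-set RGB values
cov = np.cov(train_pixels, rowvar=False)             # 3 x 3 RGB covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)                # lambda_i (eigenvalues), p_i (columns)

def pca_color_augment(image, std=0.1):
    # Add [p1, p2, p3] [a1*l1, a2*l2, a3*l3]^T to every pixel, with a_i ~ N(0, std)
    # drawn once per presentation of the image.
    alpha = np.random.normal(0.0, std, size=3)
    shift = eigvecs @ (alpha * eigvals)               # 3-vector added to each RGB pixel
    return image + shift

augmented = pca_color_augment(np.random.rand(224, 224, 3))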
4.2 Dropout

Combining the predictions of many different models is a very successful way to reduce test errors [1, 3], but it appears to be too expensive for big neural networks that already take several days to train. There is, however, a very efficient version of model combination that only costs about a factor of two during training. The recently-introduced technique, called "dropout", consists of setting to zero the output of each hidden neuron with probability 0.5. The neurons which are "dropped out" in this way do not contribute to the forward pass and do not participate in backpropagation.
So every time an input is presented, the neural network samples a different architecture, but all these architectures share weights. This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. It is, therefore, forced to learn more robust features that are useful in conjunction with many different random subsets of the other neurons. At test time, we use all the neurons but multiply their outputs by 0.5, which is a reasonable approximation to taking the geometric mean of the predictive distributions produced by the exponentially-many dropout networks. We use dropout in the first two fully-connected layers of Figure 2.
Without dropout, our network exhibits substantial overfitting. Dropout roughly doubles the number of iterations required to converge.
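A minimal sketch of dropout as described here (ours), including the 0.5 output scaling at test time rather than the "inverted dropout" scaling common in later implementations:

import numpy as np

def dropout_forward(activations, p_drop=0.5, train=True):
    # Training: zero each hidden unit with probability p_drop (dropped units also
    # receive no gradient in backpropagation).
    # Test: use all units but scale their outputs by (1 - p_drop), approximating the
    # geometric mean of the predictions of the exponentially many thinned networks.
    if train:
        mask = (np.random.rand(*activations.shape) >= p_drop).astype(activations.dtype)
        return activations * mask
    return activations * (1.0 - p_drop)

h = np.random.rand(4096)
print(dropout_forward(h, train=True)[:5], dropout_forward(h, train=False)[:5])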
5 Details of learning
We trained our models using stochastic gradient descent with a batch size of 128 examples, momentum of 0.9, and weight decay of 0.0005. We found that this small amount of weight decay was important for the model to learn. In other words, weight decay here is not merely a regularizer: it reduces the model's training error. The update rule for weight w was:

v_{i+1} := 0.9 · v_i − 0.0005 · ε · w_i − ε · ⟨∂L/∂w |_{w_i}⟩_{D_i}
w_{i+1} := w_i + v_{i+1}

where i is the iteration index, v is the momentum variable, ε is the learning rate, and ⟨∂L/∂w |_{w_i}⟩_{D_i} is the average over the ith batch D_i of the derivative of the objective with respect to w, evaluated at w_i. We initialized the weights in each layer from a zero-mean Gaussian distribution with standard deviation 0.01. We initialized the neuron biases in the second, fourth, and fifth convolutional layers, as well as in the fully-connected hidden layers, with the constant 1. This initialization accelerates the early stages of learning by providing the ReLUs with positive inputs. We initialized the neuron biases in the remaining layers with the constant 0. We used an equal learning rate for all layers, which we adjusted manually throughout training. The heuristic which we followed was to divide the learning rate by 10 when the validation error rate stopped improving with the current learning rate. The learning rate was initialized at 0.01 and reduced three times prior to termination. We trained the network for roughly 90 cycles through the training set of 1.2 million images, which took five to six days on two NVIDIA GTX 580 3GB GPUs.
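The update rule translates directly into code; a NumPy sketch (ours) for a single parameter array w with batch-averaged gradient grad:

import numpy as np

def sgd_step(w, v, grad, lr, momentum=0.9, weight_decay=0.0005):
    # v <- momentum * v - weight_decay * lr * w - lr * grad ;  w <- w + v
    v = momentum * v - weight_decay * lr * w - lr * grad
    return w + v, v

# Illustrative use: one step on a weight matrix initialized from a zero-mean Gaussian, std 0.01.
w = np.random.normal(0.0, 0.01, size=(1000, 4096))
v = np.zeros_like(w)
grad = np.random.randn(*w.shape)     # stands in for the gradient averaged over a 128-example batch
w, v = sgd_step(w, v, grad, lr=0.01)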
6 Results

Our results on ILSVRC-2010 are summarized in Table 1. Our network achieves top-1 and top-5 test set error rates of 37.5% and 17.0%. (The error rates without averaging predictions over ten patches as described in Section 4.1 are 39.0% and 18.3%.) The best performance achieved during the ILSVRC-2010 competition was 47.1% and 28.2% with an approach that averages the predictions produced from six sparse-coding models trained on different features, and since then the best published results are 45.7% and 25.7% with an approach that averages the predictions of two classifiers trained on Fisher Vectors (FVs) computed from two types of densely-sampled features.
Table 1: Comparison of results on the ILSVRC-2010 test set. In italics are best results achieved by others.

We also entered our model in the ILSVRC-2012 competition and report our results in Table 2. Since the ILSVRC-2012 test set labels are not publicly available, we cannot report test error rates for all the models that we tried. In the remainder of this paragraph, we use validation and test error rates interchangeably because in our experience they do not differ by more than 0.1% (see Table 2).
The CNN described in this paper achieves a top-5 error rate of 18.2%. Averaging the predictions of five similar CNNs gives an error rate of 16.4%. Training one CNN, with an extra sixth convolutional layer over the last pooling layer, to classify the entire ImageNet Fall 2011 release (15M images, 22K categories), and then "fine-tuning" it on ILSVRC-2012 gives an error rate of 16.6%. Averaging the predictions of two CNNs that were pre-trained on the entire Fall 2011 release with the aforementioned five CNNs gives an error rate of 15.3%. The second-best contest entry achieved an error rate of 26.2% with an approach that averages the predictions of several classifiers trained on FVs computed from different types of densely-sampled features.
Table 2: Comparison of error rates on ILSVRC-2012 validation and test sets. In italics are best results achieved by others. Models with an asterisk (*) were "pre-trained" to classify the entire ImageNet 2011 Fall release. See Section 6 for details.
Finally, we also report our error rates on the Fall 2009 version of ImageNet with 10,184 categories and 8.9 million images. On this dataset we follow the convention in the literature of using half of the images for training and half for testing. Since there is no established test set, our split necessarily differs from the splits used by previous authors, but this does not affect the results appreciably. Our top-1 and top-5 error rates on this dataset are 67.4% and 40.9%, attained by the net described above but with an additional, sixth convolutional layer over the last pooling layer. The best published results on this dataset are 78.1% and 60.9%.
Figure 3: 96 convolutional kernels of size 11×11×3 learned by the first convolutional layer on the 224×224×3 input images. The top 48 kernels were learned on GPU 1 while the bottom 48 kernels were learned on GPU 2. See Section 6.1 for details.
6.1 Qualitative Evaluations
Figure 3 shows the convolutional kernels learned by the network's two data-connected layers. The network has learned a variety of frequency- and orientation-selective kernels, as well as various colored blobs. Notice the specialization exhibited by the two GPUs, a result of the restricted connectivity described in Section 3.5. The kernels on GPU 1 are largely color-agnostic, while the kernels on GPU 2 are largely color-specific. This kind of specialization occurs during every run and is independent of any particular random weight initialization (modulo a renumbering of the GPUs).
Figure 4: (Left) Eight ILSVRC-2010 test images and the five labels considered most probable by our model. The correct label is written under each image, and the probability assigned to the correct label is also shown with a red bar (if it happens to be in the top 5). (Right) Five ILSVRC-2010 test images in the first column. The remaining columns show the six training images that produce feature vectors in the last hidden layer with the smallest Euclidean distance from the feature vector for the test image.
In the left panel of Figure 4 we qualitatively assess what the network has learned by computing its top-5 predictions on eight test images. Notice that even off-center objects, such as the mite in the top-left, can be recognized by the net. Most of the top-5 labels appear reasonable. For example, only other types of cat are considered plausible labels for the leopard. In some cases (grille, cherry) there is genuine ambiguity about the intended focus of the photograph.
Another way to probe the network's visual knowledge is to consider the feature activations induced by an image at the last, 4096-dimensional hidden layer. If two images produce feature activation vectors with a small Euclidean separation, we can say that the higher levels of the neural network consider them to be similar. Figure 4 shows five images from the test set and the six images from the training set that are most similar to each of them according to this measure. Notice that at the pixel level, the retrieved training images are generally not close in L2 to the query images in the first column. For example, the retrieved dogs and elephants appear in a variety of poses. We present the results for many more test images in the supplementary material. Computing similarity by using Euclidean distance between two 4096-dimensional, real-valued vectors is inefficient, but it could be made efficient by training an auto-encoder to compress these vectors to short binary codes. This should produce a much better image retrieval method than applying autoencoders to the raw pixels, which does not make use of image labels and hence has a tendency to retrieve images with similar patterns of edges, whether or not they are semantically similar.
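A small sketch of this retrieval-by-activation idea (ours); `features` stands in for the last-hidden-layer activations of a set of training images:

import numpy as np

def nearest_by_activation(query_feat, features, k=6):
    # Return the indices of the k training images whose 4096-d feature vectors lie
    # closest (in Euclidean distance) to the query image's feature vector.
    dists = np.linalg.norm(features - query_feat, axis=1)
    return np.argsort(dists)[:k]

features = np.random.rand(1_000, 4096).astype(np.float32)   # placeholder activations
query_feat = np.random.rand(4096).astype(np.float32)
print(nearest_by_activation(query_feat, features))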
7 Discussion

Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning. It is notable that our network's performance degrades if a single convolutional layer is removed. For example, removing any of the middle layers results in a loss of about 2% for the top-1 performance of the network. So the depth really is important for achieving our results. To simplify our experiments, we did not use any unsupervised pre-training even though we expect that it will help, especially if we obtain enough computational power to significantly increase the size of the network without obtaining a corresponding increase in the amount of labeled data. Thus far, our results have improved as we have made our network larger and trained it longer but we still have many orders of magnitude to go in order to match the infero-temporal pathway of the human visual system. Ultimately we would like to use very large and deep convolutional nets on video sequences where the temporal structure provides very helpful information that is missing or far less obvious in static images.