【翻译】【YOLOv2】YOLO9000: Better, Faster, Stronger


YOLO9000: Better, Faster, Stronger
YOLO9000:更好,更快,更强

Joseph Redmon, Ali Farhadi

论文:YOLO9000: Better, Faster, Stronger
项目:http://pjreddie.com/yolo9000/

Abstract(摘要)

We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster R-CNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don’t have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time.
  我们介绍了YOLO9000,一个最先进的实时目标检测系统,可以检测超过9000个物体类别。首先,我们对YOLO检测方法提出了多项改进,这些改进既有新颖的设计,也有借鉴自先前工作的方法。改进后的模型YOLOv2在PASCAL VOC和COCO等标准检测任务上达到了最先进水平。借助一种新的多尺度训练方法,同一个YOLOv2模型可以在不同的输入尺寸下运行,在速度和精度之间提供简单的权衡。在67 FPS下,YOLOv2在VOC 2007上达到76.8 mAP;在40 FPS下,YOLOv2达到78.6 mAP,超过了采用ResNet的Faster R-CNN和SSD等最先进方法,同时运行速度仍明显更快。最后,我们提出了一种在目标检测和分类上联合训练的方法。利用这种方法,我们在COCO检测数据集和ImageNet分类数据集上同时训练YOLO9000。这种联合训练使YOLO9000能够对没有检测标注数据的物体类别进行检测预测。我们在ImageNet检测任务上验证了我们的方法。尽管200个类别中只有44个有检测数据,YOLO9000在ImageNet检测验证集上仍达到19.7 mAP;在COCO中不包含的156个类别上,YOLO9000达到16.0 mAP。YOLO9000能够实时地检测超过9000个不同的物体类别。

1. Introduction(介绍)

General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained to a small set of objects.
  通用的目标检测应该是快速、准确的,并且能够识别各种各样的物体。自从引入神经网络以来,检测框架已经变得越来越快和准确。然而,大多数检测方法仍然被限制在一小部分物体上。
  Current object detection datasets are limited compared to datasets for other tasks like classification and tagging. The most common detection datasets contain thousands to hundreds of thousands of images with dozens to hundreds of tags [3] [10] [2]. Classification datasets have millions of images with tens or hundreds of thousands of categories [20] [2].
  与分类和标记等其他任务的数据集相比,目前的目标检测数据集很有限。最常见的检测数据集包含几千到几十万张图像,有几十到几百个标签[3] [10] [2]。分类数据集有数以百万计的图像和几万或几十万的类别[20] [2]。
  We would like detection to scale to level of object classification. However, labelling images for detection is far more expensive than labelling for classification or tagging (tags are often user-supplied for free). Thus we are unlikely to see detection datasets on the same scale as classification datasets in the near future.
  我们希望检测能够扩展到物体分类那样的规模。然而,为检测任务标注图像的成本远高于为分类或打标签做标注(标签通常由用户免费提供)。因此,我们不太可能在近期看到与分类数据集同等规模的检测数据集。
  We propose a new method to harness the large amount of classification data we already have and use it to expand the scope of current detection systems. Our method uses a hierarchical view of object classification that allows us to combine distinct datasets together.
  我们提出了一种新的方法来利用我们已经拥有的大量分类数据,并利用它来扩大当前检测系统的范围。我们的方法使用目标分类的分层观点,使我们能够将不同的数据集结合在一起。
  We also propose a joint training algorithm that allows us to train object detectors on both detection and classification data. Our method leverages labeled detection images to learn to precisely localize objects while it uses classification images to increase its vocabulary and robustness.
  我们还提出了一种联合训练算法,使我们能够在检测和分类数据上训练目标检测器。我们的方法利用标记的检测图像来学习精确地定位目标,同时它使用分类图像来增加其词汇量和鲁棒性。
  Using this method we train YOLO9000, a real-time object detector that can detect over 9000 different object categories. First we improve upon the base YOLO detection system to produce YOLOv2, a state-of-the-art, real-time detector. Then we use our dataset combination method and joint training algorithm to train a model on more than 9000 classes from ImageNet as well as detection data from COCO.
  使用这种方法,我们训练了YOLO9000,一个可以检测超过9000个不同物体类别的实时目标检测器。首先,我们对基础的YOLO检测系统加以改进,得到YOLOv2,一个最先进的实时检测器。然后,我们用我们的数据集组合方法和联合训练算法,在ImageNet的9000多个类别以及COCO的检测数据上训练一个模型。
  All of our code and pre-trained models are available online at http://pjreddie.com/yolo9000/.
  我们所有的代码和预训练模型都可以在 http://pjreddie.com/yolo9000/ 在线获取。

2. Better(更好)

YOLO suffers from a variety of shortcomings relative to state-of-the-art detection systems. Error analysis of YOLO compared to Fast R-CNN shows that YOLO makes a significant number of localization errors. Furthermore, YOLO has relatively low recall compared to region proposal-based methods. Thus we focus mainly on improving recall and localization while maintaining classification accuracy.
  相对于最先进的检测系统,YOLO存在多方面的不足。与Fast R-CNN相比的误差分析表明,YOLO产生了大量的定位错误。此外,与基于region proposal的方法相比,YOLO的召回率相对较低。因此,我们主要关注在保持分类精度的同时提高召回率和定位准确性。
  Computer vision generally trends towards larger, deeper networks [6] [18] [17]. Better performance often hinges on training larger networks or ensembling multiple models together. However, with YOLOv2 we want a more accurate detector that is still fast. Instead of scaling up our network, we simplify the network and then make the representation easier to learn. We pool a variety of ideas from past work with our own novel concepts to improve YOLO’s performance. A summary of results can be found in Table 2.
  计算机视觉通常趋向于更大、更深的网络[6] [18] [17]。更好的性能往往取决于训练更大的网络或将多个模型组合在一起。然而,在YOLOv2中,我们希望有一个更准确的检测器,但仍然是快速的。我们没有扩大我们的网络,而是简化了网络,然后使表示更容易学习。我们将过去工作中的各种想法与我们自己的新概念结合起来,以提高YOLO的性能。在表2中可以看到结果的总结。
  Batch Normalization. Batch normalization leads to significant improvements in convergence while eliminating the need for other forms of regularization [7]. By adding batch normalization on all of the convolutional layers in YOLO we get more than 2% improvement in mAP. Batch normalization also helps regularize the model. With batch normalization we can remove dropout from the model without overfitting.
  批归一化。批归一化能显著改善收敛性,同时消除对其他形式正则化的需求[7]。通过在YOLO的所有卷积层上添加批归一化,我们的mAP提升了超过2%。批归一化也有助于正则化模型。有了批归一化,我们可以去掉模型中的dropout而不会过拟合。
  High Resolution Classifier. All state-of-the-art detection methods use classifier pre-trained on ImageNet [16]. Starting with AlexNet most classifiers operate on input images smaller than 256 × 256 [8]. The original YOLO trains the classifier network at 224 × 224 and increases the resolution to 448 for detection. This means the network has to simultaneously switch to learning object detection and adjust to the new input resolution.
  高分辨率的分类器。所有最先进的检测方法都使用在ImageNet上预训练的分类器[16]。从AlexNet开始,大多数分类器在小于256×256的输入图像上运行[8]。最初的YOLO以224×224训练分类器网络,然后在检测时把分辨率提高到448。这意味着网络必须同时切换去学习目标检测,并适应新的输入分辨率。
  For YOLOv2 we first fine tune the classification network at the full 448 × 448 resolution for 10 epochs on ImageNet. This gives the network time to adjust its filters to work better on higher resolution input. We then fine tune the resulting network on detection. This high resolution classification network gives us an increase of almost 4% mAP.
  对于YOLOv2,我们首先在ImageNet上以448×448的完整分辨率对分类网络微调10个epoch。这让网络有时间调整它的滤波器,以便在更高分辨率的输入上工作得更好。然后我们再在检测任务上微调得到的网络。这个高分辨率分类网络使我们的mAP提高了近4%。
  Convolutional With Anchor Boxes. YOLO predicts the coordinates of bounding boxes directly using fully connected layers on top of the convolutional feature extractor. Instead of predicting coordinates directly Faster R-CNN predicts bounding boxes using hand-picked priors [15]. Using only convolutional layers the region proposal network (RPN) in Faster R-CNN predicts offsets and confidences for anchor boxes. Since the prediction layer is convolutional, the RPN predicts these offsets at every location in a feature map. Predicting offsets instead of coordinates simplifies the problem and makes it easier for the network to learn.
  带有锚框的卷积。YOLO直接在卷积特征提取器之上用全连接层预测bounding boxes(下用bbox)的坐标。Faster R-CNN不直接预测坐标,而是使用手工挑选的先验框来预测bbox[15]。Faster R-CNN中的region proposal网络(RPN)只使用卷积层,预测锚框的偏移量和置信度。由于预测层是卷积的,RPN会在特征图的每个位置预测这些偏移量。预测偏移量而不是坐标可以简化问题,使网络更容易学习。
  We remove the fully connected layers from YOLO and use anchor boxes to predict bounding boxes. First we eliminate one pooling layer to make the output of the network’s convolutional layers higher resolution. We also shrink the network to operate on 416 input images instead of 448×448. We do this because we want an odd number of locations in our feature map so there is a single center cell. Objects, especially large objects, tend to occupy the center of the image so it’s good to have a single location right at the center to predict these objects instead of four locations that are all nearby. YOLO’s convolutional layers downsample the image by a factor of 32 so by using an input image of 416 we get an output feature map of 13 × 13.
  我们从YOLO中移除全连接层,并使用锚框来预测边界框。首先,我们去掉了一个池化层,使网络卷积层的输出分辨率更高。我们还把网络的输入从448×448缩小到416×416。我们这样做是因为希望特征图的位置数是奇数,从而只有一个中心单元格。物体(尤其是大物体)往往占据图像的中心,所以最好由正中心的那个单元格来预测这些物体,而不是由中心附近的四个单元格来预测。YOLO的卷积层对图像进行32倍的下采样,所以使用416的输入图像,我们得到13×13的输出特征图。
  When we move to anchor boxes we also decouple the class prediction mechanism from the spatial location and instead predict class and objectness for every anchor box. Following YOLO, the objectness prediction still predicts the IOU of the ground truth and the proposed box and the class predictions predict the conditional probability of that class given that there is an object.
  当我们改用锚框时,我们也把类别预测机制与空间位置解耦,改为对每个锚框预测类别和objectness。沿用YOLO的做法,objectness预测仍然预测ground truth(下用GT替代)与候选框的IOU,而类别预测则是在存在物体的条件下预测该类别的条件概率。
  Using anchor boxes we get a small decrease in accuracy. YOLO only predicts 98 boxes per image but with anchor boxes our model predicts more than a thousand. Without anchor boxes our intermediate model gets 69.5 mAP with a recall of 81%. With anchor boxes our model gets 69.2 mAP with a recall of 88%. Even though the mAP decreases, the increase in recall means that our model has more room to improve.
  使用锚框后,我们的精度有小幅下降。YOLO对每张图片只预测98个框,而使用锚框后我们的模型预测超过一千个框。在没有锚框的情况下,我们的中间模型得到69.5 mAP,召回率为81%;有了锚框,我们的模型得到69.2 mAP,召回率为88%。即使mAP下降了,召回率的提升意味着我们的模型还有更大的改进空间。
  Dimension Clusters. We encounter two issues with anchor boxes when using them with YOLO. The first is that the box dimensions are hand picked. The network can learn to adjust the boxes appropriately but if we pick better priors for the network to start with we can make it easier for the network to learn to predict good detections.
  维度聚类。在YOLO中使用锚框时,我们遇到了两个问题。第一个问题是,框的尺寸是手工挑选的。网络可以学习去适当地调整这些框,但如果我们一开始就为网络挑选更好的先验,就能让网络更容易学会预测出好的检测结果。
  Instead of choosing priors by hand, we run k-means clustering on the training set bounding boxes to automatically find good priors. If we use standard k-means with Euclidean distance larger boxes generate more error than smaller boxes. However, what we really want are priors that lead to good IOU scores, which is independent of the size of the box. Thus for our distance metric we use:
  我们没有手工选择先验框,而是在训练集的边界框上运行k-means聚类,来自动找到好的先验。如果在标准k-means中使用欧氏距离,较大的框会比较小的框产生更多的误差。然而,我们真正想要的是能带来良好IOU分数的先验,而这与框的大小无关。因此,我们的距离度量使用:

d(box, centroid) = 1 − IOU(box, centroid)
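  下面给出一个示意性的Python实现(并非论文官方代码,仅帮助理解),用 1 − IOU 作为距离在一组标注框的宽高上做k-means;其中IOU的计算假设所有框都与聚类中心左上角对齐,只比较宽高:

```python
import numpy as np

def iou_wh(box, centroids):
    """只比较宽高的IOU:假设框与聚类中心左上角对齐。box: (w, h),centroids: (k, 2)。"""
    inter = np.minimum(box[0], centroids[:, 0]) * np.minimum(box[1], centroids[:, 1])
    union = box[0] * box[1] + centroids[:, 0] * centroids[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, iters=100, seed=0):
    """boxes: (N, 2) 的宽高数组(一般先归一化到图像或特征图尺度)。返回k个先验框的宽高。"""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # 距离度量:d = 1 - IOU(box, centroid)
        dists = np.array([1 - iou_wh(b, centroids) for b in boxes])  # (N, k)
        assign = dists.argmin(axis=1)
        new_centroids = np.array([
            boxes[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
            for i in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids
```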
  We run k-means for various values of k and plot the average IOU with closest centroid, see Figure 2. We choose k = 5 as a good tradeoff between model complexity and high recall. The cluster centroids are significantly different than hand-picked anchor boxes. There are fewer short, wide boxes and more tall, thin boxes.
  我们对不同的k值运行k-means,并绘制出最接近中心点的平均IOU,见图2。我们选择k=5作为模型复杂性和高召回率之间的良好权衡。聚类中心点与手工挑选的锚框有明显不同。短而宽的boxes较少,高而窄的boxes较多。
  We compare the average IOU to closest prior of our clustering strategy and the hand-picked anchor boxes in Table 1. At only 5 priors the centroids perform similarly to 9 anchor boxes with an average IOU of 61.0 compared to 60.9. If we use 9 centroids we see a much higher average IOU. This indicates that using k-means to generate our bounding box starts the model off with a better representation and makes the task easier to learn.
  我们在表1中比较了聚类策略得到的先验与手工挑选的锚框相对最近先验的平均IOU。只用5个先验时,聚类中心的表现就与9个锚框相近:前者平均IOU为61.0,后者为60.9。如果使用9个聚类中心,平均IOU会高得多。这表明,用k-means生成边界框先验让模型从更好的表示出发,使任务更容易学习。
  Direct location prediction. When using anchor boxes with YOLO we encounter a second issue: model instability, especially during early iterations. Most of the instability comes from predicting the (x, y) locations for the box. In region proposal networks the network predicts values tx and ty and the (x, y) center coordinates are calculated as:
  直接位置预测。在YOLO中使用锚框时,我们遇到了第二个问题:模型不稳定,尤其是在早期迭代中。大部分不稳定来自对框的(x, y)位置的预测。在region proposal网络中,网络预测 t_x 和 t_y,(x, y)中心坐标按下式计算:

x = (t_x × w_a) + x_a
y = (t_y × h_a) + y_a
  For example, a prediction of t_x = 1 would shift the box to the right by the width of the anchor box, a prediction of t_x = −1 would shift it to the left by the same amount.
  例如,预测 t_x = 1 会把框向右移动一个锚框宽度的距离,预测 t_x = −1 则会把它向左移动相同的距离。
  This formulation is unconstrained so any anchor box can end up at any point in the image, regardless of what location predicted the box. With random initialization the model takes a long time to stabilize to predicting sensible offsets.
  这个公式没有任何约束,所以无论是图中哪个位置做出的预测,锚框最终都可能落在图像中的任意一点。在随机初始化的情况下,模型需要很长时间才能稳定下来,预测出合理的偏移量。
  Instead of predicting offsets we follow the approach of YOLO and predict location coordinates relative to the location of the grid cell. This bounds the ground truth to fall between 0 and 1. We use a logistic activation to constrain the network’s predictions to fall in this range.
  我们没有预测偏移量,而是沿用YOLO的做法,预测相对于网格单元位置的位置坐标。这样ground truth就被约束在0到1之间。我们使用logistic激活函数来约束网络的预测,使其落在这个范围内。
  The network predicts 5 bounding boxes at each cell in the output feature map. The network predicts 5 coordinates for each bounding box, t_x, t_y, t_w, t_h, and t_o. If the cell is offset from the top left corner of the image by (c_x, c_y) and the bounding box prior has width and height p_w, p_h, then the predictions correspond to:
  该网络在输出特征图的每个单元格预测5个bbox。网络为每个bbox预测5个坐标,即 t_x、t_y、t_w、t_h 和 t_o。如果该单元格相对图像左上角的偏移量为 (c_x, c_y),且bbox先验的宽和高为 p_w、p_h,则预测值对应于:

b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^{t_w}
b_h = p_h · e^{t_h}
Pr(object) · IOU(b, object) = σ(t_o)
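  下面给出一个示意性的解码函数(假设性实现,仅供理解,并非论文官方代码),按照上面的公式把网络输出的 t_x、t_y、t_w、t_h、t_o 转换成边界框:

```python
import numpy as np

def decode_box(t, cell_xy, prior_wh):
    """t: (tx, ty, tw, th, to);cell_xy: 单元格相对图像左上角的偏移 (cx, cy);
    prior_wh: 先验框的宽高 (pw, ph)。返回以特征图单元为单位的框中心、宽高和objectness。"""
    tx, ty, tw, th, to = t
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    bx = sigmoid(tx) + cell_xy[0]    # 中心x被约束在该单元格内
    by = sigmoid(ty) + cell_xy[1]    # 中心y被约束在该单元格内
    bw = prior_wh[0] * np.exp(tw)    # 宽度 = 先验宽度 × e^tw
    bh = prior_wh[1] * np.exp(th)    # 高度 = 先验高度 × e^th
    objectness = sigmoid(to)         # σ(to) 预测 Pr(object)·IOU
    return bx, by, bw, bh, objectness
```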
  Since we constrain the location prediction the parametrization is easier to learn, making the network more stable. Using dimension clusters along with directly predicting the bounding box center location improves YOLO by almost 5% over the version with anchor boxes.
  由于我们限制了位置预测,参数化更容易学习,使网络更稳定。使用维度聚类以及直接预测bbox中心位置,比起使用锚框的版本,YOLO性能提高了近5%。
  Fine-Grained Features.This modified YOLO predicts detections on a 13 × 13 feature map. While this is sufficient for large objects, it may benefit from finer grained features for localizing smaller objects. Faster R-CNN and SSD both run their proposal networks at various feature maps in the network to get a range of resolutions. We take a different approach, simply adding a passthrough layer that brings features from an earlier layer at 26 × 26 resolution.
  细粒度特征。改进后的YOLO在13×13的特征图上进行检测预测。这对大目标已经足够,但定位较小的目标时,可能会从更细粒度的特征中获益。Faster R-CNN和SSD都在网络中的多个特征图上运行proposal网络,以获得一系列分辨率。我们采取了不同的方法:简单地增加一个passthrough(直通)层,从更早的、分辨率为26×26的层引入特征。
  The passthrough layer concatenates the higher resolution features with the low resolution features by stacking adjacent features into different channels instead of spatial locations, similar to the identity mappings in ResNet. This turns the 26 × 26 × 512 feature map into a 13 × 13 × 2048 feature map, which can be concatenated with the original features. Our detector runs on top of this expanded feature map so that it has access to fine grained features. This gives a modest 1% performance increase.
  passthrough层通过将相邻的特征堆叠到不同的通道而不是空间位置,将高分辨率的特征与低分辨率的特征串联起来,类似于ResNet中的恒等映射。这就把26×26×512的特征图变成了13×13×2048的特征图,它可以与原始特征串联起来。我们的检测器在这个扩展的特征图之上运行,这样它就可以访问细粒度的特征。这使性能有了1%的适度提高。
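  这个重排操作可以用下面的示意代码理解(假设性实现,仅演示space-to-depth的重排方式,不代表Darknet的具体实现细节):

```python
import numpy as np

def passthrough(x, stride=2):
    """示意性的passthrough(space-to-depth)操作:
    把 (H, W, C) 的特征图重排成 (H/s, W/s, C*s*s),例如 26×26×512 -> 13×13×2048。"""
    h, w, c = x.shape
    assert h % stride == 0 and w % stride == 0
    x = x.reshape(h // stride, stride, w // stride, stride, c)
    x = x.transpose(0, 2, 1, 3, 4)   # 把每个2×2邻域的特征聚到一起
    return x.reshape(h // stride, w // stride, c * stride * stride)

features = np.random.rand(26, 26, 512).astype(np.float32)
print(passthrough(features).shape)   # (13, 13, 2048)
```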
  Multi-Scale Training. The original YOLO uses an input resolution of 448 × 448. With the addition of anchor boxes we changed the resolution to 416×416. However, since our model only uses convolutional and pooling layers it can be resized on the fly. We want YOLOv2 to be robust to running on images of different sizes so we train this into the model.
  多尺度的训练。原始的YOLO使用448×448的输入分辨率。加入锚框后,我们把分辨率改为416×416。然而,由于我们的模型只使用卷积层和池化层,它可以随时调整输入大小。我们希望YOLOv2能够稳健地运行在不同尺寸的图像上,所以把这一点训练进了模型里。
  Instead of fixing the input image size we change the network every few iterations. Every 10 batches our network randomly chooses new image dimensions. Since our model downsamples by a factor of 32, we pull from the following multiples of 32: {320, 352, …, 608}. Thus the smallest option is 320 × 320 and the largest is 608 × 608. We resize the network to that dimension and continue training.
  我们没有固定输入图像的尺寸,而是每隔几次迭代就改变一次网络输入。每10个batch,我们的网络随机选择新的图像尺寸。由于模型的下采样倍数是32,我们从以下32的倍数中选取:{320, 352, …, 608}。因此最小的选项是320×320,最大的是608×608。我们把网络调整到该尺寸后继续训练。
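  这一训练流程可以用下面的示意代码表达(假设性伪实现,其中 resize_network 和 train_on_batch 是假想的辅助函数):

```python
import random

SCALES = list(range(320, 609, 32))   # {320, 352, ..., 608}

def multi_scale_training(batches, resize_network, train_on_batch):
    """batches: 批次迭代器;resize_network / train_on_batch: 假想的回调函数。"""
    size = 416
    for i, batch in enumerate(batches):
        if i % 10 == 0:                  # 每10个batch随机选择新的输入尺寸
            size = random.choice(SCALES)
            resize_network(size)         # 全卷积网络可以直接改变输入分辨率
        train_on_batch(batch, input_size=size)
```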
  This regime forces the network to learn to predict well across a variety of input dimensions. This means the same network can predict detections at different resolutions. The network runs faster at smaller sizes so YOLOv2 offers an easy tradeoff between speed and accuracy.
  这种训练机制迫使网络学会在各种输入尺寸上都能很好地预测。这意味着同一个网络可以在不同分辨率下预测检测结果。网络在较小尺寸下运行得更快,因此YOLOv2在速度和精度之间提供了一个简单的权衡。
  At low resolutions YOLOv2 operates as a cheap, fairly accurate detector. At 288 × 288 it runs at more than 90 FPS with mAP almost as good as Fast R-CNN. This makes it ideal for smaller GPUs, high framerate video, or multiple video streams.
  在低分辨率下,YOLOv2作为一个廉价的、相当准确的检测器运行。在288×288时,它以超过90 FPS的速度运行,mAP几乎与Fast R-CNN一样好。这使它成为较小的GPU、高帧率视频或多视频流的理想选择。
  At high resolution YOLOv2 is a state-of-the-art detector with 78.6 mAP on VOC 2007 while still operating above real-time speeds. See Table 3 for a comparison of YOLOv2 with other frameworks on VOC 2007, and Figure 4 for the accuracy/speed tradeoff.
  在高分辨率下,YOLOv2是最先进的检测器,在VOC 2007上达到78.6 mAP,同时仍以高于实时的速度运行。YOLOv2与其他框架在VOC 2007上的比较见表3,精度与速度的权衡见图4。
  Further Experiments. We train YOLOv2 for detection on VOC 2012. Table 4 shows the comparative performance of YOLOv2 versus other state-of-the-art detection systems. YOLOv2 achieves 73.4 mAP while running far faster than other methods. We also train on COCO, see Table 5. On the VOC metric (IOU = .5) YOLOv2 gets 44.0 mAP, comparable to SSD and Faster R-CNN.
  进一步的实验。我们在VOC 2012上训练YOLOv2进行检测。表4显示了YOLOv2与其他最先进检测系统的性能比较。YOLOv2达到73.4 mAP,同时运行速度远快于其他方法。我们也在COCO上训练,见表5。在VOC指标(IOU = 0.5)上,YOLOv2得到44.0 mAP,与SSD和Faster R-CNN相当。

3. Faster(更快)

We want detection to be accurate but we also want it to be fast. Most applications for detection, like robotics or selfdriving cars, rely on low latency predictions. In order to maximize performance we design YOLOv2 to be fast from the ground up.
  我们希望检测是准确的,但我们也希望它是快速的。大多数检测的应用,如机器人或自动驾驶汽车,都依赖低延迟的预测。为了最大化性能,我们从底层设计上就让YOLOv2尽可能快。
  Most detection frameworks rely on VGG-16 as the base feature extractor [17]. VGG-16 is a powerful, accurate classification network but it is needlessly complex. The convolutional layers of VGG-16 require 30.69 billion floating point operations for a single pass over a single image at 224 × 224 resolution.
  大多数检测框架依靠VGG-16作为基础特征提取器[17]。VGG-16是一个强大、准确的分类网络,但它复杂得没有必要。VGG-16的卷积层在224×224分辨率下处理单张图像的一次前向传播就需要306.9亿次浮点运算。
  The YOLO framework uses a custom network based on the GoogLeNet architecture [19]. This network is faster than VGG-16, only using 8.52 billion operations for a forward pass. However, it’s accuracy is slightly worse than VGG-16. For single-crop, top-5 accuracy at 224 × 224, YOLO’s custom model gets 88.0% ImageNet compared to 90.0% for VGG-16.
  YOLO框架使用一个基于GoogLeNet架构的定制网络[19]。这个网络比VGG-16快,一次前向传播只需要85.2亿次运算。然而,它的精度比VGG-16略差。对于224×224的单裁剪(single-crop)top-5准确率,YOLO的定制模型在ImageNet上得到88.0%,而VGG-16为90.0%。
  Darknet-19. We propose a new classification model to be used as the base of YOLOv2. Our model builds off of prior work on network design as well as common knowledge in the field. Similar to the VGG models we use mostly 3 × 3 filters and double the number of channels after every pooling step [17]. Following the work on Network in Network (NIN) we use global average pooling to make predictions as well as 1 × 1 filters to compress the feature representation between 3 × 3 convolutions [9]. We use batch normalization to stabilize training, speed up convergence, and regularize the model [7].
  Darknet-19。我们提出一个新的分类模型作为YOLOv2的基础。我们的模型建立在先前的网络设计工作以及该领域的共识之上。与VGG模型类似,我们主要使用3×3的滤波器,并在每次池化之后把通道数量翻倍[17]。沿用Network in Network(NIN)的做法,我们使用全局平均池化来做预测,并用1×1滤波器压缩3×3卷积之间的特征表示[9]。我们使用批归一化来稳定训练、加快收敛并正则化模型[7]。
  Our final model, called Darknet-19, has 19 convolutional layers and 5 maxpooling layers. For a full description see Table 6. Darknet-19 only requires 5.58 billion operations to process an image yet achieves 72.9% top-1 accuracy and 91.2% top-5 accuracy on ImageNet.
  我们最终的模型,称为Darknet-19,有19个卷积层和5个maxpooling层。完整的描述见表6。Darknet-19只需要55.8亿次操作来处理一张图片,但在ImageNet上却取得了72.9%的top-1准确率和91.2%的top-5准确率。
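  作为理解上述设计的一个示意(假设性的PyTorch风格写法,并非官方Darknet代码),下面给出Darknet-19中典型的“3×3卷积 + 批归一化 + Leaky ReLU”基本模块,以及用1×1卷积压缩通道的方式:

```python
import torch.nn as nn

def conv_bn_leaky(in_ch, out_ch, kernel_size):
    """Darknet-19 的基本单元:卷积(无偏置)+ 批归一化 + LeakyReLU。"""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )

# 例如:“3×3卷积 -> 1×1压缩 -> 3×3卷积”的片段,后接最大池化
block = nn.Sequential(
    conv_bn_leaky(128, 256, 3),
    conv_bn_leaky(256, 128, 1),   # 1×1卷积压缩特征表示
    conv_bn_leaky(128, 256, 3),
    nn.MaxPool2d(2, 2),
)
```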
  Training for classification. We train the network on the standard ImageNet 1000 class classification dataset for 160 epochs using stochastic gradient descent with a starting learning rate of 0.1, polynomial rate decay with a power of 4, weight decay of 0.0005 and momentum of 0.9 using the Darknet neural network framework [13]. During training we use standard data augmentation tricks including random crops, rotations, and hue, saturation, and exposure shifts.
  分类的训练。我们使用Darknet神经网络框架[13],用随机梯度下降在标准的ImageNet 1000类分类数据集上训练网络160个epoch,起始学习率为0.1,采用幂次为4的多项式学习率衰减,权重衰减为0.0005,动量为0.9。训练过程中,我们使用标准的数据增强技巧,包括随机裁剪、旋转,以及色调、饱和度和曝光度的扰动。
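  其中的多项式学习率衰减可以按下面这个示意函数理解(假设性写法,具体以Darknet框架的实现为准):

```python
def poly_lr(base_lr, step, max_steps, power=4):
    """幂次为power的多项式学习率衰减:lr = base_lr * (1 - step/max_steps)^power。"""
    return base_lr * (1 - step / max_steps) ** power

# 例如:起始学习率0.1,总训练长度按160个epoch计
for epoch in range(0, 161, 40):
    print(epoch, round(poly_lr(0.1, epoch, 160), 6))
```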
  As discussed above, after our initial training on images at 224 × 224 we fine tune our network at a larger size, 448. For this fine tuning we train with the above parameters but for only 10 epochs and starting at a learning rate of $10^{-3}$. At this higher resolution our network achieves a top-1 accuracy of 76.5% and a top-5 accuracy of 93.3%.
  如上所述,在224×224的图像上完成初始训练后,我们在更大的尺寸448上对网络进行微调。这次微调使用上述参数,但只训练10个epoch,起始学习率为 $10^{-3}$。在这个更高的分辨率下,我们的网络达到了76.5%的top-1准确率和93.3%的top-5准确率。
  Training for detection. We modify this network for detection by removing the last convolutional layer and instead adding on three 3 × 3 convolutional layers with 1024 filters each followed by a final 1 × 1 convolutional layer with the number of outputs we need for detection. For VOC we predict 5 boxes with 5 coordinates each and 20 classes per box so 125 filters. We also add a passthrough layer from the final 3 × 3 × 512 layer to the second to last convolutional layer so that our model can use fine grain features.
  检测的训练。我们对这个网络做如下修改用于检测:去掉最后一个卷积层,改为添加三个各有1024个滤波器的3×3卷积层,最后再接一个1×1卷积层,其输出通道数正是检测所需的数量。对于VOC,我们预测5个框,每个框有5个坐标和20个类别,所以需要125个滤波器。我们还从最后的3×3×512层到倒数第二个卷积层添加了一个passthrough层,使模型能够利用细粒度特征。
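  检测输出层的通道数可以这样推算(示意性代码):

```python
num_anchors = 5      # 每个单元格的先验框数
num_coords = 5       # tx, ty, tw, th, to
num_classes = 20     # VOC 的类别数
filters = num_anchors * (num_coords + num_classes)
print(filters)       # 125:最后那个 1×1 卷积层的输出通道数
```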
  We train the network for 160 epochs with a starting learning rate of $10^{-3}$, dividing it by 10 at 60 and 90 epochs. We use a weight decay of 0.0005 and momentum of 0.9. We use a similar data augmentation to YOLO and SSD with random crops, color shifting, etc. We use the same training strategy on COCO and VOC.
  我们以 $10^{-3}$ 的起始学习率训练网络160个epoch,并在第60和第90个epoch时把学习率除以10。我们使用0.0005的权重衰减和0.9的动量。我们使用与YOLO和SSD类似的数据增强,包括随机裁剪、颜色扰动等。我们在COCO和VOC上使用同样的训练策略。

4. Stronger(更强)

We propose a mechanism for jointly training on classification and detection data. Our method uses images labelled for detection to learn detection-specific information like bounding box coordinate prediction and objectness as well as how to classify common objects. It uses images with only class labels to expand the number of categories it can detect.
  我们提出了一种在分类和检测数据上联合训练的机制。我们的方法使用有检测标注的图像来学习检测特有的信息,例如边界框坐标预测和objectness,以及如何对常见物体进行分类。它使用只有类别标签的图像来扩展可检测的类别数量。
  During training we mix images from both detection and classification datasets. When our network sees an image labelled for detection we can backpropagate based on the full YOLOv2 loss function. When it sees a classification image we only backpropagate loss from the classificationspecific parts of the architecture.
  在训练过程中,我们混合了检测和分类数据集的图像。当我们的网络看到一个标记为检测的图像时,我们可以根据完整的YOLOv2损失函数进行反向传播。当它看到一个分类图像时,我们只从架构的分类特定部分反向传播损失。
  This approach presents a few challenges. Detection datasets have only common objects and general labels, like “dog” or “boat”. Classification datasets have a much wider and deeper range of labels. ImageNet has more than a hundred breeds of dog, including “Norfolk terrier”, “Yorkshire terrier”, and “Bedlington terrier”. If we want to train on both datasets we need a coherent way to merge these labels.
  这种方法带来了一些挑战。检测数据集只有常见的物体和较笼统的标签,如“狗”或“船”。分类数据集的标签则广泛和细致得多。ImageNet有一百多个狗的品种,包括“Norfolk terrier”、“Yorkshire terrier”和“Bedlington terrier”。如果我们想在两个数据集上训练,就需要一种连贯的方式来合并这些标签。
  Most approaches to classification use a softmax layer across all the possible categories to compute the final probability distribution. Using a softmax assumes the classes are mutually exclusive. This presents problems for combining datasets, for example you would not want to combine ImageNet and COCO using this model because the classes “Norfolk terrier” and “dog” are not mutually exclusive.
  大多数分类方法在所有可能的类别上使用一个softmax层来计算最终的概率分布。使用softmax时,假定这些类别是互斥的。这给合并数据集带来了问题,例如,你不会想用这种模型合并ImageNet和COCO,因为“Norfolk terrier”和“dog”这两个类别并不互斥。
  We could instead use a multi-label model to combine the datasets which does not assume mutual exclusion. This approach ignores all the structure we do know about the data, for example that all of the COCO classes are mutually exclusive.
  相反,我们可以使用一个多标签模型来组合数据集,而这个模型并不假定相互排斥。这种方法忽略了我们所知道的关于数据的所有结构,例如,所有的COCO类都是互斥的。
  Hierarchical classification. ImageNet labels are pulled from WordNet, a language database that structures concepts and how they relate [12]. In WordNet, “Norfolk terrier” and “Yorkshire terrier” are both hyponyms of “terrier” which is a type of “hunting dog”, which is a type of “dog”, which is a “canine”, etc. Most approaches to classification assume a flat structure to the labels however for combining datasets, structure is exactly what we need.
  分层分类。ImageNet的标签取自WordNet,这是一个组织概念及其相互关系的语言数据库[12]。在WordNet中,“Norfolk terrier”和“Yorkshire terrier”都是“terrier(梗犬)”的下位词,而“terrier”是“hunting dog(猎犬)”的一种,“猎犬”是“狗”的一种,“狗”又是“犬科动物”,依此类推。大多数分类方法都假定标签是扁平结构,然而对于合并数据集来说,结构正是我们所需要的。
  WordNet is structured as a directed graph, not a tree, because language is complex. For example a “dog” is both a type of “canine” and a type of “domestic animal” which are both synsets in WordNet. Instead of using the full graph structure, we simplify the problem by building a hierarchical tree from the concepts in ImageNet.
  WordNet的结构是一个有向图而不是一棵树,因为语言是复杂的。例如,“狗”既是“犬科动物”的一种,也是“家畜”的一种,而这两者在WordNet中都是同义词集(synset)。我们没有使用完整的图结构,而是利用ImageNet中的概念构建一棵分层的树来简化问题。
  To build this tree we examine the visual nouns in ImageNet and look at their paths through the WordNet graph to the root node, in this case “physical object”. Many synsets only have one path through the graph so first we add all of those paths to our tree. Then we iteratively examine the concepts we have left and add the paths that grow the tree by as little as possible. So if a concept has two paths to the root and one path would add three edges to our tree and the other would only add one edge, we choose the shorter path.
  为了建立这棵树,我们检查ImageNet中的视觉名词,并查看它们在WordNet图中通向根节点(这里是“physical object”,即物理对象)的路径。许多同义词集在图中只有一条路径,所以我们首先把所有这些路径加入树中。然后我们迭代地检查剩下的概念,添加使树增长得尽可能少的路径。因此,如果一个概念有两条通向根节点的路径,其中一条会给树增加三条边,而另一条只增加一条边,我们就选择较短的那条。
  The final result is WordTree, a hierarchical model of visual concepts. To perform classification with WordTree we predict conditional probabilities at every node for the probability of each hyponym of that synset given that synset. For example, at the “terrier” node we predict:
  最终的结果是WordTree,一个视觉概念的分层模型。为了用WordTree进行分类,我们在每个节点上预测条件概率,即在给定某个同义词集的条件下,它的每个下位词出现的概率。例如,在“terrier”节点,我们预测:

Pr(Norfolk terrier | terrier)
Pr(Yorkshire terrier | terrier)
Pr(Bedlington terrier | terrier)
…
  If we want to compute the absolute probability for a particular node we simply follow the path through the tree to the root node and multiply the conditional probabilities. So if we want to know if a picture is of a Norfolk terrier we compute:
  如果我们想计算某个特定节点的绝对概率,只需沿着树中通向根节点的路径,把沿途的条件概率相乘。因此,如果我们想知道一张图片是否是Norfolk terrier,我们就计算:

Pr(Norfolk terrier) = Pr(Norfolk terrier | terrier) × Pr(terrier | hunting dog) × … × Pr(mammal | animal) × Pr(animal | physical object)
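  这个“沿路径相乘”的计算可以用示意代码表达(假设性实现,parent 字典和条件概率表都只是举例用的玩具数据结构,真实的WordTree路径层级更多):

```python
def absolute_prob(node, cond_prob, parent, root="physical object"):
    """沿WordTree路径向上,把各级条件概率相乘得到绝对概率。
    cond_prob[n] = Pr(n | parent[n]);分类场景下假设 Pr(physical object) = 1。"""
    p = 1.0
    while node != root:
        p *= cond_prob[node]
        node = parent[node]
    return p

# 玩具示例(数值仅作演示):
parent = {"Norfolk terrier": "terrier", "terrier": "hunting dog",
          "hunting dog": "dog", "dog": "physical object"}
cond_prob = {"Norfolk terrier": 0.7, "terrier": 0.6, "hunting dog": 0.8, "dog": 0.9}
print(absolute_prob("Norfolk terrier", cond_prob, parent))  # 0.7*0.6*0.8*0.9 = 0.3024
```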
  For classification purposes we assume that the image contains an object: Pr(physical object) = 1.
  为了分类的目的,我们假设图像中包含一个物体:Pr(physical object) = 1。
  To validate this approach we train the Darknet-19 model on WordTree built using the 1000 class ImageNet. To build WordTree1k we add in all of the intermediate nodes which expands the label space from 1000 to 1369. During training we propagate ground truth labels up the tree so that if an image is labelled as a “Norfolk terrier” it also gets labelled as a “dog” and a “mammal”, etc. To compute the conditional probabilities our model predicts a vector of 1369 values and we compute the softmax over all synsets that are hyponyms of the same concept, see Figure 5.
  为了验证这种方法,我们在用1000类ImageNet构建的WordTree上训练Darknet-19模型。为了构建WordTree1k,我们加入了所有的中间节点,把标签空间从1000扩大到1369。在训练过程中,我们把ground truth标签沿树向上传播,这样,如果一张图片被标记为“Norfolk terrier”,它也会被标记为“狗”和“哺乳动物”等。为了计算条件概率,我们的模型预测一个1369维的向量,并对属于同一概念下位词的所有同义词集计算softmax,见图5。
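  “按同一父节点下的兄弟节点分组做softmax”的思路可以用下面的示意代码表达(假设性实现,分组方式仅作演示):

```python
import numpy as np

def grouped_softmax(logits, groups):
    """logits: 长度为1369的向量;groups: 每组是同一父节点下所有下位词在向量中的下标。
    在每一组内部单独做softmax,得到各节点的条件概率。"""
    logits = np.asarray(logits, dtype=np.float64)
    probs = np.empty_like(logits)
    for idx in groups:
        idx = np.asarray(idx)
        z = logits[idx] - logits[idx].max()   # 减去最大值保证数值稳定
        e = np.exp(z)
        probs[idx] = e / e.sum()
    return probs
```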
  Using the same training parameters as before, our hierarchical Darknet-19 achieves 71.9% top-1 accuracy and 90.4% top-5 accuracy. Despite adding 369 additional concepts and having our network predict a tree structure our accuracy only drops marginally. Performing classification in this manner also has some benefits. Performance degrades gracefully on new or unknown object categories. For example, if the network sees a picture of a dog but is uncertain what type of dog it is, it will still predict “dog” with high confidence but have lower confidences spread out among the hyponyms.
  使用与之前相同的训练参数,我们的分层Darknet-19达到了71.9%的top-1准确率和90.4%的top-5准确率。尽管增加了369个额外的概念,并让网络预测一个树状结构,我们的准确率只下降了一点点。以这种方式进行分类还有一些好处:在新的或未知的物体类别上,性能会平缓地下降。例如,如果网络看到一张狗的照片,但不确定具体是哪种狗,它仍然会以较高的置信度预测“狗”,只是把较低的置信度分散到各个下位词上。
  This formulation also works for detection. Now, instead of assuming every image has an object, we use YOLOv2’s objectness predictor to give us the value of P r(physical object). The detector predicts a bounding box and the tree of probabilities. We traverse the tree down, taking the highest confidence path at every split until we reach some threshold and we predict that object class.
  这种表述同样适用于检测。现在,我们不再假设每张图片都包含一个物体,而是使用YOLOv2的objectness预测器给出Pr(physical object)的值。检测器预测一个边界框以及这棵概率树。我们自上而下遍历这棵树,在每个分叉处沿置信度最高的分支走,直到达到某个阈值,然后预测该物体类别。
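  这个遍历过程可以用如下示意代码表达(假设性实现,children 为每个节点的子节点列表;这里以路径累积概率作为阈值判断,属于一种合理的假设):

```python
def predict_class(cond_prob, children, root="physical object", threshold=0.5):
    """从根节点出发,每一步走向条件概率最高的子节点;
    当乘上该子节点后的累积概率低于阈值(或到达叶子)时,停在当前节点并输出该类别。"""
    node, path_prob = root, 1.0
    while children.get(node):
        best = max(children[node], key=lambda c: cond_prob[c])
        if path_prob * cond_prob[best] < threshold:
            break
        node = best
        path_prob *= cond_prob[best]
    return node, path_prob
```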
  Dataset combination with WordTree. We can use WordTree to combine multiple datasets together in a sensible fashion. We simply map the categories in the datasets to synsets in the tree. Figure 6 shows an example of using WordTree to combine the labels from ImageNet and COCO. WordNet is extremely diverse so we can use this technique with most datasets.
  用WordTree组合数据集。我们可以使用WordTree以合理的方式把多个数据集组合在一起,只需把数据集中的类别映射到树中的同义词集即可。图6展示了一个使用WordTree合并ImageNet和COCO标签的例子。WordNet的覆盖面非常广,所以这种技术可以用于大多数数据集。
  Joint classification and detection. Now that we can combine datasets using WordTree we can train our joint model on classification and detection. We want to train an extremely large scale detector so we create our combined dataset using the COCO detection dataset and the top 9000 classes from the full ImageNet release. We also need to evaluate our method so we add in any classes from the ImageNet detection challenge that were not already included. The corresponding WordTree for this dataset has 9418 classes. ImageNet is a much larger dataset so we balance the dataset by oversampling COCO so that ImageNet is only larger by a factor of 4:1.
  联合分类和检测。既然我们可以用WordTree组合数据集,就可以在分类和检测上训练联合模型。我们想训练一个超大规模的检测器,所以使用COCO检测数据集和完整ImageNet发布版中的前9000个类来构建组合数据集。我们还需要评估我们的方法,所以把ImageNet检测挑战赛中尚未包含的类别也加了进来。这个数据集对应的WordTree共有9418个类。ImageNet是一个大得多的数据集,所以我们通过对COCO过采样来平衡数据集,使ImageNet与COCO的数据量之比仅为4:1。
  Using this dataset we train YOLO9000. We use the base YOLOv2 architecture but only 3 priors instead of 5 to limit the output size. When our network sees a detection image we backpropagate loss as normal. For classification loss, we only backpropagate loss at or above the corresponding level of the label. For example, if the label is “dog” we do not assign any error to predictions further down in the tree, “German Shepherd” versus “Golden Retriever”, because we do not have that information.
  使用这个数据集,我们训练YOLO9000。我们使用基础的YOLOv2架构,但只用3个先验框而不是5个,以限制输出大小。当网络看到一张检测图像时,我们像平常一样反向传播损失。对于分类损失,我们只在标签对应层级及其以上反向传播损失。例如,如果标签是“狗”,我们不会对树中更深层的预测(比如是“德国牧羊犬”还是“金毛寻回犬”)分配任何误差,因为我们没有这方面的信息。
  When it sees a classification image we only backpropagate classification loss. To do this we simply find the bounding box that predicts the highest probability for that class and we compute the loss on just its predicted tree. We also assume that the predicted box overlaps what would be the ground truth label by at least .3 IOU and we backpropagate objectness loss based on this assumption.
  当它看到一张分类图像时,我们只反向传播分类损失。要做到这一点,我们只需找到对该类别预测概率最高的那个边界框,并只在它的预测树上计算损失。我们还假设该预测框与假想的ground truth框至少有0.3的IOU重叠,并基于这一假设反向传播objectness损失。
  Using this joint training, YOLO9000 learns to find objects in images using the detection data in COCO and it learns to classify a wide variety of these objects using data from ImageNet.
  通过这种联合训练,YOLO9000学会了利用COCO中的检测数据在图像中寻找物体,也学会了利用ImageNet的数据对种类繁多的物体进行分类。
  We evaluate YOLO9000 on the ImageNet detection task. The detection task for ImageNet shares only 44 object categories with COCO which means that YOLO9000 has only seen classification data for the majority of the test categories. YOLO9000 gets 19.7 mAP overall with 16.0 mAP on the disjoint 156 object classes that it has never seen any labelled detection data for. This mAP is higher than results achieved by DPM but YOLO9000 is trained on different datasets with only partial supervision [4]. It also is simultaneously detecting 9000 other categories, all in real-time.
  我们在ImageNet检测任务上评估YOLO9000。ImageNet的检测任务与COCO只有44个物体类别是共有的,这意味着对于大多数测试类别,YOLO9000只见过分类数据。YOLO9000总体上得到19.7 mAP,在它从未见过任何检测标注数据的156个不重叠类别上得到16.0 mAP。这个mAP高于DPM取得的结果,而且YOLO9000是在不同的数据集上、只有部分监督的条件下训练的[4]。同时,它还在实时地检测另外9000多个类别。
  YOLO9000 learns new species of animals well but struggles with learning categories like clothing and equipment. New animals are easier to learn because the objectness predictions generalize well from the animals in COCO. Conversely, COCO does not have bounding box label for any type of clothing, only for person, so YOLO9000 struggles to model categories like “sunglasses” or “swimming trunks”.
  YOLO9000能很好地学习新的动物种类,但在学习服装和装备等类别时表现吃力。新的动物更容易学习,是因为objectness预测可以从COCO中的动物很好地泛化过来。相反,COCO没有任何类型服装的边界框标注,只有“人”的标注,所以YOLO9000很难对“太阳镜”或“泳裤”这样的类别建模。

5. Conclusion(结论)

We introduce YOLOv2 and YOLO9000, real-time detection systems. YOLOv2 is state-of-the-art and faster than other detection systems across a variety of detection datasets. Furthermore, it can be run at a variety of image sizes to provide a smooth tradeoff between speed and accuracy.
  我们介绍了YOLOv2和YOLO9000这两个实时检测系统。YOLOv2是最先进的,并且在各种检测数据集上都比其他检测系统更快。此外,它可以在多种图像尺寸下运行,在速度和准确度之间提供平滑的权衡。
YOLO9000 is a real-time framework for detecting more than 9000 object categories by jointly optimizing detection and classification. We use WordTree to combine data from various sources and our joint optimization technique to train simultaneously on ImageNet and COCO. YOLO9000 is a strong step towards closing the dataset size gap between detection and classification.
  YOLO9000是一个实时框架,通过联合优化检测和分类来检测9000多个目标类别。我们使用WordTree来结合各种来源的数据,并使用我们的联合优化技术在ImageNet和COCO上同时进行训练。YOLO9000是朝着缩小检测和分类之间的数据集大小差距迈出的有力一步。
  Many of our techniques generalize outside of object detection. Our WordTree representation of ImageNet offers a richer, more detailed output space for image classification. Dataset combination using hierarchical classification would be useful in the classification and segmentation domains. Training techniques like multi-scale training could provide benefit across a variety of visual tasks.
  我们的许多技术可以在目标检测之外进行推广。我们对ImageNet的WordTree表示为图像分类提供了一个更丰富、更详细的输出空间。使用分层分类的数据集组合在分类和分割领域将是有用的。像多尺度训练这样的训练技术可以在各种视觉任务中提供好处。
  For future work we hope to use similar techniques for weakly supervised image segmentation. We also plan to improve our detection results using more powerful matching strategies for assigning weak labels to classification data during training. Computer vision is blessed with an enormous amount of labelled data. We will continue looking for ways to bring different sources and structures of data together to make stronger models of the visual world.
  对于未来的工作,我们希望将类似的技术用于弱监督的图像分割。我们还计划使用更强大的匹配策略来改善我们的检测结果,在训练期间为分类数据分配弱标签。计算机视觉有着得天独厚的大量标记数据。我们将继续寻找方法,将不同来源和结构的数据结合起来,为视觉世界建立更强大的模型。
