Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

We introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals.

An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection.

We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features.

INTRODUCTION

Fast R-CNN achieves near real-time rates using very deep networks, when ignoring the time spent on region proposals.

We compute proposals with a deep convolutional neural network.

On top of these convolutional features, we construct an RPN by adding a few additional convolutional layers that simultaneously regress region bounds and objectness scores at each location on a regular grid. The RPN is thus a kind of fully convolutional network (FCN) and can be trained end-to-end specifically for the task of generating detection proposals.

To address multiple scales and sizes, we introduce novel “anchor” boxes that serve as references at multiple scales and aspect ratios. Our scheme can be thought of as a pyramid of regression references (Figure 1, c), which avoids enumerating images or filters of multiple scales or aspect ratios.

We propose a training scheme that alternates between fine-tuning for the region proposal task and fine-tuning for object detection, while keeping the proposals fixed.

Our method is not only a cost-efficient solution for practical usage, but also an effective way of improving object detection accuracy.

RELATED WORK

Object Proposals.

Deep Networks for Object Detection.

FASTER R-CNN

Our object detection system, called Faster R-CNN, is composed of two modules.

  1. The first module is a deep fully convolutional network that proposes regions
  2. The second module is the Fast R-CNN detector that uses the proposed regions.

Region Proposal Networks

![RPN][faster_rcnn_figure3]

A Region Proposal Network (RPN) takes an image (of any size) as input and outputs a set of rectangular object proposals, each with an objectness score.

To generate region proposals, we slide a small network over the convolutional feature map output by the last shared convolutional layer. This small network takes as input an n × n spatial window of the input convolutional feature map. Each sliding window is mapped to a lower-dimensional feature (256-d for ZF and 512-d for VGG, with ReLU following).

This feature is fed into two sibling fully-connected layers—a box-regression layer (reg) and a box-classification layer (cls). We use n = 3 in this paper, noting that the effective receptive field on the input image is large (171 and 228 pixels for ZF and VGG, respectively).

This mini-network is illustrated at a single position in Figure 3 (left). Note that because the mini-network operates in a sliding-window fashion, the fully-connected layers are shared across all spatial locations. This architecture is naturally implemented with an n × n convolutional layer followed by two sibling 1 × 1 convolutional layers (for reg and cls, respectively).
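The sliding-window head above can be sketched in NumPy. This is a toy sketch with random weights and naive loops, not the paper's implementation; the function and variable names are illustrative, and the shapes follow the text (a mid-level feature per location, then 2k scores and 4k coordinates).

```python
import numpy as np

def rpn_head(feat, k=9, mid_channels=256, n=3, seed=0):
    """Sketch of the RPN mini-network: an n x n conv producing a
    mid_channels-d feature per location (with ReLU), then two sibling
    1 x 1 convs: cls (2k scores) and reg (4k coordinates)."""
    rng = np.random.default_rng(seed)
    C, H, W = feat.shape
    # n x n conv with 'same' padding, implemented naively.
    w_mid = rng.standard_normal((mid_channels, C, n, n)) * 0.01
    pad = n // 2
    fp = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)))
    mid = np.empty((mid_channels, H, W))
    for y in range(H):
        for x in range(W):
            patch = fp[:, y:y + n, x:x + n]
            mid[:, y, x] = np.maximum(  # ReLU
                np.tensordot(w_mid, patch, axes=3), 0)
    # Sibling 1 x 1 convs are per-location linear maps, shared across
    # all spatial positions -- the "sliding fully-connected layers".
    w_cls = rng.standard_normal((2 * k, mid_channels)) * 0.01
    w_reg = rng.standard_normal((4 * k, mid_channels)) * 0.01
    cls = np.tensordot(w_cls, mid, axes=([1], [0]))
    reg = np.tensordot(w_reg, mid, axes=([1], [0]))
    return cls, reg  # shapes (2k, H, W) and (4k, H, W)
```

Because the same weights are applied at every position, this is exactly the "fully-connected layers shared across all spatial locations" described above.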

Anchors

The maximum number of possible proposals at each location is denoted as k.

The reg layer has 4k outputs encoding the coordinates of k boxes, and the cls layer outputs 2k scores that estimate the probability of object or not object for each proposal.

Translation-Invariant Anchors
An important property of our approach is that it is translation invariant

Multi-Scale Anchors as Regression References

  1. The first way is based on image/feature pyramids. The images are resized at multiple scales, and feature maps (HOG [8] or deep convolutional features [9], [1], [2]) are computed for each scale
  2. The second way is to use sliding windows of multiple scales (and/or aspect ratios) on the feature maps. Different aspect ratios are trained separately using different filter sizes
  3. Our method classifies and regresses bounding boxes with reference to anchor boxes of multiple scales and aspect ratios.
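The third design can be sketched as follows. This is a minimal illustration, assuming the paper's default of 3 scales and 3 aspect ratios (k = 9 anchors per location) and a feature stride of 16; the function names are illustrative.

```python
import numpy as np

def generate_anchors(scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Generate k = len(scales) * len(ratios) anchors centered at the
    origin, as (x1, y1, x2, y2). `scale` is the square root of the
    anchor area; `ratio` is treated as height/width."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s / np.sqrt(r)  # width shrinks as the box gets taller
            h = s * np.sqrt(r)  # so the area stays s**2 for every ratio
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)

def shift_anchors(base, feat_h, feat_w, stride=16):
    """Replicate the base anchors at every feature-map location by
    shifting them to each location's center in image coordinates."""
    xs = (np.arange(feat_w) + 0.5) * stride
    ys = (np.arange(feat_h) + 0.5) * stride
    shift_x, shift_y = np.meshgrid(xs, ys)
    shifts = np.stack([shift_x, shift_y, shift_x, shift_y],
                      axis=-1).reshape(-1, 1, 4)
    return (base[None, :, :] + shifts).reshape(-1, 4)
```

The multi-scale design lives entirely in these reference boxes, so a single-scale feature map (and a single filter size) suffices.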

Loss Function

We assign a positive label to two kinds of anchors:

  1. the anchor/anchors with the highest Intersection-over-Union (IoU) overlap with a ground-truth box
  2. an anchor that has an IoU overlap higher than 0.7 with any ground-truth box.

We assign a negative label to a non-positive anchor if its IoU ratio is lower than 0.3 for all ground-truth boxes.
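The labeling rules above can be sketched in NumPy. The 0.7 and 0.3 thresholds come from the text; everything else (function names, the use of -1 for ignored anchors) is an illustrative assumption.

```python
import numpy as np

def iou_matrix(boxes, gts):
    """Pairwise IoU between anchors (N, 4) and ground-truth boxes
    (M, 4), both as (x1, y1, x2, y2)."""
    x1 = np.maximum(boxes[:, None, 0], gts[None, :, 0])
    y1 = np.maximum(boxes[:, None, 1], gts[None, :, 1])
    x2 = np.minimum(boxes[:, None, 2], gts[None, :, 2])
    y2 = np.minimum(boxes[:, None, 3], gts[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_g = (gts[:, 2] - gts[:, 0]) * (gts[:, 3] - gts[:, 1])
    return inter / (area_a[:, None] + area_g[None, :] - inter)

def label_anchors(anchors, gts, pos_thresh=0.7, neg_thresh=0.3):
    """1 = positive, 0 = negative, -1 = ignored (neither rule fires)."""
    iou = iou_matrix(anchors, gts)
    labels = -np.ones(len(anchors), dtype=int)
    max_iou = iou.max(axis=1)
    labels[max_iou < neg_thresh] = 0    # negative: IoU < 0.3 for all GT
    labels[max_iou >= pos_thresh] = 1   # positive rule (2): IoU > 0.7
    labels[iou.argmax(axis=0)] = 1      # positive rule (1): best anchor per GT
    return labels
```

Note rule (1) is applied last so that even when no anchor clears 0.7 for some ground-truth box, its best-overlapping anchor is still positive.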

The loss function for an image is:

$$L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)$$

The term $p_i^* L_{reg}$ means the regression loss is activated only for positive anchors ($p_i^* = 1$) and is disabled otherwise ($p_i^* = 0$).
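A NumPy sketch of this multi-task loss, assuming log loss for the classification term and the smooth-L1 loss of Fast R-CNN for the regression term, with the paper's normalizers (N_cls = mini-batch size 256, N_reg ≈ 2400 anchor locations) and λ = 10. The function names and the exact input layout are illustrative.

```python
import numpy as np

def smooth_l1(x):
    """Robust L1 loss from Fast R-CNN, applied elementwise."""
    ax = np.abs(x)
    return np.where(ax < 1, 0.5 * x * x, ax - 0.5)

def rpn_loss(p, p_star, t, t_star, lam=10.0, n_cls=256, n_reg=2400):
    """Objectness log loss plus smooth-L1 box regression, the latter
    gated by p_star so only positive anchors contribute.
    p, p_star: (N,) predicted/target objectness; t, t_star: (N, 4)."""
    eps = 1e-12  # numerical guard for log(0)
    l_cls = -(p_star * np.log(p + eps)
              + (1 - p_star) * np.log(1 - p + eps)).sum() / n_cls
    l_reg = (p_star[:, None] * smooth_l1(t - t_star)).sum() / n_reg
    return l_cls + lam * l_reg
```

The gating term is easy to verify: perturbing the regression targets of a negative anchor leaves the loss unchanged, while perturbing a positive anchor's targets increases it.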

In our formulation, the features used for regression are of the same spatial size (3 × 3) on the feature maps. To account for varying sizes, a set of k bounding-box regressors are learned. Each regressor is responsible for one scale and one aspect ratio, and the k regressors do not share weights.
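For reference, the paper parameterizes each regression target as the offset of the box center relative to its anchor, normalized by the anchor size, with log-space width and height. A sketch (function names are illustrative; boxes here are in center form):

```python
import numpy as np

def encode_boxes(boxes, anchors):
    """Regression targets t = (tx, ty, tw, th) for boxes w.r.t. anchors,
    both given as (N, 4) arrays of (cx, cy, w, h)."""
    tx = (boxes[:, 0] - anchors[:, 0]) / anchors[:, 2]
    ty = (boxes[:, 1] - anchors[:, 1]) / anchors[:, 3]
    tw = np.log(boxes[:, 2] / anchors[:, 2])
    th = np.log(boxes[:, 3] / anchors[:, 3])
    return np.stack([tx, ty, tw, th], axis=1)

def decode_boxes(t, anchors):
    """Inverse of encode_boxes: apply predicted offsets to anchors."""
    cx = t[:, 0] * anchors[:, 2] + anchors[:, 0]
    cy = t[:, 1] * anchors[:, 3] + anchors[:, 1]
    w = np.exp(t[:, 2]) * anchors[:, 2]
    h = np.exp(t[:, 3]) * anchors[:, 3]
    return np.stack([cx, cy, w, h], axis=1)
```

Because the targets are normalized by the anchor's own width and height, one regressor per (scale, aspect ratio) can be learned from fixed-size 3 × 3 features.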

Training RPNs

Sharing Features for RPN and Fast R-CNN

We discuss three ways to train networks with shared features:

  1. Alternating training. In this solution, we first train RPN, and use the proposals to train Fast R-CNN. The network tuned by Fast R-CNN is then used to initialize RPN, and this process is iterated.
  2. Approximate joint training. The backward propagation takes place as usual, where for the shared layers the backward propagated signals from both the RPN loss and the Fast R-CNN loss are combined.
  3. Non-approximate joint training. A theoretically valid backpropagation solver should also involve gradients w.r.t. the box coordinates, so we need an RoI pooling layer that is differentiable w.r.t. the box coordinates. This is a nontrivial problem, and a solution can be given by an “RoI warping” layer.

4-Step Alternating Training.

  1. In the first step, we train the RPN as described. This network is initialized with an ImageNet-pre-trained model and fine-tuned end-to-end for the region proposal task.
  2. In the second step, we train a separate detection network by Fast R-CNN using the proposals generated by the step-1 RPN. This detection network is also initialized by the ImageNet-pre-trained model. At this point the two networks do not share convolutional layers.
  3. In the third step, we use the detector network to initialize RPN training, but we fix the shared convolutional layers and only fine-tune the layers unique to RPN. Now the two networks share convolutional layers.
  4. Finally, keeping the shared convolutional layers fixed, we fine-tune the unique layers of Fast R-CNN.
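The four steps above can be summarized as control flow. The `train_*`, `get_proposals`, and `init_from_imagenet` functions are hypothetical stubs that only record what runs and in what order; they are not the paper's training code.

```python
# Hypothetical stubs: each call appends to `log` so the 4-step
# schedule can be inspected. `shared_fixed` marks steps where the
# shared convolutional layers are frozen.
log = []

def init_from_imagenet():
    return {"backbone": "imagenet", "rpn": None, "det": None}

def train_rpn(model, shared_fixed=False):
    log.append(("rpn", shared_fixed)); return model

def train_fast_rcnn(model, proposals, shared_fixed=False):
    log.append(("det", shared_fixed)); return model

def get_proposals(model):
    log.append(("proposals", None)); return "proposals"

# Step 1: RPN from ImageNet init, fine-tuned end-to-end.
m1 = train_rpn(init_from_imagenet())
# Step 2: a *separate* Fast R-CNN, also from ImageNet init,
# trained on the step-1 proposals (no sharing yet).
m2 = train_fast_rcnn(init_from_imagenet(), get_proposals(m1))
# Step 3: re-initialize RPN from the detector; freeze the shared
# conv layers and fine-tune only RPN-specific layers.
m3 = train_rpn(m2, shared_fixed=True)
# Step 4: keep shared layers frozen; fine-tune only the layers
# unique to Fast R-CNN. Both networks now share one backbone.
m4 = train_fast_rcnn(m3, get_proposals(m3), shared_fixed=True)
```

After step 3 the two networks form a single unified backbone, which is why steps 3 and 4 keep the shared layers fixed.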

Implementation Details

We note that our algorithm allows predictions that are larger than the underlying receptive field.

To reduce redundancy, we adopt non-maximum suppression (NMS) on the proposal regions based on their cls scores.
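Greedy NMS can be sketched in NumPy as follows (illustrative code; the IoU threshold of 0.7 follows the paper's setting for proposals):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.7):
    """Greedy non-maximum suppression on (x1, y1, x2, y2) boxes: keep
    the highest-scoring box, discard remaining boxes that overlap it
    by more than iou_thresh, and repeat. Returns kept indices."""
    order = scores.argsort()[::-1]  # indices by descending score
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # IoU of the kept box against all remaining candidates.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou <= iou_thresh]
    return keep
```

Ranking by the cls score before suppression is what lets the top-N surviving proposals be passed on to the detector.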

EXPERIMENTS

Experiments on PASCAL VOC

Ablation Experiments on RPN.

NMS does not harm the detection mAP and may reduce false alarms.

the cls scores account for the accuracy of the highest ranked proposals.

This suggests that the high-quality proposals are mainly due to the regressed box bounds. The anchor boxes, though having multiple scales and aspect ratios, are not sufficient for accurate detection.

This suggests that the proposal quality of RPN+VGG is better than that of RPN+ZF.

Performance of VGG-16.

Sensitivities to Hyper-parameters.

Using anchors of multiple sizes as the regression references is an effective solution.

Scales and aspect ratios are not disentangled dimensions for detection accuracy.

The result is insensitive to λ over a wide range.

Analysis of Recall-to-IoU.

It is more appropriate to use this metric to diagnose the proposal method than to evaluate it.

The RPN method behaves gracefully as the number of proposals decreases.

One-Stage Detection vs. Two-Stage Proposal + Detection.

OverFeat is a one-stage, class-specific detection pipeline, and ours is a two-stage cascade consisting of class-agnostic proposals and class-specific detections.

In OverFeat, the region-wise features come from a sliding window of one aspect ratio over a scale pyramid. These features are used to simultaneously determine the location and category of objects.

In RPN, the features are from square (3×3) sliding windows and predict proposals relative to anchors with different scales and aspect ratios.

Experiments on MS COCO

Faster R-CNN in ILSVRC & COCO 2015 competitions

From MS COCO to PASCAL VOC

CONCLUSION
