- UnitBox: An Advanced Object Detection Network, arXiv 16.08

UnitBox is an advanced object detection network. It uses a fully convolutional architecture to predict object bounds and pixel-wise classification scores, and introduces an IoU loss that directly optimizes the overlap between predicted and ground-truth boxes, addressing the problems of conventional losses. Compared with Faster R-CNN and DenseBox, UnitBox predicts bounds more accurately, adapts to objects of varied shapes and scales, and converges faster during training.

- UnitBox: An Advanced Object Detection Network, arXiv 16.08

    The paper proposes a new loss function, the IoU loss. This is an interesting idea and easy to reproduce.

    ======

    The paper analyzes the strengths and weaknesses of Faster R-CNN and DenseBox:

        1 Faster R-CNN: the RPN predicts the bounding boxes of object candidates from anchors, but these anchors are predefined (e.g., 3 scales & 3 aspect ratios), so the RPN has difficulty handling object candidates with large shape variations, especially small objects. In other words, the RPN cannot cover all cases well (to the point that many Faster R-CNN follow-ups focus on improving exactly this).

        2 DenseBox: utilizes every pixel of the feature map to regress a 4-D distance vector (the distances between the current pixel and the four bounds of the object candidate containing it). However, DenseBox optimizes the four side distances as four independent variables under a simplistic L2 loss; besides, to balance bounding boxes of varied scales, DenseBox requires the training image patches to be resized to a fixed scale. As a consequence, DenseBox has to perform detection on image pyramids, which unavoidably affects the efficiency of the framework.

    =====
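The IoU loss described above can be sketched in a few lines. This is a minimal NumPy sketch following the paper's formulation (each pixel regresses the distances to the box's top/bottom/left/right bounds, and the loss is `-ln(IoU)`); the function and parameter names are our own choices, not an official implementation.

```python
import numpy as np

def iou_loss(pred, target, eps=1e-7):
    """IoU loss, L = -ln(IoU), per the UnitBox paper's formulation.

    pred, target: arrays of shape (N, 4) holding non-negative distances
    (top, bottom, left, right) from each pixel to the box bounds.
    Shapes and names are assumptions made for this sketch.
    """
    pt, pb, pl, pr = pred[:, 0], pred[:, 1], pred[:, 2], pred[:, 3]
    gt, gb, gl, gr = target[:, 0], target[:, 1], target[:, 2], target[:, 3]

    pred_area = (pt + pb) * (pl + pr)
    gt_area = (gt + gb) * (gl + gr)

    # Both boxes contain the same pixel, so the intersection's height/width
    # is the sum of the element-wise minimum distances on each side.
    ih = np.minimum(pt, gt) + np.minimum(pb, gb)
    iw = np.minimum(pl, gl) + np.minimum(pr, gr)
    inter = ih * iw

    union = pred_area + gt_area - inter
    iou = inter / (union + eps)
    return -np.log(iou + eps)
```

Because the four distances are coupled through a single IoU term, the loss treats the box as one unit instead of four independent variables, and it is naturally scale-invariant, which is exactly the two DenseBox weaknesses noted above.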
