Paper Notes: Bottom-up Object Detection by Grouping Extreme and Center Points

Summary: The University of Texas proposes ExtremeNet, the strongest one-stage object detection algorithm to date
Author:Amusi
Date:2019-02-11
WeChat public account: CVer
GitHub:https://github.com/amusi/TensorFlow-From-Zero-To-One
Original article: The University of Texas proposes ExtremeNet, the strongest one-stage object detection algorithm
Zhihu: https://zhuanlan.zhihu.com/p/55838614

Today's headline post covers the current SOTA paper in face detection: an improved SRN face detection algorithm. This article introduces ExtremeNet, which is (as of 2019-01-26) the strongest one-stage object detection algorithm.

Main Text

Bottom-up Object Detection by Grouping Extreme and Center Points

arXiv: https://arxiv.org/abs/1901.08043
GitHub: https://github.com/xingyizhou/ExtremeNet
Author team: UT Austin
Note: the paper was released on January 23, 2019.

Abstract:With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State of the art algorithms enumerate a near-exhaustive list of object locations and classify each into: object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on-par with the state-of-the-art region based detection methods, with a bounding box AP of 43.2% on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9%, much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6% Mask AP.
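The grouping described in the abstract is purely geometric: take peaks from the four extreme-point heatmaps, enumerate their combinations, and keep a combination only if the center heatmap fires at the geometric center of the box those points imply. Below is a minimal sketch of that idea, assuming per-class heatmaps given as 2-D NumPy arrays; the function names, thresholds, and the simple 3x3 peak extractor are illustrative assumptions, not the released ExtremeNet code.

```python
import numpy as np
from itertools import product

def extract_peaks(heatmap, thresh=0.1, max_peaks=40):
    """Return (score, y, x) tuples for local maxima above thresh.

    A simple 3x3 local-maximum test stands in for the max-pooling
    NMS usually applied when decoding keypoint heatmaps.
    """
    H, W = heatmap.shape
    peaks = []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            v = heatmap[y, x]
            if v > thresh and v == heatmap[y - 1:y + 2, x - 1:x + 2].max():
                peaks.append((float(v), y, x))
    peaks.sort(reverse=True)
    return peaks[:max_peaks]

def center_grouping(t_map, l_map, b_map, r_map, c_map,
                    peak_thresh=0.1, center_thresh=0.1):
    """Group extreme points into boxes via the center heatmap.

    For every geometrically valid (top, left, bottom, right) combination,
    the geometric center of the implied box is looked up in the center
    heatmap; the combination is kept only if that response is high enough.
    """
    tops    = extract_peaks(t_map, peak_thresh)
    lefts   = extract_peaks(l_map, peak_thresh)
    bottoms = extract_peaks(b_map, peak_thresh)
    rights  = extract_peaks(r_map, peak_thresh)

    detections = []
    for (ts, ty, tx), (ls, ly, lx), (bs, by, bx), (rs, ry, rx) in \
            product(tops, lefts, bottoms, rights):
        if ty > by or lx > rx:          # top must lie above bottom, left left of right
            continue
        cy, cx = (ty + by) // 2, (lx + rx) // 2
        c_score = float(c_map[cy, cx])
        if c_score < center_thresh:     # no center response -> not one object
            continue
        score = (ts + ls + bs + rs + c_score) / 5.0
        detections.append((score, lx, ty, rx, by))   # (score, x1, y1, x2, y2)
    return detections

if __name__ == "__main__":
    # toy demo: one object with extreme points at known locations
    t, l, b, r, c = (np.zeros((64, 64), dtype=np.float32) for _ in range(5))
    t[10, 30] = l[25, 12] = b[40, 30] = r[25, 50] = 1.0   # four extreme points
    c[(10 + 40) // 2, (12 + 50) // 2] = 1.0               # matching center response
    print(center_grouping(t, l, b, r, c))                 # -> [(1.0, 12, 10, 50, 40)]
```

The enumeration is brute force, so its cost grows with the fourth power of the number of peaks per heatmap; that is why the sketch caps max_peaks at a small value.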

[Figure] Illustration of our object detection method

[Figure] Illustration of our framework


Background

  • Extreme and center points
  • Keypoint detection
  • CornerNet
  • Deep Extreme Cut

Key Contributions

  • Center Grouping
  • Ghost box suppression
  • Edge aggregation
  • Extreme Instance Segmentation (see the coarse-octagon sketch after this list)
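The "Extreme Instance Segmentation" item is where the octagonal masks mentioned in the abstract come from: each extreme point is extended along its box edge into a segment one quarter of that edge's length, the segment is truncated at the box corners, and the eight segment endpoints are connected. Below is a minimal geometric sketch of that construction; the helper name and the (x, y)-tuple interface are assumptions for illustration, not the released implementation.

```python
def octagon_from_extreme_points(top, left, bottom, right):
    """Build the coarse octagon polygon from four extreme points.

    Each extreme point is extended along its bounding-box edge into a
    segment of 1/4 of that edge's length (clipped at the box corners);
    connecting the eight segment endpoints in order gives the octagon.
    Points are (x, y) tuples; returns the 8 vertices clockwise.
    """
    tx, _ = top
    _, ly = left
    bx, _ = bottom
    _, ry = right
    x1, y1 = left[0], top[1]                      # enclosing box, top-left corner
    x2, y2 = right[0], bottom[1]                  # enclosing box, bottom-right corner
    dx, dy = (x2 - x1) / 8.0, (y2 - y1) / 8.0     # half of a 1/4-edge segment

    def clip_x(x): return min(max(x, x1), x2)
    def clip_y(y): return min(max(y, y1), y2)

    return [
        (clip_x(tx - dx), y1), (clip_x(tx + dx), y1),   # segment on the top edge
        (x2, clip_y(ry - dy)), (x2, clip_y(ry + dy)),   # segment on the right edge
        (clip_x(bx + dx), y2), (clip_x(bx - dx), y2),   # segment on the bottom edge
        (x1, clip_y(ly + dy)), (x1, clip_y(ly - dy)),   # segment on the left edge
    ]

# toy usage: a roughly diamond-shaped object
print(octagon_from_extreme_points(top=(50, 10), left=(20, 40),
                                  bottom=(55, 70), right=(80, 45)))
```

Rasterizing this polygon gives the coarse mask behind the 18.9% Mask AP quoted in the abstract; the 34.6% figure comes from extreme-point-guided segmentation (Deep Extreme Cut, listed under Background) rather than stopping at the octagon.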

Experimental Results

How strong is ExtremeNet? The comparison below makes it clear: on the COCO test-dev set it reaches 43.2 mAP, ranking first among one-stage detectors. Unfortunately, no runtime comparison with other detectors is given; the paper only reports that inference on a single image takes 322 ms (3.1 FPS).

[Figure] State-of-the-art comparison on COCO test-dev
