Object Tracking Paper Collection (incomplete; mainly single-object tracking)

Note:

  1. The papers in Sections 1 and 2 overlap.
  2. For some papers, the abbreviation before the colon in the title was added by me (the paper itself does not give an explicit short name for the method), so when searching for a paper please copy only the text after the colon.

1. Organized by Problem Type

1.1 Lightweight Models

LightTrack: Finding Lightweight Neural Networks for Object Tracking via One-Shot Architecture Search
CVPR 2021
LightTrack searches architectures with one-shot NAS (the training and search pipeline is illustrated in the paper). The two stages are decoupled: the supernet is first trained by randomly sampling paths, and an evolutionary algorithm then searches the supernet for the best sub-architecture. Experiments show that all three versions (Mobile, LargeA, LargeB) are competitive in accuracy, FLOPs, and parameter count. On a Snapdragon 845, LightTrack runs 12x faster than Ocean, with 13x fewer parameters and 38x fewer FLOPs. The authors argue that such improvements may narrow the gap between academic models and industrial deployment for object tracking.
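
Below is a minimal sketch of the decoupled train-then-search flow described above (one-shot supernet training with random path sampling, followed by an evolutionary search over sub-architectures). The search space, fitness function, and mutation scheme are toy placeholders for illustration only, not the actual LightTrack implementation.

```python
import random

# Toy search space: one of OPS_PER_LAYER candidate ops at each layer.
NUM_LAYERS = 6
OPS_PER_LAYER = 4

def sample_path():
    """A 'path' = one sub-architecture (one op index per layer)."""
    return [random.randrange(OPS_PER_LAYER) for _ in range(NUM_LAYERS)]

def train_supernet(num_steps=1000):
    """Stage 1: one-shot training -- sample a random path each step and
    update only the weights on that path (weight sharing across paths)."""
    for _ in range(num_steps):
        path = sample_path()
        # supernet.forward_and_update(path, batch)  # placeholder for the real update
        _ = path

def fitness(path):
    """Stand-in for 'validation accuracy of this path with inherited supernet
    weights, under a FLOPs/latency budget'."""
    return -sum((op - 1.5) ** 2 for op in path) + random.gauss(0, 0.1)

def mutate(path, prob=0.2):
    return [random.randrange(OPS_PER_LAYER) if random.random() < prob else op
            for op in path]

def evolutionary_search(pop_size=20, generations=30, topk=5):
    """Stage 2: evolve a population of paths; keep the top-k, mutate parents."""
    population = [sample_path() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:topk]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - topk)]
    return max(population, key=fitness)

if __name__ == "__main__":
    train_supernet()
    print("best sub-architecture (op index per layer):", evolutionary_search())
```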

Efficient Visual Tracking with Exemplar Transformers
This paper makes the transformer architecture lightweight, proposing an efficient Exemplar Transformer to replace convolutions. E.T.Track reaches 47 FPS on a CPU, about 8x faster than other transformer-based trackers; the authors claim it is currently the only real-time transformer-based tracker.

FEAR: Fast, Efficient, Accurate and Robust Visual Tracker
Proposes two lightweight modules: a dual-template module and a pixel-wise fusion block. The former integrates temporal information with a single learnable parameter, while the latter encodes more discriminative features with fewer parameters. With a heavier backbone, FEAR-M and FEAR-L surpass most trackers in both speed and accuracy; with a lightweight backbone, FEAR-XS tracks more than 10x faster than current Siamese trackers while keeping comparable accuracy. FEAR-XS is 2.4x smaller and 4.3x faster than LightTrack, with higher accuracy. In addition, the paper extends the definition of model efficiency by taking energy consumption and speed into account.
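
As a rough illustration of the two modules above: the dual template can be thought of as blending the static first-frame template with a dynamically updated one through a single learnable weight, and the pixel-wise fusion block as a cheap per-location combination of search and template features. The sigmoid-gated blending, 1x1 convolutions, and tensor shapes below are my assumptions for the sketch, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DualTemplate(nn.Module):
    """Blend the static (first-frame) template with a dynamic one using a
    single learnable scalar -- an assumed simplification of the module."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1))  # learnable mixing weight

    def forward(self, static_feat, dynamic_feat):
        w = torch.sigmoid(self.alpha)
        return (1 - w) * static_feat + w * dynamic_feat

class PixelWiseFusion(nn.Module):
    """Fuse template and search features at every spatial location with
    lightweight 1x1 convolutions (an illustrative stand-in)."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, search_feat, template_vec):
        # Broadcast the pooled template vector over the search feature map.
        b, c, h, w = search_feat.shape
        template_map = template_vec.view(b, c, 1, 1).expand(b, c, h, w)
        fused = torch.cat([search_feat, template_map], dim=1)
        return self.act(self.proj(fused))

if __name__ == "__main__":
    dt, pf = DualTemplate(), PixelWiseFusion(channels=64)
    static, dynamic = torch.randn(1, 64, 8, 8), torch.randn(1, 64, 8, 8)
    template = dt(static, dynamic).mean(dim=(2, 3))   # pooled template vector
    search = torch.randn(1, 64, 16, 16)
    print(pf(search, template).shape)                 # torch.Size([1, 64, 16, 16])
```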

1.2 Long-Term Tracking

LTMU: High-Performance Long-Term Tracking with Meta-Updater (Dong Wang's group)
CVPR 2020

Improved SPLT: Effective Local and Global Search for Fast Long-term Tracking (Dong Wang's group)
TPAMI 2022

'Skimming-Perusal' Tracking: A Framework for Real-Time and Robust Long-Term Tracking (Dong Wang's group)
ICCV 2019

Long-term Tracking in the Wild: A Benchmark
ECCV 2018; SiamFC plus a simple re-detection mechanism

1.3 Combined with Natural Language

- Towards More Flexible and Accurate Object Tracking with Natural Language: Algorithms and Benchmark

- Siamese Natural Language Tracker: Tracking by Natural Language Descriptions with Siamese Trackers

2. Organized by Year / Conference / Journal

2.1 2023 CVPR

1. Unsupervised Sampling Promoting for Stochastic Human Trajectory Prediction

Paper:https://arxiv.org/abs/2304.04298

Code:https://github.com/viewsetting/Unsupervised_sampling_promoting

2. Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion

Paper:https://arxiv.org/abs/2304.01893

Code:

3. Uncovering the Missing Pattern: Unified Framework Towards Trajectory Imputation and Prediction

Paper:https://arxiv.org/abs/2303.16005

Code:https://github.com/colorfulfuture/GC-VRNN

4. Visibility Aware Human-Object Interaction Tracking from Single RGB Camera

Paper:https://arxiv.org/abs/2303.16479

Code:

5. DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks

Paper:https://arxiv.org/abs/2304.00571

Code:https://github.com/jimmy-dq/DropMAE

6. MotionTrack: Learning Robust Short-term and Long-term Motions for Multi-Object Tracking

Paper:https://arxiv.org/abs/2303.10404

Code:

7. Visual Prompt Multi-Modal Tracking

Paper:https://arxiv.org/abs/2303.10826

Code:https://github.com/jiawen-zhu/ViPT

8. Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking

Paper:https://arxiv.org/abs/2203.14360v2

Code:https://github.com/noahcao/OC_SORT

9. Focus On Details: Online Multi-object Tracking with Diverse Fine-grained Representation

Paper:https://arxiv.org/abs/2302.14589

Code:

10. Referring Multi-Object Tracking

Paper:https://arxiv.org/abs/2303.03366

Code:

11. Simple Cues Lead to a Strong Multi-Object Tracker

Paper: https://arxiv.org/abs/2206.04656

Code:

2.2 2022 CVPR

1. CSWinTT: Transformer Tracking with Cyclic Shifting Window Attention
paper: https://arxiv.org/abs/2205.03806
code: https://github.com/SkyeSong38/CSWinTT
A transformer tracking model with cyclic shifting window attention.

2. Trackron: Unified Transformer Tracker for Object Tracking
paper: https://arxiv.org/abs/2203.15175
code: https://github.com/Flowerfan/Trackron
A Transformer-based method that models multi-object tracking and single-object tracking in a unified way. MOT and SOT have traditionally been treated as two relatively independent problems in computer vision; this paper builds a Unified Transformer Tracker (UTT) in which both SOT and MOT tasks can be solved within a single framework.

3. GTELT: Global Tracking via Ensemble of Local Trackers
paper: https://arxiv.org/abs/2203.16092
Global tracking via an ensemble of local trackers (single-object tracking). The method is designed to handle target disappearance caused by abrupt motion and occlusion in long-term tracking, and shows superior performance on multiple datasets.

4. HMFT: Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline (Huchuan Lu's group)
paper: https://arxiv.org/abs/2204.04120
code: https://zhang-pengyu.github.io/DUT-VTUAV
Visible-thermal UAV tracking: a large-scale benchmark and a new baseline. The benchmark contains 500 sequences and 1.7 million high-resolution (1920x1080) frame pairs with coarse-to-fine attribute annotations; the authors also propose a new baseline, the Hierarchical Multi-modal Fusion Tracker (HMFT).

5. Unsupervised Learning of Accurate Siamese Tracking
paper: https://arxiv.org/abs/2204.01475
code: https://github.com/FlorinShum/ULAST
Unsupervised learning + Siamese tracking (single-object tracking). The paper obtains self-supervision by tracking through a video forward and then backward, extending Siamese tracking; the method substantially outperforms previous unsupervised trackers and even performs on par with supervised methods on large-scale datasets (TrackingNet and LaSOT).
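
The forward-backward idea above can be illustrated with a cycle-consistency loss: track the target forward through a short clip, then backward, and penalize the drift between the recovered box and the starting box. The tracker interface and the smooth-L1 penalty below are simplifying assumptions, not the paper's exact formulation.

```python
import torch

def cycle_consistency_loss(tracker, frames, init_box):
    """Track forward then backward through `frames` and penalize the drift
    between the recovered box and the starting box (illustrative sketch).
    `tracker(frame, box)` is assumed to return the box in the next frame."""
    box = init_box
    for frame in frames:              # forward pass through the clip
        box = tracker(frame, box)
    for frame in reversed(frames):    # backward pass through the clip
        box = tracker(frame, box)
    return torch.nn.functional.smooth_l1_loss(box, init_box)

if __name__ == "__main__":
    # Toy "tracker": a linear layer on box coordinates, ignoring the frame.
    layer = torch.nn.Linear(4, 4)
    toy_tracker = lambda frame, box: layer(box)
    frames = [torch.randn(3, 64, 64) for _ in range(4)]
    init_box = torch.tensor([0.4, 0.4, 0.2, 0.2])
    loss = cycle_consistency_loss(toy_tracker, frames, init_box)
    loss.backward()
    print(float(loss))
```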

6. MeMOT: Multi-Object Tracking with Memory
Multi-object tracking with memory. The authors build a general detection-and-association framework that allows a multi-object tracker to re-identify targets that have been invisible for a long time: a large spatio-temporal memory stores the identity information of tracked targets, and useful information is adaptively referenced and aggregated from this memory.
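
A rough sketch of the memory idea, assuming a per-identity buffer of appearance embeddings and a simple similarity-weighted aggregation when matching a new detection; the buffer size, matching rule, and aggregation scheme are illustrative choices, not MeMOT's actual design.

```python
from collections import defaultdict, deque
import torch
import torch.nn.functional as F

class IdentityMemory:
    """Keep the last `maxlen` embeddings per track ID and aggregate them
    with attention-like weights given by similarity to a query detection."""
    def __init__(self, maxlen=24):
        self.buffers = defaultdict(lambda: deque(maxlen=maxlen))

    def update(self, track_id, embedding):
        self.buffers[track_id].append(embedding.detach())

    def aggregate(self, track_id, query):
        mem = torch.stack(list(self.buffers[track_id]))      # (T, D)
        weights = F.softmax(mem @ query, dim=0)               # (T,)
        return (weights.unsqueeze(1) * mem).sum(dim=0)        # (D,)

    def match(self, query, threshold=0.5):
        """Return the best-matching track ID for a new detection, or None."""
        best_id, best_sim = None, threshold
        for tid in list(self.buffers):
            sim = F.cosine_similarity(self.aggregate(tid, query), query, dim=0)
            if sim > best_sim:
                best_id, best_sim = tid, sim
        return best_id

if __name__ == "__main__":
    memory = IdentityMemory()
    for _ in range(5):
        memory.update(track_id=1, embedding=torch.randn(16))
    print(memory.match(torch.randn(16)))  # matched track ID or None
```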

7. TCTrack: Temporal Contexts for Aerial Tracking
paper: https://arxiv.org/abs/2203.01885
code: https://github.com/vision4robotics/TCTrack
Object tracking for UAV vision. It models the temporal context between consecutive frames and is both accurate and fast; tested on a real-world UAV, it runs at over 27 FPS on an NVIDIA Jetson AGX Xavier.

8. Global Tracking Transformers
paper: https://arxiv.org/abs/2203.13250
code: https://github.com/xingyizhou/GTR
Previous tracking-by-detection methods for multi-object tracking associate targets pairwise between adjacent frames; the proposed method instead associates targets globally across a multi-frame sequence, achieving 75.3 MOTA and 59.1 HOTA, and outperforming the baseline by 7.7 mAP on the TAO dataset.
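
As a toy contrast between pairwise and global association: below, detections from every frame in a short window are assigned directly to a shared set of trajectory queries, rather than being linked frame-by-frame. The dot-product similarity and hard argmax assignment are illustrative simplifications of this idea, not GTR's transformer formulation.

```python
import torch

def global_associate(det_feats, traj_queries):
    """Assign every detection in a temporal window to one trajectory query.
    det_feats:    list over frames, each (N_t, D) detection embeddings
    traj_queries: (K, D) trajectory queries (here: detections from one frame)
    Returns a list of (N_t,) index tensors: trajectory ID per detection."""
    assignments = []
    for feats in det_feats:                       # whole window, not adjacent pairs
        sim = feats @ traj_queries.t()            # (N_t, K) similarity
        assignments.append(sim.argmax(dim=1))     # hard global assignment
    return assignments

if __name__ == "__main__":
    window = [torch.randn(3, 32) for _ in range(8)]   # 8 frames, 3 detections each
    queries = window[0]                               # use first frame as queries
    ids = global_associate(window, queries)
    print([a.tolist() for a in ids])
```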

9. ToMP: Transforming Model Prediction for Tracking
paper: https://arxiv.org/abs/2203.11192
code: https://github.com/visionml/pytracking

10. MixFormer: End-to-End Tracking with Iterative Mixed Attention
paper: https://arxiv.org/abs/2203.11082
code: https://github.com/MCG-NJU/MixFormer

11. Unsupervised Domain Adaptation for Nighttime Aerial Tracking
paper: https://arxiv.org/abs/2203.10541
code: https://github.com/vision4robotics/UDAT

12. SBT: Correlation-Aware Deep Tracking
paper: https://arxiv.org/abs/2203.01666
The authors propose a new target-dependent feature network for feature extraction in object tracking; it can easily be plugged into existing tracking pipelines to improve tracking performance.

2.3 2021 ICCV

STARK: Learning Spatio-Temporal Transformer for Visual Tracking (Huchuan Lu's group)
paper: https://arxiv.org/abs/2103.17154
code: https://github.com/researchmm/Stark

Learn to Match: Automatic Matching Network Design for Visual Tracking
paper: https://arxiv.org/abs/2108.00803
code: https://github.com/JudasDie/SOTS

HiFT: Hierarchical Feature Transformer for Aerial Tracking
paper: https://arxiv.org/abs/2108.00202
code: https://github.com/vision4robotics/HiFT

Learning to Adversarially Blur Visual Object Tracking
paper: https://arxiv.org/abs/2107.12085
code: https://github.com/tsingqguo/ABA

Learning target candidate association to keep track of what not to track
paper: https://arxiv.org/abs/2103.16556
code: https://github.com/visionml/pytracking
Proposes dedicated handling of distractor targets.

Video Annotation for Visual Tracking via Selection and Refinement (Huchuan Lu's group)

USOT: Learning to Track Objects from Unlabeled Videos (Chao Ma's group, SJTU)

2.4 2021 CVPR

- Transformer Tracking (Huchuan Lu's group)

- Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box Estimation (Huchuan Lu's group)

- LightTrack: Finding Lightweight Neural Networks for Object Tracking via One-Shot Architecture Search (Huchuan Lu's group)

- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking

- IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking

- TMT: Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking

- Track to Detect and Segment: An Online Multi-Object Tracker

- Rotation Equivariant Siamese Networks for Tracking (rotation equivariance)

- Graph Attention Tracking (graph attention)

- Siamese Natural Language Tracker: Tracking by Natural Language Descriptions with Siamese Trackers (combined with NLP)

- Towards More Flexible and Accurate Object Tracking with Natural Language: Algorithms and Benchmark (combined with NLP)

- STMTrack: Template-free Visual Tracking with Space-time Memory Networks (Beihang University)

2.5 2020 CVPR

High-Performance Long-Term Tracking with Meta-Updater (Huchuan Lu's group) [Best Paper Award Nominee]

2.6 2020 ACMMM

Online Filtering Training Samples for Robust Visual Tracking (Huchuan Lu's group)

2.7 2022 IEEE Transactions on Pattern Analysis and Machine Intelligence

Improved SPLT: Effective Local and Global Search for Fast Long-term Tracking (Huchuan Lu's group)
