Object Tracking Papers from CVPR, ICCV, and ECCV 2018


Reference link for CVPR recommendations: https://blog.csdn.net/hitzijiyingcai/article/details/81210498

CVPR 2018

Paper download link (CVF Open Access): http://openaccess.thecvf.com/CVPR2018.py

Papers matching the keyword "Track":

  1. GANerated Hands for Real-Time 3D Hand Tracking From Monocular RGB
  2. Detect-and-Track: Efficient Pose Estimation in Videos
  3. Context-Aware Deep Feature Compression for High-Speed Visual Tracking
  4. Correlation Tracking via Joint Discrimination and Reliability Learning
  5. Hyperparameter Optimization for Tracking With Continuous Deep Q-Learning
  6. A Prior-Less Method for Multi-Face Tracking in Unconstrained Videos
  7. End-to-End Flow Correlation Tracking With Spatial-Temporal Attention
  8. CarFusion: Combining Point Tracking and Part Detection for Dynamic 3D Reconstruction of Vehicles
  9. A Causal And-Or Graph Model for Visibility Fluent Reasoning in Tracking Interacting Objects
  10. Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting With a Single Convolutional Net
  11. Towards Dense Object Tracking in a 2D Honeybee Hive
  12. Efficient Diverse Ensemble for Discriminative Co-Tracking
  13. Rolling Shutter and Radial Distortion Are Features for High Frame Rate Multi-Camera Tracking
  14. A Twofold Siamese Network for Real-Time Object Tracking
  15. Multi-Cue Correlation Filters for Robust Visual Tracking
  16. Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking
  17. SINT++: Robust Visual Tracking via Adversarial Positive Instance Generation
  18. High-Speed Tracking With Multi-Kernel Correlation Filters
  19. Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking
  20. WILDTRACK: A Multi-Camera HD Dataset for Dense Unscripted Pedestrian Detection
  21. PoseTrack: A Benchmark for Human Pose Estimation and Tracking
  22. Fusing Crowd Density Maps and Visual Object Trackers for People Tracking in Crowd Scenes
  23. Features for Multi-Target Multi-Camera Tracking and Re-Identification
  24. MX-LSTM: Mixing Tracklets and Vislets to Jointly Forecast Trajectories and Head Poses
  25. Tracking Multiple Objects Outside the Line of Sight Using Speckle Imaging
  26. Fast and Accurate Online Video Object Segmentation via Tracking Parts
  27. Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies
  28. Learning Spatial-Aware Regressions for Visual Tracking
  29. High Performance Visual Tracking With Siamese Region Proposal Network
  30. VITAL: VIsual Tracking via Adversarial Learning


Deep-learning-related papers:

  1. Hyperparameter Optimization for Tracking With Continuous Deep Q-Learning
  2. End-to-End Flow Correlation Tracking With Spatial-Temporal Attention
  3. Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting With a Single Convolutional Net
  4. A Twofold Siamese Network for Real-Time Object Tracking
  5. Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking
  6. SINT++: Robust Visual Tracking via Adversarial Positive Instance Generation
  7. Fusing Crowd Density Maps and Visual Object Trackers for People Tracking in Crowd Scenes
  8. Fast and Accurate Online Video Object Segmentation via Tracking Parts
  9. Learning Spatial-Aware Regressions for Visual Tracking
  10. High Performance Visual Tracking With Siamese Region Proposal Network
  11. VITAL: VIsual Tracking via Adversarial Learning


Abstracts:

- Hyperparameter Optimization for Tracking With Continuous Deep Q-Learning

Hyperparameters are numerical presets whose values are assigned prior to the commencement of the learning process. Selecting appropriate hyperparameters is critical for the accuracy of tracking algorithms, yet it is difficult to determine their optimal values, in particular, adaptive ones for each specific video sequence. Most hyperparameter optimization algorithms depend on searching a generic range and they are imposed blindly on all sequences. Here, we propose a novel hyperparameter optimization method that can find optimal hyperparameters for a given sequence using an action-prediction network leveraged on Continuous Deep Q-Learning. Since the common state-spaces for object tracking tasks are significantly more complex than the ones in traditional control problems, existing Continuous Deep Q-Learning algorithms cannot be directly applied. To overcome this challenge, we introduce an efficient heuristic to accelerate the convergence behavior. We evaluate our method on several tracking benchmarks and demonstrate its superior performance.
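As a concrete illustration of the action-prediction idea, below is a minimal PyTorch sketch (not the authors' code) of a NAF-style continuous deep Q-network: it outputs a state value V(s), a greedy continuous action mu(s), read here as normalized hyperparameter settings, and a quadratic advantage term, so that Q(s, a) = V(s) + A(s, a). The choice of state (e.g., response-map statistics) and which hyperparameters are tuned are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class HyperparamNAF(nn.Module):
    """NAF-style continuous Q-network mapping a tracking state to hyperparameters."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value = nn.Linear(hidden, 1)        # state value V(s)
        self.mu = nn.Linear(hidden, action_dim)  # greedy continuous action mu(s)
        self.l_entries = nn.Linear(hidden, action_dim * (action_dim + 1) // 2)
        self.action_dim = action_dim

    def forward(self, state, action=None):
        h = self.backbone(state)
        v = self.value(h)
        mu = torch.tanh(self.mu(h))              # actions squashed to [-1, 1]
        if action is None:
            return mu                            # test time: act greedily
        # Lower-triangular L with positive diagonal gives P = L L^T > 0, so the
        # advantage A(s,a) = -0.5 (a - mu)^T P (a - mu) is maximal at a = mu.
        b = state.shape[0]
        L = state.new_zeros(b, self.action_dim, self.action_dim)
        rows, cols = torch.tril_indices(self.action_dim, self.action_dim)
        L[:, rows, cols] = self.l_entries(h)
        diag = torch.arange(self.action_dim)
        L[:, diag, diag] = L[:, diag, diag].exp()
        d = (action - mu).unsqueeze(-1)
        P = L @ L.transpose(1, 2)
        adv = -0.5 * (d.transpose(1, 2) @ P @ d).squeeze(-1)
        return v + adv                           # Q(s, a) = V(s) + A(s, a)
```

At test time the network is queried with action=None and the greedy output in [-1, 1] is rescaled to each hyperparameter's real range; during training, the returned Q-value would be regressed toward Bellman targets, as in standard deep Q-learning.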

- End-to-End Flow Correlation Tracking With Spatial-Temporal Attention

Discriminative correlation filters (DCF) with deep convolutional features have achieved favorable performance in recent tracking benchmarks. However, most of existing DCF trackers only consider appearance features of current frame, and hardly benefit from motion and inter-frame information. The lack of temporal information degrades the tracking performance during challenges such as partial occlusion and deformation.
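To make the flow-aggregation idea concrete, here is a minimal PyTorch sketch, written under stated assumptions rather than reproducing the paper's implementation: historical feature maps are backward-warped to the current frame by optical flow, then fused with the current features using per-pixel spatial attention weights derived from cosine similarity.

```python
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """Backward-warp features (B,C,H,W) by a flow field (B,2,H,W), channels (dx, dy)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat)       # (2,H,W), (x, y)
    coords = grid.unsqueeze(0) + flow                          # sampling positions
    # Normalize to [-1, 1], the coordinate convention grid_sample expects.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_n = torch.stack((coords_x, coords_y), dim=-1)         # (B,H,W,2)
    return F.grid_sample(feat, grid_n, align_corners=True)

def aggregate(current, histories, flows):
    """Fuse current features with flow-warped historical ones.

    current:   (B,C,H,W) features of the current frame
    histories: list of (B,C,H,W) features of past frames
    flows:     list of (B,2,H,W) flows from each past frame to the current one
    """
    warped = [warp(f, fl) for f, fl in zip(histories, flows)]
    candidates = [current] + warped
    # Spatial attention: per-pixel cosine similarity to the current frame.
    sims = [F.cosine_similarity(c, current, dim=1, eps=1e-6) for c in candidates]
    weights = torch.softmax(torch.stack(sims, dim=0), dim=0)   # (T+1,B,H,W)
    fused = sum(w.unsqueeze(1) * c for w, c in zip(weights, candidates))
    return fused                                               # (B,C,H,W)
```

The fused map would then feed the correlation filter stage; a separate temporal attention (a learned weight per time step) could be layered on top of this spatial weighting, which here plays both roles in simplified form.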
