tracking MOT log
A survey paper (translated):
https://blog.csdn.net/yuhq3/article/details/78742658
arXiv search:
http://www.arxiv-sanity.com/search?q=recurrent+tracking
Quora: https://www.quora.com/Why-is-no-visual-tracking-algorithm-using-RNN-LSTM
Tested single-object tracking algorithms (DaSiamRPN, ECO, etc.) on multi-object tracking datasets; the tracking results were poor. The reason is that in MOT datasets the targets are highly similar to one another, with heavy occlusion and frequent target disappearance.
DaSiamRPN (ECCV 2018), winner of the VOT 2018 single-object tracking challenge
ECO (CVPR 2017)
0. Datasets
Multi-object tracking datasets
Detection file (det.txt) annotation format:
<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>
1, -1, 794.2, 47.5, 71.2, 174.8, 67.5, -1, -1, -1
1, -1, 164.1, 19.6, 66.5, 163.2, 29.4, -1, -1, -1
Annotation format: the first number is the frame index; the second number, -1, means no ID has been assigned yet; the next four numbers are the box's top-left x, y and its width w and height h; the following number is the detector's confidence; the trailing -1 values (world coordinates) are ignored for detection files.
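Under these assumptions (the comma-separated det.txt layout described above), a minimal parser could look like the following sketch; `parse_det_file` is a hypothetical helper name, not part of any MOT tool:

```python
import csv
from collections import defaultdict

def parse_det_file(path):
    """Parse a MOT det.txt file into {frame: [detection, ...]}.

    Each detection is (bb_left, bb_top, bb_width, bb_height, conf);
    the unassigned-id column and the trailing world coordinates are
    skipped, since they are ignored for detection files.
    """
    dets = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue
            frame = int(row[0])
            bb = tuple(float(v) for v in row[2:6])  # left, top, w, h
            conf = float(row[6])
            dets[frame].append(bb + (conf,))
    return dets
```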
Ground-truth (gt.txt) annotation format
<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <0/1 ignore flag>, <cls>, <visibility>
1, 1, 794.2, 47.5, 71.2, 174.8, 1, 1, 0.8
The 7th number indicates whether this entry is considered during evaluation (0 = ignore, 1 = evaluate); the 8th number is the class ID:
Label                   ID
Pedestrian              1
Person on vehicle       2
Car                     3
Bicycle                 4
Motorbike               5
Non motorized vehicle   6
Static person           7
Distractor              8
Occluder                9
Occluder on the ground  10
Occluder full           11
Reflection              12
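Putting the gt.txt columns and the class table together, a sketch of a loader that drops ignored entries and keeps only chosen classes (`load_gt` is a hypothetical helper; the 9th column is read as the visibility value shown in the example row):

```python
import csv

PEDESTRIAN = 1  # class ID from the label table above

def load_gt(path, keep_classes=(PEDESTRIAN,)):
    """Load gt.txt, dropping entries whose 7th column is 0 (ignored)
    and entries whose class ID is not in keep_classes."""
    tracks = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue
            frame, tid = int(row[0]), int(row[1])
            bb = tuple(float(v) for v in row[2:6])
            evaluate = int(row[6]) == 1
            cls = int(row[7])
            visibility = float(row[8])
            if evaluate and cls in keep_classes:
                tracks.append((frame, tid, bb, visibility))
    return tracks
```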
- MOT dataset parsing, drawing, preprocessing
Draw the detection boxes from det.txt onto the MOT sequence images: directory = /media/han/E/mWork/datasets/MOT/DataSetParser
Draw the tracking results from gt.txt onto the MOT sequence images: e.g. directory = /media/han/E/mWork/datasets/MOT/DataSetParser/MOT17/train/MOT17-02-GT
Draw the tracking results from gt.txt with the ignored boxes removed: e.g. directory = /media/han/E/mWork/datasets/MOT/DataSetParser/MOT17/train/MOT17-02-GT_ignore0
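The drawing step above is usually done with OpenCV's `cv2.rectangle`; as a dependency-free illustration of the pixel arithmetic, here is a NumPy sketch (`draw_box` is a hypothetical helper, not taken from the DataSetParser code):

```python
import numpy as np

def draw_box(img, bb_left, bb_top, bb_width, bb_height, color=(0, 255, 0)):
    """Draw a 1-pixel rectangle outline onto an HxWx3 uint8 image in place,
    clipping the box to the image bounds."""
    h, w = img.shape[:2]
    x0 = max(int(round(bb_left)), 0)
    y0 = max(int(round(bb_top)), 0)
    x1 = min(int(round(bb_left + bb_width)), w - 1)
    y1 = min(int(round(bb_top + bb_height)), h - 1)
    img[y0, x0:x1 + 1] = color  # top edge
    img[y1, x0:x1 + 1] = color  # bottom edge
    img[y0:y1 + 1, x0] = color  # left edge
    img[y0:y1 + 1, x1] = color  # right edge
    return img
```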
- DukeMTMC: http://vision.cs.duke.edu/DukeMTMC/
Duke University multi-camera multi-target tracking project
1. GitHub
- DeepCC
https://github.com/ergysr/DeepCC — the Duke project's code. The paper mentions many tricks and is rather miscellaneous; although the code is open-sourced, the dataset is over 100 GB, which makes it hard to train.
- MOT Challenge devkit
https://bitbucket.org/amilan/motchallenge-devkit/src
- MDP_Tracking (ICCV 2015)
https://github.com/yuxng/MDP_Tracking
MDP_Tracking is an online multi-object tracking framework based on Markov Decision Processes (MDPs).
- Open-source code for the ICCV 2015 paper
2. Resource surveys
- Multi-object tracking reading list: http://perception.yale.edu/Brian/refGuides/MOT.html
- Related background
Evaluation metrics:
Summary of MOT challenge results: Multiple Object Tracking Challenge 2017 Results
IDF1 IDP IDR | Rcll Prcn FAR | GT MT PT ML | FP FN IDs FM | MOTA MOTP MOTAL
% metrics contains the following
% [1] recall - percentage of detected targets
% [2] precision - percentage of correctly detected targets
% [3] FAR - number of false alarms per frame
% [4] GT - number of ground truth trajectories
% [5-7] MT, PT, ML - number of mostly tracked, partially tracked and mostly lost trajectories
% [8] falsepositives - number of false positives (FP)
% [9] missed - number of missed targets (FN)
% [10] idswitches - number of id switches (IDs)
% [11] FRA - number of fragmentations
% [12] MOTA - Multi-object tracking accuracy in [0,100]
% [13] MOTP - Multi-object tracking precision in [0,100] (3D) / [td,100] (2D)
% [14] MOTAL - Multi-object tracking accuracy in [0,100] with log10(idswitches)
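As a sanity check on the MOTA/MOTAL definitions above, a minimal Python version, assuming per-sequence totals (FP, FN, ID switches, total ground-truth detections) have already been counted; the devkit's exact MOTAL damping may differ in detail:

```python
import math

def mota(fp, fn, id_switches, num_gt_boxes):
    """MOTA in percent: 1 - (FP + FN + IDs) / total GT detections."""
    return 100.0 * (1.0 - (fp + fn + id_switches) / num_gt_boxes)

def motal(fp, fn, id_switches, num_gt_boxes):
    """MOTAL: like MOTA, but ID switches are damped with log10."""
    return 100.0 * (1.0 - (fp + fn + math.log10(id_switches + 1)) / num_gt_boxes)
```

Note that MOTA can go negative when errors outnumber ground-truth detections, which is why published scores are sometimes below zero.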
Reading notes: Multiple Hypothesis Tracking Revisited (ICCV 2015)
ECCV 2018 paper: Multi-object Tracking with Neural Gating Using Bilinear LSTM
Author Fuxin Li's homepage; no open-source code, so the exact training details are unclear.
3. Toolkit code
MOT devkit
https://bitbucket.org/amilan/motchallenge-devkit/
Example:
% Inputs: sequence list, your tracker's results directory, the GT directory, and the benchmark name
benchmarkGtDir = '/media/han/E/mWork/mCode/tracking-mot/MOT16/train/';
[allMets, metsBenchmark] = evaluateTracking('c5-train.txt', '/media/han/E/mWork/mCode/tracking-mot/deep_sort/results/', benchmarkGtDir, 'MOT16');
The devkit has no visualization program; a visualization script lives at /media/han/E/mWork/mCode/tracking-mot/MDP_Tracking/show_groundtruth.m. I extracted that script and its dependent functions into /media/han/E/mWork/mCode/tracking-mot/show_MOT_groundTrue.m
The Deep SORT code also includes a visualization tool, a Python script: