How to run the Deep SORT code: a walkthrough

Questions can be discussed in this QQ group: 712790258

 

Deep SORT

https://github.com/nwojke/deep_sort   # this is the code repository

Tip 1: be sure to download the 1.9 GB archive shown below. It contains a set of image sequences; do not assume you can simply feed in a video file.

Tip 2:

Following this link (https://blog.csdn.net/zjc910997316/article/details/84068857),

when running, prefer the first approach described there (running directly in the terminal).

Avoid the third approach (bash scripts); running something like bash 123.sh is error-prone.

 

 

 

Introduction

This repository contains code for Simple Online and Realtime Tracking with a Deep Association Metric (Deep SORT). We extend the original SORT algorithm to integrate appearance information based on a deep appearance descriptor. See the arXiv preprint for more information.

 

Dependencies

The code is compatible with Python 2.7 and 3. The following dependencies are needed to run the tracker:

  • NumPy
  • sklearn
  • OpenCV

Additionally, feature generation requires TensorFlow (>= 1.0).
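Under a typical pip-based setup, the dependencies above might be installed as follows. The package names are my assumption of the usual PyPI names; the repository itself does not pin versions:

```shell
pip install numpy scikit-learn opencv-python "tensorflow>=1.0"
```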

 

Installation

First, clone the repository:

git clone https://github.com/nwojke/deep_sort.git

Then, download pre-generated detections and the CNN checkpoint file from here.

# The pre-generated detections and the CNN model (the ckpt file shown below)

# (what I actually got here is a .pb file?? not sure what is going on)

NOTE: The candidate object locations of our pre-generated detections are taken from the following paper:

# The pre-generated detections come from the paper below

F. Yu, W. Li, Q. Li, Y. Liu, X. Shi, J. Yan. POI: Multiple Object Tracking with
High Performance Detection and Appearance Feature. In BMTT, SenseTime Group
Limited, 2016.

We have replaced the appearance descriptor with a custom deep convolutional neural network (see below).

# The appearance descriptor uses a deep convolutional neural network (see below)

 

Running the tracker

The following example starts the tracker on one of the MOT16 benchmark sequences. We assume resources have been extracted to the repository root directory and the MOT16 benchmark data is in ./MOT16:

# The example below runs on one of the MOT16 sequences; we assume the resources have been extracted to the repository root directory and the MOT benchmark data is in ./MOT16

python deep_sort_app.py \
    --sequence_dir=./MOT16/test/MOT16-06 \    # this should be the test sequence (a directory of image frames)
    --detection_file=./resources/detections/MOT16_POI_test/MOT16-06.npy \    # explained later; pre-generated files can be downloaded
    --min_confidence=0.3 \
    --nn_budget=100 \
    --display=True

Check python deep_sort_app.py -h  for an overview of available options. There are also scripts in the repository to visualize results, generate videos, and evaluate the MOT challenge benchmark.

# This says to use python deep_sort_app.py -h to see the available options; you can also see them by opening deep_sort_app.py

# There are scripts in the repository to 1) visualize results, 2) generate videos, and 3) evaluate the MOT challenge benchmark (the three highlighted items below)
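The --nn_budget option used above caps how many recent appearance features are kept per track for nearest-neighbor matching. A minimal sketch of the idea; the class and names here are illustrative, not the repository's actual API:

```python
from collections import deque

class FeatureGallery:
    """Illustrative sketch: keep only the last `budget` appearance
    feature vectors per track, which is what --nn_budget controls."""

    def __init__(self, budget=100):
        self.budget = budget
        self.samples = {}  # track_id -> deque of feature vectors

    def partial_fit(self, track_id, feature):
        # a deque with maxlen silently drops the oldest entry
        self.samples.setdefault(
            track_id, deque(maxlen=self.budget)).append(feature)

gallery = FeatureGallery(budget=3)
for i in range(5):
    gallery.partial_fit(track_id=1, feature=[float(i)] * 128)

# only the 3 most recent features survive
print(len(gallery.samples[1]))      # 3
print(gallery.samples[1][0][0])     # 2.0 (oldest feature still kept)
```

A smaller budget bounds memory and matching cost per track at the price of a shorter appearance memory.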

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Generating detections

Beside the main tracking application, this repository contains a script to generate features for person re-identification, suitable to compare the visual appearance of pedestrian bounding boxes using cosine similarity.
The following example generates these features from standard MOT challenge detections.
Again, we assume resources have been extracted to the repository root directory and MOT16 data is in ./MOT16:

# Besides the main tracking application, the repository contains a script to generate features for person re-identification, used to compare the visual appearance of pedestrian bounding boxes via cosine similarity.

# The example below generates these features from standard MOT challenge detections.

# Again, we assume the resources have been extracted to the repository root directory and the MOT16 data is in the ./MOT16 folder.

 

python tools/generate_detections.py \
    --model=resources/networks/mars-small128.pb \   # the pre-trained .pb file mentioned later; three files are offered for download, differing only in suffix, so just download them all
    --mot_dir=./MOT16/train \   # presumably the training sequences; put all training sequences here
    --output_dir=./resources/detections/MOT16_train    # output location
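The features produced by this step are compared with cosine similarity, as mentioned above. A generic NumPy sketch of that comparison, not code from this repository:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors:
    1.0 means identical direction, 0.0 means orthogonal."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# a descriptor compared with itself scores 1.0
rng = np.random.default_rng(0)
f1 = rng.normal(size=128)  # stand-in for a real 128-d appearance descriptor
print(round(cosine_similarity(f1, f1), 6))   # 1.0
```

Two crops of the same pedestrian should score close to 1.0; crops of different people should score noticeably lower.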

The model has been generated with TensorFlow 1.5. If you run into incompatibility, re-export the frozen inference graph to obtain a new mars-small128.pb that is compatible with your version:

  # The model was generated with TensorFlow 1.5; if it is incompatible with your version, re-export the frozen inference graph to regenerate mars-small128.pb

python tools/freeze_model.py

The generate_detections.py stores for each sequence of the MOT16 dataset a separate binary file in NumPy native format.
Each file contains an array of shape Nx138, where N is the number of detections in the corresponding MOT sequence.
The first 10 columns of this array contain the raw MOT detection copied over from the input file.
The remaining 128 columns store the appearance descriptor.
The files generated by this command can be used as input for the deep_sort_app.py.

# generate_detections.py stores, for each sequence of the MOT16 dataset, a separate binary file in NumPy's native format.
# Each file contains an N×138 array, where N is the number of detections in the corresponding MOT sequence.
# The first 10 columns of the array contain the raw MOT detections copied from the input file.
# The remaining 128 columns store the appearance descriptor.
# The files generated by this command can be used as input to deep_sort_app.py.

NOTE: If python tools/generate_detections.py raises a TensorFlow error, try passing an absolute path to the --model argument. This might help in some cases.  # use an absolute path
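Given the N×138 layout described above (10 raw MOT columns plus a 128-d descriptor), splitting a generated file back into detections and features is a simple slice. A sketch; the synthetic zero array stands in for a real .npy file loaded with np.load:

```python
import numpy as np

# stand-in for: detections = np.load("./resources/detections/MOT16_train/MOT16-02.npy")
detections = np.zeros((5, 138))

raw_mot = detections[:, :10]    # raw MOT detection columns copied from the input file
features = detections[:, 10:]   # 128-d appearance descriptors

print(raw_mot.shape, features.shape)   # (5, 10) (5, 128)
```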

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Training the model

To train the deep association metric model we used a novel cosine metric learning approach which is provided as a separate repository.    # to train the association model, a cosine metric learning approach is used; it is provided as a separate repository

 

Highlevel overview of source files

In the top-level directory are executable scripts to execute, evaluate, and visualize the tracker. The main entry point is in deep_sort_app.py. This file runs the tracker on a MOTChallenge sequence.

In package deep_sort is the main tracking code:

  • detection.py: Detection base class.
  • kalman_filter.py: A Kalman filter implementation and concrete parametrization for image space filtering.
  • linear_assignment.py: This module contains code for min cost matching and the matching cascade.
  • iou_matching.py: This module contains the IOU matching metric.
  • nn_matching.py: A module for a nearest neighbor matching metric.
  • track.py: The track class contains single-target track data such as Kalman state, number of hits, misses, hit streak, associated feature vectors, etc.
  • tracker.py: This is the multi-target tracker class.
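The IOU metric in iou_matching.py is the standard intersection-over-union of two bounding boxes. A generic sketch under the same (top-left x, top-left y, width, height) box convention, not the repository's exact implementation:

```python
def iou(bbox, candidate):
    """Intersection over union of two boxes given as
    (top-left x, top-left y, width, height)."""
    tl_x = max(bbox[0], candidate[0])
    tl_y = max(bbox[1], candidate[1])
    br_x = min(bbox[0] + bbox[2], candidate[0] + candidate[2])
    br_y = min(bbox[1] + bbox[3], candidate[1] + candidate[3])
    inter = max(0.0, br_x - tl_x) * max(0.0, br_y - tl_y)
    union = bbox[2] * bbox[3] + candidate[2] * candidate[3] - inter
    return inter / union if union > 0 else 0.0

# two 2x2 boxes overlapping by half their width: inter=2, union=6
print(iou((0, 0, 2, 2), (1, 0, 2, 2)))   # 1/3 ≈ 0.333
```

The tracker turns this into a cost (1 - IOU) for min-cost matching of unconfirmed tracks.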

The deep_sort_app.py expects detections in a custom format, stored in .npy files.
These can be computed from MOTChallenge detections using generate_detections.py.
We also provide pre-generated detections.

# deep_sort_app.py expects detections in a custom format, stored in .npy files.
# These can be computed from MOTChallenge detections using generate_detections.py.
# We also provide pre-generated detections, as in the screenshot below.

 

Citing DeepSORT

If you find this repo useful in your research, please consider citing the following papers:

@inproceedings{Wojke2017simple,
  title={Simple Online and Realtime Tracking with a Deep Association Metric},
  author={Wojke, Nicolai and Bewley, Alex and Paulus, Dietrich},
  booktitle={2017 IEEE International Conference on Image Processing (ICIP)},
  year={2017},
  pages={3645--3649},
  organization={IEEE},
  doi={10.1109/ICIP.2017.8296962}
}

@inproceedings{Wojke2018deep,
  title={Deep Cosine Metric Learning for Person Re-identification},
  author={Wojke, Nicolai and Bewley, Alex},
  booktitle={2018 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2018},
  pages={748--756},
  organization={IEEE},
  doi={10.1109/WACV.2018.00087}
}

 

 

Addendum:

To the readers asking for the file in the comments below: sorry, I have been too busy to find it for you. Is it the file selected in the screenshot below?
PS: I forget why there were three files in the download; my program only uses the one .pb file. If you understand why, please let me know.

Uploaded to the netdisk:

Link: https://pan.baidu.com/s/1ayOlSj5nq4_Ad96f1NPx8A   Extraction code: x73h
