The Yolov5_DeepSort_Pytorch project that mikel-brostrom publishes on GitHub has been updated: the code is now better encapsulated and supports multiple REID feature-extraction models, which improves DeepSORT's detection-and-tracking performance. This post records how to use this updated version of Yolov5_DeepSort_Pytorch, and also shows how to modify the ZQPei REID model so that it works with mikel-brostrom's updated version.
Running tracking (track.py) with the default osnet REID model of Yolov5_DeepSort_Pytorch
Yolov5_DeepSort_Pytorch contains two linked directories (submodules), yolov5 and reid, so a single clone will not fetch all the code. The three GitHub repositories therefore need to be cloned separately:
Yolov5_DeepSort_Pytorch: git clone https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch
Yolov5: git clone https://github.com/ultralytics/yolov5
REID: git clone https://github.com/KaiyangZhou/deep-person-reid
Assume your DeepSORT directory is your_dir, i.e. the first repository cloned above. The second clone is yolov5; place it under your_dir, giving your_dir/yolov5. The third clone is reid; place it under your_dir/deep_sort/deep, giving your_dir/deep_sort/deep/reid.
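The resulting layout (including the weight directories used in the following steps) looks like this:

your_dir/                      # Yolov5_DeepSort_Pytorch clone
├── track.py
├── yolov5/                    # ultralytics/yolov5 clone
│   └── weights/               # yolov5 weight files (see below)
└── deep_sort/
    └── deep/
        ├── checkpoint/        # REID weight files (see below)
        └── reid/              # KaiyangZhou/deep-person-reid clone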
Assume conda and a virtual environment are already installed, together with the packages needed to run Yolov5_DeepSort_Pytorch. Enter the reid directory and run
python setup.py develop
This installs KaiyangZhou's REID (torchreid) environment.
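To confirm the install succeeded, a quick check can be run from a Python shell (a minimal sketch; torchreid's show_avai_models() lists the registered REID architectures):

import torchreid

print(torchreid.__version__)           # version of the installed deep-person-reid package
torchreid.models.show_avai_models()    # should list osnet_x1_0 among the available models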
Download the yolov5 model weights and put them in your_dir/yolov5/weights.
From the Model Zoo in KaiyangZhou's GitHub repository, download a weight file, e.g. osnet_x1_0.pth, and put it in the checkpoint directory: your_dir/deep_sort/deep/checkpoint.
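Before wiring the checkpoint into DeepSORT, it can be sanity-checked with torchreid's FeatureExtractor. The sketch below is illustrative only; the checkpoint file name matches the one used in deep_sort.yaml later and may differ from the file you actually downloaded.

import torch
from torchreid.utils import FeatureExtractor

# Load osnet_x1_0 from the local checkpoint instead of downloading weights
extractor = FeatureExtractor(
    model_name='osnet_x1_0',
    model_path='deep_sort/deep/checkpoint/osnet_x1_0_imagenet.pth',
    device='cuda' if torch.cuda.is_available() else 'cpu'
)

# Feed one dummy person crop (NCHW); OSNet x1_0 produces 512-dim appearance features
dummy = torch.rand(1, 3, 256, 128)
features = extractor(dummy)
print(features.shape)  # expected: torch.Size([1, 512])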
(1) Modify deep_sort/configs/deep_sort.yaml
DEEPSORT:
  MODEL_TYPE: "osnet_x1_0"
  REID_CKPT: '~/your_dir/deep_sort/deep/checkpoint/osnet_x1_0_imagenet.pth'
  MAX_DIST: 0.1 # 0.2 The matching threshold. Samples with larger distance are considered an invalid match
  MAX_IOU_DISTANCE: 0.7 # 0.7 Gating threshold. Associations with cost larger than this value are disregarded
  MAX_AGE: 90 # 30 Maximum number of missed detections before a track is deleted
  N_INIT: 3 # 3 Number of frames that a track remains in initialization phase
  NN_BUDGET: 100 # 100 Maximum size of the appearance descriptors gallery
  MIN_CONFIDENCE: 0.75
  NMS_MAX_OVERLAP: 1.0
Here, REID_CKPT is added and several parameter settings are moved into the yaml file, so that as few arguments as possible need to be passed on the track.py command line.
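For reference, track.py reads this yaml through the project's config helper. A minimal sketch of how the values above become available (assuming the deep_sort/utils/parser.get_config helper from mikel-brostrom's repo):

from deep_sort.utils.parser import get_config

cfg = get_config()
cfg.merge_from_file('deep_sort/configs/deep_sort.yaml')  # loads the DEEPSORT section

print(cfg.DEEPSORT.MODEL_TYPE)  # "osnet_x1_0"
print(cfg.DEEPSORT.REID_CKPT)   # local checkpoint path added above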
(2) Modify the arguments of the DeepSort instantiation in track.py
deepsort = DeepSort(
    cfg.DEEPSORT.MODEL_TYPE,
    cfg.DEEPSORT.REID_CKPT,  # added: path to the local REID checkpoint
    device,
    max_dist=cfg.DEEPSORT.MAX_DIST,
    max_iou_distance=cfg.DEEPSORT.MAX_IOU_DISTANCE,
    max_age=cfg.DEEPSORT.MAX_AGE,
    n_init=cfg.DEEPSORT.N_INIT,
    nn_budget=cfg.DEEPSORT.NN_BUDGET,
)
Since a REID weight-file path argument is added here, the corresponding parameter model_path must also be added to the DeepSort class definition. Modify __init__() in deep_sort/deep_sort.py:
class DeepSort(object):
    def __init__(self, model_type, model_path, device, max_dist=0.2, min_confidence=0.3, nms_max_overlap=1.0, max_iou_distance=0.7, max_age=70, n_init=3, nn_budget=100, use_cuda=True):
        self.min_confidence = min_confidence
        self.nms_max_overlap = nms_max_overlap

        self.extractor = FeatureExtractor(
            model_name=model_type,
            model_path=model_path,
            device=str(device)
        )

        max_cosine_distance = max_dist
        metric = NearestNeighborDistanceMetric(
            "cosine", max_cosine_distance, nn_budget)
        self.tracker = Tracker(
            metric, max_iou_distance=max_iou_distance, max_age=max_age, n_init=n_init)
Note: mikel seems to have changed the way model_path is introduced again; I find that approach overly complicated, so I keep the modification above. Its purpose is simply to locate the weight file under deep_sort/deep/checkpoint, instead of downloading weights from the internet or searching the local .torch cache.
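For context, track.py then drives the tracker frame by frame with the yolov5 detections. The update() call below follows the signature used in mikel-brostrom's repo at the time of writing (boxes in xywh format, confidences, class ids, original frame) and may differ in other versions:

# inside the per-frame detection loop of track.py
outputs = deepsort.update(xywhs.cpu(), confs.cpu(), clss.cpu(), im0)

# each row of outputs is one confirmed track: [x1, y1, x2, y2, track_id, class_id]
for x1, y1, x2, y2, track_id, cls_id in outputs:
    print(f"track {int(track_id)}: class {int(cls_id)} at ({x1}, {y1}, {x2}, {y2})")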
(3) Run the DeepSORT tracking program. A fairly complete set of command-line options is:
python track.py --yolo_model ~/your_dir/yolov5/weights/yolov5s.pt \  // yolov5 weight file
    --source ~/your_dir/video_demo.mp4 \  // input video file
    --show-vid \  // display the tracking video while running
    --classes 0 2 \  // 0 = pedestrian class, 2 = car class
    --save-txt \  // save results in MOT16-compatible format
    --save-vid  // save the tracking video
Here, --classes 0 means yolov5 detects pedestrians, whose class id is 0 (class id 2 is car).
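If other object types should be tracked, the class ids follow the COCO ordering used by yolov5's pretrained weights; a few common ones, for illustration only:

# COCO class ids used by yolov5's pretrained weights (subset)
coco_ids = {
    0: 'person',      # pedestrians
    1: 'bicycle',
    2: 'car',
    3: 'motorcycle',
    5: 'bus',
    7: 'truck',
}
# e.g. --classes 0 2 5 7 would track pedestrians plus common vehicle types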
Modifying the ZQPei REID model file
Name this model file model_ZQP.py and put it in the directory deep_sort/deep/reid/torchreid/models.
The only modification the model needs is an added builder function def ZQP() (a registration sketch is given after the code excerpt below):
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicBlock(nn.Module):
    def __init__(self, c_in, c_out, is_downsample=False):
        super(BasicBlock, self).__init__()
        self.is_downsample = is_downsample
        if is_downsample:
            self.conv1 = nn.Conv2d(
                c_in, c_out, 3, stride=2, padding=1, bias=False)
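The rest of the file is ZQPei's original network definition (ending with his Net class). What makes the model usable by name from torchreid is a builder function appended at the end of model_ZQP.py plus one registration entry in torchreid's model factory. The sketch below is a minimal illustration under those assumptions: Net is ZQPei's class as defined in this file, __model_factory is the dict in deep_sort/deep/reid/torchreid/models/__init__.py, and the name ZQP follows this post's naming.

# appended at the end of model_ZQP.py: builder function with the signature
# torchreid's model factory expects for its registered architectures
def ZQP(num_classes=751, loss='softmax', pretrained=True, **kwargs):
    # Net is ZQPei's original REID network defined above in this file;
    # reid=True makes forward() return normalized appearance features
    return Net(num_classes=num_classes, reid=True)


# in deep_sort/deep/reid/torchreid/models/__init__.py, import the new module
# and register the builder so build_model()/FeatureExtractor can find it by name:
#     from .model_ZQP import ZQP
#     __model_factory['ZQP'] = ZQP

With the registration in place, MODEL_TYPE in deep_sort.yaml can be set to "ZQP" and REID_CKPT pointed at the ZQPei weight file.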
