Multi-Object Tracking with YOLOv10 + ByteTrack

1. Preparing the Dataset

1.1 Dataset Annotation

Use the labelImg tool to annotate your own dataset, and make sure you keep your class information straight.

After installing Anaconda, open a command prompt and run the following to launch the annotation UI (this assumes a conda environment named labelimg with labelImg installed):

conda activate labelimg
labelimg

 

Then annotate your dataset.
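With labelImg set to YOLO format, each image gets a `.txt` file containing one line per object: `class_id x_center y_center width height`, with all coordinates normalized to [0, 1]. A minimal sketch of converting such a line back to pixel coordinates (the sample line and image size below are made up):

```python
# Parse one YOLO-format label line and convert it to pixel coordinates.
# Line format: "<class_id> <x_center> <y_center> <width> <height>", normalized to [0, 1].

def yolo_line_to_pixels(line, img_w, img_h):
    cls, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # Convert center/size to top-left and bottom-right corners
    x1, y1 = xc - w / 2, yc - h / 2
    x2, y2 = xc + w / 2, yc + h / 2
    return int(cls), (x1, y1, x2, y2)

cls, box = yolo_line_to_pixels("0 0.5 0.5 0.2 0.4", img_w=640, img_h=480)
print(cls, box)  # 0 (256.0, 144.0, 384.0, 336.0)
```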

1.2 Dataset Split

The following script splits your dataset into training and validation sets by ratio:

import os
import random
import shutil
 
# Set paths
image_folder = ''  # path to the image folder
label_folder = ''  # path to the label folder
train_image_folder = ''  # path to the training-set image folder
train_label_folder = ''  # path to the training-set label folder
val_image_folder = ''  # path to the validation-set image folder
val_label_folder = ''  # path to the validation-set label folder
 
# Create the training-set and validation-set folders
os.makedirs(train_image_folder, exist_ok=True)
os.makedirs(train_label_folder, exist_ok=True)
os.makedirs(val_image_folder, exist_ok=True)
os.makedirs(val_label_folder, exist_ok=True)
 
# Collect all images (assuming .jpg images and .txt labels)
image_files = [f for f in os.listdir(image_folder) if f.endswith('.jpg')]
 
# Pair every image with its same-stem label file. (Pairing by filename avoids
# the silent mismatches you can get from zipping two independent os.listdir()
# results, whose ordering is not guaranteed to line up.)
data = []
for image in image_files:
    label = os.path.splitext(image)[0] + '.txt'
    assert os.path.exists(os.path.join(label_folder, label)), f"Missing label for {image}"
    data.append((image, label))
 
# Shuffle the files
random.shuffle(data)
 
# Split the dataset 9:1
train_size = int(0.9 * len(data))
train_data = data[:train_size]
val_data = data[train_size:]
 
# Copy the files into the training and validation directories
for image, label in train_data:
    shutil.copy(os.path.join(image_folder, image), os.path.join(train_image_folder, image))
    shutil.copy(os.path.join(label_folder, label), os.path.join(train_label_folder, label))
 
for image, label in val_data:
    shutil.copy(os.path.join(image_folder, image), os.path.join(val_image_folder, image))
    shutil.copy(os.path.join(label_folder, label), os.path.join(val_label_folder, label))
 
print(f"Split complete: {len(train_data)} images for training, {len(val_data)} images for validation.")

Just fill in your own dataset paths and run the script; with that, the dataset is ready.
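Before splitting, it is also worth verifying that every image really has a label file with the same filename stem. A small stdlib sketch (the folder paths you pass in are placeholders for your own):

```python
import os

def find_unlabeled(image_folder, label_folder):
    """Return the .jpg images that have no same-stem .txt label file."""
    missing = []
    for f in sorted(os.listdir(image_folder)):
        if f.endswith('.jpg'):
            label = os.path.splitext(f)[0] + '.txt'
            if not os.path.exists(os.path.join(label_folder, label)):
                missing.append(f)
    return missing

# Example: find_unlabeled('path/to/images', 'path/to/labels')
```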

2. Training the YOLOv10 Weights

2.1 Modify the training yaml file

When training the weights, be sure to replace the class information with your own: change nc in the model yaml to your real number of classes. (The template below is the Ultralytics YOLOv8n detection yaml; YOLOv10 model yamls expose the same nc field.)

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect
 
# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers,  3157200 parameters,  3157184 gradients,   8.9 GFLOPs
 
# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
 
# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 12
 
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 15 (P3/8-small)
 
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 18 (P4/16-medium)
 
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 21 (P5/32-large)
 
  - [[15, 18, 21], 1, Detect, [nc]] # Detect(P3, P4, P5)

2.2 Create a dataset yaml file

Create your own .yaml file that points to your dataset's paths; the coco8.yaml that ships with Ultralytics makes a good template:

# Ultralytics YOLO 🚀, AGPL-3.0 license
# COCO8 dataset (first 8 images from COCO train2017) by Ultralytics
# Documentation: https://docs.ultralytics.com/datasets/detect/coco8/
# Example usage: yolo train data=coco8.yaml
# parent
# ├── ultralytics
# └── datasets
#     └── coco8  ← downloads here (1 MB)
 
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco8 # dataset root dir
train: images/train # train images (relative to 'path') 4 images
val: images/val # val images (relative to 'path') 4 images
test: # test images (optional)
 
# Classes
names:
  0: person
  1: bicycle
  2: car
  3: motorcycle
  4: airplane
  5: bus
  6: train
  7: truck
  8: boat
  9: traffic light
  10: fire hydrant
  11: stop sign
  12: parking meter
  13: bench
  14: bird
  15: cat
  16: dog
  17: horse
  18: sheep
  19: cow
  20: elephant
  21: bear
  22: zebra
  23: giraffe
  24: backpack
  25: umbrella
  26: handbag
  27: tie
  28: suitcase
  29: frisbee
  30: skis
  31: snowboard
  32: sports ball
  33: kite
  34: baseball bat
  35: baseball glove
  36: skateboard
  37: surfboard
  38: tennis racket
  39: bottle
  40: wine glass
  41: cup
  42: fork
  43: knife
  44: spoon
  45: bowl
  46: banana
  47: apple
  48: sandwich
  49: orange
  50: broccoli
  51: carrot
  52: hot dog
  53: pizza
  54: donut
  55: cake
  56: chair
  57: couch
  58: potted plant
  59: bed
  60: dining table
  61: toilet
  62: tv
  63: laptop
  64: mouse
  65: remote
  66: keyboard
  67: cell phone
  68: microwave
  69: oven
  70: toaster
  71: sink
  72: refrigerator
  73: book
  74: clock
  75: vase
  76: scissors
  77: teddy bear
  78: hair drier
  79: toothbrush
 
# Download script/URL (optional)
download: https://github.com/ultralytics/assets/releases/download/v0.0.0/coco8.zip

Replace the class names with those of your own dataset, and swap in your own dataset paths.
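For a custom dataset, the yaml boils down to just the paths and the class names. A sketch that generates such a file from Python (the root path, output filename, and class names below are made-up examples):

```python
# Write a minimal Ultralytics-style dataset yaml for a custom dataset.
# All paths and class names below are placeholders - replace them with your own.

def write_dataset_yaml(out_path, root, train_dir, val_dir, class_names):
    lines = [
        f"path: {root}  # dataset root dir",
        f"train: {train_dir}  # train images (relative to 'path')",
        f"val: {val_dir}  # val images (relative to 'path')",
        "",
        "names:",
    ]
    lines += [f"  {i}: {name}" for i, name in enumerate(class_names)]
    with open(out_path, 'w', encoding='utf-8') as f:
        f.write("\n".join(lines) + "\n")

write_dataset_yaml('my_dataset.yaml', '../datasets/my_dataset',
                   'images/train', 'images/val', ['cat', 'dog'])
print(open('my_dataset.yaml').read())
```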

2.3 Create a train.py file

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO  # the package exports a single YOLO class (there is no YOLOv8 class)
 
if __name__ == '__main__':
    model = YOLO('yolov8n.yaml')
    # model.load('yolov8n.pt')  # optionally load pretrained weights
    model.train(data='your_dataset.yaml',  # the dataset yaml created in section 2.2
                cache=False,
                imgsz=640,
                epochs=100,
                batch=32,
                close_mosaic=0,
                workers=8,
                device='0',
                optimizer='SGD',  # use SGD
                project='runs/train',
                name='exp',
                )

2.4 Train on the dataset

After making these changes, run train.py to train on your own dataset. When training finishes, the resulting weight files are saved under runs/train/exp/weights (per the project and name settings above).

3. ByteTrack Multi-Object Tracking

3.1 Modify the parameters in track.py

Then simply run the file.

The tracking visualizations are saved under runs/track.

3.2 Producing the tracking text for evaluation
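MOT-style evaluation tools (e.g. the MOTChallenge toolkit or TrackEval) expect one comma-separated line per object per frame: `frame,id,x,y,w,h,score,-1,-1,-1`, with the box given as top-left corner plus width and height in pixels. Below is a sketch of a writer for that format; the `(track_id, x1, y1, x2, y2, score)` tuple layout is an assumption about how your tracker reports results:

```python
# Write tracking results in the MOTChallenge text format:
#   frame, track_id, x, y, w, h, score, -1, -1, -1
# `tracks_per_frame` maps frame number -> list of (track_id, x1, y1, x2, y2, score);
# that corner-based tuple layout is an assumption about your tracker's output.

def write_mot_results(path, tracks_per_frame):
    with open(path, 'w') as f:
        for frame in sorted(tracks_per_frame):
            for track_id, x1, y1, x2, y2, score in tracks_per_frame[frame]:
                w, h = x2 - x1, y2 - y1  # convert corners to width/height
                f.write(f"{frame},{track_id},{x1:.1f},{y1:.1f},{w:.1f},{h:.1f},{score:.2f},-1,-1,-1\n")

write_mot_results('results.txt', {1: [(1, 10, 20, 110, 220, 0.9)]})
print(open('results.txt').read())  # 1,1,10.0,20.0,100.0,200.0,0.90,-1,-1,-1
```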

3.3 Getting the tracking results

### Integrating YOLOv8 with ByteTrack

To combine YOLOv8 with ByteTrack effectively, you can proceed as follows:

#### Install dependencies

Make sure the Python packages required by YOLOv8 and ByteTrack are installed. Typically this means installing `ultralytics` along with whatever other computer-vision libraries you may need.

```bash
pip install ultralytics bytetrack
```

#### Import the modules and initialize the models

Load the YOLOv8 pretrained weight file and configure the ByteTrack parameters. This assumes the YOLOv8 model file (e.g. `yolov8s.onnx`) and the resources ByteTrack needs have already been downloaded.

```python
from ultralytics import YOLO
import cv2
import numpy as np
from byte_tracker import BYTETracker  # assumed to be the class implementing the tracking logic

# Initialize the YOLOv8 object detector
model = YOLO('path/to/yolov8_model.onnx')

# Create a BYTETracker instance for multi-object tracking
byte_tracker = BYTETracker(frame_rate=30)  # set the frame rate, e.g. 30 fps
```

#### Read images and run prediction

For each video frame or still image, run object detection and pass the resulting bounding boxes to ByteTrack for trajectory association.

```python
def process_frame(image_path):
    image = cv2.imread(image_path)

    # Run the YOLOv8 detection task
    detections = model.predict(image)[0].boxes.data.tolist()

    online_targets = []
    if len(detections) > 0:
        track_results = byte_tracker.update(np.array(detections), image.shape[:2])

        for t in track_results:
            bbox, id_, score = t.tlbr.astype(int).tolist(), int(t.track_id), float(t.score)
            online_targets.append((bbox, id_))

            # Draw a rectangle and the track ID for each recognized object
            label = f'ID-{id_}'
            color = (0, 255, 0)
            thickness = 2
            cv2.rectangle(image, tuple(bbox[:2]), tuple(bbox[2:]), color=color, thickness=thickness)
            cv2.putText(image, text=label, org=(bbox[0], max(0, bbox[1] - 10)),
                        fontFace=cv2.FONT_HERSHEY_SIMPLEX, fontScale=.5,
                        color=color, thickness=thickness // 2)

    return image, online_targets
```

This code shows how, once YOLOv8 has localized the objects, their positions are fed into ByteTrack via `update()` so that multiple moving objects can be tracked continuously.

#### Display or save the results

The last step is to do something further with the processed images: either display them for immediate inspection, or serialize them to storage to build a complete output video.

```python
processed_image, _ = process_frame("input/image_or_video_frame.png")
cv2.imshow('Processed Frame', processed_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

The above is an outline of one typical way to combine YOLOv8 with ByteTrack.