1. Installing YOLOv8
1.1 Create an Anaconda virtual environment
For example, create an environment named torch_py39. The ultralytics package (YOLOv8) requires Python 3.8 or newer, so we install Python 3.9 here, then activate the environment:
conda create -n torch_py39 python=3.9
conda activate torch_py39
1.2 Install ultralytics
pip install ultralytics
1.3 Download the ultralytics source code
The source tree contains the dataset configuration files, model configuration files, and so on, which we will need later for training.
Download link
2. Download the VisDrone dataset
3. Convert the dataset labels to YOLO format
import os
from pathlib import Path


def visdrone2yolo(dir):
    from PIL import Image
    from tqdm import tqdm

    def convert_box(size, box):
        # Convert a VisDrone box (x_min, y_min, w, h) to a normalized YOLO (x_center, y_center, w, h) box
        dw = 1. / size[0]
        dh = 1. / size[1]
        return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh

    (dir / 'labels').mkdir(parents=True, exist_ok=True)  # make labels directory
    pbar = tqdm((dir / 'annotations').glob('*.txt'), desc=f'Converting {dir}')
    for f in pbar:
        img_size = Image.open((dir / 'images' / f.name).with_suffix('.jpg')).size
        lines = []
        with open(f, 'r') as file:  # read annotation .txt
            for row in [x.split(',') for x in file.read().strip().splitlines()]:
                if row[4] == '0':  # VisDrone 'ignored regions' class 0
                    continue
                cls = int(row[5]) - 1  # class id minus 1 (drop the 'ignored regions' class)
                box = convert_box(img_size, tuple(map(int, row[:4])))
                lines.append(f"{cls} {' '.join(f'{x:.6f}' for x in box)}\n")
        with open(str(f).replace(os.sep + 'annotations' + os.sep, os.sep + 'labels' + os.sep), 'w') as fl:
            fl.writelines(lines)  # write label .txt


if __name__ == '__main__':
    dir = Path(r'E:\YOLO-datasets\VisDrone')  # directory that contains the VisDrone2019 folders

    # Convert
    for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev':
        visdrone2yolo(dir / d)  # convert VisDrone annotations to YOLO labels
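As a quick sanity check of the conversion above, consider a hypothetical annotation row with bbox_left=100, bbox_top=200, width=50, height=80 in a 1000x800 image (the function is restated here so the snippet is self-contained):

```python
def convert_box(size, box):
    # Same conversion as in the script above:
    # VisDrone (x_min, y_min, w, h) -> normalized YOLO (x_center, y_center, w, h)
    dw = 1. / size[0]
    dh = 1. / size[1]
    return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh

# Hypothetical annotation: box (100, 200, 50, 80) in a 1000x800 image
x, y, w, h = convert_box((1000, 800), (100, 200, 50, 80))
print(x, y, w, h)  # approximately 0.125 0.3 0.05 0.1
```

The box center (125, 240) divided by the image size gives (0.125, 0.3), and the width/height scale the same way, which is exactly the normalized format YOLO label files expect.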
4. Prepare the dataset configuration file
VisDrone.yaml is located under ultralytics-main\ultralytics\cfg\datasets.
You can create your own folder and copy VisDrone.yaml into it.
Edit the path entry in the config file (by default path: ../datasets/VisDrone) so that it points to where you downloaded the dataset.
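A sketch of what the edited config might look like, assuming the dataset was extracted to the E:/YOLO-datasets/VisDrone directory used in section 3 (check the split paths and class names against the VisDrone.yaml shipped in your ultralytics source tree):

```yaml
# Edited VisDrone.yaml (sketch)
path: E:/YOLO-datasets/VisDrone  # dataset root; point this at your own download location
train: VisDrone2019-DET-train/images
val: VisDrone2019-DET-val/images
test: VisDrone2019-DET-test-dev/images

# Classes (the 10 VisDrone object categories; class 0 'ignored regions' was dropped during conversion)
names:
  0: pedestrian
  1: people
  2: bicycle
  3: car
  4: van
  5: truck
  6: tricycle
  7: awning-tricycle
  8: bus
  9: motor
```

Relative paths in the yaml are resolved against the configured datasets directory, so an absolute path as above is the least error-prone choice on Windows.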
5. Start training
Specify the pretrained weights with model=yolov8l.pt.
Specify the dataset configuration file with data=E:/code/other/ultralytics-main/VisDrone/VisDrone-2.yaml.
yolo train model=yolov8l.pt data=E:/code/other/ultralytics-main/VisDrone/VisDrone-2.yaml batch=8 epochs=100
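The same run can also be launched from Python through the ultralytics API instead of the yolo CLI, which is convenient if you want to script training (a sketch; the data path below assumes the layout from the sections above and should be adjusted to your setup):

```python
from ultralytics import YOLO

# Load the pretrained YOLOv8-large checkpoint (downloaded automatically on first use)
model = YOLO('yolov8l.pt')

# Equivalent of the CLI command above
model.train(data='E:/code/other/ultralytics-main/VisDrone/VisDrone-2.yaml',
            batch=8, epochs=100)
```

Training results (weights, curves, metrics) are written under runs/detect/train by default, the same as with the CLI.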