极市平台 (ExtremeMart) Competition: Main Workflow

This post walks through installing YOLOv5 locally: creating a virtual environment, cloning the YOLOv5 repository, and installing its dependencies. It provides a data-processing script that converts XML annotations to YOLO-format labels, shows how to configure myvoc.yaml and yolov5s.yaml for a custom dataset, covers the training parameter settings, and closes with suggested edits to train.py and the training command.

Installing YOLOv5 Locally

Create a virtual environment:

conda create -n yolov5 python=3.7

After activating the environment:

git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
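After the install finishes, a quick sanity check (a minimal sketch, not part of the repo) confirms the Python version and whether PyTorch can see a GPU:

```python
import sys

# YOLOv5 requires Python >= 3.6; the conda env above pins 3.7
assert sys.version_info >= (3, 6), "Python 3.6+ required"

try:
    import torch  # installed via requirements.txt
    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("torch missing - run: pip install -r requirements.txt")
```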

Data Processing

# -*- coding: utf-8 -*-

import xml.etree.ElementTree as ET
from tqdm import tqdm
import os
from os import getcwd


def convert(size, box):
    dw = 1. / (size[0])
    dh = 1. / (size[1])
    x = (box[0] + box[1]) / 2.0 - 1
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return x, y, w, h


def convert_annotation(image_id):
    # try:
    in_file = open('/home/data/130/{}.xml'.format(image_id), encoding='utf-8')
    out_file = open('/project/LYXXX/VOCData/labels/{}.txt'.format(image_id),
                    'w', encoding='utf-8')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        #difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
             float(xmlbox.find('ymax').text))
        b1, b2, b3, b4 = b
        # clip boxes that extend past the image boundary
        if b2 > w:
            b2 = w
        if b4 > h:
            b4 = h
        b = (b1, b2, b3, b4)
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " +
                       " ".join([str(a) for a in bb]) + '\n')
    # except Exception as e:
    #     print(e, image_id)


if __name__ == '__main__':

    sets = ['train', 'val']

    image_ids = [v.split('.')[0]
                 for v in os.listdir('/home/data/130/') if v.endswith('.xml')]
    #print(image_ids)

    split_num = int(1 * len(image_ids))  # 1.0 = all images go to train; lower it (e.g. 0.9) to hold out a val split

    classes = ['person_no_clothes', 'person_clothes']

    # must match the out_file directory used in convert_annotation
    os.makedirs('/project/LYXXX/VOCData/labels/', exist_ok=True)
    list_file = open('/project/LYXXX/train.txt', 'w')
    for image_id in tqdm(image_ids[:split_num]):
        list_file.write('/home/data/130/{}.jpg\n'.format(image_id))
        convert_annotation(image_id)
    list_file.close()

    list_file = open('/project/LYXXX/val.txt', 'w')
    for image_id in tqdm(image_ids[split_num:]):
        list_file.write('VOCData/images/{}.jpg\n'.format(image_id))
        convert_annotation(image_id)
    list_file.close()

This handles the case where the XML files and images live in the same folder. For local runs, change in_file to point at the folder holding your dataset; out_file can simply point at a VOCData directory under the current working directory:

    in_file = open('/home/data/130/{}.xml'.format(image_id), encoding='utf-8')
    out_file = open('/project/VOCData/labels/{}.txt'.format(image_id),
                    'w', encoding='utf-8')

Note that the classes list must be adjusted for each competition.
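As a quick sanity check of the conversion math, the convert() helper can be exercised on its own (reproduced below so the snippet runs standalone):

```python
def convert(size, box):
    # VOC box (xmin, xmax, ymin, ymax) in pixels -> YOLO (x_center, y_center, w, h) normalized to [0, 1]
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0 - 1  # the -1 compensates for VOC's 1-based pixel coordinates
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    return x * dw, y * dh, w * dw, h * dh

# A 200x100 box at (100, 50) in a 640x480 image:
x, y, w, h = convert((640, 480), (100, 300, 50, 150))
print(x, y, w, h)  # -> 0.3109375 0.20625 0.3125 0.208333...
```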

Training Your Own Model

Create a myvoc.yaml file under the data folder of the project with the following contents:

train: /project/LYXXX/train.txt
val: /project/LYXXX/train.txt  # reuses the train list; split_num above sends everything to train

# number of classes
nc: 2

# class names
names: ['person_no_clothes', 'person_clothes']
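A frequent mistake is letting nc drift out of sync with names. Below is a stdlib-only sketch of that consistency check, with the yaml contents inlined for illustration (YOLOv5 itself parses the file with PyYAML):

```python
import ast

# myvoc.yaml contents inlined so the check runs standalone
cfg_text = """\
train: /project/LYXXX/train.txt
val: /project/LYXXX/train.txt
nc: 2
names: ['person_no_clothes', 'person_clothes']
"""

# naive key: value parsing, good enough for this flat file
cfg = {}
for line in cfg_text.splitlines():
    if ':' in line and not line.lstrip().startswith('#'):
        key, _, val = line.partition(':')
        cfg[key.strip()] = val.strip()

nc = int(cfg['nc'])
names = ast.literal_eval(cfg['names'])
assert nc == len(names), "nc must equal len(names)"
```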

Remember to download the pretrained weights into the weights folder.
Then update the class count in models/yolov5s.yaml:

# Parameters
nc: 2  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32
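Changing nc also changes the width of the detection head: in anchor-based YOLOv5, each of the 3 anchors per scale predicts 4 box coordinates, 1 objectness score, and nc class scores. A quick arithmetic check (general YOLOv5 behavior, not code from this repo):

```python
def detect_out_channels(nc, anchors_per_scale=3):
    # each anchor predicts: 4 box coords + 1 objectness + nc class scores
    return anchors_per_scale * (nc + 5)

print(detect_out_channels(2))   # 21 channels per detection scale for this 2-class task
print(detect_out_channels(80))  # 255 for COCO's 80 classes
```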

The default paths in train.py need to be changed to absolute paths:

def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
    parser.add_argument('--data', type=str, default='/project/LYXXX/data/coco128.yaml', help='dataset.yaml path')
    parser.add_argument('--hyp', type=str, default='/project/LYXXX/data/hyps/hyp.scratch.yaml', help='hyperparameters path')
    parser.add_argument('--epochs', type=int, default=300)
    parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--noval', action='store_true', help='only validate final epoch')
    parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
    parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
    parser.add_argument('--project', default='/project/train/models/', help='save to project/name')
    parser.add_argument('--entity', default=None, help='W&B entity')
    parser.add_argument('--name', default='exp', help='save to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--quad', action='store_true', help='quad dataloader')
    parser.add_argument('--linear-lr', action='store_true', help='linear LR')
    parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
    parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table')
    parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B')
    parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch')
    parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used')
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
    opt = parser.parse_known_args()[0] if known else parser.parse_args()
    return opt

train.sh contains:

python /project/LYXXX/convert_data.py
python /project/LYXXX/train.py --img 640 --batch 16 --epoch 300 --data /project/LYXXX/data/myvoc.yaml --cfg /project/LYXXX/models/yolov5s.yaml --weights /project/LYXXX/weights/yolov5s.pt --workers 1

# /project/LYXXX/VOCData/labels/CARTclothes20200821_118.txt
# /home/data/130/CARTclothes20200821_118.jpg
# sh /project/LYXXX/start.sh

Run the training script:

sh /project/LYXXX/start.sh

About Paths

Images and annotations are mounted at:

/home/data/130/CARTclothes20200821_118.jpg

The project is stored at:

/project/LYXXX/start.sh

Models trained online are saved to:

/project/train/models/