Switching YOLOv8 to a YOLOv5-style Training Workflow

This article explains how to train YOLOv8 from PyCharm: the relevant file layout (coco.yaml, yolov8.yaml, and default.yaml) and how to write train.py, 验证.py, and 模型计算量.py scripts to adjust the training parameters. The focus is on YOLOv8's terminal-based workflow and how to set up a custom training process instead.

        YOLOv8's training workflow differs slightly from v5 and v7: the official approach is to run training from the terminal (the yolo command-line interface). To train inside PyCharm the same way as v5 and v7, you can add the following few files.

1. First, understand the YOLOv8 file layout

        Take the COCO dataset as an example. Its dataset yaml file is located at ultralytics/cfg/datasets/coco.yaml. To train on your own data, simply prepare a yaml for your dataset following the same format as the COCO yaml, shown in the figure below (coco dataset).
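
        For reference, a custom dataset yaml in the same layout as coco.yaml looks roughly like the sketch below; the root path, split directories, and class names are placeholders for illustration, not values from this article.

# my_dataset.yaml -- hypothetical example following the coco.yaml layout
path: ../datasets/my_dataset  # dataset root directory
train: images/train           # training images, relative to 'path'
val: images/val               # validation images, relative to 'path'
test:                         # optional test images

# class names, indexed from 0
names:
  0: person
  1: car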

        As shown below, the YOLOv8 network structure is defined at ultralytics/cfg/models/v8/yolov8.yaml.

        The training hyperparameter file is at ultralytics/cfg/default.yaml, as shown below.

        You can edit the hyperparameters directly in default.yaml, but this is not recommended; it is better to override them per run through the arguments passed to model.train(), as in the next section.

2. Add train.py, 验证.py, and 模型计算量.py

        YOLOv8 does not ship standalone training and validation scripts, so you can write your own train.py, 验证.py (validation), and 模型计算量.py (model complexity) to train on your dataset (Chinese file names are not recommended, although they do run in PyCharm; rename them as you prefer). As shown below, add these three files in the project root:

        The train.py code is listed below. It only includes the training parameters the author commonly uses; if you need to change other hyperparameters, just add them following the same pattern. To train, simply run this script.

from ultralytics import YOLO

if __name__ == '__main__':
    # Load a model
    model = YOLO(r'yolov8.yaml')  # train from scratch, without pretrained weights
    # model = YOLO(r'yolov8.yaml').load("yolov8n.pt")  # train from pretrained weights
    # Train parameters ----------------------------------------------------------------------------------------------
    model.train(
        data=r'ultralytics/cfg/datasets/coco.yaml',
        epochs=300,  # (int) number of epochs to train for
        patience=50,  # (int) epochs to wait for no observable improvement for early stopping of training
        batch=16,  # (int) number of images per batch (-1 for AutoBatch)
        imgsz=416,  # (int) size of input images as integer or w,h
        save=True,  # (bool) save train checkpoints and predict results
        save_period=-1,  # (int) save checkpoint every x epochs (disabled if < 1)
        cache=False,  # (bool) True/ram, disk or False. Use cache for data loading
        device='',  # (int | str | list, optional) device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
        workers=1,  # (int) number of worker threads for data loading (per RANK if DDP)
        project='runs/train',  # (str, optional) project name
        name='exp',  # (str, optional) experiment name, results saved to 'project/name' directory
        optimizer='auto',  # (str) optimizer to use, choices=[SGD, Adam, Adamax, AdamW, NAdam, RAdam, RMSProp, auto]
        # Classification
        dropout= 0.0,  # (float) use dropout regularization (classify train only)
        # Hyperparameters ----------------------------------------------------------------------------------------------
        lr0=0.01,  # (float) initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
        lrf=0.01,  # (float) final learning rate (lr0 * lrf)
        momentum=0.937,  # (float) SGD momentum/Adam beta1
        weight_decay=0.0005,  # (float) optimizer weight decay 5e-4
        warmup_epochs=3.0,  # (float) warmup epochs (fractions ok)
        warmup_momentum=0.8,  # (float) warmup initial momentum
        warmup_bias_lr=0.1,  # (float) warmup initial bias lr
        box=7.5,  # (float) box loss gain
        cls=0.5,  # (float) cls loss gain (scale with pixels)
        dfl=1.5,  # (float) dfl loss gain
        pose=12.0,  # (float) pose loss gain
        kobj=1.0,  # (float) keypoint obj loss gain
        label_smoothing=0.0,  # (float) label smoothing (fraction)
        nbs=64,  # (int) nominal batch size
        hsv_h=0.015,  # (float) image HSV-Hue augmentation (fraction)
        hsv_s=0.7,  # (float) image HSV-Saturation augmentation (fraction)
        hsv_v=0.4,  # (float) image HSV-Value augmentation (fraction)
        degrees=0.0,  # (float) image rotation (+/- deg)
        translate=0.1,  # (float) image translation (+/- fraction)
        scale=0.5,  # (float) image scale (+/- gain)
        shear=0.0,  # (float) image shear (+/- deg)
        perspective=0.0,  # (float) image perspective (+/- fraction), range 0-0.001
        flipud=0.0,  # (float) image flip up-down (probability)
        fliplr=0.5,  # (float) image flip left-right (probability)
        mosaic=1.0,  # (float) image mosaic (probability)
        mixup=0.0,  # (float) image mixup (probability)
        copy_paste=0.0,  # (float) segment copy-paste (probability)
    )
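
        If a run is interrupted, it can be resumed from its last checkpoint. The sketch below assumes the project/name used above ('runs/train/exp'); point it at your own experiment directory.

from ultralytics import YOLO

if __name__ == '__main__':
    # load the last checkpoint of the interrupted run (path is an example)
    model = YOLO(r'runs/train/exp/weights/last.pt')
    # resume=True continues training with the run's original settings
    model.train(resume=True)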


        The 验证.py code is listed below. Again, it only includes the parameters the author commonly uses; add more following the same pattern if needed. For validation, change the path to the best.pt weights under your own exp directory and then run it.

from ultralytics import YOLO

if __name__ == '__main__':
    # Load a model
    model = YOLO(r'F:\yolov8\yolov8\ultralytics-main\runs\train\exp\weights\best.pt')  # load the trained weights
    # Validate the model
    model.val(
        val=True,  # (bool) validate/test during training
        data=r'ultralytics/cfg/datasets/haolaji.yaml',
        split='test',  # (str) dataset split to use for validation, i.e. 'val', 'test' or 'train'
        batch=1,  # (int) number of images per batch (-1 for AutoBatch)
        imgsz=640,  # (int) size of input images as integer or w,h
        device='',  # (int | str | list, optional) device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
        workers=1,  # (int) number of worker threads for data loading (per RANK if DDP)
        save_json=False,  # (bool) save results to JSON file
        save_hybrid=False,  # (bool) save hybrid version of labels (labels + additional predictions)
        project='runs/val',  # (str, optional) project name
        name='exp',  # (str, optional) experiment name, results saved to 'project/name' directory
        max_det=300,  # (int) maximum number of detections per image
    )
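
        model.val() also returns a metrics object, so the results can be read in code as well as from the printed table. A minimal sketch using the ultralytics detection metrics attributes (the weight and dataset paths are examples):

from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO(r'runs/train/exp/weights/best.pt')  # example path to trained weights
    metrics = model.val(data=r'ultralytics/cfg/datasets/coco.yaml', imgsz=640)
    print(metrics.box.map)    # mAP50-95
    print(metrics.box.map50)  # mAP50
    print(metrics.box.map75)  # mAP75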

        The 模型计算量.py code is listed below. To check the model's parameter count and computational cost, fill in your own yaml file and run it.

from ultralytics import YOLO

if __name__ == '__main__':
    # Load a model
    model = YOLO(r'ultralytics/cfg/models/v8/yolov8.yaml')  # build a new model from YAML
    model.info()
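
        model.info() prints the number of layers, parameters, and GFLOPs. The same call also works on a trained .pt checkpoint, and detailed=True prints a per-layer breakdown; the path below is an example:

from ultralytics import YOLO

if __name__ == '__main__':
    # inspect a trained checkpoint instead of a bare yaml (example path)
    model = YOLO(r'runs/train/exp/weights/best.pt')
    model.info(detailed=True)  # per-layer parameter breakdown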

