YOLOv8-seg: Training an Image Segmentation Model on a Custom Dataset

Contents

I. Creating the Segmentation Dataset
1 Annotation
2 Converting JSON files to TXT files
3 Dataset splitting
II. Training the Image Segmentation Model
1 Environment setup
2 Training the network
3 Prediction
III. Interpreting the Training Results

I. Creating the Segmentation Dataset

1 Annotation

Use the labelme tool to annotate the images manually, which produces a JSON annotation file for each image.
*Note the difference from labelimg: labelimg only lets you mark a four-point bounding box per object, so it cannot be used for image segmentation.
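
If labelme is not installed yet, it can typically be installed and launched from the command line (standard labelme usage, not specific to this tutorial):

pip install labelme
labelme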


2 Converting JSON files to TXT files

YOLOv8-seg expects annotations in TXT format. The requirements are as follows:

Ultralytics YOLO format
The dataset label format used for training YOLO segmentation models is as follows:

  1. One text file per image: Each image in the dataset has a corresponding text file with the same name as the image file and the ".txt" extension.
  2. One row per object: Each row in the text file corresponds to one object instance in the image.
  3. Object information per row: Each row contains the following information about the object instance:
  • Object class index: An integer representing the class of the object (e.g., 0 for person, 1 for car, etc.).
  • Object bounding coordinates: The bounding coordinates around the mask area, normalized to be between 0 and 1.
    The format for a single row in the segmentation dataset file is as follows:
 <class-index> <x1> <y1> <x2> <y2> ... <xn> <yn>
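
For example, a purely hypothetical triangle-shaped mask belonging to class 0, with vertices at 25%, 75%, and 50% of the image width and height, would occupy one line:

0 0.250000 0.250000 0.750000 0.250000 0.500000 0.750000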

The conversion script:

import json
import os
from tqdm import tqdm


# Create the directory for saving the TXT files
def make_dir(path):
    if not os.path.exists(path):
        os.makedirs(path)


# Convert polygon coordinates to normalized YOLO-format polygon points
def convert_polygon_to_yolo(size, points):
    dw = 1. / size[0]
    dh = 1. / size[1]
    points_nor_list = []

    for point in points:
        points_nor_list.append(point[0] * dw)
        points_nor_list.append(point[1] * dh)

    return points_nor_list


def convert_label_json(json_dir, save_dir, classes):
    make_dir(save_dir)
    json_paths = [f for f in os.listdir(json_dir) if f.endswith('.json')]
    classes = classes.split(',')

    for json_path in tqdm(json_paths):
        path = os.path.join(json_dir, json_path)
        with open(path, 'r', encoding='utf-8') as load_f:
            json_dict = json.load(load_f)

        h, w = json_dict.get('imageHeight', None), json_dict.get('imageWidth', None)
        if not h or not w:
            continue  # skip this file if image size information is missing

        txt_path = os.path.join(save_dir, json_path.replace('.json', '.txt'))
        with open(txt_path, 'w', encoding='utf-8') as txt_file:
            for shape_dict in json_dict['shapes']:
                label = shape_dict['label']
                if label in classes:
                    label_index = classes.index(label)
                    points = shape_dict['points']

                    yolo_points = convert_polygon_to_yolo((w, h), points)
                    label_str = f"{label_index} " + " ".join([f"{a:.6f}" for a in yolo_points]) + '\n'
                    txt_file.write(label_str)


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description='JSON convert to YOLO TXT format')
    parser.add_argument('--json-dir', type=str, default='data/Annotations', help='JSON path directory')
    parser.add_argument('--save-dir', type=str, default='data/labels', help='TXT save directory')
    parser.add_argument('--classes', type=str, default='ADP', help='Target classes separated by comma')
    args = parser.parse_args()

    convert_label_json(args.json_dir, args.save_dir, args.classes)
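
Assuming the script above is saved as json2txt.py (the filename here is arbitrary), it can be run from the command line like this, matching the argparse defaults in the code:

python json2txt.py --json-dir data/Annotations --save-dir data/labels --classes ADP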


3 Dataset splitting

Split the dataset into training, validation, and test sets.
The split script:

import shutil
import random
import os
import argparse

# Create the directory if it does not exist
def mkdir(path):
    if not os.path.exists(path):
        os.makedirs(path)

def split_dataset(image_dir, txt_dir, save_dir):
    # Create the output directory tree
    mkdir(save_dir)
    images_dir = os.path.join(save_dir, 'images')
    labels_dir = os.path.join(save_dir, 'labels')

    img_train_path = os.path.join(images_dir, 'train')
    img_test_path = os.path.join(images_dir, 'test')
    img_val_path = os.path.join(images_dir, 'val')

    label_train_path = os.path.join(labels_dir, 'train')
    label_test_path = os.path.join(labels_dir, 'test')
    label_val_path = os.path.join(labels_dir, 'val')

    mkdir(images_dir)
    mkdir(labels_dir)
    mkdir(img_train_path)
    mkdir(img_test_path)
    mkdir(img_val_path)
    mkdir(label_train_path)
    mkdir(label_test_path)
    mkdir(label_val_path)

    # Split ratios: 80% train, 10% val, 10% test
    train_percent = 0.80
    val_percent = 0.1
    test_percent = 0.1

    total_txt = [f for f in os.listdir(txt_dir) if f.endswith('.txt')]
    num_txt = len(total_txt)
    list_all_txt = range(num_txt)  # indices 0 .. num_txt - 1

    num_train = int(num_txt * train_percent)
    num_val = int(num_txt * val_percent)
    num_test = num_txt - num_train - num_val

    train = random.sample(list_all_txt, num_train)
    # remove the train indices from the full index list
    val_test = [i for i in list_all_txt if i not in train]
    # take num_val indices from val_test; the remaining indices form the test set
    val = random.sample(val_test, num_val)

    print("Train set size: {}, Val set size: {}, Test set size: {}".format(len(train), len(val), len(val_test) - len(val)))
    for i in list_all_txt:
        name = total_txt[i][:-4]

        srcImage = os.path.join(image_dir, name + '.jpg')
        srcLabel = os.path.join(txt_dir, name + '.txt')

        if i in train:
            dst_train_Image = os.path.join(img_train_path, name + '.jpg')  # keep the original .jpg extension
            dst_train_Label = os.path.join(label_train_path, name + '.txt')
            shutil.copyfile(srcImage, dst_train_Image)
            shutil.copyfile(srcLabel, dst_train_Label)
        elif i in val:
            dst_val_Image = os.path.join(img_val_path, name + '.jpg')
            dst_val_Label = os.path.join(label_val_path, name + '.txt')
            shutil.copyfile(srcImage, dst_val_Image)
            shutil.copyfile(srcLabel, dst_val_Label)
        else:
            dst_test_Image = os.path.join(img_test_path, name + '.jpg')
            dst_test_Label = os.path.join(label_test_path, name + '.txt')
            shutil.copyfile(srcImage, dst_test_Image)
            shutil.copyfile(srcLabel, dst_test_Label)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Split dataset into train, val, test sets')
    parser.add_argument('--image-dir', type=str, default='data/images', help='Image directory')
    parser.add_argument('--txt-dir', type=str, default='data/labels', help='Label TXT files directory')
    parser.add_argument('--save-dir', type=str, default='data/split', help='Directory to save split datasets')
    args = parser.parse_args()
    split_dataset(args.image_dir, args.txt_dir, args.save_dir)
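
Assuming this script is saved as split_dataset.py (again, the filename is arbitrary) and the images are .jpg files in data/images, a typical invocation matching the defaults above is:

python split_dataset.py --image-dir data/images --txt-dir data/labels --save-dir data/split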


II. Training the Image Segmentation Model

1 Environment setup (Windows + Anaconda environment installation + PyCharm deployment)

1.1 Install Anaconda and PyCharm

1.2 Create the yolov8 virtual environment

1. Open Anaconda Prompt on Windows.
2. Create a virtual environment named yolov8 (the ultralytics package requires Python 3.8 or later):
conda create -n yolov8 python=3.8 anaconda

3. List the environments that exist in conda:

conda env list

4. Activate the new yolov8 environment:

conda activate yolov8

1.3 Install PyTorch

1. Check whether PyTorch is already installed (skip this check for a freshly created environment).
In the yolov8 environment, enter:

python
import torch
import torchvision
torch.cuda.is_available()

If torch.cuda.is_available() prints True, a CUDA-enabled build of PyTorch is installed and a GPU is visible.

2. Install PyTorch.
PyTorch official site: https://pytorch.org/get-started/locally/

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

If the output ends with "Successfully installed torch…", the installation succeeded. (The cu116 wheel index is for CUDA 11.6; choose the command matching your CUDA version from the PyTorch site.)

1.4 Download the YOLOv8 source code

GitHub download link: https://github.com/ultralytics/ultralytics

1.5 Configure PyCharm

1. Set the PyCharm interpreter
Open the ultralytics project in PyCharm and, in Settings, choose the Python Interpreter from the conda environment you just created. If the PyCharm terminal prompt shows the (yolov8) prefix, the interpreter is set correctly.

2. Install the dependencies
Install the dependencies in the PyCharm terminal:

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

If you see "WARNING: Ignore distutils configs in setup.cfg due to encoding errors.",
the cause is that the system encoding is not fully set to UTF-8. A fix is described here:

https://blog.csdn.net/weixin_37989267/article/details/128326603

python setup.py install

If the final output says "Finished processing", the installation succeeded.

1.6 Test

yolo task=segment mode=predict model=weight/yolov8n-seg.pt source=ultralytics/assets/bus.jpg save=true

The results are saved under runs\segment\predict in the project directory.


2 Training the network

Create a train.py script (adjust the parameters to your own needs):

from ultralytics import YOLO

if __name__ == '__main__':
    # Load the model
    model = YOLO("yolov8n-seg.pt")  # start from a pretrained model

    # Start training
    model.train(data="data/data.yaml", batch=16, epochs=100, imgsz=640, workers=2, device="0")
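
The train() call above points to data/data.yaml, which is not shown in this post. A minimal sketch of such a file, assuming the directory layout produced by the split script and the single ADP class used in the conversion step (paths may need to be absolute depending on your Ultralytics settings):

path: data/split
train: images/train
val: images/val
test: images/test
names:
  0: ADP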

3 Prediction

Create a predict.py script (adjust the parameters to your own needs):

from ultralytics import YOLO
from PIL import Image
import os

# Load the trained model
model = YOLO('E:/.../weights/best.pt')

# Image directory and directory for saving the results
image_dir = 'E:/../data/split/images/test'
save_dir = 'E:/../ultralytics-main/prediction_results'
os.makedirs(save_dir, exist_ok=True)

# Collect all image files in the directory
image_files = [f for f in os.listdir(image_dir) if f.endswith(('.jpg', '.jpeg', '.png'))]

# Run prediction on each image
for image_file in image_files:
    image_path = os.path.join(image_dir, image_file)
    results = model(image_path)

    # Save (and optionally display) the prediction results
    for r in results:
        im_array = r.plot()  # render the predictions as a BGR numpy array
        im = Image.fromarray(im_array[..., ::-1])  # convert to an RGB PIL image
        # im.show()  # display the image
        save_path = os.path.join(save_dir, image_file)  # output path
        im.save(save_path)  # save the image
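
If the raw masks are needed rather than only the rendered overlay, the Results object exposes them. A minimal, self-contained sketch (assuming the Ultralytics results API; the model path and test image are placeholders):

from ultralytics import YOLO

# Minimal sketch: extract raw mask data from a single prediction.
# r.masks is None when nothing was detected, so guard for that case.
model = YOLO('yolov8n-seg.pt')            # any segmentation weights
r = model('ultralytics/assets/bus.jpg')[0]
if r.masks is not None:
    polygons = r.masks.xy                 # list of (N, 2) polygon arrays in pixel coordinates
    mask_tensor = r.masks.data            # tensor of shape (num_objects, H, W) with binary masks
    print(len(polygons), tuple(mask_tensor.shape))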

III. Interpreting the Training Results

Related reading: how to analyze YOLOv8 training results and evaluate how well a YOLOv8 model has trained.
Related reading: a detailed overview of the full YOLOv8 instance segmentation workflow: environment, training, validation, and prediction.
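
Training writes its metrics (loss curves, precision, recall, and box/mask mAP) to results.csv and the plots under the run directory (runs/segment/train by default). To recompute the metrics on a chosen split after training, a minimal sketch using the Ultralytics validation API, assuming the default checkpoint path and the data/data.yaml sketched earlier:

from ultralytics import YOLO

# Minimal sketch: evaluate the best checkpoint on the test split
# and print the box and mask mAP50-95 values.
model = YOLO('runs/segment/train/weights/best.pt')
metrics = model.val(data='data/data.yaml', split='test')
print('box mAP50-95:', metrics.box.map)
print('mask mAP50-95:', metrics.seg.map)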
