Setting Up YOLOv8 + PyTorch on Windows 10 (2)

1. Environment Setup

1.1 Creating the Python environment

conda create -n yolov8 python=3.8   # create the environment
conda env list                      # list all environments and their paths

1.2 Downloading the YOLOv8 source code

1. Download from the link below; either click to download the ZIP or clone it with git directly.

https://github.com/ultralytics/ultralytics/

2. After extracting the ZIP, open the folder in PyCharm, as shown below.

3. Open the Python interpreter settings and select the environment.

4. Note: PyCharm 2024 failed to recognize the python.exe inside the conda environment, so I switched to PyCharm 2022.

After switching PyCharm versions, add the environment.

If it looks like the following, the setup succeeded:

2. Downloading PyTorch

After several days of trying different mirrors, every install kept pulling the CPU-only build. I finally found GPU wheels on the Aliyun mirror and settled on CUDA 11.8:

https://mirrors.aliyun.com/pytorch-wheels/cu118/

From that link, download the matching torch, torchvision, and torchaudio .whl files into a local directory. Then, with the conda environment you intend to use activated, cd into that directory and run:

pip install "torch-xxxxx.whl"
pip install "torchvision-xxxxx.whl"
pip install "torchaudio-xxxxx.whl"
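The wheel filenames encode the Python version, CUDA build, and platform tags (cp38, cu118, win_amd64), which is how to tell a GPU wheel from a CPU-only one. A minimal sketch of checking a downloaded filename against your environment (the helper name and version numbers are illustrative, not taken from the mirror listing):

```python
def wheel_matches(filename, py_tag='cp38', cuda_tag='cu118', platform='win_amd64'):
    """Check that a PyTorch wheel filename carries the expected tags.
    Wheel names look like: torch-2.0.1+cu118-cp38-cp38-win_amd64.whl"""
    name = filename.lower()
    return (name.endswith('.whl')
            and py_tag in name
            and cuda_tag in name
            and platform in name)

print(wheel_matches('torch-2.0.1+cu118-cp38-cp38-win_amd64.whl'))  # True
print(wheel_matches('torch-2.0.1-cp38-cp38-win_amd64.whl'))        # False: CPU-only build, no cu118 tag
```

If a wheel has no `+cuXXX` tag at all, it is the CPU build, which is exactly the trap described above.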

3. Training

One more package is required; install it with:

pip install ultralytics

Prepare the coco128 dataset along with yolov8n.pt or yolov8s.pt, as shown below: create a new dataset directory and set the path field in coco128.yaml to the dataset location. Put the weights in a new weight directory.

Then start training with the command below. Here data is the dataset config file, model is the weight file to start from, epochs is the number of passes over the dataset, batch is the number of images fed in per step (larger batches train faster but demand more GPU memory), and device selects the training device (0 is the first GPU).

yolo train data=ultralytics/cfg/datasets/coco128.yaml model=weight/yolov8s.pt epochs=100 batch=4 device=0

After training finishes, best.pt can be found at the path shown below.

4. Training on Your Own Dataset

1. Place your images in the same location as the coco128 dataset and annotate them (I used the VOC format). In the figure below, the ZJHH directory contains Annotations and images; Annotations holds the annotation XML files and images holds the original images.

2. Create a split_train_val.py file in the project root with the following contents. Running it produces trainval.txt, train.txt, val.txt, and test.txt:

import os
import random

trainval_percent = 0.1  # fraction of images reserved for the trainval pool (split into test + val below)
train_percent = 0.9     # share of that pool written to test.txt; the rest goes to val.txt
xmlfilepath = 'dataset/DataSet/ZJHH/Annotations'
txtsavepath = 'dataset/DataSet/ZJHH/imgSets'
os.makedirs(txtsavepath, exist_ok=True)

total_xml = os.listdir(xmlfilepath)
num = len(total_xml)
indices = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(indices, tv)
train = random.sample(trainval, tr)

ftrainval = open('dataset/DataSet/ZJHH/imgSets/trainval.txt', 'w')
ftest = open('dataset/DataSet/ZJHH/imgSets/test.txt', 'w')
ftrain = open('dataset/DataSet/ZJHH/imgSets/train.txt', 'w')
fval = open('dataset/DataSet/ZJHH/imgSets/val.txt', 'w')

for i in indices:
    name = total_xml[i][:-4] + '\n'  # file name without the .xml extension
    if i in trainval:
        ftrainval.write(name)
        if i in train:
            ftest.write(name)
        else:
            fval.write(name)
    else:
        ftrain.write(name)  # everything outside the pool (~90%) becomes the training set

ftrainval.close()
ftrain.close()
fval.close()
ftest.close()

Running the script yields the files shown below.
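To make the proportions concrete, here is a self-contained sketch of the same index-sampling logic as a function (split_indices is a hypothetical helper; with the defaults, roughly 90% of images land in train, and the reserved 10% is split between test and val):

```python
import random


def split_indices(num, trainval_percent=0.1, train_percent=0.9, seed=0):
    """Sample image indices the same way split_train_val.py does:
    ~(1 - trainval_percent) of indices go to train; the reserved pool
    is split between test and val by train_percent."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    indices = list(range(num))
    tv = int(num * trainval_percent)
    tr = int(tv * train_percent)
    trainval = rng.sample(indices, tv)
    train = rng.sample(trainval, tr)
    splits = {'train': [], 'val': [], 'test': []}
    for i in indices:
        if i in trainval:
            splits['test' if i in train else 'val'].append(i)
        else:
            splits['train'].append(i)
    return splits


splits = split_indices(100)
print({k: len(v) for k, v in splits.items()})  # {'train': 90, 'val': 1, 'test': 9}
```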

3. Create a voc_label.py file in the project root with the following contents. When run, it writes one label text file per image into the labels directory.

import xml.etree.ElementTree as ET
import os

sets = ['train', 'val', 'test']
classes = ['批号']  # class names; adjust to your dataset
abs_path = os.getcwd()
print(abs_path)


def convert(size, box):
    # (xmin, xmax, ymin, ymax) in pixels -> normalized (x_center, y_center, w, h)
    dw = 1. / (size[0])
    dh = 1. / (size[1])
    x = (box[0] + box[1]) / 2.0 - 1  # the "-1" is a legacy offset from the original darknet script
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return x, y, w, h


def convert_annotation(image_id):
    in_file = open('dataset/DataSet/ZJHH/Annotations/%s.xml' % (image_id), encoding='UTF-8')
    out_file = open('dataset/DataSet/ZJHH/labels/%s.txt' % (image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
             float(xmlbox.find('ymax').text))
        b1, b2, b3, b4 = b
        # clamp boxes that extend past the image border
        if b2 > w:
            b2 = w
        if b4 > h:
            b4 = h
        b = (b1, b2, b3, b4)
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')
    in_file.close()
    out_file.close()


os.makedirs('dataset/DataSet/ZJHH/labels', exist_ok=True)
for image_set in sets:
    image_ids = open('dataset/DataSet/ZJHH/imgSets/%s.txt' % (image_set)).read().strip().split()
    list_file = open('dataset/DataSet/ZJHH/%s.txt' % (image_set), 'w')
    for image_id in image_ids:
        # join with the working directory so the list files hold absolute image paths
        list_file.write(os.path.join(abs_path, 'dataset/DataSet/ZJHH/images/%s.jpg' % image_id) + '\n')
        convert_annotation(image_id)
    list_file.close()

After running it you get the files below; labels now holds the annotations converted from VOC format to YOLO format.
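To see what a converted label line contains, here is the convert function from the script above applied to a made-up box; the output is the normalized (x_center, y_center, width, height) that YOLO expects:

```python
def convert(size, box):
    """(xmin, xmax, ymin, ymax) in pixels -> normalized (x_center, y_center, w, h)."""
    dw = 1.0 / size[0]
    dh = 1.0 / size[1]
    x = (box[0] + box[1]) / 2.0 - 1  # "-1" is the legacy offset kept from the script above
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    return x * dw, y * dh, w * dw, h * dh


# a 200x100 image with a box from (20, 10) to (60, 50)
print(convert((200, 100), (20, 60, 10, 50)))  # approximately (0.195, 0.29, 0.2, 0.4)
```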

4. Create a file named "zjhhBN.yaml" under ultralytics/cfg/datasets with the following contents:

train: dataset/DataSet/ZJHH/train.txt
val: dataset/DataSet/ZJHH/val.txt
test: dataset/DataSet/ZJHH/test.txt

nc: 1  # number of classes

names: ["批号"]  # must match the class list (and order) in voc_label.py

# adjust the train/val/test paths to your own setup

5. Alternatively, you can convert the dataset into the same layout coco128 uses: create a YoLolabels folder next to Annotations, and a vocToYoLO.py file in the project root with the following contents:

import xml.etree.ElementTree as ET
import os
import random
from shutil import copyfile

classes = ['批号']  # class names; adjust to your dataset
# classes = ["ball"]

TRAIN_RATIO = 80  # percentage of images assigned to the training set


def clear_hidden_files(path):
    # recursively delete macOS "._" metadata files
    for i in os.listdir(path):
        abspath = os.path.join(os.path.abspath(path), i)
        if os.path.isfile(abspath):
            if i.startswith("._"):
                os.remove(abspath)
        else:
            clear_hidden_files(abspath)


def convert(size, box):
    # (xmin, xmax, ymin, ymax) in pixels -> normalized (x_center, y_center, w, h)
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x * dw, y * dh, w * dw, h * dh)


def convert_annotation(image_id):
    in_file = open('dataset/DataSet/ZJHH/Annotations/%s.xml' % image_id, encoding='utf-8')  # XML path
    out_file = open('dataset/DataSet/ZJHH/YoLolabels/%s.txt' % image_id, 'w', encoding='utf-8')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)

    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text),
             float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')
    in_file.close()
    out_file.close()


wd = os.getcwd()
data_base_dir = os.path.join(wd, "dataset/DataSet/")           # dataset root
work_space_dir = os.path.join(data_base_dir, "ZJHH/")          # parent of the XML files and images
annotation_dir = os.path.join(work_space_dir, "Annotations/")  # XML files
image_dir = os.path.join(work_space_dir, "images/")            # source images
yolo_labels_dir = os.path.join(work_space_dir, "YoLolabels/")  # converted labels
yolov5_images_dir = os.path.join(data_base_dir, "images/")
yolov5_labels_dir = os.path.join(data_base_dir, "labels/")
yolov5_images_train_dir = os.path.join(yolov5_images_dir, "train/")
yolov5_images_test_dir = os.path.join(yolov5_images_dir, "val/")
yolov5_labels_train_dir = os.path.join(yolov5_labels_dir, "train/")
yolov5_labels_test_dir = os.path.join(yolov5_labels_dir, "val/")

for d in (data_base_dir, work_space_dir, annotation_dir, image_dir, yolo_labels_dir,
          yolov5_images_dir, yolov5_labels_dir, yolov5_images_train_dir,
          yolov5_images_test_dir, yolov5_labels_train_dir, yolov5_labels_test_dir):
    os.makedirs(d, exist_ok=True)
    clear_hidden_files(d)

train_file = open(os.path.join(wd, "yolov5_train.txt"), 'w')
test_file = open(os.path.join(wd, "yolov5_val.txt"), 'w')

for img_name in os.listdir(image_dir):
    image_path = os.path.join(image_dir, img_name)
    if not os.path.isfile(image_path):
        continue
    name_without_ext = os.path.splitext(img_name)[0]
    annotation_path = os.path.join(annotation_dir, name_without_ext + '.xml')
    label_name = name_without_ext + '.txt'
    label_path = os.path.join(yolo_labels_dir, label_name)
    if not os.path.exists(annotation_path):
        continue
    # assign each image to train or val at random, weighted by TRAIN_RATIO
    if random.randint(1, 100) < TRAIN_RATIO:
        train_file.write(image_path + '\n')
        convert_annotation(name_without_ext)  # convert the label
        copyfile(image_path, os.path.join(yolov5_images_train_dir, img_name))
        copyfile(label_path, os.path.join(yolov5_labels_train_dir, label_name))
    else:
        test_file.write(image_path + '\n')
        convert_annotation(name_without_ext)  # convert the label
        copyfile(image_path, os.path.join(yolov5_images_test_dir, img_name))
        copyfile(label_path, os.path.join(yolov5_labels_test_dir, label_name))

train_file.close()
test_file.close()

Note: if the class names contain Chinese characters, the files in this script must be opened with encoding='utf-8'; otherwise the converted label files come out empty.

    in_file = open('dataset/DataSet/ZJHH/Annotations/%s.xml' % image_id,encoding='utf-8') # xml路径
    out_file = open('dataset/DataSet/ZJHH/YoLolabels/%s.txt' % image_id, 'w',encoding='utf-8')

6. Running the script generates the following files and folders:

DataSet/images/train and DataSet/images/val hold the images randomly split into training and validation sets, and labels holds the converted annotations, shown below. If a label file is empty, the conversion failed; check for the Chinese-encoding issue noted above.

Then start training with the following command:

yolo train data=ultralytics/cfg/datasets/zjhhBN.yaml model=weight/yolov8s.pt epochs=100 batch=4 device=0

Because the class names contain Chinese, a TTF font file is downloaded automatically. The download can be slow, so you can also copy the URL, download the file yourself, and place it in the font path (the spot circled in red in the figure below).

Start training.

After training completes, the trained model can be found under the runs directory.
