Training a YOLOv8 Segmentation Model on a Custom Dataset: Notes
Chapter 1: Label Creation
I. Installing labelme
Installing labelme is simple; just run the following in a terminal:
pip install labelme
To launch labelme, run in a terminal:
labelme
Next comes data annotation. For instance-segmentation data, just use the "Create Polygons" tool.
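For reference, a labelme JSON file produced this way has roughly the following structure. This is a minimal sketch with made-up values; only the fields used by the conversion script below are shown, and other fields labelme writes (version, imagePath, imageData, etc.) are omitted:

```python
# Sketch of the labelme JSON fields relied on during conversion.
# All values here are invented example data, not a real annotation.
example_labelme_json = {
    "imageHeight": 480,  # image height in pixels
    "imageWidth": 640,   # image width in pixels
    "shapes": [
        {
            "label": "box",            # class name typed into labelme
            "points": [[100.0, 50.0],  # polygon vertices in pixel
                       [200.0, 50.0],  # coordinates, in drawing order
                       [200.0, 150.0],
                       [100.0, 150.0]],
            "shape_type": "polygon",
        }
    ],
}

# Quick sanity check: every vertex must lie inside the image bounds.
for shape in example_labelme_json["shapes"]:
    for x, y in shape["points"]:
        assert 0 <= x <= example_labelme_json["imageWidth"]
        assert 0 <= y <= example_labelme_json["imageHeight"]
```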
II. Converting JSON to TXT
labelme saves annotations in JSON format, but YOLOv8 segmentation still uses the TXT format, so a conversion is needed. The conversion code is as follows:
import json
import os

from tqdm import tqdm


def convert_label(json_dir, save_dir, classes):
    json_paths = os.listdir(json_dir)
    classes = classes.split(',')
    for json_path in tqdm(json_paths):
        path = os.path.join(json_dir, json_path)
        with open(path, 'r', encoding='utf-8') as load_f:
            json_dict = json.load(load_f)
        h, w = json_dict['imageHeight'], json_dict['imageWidth']
        # save txt path (splitext avoids accidentally replacing 'json' inside the file name)
        txt_path = os.path.join(save_dir, os.path.splitext(json_path)[0] + '.txt')
        with open(txt_path, 'w') as txt_file:
            for shape_dict in json_dict['shapes']:
                label = shape_dict['label']
                label_index = classes.index(label)
                points = shape_dict['points']
                # normalize every (x, y) vertex by the image width / height
                points_nor_list = []
                for point in points:
                    points_nor_list.append(point[0] / w)
                    points_nor_list.append(point[1] / h)
                points_nor_str = ' '.join(map(str, points_nor_list))
                txt_file.write(str(label_index) + ' ' + points_nor_str + '\n')


if __name__ == "__main__":
    json_dir = 'D:/work/PythonCode/Yolo/yolov8/dataset/jsons'
    save_dir = 'D:/work/PythonCode/Yolo/yolov8/dataset/labels'
    classes = 'box,Refence'
    convert_label(json_dir, save_dir, classes)
Note: the json folder must not contain files of any other format. Change classes to your own class names; their order determines the class indices written to the txt files.
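The resulting txt format is one line per object: a class index followed by normalized x y pairs. The normalization step can be checked in isolation with a small self-contained snippet (the image size and polygon below are made-up example values, independent of the script above):

```python
# One polygon on a hypothetical 640x480 image (example values only)
w, h = 640, 480
points = [[100.0, 50.0], [200.0, 50.0], [200.0, 150.0], [100.0, 150.0]]
class_index = 0  # position of this label in the classes list

# Normalize exactly as the conversion script does: x / width, y / height
flat = []
for x, y in points:
    flat.append(x / w)
    flat.append(y / h)
line = str(class_index) + ' ' + ' '.join(map(str, flat))
print(line)  # class index followed by 4 normalized (x, y) pairs
```

Every coordinate in the output must fall in [0, 1]; values outside that range usually mean width and height were swapped.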
After the conversion, you can use the following code to check whether the result is correct.
import cv2
import numpy as np


def restore_masks_to_image(mask_data, image_path, output_path):
    # Read the image
    img = cv2.imread(image_path)
    # Draw the mask data back onto the image
    for mask in mask_data:
        values = list(map(float, mask.split()))
        class_id = int(values[0])
        mask_values = values[1:]
        # Reshape the flat coordinate list into an (N, 2) NumPy array
        mask_array = np.array(mask_values, dtype=np.float32).reshape((len(mask_values) // 2, 2))
        # Convert coordinates normalized to the image size back to pixel values
        mask_array[:, 0] *= img.shape[1]  # width
        mask_array[:, 1] *= img.shape[0]  # height
        # Convert the coordinates to integers
        mask_array = mask_array.astype(np.int32)
        # Draw the mask outline on the image
        cv2.polylines(img, [mask_array], isClosed=True, color=(0, 255, 0), thickness=2)
        # Draw every vertex on the image
        for point in mask_array:
            cv2.circle(img, tuple(point), 3, (255, 0, 0), -1)  # -1 fills the circle
    # Save the image with the masks and vertices drawn on it
    cv2.imwrite(output_path, img)


if __name__ == "__main__":
    with open('./coco8-seg/labels/train/000000000009.txt', 'r') as file:
        mask_data = [line.strip() for line in file.readlines()]
    # mask_data = [
    #     "22 0.00746875 0.0539294 0.117891 0.0921412 0.231297 0.110118 0.2895 0.0674118 0.331281 0.0472 0.3865 0.0696706 0.423813 0.0943765 0.446188 0.105624 0.467078 0.1528 0.517813 0.182024 0.577516 0.253929 0.658094 0.379765 0.690922 0.532588 0.687937 0.6 0.650625 0.555059 0.658094 0.644941 0.668547 0.755059 0.676 0.838212 0.658094 0.894376 0.613328 0.925835 0.589453 0.914612 0.590938 0.856188 0.552141 0.791012 0.523781 0.725835 0.528266 0.633718 0.498422 0.577529 0.444703 0.505624 0.407391 0.505624 0.395453 0.541576 0.417844 0.591012 0.450672 0.642706 0.456641 0.642706 0.461109 0.725835 0.458125 0.786518 0.450672 0.853929 0.444703 0.898871 0.401422 0.869671 0.411875 0.815741 0.423813 0.734824 0.425297 0.694376 0.361125 0.608988 0.316359 0.588753 0.280547 0.703365 0.271594 0.757294 0.261141 0.829224 0.268609 0.869671 0.277562 0.901129 0.250703 0.937082 0.222344 0.939318 0.231297 0.901129 0.222344 0.844941 0.238766 0.7236 0.246219 0.642706 0.271594 0.510118 0.182062 0.507859 0.0999844 0.525835 0.0208906 0.494376 0.0015 0.0516941"
    # ]
    image_path = "./coco8-seg/images/train/000000000009.jpg"
    output_path = "000000000034_out.jpg"
    restore_masks_to_image(mask_data, image_path, output_path)
Chapter 2: Segmentation Model Training
I. Splitting the Dataset
Create a dataset folder in the project directory, containing images and labels folders, each with train and val subfolders, and place the corresponding data into them.
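If the data has not been split yet, a split into that layout can be sketched as follows. This is only a sketch under my own assumptions: `split_dataset`, the 20% validation ratio, and the fixed seed are all my choices, and every image is assumed to have a matching .txt label with the same base name:

```python
import os
import random
import shutil


def split_dataset(image_dir, label_dir, out_dir, val_ratio=0.2, seed=0):
    """Copy image/label pairs into out_dir/images/{train,val} and out_dir/labels/{train,val}.

    Assumes each image in image_dir has a label '<same base name>.txt' in label_dir.
    """
    images = sorted(os.listdir(image_dir))
    random.Random(seed).shuffle(images)  # deterministic shuffle for reproducibility
    n_val = int(len(images) * val_ratio)
    splits = {"val": images[:n_val], "train": images[n_val:]}
    for split, files in splits.items():
        img_out = os.path.join(out_dir, "images", split)
        lbl_out = os.path.join(out_dir, "labels", split)
        os.makedirs(img_out, exist_ok=True)
        os.makedirs(lbl_out, exist_ok=True)
        for fname in files:
            base = os.path.splitext(fname)[0]
            shutil.copy(os.path.join(image_dir, fname), img_out)
            shutil.copy(os.path.join(label_dir, base + ".txt"), lbl_out)
    # Return how many pairs landed in each split
    return {split: len(files) for split, files in splits.items()}
```

With 100 annotated images and the default val_ratio=0.2, this puts 20 pairs into val and 80 into train.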
II. Model Training
1. Create the train.py source file
I created a code folder in the project directory and a new train.py file inside it. The code comes from:
https://docs.ultralytics.com/tasks/segment/#train
Source code:
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.yaml") # build a new model from YAML
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
model = YOLO("yolov8n-seg.yaml").load("yolov8n.pt") # build from YAML and transfer weights
# Train the model
results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
As you can see, two configuration files are needed here: yolov8n-seg.yaml and coco8-seg.yaml.
The original coco8-seg.yaml file is under ultralytics\ultralytics\cfg\datasets.
The original yolov8n-seg.yaml file is under ultralytics\ultralytics\cfg\models\v8.
2. Modify coco8-seg.yaml
Copy coco8-seg.yaml into the code folder, open it, and change the paths and classes to your own.
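After editing, the dataset yaml looks roughly like the sketch below. The paths and class names are the examples used in this article; the keys themselves follow the original coco8-seg.yaml shipped with ultralytics:

```yaml
# Dataset config sketch for YOLOv8 segmentation (based on coco8-seg.yaml)
path: D:/work/PythonCode/Yolo/yolov8/dataset  # dataset root directory
train: images/train  # train images, relative to path
val: images/val      # val images, relative to path

# Classes: the index order must match the order used during json-to-txt conversion
names:
  0: box
  1: Refence
```

The matching labels are found automatically by replacing "images" with "labels" in each image path, which is why the folder layout from Section I matters.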
3. Modify yolov8-seg.yaml
Note: here you normally only need to change the file name, from yolov8-seg.yaml to yolov8n-seg.yaml; the matching model scale is then selected automatically. You can also open the yaml file and delete the configuration options you do not want.
4. Modify train.py
After modifying the yolov8n-seg.yaml and coco8-seg.yaml configuration files, set their absolute paths in the code and train directly with these two files:
from ultralytics import YOLO

if __name__ == '__main__':
    # freeze_support()
    # Load a model
    # model = YOLO("yolov8n-seg.yaml")  # build a new model from YAML
    # model = YOLO("yolov8n-seg.pt")  # load a pretrained model (recommended for training)
    model = YOLO("D:/work/PythonCode/Yolo/yolov8/code/yolov8n-seg.yaml").load("yolov8n.pt")  # build from YAML and transfer weights

    # Train the model
    results = model.train(data="D:/work/PythonCode/Yolo/yolov8/code/coco8-seg.yaml", epochs=100, imgsz=640, workers=2, device="0")
After running, the training log appears in the console, and the trained weights are saved under runs/segment/train*/weights/.
III. Inference
from ultralytics import YOLO
# Load a model
#model = YOLO("yolov8n-seg.pt") # load an official model
model = YOLO("D:/work/PythonCode/Yolo/yolov8/code/runs/segment/train8/weights/best.pt") # load a custom model
# Predict with the model
#results = model("3.bmp") # predict on an image
model.predict(source="3.bmp", save=True, imgsz=640, conf=0.5)