Training YOLOv8-Pose on Your Own Dataset
Copyright notice: This is a reposted article under the CC 4.0 BY-SA license. If there is any infringement, please contact us for removal.
Original link: https://blog.csdn.net/qq_54134410/article/details/134875161
0. Introduction
This article walks through training YOLOv8-Pose on a custom dataset that combines the COCO dataset with a self-annotated human pose dataset.
1. Environment Setup
You can refer to this blog post: 深度学习环境搭建-CSDN博客 (Deep Learning Environment Setup).
Environment used in this article:
- Windows 10
- Python: 3.10
- CUDA: 11.6
- PyTorch: 1.12.0
- torchvision: 0.13.0
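Before moving on, it is worth confirming that the install actually works; a minimal sanity-check sketch (expected versions match the list above):

import torch
import torchvision

print(torch.__version__)          # expect 1.12.0
print(torchvision.__version__)    # expect 0.13.0
print(torch.cuda.is_available())  # True if CUDA 11.6 is set up correctly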
2. Dataset Preparation
2.1 Create the dataset folder structure
Posedata
├── images
│   ├── train
│   │   └── 001.jpg
│   └── val
│       └── 002.jpg
└── labels
    ├── train
    │   └── 001.txt
    └── val
        └── 002.txt
All of my data lives in the Posedata folder (the name is up to you).
The directory structure is shown above: images holds the training and validation images, and labels holds the corresponding training and validation txt files.
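If you prefer to create this layout from a script rather than by hand, a minimal sketch (the root name Posedata is just the example from above):

import os

root = 'Posedata'  # example name; point this at your own dataset root
for sub in ('images/train', 'images/val', 'labels/train', 'labels/val'):
    os.makedirs(os.path.join(root, sub), exist_ok=True)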
2.2 Prepare the txt files for YOLOv8-Pose training
2.2.1 Convert COCO json annotations to YOLOv8-Pose txt format
Download the COCO annotation json files from the official website.
For the detailed steps, see my blog post 将CoCo数据集Json格式转成训练Yolov8-Pose姿态的txt格式-CSDN博客 (Converting COCO json annotations to YOLOv8-Pose training txt format).
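The linked post covers the full steps; the core idea is that each COCO annotation's pixel bbox becomes a normalized center/width/height box, with the 17 normalized keypoint triplets appended after it. A minimal sketch of that conversion (the function and file handling here are my own illustration, not from the linked post):

import json
import os
from collections import defaultdict

def coco_to_yolo_pose(json_path, out_dir):
    # Load a COCO person_keypoints json
    with open(json_path, encoding='utf-8') as f:
        data = json.load(f)
    images = {im['id']: im for im in data['images']}
    anns_by_img = defaultdict(list)
    for ann in data['annotations']:
        if not ann.get('iscrowd', 0) and ann.get('num_keypoints', 0) > 0:
            anns_by_img[ann['image_id']].append(ann)

    os.makedirs(out_dir, exist_ok=True)
    for img_id, anns in anns_by_img.items():
        im = images[img_id]
        w, h = im['width'], im['height']
        lines = []
        for ann in anns:
            x, y, bw, bh = ann['bbox']  # COCO bbox: top-left x, y, width, height in pixels
            vals = [(x + bw / 2) / w, (y + bh / 2) / h, bw / w, bh / h]
            kpts = ann['keypoints']  # flat list [x1, y1, v1, x2, y2, v2, ...]
            for i in range(0, len(kpts), 3):
                vals += [kpts[i] / w, kpts[i + 1] / h, kpts[i + 2]]
            lines.append('0 ' + ' '.join(f'{v:.6f}' for v in vals))  # class 0: person
        name = os.path.splitext(im['file_name'])[0] + '.txt'
        with open(os.path.join(out_dir, name), 'w') as f:
            f.write('\n'.join(lines) + '\n')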
2.2.2 Convert your own annotations to YOLOv8-Pose txt format
For the detailed steps, see my blog post 将labelme标注的人体姿态Json文件转成训练Yolov8-Pose的txt格式-CSDN博客 (Converting labelme human-pose json files to YOLOv8-Pose training txt format).
Merge the data converted from COCO with your own converted dataset and you have data ready for training: train holds the training set, val holds the validation set.
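For reference, each line of a YOLOv8-Pose label txt describes one person: the class id, the normalized bbox (center x, center y, width, height), then one (x, y, visibility) triplet per keypoint, so 5 + 17×3 = 56 values for kpt_shape [17, 3], with all coordinates normalized to [0, 1]:

<class> <cx> <cy> <w> <h> <x1> <y1> <v1> <x2> <y2> <v2> ... <x17> <y17> <v17>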
3. Create the Configuration Files
3.1 Set up myposedata.yaml
Modify the paths below to match where your dataset is stored:
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: G:\Yolov8\ultralytics-main\datasets\myposedata\Posedata  # dataset root dir
train: images/train  # train images (relative to 'path')
val: images/val  # val images (relative to 'path')
test:  # test images (optional)

# Keypoints
kpt_shape: [17, 3]  # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]

# Classes
names:
  0: person
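Before training, it is worth checking that the yaml's paths resolve and that every image has a matching label file. A quick check, assuming the directory layout from section 2.1:

import os

root = r'G:\Yolov8\ultralytics-main\datasets\myposedata\Posedata'
for split in ('train', 'val'):
    for name in os.listdir(os.path.join(root, 'images', split)):
        label = os.path.splitext(name)[0] + '.txt'
        if not os.path.exists(os.path.join(root, 'labels', split, label)):
            print(f'missing label for {split}/{name}')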
3.2 Set up yolov8s-pose.yaml
Pick the scale that matches the weights you plan to use; I train with yolov8s-pose.pt here.
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8-pose keypoints/pose estimation model. For Usage examples see https://docs.ultralytics.com/tasks/pose

# Parameters
nc: 1  # number of classes
kpt_shape: [17, 3]  # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
scales: # model compound scaling constants, i.e. 'model=yolov8n-pose.yaml' will call yolov8-pose.yaml with scale 'n'
  # [depth, width, max_channels]
  s: [0.33, 0.50, 1024]

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 18 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 21 (P5/32-large)

  - [[15, 18, 21], 1, Pose, [nc, kpt_shape]]  # Pose(P3, P4, P5)
4. Training
With the steps above done, training can begin. Set the number of training epochs; I set epochs=100 here.
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('yolov8s-pose.yaml')  # build a new model from the model YAML

    # Train the model
    model.train(data='myposedata.yaml')
You can also skip the model yaml and load the pretrained .pt weights directly:
from ultralytics import YOLO

if __name__ == '__main__':
    modelpath = r'G:\Yolov8\yolov8-pose-pt\yolov8s-pose.pt'
    model = YOLO(modelpath)  # load a pretrained model (recommended for training)

    # Train the model
    yamlpath = r'G:\Yolov8\ultralytics-main\yolov8-pose\myposedata.yaml'
    model.train(epochs=100, data=yamlpath)
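model.train() also accepts other common arguments if you want more control; a sketch with typical values (these settings are examples, not from the original run):

model.train(
    data=yamlpath,
    epochs=100,  # number of training epochs
    imgsz=640,   # training image size
    batch=16,    # batch size
    device=0,    # GPU id, or 'cpu'
)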
For this run I only wanted to confirm that training starts, so I used a small amount of data and set epochs=10.
During training the run artifacts are saved, and at the end you get two models: best.pt and last.pt.
5. Validate the Model
Once training finishes, run the model on some validation data to check its pose estimation results.
from ultralytics import YOLO
import glob
import os

# Load a model
model = YOLO(r'G:\Yolov8\yolov8-pose-pt\best.pt')  # load the trained model

# Predict with the model
imgpath = r'G:\Yolov8\ultralytics-main\yolov8-pose\testimgs'
imgs = glob.glob(os.path.join(imgpath, '*.jpg'))
for img in imgs:
    model.predict(img, save=True)
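Besides saving the annotated images, you can also read the predicted keypoints from the returned Results objects; a minimal sketch using the same imgs list:

for img in imgs:
    results = model.predict(img)
    for r in results:
        if r.keypoints is not None:
            print(r.keypoints.xy)    # (num_persons, 17, 2) pixel coordinates
            print(r.keypoints.conf)  # per-keypoint confidence scores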
6. Conclusion
That completes the whole YOLOv8-Pose training and prediction workflow. The same process also works on Linux. Be careful during data preparation to make sure the final data is accurate, and it is best to train on a GPU.
See you in the comments if you have questions!