Preface
YOLOv8 can perform human pose estimation out of the box; how can we transfer that capability to face keypoint detection? This post implements YOLO face landmark detection on top of the YOLOv8-pose model.
I. YOLOv8 features
The differences between YOLOv5 and YOLOv8 are considerable; the biggest one is YOLOv8's level of integration. It unifies detection, pose, and segmentation tasks in a single framework, and the Ultralytics codebase also brings the v5, v7, and v8 model families together.
II. Usage steps
1. Modify the dataset entry
The code is as follows (example):
# parent
# ├── ultralytics
# └── datasets
# └── coco8-pose ← downloads here (1 MB)
#########################################ori##########################################################################
# # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
# path: ../datasets/coco8-pose # dataset root dir
# train: images/train # train images (relative to 'path') 4 images
# val: images/val # val images (relative to 'path') 4 images
# test: # test images (optional)
#########################################wqt##########################################################################
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: /home/wqt/Datasets/300W # ../datasets/coco8-pose # dataset root dir
train: train2yolo.txt # images/train # train images (relative to 'path') 4 images
val: test2yolo.txt # images/val # val images (relative to 'path') 4 images
test: # test images (optional)
#########################################ori##########################################################################
# # Keypoints
# kpt_shape: [17, 3] # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
# flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]
#########################################wqt##########################################################################
# Keypoints
kpt_shape: [68, 3] # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
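Note that the commented-out flip_idx above covers the 17 COCO body keypoints; for horizontal-flip augmentation to stay correct with 68 face landmarks, a new left/right mirror mapping is required. Below is a minimal sketch that builds one from explicit mirror pairs, assuming the standard iBUG/300W 68-point numbering (jaw 0-16, brows 17-26, nose 27-35, eyes 36-47, mouth 48-67) — verify the pairing against your own annotation scheme before training.

```python
# Sketch: build flip_idx for a 68-point (iBUG/300W-style) face layout.
# Points on the facial midline (8, 27-30, 33, 51, 57, 62, 66) map to themselves;
# every other point swaps with its left/right mirror partner.
pairs = [
    # jaw line (0-16): 0<->16, 1<->15, ..., 7<->9; point 8 (chin) is fixed
    *[(i, 16 - i) for i in range(8)],
    # eyebrows: left (17-21) <-> right (22-26), in mirrored order
    *[(17 + i, 26 - i) for i in range(5)],
    # nostrils (31-35): 31<->35, 32<->34; point 33 is fixed
    (31, 35), (32, 34),
    # eyes: left (36-41) <-> right (42-47)
    (36, 45), (37, 44), (38, 43), (39, 42), (40, 47), (41, 46),
    # outer mouth (48-59); points 51 and 57 are fixed
    (48, 54), (49, 53), (50, 52), (59, 55), (58, 56),
    # inner mouth (60-67); points 62 and 66 are fixed
    (60, 64), (61, 63), (67, 65),
]

flip_idx = list(range(68))  # start from the identity mapping
for a, b in pairs:
    flip_idx[a], flip_idx[b] = b, a

print(flip_idx[:17])
# → [16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
```

The printed list then goes into the dataset YAML as `flip_idx: [...]`, directly below `kpt_shape`.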