Hands-on Detectron2: Training a Human Keypoint Detection Model

Detectron2 is a mature object detection framework, and its official tutorials are well documented. Following the official guide, I trained a human keypoint detection model on the COCO 2017 dataset using a single TITAN RTX GPU (24 GB of memory); training took about four days.

The steps are as follows:

1. Installing the Detectron2 codebase

Following the official install instructions, installation is straightforward. I chose to install from a local clone of the source:

git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2

After the first command finishes, a local clone of the repository is created with the following directory structure:

detectron2
├── configs
├── datasets
├── demo
├── detectron2
├── dev
├── docker
├── docs
├── GETTING_STARTED.md
├── INSTALL.md
├── LICENSE
├── MODEL_ZOO.md
├── output
├── projects
├── README.md
├── setup.cfg
├── setup.py
├── tests
└── tools

2. Downloading the human keypoint dataset

First, open the official COCO dataset download page.

On that page, the Images and Annotations sections list the image archives and the annotation files, respectively.

Under Images, download three large archives, corresponding to the training, validation, and test sets:

2017 Train images [118K/18GB]
2017 Val images [5K/1GB]
2017 Test images [41K/6GB]

Under Annotations, download a single annotation archive:

2017 Train/Val annotations [241MB]. Unpacking this archive yields the following directory structure:

Among these, person_keypoints_train2017.json and person_keypoints_val2017.json are the training-set and validation-set annotation files for human keypoint detection; they are the only files we actually need.

annotations
├── captions_train2017.json
├── captions_val2017.json
├── instances_train2017.json
├── instances_val2017.json
├── person_keypoints_train2017.json    # training-set annotations for human keypoint detection
└── person_keypoints_val2017.json      # validation-set annotations for human keypoint detection

Create a coco directory under the repository's datasets directory, then place the training set, validation set, and annotations inside it:


datasets
├── coco
│   ├── annotations
│   ├── test2017
│   ├── train2017
│   └── val2017

3. Environment setup and model training
To start training, cd into the repository's top-level detectron2 directory and run:

python tools/train_net.py --config-file ./configs/COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml  SOLVER.IMS_PER_BATCH 8 SOLVER.BASE_LR 0.0025
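The two SOLVER overrides adapt the reference config, which (at the time of writing) assumes 8 GPUs with 16 images per batch and BASE_LR 0.02, to a single GPU. A common rule of thumb is to scale the learning rate linearly with the batch size (Goyal et al.); by that rule a batch of 8 would suggest an LR of 0.01, so the 0.0025 used here, which is the value the official Getting Started guide suggests for a batch of 2, is a more conservative choice. A minimal sketch of the rule, with the reference values assumed from detectron2's Base-RCNN-FPN config:

```python
def linear_scaled_lr(base_lr, base_batch, new_batch):
    """Linear LR scaling rule of thumb: when the total batch size
    changes by a factor k, scale the learning rate by k as well."""
    return base_lr * (new_batch / base_batch)

# Assumed reference point: 16 images/batch at BASE_LR 0.02.
print(linear_scaled_lr(0.02, 16, 8))  # what the linear rule suggests for batch 8
print(linear_scaled_lr(0.02, 16, 2))  # the Getting Started value for batch 2
```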

After about four days of training, if everything goes well you should see evaluation results like these:

[07/19 03:54:17] d2.evaluation.evaluator INFO: Total inference pure compute time: 0:05:37 (0.067663 s / iter per device, on 1 devices)
[07/19 03:54:17] d2.evaluation.coco_evaluation INFO: Preparing results for COCO format ...
[07/19 03:54:17] d2.evaluation.coco_evaluation INFO: Saving results to ./output/inference/coco_instances_results.json
[07/19 03:54:19] d2.evaluation.coco_evaluation INFO: Evaluating predictions with unofficial COCO API...
[07/19 03:54:19] d2.evaluation.fast_eval_api INFO: Evaluate annotation type *bbox*
[07/19 03:54:19] d2.evaluation.fast_eval_api INFO: COCOeval_opt.evaluate() finished in 0.77 seconds.
[07/19 03:54:19] d2.evaluation.fast_eval_api INFO: Accumulating evaluation results...
[07/19 03:54:19] d2.evaluation.fast_eval_api INFO: COCOeval_opt.accumulate() finished in 0.10 seconds.
[07/19 03:54:19] d2.evaluation.coco_evaluation INFO: Evaluation results for bbox: 
|   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   |
|:------:|:------:|:------:|:------:|:------:|:------:|
| 55.244 | 83.255 | 60.158 | 36.447 | 63.029 | 72.722 |
[07/19 03:54:20] d2.evaluation.fast_eval_api INFO: Evaluate annotation type *keypoints*
[07/19 03:54:25] d2.evaluation.fast_eval_api INFO: COCOeval_opt.evaluate() finished in 5.44 seconds.
[07/19 03:54:25] d2.evaluation.fast_eval_api INFO: Accumulating evaluation results...
[07/19 03:54:25] d2.evaluation.fast_eval_api INFO: COCOeval_opt.accumulate() finished in 0.03 seconds.
[07/19 03:54:25] d2.evaluation.coco_evaluation INFO: Evaluation results for keypoints: 
|   AP   |  AP50  |  AP75  |  APm   |  APl   |
|:------:|:------:|:------:|:------:|:------:|
| 63.696 | 85.916 | 69.254 | 59.249 | 72.083 |
[07/19 03:54:25] d2.engine.defaults INFO: Evaluation results for keypoints_coco_2017_val in csv format:
[07/19 03:54:25] d2.evaluation.testing INFO: copypaste: Task: bbox
[07/19 03:54:25] d2.evaluation.testing INFO: copypaste: AP,AP50,AP75,APs,APm,APl
[07/19 03:54:25] d2.evaluation.testing INFO: copypaste: 55.2440,83.2547,60.1577,36.4470,63.0290,72.7224
[07/19 03:54:25] d2.evaluation.testing INFO: copypaste: Task: keypoints
[07/19 03:54:25] d2.evaluation.testing INFO: copypaste: AP,AP50,AP75,APm,APl
[07/19 03:54:25] d2.evaluation.testing INFO: copypaste: 63.6964,85.9162,69.2536,59.2487,72.0832
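The keypoint AP numbers above are computed with Object Keypoint Similarity (OKS) in place of box IoU: a detection counts as a match at AP50, for instance, when its OKS with a ground-truth person exceeds 0.5. The sketch below is a minimal version of the OKS formula; the per-keypoint falloff constants passed in are illustrative only, and the real values are defined in the COCO evaluation API:

```python
import math

def oks(gt, pred, area, kappas):
    """Object Keypoint Similarity for one person instance.
    gt:     list of (x, y, v) ground-truth keypoints (v > 0 means labeled)
    pred:   list of (x, y) predicted keypoints
    area:   ground-truth object area in pixels^2 (scale^2)
    kappas: per-keypoint constants controlling the falloff (COCO defines
            one per keypoint type; e.g. hips are more tolerant than eyes)
    Only labeled keypoints contribute; unlabeled ones are skipped."""
    num, den = 0.0, 0
    for (gx, gy, v), (px, py), k in zip(gt, pred, kappas):
        if v > 0:
            d2 = (px - gx) ** 2 + (py - gy) ** 2
            num += math.exp(-d2 / (2 * area * k ** 2))
            den += 1
    return num / den if den else 0.0

# Perfect predictions on the two labeled keypoints give OKS = 1.0;
# the third keypoint is unlabeled (v = 0) and is ignored.
gt = [(10, 10, 2), (20, 20, 2), (0, 0, 0)]
pred = [(10, 10), (20, 20), (5, 5)]
print(oks(gt, pred, area=900.0, kappas=[0.026, 0.079, 0.107]))  # 1.0
```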

When training finishes, the following model files have been written to the detectron2/output directory:

config.yaml                                    inference          model_0034999.pth  model_0084999.pth  model_0134999.pth  model_0184999.pth  model_0234999.pth
events.out.tfevents.1626131087.ubuntu.23424.0  last_checkpoint    model_0039999.pth  model_0089999.pth  model_0139999.pth  model_0189999.pth  model_0239999.pth
events.out.tfevents.1626131279.ubuntu.24690.0  log.txt            model_0044999.pth  model_0094999.pth  model_0144999.pth  model_0194999.pth  model_0244999.pth
events.out.tfevents.1626218003.ubuntu.28190.0  metrics.json       model_0049999.pth  model_0099999.pth  model_0149999.pth  model_0199999.pth  model_0249999.pth
events.out.tfevents.1626218189.ubuntu.29394.0  model_0004999.pth  model_0054999.pth  model_0104999.pth  model_0154999.pth  model_0204999.pth  model_0254999.pth
events.out.tfevents.1626218366.ubuntu.30715.0  model_0009999.pth  model_0059999.pth  model_0109999.pth  model_0159999.pth  model_0209999.pth  model_0259999.pth
events.out.tfevents.1626218400.ubuntu.30986.0  model_0014999.pth  model_0064999.pth  model_0114999.pth  model_0164999.pth  model_0214999.pth  model_0264999.pth
events.out.tfevents.1626218466.ubuntu.31493.0  model_0019999.pth  model_0069999.pth  model_0119999.pth  model_0169999.pth  model_0219999.pth  model_0269999.pth
events.out.tfevents.1626218502.ubuntu.31791.0  model_0024999.pth  model_0074999.pth  model_0124999.pth  model_0174999.pth  model_0224999.pth  model_final.pth
events.out.tfevents.1626262859.ubuntu.20243.0  model_0029999.pth  model_0079999.pth  model_0129999.pth  model_0179999.pth  model_0229999.pth
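The checkpoint names in this listing follow directly from two solver settings, assumed here to be detectron2's defaults for the 3x schedule: a checkpoint every 5,000 iterations (SOLVER.CHECKPOINT_PERIOD) over 270,000 iterations total (SOLVER.MAX_ITER), each named with the zero-based index of the last completed iteration, plus model_final.pth as a copy of the last one:

```python
# Assumed solver settings (detectron2 3x-schedule defaults):
CHECKPOINT_PERIOD = 5000   # SOLVER.CHECKPOINT_PERIOD
MAX_ITER = 270000          # SOLVER.MAX_ITER

# One checkpoint after every CHECKPOINT_PERIOD iterations, named by the
# zero-based index of the last completed iteration:
names = [
    f"model_{it - 1:07d}.pth"
    for it in range(CHECKPOINT_PERIOD, MAX_ITER + 1, CHECKPOINT_PERIOD)
]
print(names[0])    # model_0004999.pth
print(names[-1])   # model_0269999.pth
print(len(names))  # 54 checkpoints, plus model_final.pth
```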

Finally, we run inference with the last saved model (model_final.pth) to check its quality. Judging from the three visualized test images, the results look very good.

cd ./detectron2/demo
python demo.py --config-file ../configs/COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml --output ./output  --input ./keypoints_input/000000000552.jpg  ./keypoints_input/000000001152.jpg   ./keypoints_input/000000581918.jpg --opts MODEL.WEIGHTS  ../output/model_final.pth