Full training walkthrough for InsightFace facial landmark detection (68 points + two eyeballs)

Code repository:

https://github.com/deepinsight/insightface/tree/master/alignment/synthetics

Environment setup

pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
pip install pytorch-lightning
pip install timm
pip install albumentations
pip install insightface
pip install onnxruntime
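Before training, it can save time to confirm all of the packages above are importable. A minimal standard-library check (note the import names, which differ from some pip package names):

```python
import importlib.util

def check_deps(module_names):
    """Return a dict mapping each module name to whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None
            for name in module_names}

# Import names for the packages installed above.
status = check_deps(["torch", "torchvision", "pytorch_lightning",
                     "timm", "albumentations", "insightface", "onnxruntime"])
for name, ok in status.items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```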

Training data

1. Download the training dataset:
Download the Face Synthetics dataset from https://github.com/microsoft/FaceSynthetics and place it anywhere on disk.
(screenshot omitted)
2. Modify the paths:
Point the paths in the scripts at your dataset location.
(screenshot omitted)
3. Prepare the data:
Run tools/prepare_synthetics.py to prepare the training data.
(screenshot omitted)
4. Generated data:
When processing finishes, the output folder contains the generated images with their 70-point annotations plus the model file.
(screenshot omitted)
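For orientation, here is a sketch of how the 70-point annotation splits up. This assumes the first 68 points follow the standard 68-landmark layout and the last 2 are the eyeball (pupil) centres, per the "68 points + two eyeballs" setup; verify the exact ordering against the prepared data:

```python
import numpy as np

def split_landmarks(pts70):
    """Split a 70-point annotation into the 68 face points and 2 eyeballs.

    Assumption: indices 0-67 are the standard 68-point layout (contour,
    brows, nose, eyes, mouth) and indices 68-69 are the pupil centres.
    """
    pts70 = np.asarray(pts70)
    assert pts70.shape == (70, 2), "expected 70 (x, y) points"
    face68 = pts70[:68]
    eyeballs = pts70[68:]
    return face68, eyeballs

face68, eyeballs = split_landmarks(np.zeros((70, 2)))
print(face68.shape, eyeballs.shape)  # (68, 2) (2, 2)
```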

Training the model

Pretrained model and small code changes
The pretrained model does not need to be downloaded separately; what does need changing is the landmark count, from 68 to 70.
The backbone is ResNet50d.
(screenshot omitted)
root is the path of the folder produced by data preparation; fill in your own.
(screenshot omitted)
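The 68→70 change just widens the regression head: the network emits 2 values per point, so the flat output grows from 136 to 140 values. A sketch of decoding that output into pixel coordinates, assuming the common convention of coordinates normalized to [-1, 1] over a 256×256 input (check the repo's dataset code for the exact convention it uses):

```python
import numpy as np

NUM_PTS = 70      # 68 face points + 2 eyeballs
INPUT_SIZE = 256  # assumed network input resolution

def decode_output(raw):
    """Reshape the flat regression output (2 * NUM_PTS values) into
    pixel coordinates, assuming outputs normalized to [-1, 1]."""
    pts = np.asarray(raw).reshape(NUM_PTS, 2)
    return (pts + 1.0) * (INPUT_SIZE // 2)

raw = np.zeros(2 * NUM_PTS)   # dummy output: all points at the image centre
pts = decode_output(raw)
print(pts.shape)              # (70, 2)
```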

Start training
python -u trainer_synthetics.py

Resolving training errors

1. Error:

C:\anaconda\envs\insightface-master12\lib\site-packages\torchaudio\backend\utils.py:62: UserWarning: No audio backend is available.
  warnings.warn("No audio backend is available.")
Global seed set to 727
Traceback (most recent call last):
  File "E:\project_mian\insightface-master\alignment\synthetics\trainer_synthetics.py", line 139, in <module>
    cli_main()
  File "E:\project_mian\insightface-master\alignment\synthetics\trainer_synthetics.py", line 101, in cli_main
    train_set = FaceDataset(root_dir=args.root, is_train=True)
  File "E:\project_mian\insightface-master\alignment\synthetics\datasets\dataset_synthetics.py", line 92, in __init__
    A.MotionBlur(blur_limit=(5,12), p=0.1),
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\albumentations\augmentations\blur\transforms.py", line 78, in __init__
    raise ValueError(f"Blur limit must be odd when centered=True. Got: {self.blur_limit}")
ValueError: Blur limit must be odd when centered=True. Got: (5, 12)

Fix: change 12 to 13 (both bounds of MotionBlur's blur_limit must be odd).
(screenshot omitted)
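The constraint here is simply that blur kernels are centred, so both bounds of blur_limit must be odd kernel sizes. A minimal sketch of the kind of validation that trips here:

```python
def validate_blur_limit(blur_limit):
    """Sketch of the parity check: centred blur kernels need odd sizes."""
    low, high = blur_limit
    if low % 2 == 0 or high % 2 == 0:
        raise ValueError(
            f"Blur limit must be odd when centered=True. Got: {blur_limit}")
    return blur_limit

validate_blur_limit((5, 13))   # fine
try:
    validate_blur_limit((5, 12))
except ValueError as e:
    print(e)   # Blur limit must be odd when centered=True. Got: (5, 12)
```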

2. Error:

type object 'Trainer' has no attribute 'add_argparse_args'

Downgrade and reinstall:
pip install pytorch-lightning==1.9.4
This then raises a new error:
Traceback (most recent call last):
  File "E:\project_mian\insightface-master\alignment\synthetics\trainer_synthetics.py", line 139, in <module>
    cli_main()
  File "E:\project_mian\insightface-master\alignment\synthetics\trainer_synthetics.py", line 130, in cli_main
    logger=TensorBoardLogger(osp.join(ckpt_path, 'logs')),
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\loggers\tensorboard.py", line 110, in __init__
    super().__init__(
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\lightning_fabric\loggers\tensorboard.py", line 93, in __init__
    raise ModuleNotFoundError(
ModuleNotFoundError: Neither `tensorboard` nor `tensorboardX` is available. Try `pip install`ing either.

Fix:

pip install tensorboardX 

3. Error
type object 'Trainer' has no attribute 'add_argparse_args'
Fix (same as error 2):

pip install pytorch-lightning==1.9.4

4. Error

__init__() got an unexpected keyword argument 'progress_bar_refresh_rate'

Fix:

Comment out the progress_bar_refresh_rate line (the argument was removed from the Trainer API in newer PyTorch Lightning versions).

5. Error

AttributeError: 'Chatbot' object has no attribute 'style'

Fix:

Uninstall gradio first, then downgrade: pip install gradio==3.50.0

6. Error:

Traceback (most recent call last):
  File "E:\project_mian\insightface-master\alignment\synthetics\trainer_synthetics.py", line 143, in <module>
    cli_main()
  File "E:\project_mian\insightface-master\alignment\synthetics\trainer_synthetics.py", line 140, in cli_main
    trainer.fit(model, train_loader, val_loader)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 608, in fit
    call._call_and_handle_interrupt(
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\trainer\call.py", line 38, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 650, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1112, in _run
    results = self._run_stage()
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1191, in _run_stage
    self._run_train()
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1214, in _run_train
    self.fit_loop.run()
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\loops\loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 267, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\loops\loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 187, in advance
    batch = next(data_fetcher)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 184, in __next__
    return self.fetching_function()
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 265, in fetching_function
    self._fetch_next_batch(self.dataloader_iter)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 280, in _fetch_next_batch
    batch = next(iterator)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 569, in __next__
    return self.request_next_batch(self.loader_iters)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 581, in request_next_batch
    return apply_to_collection(loader_iters, Iterator, next)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\lightning_utilities\core\apply_func.py", line 64, in apply_to_collection
    return function(data, *args, **kwargs)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\torch\utils\data\dataloader.py", line 681, in __next__
    data = self._next_data()
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\torch\utils\data\dataloader.py", line 1376, in _next_data
    return self._process_data(data)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\torch\utils\data\dataloader.py", line 1402, in _process_data
    data.reraise()
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\torch\_utils.py", line 461, in reraise
    raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\torch\utils\data\_utils\worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\torch\utils\data\_utils\fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\torch\utils\data\_utils\collate.py", line 175, in default_collate
    return [default_collate(samples) for samples in transposed]  # Backwards compatibility.
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\torch\utils\data\_utils\collate.py", line 175, in <listcomp>
    return [default_collate(samples) for samples in transposed]  # Backwards compatibility.
  File "C:\anaconda\envs\insightface-master12\lib\site-packages\torch\utils\data\_utils\collate.py", line 140, in default_collate
    out = elem.new(storage).resize_(len(batch), *list(elem.size()))
RuntimeError: Trying to resize storage that is not resizable

Fix: the sample sizes don't match because the landmark count is inconsistent; set every occurrence to 70.
(screenshot omitted)
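What default_collate is complaining about is that samples of different shapes cannot be stacked into one batch tensor. The same failure mode reproduced in plain NumPy:

```python
import numpy as np

# Batching stacks per-sample arrays into one tensor; that only works
# when every sample has the same shape.
good = [np.zeros((70, 2)) for _ in range(4)]
print(np.stack(good).shape)      # (4, 70, 2)

bad = [np.zeros((70, 2)), np.zeros((68, 2))]   # one sample still 68-point
try:
    np.stack(bad)
except ValueError as e:
    print("stack failed:", e)
```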

Training output

The checkpoints produced by training are saved as shown below:
(screenshot omitted)

Testing

Run test_synthetics.py.

Put a pred.txt into 'data/300W/Validation' listing the path of each image, one per line, and point the script at your model checkpoint to start testing.
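Generating that pred.txt can be scripted. A sketch (the directory name and image extensions are assumptions; adjust to your layout):

```python
import os

def write_pred_txt(image_dir, out_path, exts=(".jpg", ".png")):
    """List every image under image_dir into out_path, one path per line."""
    paths = sorted(
        os.path.join(image_dir, name)
        for name in os.listdir(image_dir)
        if name.lower().endswith(exts)
    )
    with open(out_path, "w") as f:
        f.write("\n".join(paths))
    return paths

# Example (assumed layout):
# write_pred_txt("data/300W/Validation", "data/300W/Validation/pred.txt")
```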

Test results:
(screenshot omitted)

Below is a code example that uses the dlib library for facial landmark detection and fills in the eye and inner-mouth contours on a colour image:

```python
import cv2
import dlib
import numpy as np

# Load dlib's face detector and 68-point landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

# Open the camera
cap = cv2.VideoCapture(0)

while True:
    # Read one frame
    ret, frame = cap.read()
    if not ret:
        break

    # Convert to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect faces with dlib's face detector
    faces = detector(gray)

    for face in faces:
        # Detect the 68 landmarks for this face
        landmarks = predictor(gray, face)

        # Gather (x, y) coordinates for a range of landmark indices
        def points(idx_range):
            return np.array([(landmarks.part(i).x, landmarks.part(i).y)
                             for i in idx_range], np.int32)

        left_eye = points(range(36, 42))    # left-eye contour
        right_eye = points(range(42, 48))   # right-eye contour
        mouth = points(range(48, 68))       # mouth contour

        # Fill the eye and mouth polygons
        cv2.fillPoly(frame, [left_eye], (0, 255, 0))
        cv2.fillPoly(frame, [right_eye], (0, 255, 0))
        cv2.fillPoly(frame, [mouth], (0, 0, 255))

    # Show the frame
    cv2.imshow('frame', frame)

    # Quit on 'q'
    if cv2.waitKey(1) == ord('q'):
        break

# Release the camera and close windows
cap.release()
cv2.destroyAllWindows()
```

Note that before running the code, you need to download dlib's landmark model file shape_predictor_68_face_landmarks.dat and place it in the same directory as the script.