
Python 3: Real-Time Facial Landmark Detection from a Webcam with Dlib 19.7

0. Introduction

Developed in Python, this project uses the Dlib library to capture faces from the webcam and annotate their facial landmarks in real time.

Figure 1: Example output (GIF)

Figure 2: Example output (still image)

(The implementation is fairly simple and the amount of code is small, so it is well suited to beginners or hobby projects.)

1. Development Environment

python:  3.6.3

dlib:    19.7

OpenCV, numpy

import dlib          # dlib, used for face detection
import numpy as np   # numpy, used for numerical processing
import cv2           # cv2, OpenCV for image processing
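To quickly confirm that the installed versions match, they can be printed from Python (a small sketch; dlib only exposes __version__ in reasonably recent builds, and exact patch numbers may differ):

print(dlib.__version__)   # expected to start with 19.7
print(cv2.__version__)
print(np.__version__)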

2. Source Code Walkthrough

The implementation is quite simple and splits into two parts: camera capture + facial landmark annotation.

2.1 Camera Capture

First, how the camera is opened in OpenCV;

cap = cv2.VideoCapture(0) creates the capture object;

(see the official documentation for details)

# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie

"""
cv2.VideoCapture(), create a cv2 camera object / open the default camera

Python: cv2.VideoCapture() →
Python: cv2.VideoCapture(filename) →
    filename – name of the opened video file (e.g. video.avi) or image sequence
    (e.g. img_%02d.jpg, which will read samples like img_00.jpg, img_01.jpg, img_02.jpg, ...)
Python: cv2.VideoCapture(device) →
    device – id of the opened video capturing device (i.e. a camera index).
    If there is a single camera connected, just pass 0.
"""
cap = cv2.VideoCapture(0)

"""
cv2.VideoCapture.set(propId, value), set a video property;

propId:
    CV_CAP_PROP_POS_MSEC        Current position of the video file in milliseconds.
    CV_CAP_PROP_POS_FRAMES      0-based index of the frame to be decoded/captured next.
    CV_CAP_PROP_POS_AVI_RATIO   Relative position of the video file: 0 - start of the film, 1 - end of the film.
    CV_CAP_PROP_FRAME_WIDTH     Width of the frames in the video stream.
    CV_CAP_PROP_FRAME_HEIGHT    Height of the frames in the video stream.
    CV_CAP_PROP_FPS             Frame rate.
    CV_CAP_PROP_FOURCC          4-character code of codec.
    CV_CAP_PROP_FRAME_COUNT     Number of frames in the video file.
    CV_CAP_PROP_FORMAT          Format of the Mat objects returned by retrieve().
    CV_CAP_PROP_MODE            Backend-specific value indicating the current capture mode.
    CV_CAP_PROP_BRIGHTNESS      Brightness of the image (only for cameras).
    CV_CAP_PROP_CONTRAST        Contrast of the image (only for cameras).
    CV_CAP_PROP_SATURATION      Saturation of the image (only for cameras).
    CV_CAP_PROP_HUE             Hue of the image (only for cameras).
    CV_CAP_PROP_GAIN            Gain of the image (only for cameras).
    CV_CAP_PROP_EXPOSURE        Exposure (only for cameras).
    CV_CAP_PROP_CONVERT_RGB     Boolean flag indicating whether images should be converted to RGB.
    CV_CAP_PROP_WHITE_BALANCE_U The U value of the whitebalance setting (note: only supported by DC1394 v 2.x backend currently)
    CV_CAP_PROP_WHITE_BALANCE_V The V value of the whitebalance setting (note: only supported by DC1394 v 2.x backend currently)
    CV_CAP_PROP_RECTIFICATION   Rectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently)
    CV_CAP_PROP_ISO_SPEED       The ISO speed of the camera (note: only supported by DC1394 v 2.x backend currently)
    CV_CAP_PROP_BUFFERSIZE      Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently)

value: the value to assign to the property
"""
cap.set(3, 480)

"""
cv2.VideoCapture.isOpened(), check whether the camera was initialized successfully / check if we succeeded
Returns True or False
"""
cap.isOpened()

"""
cv2.VideoCapture.read([image]) -> retval, image, read the video / grabs, decodes and returns the next video frame

Returns two values:
    a boolean True/False telling whether the frame was read successfully / whether the end of the video was reached
    the image object, a 3-D matrix
"""
flag, im_rd = cap.read()
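One note on cap.set(3, 480) above: 3 is the raw id of CV_CAP_PROP_FRAME_WIDTH, so the call requests a frame width of 480 pixels. If the installed OpenCV exposes the named constants on the cv2 module (the 3.x and later Python bindings do), the same call can be written more readably; a small sketch:

cap.set(cv2.CAP_PROP_FRAME_WIDTH, 480)    # same effect as cap.set(3, 480)
# the height can be requested the same way if needed, e.g.
# cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 360)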

2.2 Facial Landmark Annotation

The landmarks are annotated with the predictor "shape_predictor_68_face_landmarks.dat". This is a pre-trained Dlib model that can be loaded directly to locate the 68 facial landmark points.
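To see the predictor on its own, the following minimal sketch runs the 68-point annotation on a single static image instead of the camera stream (the input file name "face.jpg" is only a placeholder, and the .dat model is assumed to be in the working directory):

import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")                   # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detect faces, then predict the 68 landmarks inside every face rectangle
for rect in detector(gray, 0):
    shape = predictor(img, rect)
    for p in shape.parts():
        cv2.circle(img, (p.x, p.y), 2, (0, 255, 0))

cv2.imwrite("face_landmarks.jpg", img)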

2.3 Full Source Code

The approach is straightforward:

cv2.VideoCapture() creates the camera object, and flag, im_rd = cap.read() reads the video from the camera, so im_rd holds one frame of the video at a time;

each frame im_rd is then treated like a single image: Dlib detects the face, the landmarks are predicted, and the landmark points are drawn onto the frame;

press the s key to save a screenshot of the current frame, or the q key to quit.

# 2018-2-26
# By TimeStamp
# cnblogs: http://www.cnblogs.com/AdaminXie
# github: https://github.com/coneypo/Dlib_face_detection_from_camera

import dlib          # dlib, used for face detection
import numpy as np   # numpy, used for numerical processing
import cv2           # cv2, OpenCV for image processing

# Dlib detector and predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

# create the cv2 camera object
cap = cv2.VideoCapture(0)

# cap.set(propId, value)
# set a video property: propId selects the property, value is the new value
cap.set(3, 480)

# screenshot counter
cnt = 0

# cap.isOpened() returns True/False, checking whether initialization succeeded
while cap.isOpened():

    # cap.read() returns two values:
    #   a boolean True/False telling whether the frame was read successfully / end of video reached
    #   the image object, a 3-D matrix
    flag, im_rd = cap.read()

    # wait 1 ms per frame; with a delay of 0 only a static frame would be shown
    k = cv2.waitKey(1)

    # convert to grayscale (frames delivered by OpenCV are BGR)
    img_gray = cv2.cvtColor(im_rd, cv2.COLOR_BGR2GRAY)

    # detected face rectangles
    rects = detector(img_gray, 0)

    # print(len(rects))

    # font used for the text overlays below
    font = cv2.FONT_HERSHEY_SIMPLEX

    # annotate the 68 landmarks
    if len(rects) != 0:
        # faces detected
        for i in range(len(rects)):
            landmarks = np.matrix([[p.x, p.y] for p in predictor(im_rd, rects[i]).parts()])

            for idx, point in enumerate(landmarks):
                # coordinates of each of the 68 points
                pos = (point[0, 0], point[0, 1])

                # draw a circle on every landmark with cv2.circle, 68 in total
                cv2.circle(im_rd, pos, 2, color=(0, 255, 0))

                # write the index 1-68 next to each point with cv2.putText
                cv2.putText(im_rd, str(idx + 1), pos, font, 0.2, (0, 0, 255), 1, cv2.LINE_AA)

        cv2.putText(im_rd, "faces: " + str(len(rects)), (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)
    else:
        # no face detected
        cv2.putText(im_rd, "no face", (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)

    # on-screen hints
    im_rd = cv2.putText(im_rd, "s: screenshot", (20, 400), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
    im_rd = cv2.putText(im_rd, "q: quit", (20, 450), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)

    # press 's' to save a screenshot
    if k == ord('s'):
        cnt += 1
        cv2.imwrite("screenshot" + str(cnt) + ".jpg", im_rd)

    # press 'q' to quit
    if k == ord('q'):
        break

    # show the window
    cv2.imshow("camera", im_rd)

# release the camera
cap.release()

# destroy the windows that were created
cv2.destroyAllWindows()
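Before the script is run for the first time, the pre-trained model "shape_predictor_68_face_landmarks.dat" has to sit next to it. It is distributed on the dlib site as a bz2 archive; a small helper sketch for fetching and unpacking it (the URL is an assumption, check dlib.net if it has moved):

import bz2
import urllib.request

# assumed download location of the pre-trained 68-point model
url = "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2"
archive = "shape_predictor_68_face_landmarks.dat.bz2"
urllib.request.urlretrieve(url, archive)

# decompress the archive next to the script
with bz2.open(archive, "rb") as f_in, open("shape_predictor_68_face_landmarks.dat", "wb") as f_out:
    f_out.write(f_in.read())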

If this project helps you, feel free to star it on GitHub.

That is all for this article. I hope it helps with your study, and please continue to support 脚本之家.
