5.3 Going Deeper into face_recognition Face Detection
Earlier in this chapter we covered the basics of face detection with face_recognition. This section digs deeper into what the library can do.
5.3.1 Detecting the State of the Eyes
Consider the example file blink_detection.py below, which detects the state of the eyes from the camera. If the user's eyes stay closed for several seconds, the program prints "Eyes closed" until the user presses the space bar to confirm the state. Note that this example needs to run on a Linux system, and the keyboard module must be run with sudo privileges to work properly.
Example 5-10: Detecting the state of a person's eyes in a live camera feed
Source path: daima\5\5-7\blink_detection.py
import face_recognition
import cv2
from scipy.spatial import distance as dist

# Number of consecutive closed-eye frames that triggers the alert
# (despite the name, this counts processed frames, not seconds)
EYES_CLOSED_SECONDS = 5

def main():
    closed_count = 0
    video_capture = cv2.VideoCapture(0)

    ret, frame = video_capture.read()
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    rgb_small_frame = small_frame[:, :, ::-1]
    face_landmarks_list = face_recognition.face_landmarks(rgb_small_frame)
    process = True

    while True:
        ret, frame = video_capture.read()

        # Convert the frame to the format face_recognition expects
        small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
        rgb_small_frame = small_frame[:, :, ::-1]

        # Get the facial landmarks (only on every other frame)
        if process:
            face_landmarks_list = face_recognition.face_landmarks(rgb_small_frame)

            # Grab the eyes
            for face_landmark in face_landmarks_list:
                left_eye = face_landmark['left_eye']
                right_eye = face_landmark['right_eye']

                color = (255, 0, 0)
                thickness = 2
                cv2.rectangle(small_frame, left_eye[0], right_eye[-1], color, thickness)
                cv2.imshow('Video', small_frame)

                ear_left = get_ear(left_eye)
                ear_right = get_ear(right_eye)

                closed = ear_left < 0.2 and ear_right < 0.2
                if closed:
                    closed_count += 1
                else:
                    closed_count = 0

                if closed_count >= EYES_CLOSED_SECONDS:
                    asleep = True
                    while asleep:  # Loop until the user wakes up and confirms
                        print("Eyes closed")
                        if cv2.waitKey(1) == 32:  # Wait for the space bar
                            asleep = False
                            print("Eyes open")
                    closed_count = 0

        process = not process
        key = cv2.waitKey(1) & 0xFF
        if key == ord("q"):
            break

def get_ear(eye):
    # Compute the Euclidean distances between the two sets of
    # vertical eye landmarks ((x, y) coordinates)
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])

    # Compute the Euclidean distance between the horizontal
    # eye landmarks ((x, y) coordinates)
    C = dist.euclidean(eye[0], eye[3])

    # Compute and return the eye aspect ratio
    ear = (A + B) / (2.0 * C)
    return ear

if __name__ == "__main__":
    main()
The execution result is shown in Figure 5-9.
Figure 5-9 Execution result
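The get_ear() function implements the eye aspect ratio (EAR): the two vertical landmark distances divided by twice the horizontal one, so an open eye scores high and a nearly closed eye scores close to zero. The computation can be checked on paper; the sketch below uses made-up landmark coordinates (real ones come from face_recognition.face_landmarks()) and the standard-library math.dist instead of SciPy:

```python
import math

def get_ear(eye):
    # Eye aspect ratio: the two vertical landmark distances
    # divided by twice the horizontal landmark distance
    A = math.dist(eye[1], eye[5])
    B = math.dist(eye[2], eye[4])
    C = math.dist(eye[0], eye[3])
    return (A + B) / (2.0 * C)

# Hypothetical landmark coordinates for illustration only:
# six points around an eye, ordered like dlib's eye landmarks
open_eye = [(0, 5), (3, 9), (7, 9), (10, 5), (7, 1), (3, 1)]
closed_eye = [(0, 5), (3, 5.5), (7, 5.5), (10, 5), (7, 4.5), (3, 4.5)]

print(get_ear(open_eye))    # → 0.8, well above the 0.2 threshold
print(get_ear(closed_eye))  # → 0.1, below the 0.2 threshold
```

With these numbers it is easy to see why the example treats EAR < 0.2 on both eyes as "closed".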
5.3.2 Blurring Faces
In real-world applications, privacy sometimes has to be protected, for example by pixelating faces in TV programs. Consider the example file blur_faces_on_webcam.py below, which reads faces from the webcam with OpenCV and then blurs every face it detects.
Example 5-11: Reading faces from the webcam and blurring them
Source path: daima\5\5-8\blur_faces_on_webcam.py
import face_recognition
import cv2

# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)

# Initialize some variables
face_locations = []

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Resize the frame to 1/4 size for faster face detection processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Find all the faces in the current frame of video
    face_locations = face_recognition.face_locations(small_frame, model="cnn")

    # Display the results
    for top, right, bottom, left in face_locations:
        # Scale the face locations back up, since the frame we
        # detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Extract the region of the image that contains the face
        face_image = frame[top:bottom, left:right]

        # Blur the face image
        face_image = cv2.GaussianBlur(face_image, (99, 99), 30)

        # Put the blurred face region back into the frame image
        frame[top:bottom, left:right] = face_image

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
When run, the program blurs the faces captured by the webcam, as shown in Figure 5-10.
Figure 5-10 Execution result
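GaussianBlur produces a smooth blur; the blocky mosaic look often seen on TV can instead be approximated by averaging the face region tile by tile. The following is a minimal sketch with plain NumPy, using a hypothetical 16-pixel tile size; the pixelate() helper is not part of face_recognition or OpenCV:

```python
import numpy as np

def pixelate(region, block=16):
    # Replace each block-by-block tile with its mean color, then
    # expand it back: the classic mosaic effect
    h, w = region.shape[:2]
    # Pad so that height and width are multiples of the block size
    ph, pw = -h % block, -w % block
    padded = np.pad(region, ((0, ph), (0, pw), (0, 0)), mode='edge')
    H, W = padded.shape[:2]
    # Split into tiles, average each tile, and repeat it back out
    tiles = padded.reshape(H // block, block, W // block, block, 3)
    means = tiles.mean(axis=(1, 3)).astype(padded.dtype)
    mosaic = np.repeat(np.repeat(means, block, axis=0), block, axis=1)
    return mosaic[:h, :w]

# Example: a random 100x80 BGR "face" region standing in for
# frame[top:bottom, left:right]
face = np.random.randint(0, 256, (100, 80, 3), dtype=np.uint8)
blurred = pixelate(face, block=16)
```

In the example above, the GaussianBlur line could be swapped for `face_image = pixelate(face_image)` to get the mosaic effect instead.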
5.3.3 Checking Whether Two Faces Match
Applications that check whether two faces match (true or false) usually do so by measuring similarity. In face_recognition, the built-in function face_distance() compares faces: given a list of face encodings, it compares each of them to a known face encoding and returns the Euclidean distance. For each compared face, the Euclidean distance represents how similar the faces are. The signature of face_distance() is:
face_distance(face_encodings, face_to_compare)
- face_encodings: the list of face encodings to compare.
- face_to_compare: a single face encoding to compare them against.
- Return value: a NumPy ndarray of Euclidean distances, in the same order as face_encodings.
Note that face_distance() itself takes no tolerance parameter; the caller decides how small a distance counts as a match. The smaller the cutoff, the stricter the comparison; 0.6 is the typical best value.
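Under the hood, face_distance() simply takes the Euclidean (L2) norm of the difference between encoding vectors, which can be reproduced with NumPy alone. The sketch below uses tiny 3-component stand-in "encodings" for readability (real encodings have 128 components):

```python
import numpy as np

def face_distance(face_encodings, face_to_compare):
    # Euclidean distance from each known encoding to the candidate,
    # mirroring the semantics of face_recognition.face_distance()
    if len(face_encodings) == 0:
        return np.empty(0)
    return np.linalg.norm(np.asarray(face_encodings) - face_to_compare, axis=1)

# Tiny stand-in encodings for illustration (real ones are 128-d)
known = [np.array([0.0, 0.0, 0.0]), np.array([3.0, 4.0, 0.0])]
candidate = np.array([0.0, 0.0, 0.0])

print(face_distance(known, candidate))  # → [0. 5.]
```

A distance of 0 means identical encodings; the larger the distance, the less alike the faces.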
Consider the example file face_distance.py below, which uses face_distance() to check whether two faces match. The model behind this example was trained so that faces with a distance of 0.6 or less are a match. If you want to be stricter, you can use a smaller cutoff: 0.55, for instance, reduces false positive matches, but at the risk of more false negatives.
Example 5-12: Checking whether two faces match
Source path: daima\5\5-9\face_distance.py
import face_recognition

# Load two images to compare against
known_obama_image = face_recognition.load_image_file("obama.jpg")
known_biden_image = face_recognition.load_image_file("biden.jpg")

# Get the face encodings for the known images
obama_face_encoding = face_recognition.face_encodings(known_obama_image)[0]
biden_face_encoding = face_recognition.face_encodings(known_biden_image)[0]

known_encodings = [
    obama_face_encoding,
    biden_face_encoding
]

# Load a test image and get its encoding
image_to_test = face_recognition.load_image_file("obama2.jpg")
image_to_test_encoding = face_recognition.face_encodings(image_to_test)[0]

# See how far apart the test image is from the known faces
face_distances = face_recognition.face_distance(known_encodings, image_to_test_encoding)

for i, face_distance in enumerate(face_distances):
    print("The test image has a distance of {:.2} from known image #{}".format(face_distance, i))
    print("- With a normal cutoff of 0.6, would the test image match the known image? {}".format(face_distance < 0.6))
    print("- With a very strict cutoff of 0.5, would the test image match the known image? {}".format(face_distance < 0.5))
    print()
When run, the program compares the test photo obama2.jpg against the known photos obama.jpg and biden.jpg and prints:
The test image has a distance of 0.35 from known image #0
- With a normal cutoff of 0.6, would the test image match the known image? True
- With a very strict cutoff of 0.5, would the test image match the known image? True
The test image has a distance of 0.82 from known image #1
- With a normal cutoff of 0.6, would the test image match the known image? False
- With a very strict cutoff of 0.5, would the test image match the known image? False
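The cutoff checks in the output above are exactly what face_recognition.compare_faces() automates: it computes face_distance() and then thresholds against a tolerance, so lowering the tolerance makes matching stricter. A minimal sketch of that relationship, using the distances from the output above as hypothetical inputs:

```python
import numpy as np

def compare_faces(distances, tolerance=0.6):
    # A face "matches" when its encoding distance is within the
    # tolerance; this mirrors how compare_faces() wraps face_distance()
    return list(np.asarray(distances) <= tolerance)

# Hypothetical distances to two known faces
distances = np.array([0.35, 0.82])

print(compare_faces(distances, tolerance=0.6))  # → [True, False]
print(compare_faces(distances, tolerance=0.3))  # → [False, False]
```

This is why the video example in the next section can pass tolerance=0.50 to compare_faces() to tighten matching.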
5.3.4 Recognizing Faces in a Video
Consider the example file facerec_from_video_file.py below, which recognizes faces in a video file and saves the result to a new video file.
Example 5-13: Recognizing faces in a video file and saving the result to a new video file
Source path: daima\5\5-10\facerec_from_video_file.py
import face_recognition
import cv2

# Open the video file to recognize faces in
input_movie = cv2.VideoCapture("hamilton_clip.mp4")
length = int(input_movie.get(cv2.CAP_PROP_FRAME_COUNT))

# Create an output movie file (make sure the resolution/frame rate
# matches the input video!)
fourcc = cv2.VideoWriter_fourcc(*'XVID')
output_movie = cv2.VideoWriter('output.avi', fourcc, 29.97, (640, 360))

# Load some sample pictures and learn how to recognize them
lmm_image = face_recognition.load_image_file("lin-manuel-miranda.png")
lmm_face_encoding = face_recognition.face_encodings(lmm_image)[0]

al_image = face_recognition.load_image_file("alex-lacamoire.png")
al_face_encoding = face_recognition.face_encodings(al_image)[0]

known_faces = [
    lmm_face_encoding,
    al_face_encoding
]

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
frame_number = 0

while True:
    # Grab a single frame of video
    ret, frame = input_movie.read()
    frame_number += 1

    # Quit when the input video file ends
    if not ret:
        break

    # Convert the image from BGR color (which OpenCV uses) to RGB
    # color (which face_recognition uses)
    rgb_frame = frame[:, :, ::-1]

    # Find all the faces and face encodings in the current frame of video
    face_locations = face_recognition.face_locations(rgb_frame)
    face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)

    face_names = []
    for face_encoding in face_encodings:
        # See if the face is a match for the known face(s)
        match = face_recognition.compare_faces(known_faces, face_encoding, tolerance=0.50)

        # With more than two known faces this matching logic could be
        # written more elegantly, but it is kept simple for the demo
        name = None
        if match[0]:
            name = "Lin-Manuel Miranda"
        elif match[1]:
            name = "Alex Lacamoire"

        face_names.append(name)

    # Label the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        if not name:
            continue

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with the name below the face
        cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 0.5, (255, 255, 255), 1)

    # Write the resulting image to the output video file
    print("Writing frame {} / {}".format(frame_number, length))
    output_movie.write(frame)

input_movie.release()
cv2.destroyAllWindows()
The code works as follows:
(1) First, the video file hamilton_clip.mp4 is prepared as the input file, and the output file name is set to output.avi.
(2) The sample image file lin-manuel-miranda.png, which contains a face photo, is prepared as reference material.
(3) The input video hamilton_clip.mp4 is processed, the face from lin-manuel-miranda.png is labeled in the video, and the detection result is saved to the output video file output.avi.
When run, the program examines every frame of the input video hamilton_clip.mp4 and labels the face from the image file lin-manuel-miranda.png. Open the resulting video file output.avi, as shown in Figure 5-11.
Figure 5-11 Execution result
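The 'XVID' string handed to cv2.VideoWriter_fourcc() is nothing mysterious: a four-character codec code packed into a 32-bit integer, one ASCII byte per character, low byte first. A plain-Python stand-in for that packing (a sketch, assuming OpenCV's documented FOURCC layout):

```python
def fourcc(code):
    # Pack a four-character codec code into a 32-bit integer,
    # one ASCII byte per character, little-endian
    assert len(code) == 4
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))

print(hex(fourcc('XVID')))  # → 0x44495658
```

Seeing the packing spelled out makes it clear why the argument must be exactly four characters and why codes are case-sensitive.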
5.3.5 A Web-Based Face Recognizer
Consider the example file web_service_example.py below, an online web application built on the Flask framework. Through the web page you can upload an image to the server; the program then checks whether the face in the uploaded image is Obama and reports the recognition result as JSON key-value pairs.
Example 5-14: Checking whether the face in an uploaded image is Obama
Source path: daima\5\5-11\web_service_example.py
import face_recognition
from flask import Flask, jsonify, request, redirect

# You can change this to any file extensions you want to accept
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}

app = Flask(__name__)

def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/', methods=['GET', 'POST'])
def upload_image():
    # Check whether a valid image file was uploaded
    if request.method == 'POST':
        if 'file' not in request.files:
            return redirect(request.url)
        file = request.files['file']
        if file.filename == '':
            return redirect(request.url)
        if file and allowed_file(file.filename):
            # The image was uploaded successfully; detect the faces in it
            return detect_faces_in_image(file)
    # No valid image was uploaded; show this HTML form instead
    return '''
    <!doctype html>
    <title>Is this a picture of Obama?</title>
    <h1>Upload a picture and see if it's a picture of Obama!</h1>
    <form method="POST" enctype="multipart/form-data">
      <input type="file" name="file">
      <input type="submit" value="Upload">
    </form>
    '''

def detect_faces_in_image(file_stream):
    # Obama's face encoding, pre-computed earlier with
    # face_recognition.face_encodings(img)
    known_face_encoding = [-0.09634063, 0.12095481, -0.00436332, -0.07643753, 0.0080383,
                           0.01902981, -0.07184699, -0.09383309, 0.18518871, -0.09588896,
                           0.23951106, 0.0986533, -0.22114635, -0.1363683, 0.04405268,
                           0.11574756, -0.19899382, -0.09597053, -0.11969153, -0.12277931,
                           0.03416885, -0.00267565, 0.09203379, 0.04713435, -0.12731361,
                           -0.35371891, -0.0503444, -0.17841317, -0.00310897, -0.09844551,
                           -0.06910533, -0.00503746, -0.18466514, -0.09851682, 0.02903969,
                           -0.02174894, 0.02261871, 0.0032102, 0.20312519, 0.02999607,
                           -0.11646006, 0.09432904, 0.02774341, 0.22102901, 0.26725179,
                           0.06896867, -0.00490024, -0.09441824, 0.11115381, -0.22592428,
                           0.06230862, 0.16559327, 0.06232892, 0.03458837, 0.09459756,
                           -0.18777156, 0.00654241, 0.08582542, -0.13578284, 0.0150229,
                           0.00670836, -0.08195844, -0.04346499, 0.03347827, 0.20310158,
                           0.09987706, -0.12370517, -0.06683611, 0.12704916, -0.02160804,
                           0.00984683, 0.00766284, -0.18980607, -0.19641446, -0.22800779,
                           0.09010898, 0.39178532, 0.18818057, -0.20875394, 0.03097027,
                           -0.21300618, 0.02532415, 0.07938635, 0.01000703, -0.07719778,
                           -0.12651891, -0.04318593, 0.06219772, 0.09163868, 0.05039065,
                           -0.04922386, 0.21839413, -0.02394437, 0.06173781, 0.0292527,
                           0.06160797, -0.15553983, -0.02440624, -0.17509389, -0.0630486,
                           0.01428208, -0.03637431, 0.03971229, 0.13983178, -0.23006812,
                           0.04999552, 0.0108454, -0.03970895, 0.02501768, 0.08157793,
                           -0.03224047, -0.04502571, 0.0556995, -0.24374914, 0.25514284,
                           0.24795187, 0.04060191, 0.17597422, 0.07966681, 0.01920104,
                           -0.01194376, -0.02300822, -0.17204897, -0.0596558, 0.05307484,
                           0.07417042, 0.07126575, 0.00209804]

    # Load the uploaded image file
    img = face_recognition.load_image_file(file_stream)

    # Get face encodings for any faces in the uploaded image
    unknown_face_encodings = face_recognition.face_encodings(img)

    face_found = False
    is_obama = False

    if len(unknown_face_encodings) > 0:
        face_found = True
        # See if the first face in the uploaded image matches Obama's
        match_results = face_recognition.compare_faces([known_face_encoding], unknown_face_encodings[0])
        if match_results[0]:
            is_obama = True

    # Return the recognition result as JSON key-value pairs
    result = {
        "face_found_in_image": face_found,
        "is_picture_of_obama": is_obama
    }
    return jsonify(result)

if __name__ == "__main__":
    app.run(debug=True)
Run the Flask program above, then open the URL http://127.0.0.1:5000/ in a browser, as shown in Figure 5-12.
Figure 5-12 Flask home page
Click the "Choose File" button to select a photo, then click the "Upload" button to upload it. face_recognition is then called to determine whether the person in the uploaded photo is Obama; for example, after uploading a photo the recognition result shown in Figure 5-13 is displayed.
Figure 5-13 Recognition result
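The allowed_file() helper in the program above deserves a closer look: it whitelists files by the text after the last dot, case-insensitively, so a compound suffix like .tar.gz is judged only by its final part. A few quick checks of the same logic in isolation:

```python
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}

def allowed_file(filename):
    # Take the text after the LAST dot, lower-case it, and check
    # it against the whitelist of allowed extensions
    return '.' in filename and \
           filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

print(allowed_file('selfie.JPG'))      # → True  (case-insensitive)
print(allowed_file('archive.tar.gz'))  # → False (only the last suffix counts)
print(allowed_file('no_extension'))    # → False (no dot at all)
```

Checking only the extension is a convenience filter, not a security measure; a production upload endpoint would also validate the file contents.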