Raspberry Pi Project (1): Face Recognition in Python

http://www.eeworld.com.cn/afdz/article_2018030511619.html

 

Simple face recognition on the Raspberry Pi:

http://shumeipai.nxez.com/2017/03/16/raspberry-pi-face-recognition-system.html

The main thing to learn from that article is its acceleration strategy: multiple processes plus frame skipping, which pushes the frame rate to around 28 fps.
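The article's multi-process implementation is not reproduced here; the sketch below is my own minimal example of just the frame-skipping half of the idea: the (expensive) Haar detection runs only on every Nth frame, and the intermediate frames reuse the last bounding boxes.

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Minimal frame-skipping sketch (an illustration, not the original article's code)
import cv2

DETECT_EVERY = 5  # run Haar detection only on every 5th frame

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

frame_idx = 0
last_faces = []   # bounding boxes from the most recent detection

while True:
    ret, img = cap.read()
    if not ret:
        break
    if frame_idx % DETECT_EVERY == 0:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        last_faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in last_faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow('img', img)
    frame_idx += 1
    if cv2.waitKey(1) & 0xff == 27:   # ESC to quit
        break

cap.release()
cv2.destroyAllWindows()

Combining this with multiple processes (one grabbing frames, one detecting) is what gets the frame rate up to the reported ~28 fps.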

Results

1 Face detection (any face, no identification)

1-1 Detection dependency files

The Haar cascade XML files are found under the data directory of the OpenCV installation.

1-2 Detection-only code (detect but do not identify)

It is best to load these cascade configuration files with absolute paths.
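A short sketch of loading the cascade with an absolute path. If OpenCV was installed via pip (opencv-python), cv2.data.haarcascades points at the bundled data directory; otherwise substitute the actual path of your OpenCV data folder (the commented path below is only an example).

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import cv2

# pip installs of opencv-python ship the cascades and expose their directory:
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
# alternatively, an explicit absolute path (example only, adjust to your system):
# cascade_path = '/home/pi/opencv/data/haarcascades/haarcascade_frontalface_default.xml'

face_cascade = cv2.CascadeClassifier(cascade_path)
if face_cascade.empty():
    raise IOError('cascade file not found: ' + cascade_path)
print('cascade loaded from', cascade_path)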

Create a .py file:

 

Face detection demo:

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import numpy as np
import cv2

# multiple cascades: https://github.com/Itseez/opencv/tree/master/data/haarcascades

#https://github.com/Itseez/opencv/blob/master/data/haarcascades/haarcascade_frontalface_default.xml
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
#https://github.com/Itseez/opencv/blob/master/data/haarcascades/haarcascade_eye.xml

eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

cap = cv2.VideoCapture(0)

while 1:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    for (x,y,w,h) in faces:
        cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
        
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex,ey,ew,eh) in eyes:
            cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
                                                                        
    cv2.imshow('img',img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()

 

2 Face recognition (detection + identification)

 

 

Results

1 Collect face samples

 

Enter a numeric ID for the user; the name is looked up from this ID later (see the sketch after these notes).

Each person's ID must be unique.

 

Wait until the face is detected and the blue box appears, then press 's' to save a sample. Change the shooting angle between captures; around 30 images are usually stored.

Press ESC to exit when finished.
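A minimal sketch of how the numeric ID flows through the three scripts below: it is embedded in the sample filenames, recovered during training, and finally mapped back to a name through a list index (the names here are placeholders).

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import os

face_id = 1
count = 7
# the capture script saves samples as dataset/User.<id>.<count>.jpg
sample_path = "dataset/User.{}.{}.jpg".format(face_id, count)

# the training script recovers the ID from the filename
recovered_id = int(os.path.split(sample_path)[-1].split(".")[1])

# the recognition script maps the ID back to a name via a list index
names = ['None', 'dongdong', 'Paula']
print(recovered_id, names[recovered_id])   # -> 1 dongdong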

01_face_dataset.py

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import cv2
import os

# make sure the output folder for the face samples exists
os.makedirs('dataset', exist_ok=True)

font = cv2.FONT_HERSHEY_SIMPLEX
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# For each person, enter one numeric face id
face_id = input('\n Enter a numeric user ID (digits only):   ')
print("\n [INFO] Initializing face capture. Look the camera and wait ...")
# Initialize the individual sampling face count
count = 0
while(True):
    ret, img = cam.read()
    cv2.putText(img, str(count), (5,80), font, 1, (255,255,255), 2)
   # cv2.putText(img, str("change face!"), (5,40), font, 1, (255,255,255), 2)
 
   # img = cv2.flip(img, -1) # flip video image vertically
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    for (x,y,w,h) in faces:
        cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)   #
        cv2.putText(img, str("Perss s to save face!"), (5,40), font, 1, (255,255,255), 2)
       # cv2.imshow('image', img)
        k = cv2.waitKey(10) & 0xff # press 's' to save this face, ESC to skip
        if k == ord('s'):
            count += 1
            cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h,x:x+w])
            break
        elif k == 27:
            break
    cv2.imshow('image', img)    
    k = cv2.waitKey(10) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break
    elif count >= 30: # Take 30 face sample and stop video
         break
    
# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()

  

 

2 Train the model

This script automatically reads the face samples saved in the previous step, trains the recognizer, and writes the trained model to trainer/trainer.yml.

 

 

 

Training output:

 

02_face_training.py

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import cv2
import numpy as np
from PIL import Image
import os
# Path for face image database
path = 'dataset'
# LBPH recognizer; the cv2.face module requires the opencv-contrib-python package
recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
# function to get the images and label data
def getImagesAndLabels(path):
    imagePaths = [os.path.join(path,f) for f in os.listdir(path)]     
    faceSamples=[]
    ids = []
    for imagePath in imagePaths:
        PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale
        img_numpy = np.array(PIL_img,'uint8')
        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)
        for (x,y,w,h) in faces:
            faceSamples.append(img_numpy[y:y+h,x:x+w])
            ids.append(id)
    return faceSamples,ids
print ("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces,ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))
# Save the model into trainer/trainer.yml (create the folder first if needed)
os.makedirs('trainer', exist_ok=True)
recognizer.write('trainer/trainer.yml') # recognizer.save() worked on Mac, but not on Pi
# Print the number of faces trained and end program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))

 

Training on the Raspberry Pi itself is not recommended because it is very slow; train on a PC instead and copy the generated model file (trainer.yml) to the Pi.
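After copying trainer/trainer.yml from the PC to the Pi, a quick sanity check (my own snippet, assuming the same trainer/trainer.yml path as above) confirms the model loads before running the recognition script.

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import cv2

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')   # fails with cv2.error if the file is missing or corrupt
print('trainer.yml loaded OK')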

3 Run recognition

3-1 Recognition first returns the numeric ID, then looks up the corresponding name in the names array.

3-2 The capture source can be switched between the local camera and a network camera (the network camera currently only delivers images at 700×600 resolution); see the sketch below.
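A hedged sketch (my own, not from the original post) of switching the capture source; the snapshot URL is the same one defined in 03_face_recognition.py below, where its use is left commented out.

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import cv2

USE_NETWORK_CAMERA = False
url = 'http://192.168.1.82/webcapture.jpg?command=snap&channel=1'

if USE_NETWORK_CAMERA:
    # the snapshot endpoint returns one JPEG per request; re-create the capture
    # each loop iteration if subsequent reads stop returning frames
    cam = cv2.VideoCapture(url)
else:
    cam = cv2.VideoCapture(0)
    cam.set(3, 640)   # width
    cam.set(4, 480)   # height

ret, img = cam.read()
print('got a frame:', ret)
cam.release()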

Create the file:

 

Load the training result:

 

03_face_recognition.py

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import cv2
import numpy as np
import os 
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
font = cv2.FONT_HERSHEY_SIMPLEX
# initiate the id counter
id = 0
# names related to ids: example ==> Marcelo: id=1,  etc
names = ['None', 'dongdong', 'Paula', 'Ilza', 'Z', 'W'] 
# Initialize and start realtime video capture
# local USB camera by default; 'url' is the network camera snapshot source (see the commented-out line inside the loop)
url = 'http://192.168.1.82/webcapture.jpg?command=snap&channel=1'
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height
# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)
while True:
    ret, img =cam.read()
    #img = cv2.flip(img, -1) # Flip vertically
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale( 
        gray,
        scaleFactor = 1.2,
        minNeighbors = 5,
        minSize = (int(minW), int(minH)),
       )
    for(x,y,w,h) in faces:
        cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)
        id, confidence = recognizer.predict(gray[y:y+h,x:x+w])
        # Check if confidence is less than 100 ==> "0" is perfect match
        if (confidence < 100):
            id = names[id]
            confidence = "  {0}%".format(round(100 - confidence))
        else:
            id = "unknown"
            confidence = "  {0}%".format(round(100 - confidence))

        cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)
        cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1)  

    cv2.imshow('camera',img)
    # cam = cv2.VideoCapture(url)   # re-open the snapshot URL per frame when using the network camera
    k = cv2.waitKey(10) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break
# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()

  

Future work

1 Add a graphical user interface

2 Use a web interface for interaction and file transfer.

 

Reposted from: https://www.cnblogs.com/kekeoutlook/p/11067605.html
