Hands-on: face recognition to lock and unlock the Windows 10 screen, built with OpenCV and TensorFlow 2.0

Preface

This is a low-budget take on Windows Hello: there is no Hello-style 3D depth camera, so the program recognises the machine's owner from ordinary camera images. In operation, once the screen is unlocked it keeps checking whether the person in front of the camera is the owner; if it is someone else, or nobody is detected for 10 seconds, it locks the screen; if it is the owner, the machine can be used as normal.
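
In outline, the decision loop looks like this (a minimal sketch; is_owner is a hypothetical placeholder for the CNN check that face_4.py implements below, and LockWorkStation is the Windows API call used there):

import time
import ctypes
from cv2 import cv2

def lock_screen():
    # Same Windows API call used in face_4.py
    ctypes.windll.user32.LockWorkStation()

def watch(is_owner, timeout_s=10):
    camera = cv2.VideoCapture(0)
    start = time.perf_counter()
    try:
        while True:
            ok, frame = camera.read()
            if ok and is_owner(frame):
                start = time.perf_counter()   # owner seen: reset the 10-second countdown
            elif time.perf_counter() - start > timeout_s:
                lock_screen()                 # someone else, or nobody, for 10 seconds
                break
    finally:
        camera.release()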

The project consists of four parts:

- face_1.py: builds the training data for your own face
- face_2.py: trains a deep-learning model on the data produced by face_1.py and face_3.py, and saves the model
- face_3.py: builds training data from other, known faces
- face_4.py: the final detection program

Python runtime environment

The project runs mainly under tensorflow2.0-gpu.
A small gripe here: parts of the tensorflow 2.0 keras module give no code-completion hints, which is not very friendly to newcomers.
The full conda list:

Name  Version  Build  Channel
_tflow_select  2.1.0  gpu
absl-py  0.8.1  py37_0
altgraph  0.17  pypi_0  pypi
astor  0.8.0  py37_0
astroid  2.3.3  py37_0
backcall  0.1.0  py37_0
blas  1.0  mkl
ca-certificates  2019.11.27  0
certifi  2019.11.28  py37_0
colorama  0.4.3  py_0
cudatoolkit  10.0.130  0
cudnn  7.6.5  cuda10.0_0
cycler  0.10.0  pypi_0  pypi
decorator  4.4.1  py_0
future  0.18.2  pypi_0  pypi
gast  0.2.2  py37_0
google-pasta  0.1.8  py_0
grpcio  1.16.1  py37h351948d_1
h5py  2.9.0  py37h5e291fa_0
hdf5  1.10.4  h7ebc959_0
icc_rt  2019.0.0  h0cc432a_1
intel-openmp  2019.4  245
ipykernel  5.1.3  py37h39e3cac_1
ipython  7.11.1  py37h39e3cac_0
ipython_genutils  0.2.0  py37_0
isort  4.3.21  py37_0
jedi  0.15.2  py37_0
joblib  0.14.1  py_0
jupyter_client  5.3.4  py37_0
jupyter_core  4.6.1  py37_0
keras  2.3.1  pypi_0  pypi
keras-applications  1.0.8  py_0
keras-preprocessing  1.1.0  py_1
kiwisolver  1.1.0  pypi_0  pypi
lazy-object-proxy  1.4.3  py37he774522_0
libprotobuf  3.11.2  h7bd577a_0
libsodium  1.0.16  h9d3ae62_0
markdown  3.1.1  py37_0
matplotlib  3.1.2  pypi_0  pypi
mccabe  0.6.1  py37_1
mkl  2019.4  245
mkl-service  2.3.0  py37hb782905_0
mkl_fft  1.0.15  py37h14836fe_0
mkl_random  1.1.0  py37h675688f_0
mouseinfo  0.1.2  pypi_0  pypi
numpy  1.17.4  py37h4320e6b_0
numpy-base  1.17.4  py37hc3f5095_0
opencv-python  4.1.2.30  pypi_0  pypi
openssl  1.1.1d  he774522_3
opt_einsum  3.1.0  py_0
pandas  0.25.3  pypi_0
parso  0.5.2  py_0
pefile  2019.4.18  pypi_0
pickleshare  0.7.5  py37_0
pillow  7.0.0  pypi_0
pip  19.3.1  py37_0
prompt_toolkit  3.0.2  py_0
protobuf  3.11.2  py37h33f27b4_0
pyautogui  0.9.48  pypi_0  pypi
pygetwindow  0.0.8  pypi_0  pypi
pygments  2.5.2  py_0
pyinstaller  3.6  pypi_0  pypi
pylint  2.4.4  py37_0
pymsgbox  1.0.7  pypi_0  pypi
pyparsing  2.4.6  pypi_0  pypi
pyperclip  1.7.0  pypi_0  pypi
pyreadline  2.1  py37_1
pyrect  0.1.4  pypi_0  pypi
pyscreeze  0.1.26  pypi_0  pypi
python  3.7.6  h60c2a47_2
python-dateutil  2.8.1  py_0
pytweening  1.0.3  pypi_0  pypi
pytz  2019.3  pypi_0  pypi
pywin32  227  py37he774522_1
pywin32-ctypes  0.2.0  pypi_0  pypi
pyyaml  5.3  pypi_0  pypi
pyzmq  18.1.0  py37ha925a31_0
scikit-learn  0.22.1  py37h6288b17_0
scipy  1.3.2  py37h29ff71c_0
setuptools  44.0.0  py37_0
six  1.13.0  py37_0
sqlite  3.30.1  he774522_0
tensorboard  2.0.0  pyhb38c66f_1
tensorflow  2.0.0  gpu_py37h57d29ca_0
tensorflow-base  2.0.0  gpu_py37h390e234_0
tensorflow-estimator  2.0.0  pyh2649769_0
tensorflow-gpu  2.0.0  h0d30ee6_0
termcolor  1.1.0  py37_1
tornado  6.0.3  py37he774522_0
traitlets  4.3.3  py37_0
vc  14.1  h0510ff6_4
vs2015_runtime  14.16.27012  hf0eaf9b_1
wcwidth  0.1.7  py37_0
werkzeug  0.16.0  py_0
wheel  0.33.6  py37_0
wincertstore  0.2  py37_0
wrapt  1.11.2  py37he774522_0
zeromq  4.3.1  h33f27b4_3
zlib  1.2.11  h62dcd97_3

First, build your own training data:

The face images are saved to my_faces (you can name the folder yourself).

face_1.py


# Capture images of your own face for training

from cv2 import cv2
import os
import sys
import random

out_dir = './my_faces'
if not os.path.exists(out_dir):
    os.makedirs(out_dir)


# Adjust brightness and contrast (pixel by pixel)
def relight(img, alpha=1, bias=0):
    w = img.shape[1]
    h = img.shape[0]
    #image = []
    for i in range(0,w):
        for j in range(0,h):
            for c in range(3):
                tmp = int(img[j,i,c]*alpha + bias)
                if tmp > 255:
                    tmp = 255
                elif tmp < 0:
                    tmp = 0
                img[j,i,c] = tmp
    return img


# Load the Haar cascade face detector
haar = cv2.CascadeClassifier(r'E:\ProgramData\Anaconda3\envs\tenserflow02\Lib\site-packages\cv2\data\haarcascade_frontalface_default.xml')

# Open the camera; the argument is the input stream (a camera index or a video file)
camera = cv2.VideoCapture(0)

n = 1
while 1:
    if (n <= 5000):
        print("It's processing image %s." % n)
        # Read one frame
        success, img = camera.read()
        gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = haar.detectMultiScale(gray_img, 1.3, 5)
        for f_x, f_y, f_w, f_h in faces:
            face = img[f_y:f_y+f_h, f_x:f_x+f_w]
            face = cv2.resize(face, (64,64))
            
            face = relight(face, random.uniform(0.5, 1.5), random.randint(-50, 50))
            cv2.imshow('img', face)
            cv2.imwrite(out_dir+'/'+str(n)+'.jpg', face)
            n+=1
        key = cv2.waitKey(30) & 0xff
        if key == 27:
            break
    else:
        break

# Release the camera and close the preview window when capture is finished
camera.release()
cv2.destroyAllWindows()
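
The Haar cascade path above is hard-coded to one particular Anaconda installation. With the opencv-python 4.x wheel from the conda list, the bundled cascade file can also be located through cv2.data, an optional tweak sketched below:

from cv2 import cv2

# Locate the cascade file shipped with the opencv-python wheel instead of hard-coding a path
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
haar = cv2.CascadeClassifier(cascade_path)

The pixel-by-pixel loops in relight() are also quite slow; a vectorised equivalent (a sketch with the same clamp-to-[0, 255] behaviour, assuming numpy is available) would be:

import numpy as np

def relight_fast(img, alpha=1, bias=0):
    # Scale and shift every channel at once, then clamp to the valid 0-255 range
    return np.clip(img.astype(np.float32) * alpha + bias, 0, 255).astype(np.uint8)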


Building training data from other people's faces:

You need a collection of other people's face images; any faces that are not your own will do, and such sets are easy to find online. Here is the one I used:
Website: http://vis-www.cs.umass.edu/lfw/
Download: http://vis-www.cs.umass.edu/lfw/lfw.tgz
First extract the downloaded image set into an lfw directory under the project folder, or choose your own location and change the input_dir variable in the code.
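
If you prefer to script the download and extraction, a minimal sketch (it only assumes a working network connection; the URL is the one given above, and the archive unpacks into a top-level lfw/ folder):

import os
import tarfile
import urllib.request

LFW_URL = 'http://vis-www.cs.umass.edu/lfw/lfw.tgz'

# Download the archive once and unpack it next to the project (creates ./lfw)
if not os.path.exists('./lfw'):
    urllib.request.urlretrieve(LFW_URL, 'lfw.tgz')
    with tarfile.open('lfw.tgz') as tar:
        tar.extractall('.')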

face_3.py

# -*- coding: utf-8 -*-
import sys
import os
from cv2 import cv2

input_dir = './lfw'
output_dir = './other_faces'
size = 64

if not os.path.exists(output_dir):
    os.makedirs(output_dir)

def close_cv2():
    """删除cv窗口"""
    while(1):
        if(cv2.waitKey(100)==27):
            break
    cv2.destroyAllWindows()
# Load the Haar cascade face detector
haar = cv2.CascadeClassifier(r'E:\ProgramData\Anaconda3\envs\tenserflow02\Lib\site-packages\cv2\data\haarcascade_frontalface_default.xml')

index = 1
for (path, dirnames, filenames) in os.walk(input_dir):
    for filename in filenames:
        if filename.endswith('.jpg'):
            print('Processing picture %s' % index)
            img_path = path+'/'+filename
            # Read the image from file
            print(img_path)
            img = cv2.imread(img_path)
            # cv2.imshow(" ",img)
            # close_cv2()
            # Convert to grayscale

            gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = haar.detectMultiScale(gray_img, 1.3, 5)
            for f_x, f_y, f_w, f_h in faces:
                face = img[f_y:f_y+f_h, f_x:f_x+f_w]
                face = cv2.resize(face, (64,64))
  
                # face = relight(face, random.uniform(0.5, 1.5), random.randint(-50, 50))
                cv2.imshow('img', face)
                cv2.imwrite(output_dir+'/'+str(index)+'.jpg', face)
                index+=1
            key = cv2.waitKey(30) & 0xff
            if key == 27:
                sys.exit(0)

Next, train on the data

Read the training data from the my_faces and other_faces folders created above and train the model.

face_2.py

# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function

import tensorflow as tf
from cv2 import cv2
import numpy as np
import os
import random
import sys
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
# from keras import backend as K

def getPaddingSize(img):
    h, w, _ = img.shape
    top, bottom, left, right = (0,0,0,0)
    longest = max(h, w)

    if w < longest:
        tmp = longest - w
        # // is integer division
        left = tmp // 2
        right = tmp - left
    elif h < longest:
        tmp = longest - h
        top = tmp // 2
        bottom = tmp - top
    else:
        pass
    return top, bottom, left, right

def readData(path, h,w,imgs,labs):
    for filename in os.listdir(path):
        if filename.endswith('.jpg'):
            filename = path + '/' + filename

            img = cv2.imread(filename)
            # cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            top,bottom,left,right = getPaddingSize(img)
            # Pad the edges so the image becomes square before resizing
            img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=[0,0,0])
            img = cv2.resize(img, (h, w))

            imgs.append(img)
            labs.append(path)
    return imgs,labs




def get_model():
    model = tf.keras.Sequential()
    # First convolutional layer: 128 filters, 3x3 kernels, ReLU activation
    model.add(tf.keras.layers.Conv2D(128, kernel_size=3, activation='relu', input_shape=(64, 64, 3)))
    # Second convolutional layer
    model.add(tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu'))
    # Flatten the feature maps to 1-D (essentially a reshape) so the Dense layer can use them
    model.add(tf.keras.layers.Flatten())
    # Fully connected output layer that does the classification
    # (40 classes is more than the two labels actually used, but it still works with sparse_categorical_crossentropy)
    model.add(tf.keras.layers.Dense(40, activation='softmax'))
    model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
    return model 

def facemain():
    my_faces_path = './my_faces'
    other_faces_path = './other_faces'
    size = 64

    imgs = []
    labs = []
    imgs,labs=readData(my_faces_path,size,size,imgs,labs)
    imgs,labs=readData(other_faces_path,size,size,imgs,labs)


    # Convert the image data and labels to numpy arrays
    imgs = np.array(imgs)
    # labs = np.array([[0,1] if lab == my_faces_path else [1,0] for lab in labs])
    labs = np.array([[1] if lab == my_faces_path else [0] for lab in labs])
    print(imgs.shape)
    print(labs.shape)
    # Randomly split into training and test sets
    train_x,test_x,train_y,test_y = train_test_split(imgs, labs, test_size=0.8, random_state=random.randint(0,100))

    # Reshape to (number of images, height, width, channels)
    train_x = train_x.reshape(train_x.shape[0], size, size, 3)
    test_x = test_x.reshape(test_x.shape[0], size, size, 3)

    # Scale the pixel values into the [0, 1] range
    train_x = train_x.astype('float32')/255.0
    test_x = test_x.astype('float32')/255.0

    print('train size:%s, test size:%s' % (len(train_x), len(test_x)))
    # Batch size: 100 images at a time (defined here but not passed to model.fit, which uses its default)
    batch_size = 100
    num_batch = len(train_x) // batch_size


    model=get_model()
    model.fit(train_x, train_y, epochs=5)
    model.save(r'C:\Users\Administrator\Desktop\my_model.h5')


facemain()
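
Note that the test split built inside facemain() is never evaluated. As an optional sanity check, the saved model can be reloaded and scored on a folder of faces; a minimal standalone sketch, assuming the paths and the 64x64 preprocessing used above:

import os
import numpy as np
import tensorflow as tf
from cv2 import cv2

# Reload the trained model and score it on the images in my_faces (label 1 = own face)
model = tf.keras.models.load_model(r'C:\Users\Administrator\Desktop\my_model.h5')

imgs = []
for name in os.listdir('./my_faces'):
    if name.endswith('.jpg'):
        img = cv2.imread('./my_faces/' + name)
        imgs.append(cv2.resize(img, (64, 64)))

x = np.array(imgs).astype('float32') / 255.0
pred = np.argmax(model.predict(x), axis=1)   # predicted class per image
print('fraction recognised as the owner:', np.mean(pred == 1))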

Finally, run the prediction to decide whether the person in front of the camera is the owner, and lock the screen if not.

face_4.py

# Recognise yourself
from __future__ import absolute_import, division, print_function
import tensorflow as tf

from cv2 import cv2
import os
import sys
import random
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import cohen_kappa_score
from ctypes import *
import time
import sys


def getPaddingSize(img):
    h, w, _ = img.shape
    top, bottom, left, right = (0,0,0,0)
    longest = max(h, w)

    if w < longest:
        tmp = longest - w
        # // is integer division
        left = tmp // 2
        right = tmp - left
    elif h < longest:
        tmp = longest - h
        top = tmp // 2
        bottom = tmp - top
    else:
        pass
    return top, bottom, left, right

def readData(path, h,w,imgs,labs):
    for filename in os.listdir(path):
        if filename.endswith('.jpg'):
            filename = path + '/' + filename

            img = cv2.imread(filename)
            # cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            top,bottom,left,right = getPaddingSize(img)
            # Pad the edges so the image becomes square before resizing
            img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=[0,0,0])
            img = cv2.resize(img, (h, w))

            imgs.append(img)
            labs.append(path)
    return imgs,labs
# Adjust brightness and contrast (pixel by pixel)
def relight(img, alpha=1, bias=0):
    w = img.shape[1]
    h = img.shape[0]
    #image = []
    for i in range(0,w):
        for j in range(0,h):
            for c in range(3):
                tmp = int(img[j,i,c]*alpha + bias)
                if tmp > 255:
                    tmp = 255
                elif tmp < 0:
                    tmp = 0
                img[j,i,c] = tmp
    return img

out_dir = './temp_faces'
if not os.path.exists(out_dir):
    os.makedirs(out_dir)

# Load the Haar cascade face detector
haar = cv2.CascadeClassifier(r'E:\ProgramData\Anaconda3\envs\tenserflow02\Lib\site-packages\cv2\data\haarcascade_frontalface_default.xml')

# Open the camera; the argument is the input stream (a camera index or a video file)
camera = cv2.VideoCapture(0)
n = 1

start = time.perf_counter()  # perf_counter() replaces the deprecated time.clock()
while 1:
    if (n <= 20):
        print("It's processing image %s." % n)
        # Read one frame
        success, img = camera.read()
        gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = haar.detectMultiScale(gray_img, 1.3, 5)
        for f_x, f_y, f_w, f_h in faces:
            face = img[f_y:f_y+f_h, f_x:f_x+f_w]
            face = cv2.resize(face, (64,64))
            # face = relight(face, random.uniform(0.5, 1.5), random.randint(-50, 50))
            cv2.imshow('img', face)
            cv2.imwrite(out_dir+'/'+str(n)+'.jpg', face)
            n+=1
        key = cv2.waitKey(30) & 0xff
        if key == 27:
            break
        end = time.perf_counter()
        print(str(end-start))
        # No owner face collected within 10 seconds: lock the workstation and quit
        if (end-start)>10:
            user32 = windll.LoadLibrary('user32.dll')
            user32.LockWorkStation()
            sys.exit()
    else:
        break

# Release the camera and close the preview window before running the prediction
camera.release()
cv2.destroyAllWindows()


my_faces_path = out_dir
size = 64

imgs = []
labs = []
imgs,labs=readData(my_faces_path,size,size,imgs,labs)
# Convert the image data and labels to numpy arrays
imgs = np.array(imgs)
# labs = np.array([[0,1] if lab == my_faces_path else [1,0] for lab in labs])
labs = np.array([[1] if lab == my_faces_path else [0] for lab in labs])
# Randomly split into training and test sets
train_x,test_x,train_y,test_y = train_test_split(imgs, labs, test_size=0.9, random_state=random.randint(0,100))

# Reshape to (number of images, height, width, channels)
train_x = train_x.reshape(train_x.shape[0], size, size, 3)
test_x = test_x.reshape(test_x.shape[0], size, size, 3)

# Scale the pixel values into the [0, 1] range
train_x = train_x.astype('float32')/255.0
test_x = test_x.astype('float32')/255.0

restored_model = tf.keras.models.load_model(r'C:\Users\Administrator\Desktop\my_model.h5')
pre_result=restored_model.predict_classes(test_x)
print(pre_result.shape)
print(pre_result)
acc=sum(pre_result==1)/pre_result.shape[0]
print("相似度: "+str(acc))

if acc > 0.8:
    print("你是***")
else:
    user32 = windll.LoadLibrary('user32.dll')
    user32.LockWorkStation()

The last step: add a task in the Windows Task Scheduler library so that face_4.py runs whenever the workstation is unlocked.

The myface.bat file

It activates the Anaconda environment and cd's to the directory containing face_4.py:

call activate tensorflow02
cd /d E:\ziliao\LearningPy\face
python face_4.py

The hide.vbs file hides the cmd window while the program runs:

Set ws = CreateObject("Wscript.Shell") 
ws.run "cmd /c E:\ziliao\LearningPy\face\myface.bat",vbhide

In the Task Scheduler library, create a task:

On the General tab, enable "Run with highest privileges" and configure it for the corresponding system (Windows 10).
On the Triggers tab, add the trigger "On workstation unlock".
On the Actions tab, add an action that runs hide.vbs.

References:

Welcome to my GitHub and my CSDN blog; more information about the project will be available there!

https://github.com/Yiqingde
