1. Install Python
2. Install Git
sudo apt-get install git
3. Install the Boost C++ libraries
sudo apt-get install libboost-dev
sudo apt-get install libboost-python-dev
4. Download the OpenFace source code
git clone https://github.com/cmusatyalab/openface.git
5. Install OpenCV
sudo apt-get install libopencv-dev
sudo apt-get install python-opencv
6. Install the pip package manager
sudo apt install python-pip
If this reports an error, upgrade pip:
pip install --upgrade pip
If pip raises `ImportError: cannot import name main` after the upgrade:
Method 1:
sudo gedit /usr/bin/pip
Replace the original `from pip import main` with `from pip._internal import main`.
Method 2:
sudo gedit /usr/bin/pip
Replace the original

from pip import main
if __name__ == '__main__':
    sys.exit(main())

with:

from pip import __main__
if __name__ == '__main__':
    sys.exit(__main__._main())
Note: if the error still occurs, reinstall pip (invoking pip as `python -m pip ...` also bypasses the wrapper script), then install pyopenssl:
pip install pyopenssl
7. Install the required Python libraries
cd openface
sudo pip install -r requirements.txt
sudo pip install dlib
sudo pip install matplotlib
8. Install luarocks, the Lua package manager. It provides a command-line interface for managing Lua package dependencies and installing third-party Lua packages.
sudo apt-get install luarocks
9. Install Torch, a scientific computing framework with support for machine learning algorithms
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch
bash install-deps
./install.sh
10. Apply the environment variables that the Torch installer just added
source ~/.bashrc
11. Install the required Lua libraries
luarocks install dpnn
The following are optional; some functions or methods may need them:
luarocks install image
luarocks install nn
luarocks install graphicsmagick
luarocks install torchx
luarocks install csvigo
12. Build and install the OpenFace code
python setup.py build
sudo python setup.py install
13. Download the pretrained models
cd openface
./models/get-models.sh
If everything has gone smoothly up to this point, the installation is complete.
Demo
log.py
This module configures logging output to the terminal and is imported by the other script.
import logging
import sys
# Get a logger instance; an empty name returns the root logger
logger = logging.getLogger('Test')
# Specify the logger's output format
formatter = logging.Formatter('%(asctime)s %(levelname)-8s: %(message)s')
# File logging
# file_handler = logging.FileHandler("test.log")
# file_handler.setFormatter(formatter)  # the output format can be set with setFormatter
# Console logging
console_handler = logging.StreamHandler(sys.stdout)
console_handler.formatter = formatter  # the formatter attribute can also be assigned directly
# Attach the handlers to the logger
# logger.addHandler(file_handler)
logger.addHandler(console_handler)
# Set the minimum output level; the default is WARNING
logger.setLevel(logging.INFO)
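To see what this configuration produces, here is a minimal self-contained sketch of the same pattern; it writes into a string buffer instead of stdout purely so the result can be inspected, and the logger name `Demo` is made up.

```python
import io
import logging

# Same pattern as log.py, but writing into a StringIO for inspection
buf = io.StringIO()
demo_logger = logging.getLogger('Demo')
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)-8s: %(message)s'))
demo_logger.addHandler(handler)
demo_logger.setLevel(logging.INFO)

demo_logger.debug("hidden: below the INFO threshold")  # filtered out
demo_logger.info("model loaded")                       # emitted

print(buf.getvalue().strip())
# A line like: 2018-01-01 12:00:00,000 INFO    : model loaded
```

Note that the DEBUG message is dropped because the level was set to INFO, which is exactly why face_compare.py below gates its timing output behind `--verbose` rather than a log level.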
face_compare.py
This script performs the actual face matching.
import time
start = time.time()
import cv2
import itertools
import os
import numpy as np
import openface
import argparse
from log import logger
dlib_model_dir = '/home/xhb/文档/Packages/openface/models/dlib'
openface_model_dir = '/home/xhb/文档/Packages/openface/models/openface'
parser = argparse.ArgumentParser()
parser.add_argument('imgs', type=str, nargs='+', help='Input images.')
parser.add_argument('--dlibFacePredictor', type=str, help="Path to dlib's face predictor.", default=os.path.join(dlib_model_dir, "shape_predictor_68_face_landmarks.dat"))
parser.add_argument('--networkModel', type=str, help="Path to Torch network model.", default=os.path.join(openface_model_dir, 'nn4.small2.v1.t7'))
parser.add_argument('--imgDim', type=int, help="Default image dimension.", default=96)
parser.add_argument('--verbose', action='store_true')
args = parser.parse_args()
if args.verbose:
    logger.info("Argument parsing and loading libraries took {} seconds.".format(time.time() - start))
start = time.time()
align = openface.AlignDlib(args.dlibFacePredictor)
net = openface.TorchNeuralNet(args.networkModel, args.imgDim)
if args.verbose:
    logger.info("Loading the dlib and OpenFace models took {} seconds.".format(
        time.time() - start))

def getRep(imgPath):
    if args.verbose:
        logger.info("Processing {}.".format(imgPath))
    bgrImg = cv2.imread(imgPath)
    if bgrImg is None:
        raise Exception("Unable to load image: {}".format(imgPath))
    rgbImg = cv2.cvtColor(bgrImg, cv2.COLOR_BGR2RGB)
    if args.verbose:
        logger.info("Original size: {}".format(rgbImg.shape))
    start = time.time()
    faceBoundingBox = align.getLargestFaceBoundingBox(rgbImg)
    if faceBoundingBox is None:
        raise Exception("Unable to find a face: {}".format(imgPath))
    if args.verbose:
        logger.info("Face detection took {} seconds.".format(time.time() - start))
    start = time.time()
    alignedFace = align.align(args.imgDim, rgbImg, faceBoundingBox, landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
    if alignedFace is None:
        raise Exception("Unable to align image: {}".format(imgPath))
    if args.verbose:
        logger.info("Face alignment took {} seconds.".format(time.time() - start))
    start = time.time()
    rep = net.forward(alignedFace)
    if args.verbose:
        logger.info("OpenFace forward pass took {} seconds.".format(time.time() - start))
        logger.info("Representation:")
        logger.info(rep)
    return rep

for (img1, img2) in itertools.combinations(args.imgs, 2):
    distance = getRep(img1) - getRep(img2)
    logger.info("Comparing {} with {}.".format(img1, img2))
    logger.info("Squared l2 distance between representations: {:0.3f}".format(np.dot(distance, distance)))
Note: modify the following paths in the script to match your own installation:
dlib_model_dir = '/home/xxx/openface/models/dlib'
openface_model_dir = '/home/xxx/openface/models/openface'
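The final loop compares every pair of images by the squared L2 distance between their embeddings. The pairing and distance logic can be illustrated with plain Python; the 3-dimensional vectors here are made-up stand-ins, since real OpenFace representations are 128-dimensional.

```python
import itertools

def squared_l2(a, b):
    # Equivalent to np.dot(d, d) where d = a - b
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical embeddings standing in for getRep() results
embeddings = {
    "1.jpg": [0.1, 0.2, 0.3],
    "2.jpg": [0.1, 0.2, 0.4],
    "3.jpg": [0.9, 0.8, 0.7],
}

# itertools.combinations yields each unordered pair exactly once,
# so n images produce n*(n-1)/2 comparisons
for img1, img2 in itertools.combinations(sorted(embeddings), 2):
    d = squared_l2(embeddings[img1], embeddings[img2])
    print("{} vs {}: {:0.3f}".format(img1, img2, d))
```

This is why passing a whole folder to the script compares every image against every other image exactly once.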
14. Run
Compare two images:
python ./demos/compare.py test_images/3.jpg test_images/4.jpg
Compare all images in a folder:
python ./face_compare.py test_images/*
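Interpreting the output: a lower squared L2 distance means the two faces are more similar. A minimal sketch of turning the raw distance into a same/different decision; the 0.99 cutoff is only a commonly used starting point for this kind of model, not a value from this document, and should be calibrated on your own data.

```python
def same_person(distance, threshold=0.99):
    """Classify a squared L2 distance between two face embeddings.

    The threshold is a tunable assumption, not a universal constant:
    distances well below it usually indicate the same person,
    distances well above it usually indicate different people.
    """
    return distance < threshold

print(same_person(0.35))  # small distance: likely the same person
print(same_person(1.72))  # large distance: likely different people
```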