1. Python
Ubuntu 16.04 Desktop ships with Python preinstalled.
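You can confirm the interpreter version with a quick check (the demo script later in this post targets Python 2):
$ python -c "import sys; print(sys.version)"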
2. Git
$ sudo apt-get install git
3. CMake build tool
$ sudo apt-get install cmake
4. Boost C++ libraries
$ sudo apt-get install libboost-dev
$ sudo apt-get install libboost-python-dev
5. Download the OpenFace source code
$ git clone https://github.com/cmusatyalab/openface.git
6. Install OpenCV
$ sudo apt-get install libopencv-dev
$ sudo apt-get install python-opencv
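A quick way to confirm the Python bindings are usable is to print the OpenCV version:
$ python -c "import cv2; print(cv2.__version__)"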
7. Install the pip package manager
$ sudo apt install python-pip
Upgrade pip; for whatever reason, installing it as above gives an old version, which may affect the steps below:
$ pip install --upgrade pip
8. Install the required Python libraries
$ cd openface
$ sudo pip install -r requirements.txt
$ sudo pip install dlib
$ sudo pip install matplotlib
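As a sanity check, make sure the key Python dependencies import cleanly (a minimal check; extend the module list to whatever requirements.txt pulled in on your system):
$ python -c "import numpy, dlib, matplotlib; print('Python dependencies OK')"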
9. Install LuaRocks, the Lua package manager, which provides a command-line way to manage Lua package dependencies and install third-party Lua packages
$ sudo apt-get install luarocks
10. Install Torch, a scientific computing framework with support for machine learning algorithms
$ git clone https://github.com/torch/distro.git ~/torch --recursive
$ cd ~/torch
$ bash install-deps
$ ./install.sh
Apply the environment variables that the Torch install script just set:
$ source ~/.bashrc
Only the CPU version is installed here; instructions for using CUDA can be added later if needed.
11. Install the required Lua libraries
$ luarocks install dpnn
The following are optional; some functions or methods may need them:
$ luarocks install image
$ luarocks install nn
$ luarocks install graphicsmagick
$ luarocks install torchx
$ luarocks install csvigo
12. Build and install the OpenFace code
$ python setup.py build
$ sudo python setup.py install
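To verify the installation, import the package and print where it was installed from:
$ python -c "import openface; print(openface.__file__)"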
13. Download the pre-trained models
$ sh models/get-models.sh
$ wget https://storage.cmusatyalab.org/openface-models/nn4.v1.t7 -O models/openface/nn4.v1.t7
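Before running the demos, you can check that the model files landed where the scripts expect them (run from the openface directory):
$ python -c "import os; print(os.listdir('models/dlib')); print(os.listdir('models/openface'))"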
Now let's try it out:
Find demos/compare.py under the demos directory.
The code is as follows:
#!/usr/bin/env python2
#
# Example to compare the faces in two images.
# Brandon Amos
# 2015/09/29
#
# Copyright 2015-2016 Carnegie Mellon University
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
start = time.time()
import argparse
import cv2
import itertools
import os
import numpy as np
np.set_printoptions(precision=2)
import openface
fileDir = os.path.dirname(os.path.realpath(__file__))
modelDir = os.path.join(fileDir, '..', 'models')
dlibModelDir = os.path.join(modelDir, 'dlib')
openfaceModelDir = os.path.join(modelDir, 'openface')
parser = argparse.ArgumentParser()
parser.add_argument('imgs', type=str, nargs='+', help="Input images.")
parser.add_argument('--dlibFacePredictor', type=str, help="Path to dlib's face predictor.",
                    default=os.path.join(dlibModelDir, "shape_predictor_68_face_landmarks.dat"))
parser.add_argument('--networkModel', type=str, help="Path to Torch network model.",
                    default=os.path.join(openfaceModelDir, 'nn4.small2.v1.t7'))
parser.add_argument('--imgDim', type=int,
                    help="Default image dimension.", default=96)
parser.add_argument('--verbose', action='store_true')
args = parser.parse_args()

if args.verbose:
    print("Argument parsing and loading libraries took {} seconds.".format(
        time.time() - start))

start = time.time()
align = openface.AlignDlib(args.dlibFacePredictor)
net = openface.TorchNeuralNet(args.networkModel, args.imgDim)
if args.verbose:
    print("Loading the dlib and OpenFace models took {} seconds.".format(
        time.time() - start))

def getRep(imgPath):
    if args.verbose:
        print("Processing {}.".format(imgPath))
    bgrImg = cv2.imread(imgPath)
    if bgrImg is None:
        raise Exception("Unable to load image: {}".format(imgPath))
    rgbImg = cv2.cvtColor(bgrImg, cv2.COLOR_BGR2RGB)
    if args.verbose:
        print(" + Original size: {}".format(rgbImg.shape))
    start = time.time()
    bb = align.getLargestFaceBoundingBox(rgbImg)
    if bb is None:
        raise Exception("Unable to find a face: {}".format(imgPath))
    if args.verbose:
        print(" + Face detection took {} seconds.".format(time.time() - start))
    start = time.time()
    alignedFace = align.align(args.imgDim, rgbImg, bb,
                              landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
    if alignedFace is None:
        raise Exception("Unable to align image: {}".format(imgPath))
    if args.verbose:
        print(" + Face alignment took {} seconds.".format(time.time() - start))
    start = time.time()
    rep = net.forward(alignedFace)
    if args.verbose:
        print(" + OpenFace forward pass took {} seconds.".format(time.time() - start))
        print("Representation:")
        print(rep)
        print("-----\n")
    return rep

for (img1, img2) in itertools.combinations(args.imgs, 2):
    d = getRep(img1) - getRep(img2)
    print("Comparing {} with {}.".format(img1, img2))
    print(
        " + Squared l2 distance between representations: {:0.3f}".format(np.dot(d, d)))
This program measures the distance between the faces in each pair of photos; the smaller the distance, the more similar the faces.
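The core of that comparison is just the squared L2 distance between the two 128-dimensional embeddings returned by getRep(). Here is a minimal sketch of how you might turn it into a same/different decision (THRESHOLD is a hypothetical cutoff, not a value from OpenFace; tune it on labelled pairs of your own images):

import numpy as np

THRESHOLD = 0.8  # hypothetical cutoff; tune on your own labelled image pairs

def squared_l2(rep1, rep2):
    # rep1, rep2: 128-dimensional numpy embeddings from getRep()
    d = rep1 - rep2
    return np.dot(d, d)

def same_person(rep1, rep2):
    # Smaller distances mean more similar faces.
    return squared_l2(rep1, rep2) < THRESHOLD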
Put three photos in the directory: 2.jpg, 3.jpg, and 4.jpg.
Run the following command:
python demos/compare.py {2.jpg,3.jpg,4.jpg}
The result is as follows:
Comparing 2.jpg with 3.jpg.
+ Squared l2 distance between representations: 0.274
Comparing 2.jpg with 4.jpg.
+ Squared l2 distance between representations: 0.363
Comparing 3.jpg with 4.jpg.
+ Squared l2 distance between representations: 0.154
You can see that 3.jpg and 4.jpg are the closest pair. Take a look and see whether they are in fact the same person!