Note: this article targets Windows, with PyTorch running on the CPU.
1. Preparation
1.1 Download the code
Repository: https://github.com/YadiraF/DECA
You can download the source archive (DECA-master.zip) directly and extract it. For convenience, this article refers to the extracted root directory as DECA.
1.2 Download FLAME2020
FLAME2020 is the FLAME model (Faces Learned with an Articulated Model and Expressions) from the Max Planck Institute (Max Planck Society) in Germany. Official site: https://flame.is.tue.mpg.de/. First register an account with your email; a confirmation email arrives shortly, and clicking the link in it completes registration. After logging in, download FLAME2020 from the Download page (shown below), or fetch it directly from https://download.is.tue.mpg.de/download.php?domain=flame&sfile=FLAME2020.zip&resume=1.
Extract the downloaded FLAME2020.zip and copy generic_model.pkl into DECA\data.
1.3 Download the DECA model
Download it from https://drive.google.com/file/d/1rp8kdyLPvErw2dTmqtjISRVvQLj6Yzje/view; a mirror download is also available.
Note: deca_model.tar is not an archive; do not extract it, just place it in DECA\data.
1.4 Download FLAME_albedo_from_BFM.npz
Note: this file is optional and can be skipped.
FLAME_albedo_from_BFM.npz can be generated from https://github.com/TimoBolkart/BFM_to_FLAME by following its README; a mirror download is also available.
2. Environment Setup
2.1 Install the dependencies from requirements.txt
This setup uses the CPU build of PyTorch LTS (1.8.2), which requires Python 3.8. The package versions are:
Python==3.8
PyTorch==1.8.2
torchvision==0.9.2
numpy==1.23
scipy==1.4.1
chumpy==0.70
scikit-image==0.15
opencv-python==4.1.2.30
pyyaml==5.1.1
face-alignment==1.3.5
yacs==0.1.8
kornia==0.4.1
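For reference, the pins above can be collected into a requirements.txt. The find-links URL below follows the official PyTorch 1.8 LTS install instructions and is an assumption here, not something stated in the original; verify it against the PyTorch site before use.

```text
--find-links https://download.pytorch.org/whl/lts/1.8/torch_lts.html
torch==1.8.2+cpu
torchvision==0.9.2+cpu
numpy==1.23
scipy==1.4.1
chumpy==0.70
scikit-image==0.15
opencv-python==4.1.2.30
pyyaml==5.1.1
face-alignment==1.3.5
yacs==0.1.8
kornia==0.4.1
```

Then a single `pip install -r requirements.txt` installs everything at once.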
2.2 Install PyTorch3D
Because everything runs on the CPU here, PyTorch3D is used as the rasterizer. Installing PyTorch3D on Windows is somewhat involved: first download the source. This article uses PyTorch3D==0.7.0, available from https://codeload.github.com/facebookresearch/pytorch3d/zip/refs/tags/v0.7.0; extract it after downloading.
Following the README, first install these packages:
fvcore==0.1.5.post20220512
iopath==0.1.10
Then, in your IDE, set the Run Configuration parameter for setup.py to "install" and run it (equivalent to the command below). Warnings during the build can be ignored.
python setup.py install
Once it finishes, verify the installation with:
import pytorch3d
print(pytorch3d.__version__)
3. Code Changes
3.1 Make the FaceAlignment device configurable
The FaceAlignment model defaults to CUDA, but we are running on the CPU, so make the following changes:
- DECA\decalib\datasets\detectors.py:Line-20
class FAN(object):
    def __init__(self, device=None):  # was: def __init__(self):
        import face_alignment
        # was: self.model = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)
if device is None:
self.model = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)
else:
self.model = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D,
flip_input=False, device=device)
- DECA\decalib\datasets\datasets.py:Line-49
# was: def __init__(self, testpath, iscrop=True, crop_size=224, scale=1.25, face_detector='fan',
#      sample_step=10):
def __init__(self, testpath, iscrop=True, crop_size=224, scale=1.25, face_detector='fan',
sample_step=10, device=None):
'''
testpath: folder, imagepath_list, image path, video path
'''
if isinstance(testpath, list):
self.imagepath_list = testpath
elif os.path.isdir(testpath):
self.imagepath_list = glob(testpath + '/*.jpg') + glob(testpath + '/*.png') + glob(testpath + '/*.bmp')
elif os.path.isfile(testpath) and (testpath[-3:] in ['jpg', 'png', 'bmp']):
self.imagepath_list = [testpath]
elif os.path.isfile(testpath) and (testpath[-3:] in ['mp4', 'csv', 'vid', 'ebm']):
self.imagepath_list = video2sequence(testpath, sample_step)
else:
print(f'please check the test path: {testpath}')
exit()
# print('total {} images'.format(len(self.imagepath_list)))
self.imagepath_list = sorted(self.imagepath_list)
self.crop_size = crop_size
self.scale = scale
self.iscrop = iscrop
self.resolution_inp = crop_size
if face_detector == 'fan':
# was: self.face_detector = detectors.FAN()
if device is None:
self.face_detector = detectors.FAN()
else:
self.face_detector = detectors.FAN(device)
- DECA\demos\demo_reconstruct.py:Line-40
# was: testdata = datasets.TestData(args.inputpath, iscrop=args.iscrop, face_detector=args.detector,
#      sample_step=args.sample_step)
testdata = datasets.TestData(args.inputpath, iscrop=args.iscrop, face_detector=args.detector,
sample_step=args.sample_step, device=device)
- DECA\demos\demo_transfer.py:Line-35
# was: testdata = datasets.TestData(args.image_path, iscrop=args.iscrop, face_detector=args.detector)
testdata = datasets.TestData(args.image_path, iscrop=args.iscrop, face_detector=args.detector, device=device)
# was: expdata = datasets.TestData(args.exp_path, iscrop=args.iscrop, face_detector=args.detector)
expdata = datasets.TestData(args.exp_path, iscrop=args.iscrop, face_detector=args.detector, device=device)
- DECA\demos\demo_teaser.py:Line-39
# was: testdata = datasets.TestData(args.inputpath, iscrop=args.iscrop, face_detector=args.detector)
testdata = datasets.TestData(args.inputpath, iscrop=args.iscrop, face_detector=args.detector, device=device)
# was: expdata = datasets.TestData(args.exp_path, iscrop=args.iscrop, face_detector=args.detector)
expdata = datasets.TestData(args.exp_path, iscrop=args.iscrop, face_detector=args.detector, device=device)
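The edits above all follow one pattern, sketched below with hypothetical stand-ins (FakeAligner is not the real face_alignment.FaceAlignment, and these FAN/TestData classes are simplified): each layer forwards `device` only when the caller supplied one, so the library's own default is left untouched for existing callers.

```python
class FakeAligner:
    """Stand-in for face_alignment.FaceAlignment; its default device is CUDA."""
    def __init__(self, flip_input=False, device="cuda"):
        self.device = device

class FAN:
    def __init__(self, device=None):
        if device is None:
            # No device given: keep the library default, as before the edit.
            self.model = FakeAligner(flip_input=False)
        else:
            # Device given: forward it down to the detector.
            self.model = FakeAligner(flip_input=False, device=device)

class TestData:
    def __init__(self, device=None):
        self.face_detector = FAN() if device is None else FAN(device)

detector_default = TestData()          # behaves exactly as before the edit
detector_cpu = TestData(device="cpu")  # what the CPU demos pass in
```

This keeps the change backward compatible: scripts that never pass `device` still get CUDA.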
3.2 Fix the model-loading device
DECA\decalib\deca.py:Line-89
# was: checkpoint = torch.load(model_path)
checkpoint = torch.load(model_path, map_location=self.device)
3.3 Add extract_tex
extract_tex can be set to True or False as needed.
DECA\decalib\utils\config.py:Line-43
cfg.model.use_tex = True
# added: extract_tex setting
cfg.model.extract_tex = True
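Why the extra key matters can be sketched with a plain namespace standing in for the real yacs CfgNode (the assumption here, not stated in the original, is that some decalib code path reads cfg.model.extract_tex, which would otherwise fail with AttributeError):

```python
from types import SimpleNamespace

# Plain-namespace stand-in for the yacs config: reading a key that was
# never defined raises AttributeError, just like attribute access does.
cfg = SimpleNamespace(model=SimpleNamespace(use_tex=True))

try:
    _ = cfg.model.extract_tex  # not defined yet
    missing = False
except AttributeError:
    missing = True             # this is what crashes without the edit

cfg.model.extract_tex = True   # the default added in config.py
```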
4. Running the Demos
4.1 3D Face Reconstruction
Run with the following parameters; if you did not download FLAME_albedo_from_BFM.npz, remove "--useTex True".
python demos\demo_reconstruct.py --inputpath TestSamples\examples --savefolder TestSamples\examples\results
--saveDepth True --saveObj True --device cpu --rasterizer_type pytorch3d --useTex True
Alternatively, run demo_reconstruct.py from PyCharm (or another IDE) with these Run Configuration parameters:
--inputpath ..\TestSamples\examples
--savefolder ..\TestSamples\examples\results
--saveDepth True
--saveObj True
--device cpu
--rasterizer_type pytorch3d
--useTex True
Note: inputpath is the folder of input images and savefolder is where results are saved. On the CPU this takes about two minutes. Some of the results are shown below:
4.2 Expression Transfer
python demos\demo_transfer.py --image_path TestSamples\examples\xxx.jpg --exp_path TestSamples\exp\7.jpg
--savefolder TestSamples\examples\results --device cpu --rasterizer_type pytorch3d
Alternatively, run demo_transfer.py from PyCharm (or another IDE) with these Run Configuration parameters:
--image_path ..\TestSamples\examples\xxx.jpg
--exp_path ..\TestSamples\exp\7.jpg
--savefolder ..\TestSamples\examples\results
--device cpu
--rasterizer_type pytorch3d
Note: image_path is the input image, exp_path is the image whose expression will be transferred, and savefolder is where results are saved. On the CPU this takes about one minute. The results are shown below:
4.3 Reposing and Animation
python demos\demo_teaser.py --inputpath TestSamples\examples\xxx.jpg --exp_path TestSamples\exp
--savefolder TestSamples\teaser\results --device cpu --rasterizer_type pytorch3d
Alternatively, run demo_teaser.py from PyCharm (or another IDE) with these Run Configuration parameters:
--inputpath ..\TestSamples\examples\xxx.jpg
--exp_path ..\TestSamples\exp
--savefolder ..\TestSamples\teaser\results
--device cpu
--rasterizer_type pytorch3d
Note: inputpath is the input image, exp_path is the folder of expression images, and savefolder is where results are saved. On the CPU this takes about 23 minutes. The results are shown below:
5. Common Errors
5.1 The specified module could not be found
The error message reads:
ImportError: DLL load failed while importing win32file: The specified module could not be found
This DLL load failure often occurs when an IDE runs inside an Anaconda virtual environment. To fix it, add the following under Run Configuration ➤ Environment Variables:
CONDA_DLL_SEARCH_MODIFICATION_ENABLE = 1
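If editing the Run Configuration is inconvenient, the same flag can be set in code at the top of the entry script, before the failing import runs (a sketch; only meaningful on Windows with Anaconda):

```python
import os

# Must be set before importing any package that loads conda-packaged DLLs
# (e.g. pywin32's win32file); setting it after the import has no effect.
os.environ["CONDA_DLL_SEARCH_MODIFICATION_ENABLE"] = "1"
```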
The PyCharm settings are shown in the figure below:
5.2 UserWarning: Mtl file does not exist
The warning message reads:
X:\Anaconda\envs\xxx\lib\site-packages\pytorch3d-0.7.0-py3.8-win-amd64.egg\pytorch3d\io\obj_io.py:533:
UserWarning: Mtl file does not exist: X:\xxx\template.mtl warnings.warn(f"Mtl file does not exist: {f}")
This is just a PyTorch3D warning; it does not affect the results and can be ignored.