1. Following the avatarify write-ups online, I read through first-order-model-master, but it was missing a file required to run (vox-adv-cpk.pth.tar).
(See https://blog.csdn.net/csdnnews/article/details/108570938 for how to obtain the missing file.)
Runtime environment: Python 3.7
C:\Users\Administrator\Desktop> python
Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)] on win32
The required libraries are listed below.
If an import error is reported, install the missing package with pip install XXX.
C:\Users\Administrator>pip freeze
certifi==2020.12.5
chardet==4.0.0
cycler==0.10.0
decorator==4.4.2
face-alignment==1.3.3
idna==2.10
imageio==2.9.0
imageio-ffmpeg==0.4.3
joblib==1.0.1
kiwisolver==1.3.1
llvmlite==0.35.0
matplotlib==3.3.4
msgpack==1.0.2
msgpack-numpy==0.4.7.1
networkx==2.5
numba==0.52.0
numpy==1.20.1
opencv-python==4.2.0.34
pandas==1.2.3
Pillow==8.1.1
pyfakewebcam==0.1.0
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2021.1
PyWavelets==1.1.1
PyYAML==5.3.1
pyzmq==20.0.0
requests==2.25.1
scikit-image==0.18.1
scikit-learn==0.24.1
scipy==1.6.1
six==1.15.0
sklearn==0.0
threadpoolctl==2.1.0
tifffile==2021.3.4
torch==1.8.0
torchvision==0.9.0
tqdm==4.58.0
typing-extensions==3.7.4.3
urllib3==1.26.3
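Instead of installing packages one at a time, the freeze output above can be saved to a requirements file and installed in a single step (a sketch; the filename requirements.txt is my choice, and only a few of the pinned entries are shown):

```shell
# Save the "pip freeze" list above into requirements.txt; shown here with
# just a few of the pinned entries:
printf '%s\n' 'torch==1.8.0' 'torchvision==0.9.0' 'imageio==2.9.0' > requirements.txt
# One command then installs everything (commented out here to avoid the long download):
# pip install -r requirements.txt
cat requirements.txt
```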
=================================================
If downloads are slow, you can switch to a mirror. To do so:
In the C:\Users\Administrator\pip directory, edit the pip.ini file (just copy the text below):
[global]
index-url = http://pypi.douban.com/simple/
[install]
trusted-host = pypi.douban.com
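As an alternative (assuming pip 10 or newer), the same mirror can be written into pip's config from the command line instead of editing pip.ini by hand, or passed for a single command with -i:

```shell
# Persist the mirror into pip's user config:
pip config set global.index-url http://pypi.douban.com/simple/
pip config set install.trusted-host pypi.douban.com
# Or use the mirror for one install only:
# pip install numpy -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
pip config get global.index-url
```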
======================================
2. I also found related code online and ran it as described below.
(https://blog.csdn.net/csdnnews/article/details/108570938)
"How to Animate a Person in a Picture with Python? One Article Gets You There!"
3. Solving the GPU problem.
See https://blog.csdn.net/m0_37690102/article/details/108364458
"Fixing: AssertionError: Torch not compiled with CUDA enabled"
This error occurs because the installed torch build has no CUDA support, so the code fails at runtime. After some research, the fix is to add the following at the very beginning of the program:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
and replace every remaining .cuda() call with .to(device); the code then runs in a CPU-only environment.
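A minimal sketch of that pattern (the checkpoint path in the comment is illustrative):

```python
import torch

# Select the GPU when CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Replace calls like tensor.cuda() / model.cuda() with .to(device):
x = torch.zeros(1, 3, 256, 256).to(device)
print(x.device.type)

# On a CPU-only machine the checkpoint storages must be remapped too, e.g.:
# checkpoint = torch.load("vox-adv-cpk.pth.tar", map_location=device)
```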
4. Run demo.py:
python demo.py ^
  --config config/vox-adv-256.yaml ^
  --driving_video path/to/driving/1.mp4 ^
  --source_image path/to/source/7.jpg ^
  --checkpoint path/to/checkpoint/vox-adv-cpk.pth.tar ^
  --relative ^
  --adapt_scale
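For context on the --source_image argument: the vox-adv-256.yaml model operates on 256x256 frames, and first-order-model's demo.py resizes the source image accordingly, which also scales pixels to [0, 1]. A sketch of that preprocessing, using a synthetic image here in place of 7.jpg:

```python
import numpy as np
from skimage.transform import resize

# Stand-in for imageio.imread("path/to/source/7.jpg"): a random RGB image.
source_image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

# Resize to the 256x256 input the vox-adv-256 model expects; skimage's
# resize also converts uint8 pixels to floats in [0, 1] by default.
source_image = resize(source_image, (256, 256))[..., :3]

print(source_image.shape)
```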
5. The run output is as follows.
================================================================================================================================
C:\Python\Python37\lib\site-packages\torch\nn\functional.py:3500: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
"The default behavior for interpolate/upsample with float scale_factor changed "
0%| | 0/211 [00:00<?, ?it/s]C:\Python\Python37\lib\site-packages\torch\nn\functional.py:3826: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
"Default grid_sample and affine_grid behavior has changed "
C:\Python\Python37\lib\site-packages\torch\nn\functional.py:3455: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode)
C:\Python\Python37\lib\site-packages\torch\nn\functional.py:1709: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
[tqdm progress output, condensed: frames 1/211 through 23/211 at roughly 34-40 s/it on CPU, estimated total around 2 hours]
================================================================================================
6. Results.
7. The trained model was provided by 茅佳源, mentioned in the article.
8. Feel free to leave questions in the comments; this is for learning and reference only.
==================================================================
Code download:
Link: https://pan.baidu.com/s/1dlkJpYzT7tc-X3YgXTRtNA
Extraction code: MNSY
===================================================================
After the modifications described above, the code runs end to end!
Special thanks to:
1. 李秋键
(https://blog.csdn.net/csdnnews/article/details/108570938)
"How to Animate a Person in a Picture with Python? One Article Gets You There!"
for the explanation and the runnable code.
2. Ali Aliev, author of the open-source Avatarify
Avatarify repository: https://github.com/alievk/avatarify
================================================================
Postscript:
The results don't quite match the app's, to be honest; I only started digging into the source code after a friend pranked me with one of those face-swap apps.