3. Proposed Framework
This paper proposes MotionGAN: given a source image $s$, its landmark $l$, and a target landmark sequence $l_1^T=\left[ l_1, l_2, \cdots, l_T \right]$, the model generates a video $\tilde{f}_1^T=\left[ \tilde{f}_1, \tilde{f}_2, \cdots, \tilde{f}_T \right]$.
The 2D landmarks are converted into a heatmap image, as shown in Figure 1.
3.1. Sub Networks
As shown in Figure 2, the framework consists of four sub-networks: the generator $G$, the frame discriminator $D_f$, the video discriminator $D_v$, and the verification network $V$:
- Generator $G$: as shown in Figure 2(a), the generator consists of an Encoder, an LSTM Block, and a Decoder. Its input is the channel-wise stack of the source image, source landmark, and target landmark, $\left[ s, l, l_t \right]$. Note the skip connection between the LSTM's input and output in the figure; to simplify notation we omit the cell and hidden states. The generator produces the $T$-frame video sequence
  $$\tilde{f}_1^T=G\left( s, l, l_1^T \right) \qquad (1)$$
- Frame Discriminator $D_f$: a real frame $f_t$ or a generated frame $\tilde{f}_t$ is concatenated with the source image, source landmark, and target landmark, giving $\left[ s, l, f_t, l_t \right]$ or $\left[ s, l, \tilde{f}_t, l_t \right]$ as the input to $D_f$; $D_f$ uses a patch-GAN architecture.
- Video Discriminator $D_v$: takes the real video $f_1^T$ or the generated video $\tilde{f}_1^T$ as input; $D_v$ ends in two branches, one discriminating real/fake and one predicting the landmarks of every frame.
- Verification Network $V$: a face recognition network, associated with the identity loss $L_{id}$.
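The Encoder → LSTM Block → Decoder pipeline described above can be sketched in PyTorch as follows. This is an unofficial illustration: the layer sizes, channel counts, and the use of a plain LSTM over flattened encoder features (plus an additive skip connection around it) are all assumptions, since the text does not give the exact architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of the MotionGAN generator: Encoder -> LSTM Block -> Decoder.
    All dimensions are illustrative assumptions, not the paper's exact config."""

    def __init__(self, in_ch=3 + 1 + 1, feat=16, img_size=32):
        super().__init__()
        # Encoder: input is the channel-wise stack [s, l, l_t]
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(feat, feat, 4, 2, 1), nn.ReLU(),
        )
        self.spatial = img_size // 4
        dim = feat * self.spatial * self.spatial
        # LSTM block carries temporal state across the T target landmarks
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1), nn.Tanh(),
        )
        self.feat = feat

    def forward(self, s, l, l_seq):
        # s: (B,3,H,W), l: (B,1,H,W), l_seq: (B,T,1,H,W)
        B, T = l_seq.shape[:2]
        feats = []
        for t in range(T):
            x = torch.cat([s, l, l_seq[:, t]], dim=1)  # stack [s, l, l_t]
            feats.append(self.enc(x).flatten(1))
        feats = torch.stack(feats, dim=1)              # (B, T, dim)
        h, _ = self.lstm(feats)
        h = h + feats  # skip connection around the LSTM (Figure 2(a))
        h = h.reshape(B * T, self.feat, self.spatial, self.spatial)
        frames = self.dec(h).view(B, T, 3, *s.shape[-2:])
        return frames  # \tilde{f}_1^T
```

Note that the cell and hidden states are handled internally by `nn.LSTM`, matching the simplification in the text.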
3.2. Loss functions
3.2.1 Image Reconstruction Loss
For the generator $G$, a pixel-wise $\ell_1$ norm is used as the reconstruction loss:
$$L_{img}^G=\frac{1}{T}\sum_{t=1}^{T}\left \| G\left( s, l, l_t \right) - f_t \right \| \qquad (2)$$
where $f_t$ is the ground-truth image and $l_t$ is the ground-truth landmark.
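Eq. (2) can be sketched directly; here the generated and ground-truth frames are assumed to be `(T, C, H, W)` tensors, with the $\ell_1$ norm summed per frame and averaged over the $T$ frames.

```python
import torch

def img_recon_loss(gen_frames, gt_frames):
    """Eq. (2): mean over T of || G(s, l, l_t) - f_t ||_1.
    gen_frames, gt_frames: (T, C, H, W) tensors (layout is an assumption)."""
    T = gen_frames.shape[0]
    return sum(torch.abs(gen_frames[t] - gt_frames[t]).sum() for t in range(T)) / T
```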
3.2.2 Adversarial Loss
Frame Adversarial Loss: an image-level adversarial loss applied to every frame of the video:
$$\begin{aligned} L_{adv}^{D_f}=&\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{f_t}\left[ \log\left( D_f\left( s, l, f_t, l_t \right) \right) \right]+\\ &\mathbb{E}_{l_t}\left[ \log\left( 1-D_f\left( s, l, G\left( s, l, l_t \right), l_t \right) \right) \right] \qquad (3) \end{aligned}$$
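A minimal sketch of Eq. (3) in code, assuming `D_f` is any callable returning per-frame probabilities in $(0, 1)$ (for a patch-GAN, the patch outputs would be averaged the same way); `eps` guards the logarithms.

```python
import torch

def frame_adv_loss(D_f, s, l, real_frames, fake_frames, landmarks, eps=1e-8):
    """Eq. (3): image-level adversarial loss, averaged over T frames.
    real_frames, fake_frames, landmarks: (B, T, ...) sequences (assumption)."""
    T = real_frames.shape[1]
    loss = 0.0
    for t in range(T):
        real = D_f(s, l, real_frames[:, t], landmarks[:, t])
        fake = D_f(s, l, fake_frames[:, t], landmarks[:, t])
        loss = loss + torch.log(real + eps).mean() + torch.log(1 - fake + eps).mean()
    return loss / T
```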
Video Adversarial Loss: a video-level adversarial loss applied to the whole $T$-frame sequence:
$$\begin{aligned} L_{adv}^{D_v}=&\mathbb{E}_{f_1^T}\left[ \log\left( D_v\left( f_1^T \right) \right) \right]+\\ &\mathbb{E}_{l_1^T}\left[ \log\left( 1-D_v\left( G\left( s, l, l_1^T \right) \right) \right) \right] \qquad (4) \end{aligned}$$
Pairwise Feature Matching Loss: the feature matching loss of [4] is used to stabilize training and improve the quality of the generated images:
$$\begin{aligned} L_{adv}^G=&\frac{1}{T}\sum_{t=1}^{T}\left \| I_{D_f}\left( G\left( s, l, l_t \right) \right) - I_{D_f}\left( f_t \right) \right \|_2^2+\\ &\left \| I_{D_v}\left( G\left( s, l, l_1^T \right) \right) - I_{D_v}\left( f_1^T \right) \right \|_2^2 \qquad (5) \end{aligned}$$
where $I_{D_f}$ and $I_{D_v}$ denote intermediate layers of $D_f$ and $D_v$, respectively.
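Eq. (5) can be sketched as below, where `I_Df` and `I_Dv` stand in for the intermediate-layer feature extractors of the two discriminators; any callables returning feature tensors work for this illustration.

```python
import torch

def feature_matching_loss(I_Df, I_Dv, fake_frames, real_frames, fake_video, real_video):
    """Eq. (5): squared L2 distance between intermediate discriminator features,
    per-frame (averaged over T) plus once for the whole video."""
    T = fake_frames.shape[0]
    per_frame = sum(
        ((I_Df(fake_frames[t]) - I_Df(real_frames[t])) ** 2).sum() for t in range(T)
    ) / T
    video = ((I_Dv(fake_video) - I_Dv(real_video)) ** 2).sum()
    return per_frame + video
```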
3.2.3 Landmarks Reconstruction Loss
$D_v$ also predicts the landmarks of the input frames, supervised with an $\ell_2$ loss:
$$L_{lms}^{D_v}=\left \| D_v^l\left( f_1^T \right)-l_1^T \right \|_2^2 \qquad (6)$$
$G$ is likewise trained so that the landmarks of the generated frames incur minimal loss:
$$L_{lms}^G=\left \| D_v^l\left( G\left( s, l, l_1^T \right) \right)-l_1^T \right \|_2^2 \qquad (7)$$
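Eqs. (6) and (7) share the same form and differ only in whether the landmark branch of $D_v$ sees real or generated video. A sketch, where `D_v_l` stands for that branch and is assumed to map a video tensor to a landmark-sequence tensor shaped like `l_seq`:

```python
import torch

def lms_loss_Dv(D_v_l, real_video, l_seq):
    """Eq. (6): landmark regression loss for D_v on real video."""
    return ((D_v_l(real_video) - l_seq) ** 2).sum()

def lms_loss_G(D_v_l, fake_video, l_seq):
    """Eq. (7): the same loss, on the generated video, driving G."""
    return ((D_v_l(fake_video) - l_seq) ** 2).sum()
```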
4. Experiments
4.1. Implementation Details
Objective of $G$: $\lambda_1 L_{img}^G+\lambda_2 L_{adv}^G+\lambda_3 L_{lms}^G+\lambda_4 L_{id}^G$
Objective of $D_f$: $L_{adv}^{D_f}$
Objective of $D_v$: $\lambda_5 L_{adv}^{D_v}+\lambda_6 L_{lms}^{D_v}$
Hyper-parameter settings: $\lambda_1=1, \lambda_2=0.01, \lambda_3=10, \lambda_4=0.1, \lambda_5=1, \lambda_6=100$
Limited by memory size, $T$ is set to $4$.
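Assembling the weighted objectives with the $\lambda$ values above is mechanical; a sketch, assuming the individual loss terms have already been computed as scalars:

```python
def generator_objective(L_img, L_adv, L_lms, L_id,
                        l1=1.0, l2=0.01, l3=10.0, l4=0.1):
    """G's objective with the paper's default lambda weights."""
    return l1 * L_img + l2 * L_adv + l3 * L_lms + l4 * L_id

def video_discriminator_objective(L_adv_Dv, L_lms_Dv, l5=1.0, l6=100.0):
    """D_v's objective; D_f's objective is just L_adv^{D_f} unweighted."""
    return l5 * L_adv_Dv + l6 * L_lms_Dv
```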
[Summary]
This paper focuses on face video generation: given a face image and a sequence of landmarks, it generates a new video. Technically there is no new idea; the method is a combination of existing techniques. As for generation quality, the authors' videos were not available to me, so judging only from the per-frame images in the paper, the results look acceptable.