Unlike general image-to-image translation, this paper targets image-to-image translation with guidance information, as shown in Fig. 1:
Row 1: given a try-on image and pose keypoints as the guidance, generate the try-on image under the transformed pose.
Row 2: given a sketch image, with the guidance being a small texture patch painted onto the sketch, generate an image of a realistic object.
Row 3: given a low-resolution depth image, with the corresponding RGB scene image as the guidance, generate a high-resolution depth image.
3. Bi-Directional Feature Transformation
Task description: "we aim to translate an image from one domain to another while respecting the constraints specified by a given guidance image."
3.1. Feature transformation layer
This paper proposes a feature transformation (FT) layer to fuse in the guidance information.
The input features are first normalized, and then an affine transformation is applied using scaling and shifting parameters derived from the guidance image, as shown in Eq. (1):
F_{\text{input}}^{l+1}=\gamma_{\text{guide}}^l\frac{F_{\text{input}}^l-\text{mean}\left ( F_{\text{input}}^l \right )}{\text{std}\left ( F_{\text{input}}^l \right )} + \beta_{\text{guide}}^l \qquad(1)
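Eq. (1) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function and argument names are hypothetical, and in the paper γ and β are predicted from the guidance branch by learned layers (and, unlike FiLM, may vary spatially, so here they are full tensors with the same shape as the features).

```python
import numpy as np

def ft_layer(f_input, gamma_guide, beta_guide, eps=1e-5):
    """Feature transformation (FT) layer, Eq. (1): normalize the input-branch
    features per channel, then apply an affine transform whose parameters
    come from the guidance branch."""
    # f_input: (C, H, W) input-branch features
    # gamma_guide, beta_guide: scaling/shifting tensors predicted from the
    # guidance features; same shape as f_input (spatially varying)
    mean = f_input.mean(axis=(1, 2), keepdims=True)
    std = f_input.std(axis=(1, 2), keepdims=True)
    return gamma_guide * (f_input - mean) / (std + eps) + beta_guide
```

With γ = 1 and β = 0 this reduces to plain per-channel feature normalization; the guidance only enters through the predicted affine parameters.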
Reference [32] proposed the FiLM layer; Fig. 4 compares it with the FT layer proposed here.
3.2. Bi-directional conditioning scheme
To further exploit the information in the guidance image, this paper proposes a bi-directional conditioning scheme. Existing conditioning schemes apply the guidance signal to the input image in only one direction; the authors argue the communication should be bi-directional.
The benefit: "This bi-directional flow of information enables the generative model to better capture the constraints of the guidance image."
Eq. (1) uses the guidance to influence the input; conversely, the input can also influence the guidance, as shown in Eq. (2):
F_{\text{guide}}^{l+1}=\gamma_{\text{input}}^l\frac{F_{\text{guide}}^l-\text{mean}\left ( F_{\text{guide}}^l \right )}{\text{std}\left ( F_{\text{guide}}^l \right )} + \beta_{\text{input}}^l \qquad(2)
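One exchange step between the two branches can be sketched as follows. This is a schematic under assumptions: `params_from_guide` and `params_from_input` stand in for small learned networks that predict (γ, β) from a branch's features, and all names are hypothetical. Note that both Eq. (1) and Eq. (2) read the pre-update features of layer l.

```python
import numpy as np

def ft(f, gamma, beta, eps=1e-5):
    # Normalize per channel, then apply the guidance-/input-derived affine transform.
    mean = f.mean(axis=(1, 2), keepdims=True)
    std = f.std(axis=(1, 2), keepdims=True)
    return gamma * (f - mean) / (std + eps) + beta

def bidirectional_step(f_input, f_guide, params_from_guide, params_from_input):
    # Predict affine parameters from each branch's current features.
    g_gamma, g_beta = params_from_guide(f_guide)   # parameters for Eq. (1)
    i_gamma, i_beta = params_from_input(f_input)   # parameters for Eq. (2)
    f_input_next = ft(f_input, g_gamma, g_beta)    # guidance -> input, Eq. (1)
    f_guide_next = ft(f_guide, i_gamma, i_beta)    # input -> guidance, Eq. (2)
    return f_input_next, f_guide_next
```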
The authors offer an analogy to further explain the benefit of the bi-directional scheme:
Our intuition is that such a bi-directional approach can be seen as a bi-directional communication between a teacher (guidance branch) and a student (input image branch). A one-way communication from the teacher to the student might not help the student understand the teacher as much as two-way communication.
4. Experimental Results
U-Net and ResNet are used as the generator architectures.
The objective is the reconstruction loss plus adversarial loss commonly used in image-to-image translation:
L_{GAN}(G,D)+\lambda L_1(G) \qquad(3)
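The generator's side of Eq. (3) can be sketched as below. This is an assumption-laden illustration: the exact GAN formulation and the value of λ are not specified in this note, so a standard non-saturating adversarial term and a pix2pix-style λ = 100 are assumed; all names are hypothetical.

```python
import numpy as np

def generator_loss(d_fake, fake, target, lam=100.0):
    # Adversarial term: encourage the discriminator to score fakes as real.
    adv = -np.log(d_fake + 1e-8).mean()
    # L1 reconstruction term between generated and target images.
    rec = np.abs(fake - target).mean()
    return adv + lam * rec
```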
Summary
The paper's equations are few and simple, but the idea is interesting.
It is worth considering whether this method could also be applied to gender translation.