【Paper Reading】【Meta-Learning / Few-Shot Learning】【ICLR 2019】META-LEARNING WITH DOMAIN ADAPTATION
Problem Statement
In conventional few-shot learning / meta-learning, the tasks of the meta-training stage are drawn from a task distribution $\tau_{train}$ (the tasks are i.i.d.), while the tasks of the meta-testing stage are drawn from another task distribution $\tau_{test}$. Existing meta-learning methods assume $\tau_{train} = \tau_{test}$; this paper addresses the case where $\tau_{train} \neq \tau_{test}$.
Put simply, the samples of the meta-training tasks and of the meta-testing tasks come from different datasets, but within each individual task the support set and the query set are drawn from the same dataset (task-level domain adaptation).
The authors assume that unlabeled data from the meta-testing domain is available during meta-training. (we assume that the model has access to the unlabeled instances in the domain of the few-shot test tasks prior to the training procedure, and utilize these instances for incorporating the domain-shift information.)
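A minimal sketch of this episodic setup, assuming a labeled `source_dataset` of `(image, label)` pairs from the meta-training domain and a list `target_unlabeled` of test-domain images; all names and helpers here are illustrative, not the paper's code:

```python
import random
from collections import defaultdict

def sample_task(dataset, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot task; support and query come from the SAME dataset."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for new_label, c in enumerate(classes):
        images = random.sample(by_class[c], k_shot + n_query)
        support += [(img, new_label) for img in images[:k_shot]]
        query += [(img, new_label) for img in images[k_shot:]]
    return support, query

def meta_training_batch(source_dataset, target_unlabeled, n_unlabeled=64):
    """One meta-training step sees a source-domain task plus unlabeled test-domain images."""
    support, query = sample_task(source_dataset)               # task ~ tau_train
    unlabeled = random.sample(target_unlabeled, n_unlabeled)   # x ~ D_test, labels never used
    return support, query, unlabeled
```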
Meta Learning with Domain Adaptation (MLDA)
Loss function:
The first term is the classification loss.
The second term, the domain adaptation loss, consists of two parts:
$$L_{da} = L_{GAN} + L_{cycle}$$
GAN Loss:
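A standard CycleGAN-style adversarial term, under the assumption (consistent with the identity loss below) that $G$ maps training-domain images toward the test domain and $D_{\phi}$ is a test-domain discriminator; the paper's exact formulation may differ:

$$L_{GAN} = \mathbb{E}_{x^{test} \sim D_{test}}\left[\log D_{\phi}(x)\right] + \mathbb{E}_{x^{train} \sim D_{train}}\left[\log\left(1 - D_{\phi}(G(x))\right)\right]$$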
task-cycle-consistency loss:
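By analogy with the reverse-direction term $L_{cycle2}$ below, a CycleGAN-style forward cycle term applied to the task instances (the paper's exact task-level formulation may differ) would read:

$$L_{cycle} = \mathbb{E}_{x^{train} \sim D_{train}}\left[\|G'(G(x)) - x\|_1\right]$$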
Additional improvements:
Identity Loss:
$$L_{idt} = \mathbb{E}_{x^{test} \sim D_{test}}\left[\|G(x) - x\|_1\right]$$
reverse direction mapping:
$$L_{cycle2} = \mathbb{E}_{x^{test} \sim D_{test}}\left[\|G(G'(x)) - x\|_1\right]$$
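Putting the pieces together, a rough PyTorch-style sketch of how these terms could enter the overall objective; the names (`G`, `G_prime`, `D_phi`, `classifier`), the choice to classify the translated task images, the plain cross-entropy head, and the weight `lam` are assumptions of this sketch, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def domain_adaptation_loss(G, G_prime, D_phi, x_train, x_test):
    """Sketch of the domain-adaptation terms: GAN + cycle, plus the identity and
    reverse-direction extras described above."""
    fake_test = G(x_train)                 # training-domain images mapped toward the test domain
    pred = D_phi(fake_test)
    l_gan = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))  # generator side of L_GAN
    l_cycle = F.l1_loss(G_prime(fake_test), x_train)   # forward cycle consistency
    l_idt = F.l1_loss(G(x_test), x_test)               # identity loss L_idt
    l_cycle2 = F.l1_loss(G(G_prime(x_test)), x_test)   # reverse-direction mapping L_cycle2
    return l_gan + l_cycle + l_idt + l_cycle2

def total_loss(classifier, G, G_prime, D_phi, x_task, y_task, x_test, lam=1.0):
    """Classification loss on the task images plus the weighted domain-adaptation loss."""
    l_cls = F.cross_entropy(classifier(G(x_task)), y_task)  # classify after mapping toward the test domain
    return l_cls + lam * domain_adaptation_loss(G, G_prime, D_phi, x_task, x_test)
```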
Experimental Results