[Paper Notes] Consensus Adversarial Domain Adaptation

Original paper: https://ojs.aaai.org/index.php/AAAI/article/download/4552/4430

The paper proposes Consensus Adversarial Domain Adaptation (CADA), a novel unsupervised adversarial domain adaptation (ADA) scheme that gives freedom to both the source encoder and the target encoder.

Objective

To improve the generalization capability of a classifier across domains via ADA, without collecting labeled data in the target domain (unlike FADA, no labeled target samples are required here).


CADA Methodology

Suppose N_s samples X_s with labels Y_s are collected in the source domain, with L possible classes. Similar to FADA, training can be divided into four steps (a code sketch of the full pipeline is given after Step 4):

1) Use the source-domain data to train a source encoder M_s and a source classifier C_s.

2) Given unlabeled data X_t in the target domain, train a target encoder M_t and fine-tune the source encoder M_s, such that a discriminator D cannot tell whether a sample comes from the source domain or from the target domain after the associated feature mapping (M_t(X_t) vs. M_s(X_s)).

Note that both M_t and M_s are initialized with the parameters of the M_s trained in Step 1. Unlike previous models such as ADDA and DIFA, the parameters of the source encoder M_s are not frozen during this stage. In those earlier models, because M_s is fixed, the feature mapping is defined by the source encoder and ADA essentially tries to align the feature embeddings of the target domain with those of the source domain. In that case, the obtained source encoder is used as an absolute reference, which may deteriorate domain adaptation performance because the alignment can be sub-optimal when the target samples cannot be completely embedded into the imposed representation space. Giving the parameters of M_s more freedom therefore makes better cross-domain generalization possible.

3) When the discriminator D in Step 2 can no longer identify the domain label of target samples and source samples, it is an indication that M_t and M_s have achieved consensus by mapping the corresponding input data to a shared domain-invariant feature space. We then fix the parameters of M_s and train a shared classifier C_{sh} using the labeled source-domain data \{X_s, Y_s\}. C_{sh} can be used directly for target-domain classification, because Step 2 has already made M_t and M_s embed samples into the domain-invariant feature space.

4) For testing in the target domain, we use the M_t trained in Step 2 to map target test samples into the domain-invariant feature space, and then use the shared classifier C_{sh} to classify them.
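A minimal PyTorch sketch of the four steps, assuming small MLP encoders, a binary domain discriminator, and data loaders where src_loader yields (x_s, y_s) batches and tgt_loader yields (x_t,) batches; the architectures, optimizers, and hyperparameters are placeholders rather than the paper's implementation:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical small networks; the paper uses task-specific architectures.
class Encoder(nn.Module):
    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)
    def forward(self, f):
        return self.fc(f)

class Discriminator(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, f):
        return self.net(f)  # raw logit: "does this feature come from the source domain?"

def pretrain_source(M_s, C_s, src_loader, epochs=1, lr=1e-3):
    """Step 1: supervised training of the source encoder and source classifier."""
    opt = torch.optim.Adam(list(M_s.parameters()) + list(C_s.parameters()), lr=lr)
    for _ in range(epochs):
        for x_s, y_s in src_loader:
            loss = F.cross_entropy(C_s(M_s(x_s)), y_s)
            opt.zero_grad(); loss.backward(); opt.step()

def consensus_adaptation(M_s, src_loader, tgt_loader, epochs=1, lr=1e-4):
    """Step 2: adversarially train BOTH encoders (each initialized from the Step-1 M_s)
    until the discriminator cannot tell M_s(X_s) from M_t(X_t)."""
    M_t = copy.deepcopy(M_s)              # target encoder starts from the pretrained M_s
    D = Discriminator()
    opt_D = torch.optim.Adam(D.parameters(), lr=lr)
    opt_M = torch.optim.Adam(list(M_s.parameters()) + list(M_t.parameters()), lr=lr)
    for _ in range(epochs):
        for (x_s, _), (x_t,) in zip(src_loader, tgt_loader):
            # Discriminator update: source features -> 1, target features -> 0.
            d_src, d_tgt = D(M_s(x_s).detach()), D(M_t(x_t).detach())
            loss_D = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src))
                      + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
            opt_D.zero_grad(); loss_D.backward(); opt_D.step()
            # Encoder update: both encoders try to confuse D (labels flipped).
            d_src, d_tgt = D(M_s(x_s)), D(M_t(x_t))
            loss_M = (F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))
                      + F.binary_cross_entropy_with_logits(d_src, torch.zeros_like(d_src)))
            opt_M.zero_grad(); loss_M.backward(); opt_M.step()
    return M_t, D

def train_shared_classifier(M_s, src_loader, num_classes=10, epochs=1, lr=1e-3):
    """Step 3: freeze the adapted M_s and train the shared classifier C_sh on {X_s, Y_s}."""
    C_sh = Classifier(num_classes=num_classes)
    opt = torch.optim.Adam(C_sh.parameters(), lr=lr)
    for p in M_s.parameters():
        p.requires_grad_(False)
    for _ in range(epochs):
        for x_s, y_s in src_loader:
            loss = F.cross_entropy(C_sh(M_s(x_s)), y_s)
            opt.zero_grad(); loss.backward(); opt.step()
    return C_sh

@torch.no_grad()
def predict_target(M_t, C_sh, x_t):
    """Step 4: classify target samples with M_t and the shared classifier."""
    return C_sh(M_t(x_t)).argmax(dim=1)
```

The key difference from an ADDA-style pipeline is that opt_M in consensus_adaptation updates both M_s and M_t, so neither domain is treated as a fixed reference.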

Putting the above training process together, the overall loss function is as follows.
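Sketched here in the standard GAN-style formulation used by ADDA-like methods, adapted so that both encoders are trainable in Step 2; the exact notation in the paper may differ:

\mathcal{L}_{adv_D}(X_s, X_t, M_s, M_t) = -\mathbb{E}_{x_s \sim X_s}\big[\log D(M_s(x_s))\big] - \mathbb{E}_{x_t \sim X_t}\big[\log\big(1 - D(M_t(x_t))\big)\big]

\mathcal{L}_{adv_M}(X_s, X_t, D) = -\mathbb{E}_{x_t \sim X_t}\big[\log D(M_t(x_t))\big] - \mathbb{E}_{x_s \sim X_s}\big[\log\big(1 - D(M_s(x_s))\big)\big]

\mathcal{L}_{cls}(X_s, Y_s) = -\mathbb{E}_{(x_s, y_s) \sim (X_s, Y_s)}\sum_{l=1}^{L} \mathbb{1}[l = y_s]\,\log C_{sh}\big(M_s(x_s)\big)_l

D minimizes \mathcal{L}_{adv_D} while both M_s and M_t minimize \mathcal{L}_{adv_M} (Step 2), and C_{sh} minimizes \mathcal{L}_{cls} with M_s frozen (Step 3).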


F-CADA Methodology 

F-CADA is largely the same as CADA, except for Step 3:

Suppose a few labeled samples \{X_t^l, Y_t^l\} are available in the target domain. As the most vital step in F-CADA, a label learning algorithm is designed to assign presumptive labels Y_t^u to the unlabeled target samples X_t^u. Then the target encoder obtained in Step 2 is fine-tuned and a target classifier C_t is built using both the unlabeled target samples with presumptive labels \{X_t^u, Y_t^u\} and the labeled target samples \{X_t^l, Y_t^l\}.
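A sketch of this step, reusing the Encoder/Classifier modules from the sketch above; the nearest-labeled-neighbour pseudo-labeling below is only an illustrative substitute, not the paper's actual label learning algorithm:

```python
import torch
import torch.nn.functional as F

def assign_presumptive_labels(M_t, x_t_labeled, y_t_labeled, x_t_unlabeled):
    """Assign presumptive labels Y_t^u to unlabeled target samples X_t^u by matching
    each one to its nearest labeled target sample in the embedding space (a simple
    stand-in for the paper's label learning algorithm)."""
    with torch.no_grad():
        f_l = F.normalize(M_t(x_t_labeled), dim=1)    # embeddings of labeled targets
        f_u = F.normalize(M_t(x_t_unlabeled), dim=1)  # embeddings of unlabeled targets
        sim = f_u @ f_l.t()                           # cosine similarity matrix
        nearest = sim.argmax(dim=1)                   # closest labeled sample per row
    return y_t_labeled[nearest]

def finetune_target(M_t, C_t, x_l, y_l, x_u, y_u_presumptive, epochs=5, lr=1e-4):
    """F-CADA Step 3: fine-tune M_t and train a target classifier C_t on both the few
    labeled target samples and the pseudo-labeled ones (full-batch for simplicity)."""
    x = torch.cat([x_l, x_u])
    y = torch.cat([y_l, y_u_presumptive])
    opt = torch.optim.Adam(list(M_t.parameters()) + list(C_t.parameters()), lr=lr)
    for _ in range(epochs):
        loss = F.cross_entropy(C_t(M_t(x)), y)
        opt.zero_grad(); loss.backward(); opt.step()
```

One could also re-estimate the presumptive labels between fine-tuning rounds, since the target embedding space improves as M_t is updated.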


Evaluations

CADA and F-CADA are evaluated on the common digit recognition benchmarks: digit recognition across domains on the standard digit adaptation datasets (MNIST, USPS, and SVHN). They are also evaluated on a WiFi dataset for gesture recognition: spatial adaptation for WiFi-enabled device-free gesture recognition (GR).
