Associative Domain Adaptation (notebook)

Abstract

  • Our training scheme follows the paradigm that, in order to effectively derive class labels for the target domain, a network should produce statistically domain-invariant embeddings while minimizing the classification error on the labeled source domain.
  • We accomplish this by reinforcing associations between source and target data directly in embedding space. Our method can easily be added to any existing classification network with no structural and almost no computational overhead.

Introduction

Supervised training of deep networks requires large amounts of labeled data. This data may be costly to obtain or even nonexistent.

A. Domain adaptation

Function: rather than labeling vast amounts of real-world data, one renders a similar but synthetic dataset that is automatically labeled.

  • The problem of domain adaptation was theoretically studied in [2], relating source and target error via a statistical similarity measure of the respective domains (a sketch of the bound is given after this list).
  • Their results suggest that a good domain adaptation method should be based on features that are as similar as possible for source and target domain (assimilation), while reducing the prediction error in the source domain as much as possible (discrimination).
    Previous work: [overview figure; image not preserved]
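Assuming [2] refers to the usual Ben-David et al. analysis, the bound in question has roughly the following shape (a sketch, not the paper's exact statement):

$$\epsilon_T(h) \;\le\; \epsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}\!\left(\mathcal{D}_S, \mathcal{D}_T\right) + \lambda,$$

where $\epsilon_S$ and $\epsilon_T$ are the source and target errors, the divergence term measures how distinguishable the two domains are in feature space, and $\lambda$ is the error of the best joint hypothesis. Shrinking the divergence term corresponds to assimilation; minimizing $\epsilon_S$ corresponds to discrimination.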

B. Related work

  • We focus on methods based on deep learning, as these have proved to be powerful learning systems and are closest to our scheme. In this work, we propose a different loss for Lsim that is more intuitive in embedding space, less computationally complex, and better suited to obtaining an effective embedding; the MMD estimator it is compared against is recalled below.
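For reference, the most common explicit choice for Lsim is the maximum mean discrepancy (MMD). Its standard (biased) empirical estimator between source samples $x^s$ and target samples $x^t$ under a kernel $k$ is

$$\widehat{\text{MMD}}^2 = \frac{1}{n_s^2} \sum_{i,i'} k(x^s_i, x^s_{i'}) + \frac{1}{n_t^2} \sum_{j,j'} k(x^t_j, x^t_{j'}) - \frac{2}{n_s n_t} \sum_{i,j} k(x^s_i, x^t_j).$$

Evaluating it is quadratic in the batch size, and its effectiveness hinges on choosing and tuning the kernel, which is the kind of overhead the proposed association loss is meant to avoid.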

C. Contribution

  • We minimize the classification error on the source domain Ds as a proxy while enforcing representations of Dt to have statistics similar to those of Ds. This is accomplished by enforcing associations [12] between feature representations of Dt and those of Ds that belong to the same class.

Specifically, the contributions are:
• A straightforward training schedule for domain adaptation with neural networks.
• An integration of our approach into the prevailing domain adaptation formalism and a detailed comparison with the most commonly used explicit Lsim: the maximum mean discrepancy (MMD).
• A simple implementation that works with arbitrary architectures.
• Extensive experiments on various domain adaptation benchmarks that outperform related deep learning methods.
• A detailed analysis demonstrating that associative domain adaptation results in effective embeddings in terms of classifying target domain samples.

Associative domain adaptation

Let $A_i = \phi(x^s_i)$ and $B_j = \phi(x^t_j)$ denote the embeddings of source and target samples. Their similarity is the dot product $M_{ij} = A_i \cdot B_j$, from which transition probabilities of an imaginary walker stepping between the two domains are obtained by a softmax over the opposite domain:

$$P^{ab}_{ij} = P(B_j \mid A_i) = \frac{\exp(M_{ij})}{\sum_{j'} \exp(M_{ij'})}, \qquad P^{aba} = P^{ab} P^{ba}.$$

The walker loss is the cross entropy between the round-trip probabilities $P^{aba}$ and a target distribution $T$ that is uniform over pairs of source samples from the same class:

$$\mathcal{L}_{\text{walker}} = H\!\left(T, P^{aba}\right), \qquad T_{ij} = \begin{cases} \left|\{k : \text{class}(x^s_k) = \text{class}(x^s_i)\}\right|^{-1} & \text{if } \text{class}(x^s_i) = \text{class}(x^s_j), \\ 0 & \text{otherwise.} \end{cases}$$
The walker loss by itself could be minimized by only visiting target samples that are easily associated, skipping difficult examples. This would lead to poor generalization to the target domain. Therefore, a regularizer is necessary such that each target sample is visited with equal probability. This is the function of the visit loss. It is defined by the cross entropy between the uniform distribution over target samples and the probability of visiting some target sample starting in any source sample.
$$\mathcal{L}_{\text{visit}} = H\!\left(V, P^{\text{visit}}\right), \qquad P^{\text{visit}}_j = \left\langle P^{ab}_{ij} \right\rangle_i, \qquad V_j = \frac{1}{|B|}.$$

The overall objective adds these association terms, with scalar weights $\alpha_1, \alpha_2$, to the classification loss on the source domain:

$$\mathcal{L} = \mathcal{L}_{\text{classification}} + \alpha_1\, \mathcal{L}_{\text{walker}} + \alpha_2\, \mathcal{L}_{\text{visit}}.$$
The association loss enforces similar embeddings for source and target samples (assimilation), while the classification loss minimizes the prediction error on the source data (discrimination).
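To make the two terms concrete, here is a minimal PyTorch sketch (the function name `association_losses`, the `visit_weight` argument, and the exact reductions are illustrative assumptions, not the authors' reference implementation):

```python
import torch
import torch.nn.functional as F

def association_losses(emb_src, emb_tgt, labels_src, visit_weight=1.0):
    """Walker and visit losses, following the definitions above.

    emb_src:    (n_s, d) embeddings of the labeled source batch
    emb_tgt:    (n_t, d) embeddings of the unlabeled target batch
    labels_src: (n_s,)   integer class labels of the source batch
    """
    # Similarity matrix M_ij = <A_i, B_j> between source and target embeddings.
    M = emb_src @ emb_tgt.t()                    # (n_s, n_t)

    # Walker transition probabilities: source -> target -> source.
    p_ab = F.softmax(M, dim=1)                   # P(B_j | A_i)
    p_ba = F.softmax(M.t(), dim=1)               # P(A_i | B_j)
    p_aba = p_ab @ p_ba                          # round-trip probs, (n_s, n_s)

    # Target distribution T: uniform over source pairs of the same class.
    same_class = labels_src[:, None].eq(labels_src[None, :]).float()
    T = same_class / same_class.sum(dim=1, keepdim=True)

    eps = 1e-8  # numerical safety inside the logarithms
    # Walker loss: cross entropy H(T, P^aba), averaged over source samples.
    walker_loss = -(T * torch.log(p_aba + eps)).sum(dim=1).mean()

    # Visit loss: cross entropy between the uniform distribution over target
    # samples and the probability of visiting each target sample.
    p_visit = p_ab.mean(dim=0)                   # (n_t,)
    visit_loss = -torch.log(p_visit + eps).mean()

    return walker_loss + visit_weight * visit_loss
```

In a training step this term is simply added to the usual source classification loss, e.g. `loss = F.cross_entropy(logits_src, labels_src) + association_losses(emb_src, emb_tgt, labels_src)`, so any network that exposes its embedding layer can be adapted without structural changes.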

Conclusion

The key idea is to optimize a joint loss function combining the classification loss on the source domain with an association loss that imposes consistency of source and target embedding. The implementation is simple, works with arbitrary architectures in an end-to-end manner and introduces no significant additional computational and structural complexity.
