Notes from my process of reading this paper; if anything is wrong, corrections from more experienced readers are welcome.
This paper forms a small part of the theoretical background of Paper 2.
After finishing Paper 2 I was still a bit confused, so I looked up this paper to read.
Paper title: Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation
Paper link:
Code available: yes
0.Abstract:
Unsupervised Domain Adaptation (UDA) transfers predictive models from a fully-labeled source domain to an unlabeled target domain. In some applications, however, it is expensive even to collect labels in the source domain, making most previous works impractical. To cope with this problem, recent work performed instance-wise cross-domain self-supervised learning, followed by an additional fine-tuning stage. However, the instance-wise self-supervised learning only learns and aligns low-level discriminative features. In this paper, we propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA). PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains. Our framework captures category-wise semantic structures of the data by in-domain prototypical contrastive learning; and performs feature alignment through cross-domain prototypical self-supervision. Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
Abstract summary:
In the UDA setting, collecting labels even in the source domain can be expensive. To address this, recent work performs instance-wise cross-domain self-supervised learning followed by a fine-tuning stage. Its drawback is that instance-wise self-supervision only learns and aligns low-level discriminative features. This paper proposes an end-to-end Prototypical Cross-domain Self-supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).
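To make that limitation concrete, here is a minimal sketch (my own illustration, not code from the paper) of the instance-wise contrastive objective (InfoNCE-style) that such prior work builds on: every instance is treated as its own class, so the model only learns to tell individual images apart rather than capturing category-level structure.

```python
import torch
import torch.nn.functional as F

def instance_nce_loss(query, key, memory_bank, temperature=0.07):
    """InfoNCE-style instance discrimination loss (illustrative sketch).

    query:       (B, D) features of one augmented view of a batch
    key:         (B, D) features of the same images under another augmentation
    memory_bank: (N, D) stored features of other instances, used as negatives
    """
    query = F.normalize(query, dim=1)
    key = F.normalize(key, dim=1)
    negatives = F.normalize(memory_bank, dim=1)

    # Positive logit: similarity between the two views of the same instance.
    l_pos = (query * key).sum(dim=1, keepdim=True)   # (B, 1)
    # Negative logits: similarity to every other stored instance.
    l_neg = query @ negatives.t()                    # (B, N)

    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # The positive sits at index 0 for every query.
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)
```

Note that nothing in this objective ties together features of different images from the same category, which is exactly the gap PCS targets.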
PCS not only performs cross-domain low-level feature alignment, but also encodes and aligns semantic structures in a shared embedding space across domains. It captures the category-wise semantic structure of the data via in-domain prototypical contrastive learning, and performs feature alignment through cross-domain prototypical self-supervision.
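As a rough sketch of how these two prototypical components could look in code (my own reading; the k-means prototypes, the entropy-minimization form of alignment, and all names here are assumptions, not taken from the official PCS implementation):

```python
import torch
import torch.nn.functional as F

def proto_nce_loss(features, prototypes, assignments, temperature=0.1):
    """In-domain prototypical contrastive loss (ProtoNCE-style sketch).

    features:    (B, D) sample embeddings
    prototypes:  (K, D) cluster centroids, e.g. from k-means within a domain
    assignments: (B,)   long tensor, index of each sample's own prototype
    """
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    # Pull each sample toward its own prototype, push it away from the others.
    logits = features @ prototypes.t() / temperature   # (B, K)
    return F.cross_entropy(logits, assignments)

def cross_domain_proto_alignment(src_feats, tgt_prototypes, temperature=0.1):
    """Hypothetical cross-domain prototypical self-supervision: sharpen the
    similarity distribution between source samples and target-domain
    prototypes so each sample commits to one cross-domain prototype."""
    src_feats = F.normalize(src_feats, dim=1)
    tgt_prototypes = F.normalize(tgt_prototypes, dim=1)
    p = F.softmax(src_feats @ tgt_prototypes.t() / temperature, dim=1)
    # Entropy minimization as one simple instantiation of "alignment".
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()
```

In this reading, the first loss injects category-level structure within each domain (each sample clusters around its centroid), while the second encourages source samples to match target prototypes confidently, aligning the semantic structures of the two domains.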
1.Introduction:
Due to the domain-shift problem, deep neural networks trained on a specific dataset often fail to generalize to new domains. Unsupervised Domain Adaptation (UDA) transfers a predictive model from a fully-labeled source domain to an unlabeled target domain. Although the lack of label information in the target domain is a challenge, many UDA methods achieve high accuracy there by exploiting the abundant explicit supervision in the source domain together with unlabeled target samples for domain alignment. In some real-world applications, however, even providing large-scale annotations in the source domain is challenging, because annotation is costly and difficult. (This is exactly the problem this paper sets out to solve.)
In this paper, to cope with the source