Domain-invariant Feature Exploration for Domain Generalization (Reading Notes)

1 Title

        Domain-invariant Feature Exploration for Domain Generalization (Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie) [TMLR, 07/2022]

2 Conclusion

    This paper focuses on domain-generalized representation learning. To address the problem that the invariant features obtained by existing representation-learning methods are insufficient, it divides invariant features, for the first time, into internally-invariant (intra-domain) and mutually-invariant (inter-domain) features, and proposes an improved domain generalization (DG) method: DIFEX. Experiments on image and time-series data in both multi-source and single-source scenarios show that DIFEX achieves better results by fully exploiting features, while remaining general across scenarios and easy to extend.

3 Good Sentence

        1、The popularity and effectiveness of domain-invariant learning naturally motivate us to seek the rationale behind this kind of approach: what are the domain-invariant features and how to further improve its performance on DG? (the motivation of this research)

        2、In this paper, we take a deep analysis of the domain-invariant features. Specifically, we argue that domain-invariant features should be originating from both internal and mutual sides: the internally-invariant features capture the intrinsic semantic knowledge of the data while the mutually-invariant features learn the cross-domain transferable knowledge. (the principle of the method proposed by this paper)

        3、As discussed early, Fourier phase features alone are insufficient to obtain enough discriminative features for classification. Thus, we explore the mutually-invariant features by leveraging the cross-domain knowledge contained in multiple training domains. (how this method improves the learning of invariant features)

        4、This demonstrates the great performance of our approach in these datasets. Moreover, we see that alignments across domains and exploiting characteristics of data own can both bring remarkable improvements. (the experimental results show the improvement of DIFEX)

This paper focuses on domain-generalized representation learning. To address the insufficiency of the invariant features obtained by existing representation learning, it divides invariant features, for the first time, into internally-invariant and mutually-invariant features and proposes an improved domain generalization (DG) method: DIFEX.

For fairness, the last-layer features are split in two: one half is used to learn internally-invariant features and the other half to learn mutually-invariant features;

To encourage feature diversity, a regularization term is introduced to make the two kinds of features as different as possible.
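The split-plus-diversity idea above can be sketched in a few lines. This is a pure-Python illustration, not the paper's implementation: the helper names and the exact form of the regularizer (negative squared L2 distance between the two halves, so that minimizing the total loss pushes them apart) are assumptions based on the description.

```python
# Illustrative sketch only; helper names and the regularizer's exact form
# are assumptions, not taken from the DIFEX code.

def split_features(z):
    """Split a feature vector in half: the first half is used for
    internally-invariant (distillation) learning, the second half for
    mutually-invariant (alignment) learning."""
    mid = len(z) // 2
    return z[:mid], z[mid:]

def diversity_regularizer(z_in, z_mu):
    """Negative squared L2 distance between the two halves; minimizing this
    term maximizes their difference, encouraging diverse features."""
    return -sum((a - b) ** 2 for a, b in zip(z_in, z_mu))

z = [0.2, 0.5, -0.1, 0.9]            # a toy 4-dim feature vector
z_in, z_mu = split_features(z)       # ([0.2, 0.5], [-0.1, 0.9])
reg = diversity_regularizer(z_in, z_mu)
```

In the actual method this term would be added to the classification, distillation, and alignment losses with a trade-off weight.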

Details:

        1、Domain-invariant features are learned through a simple distillation framework: although this adds extra cost, it keeps prediction end-to-end and avoids unnecessary FFT computation at inference time. First, a teacher network is trained to learn a classification model from Fourier phase information, so that it captures the phase information useful for classification. After training, we assume the teacher has learned the classification-relevant Fourier phase features; when the student model is trained, it can then match this part of the teacher's features to learn internally-invariant features. Concretely, half of the student's features learn internally-invariant features via distillation, the other half learn mutually-invariant features via alignment, and a regularization term pushes the two kinds of features to be as inconsistent as possible, so the model obtains richer features and stronger robustness.
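The two ingredients of this step can be sketched as follows: Fourier-phase feature extraction (using a naive DFT here for self-containment; a real implementation would use an FFT library and deep-network features) and the teacher-to-student distillation loss. All function names are hypothetical, not from the paper's code.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real-valued sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def fourier_phase(x):
    """Phase (angle) of each DFT coefficient; the amplitude is discarded,
    keeping the structural information the teacher network is trained on."""
    return [cmath.phase(c) for c in dft(x)]

def distill_loss(student_half, teacher_feats):
    """Mean squared error pulling the student's first feature half toward
    the frozen teacher's phase-based features (the distillation term)."""
    return sum((s - t) ** 2
               for s, t in zip(student_half, teacher_feats)) / len(student_half)
```

At inference time only the student is used, which is why prediction stays end-to-end with no FFT.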
