1 Title
Learning Transferable Features with Deep Adaptation Networks (authors: Mingsheng Long, Yue Cao, Jianmin Wang, Michael I. Jordan) (ICML 2015)
2 Conclusion
DAN is built on DDC and addresses two of its shortcomings:
1. DDC adapts only a single network layer, which may not be enough: as Jason Yosinski's work (How transferable are features in deep neural networks?) clearly showed, transferability differs across layers. So DAN adapts multiple layers;
2. DDC uses a single-kernel MMD, typically with a Gaussian or linear kernel, and any one fixed kernel may not be optimal. DAN instead uses multi-kernel MMD (MK-MMD), which constructs a total kernel as a weighted combination of multiple kernels.
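The weighted-combination idea above can be sketched numerically. Below is a minimal NumPy illustration of a biased MK-MMD² estimate over a bank of Gaussian kernels; the bandwidths and uniform weights are illustrative assumptions, not the paper's learned kernel weights (which DAN optimizes jointly with the network).

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """RBF kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    diff = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))

def mk_mmd2(source, target, sigmas, betas):
    """Biased estimate of squared multi-kernel MMD.

    The total kernel is a convex combination sum_m beta_m * k_m, so
    MMD^2 under it is the same weighted sum of per-kernel MMD^2 terms.
    """
    total = 0.0
    for sigma, beta in zip(sigmas, betas):
        k_ss = gaussian_kernel(source, source, sigma).mean()
        k_tt = gaussian_kernel(target, target, sigma).mean()
        k_st = gaussian_kernel(source, target, sigma).mean()
        total += beta * (k_ss + k_tt - 2.0 * k_st)
    return total
```

With samples drawn from identical distributions the estimate is near zero, while a shifted target distribution yields a clearly larger value, which is the quantity DAN penalizes at multiple layers.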
3 Good Sentence
1. "The generalization error of supervised learning machines with limited training samples will be unsatisfactorily large, while manual labeling of sufficient training data for diverse application domains may be prohibitive" (the shortcomings of existing methods and the motivation for Deep Adaptation Networks)
2. "Domain adaptation establishes knowledge transfer from the labeled source domain to the unlabeled target domain by exploring domain-invariant structures that bridge different domains of substantial distribution discrepancy. One of the main approaches to establishing knowledge transfer is to learn domain-invariant models from data, which can bridge the source and target domains in an isomorphic latent feature space" (illustrates the method proposed in the research)
3. "Kernel choice is critical to the testing power of MMD since different kernels may embed probability distributions in different RKHSs where different orders of sufficient statistics can be emphasized. This is crucial for moment matching, which is not well explored by previous domain adaptation methods." (the innovation or superiority of this method)