Structure-Preserved Multi-Source Domain Adaptation (Reading Notes)

Abstract

We aim to preserve the whole structure of the source domains and transfer it to serve the task on the target domain. The source and target data are put together for clustering, which simultaneously explores the structures of the source and target domains. The structure-preserving information from the source domains further guides the clustering process on the target domain.

Introduction

There are two strategies for adaptation:

  • feature space adaptation

  • classifier adaptation.

Focus: multi-source unsupervised domain adaptation.
We are the first to formulate multi-source unsupervised domain adaptation as a semi-supervised clustering framework [15].
Experiments show large improvements over several state-of-the-art methods.

Related work

Here we give a brief introduction to unsupervised domain adaptation and multi-source domain adaptation, respectively, and highlight the differences between these works and ours.
Related topic: discovering latent domains.

The proposed method

  • Problem setting and notation:
  • alignment projections P1 and P2;
  • Suppose the alignment projections P1 and P2 are given; we start from the aligned features Zs1, Zs2, Zt1 and Zt2. (Open question in these notes: why, and how are P1 and P2 obtained? A minimal sketch of the assumed setup follows this list.)
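The notes do not say how P1 and P2 are learned. As a minimal sketch of the assumed setup (the function align and the variable names Xs1, Xs2, Xt below are hypothetical, not from the paper), each projection simply maps one source domain and the target into a shared space:

```python
import numpy as np

def align(Xs, Xt, P):
    """Hedged sketch: project one source domain Xs and the target Xt into a
    shared space with the same (given) alignment projection P. How P1 and P2
    are obtained (e.g., by some subspace alignment step) is not covered here."""
    return Xs @ P, Xt @ P

# Assumed usage (names are illustrative only):
# Zs1, Zt1 = align(Xs1, Xt, P1)   # source domain 1 aligned with the target
# Zs2, Zt2 = align(Xs2, Xt, P2)   # source domain 2 aligned with the target
```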

A. Problem Definition

  • How can the structures of the different domains be incorporated to predict the labels of the target data?

B. Objective Function

The problem is formulated as a clustering problem. Inspired by ensemble clustering [16], [17], the source and target data are put together for clustering, which explores the structure of the target domain while keeping the structures of the source domains consistent with their label information as much as possible.

  • The objective function of the model is given in Eq. (1):

[Images: Eq. (1), the objective function of the model]

The objective function consists of two parts.

  • One part is the standard K-means with squared Euclidean distance on the combined source and target data;
  • the other is a term measuring the disagreement between the indicator matrices Hs1, Hs2 and the label information of the source domains (a hedged sketch of one plausible form is given below).
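To make the two-part structure concrete, the following is a hedged sketch of one plausible instantiation of Eq. (1); the shared centroid matrix M, the one-hot source label matrices Ys_i, and the trade-off parameter λ are assumptions for illustration and may not match the paper's exact formulation (which also involves G1, G2, M1, M2).

```latex
% Hedged sketch only: a plausible two-part objective matching the description
% above, NOT necessarily the paper's exact Eq. (1).
\[
\min_{M,\,H_{s_1},H_{s_2},H_t}\;
  \underbrace{\bigl\| [\,Z_{s_1}\;Z_{s_2}\;Z_t\,] - M\,[\,H_{s_1}\;H_{s_2}\;H_t\,] \bigr\|_F^2}_{\text{K-means on the combined data}}
  \;+\;\lambda \sum_{i=1}^{2}
  \underbrace{\bigl\| H_{s_i} - Y_{s_i} \bigr\|_F^2}_{\text{disagreement with source labels}}
\]
```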

Based on previous work [32], we gain a new insight into the objective function in Eq. (1), which can be rewritten in the following formulation.

[Image: rewritten formulation of the objective function]

Solutions

Since the problem in Eq. (4) is not jointly convex in all the variables, each unknown variable is updated iteratively by taking derivatives while the others are fixed (a hedged sketch of the overall alternating procedure follows the list of update steps below).

  • A. Fixing the others, update G1, G2.
    i. omitted
  • B. Fixing the others, update M1, M2.
    i. omitted
  • C. Fixing the others, update Hs1, Hs2.
    i. omitted
    Difference: an exhaustive search over the possible assignments is used to find the optimal solution.
  • D. Fixing the others, update Ht.
    i. omitted
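Putting the update steps together, below is a minimal, hedged Python sketch of the alternating procedure, not the paper's exact solver: it collapses the two sources into one labeled block, uses the standard K-means closed form for the centroid update, and mimics the exhaustive search over assignments with a penalty lam for source points that leave the cluster indexed by their label (G1, G2, M1, M2 and the per-source structure are omitted).

```python
import numpy as np

def alternating_updates(Zs, Ys, Zt, n_clusters, lam=1.0, n_iter=50, seed=0):
    """Hedged sketch of the alternating minimization described above, NOT the
    paper's exact algorithm: semi-supervised K-means on the combined source
    and target data, with a penalty lam for source points whose assignment
    disagrees with their label (labels are assumed to index the clusters)."""
    Z = np.vstack([Zs, Zt])                      # combined data, shape (n, d)
    n_s = Zs.shape[0]
    rng = np.random.default_rng(seed)
    H = rng.integers(0, n_clusters, size=Z.shape[0])   # initial assignments
    for _ in range(n_iter):
        # Fix assignments, update centroids (closed form, as in K-means).
        M = np.stack([Z[H == k].mean(axis=0) if np.any(H == k)
                      else Z[rng.integers(Z.shape[0])]  # re-seed empty cluster
                      for k in range(n_clusters)])
        # Fix centroids, update assignments by exhaustive search over clusters.
        dist = ((Z[:, None, :] - M[None, :, :]) ** 2).sum(axis=-1)  # (n, K)
        penalty = np.zeros_like(dist)
        penalty[:n_s, :] = lam                   # source rows pay lam ...
        penalty[np.arange(n_s), Ys] = 0.0        # ... unless they keep their label
        H = (dist + penalty).argmin(axis=1)
    return M, H
```

The penalty term here is a stand-in for the disagreement term in the objective; making lam very large effectively clamps the source assignments to their labels, while a small lam lets the clustering structure dominate.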

Experimental Results

The performance of the proposed method is evaluated on object recognition and face identification, in comparison with several state-of-the-art methods.

  • A. Experiment setting
    i. Databases
    ii. Competing methods and implementation details
  • B. Object recognition
    i. Results with a single source
    ii. Results with multiple sources
    iii. Parameter analysis
  • C. Face identification
    i. Domain adaptation results

Conclusions

In this paper, we proposed a novel algorithm for multi-source unsupervised domain adaptation. Different from existing studies, which learn a classifier in the common space with the source data and predict the labels of the target data, we preserved the whole structures of the source domains for the task on the target domain. To the best of our knowledge, we were the first to formulate the problem as a semi-supervised clustering problem with missing values. Extensive experiments on two widely used databases demonstrated the large improvements of our proposed method over several state-of-the-art methods.
