Notes on "Gradually Vanishing Bridge for Adversarial Domain Adaptation" (CVPR 2020)

Two examples of the drawbacks of existing methods:

Domain Separation Networks / NeurIPS 2016

Drawback: because image reconstruction demands a large amount of domain information, reconstructing images is likely to leave considerable residual domain-specific information in the learned domain-invariant features, which hurts transfer.

Multi-Adversarial Domain Adaptation / AAAI 2018

Drawback: too many discriminators may break the fragile equilibrium of adversarial training.

 

Bridge: the discrepancy between the current representation and the ideal representation (the norm of the difference between the two vectors).

Current representation: the features extracted by the current network.

Ideal representation: the representation after feature-distribution matching.
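Written as a formula (a minimal formalization of the definition above; f(x), f*(x) and γ(x) are my own shorthand for the current representation, the ideal representation and the bridge, not necessarily the paper's notation):

```latex
% The bridge is the residual between the current and the ideal representation;
% its size is measured by the norm of that residual.
\gamma(x) = f(x) - f^{*}(x), \qquad \text{bridge size} = \lVert \gamma(x) \rVert .
% Equivalently, subtracting the bridge recovers the ideal representation:
f^{*}(x) = f(x) - \gamma(x).
```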

Key points: the bridge gradually shrinks during training (hence "gradually vanishing bridge"), and GVB is a general component that can be plugged into adversarial domain adaptation methods:

  1. GVB on the Generator (GVB-G)
  2. GVB on the Discriminator (GVB-D)
  3. GVB on both the Generator and the Discriminator (GVB-GD)

 

Bridge on the generator: the discrepancy between the source (or intermediate) domain and the target domain.

In the discriminator (as drawn in the figure): the dashed line ------- is the original discriminator's decision boundary between the domains, which still misclassifies some samples; the dotted line ······· is the discriminator's bridge (not a domain boundary, but the discrepancy from the ideal boundary).

A line lying between these two would separate the source and target domains exactly.

The network processes the source and target domains in exactly the same way (the two branches in the figure are symmetric). The feature extractor's output is fed into both G2 and G3, and the outputs of G2 and G3 are two vectors of the same dimension.

Of these two outputs, one is the currently extracted representation and the other is the bridge, i.e. the source-specific (domain-specific) features it still contains; subtracting the bridge from the current representation yields the ideal, distribution-matched representation.

The resulting representation is fed into D1 and D2 to obtain the discriminator's output and the discriminator bridge's output. The bridge provides extra discriminative ability and corrects the discriminator's result; here the two outputs are added to obtain the corrected prediction.
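As a concrete illustration of this forward pass, here is a minimal PyTorch-style sketch. It is my own reconstruction from the description above, not the authors' code: the module names G2/G3/D1/D2 follow the annotated figure, the layer sizes are placeholders, assigning G2 to the representation head and G3 to the bridge head is an assumption, and the adversarial training machinery (gradient reversal, losses) is omitted.

```python
import torch
import torch.nn as nn

class GVBModel(nn.Module):
    def __init__(self, in_dim=512, feat_dim=256, num_classes=31):
        super().__init__()
        # Shared feature extractor (a CNN backbone in the real model; a small MLP here).
        self.feature_extractor = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # G2: head for the current representation; G3: generator-bridge head.
        # Both output vectors of the same dimension, as the notes describe.
        self.G2 = nn.Linear(feat_dim, num_classes)
        self.G3 = nn.Linear(feat_dim, num_classes)
        # D1: domain discriminator; D2: discriminator-bridge head.
        self.D1 = nn.Linear(num_classes, 1)
        self.D2 = nn.Linear(num_classes, 1)

    def forward(self, x):
        h = self.feature_extractor(x)
        rep = self.G2(h)              # current representation
        gamma = self.G3(h)            # generator bridge: the domain-specific residual
        purified = rep - gamma        # subtracting the bridge moves toward the ideal representation
        d_main = self.D1(purified)    # discriminator output
        d_bridge = self.D2(purified)  # discriminator-bridge output
        d_total = d_main + d_bridge   # the bridge output is added to correct the domain prediction
        return purified, gamma, d_total

# Example usage:
# purified, gamma, d_total = GVBModel()(torch.randn(8, 512))
```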

 

GVB on G:

If Gamma (the bridge output for a sample x) is relatively large, this indicates that:

1: x is a hard sample, i.e. it contains a relatively large amount of domain-specific features

2: the shared features and this sample's specific features are hard to separate (see the loss sketch below)
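A hedged sketch of how the generator-side bridge magnitude could enter the training objective so that the bridge gradually vanishes. The weight lambda_bridge and the surrounding classification/adversarial losses are placeholders of my own, not the paper's exact formulation:

```python
import torch

def gvb_g_objective(cls_loss, adv_loss, gamma, lambda_bridge=1.0):
    """cls_loss, adv_loss: scalar losses computed elsewhere in the usual adversarial DA setup.
    gamma: (batch, dim) tensor of generator-bridge outputs (G3's output in the sketch above)."""
    # Per-sample bridge magnitude: a large value flags a hard sample whose
    # domain-specific content is not yet separated from the shared features.
    bridge_mag = gamma.abs().mean(dim=1)   # shape: (batch,)
    # Minimizing the average magnitude is what makes the bridge "gradually vanish".
    bridge_penalty = bridge_mag.mean()
    return cls_loss + adv_loss + lambda_bridge * bridge_penalty
```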

 

 

 
