PCB-PAST (ICCV19): Self-training with progressive augmentation for unsupervised cross-domain person re-identification

self-training blog

【ICCV2019 reid】Self-training with progressive augmentation for unsupervised cross-domain person reid

Paper link
The official code has not been fully released yet.

A paper on unsupervised domain-adaptive re-ID, published at ICCV 2019.

Preface

Unsupervised domain-adaptive re-ID tackles the setting where the source dataset is labeled and the target dataset is unlabeled; training a model on these two datasets that performs well on both is a challenging task.
In this paper, the authors propose a progressive augmentation framework (PAST), a self-training method that gradually improves the model's performance on the target domain. It consists of two stages: a conservative stage and a promoting stage. The conservative stage captures the local structure of the target-domain data points with triplet-based losses to improve the feature representation. The promoting stage appends a changeable classification layer to the last layer of the network and continuously optimizes it, so the model can exploit global information about the data distribution. More importantly, a novel self-training strategy is proposed that alternately applies the conservative and promoting stages to progressively strengthen the model. In addition, to improve the reliability of the selected triplet samples, a ranking-based triplet loss is introduced in the conservative stage: a label-free objective based on the pairwise similarity between data points.

Introduction

Generally speaking, a model trained on the source domain and tested directly on the target domain performs poorly, because of the domain shift between the two domains. In unsupervised cross-domain re-ID, the problem therefore becomes how to effectively transfer a model pre-trained on the source domain to the target domain without supervision.
Some domain-transfer methods assign pseudo labels, e.g. by clustering with k-means or DBSCAN and treating each cluster as one person. Their drawback is that performance depends heavily on the clustering quality.
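As a quick sketch of this pseudo-labeling idea (not the authors' code; the toy features and DBSCAN parameters are my own assumptions), clustering target-domain features and treating each cluster as one identity could look like this:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy target-domain features: two well-separated groups plus one outlier.
rng = np.random.default_rng(0)
feats = np.vstack([
    rng.normal(0.0, 0.05, size=(8, 16)),
    rng.normal(5.0, 0.05, size=(8, 16)),
    np.full((1, 16), 50.0),          # an outlier DBSCAN should reject
])

labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(feats)
# Each non-negative cluster id is treated as one pseudo "person" identity;
# -1 marks noise samples, which receive no pseudo label.
pseudo_labeled = feats[labels != -1]
```

Note how the quality of `labels` is entirely at the mercy of the clustering, which is exactly the weakness the paper sets out to address.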
PAST:

  1. Suppress error amplification early in training, when the generated pseudo labels are of poor quality.
  2. Once label quality improves, progressively add high-confidence labeled samples for self-training.
    (Figure 1)
Conservative stage

As we can see in Figure 1, the fraction of correctly labeled data is very low at the beginning because of domain shift, so we need to select confidently labeled samples to reduce label noise. Image similarity is commonly used as the confidence measure, but the usual clustering-based triplet loss (CTL) is sensitive to pseudo-label quality, so the authors propose a label-free loss called the ranking-based triplet loss (RTL).
Concretely, a ranking score matrix is computed for the whole target dataset, triplets are generated from it (positives from the top n neighbors, negatives from ranks (n, 2n]), and the loss is then computed with the proposed RTL.
The conservative stage is mainly used early in training, when label quality is low, to prevent model collapse.
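The triplet-generation rule above (positives from the top n neighbors, negatives from ranks (n, 2n]) can be sketched as follows; this is my own illustration over a toy distance matrix, not the authors' code:

```python
import numpy as np

def sample_ranking_triplets(dist, eta=3, seed=0):
    """Sample one triplet per anchor from a pairwise distance matrix:
    the positive comes from the anchor's top-eta nearest neighbours and
    the negative from ranks (eta, 2*eta] of the sorted row."""
    rng = np.random.default_rng(seed)
    order = np.argsort(dist, axis=1)[:, 1:]        # nearest first, self removed
    triplets = []
    for a in range(dist.shape[0]):
        pos = rng.choice(order[a, :eta])           # top-eta neighbours
        neg = rng.choice(order[a, eta:2 * eta])    # ranks (eta, 2*eta]
        triplets.append((a, int(pos), int(neg)))
    return triplets

rng = np.random.default_rng(1)
x = rng.standard_normal((10, 4))
dist = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
triplets = sample_ranking_triplets(dist)
```

By construction every sampled positive is ranked closer to the anchor than its negative, which is what makes the selection label-free.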

Promoting stage

The set of training triplets grows dynamically when training on large datasets, and using the triplet loss alone easily gets stuck in a local optimum (compare CTL and CTL+RTL in Figure 1). The solution: treat each cluster as one class and cast the learning process as a classification problem (i.e. use a cross-entropy loss). Because poor initial label quality would badly hurt classification, the promoting stage is placed after the conservative stage.

Method

(framework figure)
The overall framework is shown above. It contains two modules: the conservative stage and the promoting stage.
First, the model is pre-trained on the source domain and then used to extract features F for all images of the target domain T; these serve as the input to our framework. In the conservative stage, based on the ranking score matrix, HDBSCAN clustering produces a more reliable training set T_U, a subset of T. Combining the clustering-based triplet loss with the proposed ranking-based triplet loss, the local structure of the currently updated training set is used to optimize the model. Next, the optimized model is used to extract new features F_U. In the promoting stage, the network is further optimized from a classification viewpoint, using a cross-entropy loss. Finally, the model is trained by alternating between the conservative stage and the promoting stage.

Conservative stage

The triplet loss has proven useful for generating reliable triplets from the target data and for exploring its potentially meaningful local structure.
Unlike in supervised training, however, it is hard to build high-quality triplets from pseudo labels.
In this paper, for the training set T we first extract a feature F for every sample, then apply k-reciprocal encoding with the Jaccard distance to obtain the distance matrix D_J(x_i):
![equation](https://img-blog.csdnimg.cn/20190801175121692.png)
which stores the Jaccard distance between each sample and every other sample.
Next, each D_J(x_i) is sorted in ascending order, and each sample's sorted distances are stored in D_R:
![equation](https://img-blog.csdnimg.cn/20190801194039550.png)
where D_R(x_i) denotes the sorted D_J(x_i).
Finally, HDBSCAN clustering is applied to D_R to partition the training images into clusters. Images that HDBSCAN leaves unassigned to any cluster are simply discarded, so the images with assigned labels form the updated training set T_U, which is used to optimize the model.
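The D_J-to-D_R construction can be sketched with NumPy. As a simplifying assumption I use a plain L2 distance on toy features as a stand-in for the Jaccard distance from k-reciprocal encoding, and the HDBSCAN call itself is omitted:

```python
import numpy as np

# Stand-in for the Jaccard distance matrix D_J (the k-reciprocal
# computation is omitted): plain L2 distance on toy features.
rng = np.random.default_rng(2)
feats = rng.standard_normal((6, 8))
D_J = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)

# Sort each row ascending: row i of D_R is D_J(x_i) in increasing order.
D_R = np.sort(D_J, axis=1)
neighbour_ids = np.argsort(D_J, axis=1)   # which sample sits at each rank
# D_R would then be clustered with HDBSCAN; unclustered images are dropped.
```

Each row of `D_R` starts at zero (the sample's distance to itself) and increases monotonically, which is what the ranking-based sampling later relies on.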

Clustering-based Triplet Loss (CTL)

The first loss we use is the batch-hard mining triplet loss. We randomly sample P clusters and K images per cluster, so the mini-batch size is P×K. For each anchor image x_a, the corresponding hardest positive x_p and hardest negative x_n are selected to form a triplet.
Since the pseudo labels come from a clustering method, we rename it CTL:
![equation](https://img-blog.csdnimg.cn/2019080120000038.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L20wXzM3NjE1Mzk4,size_16,color_FFFFFF,t_70)
where x_{i,j} denotes the j-th image of the i-th cluster.
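A minimal NumPy sketch of batch-hard mining (the margin value and toy batch are my assumptions, not the paper's settings):

```python
import numpy as np

def batch_hard_triplet_loss(feats, labels, margin=0.3):
    """Batch-hard mining over a P*K batch: for each anchor, the farthest
    sample with the same (pseudo) label is the hardest positive and the
    closest sample with a different label the hardest negative, combined
    in a hinge with a fixed margin."""
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    hardest_pos = np.where(same, d, -np.inf).max(axis=1)
    hardest_neg = np.where(~same, d, np.inf).min(axis=1)
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()

# Two tight, well-separated pseudo-clusters: the loss is already zero.
feats = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
labels = np.array([0, 0, 1, 1])
loss = batch_hard_triplet_loss(feats, labels)
```

When the pseudo labels are wrong (e.g. identical features split across labels), the hinge saturates at the margin, which is one way to see why CTL is so sensitive to clustering quality.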

Ranking-based Triplet Loss (RTL)

Obviously, the effectiveness of CTL depends heavily on the clustering quality, and it is hard to tell which clustering results are right and which are wrong.
We therefore make full use of D_R and propose RTL. For a training anchor x_a, positives are selected from its top η nearest neighbors and negatives from ranks (η, 2η]. In addition, instead of the fixed margin in CTL, we introduce a soft margin based on the ranking positions of x_p and x_n.
RTL is formulated as:
![equation](https://img-blog.csdnimg.cn/20190801201226579.png)
The number of selected samples is the same as for CTL; η is the maximum rank from which positives are selected.
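To make the soft-margin idea concrete, here is a hedged sketch: a hinge whose margin grows with the rank gap between the chosen negative and positive. The (rank_n - rank_p) / (2η) normalization is my illustrative assumption; the paper's equation above gives the exact form:

```python
import numpy as np

def ranking_triplet_loss(d_ap, d_an, rank_p, rank_n, eta=5):
    """Hinge on anchor-positive vs anchor-negative distance with a *soft*
    margin derived from the ranking positions of the chosen samples: a
    negative ranked far below the positive demands a larger gap.
    The normalisation by 2*eta is an illustrative assumption."""
    soft_margin = (rank_n - rank_p) / (2.0 * eta)
    return float(np.maximum(d_ap - d_an + soft_margin, 0.0))
```

Unlike CTL, nothing here consumes a pseudo label; only distances and ranks enter the loss, which is why RTL is called label-free.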

In summary, combining CTL and RTL, the total loss is:
(equation)
where λ balances the two losses.

Promoting Stage

Using the triplet loss alone easily falls into a local optimum. On top of the clustering results, we therefore also apply a classification loss to improve the model's generalization ability.
The loss used is the softmax cross-entropy loss.
(equation)
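For completeness, a minimal sketch of the classification objective, with each cluster index serving directly as a class index:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Plain softmax cross-entropy over cluster ids (cluster index =
    class index), as used by the promoting stage's classifier."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Because this loss aggregates over all classes at once, it injects the global information about the data distribution that pairwise triplet losses cannot see.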

Feature-based Weight Initialization for Classifier

Because the number of clusters C may differ after each clustering, the newly added classification layer CL has to be re-initialized after every HDBSCAN run. Instead of random initialization, we use the mean feature of each cluster as the initial parameters. Specifically, for each cluster c we compute the mean feature F_c, and the parameters W of CL are initialized as:
![equation](https://img-blog.csdnimg.cn/20190801202153653.png)
where W ∈ R^{d×C} and d is the feature dimension.
This initialization avoids the sharp performance loss caused by random initialization.
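A sketch of this feature-based initialization (the L2 normalization of each column is an assumption of mine, not stated in the text above):

```python
import numpy as np

def init_classifier_weights(feats, labels):
    """Build W (d x C) from the mean feature of each cluster, so the fresh
    classification layer starts aligned with the current feature space
    instead of being random. Column-wise L2 normalisation is an assumption."""
    classes = np.unique(labels)
    W = np.stack([feats[labels == c].mean(axis=0) for c in classes], axis=1)
    W /= np.linalg.norm(W, axis=0, keepdims=True)
    return W

feats = np.array([[2.0, 0.0], [2.0, 0.0], [0.0, 3.0], [0.0, 3.0]])
labels = np.array([0, 0, 1, 1])
W = init_classifier_weights(feats, labels)
```

With this W, the initial logit for each sample is its similarity to its own cluster mean, so training does not restart from scratch after every re-clustering.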

Alternate Training

The learning process should progressively improve the model's generalization ability. In this paper, we propose an effective yet simple self-training strategy: the conservative stage and the promoting stage are trained alternately.
The pseudocode is as follows:
(algorithm figure)
(figure)

Experiments

The experiments use Market-1501, DukeMTMC-reID, and CUHK03.
The backbone is PCB (ECCV 2018). Personally I feel this backbone is a bit too strong: as the table below shows, plain PCB by itself already reaches an mAP in the twenties on Duke and Market under direct transfer, whereas, as I recall, other unsupervised domain adaptation methods just use a ResNet-50.
(table)
Comparison of clustering methods:
(table)
Comparison with the state of the art:
(table)
Overall, on top of clustering, the method adds a classification loss and a triplet loss, and proposes the alternating training scheme. To me the only arguable point is using PCB as the backbone.
