Zero- and Few-Label Semantic Segmentation
Figure 1: We propose (generalized) zero- and few-label semantic segmentation tasks, i.e. segmenting classes for which the model sees no labels during training, or only a few labeled samples. To tackle these tasks, we propose a model that transfers knowledge from seen classes to unseen classes using side information, e.g. semantic word embeddings trained on a free-text corpus.
Idea
Embed class semantic information into the segmentation network, and use side information (e.g. semantic word embeddings trained on a text corpus) to transfer knowledge from seen classes to unseen classes.
Figure 2: Our zero-label and few-label semantic segmentation model, i.e. SPNet, consists of two steps: visual semantic embedding and semantic projection. Zero-label semantic segmentation is drawn as an instance of our model. Replacing different components of SPNet, four tasks are addressed (Solid/dashed lines show the training/test procedures respectively).
Two steps:
1. visual-semantic embedding;
2. semantic projection
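A minimal sketch of the semantic projection step, assuming per-pixel visual-semantic features are scored against fixed class word embeddings by a dot product (the function name, shapes, and toy data are illustrative, not from the paper):

```python
import numpy as np

def semantic_projection(features, class_embeddings):
    """Score each pixel against fixed class embeddings (illustrative sketch).

    features:         (H, W, D) per-pixel visual-semantic embeddings
    class_embeddings: (C, D) word embeddings, one row per class (e.g. word2vec)
    returns:          (H, W, C) per-pixel class scores
    """
    return features @ class_embeddings.T

# Toy example: 2x2 feature map, 3 classes, embedding dimension 4
rng = np.random.default_rng(0)
feats = rng.standard_normal((2, 2, 4))
embs = rng.standard_normal((3, 4))
scores = semantic_projection(feats, embs)
pred = scores.argmax(axis=-1)  # per-pixel class indices in {0, 1, 2}
```

Because the class embeddings act as a fixed projection matrix rather than learned classifier weights, swapping in embeddings of unseen classes at test time yields scores for them without retraining.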
Domain shift calibration
The extreme case of the imbalanced data problem occurs when there are no labeled training images of unseen classes, which biases predictions toward seen classes. To fix this issue, we follow [8] and calibrate the prediction by reducing the scores of seen classes, which leads to:
$$\arg\max_{u \in \mathcal{S} \cup \mathcal{U}} \; p(\hat{y}_{ij} = u \mid x; [W_s; W_u]) - \gamma \, I[u \in \mathcal{S}] \tag{5}$$
where $I[u \in \mathcal{S}] = 1$ if $u$ is a seen class and 0 otherwise, and $\gamma \in [0, 1]$ is the calibration factor tuned on a held-out validation set.
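Eq. (5) can be sketched as follows, assuming per-pixel scores over the union of seen and unseen classes and a boolean mask marking the seen ones (names and toy numbers are illustrative):

```python
import numpy as np

def calibrated_argmax(scores, seen_mask, gamma=0.3):
    """Eq. (5): subtract gamma from seen-class scores, then take the argmax.

    scores:    (H, W, C) per-pixel scores over seen + unseen classes
    seen_mask: (C,) boolean array, True where the class is seen
    gamma:     calibration factor in [0, 1], tuned on held-out validation data
    """
    adjusted = scores - gamma * seen_mask.astype(scores.dtype)
    return adjusted.argmax(axis=-1)

# Toy example: seen class 0 scores slightly higher than unseen class 1,
# but calibration flips the prediction to the unseen class.
scores = np.array([[[0.6, 0.4]]])   # shape (1, 1, 2)
seen = np.array([True, False])
print(calibrated_argmax(scores, seen, gamma=0.3))  # → [[1]]
```

With `gamma=0.0` the prediction stays biased toward the seen class; the held-out validation set picks the value that best balances seen and unseen accuracy.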
In classification, a whole image corresponds to a single class, and the semantic information has a matching visual region. In segmentation, by contrast, all pixels of a class are treated without distinction, so the visual region carrying the semantic information is far less apparent.
Experiments
Effect of word embeddings
Effect of network architecture
Effect of object size
Figure 3: mIoU of unseen classes on COCO-Stuff ordered wrt average object size (left to right).
GZSL results
Figure 4: GZLSS results on COCO-Stuff and PASCAL VOC. We report the mean IoU of unseen classes, seen classes, and their harmonic mean (the perception model is based on ResNet101 and the semantic embedding is ft + w2v). SPNet-C denotes SPNet with calibration.