Abstract
Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently pushed the state of the art in semantic image segmentation significantly. We study the more challenging problem of learning DCNNs for semantic image segmentation from either
- weakly annotated training data such as bounding boxes or image-level labels or
- a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets.
We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https://bitbucket.org/deeplab/deeplab-public.
Proposed Methods
Pixel-Level Annotations (Fully Supervised)

The objective function is:

$$J(\boldsymbol{\theta}) = \log P(\boldsymbol{y} \mid \boldsymbol{x}; \boldsymbol{\theta}) = \sum_{m=1}^{M} \log P\left(y_{m} \mid \boldsymbol{x}; \boldsymbol{\theta}\right)$$
where $\boldsymbol{\theta}$ is the vector of DNN parameters. The label distribution at each pixel is computed as:

$$P\left(y_{m} \mid \boldsymbol{x}; \boldsymbol{\theta}\right) \propto \exp\left(f_{m}\left(y_{m} \mid \boldsymbol{x}; \boldsymbol{\theta}\right)\right)$$

where $f_{m}\left(y_{m} \mid \boldsymbol{x}; \boldsymbol{\theta}\right)$ is the output of the DCNN at pixel $m$; $J(\boldsymbol{\theta})$ can then be optimized directly with SGD.
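The fully supervised objective above is just a per-pixel softmax log-likelihood over the DCNN scores. A minimal numpy sketch (function and argument names are my own, not from the paper):

```python
import numpy as np

def log_pixel_likelihood(scores, labels):
    """Compute J(theta) = sum_m log P(y_m | x; theta).

    scores: (M, K) array of DCNN outputs f_m(y | x; theta),
            one row per pixel m, one column per class.
    labels: (M,) array of ground-truth pixel labels y_m.
    """
    # P(y_m | x; theta) ∝ exp(f_m(y_m | x; theta)): a log-softmax
    # over classes, shifted by the row max for numerical stability.
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Pick out log P(y_m | x; theta) for the true label of each pixel.
    return log_probs[np.arange(scores.shape[0]), labels].sum()
```

In a real training loop this quantity (negated) is the cross-entropy loss that SGD minimizes.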
Image-Level Annotations

When only image-level annotations are available, we observe the image pixel values $\boldsymbol{x}$ and the image-level labels $\boldsymbol{z}$, but the pixel-level segmentation $\boldsymbol{y}$ is a latent variable. We build the following probabilistic graphical model:

$$P(\boldsymbol{x}, \boldsymbol{y}, \boldsymbol{z}; \boldsymbol{\theta}) = P(\boldsymbol{x})\, P(\boldsymbol{y} \mid \boldsymbol{x}; \boldsymbol{\theta})\, P(\boldsymbol{z} \mid \boldsymbol{y}) = P(\boldsymbol{x}) \left(\prod_{m=1}^{M} P\left(y_{m} \mid \boldsymbol{x}; \boldsymbol{\theta}\right)\right) P(\boldsymbol{z} \mid \boldsymbol{y})$$
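Under this factorization, the complete-data log-likelihood of a candidate segmentation splits into the per-pixel term plus a label-compatibility term, $\sum_m \log P(y_m \mid \boldsymbol{x}; \boldsymbol{\theta}) + \log P(\boldsymbol{z} \mid \boldsymbol{y})$. A sketch, assuming (hypothetically, since the text above leaves $P(\boldsymbol{z} \mid \boldsymbol{y})$ unspecified) that $P(\boldsymbol{z} \mid \boldsymbol{y})$ is 1 when the image-level labels $\boldsymbol{z}$ are exactly the classes present in $\boldsymbol{y}$ and 0 otherwise:

```python
import numpy as np

def log_complete_likelihood(scores, y, z):
    """log P(y, z | x; theta) = sum_m log P(y_m | x; theta) + log P(z | y).

    scores: (M, K) DCNN outputs f_m; y: (M,) candidate pixel labels;
    z: iterable of image-level class labels.
    Assumes a hard compatibility model: P(z | y) = 1 iff z is exactly
    the set of classes appearing in y (an illustrative choice, not
    taken from the text above).
    """
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    pixel_term = log_probs[np.arange(len(y)), y].sum()
    label_term = 0.0 if set(y.tolist()) == set(z) else -np.inf
    return pixel_term + label_term
```

The latent-variable structure is what motivates the EM treatment: segmentations inconsistent with $\boldsymbol{z}$ get zero probability, and among the consistent ones the pixel term is what the E-step scores.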