[Paper Notes] DeepCut: Object Segmentation from Bounding Box Annotations using Convolutional Neural Networks

This post introduces DeepCut, a method that extends GrabCut to perform instance segmentation from bounding-box annotations using a neural network. Pixelwise object segmentations are obtained through energy minimisation and iterative target updates. Experiments on a fetal MRI dataset show accurate segmentation results.


The paper proposes an instance segmentation method that works from weak annotations. It extends GrabCut, originally proposed by Microsoft Research, so that a neural network classifier can be trained given only bounding boxes. The classification problem is cast as energy minimisation over a densely-connected conditional random field, and instance segmentations are obtained by iterating between classifier training and target refinement.
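The iterate-until-convergence idea above can be sketched in a few lines. This is a hypothetical toy stand-in, not the paper's implementation: the CNN is replaced by a trivial intensity-mean classifier and the dense-CRF refinement step is omitted, but the control flow (targets initialised from the box, classifier fit to current targets, predictions become the next targets) mirrors the described loop.

```python
import numpy as np

def deepcut_iterate(image, bbox_mask, n_iters=3):
    """Toy sketch of the DeepCut loop (assumptions: the CNN is replaced
    by a nearest-class-mean intensity classifier; CRF refinement is
    omitted). `bbox_mask` is a binary mask of the bounding box."""
    targets = bbox_mask.copy()              # initial FG/BG labels from the box
    for _ in range(n_iters):
        # "train": estimate foreground/background class means
        mu_fg = image[targets == 1].mean()
        mu_bg = image[targets == 0].mean()
        # "predict": assign each pixel to the nearer class mean
        pred = (np.abs(image - mu_fg) < np.abs(image - mu_bg)).astype(int)
        # updated targets; background is enforced outside the box
        targets = pred * bbox_mask
    return targets
```

On a synthetic image with a bright object inside a larger box, the loop shrinks the initial box labelling down to the object in a couple of iterations, which is exactly the behaviour the paper relies on (with a CNN and CRF doing the real work).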

Abstract

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut [1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naïve approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.

Method

Consider a segmentation problem formulated as an energy function over a graph, as in [11]. We seek a labelling $f$, assigning a label $f_i$ to each pixel $i$, that minimises

$$E(f) = \sum_{i} \psi_{u}\left(f_{i}\right) + \sum_{i<j} \psi_{p}\left(f_{i}, f_{j}\right) \qquad (1)$$

where the unary data-consistency term $\psi_{u}\left(f_{i}\right)$ measures how well the label $f_i$ fits the data at pixel $i$, and the pairwise regularisation term $\psi_{p}\left(f_{i}, f_{j}\right)$ penalises differing labels at connected pixel pairs $i$ and $j$.
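Equation (1) can be evaluated directly for a given labelling. Below is a minimal sketch, assuming a simple Potts pairwise potential (a constant penalty `w` for every connected pair with differing labels); the paper's actual model uses contrast-sensitive potentials over a fully connected graph, so this is illustrative only.

```python
import numpy as np

def crf_energy(labels, unary, adjacency, w=1.0):
    """Evaluate E(f) = sum_i psi_u(f_i) + sum_{i<j} psi_p(f_i, f_j).

    labels    : list of label indices, one per pixel
    unary     : unary[i, l] = cost of assigning label l to pixel i
    adjacency : boolean matrix; adjacency[i, j] marks connected pairs
    w         : Potts penalty for a connected pair with differing labels
    """
    n = len(labels)
    e_unary = sum(unary[i, labels[i]] for i in range(n))
    e_pair = sum(w for i in range(n) for j in range(i + 1, n)
                 if adjacency[i, j] and labels[i] != labels[j])
    return e_unary + e_pair
```

With strong unaries the minimum-energy labelling follows the data term; as `w` grows, the pairwise term increasingly favours smooth (uniform) labellings, which is the regularisation trade-off the formulation encodes.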
