(2019 CVPR) BASNet: Boundary-Aware Salient Object Detection (Class A)

1. Authors

​ Xuebin Qin, Zichen Zhang, Chenyang Huang, Chao Gao, Masood Dehghan and Martin Jagersand

2. Links

2.1 Paper

CVPR page

2.2 Code

Code

3. Abstract

Deep Convolutional Neural Networks have been adopted for salient object detection and achieved the state-of-the-art performance. Most of the previous works however focus on region accuracy but not on the boundary quality. In this paper, we propose a predict-refine architecture, BASNet, and a new hybrid loss for Boundary-Aware Salient object detection. Specifically, the architecture is composed of a densely supervised Encoder-Decoder network and a residual refinement module, which are respectively in charge of saliency prediction and saliency map refinement. The hybrid loss guides the network to learn the transformation between the input image and the ground truth in a three-level hierarchy – pixel-, patch- and map-level – by fusing Binary Cross Entropy (BCE), Structural SIMilarity (SSIM) and Intersection-over-Union (IoU) losses. Equipped with the hybrid loss, the proposed predict-refine architecture is able to effectively segment the salient object regions and accurately predict the fine structures with clear boundaries. Experimental results on six public datasets show that our method outperforms the state-of-the-art methods both in terms of regional and boundary evaluation measures. Our method runs at over 25 fps on a single GPU.

4. Main Content

4.1 Contributions

• A novel boundary-aware salient object detection network: BASNet, which consists of a deeply supervised encoder-decoder and a residual refinement module.
• A novel hybrid loss that fuses BCE, SSIM and IoU to supervise the training of accurate salient object prediction on three levels: pixel-level, patch-level and map-level (the final training loss is a weighted sum of eight such hybrid losses).
• A thorough evaluation of the proposed method that includes comparison with 15 state-of-the-art methods on six widely used public datasets. Our method achieves state-of-the-art results in terms of both regional and boundary evaluation measures.

4.2 Network Architecture (ResNet-34 backbone)

BASNet

BASNet_RRM

4.3 Prediction Module

A classic encoder-decoder structure that captures both high-level global context and low-level details. To prevent overfitting, every decoder stage is supervised with the ground-truth map. The encoder adopts ResNet-34, followed by two extra stages and a bridge layer.

4.4 Refinement Module

Also a classic encoder-decoder structure. It refines the coarse prediction by learning the residual between the coarse map and the ground truth (capturing high-level information). "Coarse" refers to two issues: first, boundaries that are noisy and blurry; second, predicted region probabilities that are uneven and unstable.
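The residual idea can be illustrated in one line: the module adds a learned residual to the coarse map instead of re-predicting it from scratch. This is a toy NumPy sketch; in BASNet the residual comes from the refinement module's own small encoder-decoder.

```python
import numpy as np

def residual_refine(coarse, residual):
    """Refined map = coarse map + residual, clipped back to [0, 1].

    `residual` stands in for the refinement module's output; here it is
    just an array of the same shape as the coarse map.
    """
    return np.clip(coarse + residual, 0.0, 1.0)

# A blurry coarse prediction plus a residual that sharpens the boundary:
coarse = np.array([0.4, 0.6, 0.6, 0.4])
residual = np.array([-0.4, 0.4, 0.4, -0.4])
refined = residual_refine(coarse, residual)  # → [0.0, 1.0, 1.0, 0.0]
```

Learning a residual is easier than learning the full map: where the coarse prediction is already good, the target residual is simply zero.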

4.5 Loss Function

A hybrid loss that mixes BCE (binary cross-entropy), SSIM (structural similarity) and IoU (intersection over union), acting on the pixel, patch and map level respectively. The final training loss is the weighted sum of eight such hybrid losses, one per supervised output.
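The per-output hybrid loss can be sketched as follows. This is a minimal NumPy illustration, not the training code: SSIM is computed here from global statistics instead of the paper's local windows, and the three terms are simply summed with weights of 1.

```python
import numpy as np

def bce_loss(pred, gt, eps=1e-7):
    # pixel level: binary cross-entropy, averaged over all pixels
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(gt * np.log(pred) + (1 - gt) * np.log(1 - pred)))

def ssim_loss(pred, gt, c1=0.01 ** 2, c2=0.03 ** 2):
    # patch level: 1 - SSIM; global statistics here for brevity
    # (BASNet evaluates SSIM over local image patches)
    mu_p, mu_g = pred.mean(), gt.mean()
    var_p, var_g = pred.var(), gt.var()
    cov = ((pred - mu_p) * (gt - mu_g)).mean()
    ssim = ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_g ** 2 + c1) * (var_p + var_g + c2))
    return float(1.0 - ssim)

def iou_loss(pred, gt, eps=1e-7):
    # map level: 1 - soft intersection over union
    inter = np.sum(pred * gt)
    union = np.sum(pred) + np.sum(gt) - inter
    return float(1.0 - inter / (union + eps))

def hybrid_loss(pred, gt):
    # one hybrid loss per supervised output; BASNet sums eight of these
    return bce_loss(pred, gt) + ssim_loss(pred, gt) + iou_loss(pred, gt)
```

A perfect prediction drives all three terms to (near) zero, while each term penalizes a different failure mode: BCE per pixel, SSIM around local structure and boundaries, IoU over the whole map.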

5. Evaluation Metrics

PR curves, MAE (mean absolute error), F-measure, and the relaxed F-measure of boundary.
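MAE and the F-measure can be sketched as below (minimal NumPy versions; β² = 0.3 is the weight conventionally used in the saliency literature to emphasize precision, and the relaxed boundary F-measure is omitted here since it requires boundary extraction).

```python
import numpy as np

def mae(pred, gt):
    # mean absolute error between the saliency map and the ground truth
    return float(np.mean(np.abs(pred - gt)))

def f_measure(pred, gt, thresh=0.5, beta2=0.3, eps=1e-7):
    # binarize the prediction, then F_beta = (1+b^2)PR / (b^2*P + R)
    binary = pred >= thresh
    mask = gt > 0.5
    tp = np.sum(binary & mask)
    precision = tp / (binary.sum() + eps)
    recall = tp / (mask.sum() + eps)
    return float((1 + beta2) * precision * recall /
                 (beta2 * precision + recall + eps))
```

In practice the F-measure is swept over many thresholds to draw the PR curve; a single threshold is shown here for brevity.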

6. Conclusion

In this paper, we proposed a novel end-to-end boundary-aware model, BASNet, and a hybrid fusing loss for accurate salient object detection. The proposed BASNet is a predict-refine architecture, which consists of two components: a prediction network and a refinement module. Combined with the hybrid loss, BASNet is able to capture both large-scale and fine structures, e.g. thin regions, holes, and produce salient object detection maps with clear boundaries. Experimental results on six datasets demonstrate that our model outperforms 15 other state-of-the-art methods in terms of both region-based and boundary-aware measures. Additionally, our proposed network architecture is modular. It can be easily extended or adapted to other tasks by replacing either the prediction network or the refinement module.
BASNet_result
