Dual attention network for scene segmentation

The design is clever, but the paper does not explain why the method is effective.

There are many takes on the dual-attention idea; this paper applies it to scene semantic segmentation.

It is a structural design that integrates channel-wise attention and spatial attention.

Channel-wise attention (convolutional channel attention): as in SENet. Each layer of a convolutional network has many kernels, and each kernel produces one feature channel. In contrast to spatial attention, channel attention allocates resources across the convolutional channels.
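A minimal PyTorch sketch of an SE-style channel attention block (the class name and the reduction ratio of 16 are illustrative defaults, not taken from this paper):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """SENet-style channel attention: squeeze (global average pooling)
    then excitation (two FC layers) produce one weight per channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: B x C
        w = self.fc(w).view(b, c, 1, 1)  # excitation: weights in (0, 1)
        return x * w                     # rescale each channel

x = torch.randn(2, 64, 8, 8)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 8, 8])
```

The two FC layers are the learned part; DANet's modules below instead compute their attention maps directly from feature similarities.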

Spatial attention (visual spatial attention): adjusts attention at every position of the feature map, so that the model focuses on the regions worth attending to.

 

Abstract: a self-attention mechanism. A dual attention network to adaptively integrate local features with their global dependencies. Two attention structures are used, over the spatial and channel dimensions.

The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances.

The channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps.

1. Introduction

Proposes DANet to capture global feature dependencies along the spatial and channel dimensions.

PAM and CAM (Position Attention Module and Channel Attention Module).

Achieves high accuracy.

2. Related work

Semantic segmentation

Attention modules

3. Dual attention network

Position Attention Module

Semantic relations are important for scene understanding; the goal is to capture global dependencies regardless of position. However, many works show that local feature information is insufficient. In order to model rich contextual dependencies over local feature representations, the authors introduce a PAM.

The PAM encodes a wider range of contextual information into local features, thus enhancing their representative capability. Next, the paper elaborates the process of adaptively aggregating spatial contexts.

The input passes through three conv-BN-ReLU branches to produce feature maps B, C, and D. Each is reshaped from C×H×W to C×N, where N = H×W. B is transposed to N×C and multiplied with C (N×C times C×N), giving an N×N matrix, i.e. (H×W)×(H×W). A softmax normalizes each position to the range 0–1, and this N×N matrix becomes the spatial attention map S.

(Note that the more similar the feature representations of two positions are, the greater the correlation between them.) D is then multiplied by S (C×N times N×N) to give C×N, which is reshaped back to C×H×W and added to the original feature map to obtain E.
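The steps above can be sketched in PyTorch. This is a hedged sketch: 1×1 convolutions stand in for the conv-BN-ReLU branches, the query/key branches use the paper's 8× channel reduction, and `gamma` is the learnable scale (initialized to 0) that the paper applies before the residual add:

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Sketch of DANet's Position Attention Module (PAM)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // 8, 1)  # branch B
        self.key = nn.Conv2d(in_channels, in_channels // 8, 1)    # branch C
        self.value = nn.Conv2d(in_channels, in_channels, 1)       # branch D
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable scale, starts at 0

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        B = self.query(x).view(b, -1, n).permute(0, 2, 1)  # B x N x C'
        C = self.key(x).view(b, -1, n)                     # B x C' x N
        S = torch.softmax(B @ C, dim=-1)                   # B x N x N attention map
        D = self.value(x).view(b, c, n)                    # B x C x N
        E = (D @ S.transpose(1, 2)).view(b, c, h, w)       # weighted sum over positions
        return self.gamma * E + x                          # residual with original map
```

Because `gamma` starts at zero, the module initially behaves as an identity mapping and gradually learns how much global context to mix in.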

The intermediate affinity matrix here really is hand-crafted, unlike the two FC layers in SE.

Channel Attention Module

Each channel map of high-level features can be regarded as a class-specific response, and different semantic responses are associated with each other.

Channel attention works the same way as above, except that the intermediate N×N matrix becomes C×C.
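A matching sketch of the Channel Attention Module. Here the affinity is computed directly on the reshaped input (no conv branches), giving a C×C matrix; the subtraction from the row maximum before the softmax follows the paper:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Sketch of DANet's Channel Attention Module (CAM)."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable scale, starts at 0

    def forward(self, x):
        b, c, h, w = x.shape
        A = x.view(b, c, -1)                   # B x C x N, N = H*W
        energy = A @ A.transpose(1, 2)         # B x C x C channel affinity
        # The paper subtracts each value from the row max before softmax.
        energy = energy.max(dim=-1, keepdim=True).values - energy
        attn = torch.softmax(energy, dim=-1)   # B x C x C attention map
        out = (attn @ A).view(b, c, h, w)      # reweight channels
        return self.gamma * out + x            # residual with original map

x = torch.randn(2, 64, 8, 8)
print(ChannelAttention()(x).shape)  # torch.Size([2, 64, 8, 8])
```

Note the module has no channel-count argument: since the affinity comes straight from the input features, the same module works for any number of channels.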

Attention Module Embedding with Networks

4. Experiments

 

 
