CBAM: Channel Attention + Spatial Attention (PyTorch)

Original post: https://blog.csdn.net/oYeZhou/article/details/116664508

Paper: https://openaccess.thecvf.com/content_ECCV_2018/papers/Sanghyun_Woo_Convolutional_Block_Attention_ECCV_2018_paper.pdf

1. Motivation

Convolution extracts features by mixing information along both the channel and spatial dimensions. On the attention side, SE only applies channel attention and does not consider spatial attention. This paper therefore proposes CBAM, a convolutional block attention module that attends to both channels and spatial locations; it can be plugged into CNN architectures to strengthen the representational power of feature maps.

2. Method

The architectures of CAM (Channel Attention Module) and SAM (Spatial Attention Module) are shown in Fig. 2 of the paper.
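
In the paper's notation (σ is the sigmoid, f^{7×7} a 7×7 convolution, ⊗ element-wise multiplication, and the MLP is shared between the two pooled descriptors), the two attention maps and their sequential application are:

M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big)
M_s(F) = \sigma\big(f^{7\times7}([\mathrm{AvgPool}(F);\,\mathrm{MaxPool}(F)])\big)
F' = M_c(F) \otimes F, \qquad F'' = M_s(F') \otimes F'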

3. PyTorch Implementation

CBAM consists of two sub-modules: CAM and SAM. To use it, instantiate each module and apply them in sequence after a feature map. The PyTorch code is given below; a short usage sketch follows it:

import torch
from torch import nn
 
 
class ChannelAttention(nn.Module):
    def __init__(self, in_planes, ratio=16):
        super(ChannelAttention, self).__init__()
        # Squeeze spatial information with both average and max pooling
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)

        # Shared MLP, implemented as 1x1 convolutions with a reduction ratio
        self.fc1   = nn.Conv2d(in_planes, in_planes // ratio, 1, bias=False)
        self.relu1 = nn.ReLU()
        self.fc2   = nn.Conv2d(in_planes // ratio, in_planes, 1, bias=False)

        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Pass both pooled descriptors through the shared MLP, then sum
        avg_out = self.fc2(self.relu1(self.fc1(self.avg_pool(x))))
        max_out = self.fc2(self.relu1(self.fc1(self.max_pool(x))))
        out = avg_out + max_out
        return self.sigmoid(out)
 

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super(SpatialAttention, self).__init__()

        assert kernel_size in (3, 7), 'kernel size must be 3 or 7'
        padding = 3 if kernel_size == 7 else 1

        # Fuse the 2-channel [avg, max] map into a single attention map
        self.conv1 = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Pool along the channel dimension to get two H x W descriptors
        avg_out = torch.mean(x, dim=1, keepdim=True)
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        x = torch.cat([avg_out, max_out], dim=1)
        x = self.conv1(x)
        return self.sigmoid(x)
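
As the paper prescribes, the two modules are applied one after the other, each producing a weight map that multiplies the features. A minimal usage sketch (the channel count and input shape here are arbitrary, chosen only for illustration):

ca = ChannelAttention(in_planes=64)
sa = SpatialAttention(kernel_size=7)

x = torch.randn(2, 64, 32, 32)   # N x C x H x W feature map
x = ca(x) * x                    # refine channels first
x = sa(x) * x                    # then refine spatial locations
print(x.shape)                   # torch.Size([2, 64, 32, 32])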

CBAM (Convolutional Block Attention Module) is an attention mechanism for convolutional neural networks (CNNs); it can also be implemented in PyTorch as a single self-contained module:

import torch
import torch.nn as nn


class CBAM_Module(nn.Module):
    def __init__(self, channels, reduction=16):
        super(CBAM_Module, self).__init__()
        # Channel attention: squeeze spatial dims with max and avg pooling
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # This variant concatenates the two pooled descriptors (2 * channels
        # features) before the MLP, rather than sharing the MLP and summing
        # as in the paper
        self.fc1 = nn.Linear(channels * 2, channels // reduction, bias=False)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Linear(channels // reduction, channels, bias=False)
        self.sigmoid_channel = nn.Sigmoid()
        self.sigmoid_spatial = nn.Sigmoid()
        # Spatial attention: 7x7 conv over the 2-channel [avg, max] map
        self.conv = nn.Conv2d(2, 1, kernel_size=7, stride=1, padding=3)

    def forward(self, x):
        # ----- channel attention -----
        max_out = self.max_pool(x).view(x.size(0), -1)
        avg_out = self.avg_pool(x).view(x.size(0), -1)
        out = torch.cat((max_out, avg_out), dim=1)
        out = self.fc2(self.relu(self.fc1(out)))
        channel_attention = self.sigmoid_channel(out).view(x.size(0), -1, 1, 1)
        x = x * channel_attention
        # ----- spatial attention -----
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        spatial_attention = self.sigmoid_spatial(
            self.conv(torch.cat([avg_map, max_map], dim=1)))
        x = x * spatial_attention
        return x

This module contains two parts: channel attention and spatial attention. Channel attention weights each channel of the feature map, strengthening important features and suppressing unimportant ones. Spatial attention weights each spatial position, so the network can focus on the important features at different locations. Adding CBAM to a CNN can improve the model's performance and accuracy.
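
As a quick smoke test (shapes chosen arbitrarily), the module preserves the input shape:

m = CBAM_Module(channels=64)
y = m(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])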