1 CBAM: principle and PyTorch implementation
For the details, see this author's article, which explains CBAM well and comes with fully runnable PyTorch code:
"CBAM——即插即用的注意力模块(附代码)" (CBAM: a plug-and-play attention module, with code), Billie使劲学's blog on CSDN
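The principle in brief: CBAM first reweights channels with a shared MLP applied to global max- and average-pooled descriptors, then reweights spatial positions with a convolution over channel-wise max and mean maps. Below is a minimal PyTorch sketch of that idea, written to mirror the TensorFlow version in the next section; it is my own illustration, not the exact code from the linked article:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channel, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over global max- and avg-pooled descriptors
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channel, channel // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channel // reduction, channel, 1, bias=False),
        )
        # Spatial attention: k x k conv over channel-wise max/mean maps
        self.conv = nn.Conv2d(2, 1, spatial_kernel,
                              padding=spatial_kernel // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Channel attention, then rescale the feature map
        channel_out = self.sigmoid(self.mlp(self.max_pool(x)) +
                                   self.mlp(self.avg_pool(x)))
        x = channel_out * x
        # Spatial attention on the channel-refined features
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        avg_out = torch.mean(x, dim=1, keepdim=True)
        spatial_out = self.sigmoid(self.conv(torch.cat([max_out, avg_out], dim=1)))
        return spatial_out * x

x = torch.randn(1, 1024, 32, 32)  # NCHW, unlike the NHWC TensorFlow version
y = CBAM(1024)(x)
print(y.shape)  # torch.Size([1, 1024, 32, 32])
```

Because both attention maps come from a sigmoid, the output keeps the input's shape and is an elementwise rescaling of it.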
2 TensorFlow 1.x implementation
This is the TensorFlow version of the code from the article above.
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Dense, GlobalAveragePooling2D, GlobalMaxPooling2D, Multiply, Reshape

class CBAMLayer(tf.keras.Model):
    def __init__(self, channel, reduction=16, spatial_kernel=7):
        super(CBAMLayer, self).__init__()
        self.channel = channel
        # Channel attention: shared MLP over global max- and avg-pooled features
        self.max_pool = GlobalMaxPooling2D()
        self.avg_pool = GlobalAveragePooling2D()
        self.mlp = tf.keras.Sequential([
            Dense(channel // reduction, use_bias=False),
            tf.keras.layers.Activation('relu'),
            Dense(channel, use_bias=False)
        ])
        # Spatial attention: k x k conv over channel-wise max/mean maps
        self.conv = Conv2D(1, (spatial_kernel, spatial_kernel), padding='same', use_bias=False)
        self.sigmoid = tf.keras.layers.Activation('sigmoid')

    def call(self, x):
        # Channel attention, then rescale the feature map
        max_out = self.mlp(Reshape((1, 1, self.channel))(self.max_pool(x)))
        avg_out = self.mlp(Reshape((1, 1, self.channel))(self.avg_pool(x)))
        channel_out = self.sigmoid(max_out + avg_out)
        x = Multiply()([channel_out, x])
        # Spatial attention on the channel-refined features (NHWC: channels are axis 3)
        max_out = tf.reduce_max(x, axis=3, keepdims=True)
        avg_out = tf.reduce_mean(x, axis=3, keepdims=True)
        spatial_out = self.sigmoid(self.conv(tf.concat([max_out, avg_out], axis=3)))
        x = Multiply()([spatial_out, x])
        return x

x = tf.random_normal((1, 32, 32, 1024))  # tf.random_normal is the TF 1.x API
net = CBAMLayer(1024)
y = net(x)  # call the model directly rather than net.call(x)
print(y.shape)  # (1, 32, 32, 1024)
The environment is TensorFlow 1.9, GPU version.
I'm sharing the environment file at the link below; it contains an environment.yml file. Using it is simple: open Anaconda Prompt, change to the directory where you downloaded the file, and run the command below. For example, if you downloaded it to drive D:, first type D: and then cd into your folder.
conda env create -f environment.yml
After it runs for a while, the environment is created under the name tf_gpu1_5_py36; activate it and you are ready to go.
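Putting the steps above together (the download folder name here is a placeholder; substitute your own path):

```shell
D:                                   # switch to the D: drive (example; use your own)
cd your-download-folder              # placeholder: folder containing environment.yml
conda env create -f environment.yml  # builds the environment (takes a while)
conda activate tf_gpu1_5_py36        # activate it before running the code
```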
Link: https://pan.baidu.com/s/1FhFH1ls0UfmP-1roh1ad_w?pwd=oo2t
Extraction code: oo2t