Background
In image processing, the attention mechanism matters for extracting visual information because it determines which parts of the input the algorithm focuses on. Different attention mechanisms therefore attend to different targets, and even a strong network can yield a pleasant surprise when its backbone is adapted to the task at hand. This post is a working note on modifying the backbone of the popular YOLOv5 network: one variant adds an SE attention module, the other swaps in MobileNetV3 building blocks.
Overall process
1. Create a new model configuration file;
2. Add the new module definitions to models/common.py;
3. Register the module names in models/yolo.py;
4. Add the modules to the configuration file created in step 1;
5. Update the other parameters in that configuration file;
6. Change the '--cfg' argument in the training script (optional; it can also be passed on the command line).
Detailed steps
Step 1:
Copy models/yolov5x.yaml under the models directory twice, naming the copies yolov5x_SE.yaml and yolov5x_MobileNet.yaml.
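For example, from the repository root (a trivial sketch; copying the files by hand works just as well):

import shutil

# duplicate the stock yolov5x config once per experiment
shutil.copyfile('models/yolov5x.yaml', 'models/yolov5x_SE.yaml')
shutil.copyfile('models/yolov5x.yaml', 'models/yolov5x_MobileNet.yaml')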
Step 2:
Append the SE attention module and the MobileNetV3 building blocks to the end of models/common.py, as follows:
class SE(nn.Module):
    # Squeeze-and-Excitation channel attention (models/common.py already imports torch.nn as nn)
    def __init__(self, c1, c2, ratio=16):
        super(SE, self).__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)  # squeeze: (b, c, h, w) -> (b, c, 1, 1)
        self.l1 = nn.Linear(c1, c1 // ratio, bias=False)
        self.relu = nn.ReLU(inplace=True)
        self.l2 = nn.Linear(c1 // ratio, c1, bias=False)
        self.sig = nn.Sigmoid()

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avgpool(x).view(b, c)
        y = self.l1(y)
        y = self.relu(y)
        y = self.l2(y)
        y = self.sig(y)
        y = y.view(b, c, 1, 1)
        return x * y.expand_as(x)  # reweight each channel of x
# MobileNetV3-small building blocks
class SeBlock(nn.Module):
    def __init__(self, in_channel, reduction=4):
        super().__init__()
        self.Squeeze = nn.AdaptiveAvgPool2d(1)
        self.Excitation = nn.Sequential()
        self.Excitation.add_module('FC1', nn.Conv2d(in_channel, in_channel // reduction, kernel_size=1))  # a 1x1 conv acts as a per-channel FC layer
        self.Excitation.add_module('ReLU', nn.ReLU())
        self.Excitation.add_module('FC2', nn.Conv2d(in_channel // reduction, in_channel, kernel_size=1))
        self.Excitation.add_module('Sigmoid', nn.Sigmoid())

    def forward(self, x):
        y = self.Squeeze(x)
        output = self.Excitation(y)
        return x * (output.expand_as(x))
class Conv_BN_HSwish(nn.Module):
    """
    This equals to
    def conv_3x3_bn(inp, oup, stride):
        return nn.Sequential(
            nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
            nn.BatchNorm2d(oup),
            h_swish()
        )
    """

    def __init__(self, c1, c2, stride):
        super(Conv_BN_HSwish, self).__init__()
        self.conv = nn.Conv2d(c1, c2, 3, stride, 1, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.Hardswish()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
class MobileNetV3_InvertedResidual(nn.Module):
    def __init__(self, inp, oup, hidden_dim, kernel_size, stride, use_se, use_hs):
        super(MobileNetV3_InvertedResidual, self).__init__()
        assert stride in [1, 2]
        self.identity = stride == 1 and inp == oup
        if inp == hidden_dim:
            self.conv = nn.Sequential(
                # dw
                nn.Conv2d(hidden_dim, hidden_dim, kernel_size, stride, (kernel_size - 1) // 2,
                          groups=hidden_dim, bias=False),
                nn.BatchNorm2d(hidden_dim),
                nn.Hardswish() if use_hs else nn.ReLU(),
                # Squeeze-and-Excite
                SeBlock(hidden_dim) if use_se else nn.Sequential(),
                # pw-linear
                nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
            )
        else:
            self.conv = nn.Sequential(
                # pw
                nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False),
                nn.BatchNorm2d(hidden_dim),
                nn.Hardswish() if use_hs else nn.ReLU(),
                # dw
                nn.Conv2d(hidden_dim, hidden_dim, kernel_size, stride, (kernel_size - 1) // 2,
                          groups=hidden_dim, bias=False),
                nn.BatchNorm2d(hidden_dim),
                # Squeeze-and-Excite
                SeBlock(hidden_dim) if use_se else nn.Sequential(),
                nn.Hardswish() if use_hs else nn.ReLU(),
                # pw-linear
                nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
                nn.BatchNorm2d(oup),
            )

    def forward(self, x):
        y = self.conv(x)
        # residual connection only when the block preserves shape
        return x + y if self.identity else y
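A quick shape check for the new modules (a minimal sketch; the tensor size is arbitrary and it assumes the classes above were added to models/common.py):

import torch

x = torch.randn(1, 64, 32, 32)

# SE is shape-preserving channel attention
se = SE(64, 64)
print(se(x).shape)  # torch.Size([1, 64, 32, 32])

# stride=1 and inp == oup, so the residual path is active
block = MobileNetV3_InvertedResidual(64, 64, 128, 3, 1, 1, 1)
print(block(x).shape)  # torch.Size([1, 64, 32, 32])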
Step 3:
Register the names of the new modules in models/yolo.py, inside parse_model().
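In the v6.x parse_model() the relevant spot is the tuple that routes modules through the standard (c1, c2, *args) channel handling; its exact contents differ between releases, so treat this as a sketch:

        if m in (Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d,
                 Focus, CrossConv, BottleneckCSP, C3, C3TR, C3SPP, C3Ghost,
                 SE, Conv_BN_HSwish, MobileNetV3_InvertedResidual):  # new names appended
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, 8)
            args = [c1, c2, *args[1:]]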
Step 4:
Add the modules to the configuration files from step 1: in yolov5x_SE.yaml the SE module goes after the last C3 module of the backbone and before the SPPF module; in yolov5x_MobileNet.yaml the MobileNetV3 blocks replace the backbone's C3 and SPPF modules. The complete files are listed at the end of this post.
Step 5:
Update the remaining parameters in the configuration file. Once a new layer is inserted, the indices of every layer after it shift:
① Detect originally took its inputs from layers [17, 20, 23]; after the attention layer is inserted, these become [18, 21, 24];
② likewise, the 'from' indices of the Concat layers must be updated so that the network topology stays unchanged. With the SE layer inserted as layer 9, every later layer's index grows by one, so the last two Concat entries change from [-1, 14] and [-1, 10] to [-1, 15] and [-1, 11].
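To confirm the renumbering is consistent, build the model from the new file and read the per-layer table that YOLOv5 prints while parsing (a minimal sketch, run from the repository root):

from models.yolo import Model

# parse_model() logs one row per layer: index, from, params, module, args.
# With yolov5x_SE.yaml, SE should appear as layer 9 and Detect as layer 25,
# drawing its inputs from layers [18, 21, 24].
model = Model('models/yolov5x_SE.yaml')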
Step 6:
Change the '--cfg' argument of the training script to the new configuration file name.
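In train.py this is the argparse line to edit (its default is normally empty; hard-coding it just saves typing the flag):

parser.add_argument('--cfg', type=str, default='models/yolov5x_SE.yaml', help='model.yaml path')

Alternatively, leave train.py untouched and pass the file on the command line: python train.py --cfg models/yolov5x_SE.yaml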
Finally, the complete model configuration files:
yolov5x_SE.yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 10  # number of classes
depth_multiple: 1.33  # model depth multiple
width_multiple: 1.25  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SE, [1024]],  # 9
   [-1, 1, SPPF, [1024, 5]],  # 10
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],  # 11
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 14
   [-1, 1, Conv, [256, 1, 1]],  # 15
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 18 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 15], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 21 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 11], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 24 (P5/32-large)
   [[18, 21, 24], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
yolov5x_MobileNet.yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 10  # number of classes
depth_multiple: 1.33  # model depth multiple
width_multiple: 1.25  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# MobileNetV3-small backbone
# MobileNetV3_InvertedResidual args: [out_ch, hid_ch, k_s, stride, use_se, use_hs]
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv_BN_HSwish, [16, 2]],  # 0-P1/2
   [-1, 1, MobileNetV3_InvertedResidual, [16, 16, 3, 2, 1, 0]],  # 1-P2/4
   [-1, 1, MobileNetV3_InvertedResidual, [24, 72, 3, 2, 0, 0]],  # 2-P3/8
   [-1, 1, MobileNetV3_InvertedResidual, [24, 88, 3, 1, 0, 0]],  # 3
   [-1, 1, MobileNetV3_InvertedResidual, [40, 96, 5, 2, 1, 1]],  # 4-P4/16
   [-1, 1, MobileNetV3_InvertedResidual, [40, 240, 5, 1, 1, 1]],  # 5
   [-1, 1, MobileNetV3_InvertedResidual, [40, 240, 5, 1, 1, 1]],  # 6
   [-1, 1, MobileNetV3_InvertedResidual, [48, 120, 5, 1, 1, 1]],  # 7
   [-1, 1, MobileNetV3_InvertedResidual, [48, 144, 5, 1, 1, 1]],  # 8
   [-1, 1, MobileNetV3_InvertedResidual, [96, 288, 5, 2, 1, 1]],  # 9-P5/32
   [-1, 1, MobileNetV3_InvertedResidual, [96, 576, 5, 1, 1, 1]],  # 10
   [-1, 1, MobileNetV3_InvertedResidual, [96, 576, 5, 1, 1, 1]],  # 11
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [96, 1, 1]],  # 12
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 8], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [144, False]],  # 15
   [-1, 1, Conv, [144, 1, 1]],  # 16
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 3], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [168, False]],  # 19 (P3/8-small)
   [-1, 1, Conv, [168, 3, 2]],
   [[-1, 16], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [312, False]],  # 22 (P4/16-medium)
   [-1, 1, Conv, [312, 3, 2]],
   [[-1, 12], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [408, False]],  # 25 (P5/32-large)
   [[19, 22, 25], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]