Improving YOLO: Adding RepVGG to YOLOv5 and YOLOv8

I have recently been comparing YOLOv5 and YOLOv8 on small-object detection and set out to reproduce the Drone-YOLO work. A lot of this kind of engineering is "Git-oriented borrowing" (copying), but while researching I found that many posts funnel readers into paid subscriptions, and others are incomplete, which led to stepping on the same pitfalls repeatedly. To make things easier for whoever comes next, here is a record of the modification (and pitfall) process.

RepVGG is a convolutional network derived from VGG. It has a multi-branch structure during training, but at inference time it consists only of stacked 3×3 convolutions and ReLU. It reaches 80% top-1 accuracy on ImageNet while offering clearly faster inference than ResNet.

Paper: [2101.03697] RepVGG: Making VGG-style ConvNets Great Again (arxiv.org)

Source code: GitHub - megvii-model/RepVGG

YOLOv5 modifications

The YOLOv5 change looks fairly simple, and you can mostly follow that post, but beware: there are a few small bugs, possibly because the author forgot to paste the complete code.

Problem: NameError: name 'conv_bn' is not defined

Cause: the conv_bn function is missing.
Fix: add (or complete) the following code in models/common.py:


import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F  # common.py may already have most of these imports


def conv_bn(in_channels, out_channels, kernel_size, stride, padding, groups=1):
    result = nn.Sequential()
    result.add_module('conv', nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                        kernel_size=kernel_size, stride=stride, padding=padding, groups=groups,
                                        bias=False))
    result.add_module('bn', nn.BatchNorm2d(num_features=out_channels))

    return result

class SEBlock(nn.Module):

    def __init__(self, input_channels, internal_neurons):
        super(SEBlock, self).__init__()
        self.down = nn.Conv2d(in_channels=input_channels, out_channels=internal_neurons, kernel_size=1, stride=1,
                              bias=True)
        self.up = nn.Conv2d(in_channels=internal_neurons, out_channels=input_channels, kernel_size=1, stride=1,
                            bias=True)
        self.input_channels = input_channels

    def forward(self, inputs):
        x = F.avg_pool2d(inputs, kernel_size=inputs.size(3))
        x = self.down(x)
        x = F.relu(x)
        x = self.up(x)
        x = torch.sigmoid(x)
        x = x.view(-1, self.input_channels, 1, 1)
        return inputs * x

class RepVGGBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False):
        super(RepVGGBlock, self).__init__()
        self.deploy = deploy
        self.groups = groups
        self.in_channels = in_channels
        padding_11 = padding - kernel_size // 2
        self.nonlinearity = nn.SiLU()
        # self.nonlinearity = nn.ReLU()
        if use_se:
            self.se = SEBlock(out_channels, internal_neurons=out_channels // 16)
        else:
            self.se = nn.Identity()
        if deploy:
            self.rbr_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
                                         stride=stride,
                                         padding=padding, dilation=dilation, groups=groups, bias=True,
                                         padding_mode=padding_mode)
 
        else:
            self.rbr_identity = nn.BatchNorm2d(
                num_features=in_channels) if out_channels == in_channels and stride == 1 else None
            self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
                                     stride=stride, padding=padding, groups=groups)
            self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride,
                                   padding=padding_11, groups=groups)
            # print('RepVGG Block, identity = ', self.rbr_identity)
    def switch_to_deploy(self):
        if hasattr(self, 'rbr_1x1'):
            kernel, bias = self.get_equivalent_kernel_bias()
            self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels, out_channels=self.rbr_dense.conv.out_channels,
                                    kernel_size=self.rbr_dense.conv.kernel_size, stride=self.rbr_dense.conv.stride,
                                    padding=self.rbr_dense.conv.padding, dilation=self.rbr_dense.conv.dilation, groups=self.rbr_dense.conv.groups, bias=True)
            self.rbr_reparam.weight.data = kernel
            self.rbr_reparam.bias.data = bias
            for para in self.parameters():
                para.detach_()
            self.rbr_dense = self.rbr_reparam
            # self.__delattr__('rbr_dense')
            self.__delattr__('rbr_1x1')
            if hasattr(self, 'rbr_identity'):
                self.__delattr__('rbr_identity')
            if hasattr(self, 'id_tensor'):
                self.__delattr__('id_tensor')
            self.deploy = True
 
    def get_equivalent_kernel_bias(self):
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
        kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
        return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid
 
    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        if kernel1x1 is None:
            return 0
        else:
            return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])
 
    def _fuse_bn_tensor(self, branch):
        if branch is None:
            return 0, 0
        if isinstance(branch, nn.Sequential):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            assert isinstance(branch, nn.BatchNorm2d)
            if not hasattr(self, 'id_tensor'):
                input_dim = self.in_channels // self.groups
                kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
                for i in range(self.in_channels):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std
 
    def forward(self, inputs):
        # Check rbr_reparam first: when the block is constructed with
        # deploy=True, only rbr_reparam exists (rbr_dense does not).
        if hasattr(self, 'rbr_reparam'):
            return self.nonlinearity(self.se(self.rbr_reparam(inputs)))
 
        if self.rbr_identity is None:
            id_out = 0
        else:
            id_out = self.rbr_identity(inputs)
        return self.nonlinearity(self.se(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out))
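As a sanity check on the reparameterization math above, here is a minimal, self-contained sketch (not part of the YOLO code) showing that folding a BatchNorm2d into the preceding conv, the same math `_fuse_bn_tensor` applies, leaves the output unchanged in eval mode:

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BN running statistics into the conv's weight and bias."""
    std = (bn.running_var + bn.eps).sqrt()
    t = (bn.weight / std).reshape(-1, 1, 1, 1)
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, groups=conv.groups, bias=True)
    fused.weight.data = conv.weight.data * t
    fused.bias.data = bn.bias.data - bn.running_mean * bn.weight / std
    return fused

torch.manual_seed(0)
conv = nn.Conv2d(8, 16, 3, 1, 1, bias=False)
bn = nn.BatchNorm2d(16).eval()  # eval: use running stats, as at deploy time
x = torch.randn(1, 8, 32, 32)
with torch.no_grad():
    y_ref = bn(conv(x))
    y_fused = fuse_conv_bn(conv, bn)(x)
print(torch.allclose(y_ref, y_fused, atol=1e-5))
```

The 1×1 and identity branches are handled the same way: the 1×1 kernel is zero-padded to 3×3 and the identity is expressed as a 3×3 kernel with a single 1 in the center, so all three branches collapse into one conv.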

-.- (Update 2024-01-05) Looks like that post has suddenly gone subscription-only as well... so here are the detailed steps.

1. Add a yaml file for the RepVGG model

# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, RepVGGBlock, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, RepVGGBlock, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

2. Add the conv_bn, SEBlock, and RepVGGBlock modules to models/common.py

Just copy the code from the fix above.

3. Configure yolo.py

Modify the parse_model function in models/yolo.py: add the RepVGGBlock class name to the module set, and add an `elif m is RepVGGBlock` branch.

def parse_model(d, ch):  # model_dict, input_channels(3)
    # Modified parts are marked with ** ** ------- don't copy this wholesale; only edit the marked parts -------
        if m in {
                Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
                BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x,**RepVGGBlock**}:
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, ch_mul)

            args = [c1, c2, *args[1:]]
            if m in {BottleneckCSP, C3, C3TR, C3Ghost, C3x}:
                args.insert(2, n)  # number of repeats
                n = 1
        **elif m is RepVGGBlock:
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, 8)**
            args = [c1, c2, *args[1:]]
        elif m is nn.BatchNorm2d:
            args = [ch[f]]
		# remaining code omitted -------------
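For context, the `c2 = make_divisible(c2 * gw, ch_mul)` line scales the yaml's channel count by width_multiple and rounds it up to a multiple of 8. A quick sketch of that helper (mirroring the one in yolov5's utils/general.py):

```python
import math

def make_divisible(x, divisor=8):
    # Round x up to the nearest multiple of divisor
    return math.ceil(x / divisor) * divisor

gw = 0.50  # width_multiple for yolov5s
for c in (128, 256, 1024):
    print(c, '->', make_divisible(c * gw))
```

So `RepVGGBlock, [128]` actually builds a block with 64 output channels under the s scale.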

For training, just point --cfg at the new yaml structure, e.g.:

python train.py --data ../ultralytics/ultralytics/cfg/datasets/visdrone.yaml --weights yolov5s.pt --img 640 --device 1 --batch-size 32 --cfg models/yolov5s-rep.yaml

OK, it runs!

YOLOv8 modifications

As usual, problems first.

Problem 1: RuntimeError: The size of tensor a (66) must match the size of tensor b (64) at non-singleton dimension 3

Cause: this usually comes down to the padding argument of a convolution. With stride=1, the output size is input + 2×padding − kernel_size + 1, so padding preserves the spatial size only when it matches the kernel (padding = kernel_size // 2). Here one branch ends up with padding=1 but without a matching kernel size, so its feature map comes out two pixels larger than the other branches' (66 vs 64).
Fix:

## just change the default padding to 0
class RepVGGBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False):
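The mismatch is easy to reproduce in isolation. A 1×1 conv given padding=1 grows a 64-wide map to 66, which is exactly the "size of tensor a (66) must match ... b (64)" error above; padding=0 keeps the branches aligned:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 64, 64)
# out = in + 2*padding - kernel_size + 1 (stride=1)
print(nn.Conv2d(16, 16, kernel_size=1, padding=1)(x).shape[-1])  # 66
print(nn.Conv2d(16, 16, kernel_size=1, padding=0)(x).shape[-1])  # 64
```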

Problem 2: KeyError: 'RepVGGBlock'

Cause: this one was honestly a bit baffling; it probably comes from my rarely writing code and mostly just copy-pasting and running. Quite a few posts have people asking similar questions with few or unhelpful replies, or never mention the problem at all; if you hit it, the following may help.
Fix: go back to the ultralytics directory and run sudo python3 setup.py install to register the module.
This assumes you have already added the new module name everywhere it is needed. Since the module I added is RepVGGBlock, the following places also need to change:
1. ultralytics/ultralytics/nn/modules/block.py

__all__ = ('DFL', 'HGBlock', 'HGStem', 'SPP', 'SPPF', 'C1', 'C2', 'C3', 'C2f', 'C3x', 'C3TR', 'C3Ghost',
           'GhostBottleneck', 'Bottleneck', 'BottleneckCSP', 'Proto', 'RepC3', 'ResNetLayer','RepVGGBlock')

2. ultralytics/ultralytics/nn/modules/__init__.py

from .block import (C1, C2, C3, C3TR, DFL, SPP, SPPF, Bottleneck, BottleneckCSP, C2f, C3Ghost, C3x, GhostBottleneck,
                    HGBlock, HGStem, Proto, RepC3, ResNetLayer, RepVGGBlock)
##----- middle omitted; only the modified part is shown
__all__ = ('Conv', 'Conv2', 'LightConv', 'RepConv', 'DWConv', 'DWConvTranspose2d', 'ConvTranspose', 'Focus',
           'GhostConv', 'ChannelAttention', 'SpatialAttention', 'CBAM', 'Concat', 'TransformerLayer',
           'TransformerBlock', 'MLPBlock', 'LayerNorm2d', 'DFL', 'HGBlock', 'HGStem', 'SPP', 'SPPF', 'C1', 'C2', 'C3',
           'C2f', 'C3x', 'C3TR', 'C3Ghost', 'GhostBottleneck', 'Bottleneck', 'BottleneckCSP', 'Proto', 'Detect',
           'Segment', 'Pose', 'Classify', 'TransformerEncoderLayer', 'RepC3', 'RTDETRDecoder', 'AIFI',
           'DeformableTransformerDecoder', 'DeformableTransformerDecoderLayer', 'MSDeformAttn', 'MLP', 'ResNetLayer','RepVGGBlock')

Finally, go back to the top-level directory and run setup.py. The same approach should work for registering any other custom module when running ablation experiments.

cd ultralytics
sudo python3 setup.py install

Here are the more detailed modification steps.

1. Add a new yaml file

As with YOLOv5, the yaml files live in ultralytics/ultralytics/cfg/models/v8/. I based mine on the bundled p2 structure (which adds a 160×160 small-object detection head). Note that the filename has to encode the scale: the default is the n scale, e.g. yolov8n-p2_REPVGG.yaml; if you want to train a yolov8l model, name it yolov8l-p2_REPVGG.yaml instead.
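The scale letter is parsed out of the yaml filename; here is a small sketch of that logic (assumed to mirror `guess_model_scale` in ultralytics/nn/tasks.py):

```python
import re

def guess_scale(yaml_name: str) -> str:
    # Pull the n/s/m/l/x scale letter out of names like 'yolov8n-p2_REPVGG.yaml'
    m = re.search(r'yolov\d+([nslmx])', yaml_name)
    return m.group(1) if m else ''

print(guess_scale('yolov8n-p2_REPVGG.yaml'))  # n
print(guess_scale('yolov8l-p2_REPVGG.yaml'))  # l
```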

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P2-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 10  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]
  s: [0.33, 0.50, 1024]
  m: [0.67, 0.75, 768]
  l: [1.00, 1.00, 512]
  x: [1.00, 1.25, 512]

# YOLOv8.0 backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, RepVGGBlock, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, RepVGGBlock, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 9

# YOLOv8.0-p2 head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 2], 1, Concat, [1]]  # cat backbone P2
  - [-1, 3, C2f, [128]]  # 18 (P2/4-xsmall)

  - [-1, 1, Conv, [128, 3, 2]]
  - [[-1, 15], 1, Concat, [1]]  # cat head P3
  - [-1, 3, C2f, [256]]  # 21 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 24 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 27 (P5/32-large)

  - [[18, 21, 24, 27], 1, Detect, [nc]]  # Detect(P2, P3, P4, P5)

2. Add the conv_bn, SEBlock, and RepVGGBlock modules to block.py

As with YOLOv5's common.py; the path is ultralytics/ultralytics/nn/modules/block.py.

3. Modify the parse_model function in tasks.py

Likewise, the path is ultralytics/ultralytics/nn/tasks.py.

# Modified parts are marked with ** **; don't copy this wholesale, only edit the marked parts -------
def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3)
    """Parse a YOLO model.yaml dictionary into a PyTorch model."""
	## --- some code omitted
    for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']):  # from, number, module, args      
        m = getattr(torch.nn, m[3:]) if 'nn.' in m else globals()[m]  # get module

        for j, a in enumerate(args):
            if isinstance(a, str):
                with contextlib.suppress(ValueError):
                    args[j] = locals()[a] if a in locals() else ast.literal_eval(a)

        n = n_ = max(round(n * depth), 1) if n > 1 else n  # depth gain
        if m in (Classify, Conv, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, Focus,
                 BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x, RepC3, **RepVGGBlock**):
            c1, c2 = ch[f], args[0]
            if c2 != nc:  # if c2 not equal to number of classes (i.e. for Classify() output)
                c2 = make_divisible(min(c2, max_channels) * width, 8)
            
            args = [c1, c2, *args[1:]]
            if m in (BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, C3x, RepC3):
                args.insert(2, n)  # number of repeats
                n = 1
        elif m is AIFI:
            args = [ch[f], *args]
            
        **elif m is RepVGGBlock:
            c1, c2 = ch[f], args[0]
            if c2 != nc:  # if c2 not equal to number of classes (i.e. for Classify() output)
                c2 = make_divisible(min(c2, max_channels) * width, 8)
            args = [c1, c2, *args[1:]]**
        
        elif m in (HGStem, HGBlock):
            c1, cm, c2 = ch[f], args[0], args[1]
            args = [c1, cm, c2, *args[2:]]
            if m is HGBlock:
                args.insert(4, n)  # number of repeats
                n = 1
        ## --- some code omitted
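To see what the depth and width gains above actually do at the default 'n' scale, here is a small worked sketch (the make_divisible helper is assumed, mirroring the one in Ultralytics' utils):

```python
import math

def make_divisible(x, divisor=8):
    return math.ceil(x / divisor) * divisor

depth, width, max_channels = 0.33, 0.25, 1024  # 'n' scale from the yaml above

# [-1, 6, RepVGGBlock, [256, True]] at scale 'n':
n = max(round(6 * depth), 1)                            # 6 repeats -> 2
c2 = make_divisible(min(256, max_channels) * width, 8)  # 256 channels -> 64
print(n, c2)
```

Since RepVGGBlock is not in the repeat-insertion set, the n repeats are realized by stacking n copies of the block in sequence rather than passing n as an argument.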

4. Update the module name RepVGGBlock in the places listed under Problem 2

ultralytics/ultralytics/nn/modules/block.py

ultralytics/ultralytics/nn/modules/__init__.py

ultralytics/ultralytics/nn/tasks.py

Finally, remember to run setup.py install again.

If I run into new pitfalls later, I'll add them here.
