YOLOv8 Improvement 2.5: Replacing the Backbone with QARepVGG

Previous posts: YOLOv8 Improvement Series Index - CSDN blog


Contents

1. Paper Overview

2. Code Changes

2.1 Create qarepvgg.py

2.2 Modify the fuse function in tasks.py

2.3 Register the module in the parse_model function of tasks.py

2.4 YAML file


1. Paper Overview

1. Paper: https://arxiv.org/pdf/2212.01593.pdf

2. Summary

Despite the great success of deep neural networks in vision [4, 12, 17, 19, 35], language [6, 40], and speech [13], model compression has become an urgent need, especially given the dramatic growth of power consumption in data centers and the sharply increasing number of resource-constrained edge devices worldwide. Network quantization [14, 15] is one of the most effective approaches, thanks to its lower memory cost and the inherent speed advantage of integer computation.

However, quantization awareness has not been a priority in neural architecture design and has been largely neglected as a result. This can backfire when quantization is a mandatory step for final deployment. For example, many well-known architectures suffer from quantization collapse, such as MobileNet [20, 21, 36] and EfficientNet [38], which calls for remedial design changes or advanced quantization schemes such as [26, 37, 45] and [2, 16].

Recently, one of the most influential directions in neural architecture design has been reparameterization [8, 11, 46]. Among these works, RepVGG [11] restructures the standard Conv-BN-ReLU into an equivalent multi-branch form during training, which delivers a strong performance boost while adding no extra cost at inference time. Thanks to its simplicity and inference advantages, it has been adopted by many recent vision tasks [10, 22, 28, 39, 41, 44]. However, reparameterization-based models face well-known quantization difficulties, an inherent defect that hinders industrial adoption. It turns out that making such structures quantize well is nontrivial: a standard post-training quantization scheme drastically degrades the accuracy of RepVGG-A0 from 72.4% to 52.2%, and applying quantization-aware training [7] is not straightforward either.

Here we focus specifically on the quantization difficulty of RepVGG [11]. To address it, we explore the fundamental quantization principles, guided by an in-depth analysis of this typical reparameterization architecture. That is, for a network to quantize well, both the distribution of its weights and the activations produced from arbitrarily distributed input data should be "quantization-friendly"; both are crucial for good quantized performance. More importantly, these principles led us to design a brand-new architecture, which we call QARepVGG (Quantization-Aware RepVGG). It does not suffer from severe quantization collapse; its building block is shown in Figure 1, and its quantized performance is greatly improved.
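To make the reparameterization idea concrete before the code: a Conv-BN pair can be folded into a single convolution, because BatchNorm in inference mode is a per-channel affine transform. With running statistics μ and σ², scale γ and shift β, the fused weight is W' = W·γ/√(σ²+ε) and the fused bias is b' = β − μ·γ/√(σ²+ε). This is exactly what the _fuse_bn_tensor method in the code of section 2.1 computes. A minimal standalone sanity check (not part of the patch):

import torch
import torch.nn as nn

# Fold an eval-mode BatchNorm2d into the preceding bias-free Conv2d and
# verify that the fused conv reproduces conv -> bn exactly.
conv = nn.Conv2d(8, 16, 3, padding=1, bias=False)
bn = nn.BatchNorm2d(16).eval()
with torch.no_grad():
    # Give BN non-trivial statistics so the check is meaningful.
    bn.running_mean.uniform_(-1, 1)
    bn.running_var.uniform_(0.5, 2.0)
    bn.weight.uniform_(0.5, 1.5)
    bn.bias.uniform_(-0.5, 0.5)

    std = (bn.running_var + bn.eps).sqrt()
    t = (bn.weight / std).reshape(-1, 1, 1, 1)
    fused = nn.Conv2d(8, 16, 3, padding=1, bias=True)
    fused.weight.copy_(conv.weight * t)                            # W' = W * gamma / std
    fused.bias.copy_(bn.bias - bn.running_mean * bn.weight / std)  # b' = beta - mu * gamma / std

    x = torch.randn(1, 8, 32, 32)
    print(torch.allclose(fused(x), bn(conv(x)), atol=1e-5))  # True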

2. Code Changes

2.1 Create qarepvgg.py

Create a new Python file qarepvgg.py with the following content:
import numpy as np
import torch
import torch.nn as nn
 
 
def conv_bn(in_channels, out_channels, kernel_size, stride, padding, groups=1):
    """Bias-free Conv2d followed by BatchNorm2d; the BN absorbs the bias."""
    result = nn.Sequential()
    result.add_module('conv', nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                        kernel_size=kernel_size, stride=stride, padding=padding, groups=groups,
                                        bias=False))
    result.add_module('bn', nn.BatchNorm2d(num_features=out_channels))
 
    return result
 
 
class RepVGGBlock(nn.Module):
 
    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=1, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False):
        super(RepVGGBlock, self).__init__()
        self.deploy = deploy
        self.groups = groups
        self.in_channels = in_channels
 
        padding_11 = padding - kernel_size // 2
 
        self.nonlinearity = nn.SiLU()
 
        self.se = nn.Identity()  # use_se is accepted for interface parity, but SE is not enabled in this integration
 
        if deploy:
            self.rbr_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
                                         stride=stride,
                                         padding=padding, dilation=dilation, groups=groups, bias=True,
                                         padding_mode=padding_mode)
 
        else:
            self.rbr_identity = nn.BatchNorm2d(
                num_features=in_channels) if out_channels == in_channels and stride == 1 else None
            self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
                                     stride=stride, padding=padding, groups=groups)
            self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride,
                                   padding=padding_11, groups=groups)
 
    def _fuse_bn_tensor(self, branch):
        if branch is None:
            return 0, 0
        if isinstance(branch, nn.Sequential):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            assert isinstance(branch, nn.BatchNorm2d)
            if not hasattr(self, 'id_tensor'):
                input_dim = self.in_channels // self.groups
                kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
                for i in range(self.in_channels):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std
 
    def get_equivalent_kernel_bias(self):
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
        kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
        return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid
 
    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        if kernel1x1 is None:
            return 0
        else:
            return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])
 
    def forward(self, inputs):
        # Deploy mode: a single reparameterized 3x3 conv replaces all branches.
        if hasattr(self, 'rbr_reparam'):
            return self.nonlinearity(self.se(self.rbr_reparam(inputs)))
 
        if self.rbr_identity is None:
            id_out = 0
        else:
            id_out = self.rbr_identity(inputs)
 
        # Training mode: sum of the 3x3, 1x1, and identity branches.
        return self.nonlinearity(self.se(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out))
 
 
class QARepVGGBlock(RepVGGBlock):
    """
    RepVGGBlock is a basic rep-style block, including training and deploy status
    This code is based on https://arxiv.org/abs/2212.01593
    """
 
    def __init__(self, in_channels, dim, kernel_size=3,
                 stride=1, padding=1, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False):
        super(QARepVGGBlock, self).__init__(in_channels, dim, kernel_size, stride, padding, dilation, groups,
                                        padding_mode, deploy, use_se)
        if not deploy:
            # QARepVGG changes vs. RepVGG: a single BN after the branch sum,
            # a BN-free 1x1 branch, and a plain identity (no BN) on the skip.
            self.bn = nn.BatchNorm2d(dim)
            self.rbr_1x1 = nn.Conv2d(in_channels, dim, kernel_size=1, stride=stride, groups=groups, bias=False)
            self.rbr_identity = nn.Identity() if dim == in_channels and stride == 1 else None
        self._id_tensor = None
 
    def forward(self, inputs):
        if hasattr(self, 'rbr_reparam'):
            return self.nonlinearity(self.bn(self.se(self.rbr_reparam(inputs))))
 
        if self.rbr_identity is None:
            id_out = 0
        else:
            id_out = self.rbr_identity(inputs)
 
        return self.nonlinearity(self.bn(self.se(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out)))
 
    def get_equivalent_kernel_bias(self):
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
        kernel = kernel3x3 + self._pad_1x1_to_3x3_tensor(self.rbr_1x1.weight)
        bias = bias3x3
 
        if self.rbr_identity is not None:
            input_dim = self.in_channels // self.groups
            kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
            for i in range(self.in_channels):
                kernel_value[i, i % input_dim, 1, 1] = 1
            id_tensor = torch.from_numpy(kernel_value).to(self.rbr_1x1.weight.device)
            kernel = kernel + id_tensor
        return kernel, bias
 
    def switch_deploy(self):
        if hasattr(self, 'rbr_reparam'):
            return
        kernel, bias = self.get_equivalent_kernel_bias()
        self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels,
                                     out_channels=self.rbr_dense.conv.out_channels,
                                     kernel_size=self.rbr_dense.conv.kernel_size, stride=self.rbr_dense.conv.stride,
                                     padding=self.rbr_dense.conv.padding, dilation=self.rbr_dense.conv.dilation,
                                     groups=self.rbr_dense.conv.groups, bias=True)
        self.rbr_reparam.weight.data = kernel
        self.rbr_reparam.bias.data = bias
        for para in self.parameters():
            para.detach_()
        self.__delattr__('rbr_dense')
        self.__delattr__('rbr_1x1')
        if hasattr(self, 'rbr_identity'):
            self.__delattr__('rbr_identity')
        if hasattr(self, 'id_tensor'):
            self.__delattr__('id_tensor')
        # Note: the post-sum BN (self.bn) is kept and still applied after rbr_reparam
        # in forward(); _fuse_extra_bn_tensor below can fold it into the conv as well
        # if a fully BN-free deploy graph is needed.
        self.deploy = True
 
    def _fuse_extra_bn_tensor(self, kernel, bias, branch):
        assert isinstance(branch, nn.BatchNorm2d)
        running_mean = branch.running_mean - bias  # remove bias
        running_var = branch.running_var
        gamma = branch.weight
        beta = branch.bias
        eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std
 
 
class QARepNeXt(nn.Module):
    '''
        QARepNeXt is a stage block built from QARepVGG-style basic blocks
    '''
 
    def __init__(self, in_channels, out_channels, n=1, isTrue=None):  # isTrue is unused; kept for signature compatibility
        super().__init__()
        self.conv1 = QARepVGGBlock(in_channels, out_channels)
        self.block = nn.Sequential(*(QARepVGGBlock(out_channels, out_channels) for _ in range(n - 1))) if n > 1 else None
 
    def forward(self, x):
        x = self.conv1(x)
        if self.block is not None:
            x = self.block(x)
        return x
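After saving the file, a quick standalone self-test (a sketch using only the classes defined above; the __main__ guard keeps it from running when the module is imported) confirms that switch_deploy preserves the block's output:

if __name__ == '__main__':
    torch.manual_seed(0)
    m = QARepNeXt(32, 64, n=2).eval()  # eval() so BatchNorm uses running stats
    x = torch.randn(1, 32, 56, 56)
    with torch.no_grad():
        y_multi_branch = m(x)
        # Collect first, then reparameterize each QARepVGG block in place.
        for block in [b for b in m.modules() if hasattr(b, 'switch_deploy')]:
            block.switch_deploy()  # fold the 3x3, 1x1, and identity branches into one 3x3 conv
        y_reparam = m(x)
    print((y_multi_branch - y_reparam).abs().max())  # ~1e-6, i.e. numerically identical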
2.2 Modify the fuse function in tasks.py

In ultralytics/nn/tasks.py, update the fuse function of BaseModel so that any module exposing a switch_deploy method is reparameterized when the model is fused:
    def fuse(self):
        """
        Fuse the `Conv2d()` and `BatchNorm2d()` layers of the model into a single layer to improve computation efficiency.

        Returns:
            (nn.Module): The fused model.
        """
        LOGGER.info('Fusing layers... ')
        for m in self.model.modules():
            if hasattr(m, 'switch_deploy'):
                m.switch_deploy()  # reparameterize QARepVGG blocks into a single conv
            elif isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):
                m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
                delattr(m, 'bn')  # remove batchnorm
                m.forward = m.forward_fuse  # update forward
 
        self.info()
        return self
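With this change, reparameterization happens automatically whenever the model is fused (e.g. for inference or export). A quick check, assuming the config from section 2.4 is saved as yolov8s-qarepvgg.yaml (an illustrative filename):

from ultralytics import YOLO

# 'yolov8s-qarepvgg.yaml' is an illustrative name for the config in section 2.4.
model = YOLO('yolov8s-qarepvgg.yaml')
model.model.fuse()
# Every QARepVGG block should now hold a single reparameterized conv.
print(all(hasattr(m, 'rbr_reparam')
          for m in model.model.modules() if hasattr(m, 'switch_deploy')))  # True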
2.3 Register the module in the parse_model function of tasks.py

Locate ultralytics/nn/tasks.py and add the following branch inside the parse_model function, next to the existing module-handling branches:

        elif m in [QARepNeXt]:
            c1, c2 = ch[f], args[0]
            if c2 != nc:  # if not output
                c2 = make_divisible(c2 * gw, 8)
            args = [c1, c2, *args[1:]]
            if m in [QARepNeXt]:
                args.insert(2, n)  # number of repeats
                n = 1
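parse_model also needs to be able to resolve the QARepNeXt name, so import it near the other module imports at the top of tasks.py. A sketch, assuming qarepvgg.py was saved as ultralytics/nn/qarepvgg.py (adjust the path to wherever you placed the file):

# Top of ultralytics/nn/tasks.py, next to the existing module imports.
# The module path below assumes qarepvgg.py lives in ultralytics/nn/.
from ultralytics.nn.qarepvgg import QARepNeXt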
2.4 YAML file
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with QARepNeXt in the backbone. For usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]
  s: [0.33, 0.50, 1024]
  m: [0.67, 0.75, 768]
  l: [1.00, 1.00, 512]
  x: [1.00, 1.25, 512]

 
# YOLOv8.0s backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 1, QARepNeXt, [128]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 9
 
# YOLOv8.0s head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12
 
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)
 
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 18 (P4/16-medium)
 
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 21 (P5/32-large)
 
  - [[15, 18, 21], 1, Detect, [nc]]  # Detect(P3, P4, P5)
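Save the config, e.g. as yolov8s-qarepvgg.yaml (the name is illustrative; the 's' in the filename selects the 's' scale from the scales table), and train as usual:

from ultralytics import YOLO

# 'yolov8s-qarepvgg.yaml' is the illustrative name used for the config above.
model = YOLO('yolov8s-qarepvgg.yaml')
model.train(data='coco128.yaml', epochs=100, imgsz=640)  # coco128 as a small example dataset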
 
 
 
