【YOLOv8 Improvement - Feature Fusion】GELAN: YOLOv9's Generalized Efficient Layer Aggregation Network, Efficient with Accuracy Gains

YOLOv8 Object Detection: Innovative Improvements and Practical Cases column

Column table of contents: YOLOv8 Effective Improvement Series and Project Index, covering innovations such as convolutions, backbones, attention mechanisms, and detection heads, plus a range of practical object detection and segmentation projects

Column link: YOLOv8 Fundamentals + Innovative Improvements + Practical Cases

Introduction


Abstract

Today's deep learning methods focus on designing the most appropriate objective functions so that model predictions are as close as possible to the ground truth. At the same time, an appropriate architecture must be designed so that enough information can be acquired for prediction. Existing methods ignore the fact that when input data undergoes layer-by-layer feature extraction and spatial transformation, a large amount of information is lost. This paper delves into this important problem of data loss as data is transmitted through deep networks, namely the information bottleneck and reversible functions. We propose the concept of programmable gradient information (PGI) to cope with the various changes required for deep networks to achieve multiple objectives. PGI can provide complete input information for the target task to compute the objective function, so that reliable gradient information is obtained to update network weights. In addition, we design a new lightweight network architecture, the Generalized Efficient Layer Aggregation Network (GELAN), based on gradient path planning. GELAN's architecture confirms that PGI achieves superior results on lightweight models. We verified the proposed GELAN and PGI on MS COCO object detection. The results show that GELAN, using only conventional convolution operators, achieves better parameter utilization than state-of-the-art methods built on depth-wise convolution. PGI can be used with models of any size, from lightweight to large; it can be used to obtain complete information, so that models trained from scratch achieve better results than state-of-the-art models pre-trained on large datasets; the comparison results are shown in Figure 1. Source code is available at https://github.com/WongKinYiu/yolov9.

Links

Paper: https://arxiv.org/abs/2402.13616

Code: https://github.com/WongKinYiu/yolov9

Basic Principle
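GELAN generalizes ELAN by combining the cross-stage partial (CSP) connections of CSPNet with ELAN's layer aggregation: a stem convolution projects the input, the result is split along the channel dimension, one half is pushed through a chain of computational blocks (any block can be used, not just convolution stacks), and every intermediate output is concatenated before a final 1x1 transition convolution. The sketch below is illustrative pseudocode distilled from the blocks in the next section, not the authors' literal code:

import torch

def gelan_forward(x, stem, blocks, transition):
    y = list(stem(x).chunk(2, 1))       # project, then split into two halves along channels
    for block in blocks:
        y.append(block(y[-1]))          # each block consumes the most recent output
    return transition(torch.cat(y, 1))  # aggregate every branch with a 1x1 transition conv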

Core Code

class SPPELAN(nn.Module):
    # SPP-ELAN block
    def __init__(self, c1, c2, c3):  # in channels, out channels, hidden channels
        super().__init__()
        self.c = c3
        self.cv1 = Conv(c1, c3, 1, 1)  # 1x1 conv projecting the input to c3 channels
        self.cv2 = SP(5)  # three max-pooling (SP) modules, kernel size 5
        self.cv3 = SP(5)
        self.cv4 = SP(5)
        self.cv5 = Conv(4 * c3, c2, 1, 1)  # 1x1 conv reducing the 4*c3 concatenated channels to c2

    def forward(self, x):
        y = [self.cv1(x)]  # project the input with cv1
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3, self.cv4])  # chain the three pooling modules, keeping each intermediate output
        return self.cv5(torch.cat(y, 1))  # concatenate along the channel dimension, then fuse with cv5

class ELAN1(nn.Module):
    # ELAN1 block
    def __init__(self, c1, c2, c3, c4):  # in channels, out channels, intermediate channels 1 and 2
        super().__init__()
        self.c = c3 // 2
        self.cv1 = Conv(c1, c3, 1, 1)  # 1x1 conv projecting the input to c3 channels
        self.cv2 = Conv(c3 // 2, c4, 3, 1)  # 3x3 conv mapping c3/2 channels to c4
        self.cv3 = Conv(c4, c4, 3, 1)  # 3x3 conv keeping c4 channels
        self.cv4 = Conv(c3 + (2 * c4), c2, 1, 1)  # 1x1 conv reducing c3 + 2*c4 channels to c2

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, 1))  # project with cv1, then split into two halves along channels
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3])  # feed the latest output through cv2, then cv3, keeping each result
        return self.cv4(torch.cat(y, 1))  # concatenate all branches along channels, then fuse with cv4

    def forward_split(self, x):
        y = list(self.cv1(x).split((self.c, self.c), 1))  # same as forward, but split with explicit channel counts
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3])
        return self.cv4(torch.cat(y, 1))

class RepNCSPELAN4(nn.Module):
    # CSP-ELAN block
    def __init__(self, c1, c2, c3, c4, c5=1):  # in channels, out channels, intermediate channels 1 and 2, block repeats
        super().__init__()
        self.c = c3 // 2
        self.cv1 = Conv(c1, c3, 1, 1)  # 1x1 conv projecting the input to c3 channels
        self.cv2 = nn.Sequential(RepNCSP(c3 // 2, c4, c5), Conv(c4, c4, 3, 1))  # RepNCSP followed by a 3x3 conv
        self.cv3 = nn.Sequential(RepNCSP(c4, c4, c5), Conv(c4, c4, 3, 1))  # a second RepNCSP + 3x3 conv stage
        self.cv4 = Conv(c3 + (2 * c4), c2, 1, 1)  # 1x1 conv reducing c3 + 2*c4 channels to c2

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, 1))  # project with cv1, then split into two halves along channels
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3])  # feed the latest output through cv2, then cv3, keeping each result
        return self.cv4(torch.cat(y, 1))  # concatenate all branches along channels, then fuse with cv4

    def forward_split(self, x):
        y = list(self.cv1(x).split((self.c, self.c), 1))  # same as forward, but split with explicit channel counts
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3])
        return self.cv4(torch.cat(y, 1))
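As a quick sanity check on the channel arithmetic, here is a minimal sketch; it assumes the helper classes Conv, SP, and RepNCSP from the full file below are in scope:

x = torch.randn(1, 64, 32, 32)           # N, C, H, W
spp = SPPELAN(64, 128, 64)               # c1=64, c2=128, c3=64; cv5 sees 4*64 = 256 channels
print(spp(x).shape)                      # torch.Size([1, 128, 32, 32])
elan = RepNCSPELAN4(64, 128, 64, 32, 1)  # cv4 sees c3 + 2*c4 = 64 + 64 = 128 channels
print(elan(x).shape)                     # torch.Size([1, 128, 32, 32])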

Download the YOLOv8 Code

Direct download

GitHub: https://github.com/ultralytics/ultralytics


Git Clone

git clone https://github.com/ultralytics/ultralytics

Set Up the Environment

Enter the repository root directory and install the dependencies.


pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

In recent versions, the official requirements.txt file has been deprecated; all necessary code and dependencies are now consolidated into the ultralytics package. Installing this single library therefore provides all required functionality and environment dependencies.

pip install ultralytics
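After installation you can verify the environment with the package's built-in checks() helper:

import ultralytics
ultralytics.checks()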

Add the Code

Under ultralytics/nn/ in the repository root, create a featureFusion directory, then create a new Python file named GELAN.py and paste in the code below.

import numpy as np
import torch.nn as nn
import torch


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    # Pad to 'same' shape outputs
    if d > 1:
        k = (
            d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]
        )  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p
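
# Examples: autopad(3) -> 1, autopad(5) -> 2, autopad(3, d=2) -> 2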


class Conv(nn.Module):
    # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)
    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        super().__init__()
        self.conv = nn.Conv2d(
            c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False
        )
        self.bn = nn.BatchNorm2d(c2)
        self.act = (
            self.default_act
            if act is True
            else act
            if isinstance(act, nn.Module)
            else nn.Identity()
        )

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        return self.act(self.conv(x))


class RepNBottleneck(nn.Module):
    # Standard bottleneck
    def __init__(
        self, c1, c2, shortcut=True, g=1, k=(3, 3), e=0.5
    ):  # ch_in, ch_out, shortcut, groups, kernels, expansion
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = RepConvN(c1, c_, k[0], 1)
        self.cv2 = Conv(c_, c2, k[1], 1, g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))


class RepNCSP(nn.Module):
    # CSP Bottleneck with 3 convolutions
    def __init__(
        self, c1, c2, n=1, shortcut=True, g=1, e=0.5
    ):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(2 * c_, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.Sequential(
            *(RepNBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))
        )

    def forward(self, x):
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))


class RepConvN(nn.Module):
    """RepConv is a basic rep-style block, including training and deploy status
    This code is based on https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py
    """

    default_act = nn.SiLU()  # default activation

    def __init__(
        self, c1, c2, k=3, s=1, p=1, g=1, d=1, act=True, bn=False, deploy=False
    ):
        super().__init__()
        assert k == 3 and p == 1
        self.g = g
        self.c1 = c1
        self.c2 = c2
        self.act = (
            self.default_act
            if act is True
            else act
            if isinstance(act, nn.Module)
            else nn.Identity()
        )

        self.bn = None
        self.conv1 = Conv(c1, c2, k, s, p=p, g=g, act=False)
        self.conv2 = Conv(c1, c2, 1, s, p=(p - k // 2), g=g, act=False)

    def forward_fuse(self, x):
        """Forward process"""
        return self.act(self.conv(x))

    def forward(self, x):
        """Forward process"""
        id_out = 0 if self.bn is None else self.bn(x)
        return self.act(self.conv1(x) + self.conv2(x) + id_out)

    def get_equivalent_kernel_bias(self):
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.conv1)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.conv2)
        kernelid, biasid = self._fuse_bn_tensor(self.bn)
        return (
            kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid,
            bias3x3 + bias1x1 + biasid,
        )

    def _avg_to_3x3_tensor(self, avgp):
        channels = self.c1
        groups = self.g
        kernel_size = avgp.kernel_size
        input_dim = channels // groups
        k = torch.zeros((channels, input_dim, kernel_size, kernel_size))
        k[np.arange(channels), np.tile(np.arange(input_dim), groups), :, :] = (
            1.0 / kernel_size**2
        )
        return k

    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        if kernel1x1 is None:
            return 0
        else:
            return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])

    def _fuse_bn_tensor(self, branch):
        if branch is None:
            return 0, 0
        if isinstance(branch, Conv):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        elif isinstance(branch, nn.BatchNorm2d):
            if not hasattr(self, "id_tensor"):
                input_dim = self.c1 // self.g
                kernel_value = np.zeros((self.c1, input_dim, 3, 3), dtype=np.float32)
                for i in range(self.c1):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def fuse_convs(self):
        if hasattr(self, "conv"):
            return
        kernel, bias = self.get_equivalent_kernel_bias()
        self.conv = nn.Conv2d(
            in_channels=self.conv1.conv.in_channels,
            out_channels=self.conv1.conv.out_channels,
            kernel_size=self.conv1.conv.kernel_size,
            stride=self.conv1.conv.stride,
            padding=self.conv1.conv.padding,
            dilation=self.conv1.conv.dilation,
            groups=self.conv1.conv.groups,
            bias=True,
        ).requires_grad_(False)
        self.conv.weight.data = kernel
        self.conv.bias.data = bias
        for para in self.parameters():
            para.detach_()
        self.__delattr__("conv1")
        self.__delattr__("conv2")
        if hasattr(self, "nm"):
            self.__delattr__("nm")
        if hasattr(self, "bn"):
            self.__delattr__("bn")
        if hasattr(self, "id_tensor"):
            self.__delattr__("id_tensor")


class SP(nn.Module):
    def __init__(self, k=3, s=1):
        super(SP, self).__init__()
        self.m = nn.MaxPool2d(kernel_size=k, stride=s, padding=k // 2)

    def forward(self, x):
        return self.m(x)


class SPPELAN(nn.Module):
    # SPP-ELAN block

    def __init__(self, c1, c2, c3):  # ch_in, ch_out, hidden channels
        super().__init__()
        self.c = c3
        self.cv1 = Conv(c1, c3, 1, 1)
        self.cv2 = SP(5)
        self.cv3 = SP(5)
        self.cv4 = SP(5)
        self.cv5 = Conv(4 * c3, c2, 1, 1)

    def forward(self, x):
        y = [self.cv1(x)]
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3, self.cv4])
        return self.cv5(torch.cat(y, 1))


class RepNCSPELAN4(nn.Module):
    # CSP-ELAN block

    def __init__(self, c1, c2, c3, c4, c5=1):  # ch_in, ch_out, intermediate channels 1 and 2, block repeats
        super().__init__()
        self.c = c3 // 2
        self.cv1 = Conv(c1, c3, 1, 1)
        self.cv2 = nn.Sequential(RepNCSP(c3 // 2, c4, c5), Conv(c4, c4, 3, 1))
        self.cv3 = nn.Sequential(RepNCSP(c4, c4, c5), Conv(c4, c4, 3, 1))
        self.cv4 = Conv(c3 + (2 * c4), c2, 1, 1)

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, 1))
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3])
        return self.cv4(torch.cat(y, 1))

    def forward_split(self, x):
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in [self.cv2, self.cv3])
        return self.cv4(torch.cat(y, 1))
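
Before registering the module, it is worth a quick standalone test. The snippet below is my own sanity check (not part of the original file); appended to the bottom of GELAN.py, it verifies that RepConvN's re-parameterization produces the same output as the multi-branch form:

if __name__ == "__main__":
    m = RepConvN(32, 32, 3, 1)
    m.eval()                     # BN must use running stats for the fusion to match
    x = torch.randn(1, 32, 16, 16)
    y_branches = m(x)            # multi-branch forward (3x3 + 1x1)
    m.fuse_convs()               # collapse both branches into a single 3x3 conv
    y_fused = m.forward_fuse(x)  # single-conv forward
    print(torch.allclose(y_branches, y_fused, atol=1e-5))  # expected: True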

 

Registration

Make the following changes in ultralytics/nn/tasks.py:

Step 1: add the import at the top of the file.

from ultralytics.nn.featureFusion.GELAN import RepNCSPELAN4, SPPELAN

Step 2: modify parse_model.

In def parse_model(d, ch, verbose=True):, add RepNCSPELAN4 and SPPELAN to the end of the module tuple:

        if m in (
            Classify, Conv, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, Focus,
            BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x, RepC3, EVCBlock,
            CAFMAttention, CloFormerAttnConv, C2f_iAFF, CSPStage, nn.Conv2d, GSConv, VoVGSCSP, C2f_CBAM, C2f_RefConv, C2f_SimAM,
            DualConv, C2f_NAM, MSFE, C2f_MDCR, C2f_MHSA, PPA, C2f_DASI, C2f_CascadedGroupAttention, C2f_DWR, C2f_DWRSeg,
            SPConv, AKConv, C2f_DySnakeConv, ScConv, C2f_ScConv, RFAConv, RFCBAMConv, RFCAConv, CAConv, CBAMConv, C2f_MultiDilatelocalAttention,
            C2f_iRMB, C2f_MSBlock, MobileViTBlock, CoordAtt, MCALayer, C2f_FocusedLinearAttention, RCSOSA, RepNCSPELAN4, SPPELAN
        ):
            c1, c2 = ch[f], args[0]
            if c2 != nc:  # if c2 is not the number of classes (i.e. for Classify() output)
                c2 = make_divisible(min(c2, max_channels) * width, 8)

            args = [c1, c2, *args[1:]]
            if m in (BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, C3x, RepC3, C2f_deformable_LKA, Sea_AttentionBlock, C2f_iAFF, CSPStage, VoVGSCSP, C2f_SimAM,
                     C2f_NAM, C2f_MDCR, C2f_DASI, C2f_DWR, C2f_DWRSeg, C2f_MultiDilatelocalAttention, C2f_iRMB, C2f_MSBlock, MobileViTBlock, C2f_FocusedLinearAttention
                     ):
                args.insert(2, n)  # number of repeats
                n = 1
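
As a concrete example, with the backbone entry [-1, 1, RepNCSPELAN4, [128, 64, 32, 1]], parse_model resolves c1 from the previous layer's output channels and width-scales the first argument, producing args = [c1, c2, 64, 32, 1], which lines up with RepNCSPELAN4(c1, c2, c3, c4, c5). Note that RepNCSPELAN4 is deliberately left out of the second tuple: its repeat count is passed explicitly as c5 in the YAML rather than injected from the repeats column.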


Configure yolov8_GELAN.yaml

Configuration reference: https://github.com/WongKinYiu/yolov9/tree/5b1ea9a8b3f0ffe4fe0e203ec6232d788bb3fcff/models/detect

ultralytics/ultralytics/cfg/models/v8/yolov8_GELAN.yaml

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect
 
# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]
  s: [0.33, 0.50, 1024]
  m: [0.67, 0.75, 768]
  l: [1.00, 1.00, 512]
  x: [1.00, 1.25, 512]

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 1, RepNCSPELAN4, [128, 64, 32, 1]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 1, RepNCSPELAN4, [256, 128, 64, 1]]
  - [-1, 1, Conv, [256, 3, 2]] # 5-P4/16
  - [-1, 1, RepNCSPELAN4, [256, 256, 128, 1]]
  - [-1, 1, Conv, [256, 3, 2]] # 7-P5/32
  - [-1, 1, RepNCSPELAN4, [256, 256, 128, 1]]
  - [-1, 1, SPPELAN, [256, 128]] # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 1, RepNCSPELAN4, [256, 256, 128, 1]] # 12

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 1, RepNCSPELAN4, [128, 128, 64, 1]] # 15 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]] # cat head P4
  - [-1, 1, RepNCSPELAN4, [256, 256, 128, 1]] # 18 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]] # cat head P5
  - [-1, 1, RepNCSPELAN4, [256, 256, 128, 1]] # 21 (P5/32-large)

  - [[15, 18, 21], 1, Detect, [nc]] # Detect(P3, P4, P5)

Experiment

Training script

from ultralytics import YOLO

yaml = 'ultralytics/cfg/models/v8/yolov8_GELAN.yaml'

model = YOLO(yaml)
model.info()

if __name__ == "__main__":
    results = model.train(data='coco128.yaml',
                          name='yolov8_GELAN',
                          epochs=10,
                          workers=8,
                          batch=1)
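
Once training finishes, the best weights land in the run directory (runs/detect/yolov8_GELAN/weights/best.pt, assuming Ultralytics' default output layout) and can be validated or used for inference:

from ultralytics import YOLO

model = YOLO('runs/detect/yolov8_GELAN/weights/best.pt')  # path assumes the default save location
metrics = model.val(data='coco128.yaml')                  # evaluate mAP on the validation split
results = model.predict('https://ultralytics.com/images/bus.jpg')  # sample inference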


Results

