YOLOv11 Improvements - Attention Mechanisms | PPA (Parallelized Patch-Aware Attention): hierarchical feature fusion to preserve small-target representations


Preface

This article introduces HCF-Net, a deep-learning method for infrared small-target detection, and shows how to integrate it with YOLOv11. HCF-Net uses an upgraded U-Net architecture built from three key modules: PPA, DASI, and MDCR. The PPA module combines hierarchical feature fusion, attention, and a multi-branch feature-extraction strategy to capture feature information at different scales and levels; the DASI module strengthens the skip connections with adaptive fusion of high- and low-dimensional features; the MDCR module captures spatial features through depthwise separable convolutions. Here we integrate the PPA module into YOLOv11, replacing some of the original modules.

Article index: YOLOv11 Improvement Compendium: convolutions, lightweighting, attention mechanisms, loss functions, Backbone, SPPF, Neck, and detection-head optimizations

Column link: YOLOv11 Improvements column

Introduction


Abstract

Infrared small-target detection, an important research direction in computer vision, focuses on identifying and localizing targets that occupy only a few pixels in infrared imagery. The task is technically demanding because the targets are extremely small and infrared backgrounds are highly complex. The paper proposes HCF-Net, a deep-learning based infrared small-target detection method that markedly improves detection performance by integrating several novel modules: the Parallelized Patch-Aware Attention (PPA) module, the Dimension-Aware Selective Integration (DASI) module, and the Multi-Dilated Channel Refiner (MDCR) module. The PPA module adopts a multi-branch feature-extraction strategy to capture feature information across scales and levels; the DASI module performs adaptive channel selection and fusion; the MDCR module obtains spatial features over diverse receptive fields through multiple depthwise separable convolution layers. Extensive experiments on the SIRST single-frame infrared image dataset show that HCF-Net performs strongly, significantly outperforming traditional methods and existing deep-learning models. The implementation is open-sourced at https://github.com/zhengshuchen/HCFNet.

Links

Paper: [paper link]

Code: https://github.com/zhengshuchen/HCFNet

How It Works

HCF-Net (Hierarchical Context Fusion Network) is a deep-learning model for infrared small-target detection, designed to improve the recognition and localization of tiny targets in infrared images.

  1. Network architecture: HCF-Net adopts an upgraded U-Net architecture built around three key modules: the Parallelized Patch-Aware Attention (PPA) module, the Dimension-Aware Selective Integration (DASI) module, and the Multi-Dilated Channel Refiner (MDCR) module. These modules address the challenges of infrared small-target detection at different levels of the network.

  2. PPA module

    • Hierarchical Feature Fusion: PPA uses hierarchical feature fusion and attention to preserve and strengthen small-target representations through repeated downsampling, ensuring that critical information is retained throughout the network.
    • Multi-Branch Feature Extraction: PPA adopts a multi-branch feature-extraction strategy to capture feature information at different scales and levels, improving small-target detection accuracy.
  3. DASI module

    • Adaptive Feature Fusion: DASI strengthens the skip connections in U-Net, focusing on adaptive selection and fine-grained fusion of high- and low-dimensional features to enhance small-target saliency.
  4. MDCR module

    • Spatial Feature Refinement: MDCR captures spatial features over different receptive fields through multiple depthwise separable convolution layers, modeling the differences between targets and background in finer detail and improving small-target localization.


PPA

In infrared small-object detection, small objects easily lose critical information during repeated downsampling. PPA addresses this by replacing the standard convolution operations in the basic building blocks of the encoder and decoder. PPA has two main strengths: multi-branch feature extraction and feature-fusion attention.

Multi-branch feature extraction

PPA's multi-branch feature-extraction strategy is shown in the figure below: a parallel multi-branch design in which each branch extracts features at a different scale and level. This helps capture multi-scale characteristics of objects and improves small-object detection accuracy. Concretely, there are three parallel branches: a local branch, a global branch, and a serial convolution branch.

  1. Local branch: the feature tensor $F'$ produced by a pointwise convolution is split into a set of spatially contiguous patches; channel averaging and a linear projection are applied, followed by an activation to obtain a probability distribution over the spatial dimension.

  2. Global branch: an efficient operation selects task-relevant features within the patches, weighting them by cosine similarity, ultimately producing local and global features.

  3. Serial convolution branch: three 3x3 convolution layers replace a traditional large kernel, producing several outputs that are summed to form the serial-convolution output.

The features computed by these branches, $F_{local}$, $F_{global}$, and $F_{conv}$, have matching spatial size and channel count.

Through this strategy, PPA not only captures features at different scales effectively, but also improves feature expressiveness and overall model performance.
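
Putting the branches together as in the reference implementation below (where the skip path is a 1x1 convolution and the two patch-aware attention branches use patch sizes 2 and 4), the fusion stage can be summarized as:

$$\tilde{F} = F_{skip} + F_{lga2} + F_{lga4} + F_{c1} + F_{c2} + F_{c3}$$

$$F_{out} = \mathrm{ReLU}\big(\mathrm{BN}(\mathrm{Drop}(\mathrm{SA}(\mathrm{ECA}(\tilde{F}))))\big)$$

where ECA is efficient channel attention, SA is a spatial attention map, and Drop/BN denote 2D dropout and batch normalization. This is a simplified reading of the code below rather than the paper's exact notation.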


(Figure: structure of PPA's multi-branch feature extraction)

Core code

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class conv_block(nn.Module):
    def __init__(self,
                 in_features,  # number of input channels
                 out_features,  # number of output channels
                 kernel_size=(3, 3),  # convolution kernel size
                 stride=(1, 1),  # stride
                 padding=(1, 1),  # padding
                 dilation=(1, 1),  # dilation rate
                 norm_type='bn',  # normalization type
                 activation=True,  # whether to apply an activation
                 use_bias=True,  # whether to use a bias
                 groups=1  # number of groups for grouped convolution
                 ):
        super().__init__()

        # Convolution layer
        self.conv = nn.Conv2d(in_channels=in_features,
                              out_channels=out_features,
                              kernel_size=kernel_size,
                              stride=stride,
                              padding=padding,
                              dilation=dilation,
                              bias=use_bias,
                              groups=groups)

        self.norm_type = norm_type
        self.act = activation

        # Normalization layer
        if self.norm_type == 'gn':
            self.norm = nn.GroupNorm(32 if out_features >= 32 else out_features, out_features)
        if self.norm_type == 'bn':
            self.norm = nn.BatchNorm2d(out_features)
        
        # Activation function
        if self.act:
            self.relu = nn.ReLU(inplace=False)

    def forward(self, x):
        x = self.conv(x)
        if self.norm_type is not None:
            x = self.norm(x)

        if self.act:
            x = self.relu(x)
        return x
    

class LocalGlobalAttention(nn.Module):
    def __init__(self, output_dim, patch_size):
        super().__init__()
        self.output_dim = output_dim
        self.patch_size = patch_size
        self.mlp1 = nn.Linear(patch_size*patch_size, output_dim // 2)
        self.norm = nn.LayerNorm(output_dim // 2)
        self.mlp2 = nn.Linear(output_dim // 2, output_dim)
        self.conv = nn.Conv2d(output_dim, output_dim, kernel_size=1)
        self.prompt = torch.nn.parameter.Parameter(torch.randn(output_dim, requires_grad=True)) 
        self.top_down_transform = torch.nn.parameter.Parameter(torch.eye(output_dim), requires_grad=True)

    def forward(self, x):
        x = x.permute(0, 2, 3, 1)  # rearrange to (B, H, W, C)
        B, H, W, C = x.shape
        P = self.patch_size

        # Local branch
        local_patches = x.unfold(1, P, P).unfold(2, P, P)  # split into patches (B, H/P, W/P, P, P, C)
        local_patches = local_patches.reshape(B, -1, P*P, C)  # reshape to (B, H/P*W/P, P*P, C)
        local_patches = local_patches.mean(dim=-1)  # average over channels (B, H/P*W/P, P*P)

        local_patches = self.mlp1(local_patches)  # linear layer (B, H/P*W/P, output_dim // 2)
        local_patches = self.norm(local_patches)  # layer norm (B, H/P*W/P, output_dim // 2)
        local_patches = self.mlp2(local_patches)  # linear layer (B, H/P*W/P, output_dim)

        local_attention = F.softmax(local_patches, dim=-1)  # attention weights (B, H/P*W/P, output_dim)
        local_out = local_patches * local_attention  # apply attention weights (B, H/P*W/P, output_dim)

        # Cosine similarity with the learnable prompt vector
        cos_sim = F.normalize(local_out, dim=-1) @ F.normalize(self.prompt[None, ..., None], dim=1)  # (B, N, 1)
        mask = cos_sim.clamp(0, 1)  # similarity mask
        local_out = local_out * mask  # apply mask
        local_out = local_out @ self.top_down_transform  # top-down transform

        # Restore shape
        local_out = local_out.reshape(B, H // P, W // P, self.output_dim)  # (B, H/P, W/P, output_dim)
        local_out = local_out.permute(0, 3, 1, 2)
        local_out = F.interpolate(local_out, size=(H, W), mode='bilinear', align_corners=False)  # upsample back to the input size
        output = self.conv(local_out)  # 1x1 convolution

        return output


class ECA(nn.Module):
    def __init__(self,in_channel,gamma=2,b=1):
        super(ECA, self).__init__()
        k = int(abs((math.log(in_channel,2)+b)/gamma))  # adaptive kernel size
        kernel_size = k if k % 2 else k+1  # force an odd kernel size
        padding = kernel_size // 2  # padding
        self.pool = nn.AdaptiveAvgPool2d(output_size=1)  # global average pooling
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels=1, out_channels=1, kernel_size=kernel_size, padding=padding, bias=False),  # 1D convolution over channels
            nn.Sigmoid()  # sigmoid gate
        )

    def forward(self, x):
        out = self.pool(x)  # global average pooling
        out = out.view(x.size(0), 1, x.size(1))  # reshape to (B, 1, C)
        out = self.conv(out)  # 1D convolution
        out = out.view(x.size(0), x.size(1), 1, 1)  # reshape to (B, C, 1, 1)
        return out * x  # channel-wise reweighting


class SpatialAttentionModule(nn.Module):
    def __init__(self):
        super(SpatialAttentionModule, self).__init__()
        self.conv2d = nn.Conv2d(in_channels=2, out_channels=1, kernel_size=7, stride=1, padding=3)  # 7x7 convolution
        self.sigmoid = nn.Sigmoid()  # sigmoid gate

    def forward(self, x):
        avgout = torch.mean(x, dim=1, keepdim=True)  # channel-wise mean
        maxout, _ = torch.max(x, dim=1, keepdim=True)  # channel-wise max
        out = torch.cat([avgout, maxout], dim=1)  # concatenate along channels
        out = self.sigmoid(self.conv2d(out))  # convolution + activation -> spatial map
        return out * x  # spatial reweighting


class PPA(nn.Module):
    def __init__(self, in_features, filters) -> None:
        super().__init__()

        # Skip-connection conv block (1x1)
        self.skip = conv_block(in_features=in_features,
                               out_features=filters,
                               kernel_size=(1, 1),
                               padding=(0, 0),
                               norm_type='bn',
                               activation=False)
        
        # Serial 3x3 conv blocks
        self.c1 = conv_block(in_features=in_features,
                             out_features=filters,
                             kernel_size=(3, 3),
                             padding=(1, 1),
                             norm_type='bn',
                             activation=True)
        self.c2 = conv_block(in_features=filters,
                             out_features=filters,
                             kernel_size=(3, 3),
                             padding=(1, 1),
                             norm_type='bn',
                             activation=True)
        self.c3 = conv_block(in_features=filters,
                             out_features=filters,
                             kernel_size=(3, 3),
                             padding=(1, 1),
                             norm_type='bn',
                             activation=True)
        
        # Spatial attention module
        self.sa = SpatialAttentionModule()
        # ECA channel attention
        self.cn = ECA(filters)
        # Local-global attention at patch sizes 2 and 4
        self.lga2 = LocalGlobalAttention(filters, 2)
        self.lga4 = LocalGlobalAttention(filters, 4)

        # Batch norm, dropout, and activations
        self.bn1 = nn.BatchNorm2d(filters)
        self.drop = nn.Dropout2d(0.1)
        self.relu = nn.ReLU()
        self.gelu = nn.GELU()

    def forward(self, x):
        x_skip = self.skip(x)  # skip-connection output
        x_lga2 = self.lga2(x_skip)  # local-global attention (patch size 2)
        x_lga4 = self.lga4(x_skip)  # local-global attention (patch size 4)
        x1 = self.c1(x)  # first conv block
        x2 = self.c2(x1)  # second conv block
        x3 = self.c3(x2)  # third conv block
        x = x1 + x2 + x3 + x_skip + x_lga2 + x_lga4  # fuse all branches
        x = self.cn(x)  # ECA channel attention
        x = self.sa(x)  # spatial attention
        x = self.drop(x)  # dropout
        x = self.bn1(x)  # batch normalization
        x = self.relu(x)  # activation
        return x
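
A minimal shape check (a hypothetical snippet, assuming the classes above are in scope) confirms that PPA preserves the spatial resolution while mapping the channel count from in_features to filters:

import torch

# Hypothetical sanity check: 64 input channels, 128 output filters.
ppa = PPA(in_features=64, filters=128).eval()  # eval() disables the Dropout2d layer
x = torch.randn(1, 64, 32, 32)
y = ppa(x)
print(y.shape)  # expected: torch.Size([1, 128, 32, 32])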

Adding the code to YOLOv11

In the ultralytics/nn/ directory at the project root, create a new attention directory, then create a file named PPA.py and copy the code below into it.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class conv_block(nn.Module):
    def __init__(self,
                 in_features,
                 out_features,
                 kernel_size=(3, 3),
                 stride=(1, 1),
                 padding=(1, 1),
                 dilation=(1, 1),
                 norm_type='bn',
                 activation=True,
                 use_bias=True,
                 groups=1
                 ):
        super().__init__()

        self.conv = nn.Conv2d(in_channels=in_features,
                              out_channels=out_features,
                              kernel_size=kernel_size,
                              stride=stride,
                              padding=padding,
                              dilation=dilation,
                              bias=use_bias,
                              groups=groups)

        self.norm_type = norm_type
        self.act = activation

        if self.norm_type == 'gn':
            self.norm = nn.GroupNorm(32 if out_features >= 32 else out_features, out_features)
        if self.norm_type == 'bn':
            self.norm = nn.BatchNorm2d(out_features)
        if self.act:
            # self.relu = nn.GELU()
            self.relu = nn.ReLU(inplace=False)

    def forward(self, x):
        x = self.conv(x)
        if self.norm_type is not None:
            x = self.norm(x)

        if self.act:
            x = self.relu(x)
        return x
    

class LocalGlobalAttention(nn.Module):
    def __init__(self, output_dim, patch_size):
        super().__init__()
        self.output_dim = output_dim
        self.patch_size = patch_size
        self.mlp1 = nn.Linear(patch_size*patch_size, output_dim // 2)
        self.norm = nn.LayerNorm(output_dim // 2)
        self.mlp2 = nn.Linear(output_dim // 2, output_dim)
        self.conv = nn.Conv2d(output_dim, output_dim, kernel_size=1)
        self.prompt = torch.nn.parameter.Parameter(torch.randn(output_dim, requires_grad=True)) 
        self.top_down_transform = torch.nn.parameter.Parameter(torch.eye(output_dim), requires_grad=True)

    def forward(self, x):
        x = x.permute(0, 2, 3, 1)
        B, H, W, C = x.shape
        P = self.patch_size

        # Local branch
        local_patches = x.unfold(1, P, P).unfold(2, P, P)  # (B, H/P, W/P, P, P, C)
        local_patches = local_patches.reshape(B, -1, P*P, C)  # (B, H/P*W/P, P*P, C)
        local_patches = local_patches.mean(dim=-1)  # (B, H/P*W/P, P*P)

        local_patches = self.mlp1(local_patches)  # (B, H/P*W/P, input_dim // 2)
        local_patches = self.norm(local_patches)  # (B, H/P*W/P, input_dim // 2)
        local_patches = self.mlp2(local_patches)  # (B, H/P*W/P, output_dim)

        local_attention = F.softmax(local_patches, dim=-1)  # (B, H/P*W/P, output_dim)
        local_out = local_patches * local_attention # (B, H/P*W/P, output_dim)

        cos_sim = F.normalize(local_out, dim=-1) @ F.normalize(self.prompt[None, ..., None], dim=1)  # B, N, 1
        mask = cos_sim.clamp(0, 1)
        local_out = local_out * mask
        local_out = local_out @ self.top_down_transform

        # Restore shapes
        local_out = local_out.reshape(B, H // P, W // P, self.output_dim)  # (B, H/P, W/P, output_dim)
        local_out = local_out.permute(0, 3, 1, 2)
        local_out = F.interpolate(local_out, size=(H, W), mode='bilinear', align_corners=False)
        output = self.conv(local_out)

        return output

 


class ECA(nn.Module):
    def __init__(self,in_channel,gamma=2,b=1):
        super(ECA, self).__init__()
        k=int(abs((math.log(in_channel,2)+b)/gamma))
        kernel_size=k if k % 2 else k+1
        padding=kernel_size//2
        self.pool=nn.AdaptiveAvgPool2d(output_size=1)
        self.conv=nn.Sequential(
            nn.Conv1d(in_channels=1,out_channels=1,kernel_size=kernel_size,padding=padding,bias=False),
            nn.Sigmoid()
        )

    def forward(self,x):
        out=self.pool(x)
        out=out.view(x.size(0),1,x.size(1))
        out=self.conv(out)
        out=out.view(x.size(0),x.size(1),1,1)
        return out*x





class SpatialAttentionModule(nn.Module):
    def __init__(self):
        super(SpatialAttentionModule, self).__init__()
        self.conv2d = nn.Conv2d(in_channels=2, out_channels=1, kernel_size=7, stride=1, padding=3)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avgout = torch.mean(x, dim=1, keepdim=True)
        maxout, _ = torch.max(x, dim=1, keepdim=True)
        out = torch.cat([avgout, maxout], dim=1)
        out = self.sigmoid(self.conv2d(out))
        return out * x


class PPA(nn.Module):
    def __init__(self, in_features, filters) -> None:
        super().__init__()

        self.skip = conv_block(in_features=in_features,
                               out_features=filters,
                               kernel_size=(1, 1),
                               padding=(0, 0),
                               norm_type='bn',
                               activation=False)
        self.c1 = conv_block(in_features=in_features,
                             out_features=filters,
                             kernel_size=(3, 3),
                             padding=(1, 1),
                             norm_type='bn',
                             activation=True)
        self.c2 = conv_block(in_features=filters,
                             out_features=filters,
                             kernel_size=(3, 3),
                             padding=(1, 1),
                             norm_type='bn',
                             activation=True)
        self.c3 = conv_block(in_features=filters,
                             out_features=filters,
                             kernel_size=(3, 3),
                             padding=(1, 1),
                             norm_type='bn',
                             activation=True)
        self.sa = SpatialAttentionModule()
        self.cn = ECA(filters)
        self.lga2 = LocalGlobalAttention(filters, 2)
        self.lga4 = LocalGlobalAttention(filters, 4)

        self.bn1 = nn.BatchNorm2d(filters)
        self.drop = nn.Dropout2d(0.1)
        self.relu = nn.ReLU()

        self.gelu = nn.GELU()

    def forward(self, x):
        x_skip = self.skip(x)
        x_lga2 = self.lga2(x_skip)
        x_lga4 = self.lga4(x_skip)
        x1 = self.c1(x)
        x2 = self.c2(x1)
        x3 = self.c3(x2)
        x = x1 + x2 + x3 + x_skip + x_lga2 + x_lga4
        x = self.cn(x)
        x = self.sa(x)
        x = self.drop(x)
        x = self.bn1(x)
        x = self.relu(x)
        return x

Registering the module in tasks.py

Make the following changes in ultralytics/nn/tasks.py:

Step 1: add the import.

from ultralytics.nn.attention.PPA import PPA

Step 2:

In def parse_model(d, ch, verbose=True), add PPA to the module set:

        if m in {
                Classify, Conv, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, C2fPSA, C2PSA, DWConv, Focus, 
                BottleneckCSP, C1, C2, C2f, C3k2, RepNCSPELAN4, ELAN1, ADown, AConv, SPPELAN, C2fAttn, C3, C3TR, C3Ghost, 
                nn.ConvTranspose2d, DWConvTranspose2d, C3x, RepC3, PSA, SCDown, C2fCIB, AKConv, DynamicConv, ODConv2d, SAConv2d,
                DualConv, SPConv,RFAConv, RFCBAMConv, RFCAConv, CAConv, CBAMConv, CoordAtt, ContextAggregation, S2Attention,
                CPCA, BasicRFB, MCALayer, PPA
        }:
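
For context: for modules in this set, parse_model infers the input channel count from the previous layer and treats the first YAML argument as the (width-scaled) output channel count, then calls the module as m(c1, c2, *rest). That is why PPA's (in_features, filters) signature works with the single-argument YAML entries used below. A simplified, hypothetical illustration of that mapping (not the verbatim Ultralytics code):

# How a YAML entry like [-1, 1, PPA, [1024]] reaches the PPA constructor (sketch only).
from ultralytics.nn.attention.PPA import PPA  # the file created above

ch_prev = 1024                      # channels produced by the previous layer (ch[f])
yaml_args = [1024]                  # the args column of the YAML entry
c1, c2 = ch_prev, yaml_args[0]      # parse_model also width-scales c2 via make_divisible
module = PPA(c1, c2, *yaml_args[1:])  # with width multiple 1.0: PPA(in_features=1024, filters=1024)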


Configure yolov11-PPA.yaml

Create the config file at ultralytics/cfg/models/11/yolov11-PPA.yaml. This first variant inserts PPA into the backbone after the last C3k2 block, immediately before SPPF:

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect
 
# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs
 
# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, PPA, [1024]] # 9
  - [-1, 1, SPPF, [1024, 5]] # 10
  - [-1, 2, C2PSA, [1024]] # 11
 
# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 14
 
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
 
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
 
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 11], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
 
  - [[17, 20, 23], 1, Detect, [nc]] # Detect(P3, P4, P5)

An alternative configuration keeps the backbone unchanged and instead places a PPA block after each head C3k2, just before the corresponding detection branch (note that the Detect indices change to [17, 21, 25]):

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, PPA, [256]]  #17
  
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, PPA, [512]]  # 21

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 24 (P5/32-large)
  - [-1, 1, PPA, [1024]]  # 25

  - [[17, 21, 25], 1, Detect, [nc]] # Detect(P3, P4, P5)
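
Whichever variant you save, it is worth checking that the YAML parses and that the PPA layers appear before launching a full training run. A minimal check (path as used in this section):

from ultralytics import YOLO

# Assemble the model from the modified config and print its summary.
model = YOLO('ultralytics/cfg/models/11/yolov11-PPA.yaml')
model.info()  # layer and parameter counts for the PPA-augmented model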


Experiments

Training script

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO
 
if __name__ == '__main__':
    # Change to your own model config path
    model = YOLO('/root/ultralytics-main/ultralytics/cfg/models/11/yolov11-PPA.yaml')
    # Change to your own dataset config path
    model.train(data='/root/ultralytics-main/ultralytics/cfg/datasets/coco8.yaml',
                cache=False,
                imgsz=640,
                epochs=10,
                single_cls=False,  # whether this is single-class detection
                batch=8,
                close_mosaic=10,
                workers=0,
                optimizer='SGD',
                amp=True,
                project='runs/train',
                name='PPA',
                )
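
With the arguments above, results are written under runs/train/PPA/ (Ultralytics appends a numeric suffix if that directory already exists). A minimal follow-up sketch for validating the best checkpoint; the paths simply mirror the training script:

from ultralytics import YOLO

# Evaluate the best weights saved by the training run above.
model = YOLO('runs/train/PPA/weights/best.pt')
metrics = model.val(data='/root/ultralytics-main/ultralytics/cfg/datasets/coco8.yaml', imgsz=640)
print(metrics.box.map50)  # mAP@0.5 on the validation split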
    
 
