YOLOv6 / YOLOv7 Reparameterization: RepConv in Detail (code can be used directly)

This article introduces the RepConv model-reparameterization technique and three ways of fusing convolution layers (3x3+BN, 1x1+3x3, and 1x1+1x3+3x1), showing how these tricks simplify computation and improve model performance. It also covers an equivalent implementation that converts AvgPooling into a convolution.


1. Introduction to RepConv

RepConv is a model reparameterization technique: it merges several computational branches into a single one at inference time, improving the model's efficiency and performance. It was originally proposed for a VGG-style network (RepVGG) and has since been applied to other architectures such as ResNet and DenseNet. The core idea is to use a multi-branch convolution block during training and then, at inference time, reparameterize the branch weights into the main branch, cutting computation and memory consumption. RepConv has delivered strong results on tasks such as object detection.
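To make this concrete, below is a minimal sketch of a RepVGG-style training-time block (illustrative only; the class name is made up and this is not the exact YOLOv6/YOLOv7 module). The following sections derive the fusion rules needed to collapse such branches into one 3x3 convolution.

import torch.nn as nn

class RepVGGStyleBlock(nn.Module):
    # Training-time structure: three parallel branches whose outputs are summed.
    # After reparameterization they collapse into a single 3x3 convolution.
    def __init__(self, c):
        super().__init__()
        self.branch3x3 = nn.Sequential(nn.Conv2d(c, c, 3, 1, 1, bias=False), nn.BatchNorm2d(c))
        self.branch1x1 = nn.Sequential(nn.Conv2d(c, c, 1, 1, 0, bias=False), nn.BatchNorm2d(c))
        self.identity = nn.BatchNorm2d(c)

    def forward(self, x):
        return self.branch3x3(x) + self.branch1x1(x) + self.identity(x)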

2. Fusing 3x3 Conv + BN

A convolution layer computes:

$$Conv(x) = W(x) + b$$

The convolution layer's learnable parameters are:

import torch.nn as nn

for parameter in nn.Conv2d(3, 3, 3).named_parameters():
    print(parameter[0])

Output:
weight
bias
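As a quick sanity check of the formula (a standalone snippet, not part of the article's modules), a Conv2d's output matches computing $W(x) + b$ by hand with torch.nn.functional.conv2d:

import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 3, 3)
x = torch.rand(1, 3, 5, 5)
manual = F.conv2d(x, conv.weight, conv.bias)   # W(x) + b
print(((conv(x) - manual) ** 2).sum().item())  # 0.0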

A BN layer computes:

$$BN(x) = \gamma * \frac{x - mean}{\sqrt{var}} + \beta$$

for parameter in nn.BatchNorm2d(3).named_parameters():
    print(parameter[0])

Output:
weight
bias

Here weight and bias are the $\gamma$ and $\beta$ in the BN formula, i.e. the parameters learned during training (the $mean$ and $var$ statistics are stored as buffers named running_mean and running_var).
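The BN formula can be verified directly against an eval-mode layer; note that PyTorch also adds a small eps under the square root for numerical stability, which the fusion code below keeps (standalone check):

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3).eval()   # eval mode: BN uses its running statistics
x = torch.rand(1, 3, 5, 5)
gamma = bn.weight.reshape(1, -1, 1, 1)
beta = bn.bias.reshape(1, -1, 1, 1)
mean = bn.running_mean.reshape(1, -1, 1, 1)
var = bn.running_var.reshape(1, -1, 1, 1)
manual = gamma * (x - mean) / (var + bn.eps).sqrt() + beta
print(((bn(x) - manual) ** 2).sum().item())   # ~0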

Substituting the convolution's output into the BN formula and simplifying:

$$\begin{aligned} BN(Conv(x)) &= \gamma * \frac{Conv(x) - mean}{\sqrt{var}} + \beta \\ &= \gamma * \frac{W(x) + b - mean}{\sqrt{var}} + \beta \\ &= \frac{\gamma * W(x)}{\sqrt{var}} + \left( \frac{\gamma * (b - mean)}{\sqrt{var}} + \beta \right) \end{aligned}$$

After simplification, $\frac{\gamma * W(x)}{\sqrt{var}}$ becomes the fused convolution's weight and $\frac{\gamma * (b - mean)}{\sqrt{var}} + \beta$ its bias. (In the code below, $b = 0$ because the convolution is created with bias=False, and bn.eps is added under the square root.)

import torch
import torch.nn as nn

class repconv3x3(nn.Module):
    def __init__(self, c1, c2):
        super().__init__()
        self.conv1 = nn.Conv2d(c1, c2, 3, 1, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(c2)
        self.conv_fuse = nn.Conv2d(c1, c2, 3, 1, 1)

    def fuse_conv_bn(self, conv, bn):
        # fold the BN statistics and affine parameters into the conv
        bn_mean, bn_var, bn_gamma, bn_beta = (
            bn.running_mean,
            bn.running_var,
            bn.weight,
            bn.bias,
        )
        bn_std = (bn_var + bn.eps).sqrt()
        # weight' = gamma * W / std, bias' = beta - gamma * mean / std (b = 0 here)
        conv_weight = nn.Parameter((bn_gamma / bn_std).reshape(-1, 1, 1, 1) * conv.weight)
        conv_bias = nn.Parameter(bn_beta - bn_mean * bn_gamma / bn_std)
        return conv_weight, conv_bias

    def forward(self, x):
        return self.bn1(self.conv1(x))

    def forward_fuse(self, x):
        self.conv_fuse.weight.data, self.conv_fuse.bias.data = self.fuse_conv_bn(self.conv1, self.bn1)
        return self.conv_fuse(x)
    
inputs = torch.rand((1, 1, 3, 3))
# Key point: switch the model to eval mode so BN uses its running statistics rather than batch statistics
model = repconv3x3(1, 2).eval()

out1 = model.forward(inputs)
out2 = model.forward_fuse(inputs)

print("difference:", ((out2 - out1) ** 2).sum().item())

Output:
difference: 2.930988785010413e-14

3. 3x3 Conv + 1x1 Conv (Parallel)

This one is simple: pad the 1x1 convolution's weight out to the 3x3 shape, add it to the 3x3 convolution's weight, and add the two biases (see the small demo below).
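Concretely, F.pad with [1, 1, 1, 1] pads one zero on the left, right, top, and bottom of the last two dimensions, placing the 1x1 value at the center of a 3x3 kernel (standalone demo):

import torch
import torch.nn.functional as F

w1x1 = torch.tensor([[[[2.0]]]])        # shape (1, 1, 1, 1)
print(F.pad(w1x1, [1, 1, 1, 1])[0, 0])
# tensor([[0., 0., 0.],
#         [0., 2., 0.],
#         [0., 0., 0.]])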

import torch
import torch.nn as nn

class repconv3x3(nn.Module):
    def __init__(self, c1, c2):
        super().__init__()
        self.conv1 = nn.Conv2d(c1, c2, 3, 1, 1)
        self.conv2 = nn.Conv2d(c1, c2, 1, 1, 0)
        self.conv_fuse = nn.Conv2d(c1, c2, 3, 1, 1)

    def fuse_1x1conv_3x3conv(self, conv1, conv2):
        # center the 1x1 kernel inside a zero 3x3 kernel, then add weights and biases
        conv1x1_weight = nn.functional.pad(conv2.weight, [1, 1, 1, 1])
        conv_weight = conv1x1_weight + conv1.weight
        conv_bias = conv2.bias + conv1.bias
        return conv_weight, conv_bias

    def forward(self, x):
        return self.conv1(x) + self.conv2(x)

    def forward_fuse(self, x):
        self.conv_fuse.weight.data, self.conv_fuse.bias.data = self.fuse_1x1conv_3x3conv(self.conv1, self.conv2)
        return self.conv_fuse(x)
    
inputs = torch.rand((1, 1, 3, 3))
# Key point: switch the model to eval mode
model = repconv3x3(1, 2).eval()

out1 = model.forward(inputs)
out2 = model.forward_fuse(inputs)

print("difference:", ((out2 - out1) ** 2).sum().item())

Output:
difference: 2.4980018054066022e-15

4. 1x1 Conv and 3x3 Conv in Series

$$output = Conv_{3 \times 3}(Conv_{1 \times 1}(x))$$

First swap dimensions 0 and 1 of the 1x1 kernel (its output- and input-channel axes), then convolve the 3x3 kernel weights with this transposed 1x1 kernel; the result is the weight of a single equivalent 3x3 convolution.
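Why this works: both layers are linear, so their composition is again a convolution whose kernel mixes the 3x3 kernel's input channels through the 1x1 weights. With $W_{1 \times 1} \in \mathbb{R}^{c_2 \times c_1 \times 1 \times 1}$ and $W_{3 \times 3} \in \mathbb{R}^{c_1 \times c_2 \times 3 \times 3}$:

$$W_{fuse}[o, i, :, :] = \sum_{m=1}^{c_2} W_{3 \times 3}[o, m, :, :] * W_{1 \times 1}[m, i, 0, 0]$$

Treating $W_{3 \times 3}$ as the input and the permuted $W_{1 \times 1}$ as the kernel, this sum is exactly what nn.functional.conv2d computes in fuse_1x1conv_3x3conv below. Also note that both convolutions are created with bias=False; a bias on the 1x1 branch would leak into the 3x3 convolution's zero padding and break exact equivalence at the borders.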

import torch
import torch.nn as nn

class repconv3x3(nn.Module):
    def __init__(self, c1, c2):
        super().__init__()
        self.conv3x3 = nn.Conv2d(c2, c1, 3, 1, 1, bias=False)
        self.conv1x1 = nn.Conv2d(c1, c2, 1, 1, 0, bias=False)
        self.conv_fuse = nn.Conv2d(c1, c1, 3, 1, 1, bias=False)

    def fuse_1x1conv_3x3conv(self, conv3x3, conv1x1):
        # convolve the 3x3 weight (as input) with the channel-transposed 1x1 weight (as kernel)
        weight = nn.functional.conv2d(conv3x3.weight.data, conv1x1.weight.data.permute(1, 0, 2, 3))
        return weight

    def forward(self, x):
        return self.conv3x3(self.conv1x1(x))

    def forward_fuse(self, x):
        self.conv_fuse.weight.data = self.fuse_1x1conv_3x3conv(self.conv3x3, self.conv1x1)
        return self.conv_fuse(x)

inputs = torch.rand((1, 1, 3, 3))
# Key point: switch the model to eval mode
model = repconv3x3(1, 2).eval()

out1 = model.forward(inputs)
out2 = model.forward_fuse(inputs)

print("difference:", ((out2 - out1) ** 2).sum().item())

Output:
difference: 4.90059381963448e-16

5. 1x1 Conv, 1x3 Conv, and 3x1 Conv in Parallel

Pad every branch's weight out to the 3x3 shape, then add the weights together (and likewise the biases). The padding directions are illustrated right below.
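The only subtlety is the direction of each pad (standalone demo): F.pad takes (left, right, top, bottom) for the last two dimensions, so the 1x3 kernel is padded vertically and the 3x1 kernel horizontally:

import torch
import torch.nn.functional as F

w1x3 = torch.ones(2, 2, 1, 3)   # kernel of a 1x3 conv
w3x1 = torch.ones(2, 2, 3, 1)   # kernel of a 3x1 conv
print(F.pad(w1x3, (0, 0, 1, 1)).shape)   # torch.Size([2, 2, 3, 3]), zero rows above/below
print(F.pad(w3x1, (1, 1, 0, 0)).shape)   # torch.Size([2, 2, 3, 3]), zero columns left/right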

import torch
import torch.nn as nn

class repconv3x3(nn.Module):
    def __init__(self, c1, c2):
        super().__init__()
        # all three branches map c1 -> c2 so they can be applied to x in parallel
        self.conv1x1 = nn.Conv2d(c1, c2, 1, 1, 0)
        self.conv1x3 = nn.Conv2d(c1, c2, (1, 3), 1, (0, 1))
        self.conv3x1 = nn.Conv2d(c1, c2, (3, 1), 1, (1, 0))
        self.conv_fuse = nn.Conv2d(c1, c2, 3, 1, 1)

    def fuse_1x1conv_1x3conv_3x1conv(self, conv1, conv2, conv3):
        # pad each kernel to 3x3 (note the pad directions above), then sum
        weight = (nn.functional.pad(conv1.weight.data, (1, 1, 1, 1))
                  + nn.functional.pad(conv2.weight.data, (0, 0, 1, 1))
                  + nn.functional.pad(conv3.weight.data, (1, 1, 0, 0)))
        bias = conv1.bias.data + conv2.bias.data + conv3.bias.data
        return weight, bias

    def forward(self, x):
        return self.conv3x1(x) + self.conv1x3(x) + self.conv1x1(x)

    def forward_fuse(self, x):
        self.conv_fuse.weight.data, self.conv_fuse.bias.data = self.fuse_1x1conv_1x3conv_3x1conv(self.conv1x1, self.conv1x3, self.conv3x1)
        return self.conv_fuse(x)

inputs = torch.rand((1, 2, 3, 3))
# Key point: switch the model to eval mode
model = repconv3x3(2, 2).eval()

out1 = model.forward(inputs)
out2 = model.forward_fuse(inputs)

print("difference:", ((out2 - out1) ** 2).sum().item())

Output:
difference: 7.327471962526033e-15

6. Converting AvgPooling to Conv

Pooling operates on each input channel independently (it pools one feature map at a time), whereas a convolution layer sums its results across all input channels. An average pooling layer is therefore equivalent to a convolution with fixed weights: if the pooling kernel contains $K$ elements (e.g. $K = 9$ for a 3x3 kernel), set the convolution weight to $1/K$. And because the convolution sums over input channels, that fixed weight must be placed only on the slice belonging to the current channel, with the weights for all other channels set to 0; a quick standalone check follows.
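Here is that rule checked functionally on a hypothetical 2-channel input (this relies on PyTorch's default count_include_pad=True, which also matches the nn.AvgPool2d used in the module below):

import torch
import torch.nn.functional as F

# build the fixed kernel by hand: 1/9 on each channel's own slice, 0 elsewhere
weight = torch.zeros(2, 2, 3, 3)
for i in range(2):
    weight[i, i] = 1.0 / 9.0
x = torch.rand(1, 2, 5, 5)
out_pool = F.avg_pool2d(x, 3, 1, 1)
out_conv = F.conv2d(x, weight, padding=1)
print(((out_pool - out_conv) ** 2).sum().item())   # ~0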

import torch
import torch.nn as nn

class repconv3x3(nn.Module):
    def __init__(self, c1):
        super().__init__()
        self.avg = nn.AvgPool2d(3, 1, 1)
        self.conv_fuse = nn.Conv2d(c1, c1, 3, 1, 1, bias=False)

    def fuse_avg(self):
        # zero everything, then put 1/K on each channel's own slice
        self.conv_fuse.weight.data[:] = 0
        for i in range(self.conv_fuse.in_channels):
            self.conv_fuse.weight.data[i, i, :, :] = 1 / torch.prod(torch.tensor(self.conv_fuse.kernel_size))

    def forward(self, x):
        return self.avg(x)

    def forward_fuse(self, x):
        self.fuse_avg()
        return self.conv_fuse(x)


inputs = torch.rand((1, 2, 3, 3))
# Key point: switch the model to eval mode
model = repconv3x3(2).eval()

out1 = model.forward(inputs)
out2 = model.forward_fuse(inputs)

print("difference:", ((out2 - out1) ** 2).sum().item())

Output:
difference: 1.0658141036401503e-14