Is there an activation function after the first layer of a depthwise separable convolution?

https://www.geek-share.com/detail/2806401663.html
https://blog.csdn.net/shuzfan/article/details/77129716
https://blog.csdn.net/C_chuxin/article/details/88581411
https://zhuanlan.zhihu.com/p/166736637
https://blog.csdn.net/shawroad88/article/details/95222082
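The short answer depends on the architecture. In the original MobileNet design, BatchNorm and ReLU follow *both* the depthwise (first) layer and the pointwise (second) layer, so there is an activation after the first layer; Xception, by contrast, reports better results with no activation between the two layers. A minimal sketch of the MobileNet-style ordering (channel counts here are illustrative):

```python
import torch
import torch.nn as nn

# MobileNet-style depthwise separable block: BN + ReLU follow BOTH the
# depthwise (first) layer and the pointwise (second) layer.
# Xception instead omits the activation between the two convolutions.
block = nn.Sequential(
    nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32, bias=False),  # depthwise
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),  # <-- the activation after the first (depthwise) layer
    nn.Conv2d(32, 64, kernel_size=1, bias=False),  # pointwise 1x1
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 32, 16, 16)
y = block(x)
print(y.shape)  # torch.Size([1, 64, 16, 16])
```

Dropping the middle `nn.ReLU` turns this into the Xception-style variant; everything else stays the same.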

Below is example code implementing a PSA module built from depthwise separable convolutions, using the PyTorch framework.

```python
import torch
import torch.nn as nn


class PSAConv(nn.Module):
    """Depthwise separable convolution: a depthwise conv followed by a
    pointwise (1x1) conv. Following the MobileNet design, BN and ReLU are
    applied after both layers, i.e. there is an activation after the
    first (depthwise) layer."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(PSAConv, self).__init__()
        # Depthwise convolution: one filter per input channel
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=kernel_size,
                                   stride=stride, padding=padding,
                                   groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        # Pointwise convolution: 1x1 conv that mixes channels
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # ReLU activation
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Activation after the first (depthwise) layer
        out = self.relu(self.bn1(self.depthwise(x)))
        # Activation after the second (pointwise) layer
        out = self.relu(self.bn2(self.pointwise(out)))
        return out


class PSAModule(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(PSAModule, self).__init__()
        # First PSA stage takes in_channels; the later stages are fed the
        # out_channels-channel output of the previous stage
        self.psa1 = PSAConv(in_channels, out_channels, kernel_size, stride, padding)
        self.psa2 = PSAConv(out_channels, out_channels, kernel_size, stride, padding)
        self.psa3 = PSAConv(out_channels, out_channels, kernel_size, stride, padding)
        self.psa4 = PSAConv(out_channels, out_channels, kernel_size, stride, padding)

    def forward(self, x):
        out1 = self.psa1(x)
        out2 = self.psa2(out1)
        out3 = self.psa3(out2)
        out4 = self.psa4(out3)
        # Concatenate the four feature maps along the channel dimension,
        # producing 4 * out_channels output channels
        return torch.cat((out1, out2, out3, out4), dim=1)
```

In the code above, `PSAConv` is a depthwise separable convolution module that is reused inside `PSAModule`. `PSAModule` is a complete PSA module containing four `PSAConv` stages whose feature maps are concatenated; note that its output therefore has `4 * out_channels` channels.

To use the module, place it inside your model. For example:

```python
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        # Initial convolution and pooling layers
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(16)
        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # PSA module: four 16-channel branches concatenate to 64 channels
        self.psa = PSAModule(16, 16, kernel_size=3, stride=1, padding=1)
        # Remaining convolution and pooling layers
        self.conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(128)
        self.conv3 = nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn3 = nn.BatchNorm2d(256)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(256, 10)

    def forward(self, x):
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.pool(out)
        out = self.psa(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        out = self.pool(out)
        out = self.conv3(out)
        out = self.bn3(out)
        out = self.relu(out)
        out = self.avgpool(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out
```

In the model above, the `PSAModule` is placed after the first convolution and pooling stage. This model is only an example; you can adapt it to your own needs.
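The channel arithmetic behind the `torch.cat` call can be checked in isolation: concatenating four branches of 16 channels each along `dim=1` yields the 64 channels that `conv2` expects. A minimal sketch (shapes are illustrative):

```python
import torch

# Four branch outputs of 16 channels each, as produced by PSAModule(16, 16, ...)
branches = [torch.randn(2, 16, 8, 8) for _ in range(4)]

# Concatenating along dim=1 (the channel dimension) yields 4 * 16 = 64
# channels, matching the nn.Conv2d(64, 128, ...) that follows in the model
merged = torch.cat(branches, dim=1)
print(merged.shape)  # torch.Size([2, 64, 8, 8])
```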