Building PeleeNet in PyTorch

1. Introduction

      PeleeNet is an important member of the lightweight-network family and has seen wide use in industrial products. This post walks through building it. Paper: https://arxiv.org/abs/1804.06882

2. PeleeNet

      The paper says it builds on DenseNet. DenseNet's core idea is dense connectivity: within a DenseBlock, each layer's output is concatenated into the input of every subsequent layer. PeleeNet keeps exactly this connectivity pattern, while redesigning the dense layer itself.

      Now let's start building.

      (1) First, build the usual trio (convolution, batch normalization, activation) as a single module. The PeleeNet paper also notes that it places the activation after the normalization, i.e. conv-BN-ReLU (which is what everyone does anyway).

import torch
from torch import nn
import math

class Conv_Norm_Acti(nn.Module):
    """Basic conv -> batch norm -> ReLU block (post-activation ordering)."""
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(Conv_Norm_Acti, self).__init__()
        self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                              kernel_size=kernel_size, stride=stride, padding=padding)
        self.norm = nn.BatchNorm2d(num_features=out_channels)
        self.acti = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.norm(x)
        x = self.acti(x)
        return x
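
      A quick sanity check (just a minimal sketch; the sizes here are made up): a 3x3 convolution with stride 2 and padding 1 should halve the spatial resolution.

# Sanity check for Conv_Norm_Acti (example sizes are arbitrary)
block = Conv_Norm_Acti(in_channels=3, out_channels=16, kernel_size=3, stride=2, padding=1)
dummy = torch.randn(1, 3, 224, 224)
print(block(dummy).size())  # expected: torch.Size([1, 16, 112, 112])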

     (2) Next, build the first module, the Stem Block. The paper says this structure preserves more of the input information at low cost; it serves as the network's front end, downsampling the input by a factor of 4.

class Stem_Block(nn.Module):
    """
    Stem (root) module: downsamples the input by a factor of 4
    while cheaply preserving input information.
    """
    def __init__(self, inp_channel=3, out_channels=32):
        super(Stem_Block, self).__init__()
        half_out_channels = int(out_channels / 2)
        self.conv_3x3_1 = Conv_Norm_Acti(in_channels=inp_channel, out_channels=out_channels,
                                         kernel_size=3, stride=2, padding=1)
        self.conv_3x3_2 = Conv_Norm_Acti(in_channels=half_out_channels, out_channels=out_channels,
                                         kernel_size=3, stride=2, padding=1)
        self.conv_1x1_1 = Conv_Norm_Acti(in_channels=out_channels, out_channels=half_out_channels,
                                         kernel_size=1)
        self.conv_1x1_2 = Conv_Norm_Acti(in_channels=out_channels * 2, out_channels=out_channels,
                                         kernel_size=1)
        self.max_pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        x = self.conv_3x3_1(x)             # stride-2 3x3 conv
        x1 = self.conv_1x1_1(x)            # branch 1: 1x1 bottleneck ...
        x1 = self.conv_3x3_2(x1)           # ... then a stride-2 3x3 conv
        x2 = self.max_pool(x)              # branch 2: stride-2 max pooling
        x_cat = torch.cat((x1, x2), dim=1)
        x_out = self.conv_1x1_2(x_cat)     # fuse both branches back to out_channels
        return x_out
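
      A quick check of the stem (a minimal sketch): with the default out_channels=32 and a 224x224 input, the output should have 32 channels at 1/4 resolution.

# The stem downsamples by 4: a stride-2 conv followed by a stride-2 branch
stem = Stem_Block(inp_channel=3, out_channels=32)
print(stem(torch.randn(1, 3, 224, 224)).size())  # expected: torch.Size([1, 32, 56, 56])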

      (3) Next, build the key structure, the Two-Way Dense Layer. It borrows GoogLeNet's multi-branch idea to extract features at different receptive fields and capture richer semantics. The paper specifically notes that, to cut computation, the bottleneck's output channel count is adjusted dynamically according to the input channel count; the so-called bottleneck is just a 1x1 convolution that reduces the channel dimension. The reason this matters is that the early layers are expensive: little downsampling has happened yet, the feature maps are still large, and piling on channels there makes the cost shoot up. The dynamic adjustment is easiest to read straight from the code. The bottleneck_width hyperparameter has a noticeable impact. The paper also uses the term growth rate; note that in the code below growthrate is a multiplier on base_channel_num = 32 rather than a raw channel count, so each dense layer adds 32 * growthrate channels.

class Two_way_dense_layer(nn.Module):
    """
    The feature-extraction workhorse. A shared 1x1 bottleneck feeds two branches:
    branch 1 applies one 3x3 conv; branch 2 applies two stacked 3x3 convs
    for a larger receptive field.
    """
    base_channel_num = 32
    def __init__(self, inp_channel, bottleneck_wid, growthrate):
        super(Two_way_dense_layer, self).__init__()
        # Each branch contributes half of the layer's total growth channels.
        growth_channel = self.base_channel_num * growthrate
        growth_channel = int(growth_channel / 2)
        bottleneck_out = int(growth_channel * bottleneck_wid / 4)
        # Dynamic adjustment: cap the bottleneck width relative to the input.
        if bottleneck_out > inp_channel / 2:
            bottleneck_out = int(bottleneck_out / 8) * 4
            print("bottleneck_out is too large; adjusted to:", bottleneck_out)

        # Shared 1x1 bottleneck feeding both branches
        self.conv_1x1 = Conv_Norm_Acti(in_channels=inp_channel, out_channels=bottleneck_out,
                                       kernel_size=1)
        # Branch 1: a single 3x3 conv
        self.conv_3x3_1 = Conv_Norm_Acti(in_channels=bottleneck_out, out_channels=growth_channel,
                                         kernel_size=3, padding=1)
        # Branch 2: its own pair of stacked 3x3 convs
        self.conv_3x3_2 = Conv_Norm_Acti(in_channels=bottleneck_out, out_channels=growth_channel,
                                         kernel_size=3, padding=1)
        self.conv_3x3_3 = Conv_Norm_Acti(in_channels=growth_channel, out_channels=growth_channel,
                                         kernel_size=3, padding=1)

    def forward(self, x):
        x_branch = self.conv_1x1(x)
        x_branch_1 = self.conv_3x3_1(x_branch)
        x_branch_2 = self.conv_3x3_2(x_branch)
        x_branch_2 = self.conv_3x3_3(x_branch_2)
        # Dense connectivity: concatenate the input with both branch outputs
        out = torch.cat((x, x_branch_1, x_branch_2), dim=1)
        return out
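
      Each layer adds base_channel_num * growthrate channels in total, half from each branch. A minimal check (the sizes here are illustrative):

# With growthrate=1 the layer adds 32 channels (16 per branch): 64 -> 96
layer = Two_way_dense_layer(inp_channel=64, bottleneck_wid=2, growthrate=1)
print(layer(torch.randn(1, 64, 56, 56)).size())  # expected: torch.Size([1, 96, 56, 56])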

 

        (4) The Dense Block simply repeats the Two-Way Dense Layer: stack them like building blocks.

class Dense_Block(nn.Module):
    def __init__(self, layer_num, inp_channel, bottleneck_wid, growthrate):
        super(Dense_Block, self).__init__()
        self.layers = nn.Sequential()
        base_channel_num = Two_way_dense_layer.base_channel_num
        for i in range(layer_num):
            # Each layer's input grows by growthrate * base_channel_num channels.
            layer = Two_way_dense_layer(inp_channel + i * growthrate * base_channel_num,
                                        bottleneck_wid, growthrate)
            self.layers.add_module("denselayer%d" % (i + 1), layer)

    def forward(self, x):
        x = self.layers(x)
        return x
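
      After layer_num layers the channel count is inp_channel + layer_num * base_channel_num * growthrate; for example, the first stage maps 32 channels to 32 + 3*32 = 128. A minimal check:

# A 3-layer block on 32 input channels should yield 128 output channels
block = Dense_Block(layer_num=3, inp_channel=32, bottleneck_wid=1, growthrate=1)
print(block(torch.randn(1, 32, 56, 56)).size())  # expected: torch.Size([1, 128, 56, 56])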

       (5) The transition layer. Note that in PeleeNet its 1x1 conv keeps the channel count unchanged (no DenseNet-style compression), so the reduction here is spatial: a 2x2 average pooling halves the resolution.

class Transition_layer(nn.Module):
    def __init__(self, inp_channel, use_pool=True):
        super(Transition_layer, self).__init__()
        # 1x1 conv keeps the channel count unchanged (no DenseNet-style compression)
        self.conv_1x1 = Conv_Norm_Acti(in_channels=inp_channel, out_channels=inp_channel,
                                       kernel_size=1)
        self.avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)
        self.use_pool = use_pool

    def forward(self, x):
        x = self.conv_1x1(x)
        if self.use_pool:
            x = self.avg_pool(x)  # halve the spatial resolution
        return x
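
      A minimal check: channels stay the same, and the spatial resolution is halved when use_pool=True.

# Transition layer: channel count unchanged, spatial size halved
trans = Transition_layer(inp_channel=128)
print(trans(torch.randn(1, 128, 56, 56)).size())  # expected: torch.Size([1, 128, 28, 28])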

       (6) Now assemble the whole network; just follow the architecture table in the paper. I did not add the classification head (out of laziness): if you need one, append a global adaptive average pooling and a fully connected layer, as sketched right after the code below. Many people will want this backbone for detection or segmentation, which needs multi-stage outputs; len(self.features) is 9, so just slice self.features and return several intermediate feature maps (a sketch follows the test section).

class Peleenet(nn.Module):
    def __init__(self, growthrate=1, layer_num_cfg=[3, 4, 8, 6], bottleneck_width=[1, 2, 4, 4],
                 inp_channels=[32, 128, 256, 512]):
        super(Peleenet, self).__init__()
        base_channel_num = Two_way_dense_layer.base_channel_num
        self.features = nn.Sequential()
        self.stem_block = Stem_Block()  # stride = 4
        self.features.add_module("Stage_0", self.stem_block)
        assert len(layer_num_cfg) == 4 and len(bottleneck_width) == 4, \
            "layer_num_cfg and bottleneck_width must both have 4 elements!"
        for i in range(4):
            stage = Dense_Block(layer_num=layer_num_cfg[i], inp_channel=inp_channels[i],
                                bottleneck_wid=bottleneck_width[i], growthrate=growthrate)
            # The last stage's transition layer skips the average pooling.
            out_channel = inp_channels[i] + base_channel_num * growthrate * layer_num_cfg[i]
            translayer = Transition_layer(inp_channel=out_channel, use_pool=(i < 3))
            self.features.add_module("Stage_%d" % (i + 1), stage)
            self.features.add_module("Translayer_%d" % (i + 1), translayer)
        self._initialize_weights()

    def _initialize_weights(self):
        # Kaiming-style init for convs; BN weights to 1, biases to 0.
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def forward(self, x):
        x = self.features(x)
        return x
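
      For completeness, here is a minimal sketch of a classification head (the class name PeleenetClassifier and num_classes are my own choices, not from the paper; 704 is the backbone's output channel count under the default config, as the test below confirms):

# A hypothetical classification head on top of the Peleenet backbone
class PeleenetClassifier(nn.Module):
    def __init__(self, num_classes=1000, growthrate=1):
        super(PeleenetClassifier, self).__init__()
        self.backbone = Peleenet(growthrate=growthrate)
        self.pool = nn.AdaptiveAvgPool2d(1)    # global average pooling
        self.fc = nn.Linear(704, num_classes)  # 704 output channels with the default config

    def forward(self, x):
        x = self.backbone(x)
        x = self.pool(x).flatten(1)
        return self.fc(x)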

 

     (7) Test

         All done.

if __name__ == "__main__":
    inp = torch.randn((2, 3, 224, 224))
    model = Peleenet(growthrate=1)
    result = model(inp)
    print(result.size())

# Output
"""
torch.Size([2, 704, 7, 7])
"""
 

3. Summary

     Writing the code while following the paper gives a much deeper understanding. If you spot mistakes, please point them out in the comments (kindly, no scolding). I hope this post helps.

 

 
