DenseNet vs. a Conventional Convolutional Architecture

       After ResNet was proposed, Kaggle saw a wave of hybrid architectures combining ResNet with other network types such as U-Net: the convolutional layers in U-Net are replaced with residual blocks to increase network depth and mitigate vanishing gradients. To test whether the dense structure can replace ordinary convolutional blocks equally well, this post implements both a conventional convolutional network and a DenseNet for CIFAR-10 classification in PyTorch and compares them.
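       For reference, here is a minimal sketch of the kind of residual block that gets swapped in for a plain convolutional layer in such hybrids (the fixed channel count is an illustrative assumption, not taken from any particular Kaggle solution):

import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        # Two 3x3 convolutions that preserve spatial size and channel count
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # the skip connection eases gradient flow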

       The conventional convolutional network is designed as follows:

import torch.nn as nn
import torch.nn.functional as F

class _ConvLayer(nn.Sequential):
    def __init__(self, num_input_features, num_output_features, drop_rate):
        super(_ConvLayer, self).__init__()
        
        # 3x3 convolution followed by ReLU and BatchNorm
        self.add_module('conv', nn.Conv2d(num_input_features, num_output_features,
                        kernel_size=3, stride=1, padding=1, bias=False))
        self.add_module('relu', nn.ReLU(inplace=True))
        self.add_module('norm', nn.BatchNorm2d(num_output_features))
        
        self.drop_rate = drop_rate

    def forward(self, x):
        x = super(_ConvLayer, self).forward(x)
        if self.drop_rate > 0:
            x = F.dropout(x, p=self.drop_rate, training=self.training)
        return x

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        
        self.features = nn.Sequential()
        self.features.add_module('convlayer1', _ConvLayer(3, 64, 0.2))
        self.features.add_module('convlayer2', _ConvLayer(64, 64, 0.2))
        self.features.add_module('maxpool', nn.MaxPool2d(2, 2))
        self.features.add_module('convlayer3', _ConvLayer(64, 128, 0.2))
        self.features.add_module('convlayer4', _ConvLayer(128, 128, 0.2))
        self.features.add_module('avgpool', nn.AvgPool2d(2, 2))
        self.features.add_module('convlayer5', _ConvLayer(128, 256, 0.2))
        self.features.add_module('convlayer6', _ConvLayer(256, 256, 0.2))
        
        self.classifier = nn.Linear(256, 10)
        
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.constant_(m.bias, 0)
                
    def forward(self, x):
        features = self.features(x)
        # Global average pooling: two 2x2 poolings have reduced the 32x32 input to 8x8
        out = F.avg_pool2d(features, kernel_size=8, stride=1).view(features.size(0), -1)
        out = self.classifier(out)
        return out

       The network has 1,151,562 parameters. Trained for 50 epochs with Adam (lr = 0.001) at a batch size of 64, its test-set results are:

Accuracy of the network on the 10000 test images: 86 %
Accuracy of plane : 93 %
Accuracy of   car : 96 %
Accuracy of  bird : 71 %
Accuracy of   cat : 67 %
Accuracy of  deer : 89 %
Accuracy of   dog : 81 %
Accuracy of  frog : 90 %
Accuracy of horse : 84 %
Accuracy of  ship : 91 %
Accuracy of truck : 93 %
Time cost: 12.649507522583008 s
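
       The parameter counts quoted in this post can be checked with a small helper like the following (my own snippet, not part of the original code):

def count_parameters(model):
    # Sum the element counts of all trainable tensors
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(count_parameters(CNN()))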

       Next comes the DenseNet used for comparison, structured as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F
from collections import OrderedDict

class _DenseLayer(nn.Sequential):
    def __init__(self, num_input_features, growth_rate, bn_size, drop_rate):
        super(_DenseLayer, self).__init__()
        # Bottleneck: BN -> ReLU -> 1x1 conv, then BN -> ReLU -> 3x3 conv producing growth_rate channels
        self.add_module('norm1', nn.BatchNorm2d(num_input_features))
        self.add_module('relu1', nn.ReLU(inplace=True))
        self.add_module('conv1', nn.Conv2d(num_input_features, bn_size *
                        growth_rate, kernel_size=1, stride=1, bias=False))
        self.add_module('norm2', nn.BatchNorm2d(bn_size * growth_rate))
        self.add_module('relu2', nn.ReLU(inplace=True))
        self.add_module('conv2', nn.Conv2d(bn_size * growth_rate, growth_rate,
                        kernel_size=3, stride=1, padding=1, bias=False))
        self.drop_rate = drop_rate

    def forward(self, x):
        new_features = super(_DenseLayer, self).forward(x)
        if self.drop_rate > 0:
            new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)
        # Dense connectivity: concatenate input and new features along the channel dimension
        return torch.cat([x, new_features], 1)


class _DenseBlock(nn.Sequential):
    def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate):
        super(_DenseBlock, self).__init__()
        for i in range(num_layers):
            layer = _DenseLayer(num_input_features + i * growth_rate, growth_rate, bn_size, drop_rate)
            self.add_module('denselayer%d' % (i + 1), layer)


class _Transition(nn.Sequential):
    def __init__(self, num_input_features, num_output_features):
        super(_Transition, self).__init__()
        self.add_module('norm', nn.BatchNorm2d(num_input_features))
        self.add_module('relu', nn.ReLU(inplace=True))
        self.add_module('conv', nn.Conv2d(num_input_features, num_output_features,
                                          kernel_size=1, stride=1, bias=False))
        self.add_module('pool', nn.AvgPool2d(kernel_size=2, stride=2))


class DenseNet(nn.Module):
    def __init__(self, growth_rate=12, block_config=(12, 12, 12),
                 num_init_features=24, bn_size=4, drop_rate=0.2, num_classes=10):

        super(DenseNet, self).__init__()

        # First convolution
        self.features = nn.Sequential(OrderedDict([
            ('conv0', nn.Conv2d(3, num_init_features, kernel_size=3, stride=1, padding=1, bias=False)),
            ('norm0', nn.BatchNorm2d(num_init_features)),
            ('relu0', nn.ReLU(inplace=True)),
        ]))

        # Each denseblock
        num_features = num_init_features
        for i, num_layers in enumerate(block_config):
            block = _DenseBlock(num_layers=num_layers, num_input_features=num_features,
                                bn_size=bn_size, growth_rate=growth_rate, drop_rate=drop_rate)
            self.features.add_module('denseblock%d' % (i + 1), block)
            num_features = num_features + num_layers * growth_rate
            if i != len(block_config) - 1:
                trans = _Transition(num_input_features=num_features, num_output_features=num_features // 2)
                self.features.add_module('transition%d' % (i + 1), trans)
                num_features = num_features // 2

        # Final batch norm
        self.features.add_module('norm5', nn.BatchNorm2d(num_features))

        # Linear layer
        self.classifier = nn.Linear(num_features, num_classes)

        # Official init from torch repo.
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        features = self.features(x)
        out = F.relu(features, inplace=True)
        # Global average pooling: the two transition poolings have reduced the 32x32 input to 8x8
        out = F.avg_pool2d(out, kernel_size=8, stride=1).view(features.size(0), -1)
        out = self.classifier(out)
        return out
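
       With the default configuration (growth_rate=12, block_config=(12, 12, 12), num_init_features=24), the channel count grows as traced below; this is where the input size of the final classifier comes from:

num_features = 24                      # channels after conv0
for i, num_layers in enumerate((12, 12, 12)):
    num_features += num_layers * 12    # each dense layer adds growth_rate channels
    if i != 2:
        num_features //= 2             # each transition halves the channel count
print(num_features)                    # 258, so the classifier is Linear(258, 10)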

       This network has 504,052 parameters. Trained under the same conditions (batch size 64, Adam with lr = 0.001, 50 epochs), its test-set results are:

Accuracy of the network on the 10000 test images: 88 %
Accuracy of plane : 93 %
Accuracy of   car : 100 %
Accuracy of  bird : 73 %
Accuracy of   cat : 74 %
Accuracy of  deer : 83 %
Accuracy of   dog : 93 %
Accuracy of  frog : 90 %
Accuracy of horse : 93 %
Accuracy of  ship : 93 %
Accuracy of truck : 96 %
Time cost: 74.94790506362915 s

       As the results show, even though the overall connectivity pattern of the two networks is roughly the same, and the DenseNet has less than half as many parameters as the conventional network, its classification accuracy is actually better. This is most likely because the dense structure greatly increases the network's effective depth and improves feature reuse. However, due to the large number of concatenation operations, and because a stack of many small convolutions costs far more computation than a single large convolutional layer, the DenseNet's inference time on the test set far exceeds that of the conventional network (roughly 6x here); this remains a major difficulty in applying the dense structure in practice. A sketch of how such timings can be measured follows.
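
       For reference, the reported time costs can be obtained with a loop like the following (a sketch; the original measurement code is not shown, and testloader is assumed to be a standard CIFAR-10 DataLoader):

import time
import torch

def time_inference(model, testloader, device):
    model.to(device).eval()
    start = time.time()
    with torch.no_grad():              # gradients are not needed for evaluation
        for images, _ in testloader:
            model(images.to(device))
    return time.time() - start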
