Preface
Note: the main change from LeNet to AlexNet was greater network depth and parameter count. From AlexNet to VGG, the contribution was a systematic design recipe: principled kernel sizes, with the channel count increasing regularly as the image size shrinks. Today we look at a different idea: NiN (Network in Network). It was developed by Min Lin's team at the National University of Singapore; he was also an early developer of MXNet. NiN itself did not improve recognition accuracy by much, but its biggest contribution was eliminating a large number of parameters, which provided important support for the later ideas behind GoogLeNet.
I. What is NiN?
NiN builds a deep network by chaining together small sub-networks, each consisting of a convolutional layer followed by 1×1 convolutions that act as per-pixel "fully connected" layers.
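A quick sanity check (not from the original post) makes the "per-pixel fully connected" reading concrete: a 1×1 convolution mixes channels at each spatial position independently, which is exactly a dense layer applied at every pixel. The weight-sharing below is just for the demonstration:

```python
import torch
from torch import nn

# A 1x1 convolution and a Linear layer with the same weights
# compute the same thing: channel mixing at each pixel.
conv = nn.Conv2d(3, 8, kernel_size=1)
fc = nn.Linear(3, 8)
fc.weight.data = conv.weight.data.view(8, 3)  # copy the conv weights
fc.bias.data = conv.bias.data

x = torch.randn(1, 3, 5, 5)
y_conv = conv(x)  # shape (1, 8, 5, 5)
# Move channels last, apply the dense layer per pixel, move channels back.
y_fc = fc(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
print(torch.allclose(y_conv, y_fc, atol=1e-5))  # True
```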
II. Code examples
1. The NiN block
The code is as follows (example):
from d2l import torch as d2l
import torch
from torch import nn
def nin_block(in_channels, out_channels, kernel_size, strides, padding):
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size, strides, padding),
        nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU()
    )
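As a quick check of the block (restated here so the snippet runs on its own), the first NiN block of the network in the next section maps a 1×224×224 input to 96 channels of 54×54, following the usual convolution arithmetic:

```python
import torch
from torch import nn

# Restating the block so this snippet is self-contained.
def nin_block(in_channels, out_channels, kernel_size, strides, padding):
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size, strides, padding),
        nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU())

# 11x11 conv, stride 4, no padding: (224 - 11) // 4 + 1 = 54
blk = nin_block(1, 96, kernel_size=11, strides=4, padding=0)
Y = blk(torch.randn(1, 1, 224, 224))
print(Y.shape)  # torch.Size([1, 96, 54, 54])
```

The two 1×1 convolutions leave the spatial size untouched; only the leading convolution determines it.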
2. The NiN network
The code is as follows (example):
net = nn.Sequential(
    nin_block(1, 96, kernel_size=11, strides=4, padding=0),
    nn.MaxPool2d(3, stride=2),
    nin_block(96, 256, kernel_size=5, strides=1, padding=2),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nin_block(256, 384, kernel_size=3, strides=1, padding=1),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Dropout(0.5),
    nin_block(384, 10, kernel_size=3, strides=1, padding=1),
    nn.AdaptiveAvgPool2d((1, 1)),
    nn.Flatten()
)
This still follows the AlexNet design, with modifications.
3. Training the model
# Inspect how the tensor shape changes layer by layer
X = torch.randn((1, 1, 224, 224))
for layer in net:
    X = layer(X)
    print(layer.__class__.__name__, 'X shape is \t', X.shape)
# Sequential X shape is         torch.Size([1, 96, 54, 54])
# MaxPool2d X shape is          torch.Size([1, 96, 26, 26])
# Sequential X shape is         torch.Size([1, 256, 26, 26])
# MaxPool2d X shape is          torch.Size([1, 256, 12, 12])
# Sequential X shape is         torch.Size([1, 384, 12, 12])
# MaxPool2d X shape is          torch.Size([1, 384, 5, 5])
# Dropout X shape is            torch.Size([1, 384, 5, 5])
# Sequential X shape is         torch.Size([1, 10, 5, 5])
# AdaptiveAvgPool2d X shape is  torch.Size([1, 10, 1, 1])
# Flatten X shape is            torch.Size([1, 10])
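To put a number on the parameter-savings claim from the preface, here is a rough count (a standalone sketch: it rebuilds the network with AlexNet's 384-channel third stage and, for contrast, counts just the first dense layer of the d2l AlexNet variant, which flattens 256×5×5 features into 4096 units):

```python
import torch
from torch import nn

# Standalone re-creation of the network so the count can be verified in isolation.
def nin_block(in_channels, out_channels, kernel_size, strides, padding):
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size, strides, padding),
        nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU())

net = nn.Sequential(
    nin_block(1, 96, kernel_size=11, strides=4, padding=0),
    nn.MaxPool2d(3, stride=2),
    nin_block(96, 256, kernel_size=5, strides=1, padding=2),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nin_block(256, 384, kernel_size=3, strides=1, padding=1),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Dropout(0.5),
    nin_block(384, 10, kernel_size=3, strides=1, padding=1),
    nn.AdaptiveAvgPool2d((1, 1)),
    nn.Flatten())

nin_params = sum(p.numel() for p in net.parameters())

# For contrast: a single dense layer mapping flattened 256*5*5
# features to 4096 units, as in the d2l AlexNet variant.
dense_params = 256 * 5 * 5 * 4096 + 4096

print(f'NiN total parameters:    {nin_params:,}')    # 1,992,166 (~2M)
print(f'One AlexNet dense layer: {dense_params:,}')  # 26,218,496 (~26M)
```

The entire NiN network has fewer parameters than a single one of AlexNet's dense layers, which is the point of replacing the dense head with global average pooling.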
# Load the Fashion-MNIST data
lr, num_epochs, batch_size = 0.1, 10, 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
# Finally, train the model
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
Summary
As I understand it, NiN is a marriage of dense and convolutional layers: it removes the problem of dense layers carrying too many parameters, while keeping the convolutional layers' strength at extracting features.
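That last point can be seen concretely (a small check, not from the original post): the global-average-pooling head used at the end of the network is just the spatial mean per channel, so it contributes zero learnable parameters in place of a dense classifier:

```python
import torch
from torch import nn

pool = nn.AdaptiveAvgPool2d((1, 1))
x = torch.randn(2, 10, 5, 5)            # 10 channels = 10 class score maps
y = nn.Flatten()(pool(x))               # shape (2, 10): one score per class

# Global average pooling is just the mean over the spatial dimensions...
print(torch.allclose(y, x.mean(dim=(2, 3))))       # True
# ...and it has no weights at all.
print(sum(p.numel() for p in pool.parameters()))   # 0
```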