1. Source
2. PyTorch Implementation
2.1 The NiN Block
NiN was proposed to address the huge parameter count of the fully connected layers in earlier models (channels of the last conv layer × feature-map height × feature-map width × output dimension of the first linear layer). It replaces fully connected layers with 1×1 convolutions, and uses global average pooling as the output layer: with a pooling window matching the input's spatial size, the output is a (channels × 1 × 1) tensor, so setting the final channel count equal to the number of classes makes it play the same role as a fully connected output layer.
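For a concrete sense of scale, consider the first fully connected layer of d2l's AlexNet variant, where the last conv output is flattened from 256 channels of 5×5 features into a 4096-unit linear layer (these numbers are from the d2l version of AlexNet; treat them as illustrative):

# Weights of AlexNet's first fully connected layer (d2l variant)
fc_params = 256 * 5 * 5 * 4096      # ~26.2 million weights
# A 1x1 convolution mapping 256 -> 256 channels, as NiN uses instead
conv1x1_params = 256 * 256 * 1 * 1  # ~65 thousand weights
print(f'{fc_params:,} vs {conv1x1_params:,}')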
NiN block structure:
- Conv layer 1 (a standard convolution) -- Activation 1 (ReLU)
- Conv layer 2 (1×1 kernel, output channels equal to its input channels) -- Activation 2 (ReLU)
- Conv layer 3 (1×1 kernel, output channels equal to its input channels) -- Activation 3 (ReLU)
Code:
import torch
from torch import nn
from d2l import torch as d2l

def nin_block(in_channels, out_channels, kernel_size, strides, padding):
    return nn.Sequential(
        # Standard convolution: sets the output channel count and spatial size
        nn.Conv2d(in_channels, out_channels, kernel_size, strides, padding), nn.ReLU(),
        # Two 1x1 convolutions act as per-pixel fully connected layers
        nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1), nn.ReLU())
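As a quick sanity check, a single block can be probed with a dummy input (a minimal sketch; the spatial size follows from the conv arithmetic floor((224 - 11) / 4) + 1 = 54):

blk = nin_block(1, 96, kernel_size=11, strides=4, padding=0)
X = torch.rand(1, 1, 224, 224)
print(blk(X).shape)  # torch.Size([1, 96, 54, 54])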
2.2 Model Construction
NiN model architecture:
- NiN block 1 (1 input channel, 96 output channels, 11×11 kernel, stride 4, padding 0) -- Max pooling 1 (3×3 window, stride 2)
- NiN block 2 (96 input channels, 256 output channels, 5×5 kernel, stride 1, padding 2) -- Max pooling 2 (3×3 window, stride 2)
- NiN block 3 (256 input channels, 384 output channels, 3×3 kernel, stride 1, padding 1) -- Max pooling 3 (3×3 window, stride 2) -- Dropout 1 (dropout rate 0.5)
- NiN block 4 (384 input channels, 10 output channels, 3×3 kernel, stride 1, padding 1) -- Global average pooling 1 (output size (1, 1)) -- Flatten into a 1-D vector
Code:
net = nn.Sequential(
    nin_block(1, 96, kernel_size=11, strides=4, padding=0),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nin_block(96, 256, kernel_size=5, strides=1, padding=2),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nin_block(256, 384, kernel_size=3, strides=1, padding=1),
    nn.MaxPool2d(kernel_size=3, stride=2), nn.Dropout(0.5),
    # Final block outputs one channel per class (10 for Fashion-MNIST)
    nin_block(384, 10, kernel_size=3, strides=1, padding=1),
    # Global average pooling reduces each channel map to a single value
    nn.AdaptiveAvgPool2d((1, 1)),
    # (batch, 10, 1, 1) -> (batch, 10)
    nn.Flatten())
Shape check:
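The per-layer shapes below can be reproduced by pushing a dummy 224×224 input through the top-level modules, in the usual d2l style (a minimal sketch):

X = torch.rand(size=(1, 1, 224, 224))
for layer in net:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape:\t', X.shape)

Output: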
Sequential output shape: torch.Size([1, 96, 54, 54])
MaxPool2d output shape: torch.Size([1, 96, 26, 26])
Sequential output shape: torch.Size([1, 256, 26, 26])
MaxPool2d output shape: torch.Size([1, 256, 12, 12])
Sequential output shape: torch.Size([1, 384, 12, 12])
MaxPool2d output shape: torch.Size([1, 384, 5, 5])
Dropout output shape: torch.Size([1, 384, 5, 5])
Sequential output shape: torch.Size([1, 10, 5, 5])
AdaptiveAvgPool2d output shape: torch.Size([1, 10, 1, 1])
Flatten output shape: torch.Size([1, 10])
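Note that the final AdaptiveAvgPool2d((1, 1)) is exactly a per-channel mean over the spatial dimensions, which is why matching the channel count to the number of classes lets it replace a fully connected output layer. A minimal check (the tensor name is illustrative):

Y = torch.rand(1, 10, 5, 5)
gap = nn.AdaptiveAvgPool2d((1, 1))
print(torch.allclose(gap(Y), Y.mean(dim=(2, 3), keepdim=True)))  # True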
2.3 Model Training
# Training
lr, num_epochs, batch_size = 0.05, 10, 64
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
Results:
loss 0.529, train acc 0.816, test acc 0.811
1316.1 examples/sec on cuda:0
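After training, individual predictions can be inspected by hand (a minimal sketch, assuming net and test_iter from above; d2l.train_ch6 leaves the model on the GPU):

net.eval()
X, y = next(iter(test_iter))
with torch.no_grad():
    # Index of the highest class score for each example
    preds = net(X.to(d2l.try_gpu())).argmax(dim=1)
print(preds[:10].cpu(), y[:10])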