1. Source
These notes follow the VGG chapter of Dive into Deep Learning (the d2l package used in the code below).
2. PyTorch Implementation
2.1 VGG Block Implementation
Unlike the LeNet and AlexNet implementations earlier in this series, this network is assembled from repeated blocks, which makes the architecture much easier to reconfigure.
A VGG block consists of:
- N repetitions of: convolutional layer (3×3 kernel, stride 1, padding 1; the first conv maps the block's input channels to its configurable output channel count, and every later conv keeps that count) → ReLU
- a max-pooling layer (2×2 kernel, stride 2), which halves the spatial resolution
Code:
import torch
from torch import nn
from d2l import torch as d2l

def vgg_block(num_convs, in_channels, out_channels):
    # num_convs conv+ReLU pairs followed by one max-pool
    layers = []
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels,
                                kernel_size=3, padding=1))
        layers.append(nn.ReLU())
        in_channels = out_channels  # later convs keep the channel count
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))  # halves H and W
    return nn.Sequential(*layers)
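As a quick sanity check (a minimal sketch of my own, not from the source), a two-conv block should change only the channel count while the pool halves the spatial size:

blk = vgg_block(2, 3, 64)        # 2 convs, 3 -> 64 channels
X = torch.randn(1, 3, 32, 32)    # dummy batch
print(blk(X).shape)              # torch.Size([1, 64, 16, 16])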
2.2 Building the Model
Several VGG blocks are chained together; each block's input channel count equals the previous block's output channel count (the first block takes 1 channel, since Fashion-MNIST images are grayscale), so each block only needs its number of conv+ReLU layers and its output channel count.
VGG-11 network structure (8 convolutional layers + 3 fully connected layers):
- VGG block 1 (1 conv layer, 64 output channels)
- VGG block 2 (1 conv layer, 128 output channels)
- VGG block 3 (2 conv layers, 256 output channels)
- VGG block 4 (2 conv layers, 512 output channels)
- VGG block 5 (2 conv layers, 512 output channels)
- flatten into a 1-D vector
- linear layer 1 (512*7*7 in, 4096 out) → ReLU → Dropout (p=0.5)
- linear layer 2 (4096 in, 4096 out) → ReLU → Dropout (p=0.5)
- output layer (4096 in, 10 out)
The five pooling layers each halve the spatial size, so a 224×224 input shrinks to 224/2^5 = 7, which is where the 512*7*7 = 25088 flattened dimensions come from.
Code:
# Model
conv_arch = (
    (1, 64),
    (1, 128),
    (2, 256),
    (2, 512),
    (2, 512))

def vgg(conv_arch):
    conv_blks = []
    in_channels = 1  # single-channel (grayscale) input
    for (num_convs, out_channels) in conv_arch:
        conv_blks.append(vgg_block(num_convs, in_channels, out_channels))
        in_channels = out_channels  # next block picks up where this one ends
    return nn.Sequential(
        *conv_blks, nn.Flatten(),
        # five 2x2 pools shrink a 224x224 input to 7x7
        nn.Linear(out_channels * 7 * 7, 4096), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 10)
    )

net = vgg(conv_arch)
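The shapes below were produced by pushing a dummy full-size input through the network, one top-level module at a time (the same loop is reused in section 2.3):

X = torch.randn(size=(1, 1, 224, 224))
for blk in net:
    X = blk(X)
    print(blk.__class__.__name__, 'output shape : \t', X.shape)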
Shape check:
Sequential output shape : torch.Size([1, 64, 112, 112])
Sequential output shape : torch.Size([1, 128, 56, 56])
Sequential output shape : torch.Size([1, 256, 28, 28])
Sequential output shape : torch.Size([1, 512, 14, 14])
Sequential output shape : torch.Size([1, 512, 7, 7])
Flatten output shape : torch.Size([1, 25088])
Linear output shape : torch.Size([1, 4096])
ReLU output shape : torch.Size([1, 4096])
Dropout output shape : torch.Size([1, 4096])
Linear output shape : torch.Size([1, 4096])
ReLU output shape : torch.Size([1, 4096])
Dropout output shape : torch.Size([1, 4096])
Linear output shape : torch.Size([1, 10])
2.3 A Smaller Model
Reduce every VGG block's channel count to a quarter of the original (plenty for Fashion-MNIST and much cheaper to train) and check the output shapes again.
Code:
# Smaller for test
ratio = 4
# quarter the output channels of every block, keep the layer counts
small_conv_arch = [(pair[0], pair[1] // ratio) for pair in conv_arch]
net = vgg(small_conv_arch)

X = torch.randn(size=(1, 1, 224, 224))
for blk in net:
    X = blk(X)
    print(blk.__class__.__name__, 'output shape : \t', X.shape)
Printed shapes:
Sequential output shape : torch.Size([1, 16, 112, 112])
Sequential output shape : torch.Size([1, 32, 56, 56])
Sequential output shape : torch.Size([1, 64, 28, 28])
Sequential output shape : torch.Size([1, 128, 14, 14])
Sequential output shape : torch.Size([1, 128, 7, 7])
Flatten output shape : torch.Size([1, 6272])
Linear output shape : torch.Size([1, 4096])
ReLU output shape : torch.Size([1, 4096])
Dropout output shape : torch.Size([1, 4096])
Linear output shape : torch.Size([1, 4096])
ReLU output shape : torch.Size([1, 4096])
Dropout output shape : torch.Size([1, 4096])
Linear output shape : torch.Size([1, 10])
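To quantify what the slimming saves, a quick parameter count works; count_params below is my own hypothetical helper, not part of d2l:

def count_params(model):
    # total number of learnable scalars in the model
    return sum(p.numel() for p in model.parameters())

print(count_params(vgg(conv_arch)))        # full-width VGG-11
print(count_params(vgg(small_conv_arch)))  # quarter-width variant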
2.4 Training
Code:
# Training
lr, num_epochs, batch_size = 0.05, 10, 32
# Fashion-MNIST images are upsampled from 28x28 to 224x224 to match VGG's input size
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
Result:
loss 0.096, train acc 0.964, test acc 0.925
890.5 examples/sec on cuda:0
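After training, the network can be used for inference directly. A minimal sketch (my own addition, assuming d2l.train_ch6 left the model on the GPU):

net.eval()  # disable dropout for inference
X, y = next(iter(test_iter))
with torch.no_grad():
    preds = net(X.to(d2l.try_gpu())).argmax(axis=1)
print(preds[:10].cpu(), y[:10])  # predicted vs. true labels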