5.3 Handwritten Digit Recognition Based on LeNet
In this experiment, we implement the classic convolutional network LeNet-5 and apply it to a handwritten digit recognition task.
5.3.1 Data
Handwritten digit recognition is one of the most common image classification tasks in computer vision: given an image, the computer must identify the handwritten digit it contains (the ten digits 0-9). Because handwriting styles vary widely, the task is not trivial.
We use the standard MNIST handwritten digit dataset. MNIST is a classic introductory dataset in computer vision, containing 60,000 training samples and 10,000 test samples. The digits are size-normalized and centered in fixed-size images (28×28 pixels). Figure 5.12 shows some example images.
To save training time, this section uses a subset of MNIST, split as follows:
- training set: 1,000 samples
- validation set: 200 samples
- test set: 200 samples
MNIST is organized into three datasets, train_set, dev_set and test_set; each holds two lists, one for the images and one for the labels. For example, train_set contains:
- image data: a [1000, 784] two-dimensional list with 1,000 images, each represented by a length-784 vector of 28×28 grayscale pixel values;
- label data: a [1000, 1] list holding the class label of each image, a digit from 0 to 9.
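To make the layout concrete, here is a minimal sketch (using a synthetic vector rather than real MNIST data) showing how one flat length-784 row maps back to a 28×28 image:

```python
import numpy as np

# One sample is stored as a flat vector of 784 grayscale values (28 * 28 = 784)
flat = np.zeros(784, dtype=np.float32)
image = flat.reshape(28, 28)  # recover the 2-D image
print(image.shape)  # (28, 28)
```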
Inspect the dataset distribution with the following code:
import json
import gzip

# Print and inspect the dataset distribution
train_set, dev_set, test_set = json.load(gzip.open('./mnist.json.gz'))
train_images, train_labels = train_set[0][:1000], train_set[1][:1000]
dev_images, dev_labels = dev_set[0][:200], dev_set[1][:200]
test_images, test_labels = test_set[0][:200], test_set[1][:200]
train_set, dev_set, test_set = [train_images, train_labels], [dev_images, dev_labels], [test_images, test_labels]
print('Length of train/dev/test set:{}/{}/{}'.format(len(train_set[0]), len(dev_set[0]), len(test_set[0])))
Length of train/dev/test set:1000/200/200
Visualize one sample together with its label:
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

image, label = train_set[0][0], train_set[1][0]
image, label = np.array(image).astype('float32'), int(label)
# The raw image is a length-784 row vector and must be reshaped to [28, 28]
image = np.reshape(image, [28, 28])
image = Image.fromarray(image.astype('uint8'), mode='L')
print("The number in the picture is {}".format(label))
plt.figure(figsize=(5, 5))
plt.imshow(image, cmap='gray')
plt.show()
The number in the picture is 5
5.3.1.1 Data Preprocessing
Image classification networks place requirements on the format and size of their input, so the data must be preprocessed before it is fed to the model. This experiment applies the following steps:
- Resizing: LeNet expects 32×32 inputs, while the original MNIST images are 28×28, so we resize them to 32×32 to match the network design;
- Normalization: normalization shifts the input distribution toward zero mean and unit standard deviation, which noticeably smooths the path to the optimum and makes training easier to converge.
The code is as follows:
import torchvision.transforms as transforms

# Data preprocessing: resize to 32x32, convert to a tensor, then normalize
# (note that this rebinds the name 'transforms' from the module to the composed transform)
transforms = transforms.Compose([transforms.Resize(32), transforms.ToTensor(), transforms.Normalize(mean=[0.5], std=[0.5])])
The images must be resized to 32 here; otherwise the later convolutions cannot be applied.
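With mean=[0.5] and std=[0.5], Normalize maps the ToTensor range [0, 1] onto [-1, 1] via (x - mean) / std; a quick NumPy check of that arithmetic:

```python
import numpy as np

pixels = np.array([0.0, 0.5, 1.0], dtype=np.float32)  # ToTensor output range
normalized = (pixels - 0.5) / 0.5  # what Normalize(mean=[0.5], std=[0.5]) computes
print(normalized)  # [-1.  0.  1.]
```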
Wrap the raw dataset in a Dataset class so that a DataLoader can consume it.
import random
import torch.utils.data as io

class MNIST_dataset(io.Dataset):
    def __init__(self, dataset, transforms, mode='train'):
        self.mode = mode
        self.transforms = transforms
        self.dataset = dataset

    def __getitem__(self, idx):
        # Fetch the image and its label
        image, label = self.dataset[0][idx], self.dataset[1][idx]
        image, label = np.array(image).astype('float32'), int(label)
        image = np.reshape(image, [28, 28])
        image = Image.fromarray(image.astype('uint8'), mode='L')
        image = self.transforms(image)
        return image, label

    def __len__(self):
        return len(self.dataset[0])

# Fix the random seed
random.seed(0)
# Wrap the MNIST subsets as Dataset objects
train_dataset = MNIST_dataset(dataset=train_set, transforms=transforms, mode='train')
test_dataset = MNIST_dataset(dataset=test_set, transforms=transforms, mode='test')
dev_dataset = MNIST_dataset(dataset=dev_set, transforms=transforms, mode='dev')
5.3.2 Model Construction
Although LeNet-5 was proposed quite early, it is a very successful neural network model. Handwritten digit recognition systems based on LeNet-5 were deployed in many American banks in the 1990s to read the handwritten digits on checks. The LeNet-5 architecture is shown in Figure 5.13.
5.3.2.1 Build the LeNet-5 model with custom operators
We build a LeNet-5 model from the convolution and pooling operators defined above.
This LeNet-5 differs from the original version in four ways:
- the C3 layer does not use a connection table to reduce the number of convolutions;
- the pooling layers use simple average pooling, without weight or bias parameters and without a nonlinear activation function;
- the convolutional layers use the ReLU activation function;
- the final output layer is a fully connected linear layer.
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class Conv2D(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(Conv2D, self).__init__()
        # Convolution kernels
        self.weight = torch.nn.Parameter(torch.ones(out_channels, in_channels, kernel_size, kernel_size))
        # Biases
        self.bias = torch.nn.Parameter(torch.zeros(out_channels, 1))
        self.stride = stride
        self.padding = padding
        # Number of input channels
        self.in_channels = in_channels
        # Number of output channels
        self.out_channels = out_channels

    # Basic 2-D convolution over a single channel
    def single_forward(self, X, weight):
        # Zero padding
        new_X = torch.zeros([X.shape[0], X.shape[1] + 2 * self.padding, X.shape[2] + 2 * self.padding])
        new_X[:, self.padding:X.shape[1] + self.padding, self.padding:X.shape[2] + self.padding] = X
        u, v = weight.shape
        output_w = (new_X.shape[1] - u) // self.stride + 1
        output_h = (new_X.shape[2] - v) // self.stride + 1
        output = torch.zeros([X.shape[0], output_w, output_h])
        for i in range(0, output.shape[1]):
            for j in range(0, output.shape[2]):
                output[:, i, j] = torch.sum(
                    new_X[:, self.stride * i:self.stride * i + u, self.stride * j:self.stride * j + v] * weight,
                    dim=[1, 2])
        return output

    def forward(self, inputs):
        """
        Inputs:
            - inputs: input tensor, shape=[B, D, M, N]
            - weights: P sets of 2-D kernels, shape=[P, D, U, V]
            - bias: P biases, shape=[P, 1]
        """
        feature_maps = []
        # Multi-input-channel convolution, repeated once per output channel
        p = 0
        for w, b in zip(self.weight, self.bias):  # P pairs (w, b); each pass computes one feature map Zp
            multi_outs = []
            # Convolve each input feature map with its corresponding kernel
            for i in range(self.in_channels):
                single = self.single_forward(inputs[:, i, :, :], w[i])
                multi_outs.append(single)
                # print("Conv2D in_channels:", self.in_channels, "i:", i, "single:", single.shape)
            # Sum the per-channel results
            feature_map = torch.sum(torch.stack(multi_outs), dim=0) + b  # Zp
            feature_maps.append(feature_map)
            # print("Conv2D out_channels:", self.out_channels, "p:", p, "feature_map:", feature_map.shape)
            p += 1
        # Stack all the Zp maps along the channel dimension
        out = torch.stack(feature_maps, 1)
        return out
class Pool2D(nn.Module):
    def __init__(self, size=(2, 2), mode='max', stride=1):
        super(Pool2D, self).__init__()
        # Pooling mode
        self.mode = mode
        self.h, self.w = size
        self.stride = stride

    def forward(self, x):
        output_w = (x.shape[2] - self.w) // self.stride + 1
        output_h = (x.shape[3] - self.h) // self.stride + 1
        output = torch.zeros([x.shape[0], x.shape[1], output_w, output_h])
        # Pooling
        for i in range(output.shape[2]):
            for j in range(output.shape[3]):
                # Max pooling (reduce over the window dimensions only)
                if self.mode == 'max':
                    output[:, :, i, j] = torch.amax(
                        x[:, :, self.stride * i:self.stride * i + self.w, self.stride * j:self.stride * j + self.h],
                        dim=[2, 3])
                # Average pooling
                elif self.mode == 'avg':
                    output[:, :, i, j] = torch.mean(
                        x[:, :, self.stride * i:self.stride * i + self.w, self.stride * j:self.stride * j + self.h],
                        dim=[2, 3])
        return output
class Model_LeNet(nn.Module):
    def __init__(self, in_channels, num_classes=10):
        super(Model_LeNet, self).__init__()
        # Conv layer: 6 output channels, 5x5 kernel
        self.conv1 = Conv2D(in_channels=in_channels, out_channels=6, kernel_size=5)
        # Pooling layer: 2x2 window, stride 2
        self.pool2 = Pool2D(size=(2, 2), mode='max', stride=2)
        # Conv layer: 6 input channels, 16 output channels, 5x5 kernel, stride 1
        self.conv3 = Conv2D(in_channels=6, out_channels=16, kernel_size=5, stride=1)
        # Pooling layer: 2x2 window, stride 2
        self.pool4 = Pool2D(size=(2, 2), mode='avg', stride=2)
        # Conv layer: 16 input channels, 120 output channels, 5x5 kernel
        self.conv5 = Conv2D(in_channels=16, out_channels=120, kernel_size=5, stride=1)
        # Fully connected layer: 120 inputs, 84 outputs
        self.linear6 = nn.Linear(120, 84)
        # Fully connected layer: 84 inputs, num_classes outputs
        self.linear7 = nn.Linear(84, num_classes)

    def forward(self, x):
        # C1: conv + activation
        output = F.relu(self.conv1(x))
        # S2: pooling
        output = self.pool2(output)
        # C3: conv + activation
        output = F.relu(self.conv3(output))
        # S4: pooling
        output = self.pool4(output)
        # C5: conv + activation
        output = F.relu(self.conv5(output))
        # Flatten [B, C, H, W] -> [B, C*H*W]; H = W = 1 here, so squeezing suffices
        output = torch.squeeze(output, dim=3)
        output = torch.squeeze(output, dim=2)
        # F6: fully connected
        output = F.relu(self.linear6(output))
        # F7: fully connected
        output = self.linear7(output)
        return output
5.3.2.2 Test the LeNet-5 model: feed an input of shape [1,1,32,32] through the network and observe how the feature-map shape changes at each layer
Let us test the LeNet-5 model above: construct an input of shape [1,1,32,32], feed it through the network, and watch how each layer changes the feature-map shape. The code is as follows:
# Use np.random to create a random array as input data
inputs = np.random.randn(*[1, 1, 32, 32])
inputs = inputs.astype('float32')

# Instantiate Model_LeNet with the number of input channels and classes
model = Model_LeNet(in_channels=1, num_classes=10)
# modules() is inherited from nn.Module and enumerates the sublayers of LeNet
print(model.modules())
x = torch.tensor(inputs)
for item in model.children():
    # item is one sublayer of the LeNet class
    # Inspect the output shape after each sublayer
    item_shapex = 0
    names = []
    parameter = []
    for name in item.named_parameters():
        names.append(name[0])
        parameter.append(name[1])
        item_shapex += 1
    try:
        x = item(x)
    except:
        # The last convolution output must be flattened before the fully connected layers
        x = x.reshape([x.shape[0], -1])
        x = item(x)
    if item_shapex == 2:
        # For conv and linear layers, show the data shape and the parameter shapes;
        # parameter[0] is the weight w and parameter[1] is the bias b
        print(item, x.shape, parameter[0].shape, parameter[1].shape)
    else:
        # Pooling layers have no parameters
        print(item, x.shape)
<generator object Module.modules at 0x000002507F2BED60>
Conv2D() torch.Size([1, 6, 28, 28]) torch.Size([6, 1, 5, 5]) torch.Size([6, 1])
Pool2D() torch.Size([1, 6, 14, 14])
Conv2D() torch.Size([1, 16, 10, 10]) torch.Size([16, 6, 5, 5]) torch.Size([16, 1])
Pool2D() torch.Size([1, 16, 5, 5])
Conv2D() torch.Size([1, 120, 1, 1]) torch.Size([120, 16, 5, 5]) torch.Size([120, 1])
Linear(in_features=120, out_features=84, bias=True) torch.Size([1, 84]) torch.Size([84, 120]) torch.Size([84])
Linear(in_features=84, out_features=10, bias=True) torch.Size([1, 10]) torch.Size([10, 84]) torch.Size([10])
From the output we can see that:
- for the 32×32 single-channel input, the first layer convolves with 6 kernels of size 5×5, producing 6 feature maps of size 28×28;
- after the 2×2, stride-2 pooling layer, the 6 feature maps shrink to 14×14;
- the 6 feature maps of size 14×14 are then convolved with 16 kernels of size 5×5, giving 16 output feature maps of size 10×10;
- after another 2×2, stride-2 pooling layer, the 16 feature maps shrink to 5×5;
- the 16 feature maps of size 5×5 are convolved with 120 kernels of size 5×5, giving 120 output feature maps of size 1×1;
- flattening these yields 120 values, which a fully connected layer with 120 inputs and 84 outputs maps to a length-84 vector;
- a final fully connected layer then produces an output whose length equals the number of classes.
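All of the shape changes above follow the standard output-size formula (H − K + 2P) / S + 1. A small sketch (the function name is our own) tracing the LeNet-5 pipeline with it:

```python
def out_size(h, kernel, stride=1, padding=0):
    # Standard conv/pool output-size formula: (H - K + 2P) // S + 1
    return (h + 2 * padding - kernel) // stride + 1

h = 32
h = out_size(h, kernel=5)             # C1 conv 5x5         -> 28
h = out_size(h, kernel=2, stride=2)   # S2 pool 2x2, stride 2 -> 14
h = out_size(h, kernel=5)             # C3 conv 5x5         -> 10
h = out_size(h, kernel=2, stride=2)   # S4 pool 2x2, stride 2 -> 5
h = out_size(h, kernel=5)             # C5 conv 5x5         -> 1
print(h)  # 1
```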
5.3.2.3 Build the LeNet-5 model with the corresponding PyTorch operators
Since the custom Conv2D and Pool2D operators contain multiple for loops, they run rather slowly. PyTorch's built-in convolution and pooling operators are optimized for speed, so here we build LeNet-5 from torch.nn.Conv2d, torch.nn.MaxPool2d and torch.nn.AvgPool2d and compare its speed against the implementation above. The code is as follows:
class PyTorch_LeNet(nn.Module):
    def __init__(self, in_channels, num_classes=10):
        super(PyTorch_LeNet, self).__init__()
        # Conv layer: 6 output channels, 5x5 kernel
        self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=6, kernel_size=5)
        # Pooling layer: 2x2 window, stride 2
        self.pool2 = nn.MaxPool2d(2, stride=2)
        # Conv layer: 6 input channels, 16 output channels, 5x5 kernel, stride 1
        self.conv3 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1)
        # Pooling layer: 2x2 window, stride 2
        self.pool4 = nn.AvgPool2d(2, stride=2)
        # Conv layer: 16 input channels, 120 output channels, 5x5 kernel
        self.conv5 = nn.Conv2d(in_channels=16, out_channels=120, kernel_size=5, stride=1)
        # Fully connected layer: 120 inputs, 84 outputs
        self.linear6 = nn.Linear(120, 84)
        # Fully connected layer: 84 inputs, num_classes outputs
        self.linear7 = nn.Linear(84, num_classes)

    def forward(self, x):
        # C1: conv + activation
        output = F.relu(self.conv1(x))
        # S2: pooling
        output = self.pool2(output)
        # C3: conv + activation
        output = F.relu(self.conv3(output))
        # S4: pooling
        output = self.pool4(output)
        # C5: conv + activation
        output = F.relu(self.conv5(output))
        # Flatten [B, C, H, W] -> [B, C*H*W]; H = W = 1 here, so squeezing suffices
        output = torch.squeeze(output, dim=3)
        output = torch.squeeze(output, dim=2)
        # F6: fully connected
        output = F.relu(self.linear6(output))
        # F7: fully connected
        output = self.linear7(output)
        return output
5.3.2.4 Test the running speed of the two networks
from pt_lenet import PyTorch_LeNet
from module_lenet import Model_LeNet
import numpy as np
import time
import torch

# Use np.random to create a random array as test data
inputs = np.random.randn(*[1, 1, 32, 32])
inputs = inputs.astype('float32')
x = torch.tensor(inputs)

# Instantiate Model_LeNet with the number of input channels and classes
model = Model_LeNet(in_channels=1, num_classes=10)
# Instantiate PyTorch_LeNet with the number of input channels and classes
torch_model = PyTorch_LeNet(in_channels=1, num_classes=10)

# Measure the speed of Model_LeNet
model_time = 0
for i in range(60):
    start_time = time.time()
    out = model(x)
    end_time = time.time()
    # The first 10 runs are warm-up and excluded from the statistics
    if i < 10:
        continue
    model_time += (end_time - start_time)
avg_model_time = model_time / 50
print('Model_LeNet speed:', avg_model_time, 's')

# Measure the speed of PyTorch_LeNet
torch_model_time = 0
for i in range(60):
    start_time = time.time()
    torch_out = torch_model(x)
    end_time = time.time()
    # The first 10 runs are warm-up and excluded from the statistics
    if i < 10:
        continue
    torch_model_time += (end_time - start_time)
avg_torch_model_time = torch_model_time / 50
print('torch_LeNet speed:', avg_torch_model_time, 's')
Model_LeNet speed: 0.6059014654159546 s
torch_LeNet speed: 0.0007779264450073242 s
As we can see, the PyTorch implementation is faster by several orders of magnitude.
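The gap comes mainly from the nested Python loops inside the custom operators. As an illustration (not code from this experiment; all names are our own), the same 2-D correlation can be vectorized with NumPy stride tricks, and the two versions agree numerically:

```python
import numpy as np

x = np.random.randn(64, 64).astype(np.float32)   # input feature map
k = np.random.randn(5, 5).astype(np.float32)     # 5x5 kernel

def conv_loops(x, k):
    # Direct translation of the nested-loop approach
    u, v = k.shape
    out = np.zeros((x.shape[0] - u + 1, x.shape[1] - v + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + u, j:j + v] * k).sum()
    return out

def conv_vectorized(x, k):
    # All sliding windows at once, then a single einsum contraction
    windows = np.lib.stride_tricks.sliding_window_view(x, k.shape)
    return np.einsum('ijuv,uv->ij', windows, k)

out_loops = conv_loops(x, k)
out_vec = conv_vectorized(x, k)
```

Timing the two functions shows the same effect as above: moving the inner loops out of Python and into optimized array kernels is where most of the speedup comes from.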
5.3.2.5 Load the same weights into both networks and check whether their outputs agree
from pt_lenet import PyTorch_LeNet
from module_lenet import Model_LeNet
import numpy as np
import torch

# Use np.random to create a random array as test data
inputs = np.random.randn(*[1, 1, 32, 32])
inputs = inputs.astype('float32')
x = torch.tensor(inputs)

# Instantiate Model_LeNet with the number of input channels and classes
model = Model_LeNet(in_channels=1, num_classes=10)
# Get the network weights
params = model.state_dict()
# The bias of the custom Conv2D operator has shape [out_channels, 1],
# while the bias of the torch Conv2d operator has shape [out_channels],
# so it must be reshaped before assignment
for key in params:
    if 'bias' in key:
        params[key] = params[key].squeeze()
# Instantiate PyTorch_LeNet with the number of input channels and classes
torch_model = PyTorch_LeNet(in_channels=1, num_classes=10)
# Assign Model_LeNet's weights to the torch model so the two stay identical
torch_model.load_state_dict(params)

# Print results with 6 decimal places
torch.set_printoptions(6)
# Output of Model_LeNet
output = model(x)
print('Model_LeNet output: ', output)
# Output of the torch model
torch_output = torch_model(x)
print('torch_LeNet output: ', torch_output)
Model_LeNet output: tensor([[-67777.984375, -45821.519531, 49726.687500, 21334.896484,
1850.835327, 15550.165039, 27844.017578, 87872.117188,
-4373.378418, 10120.727539]], grad_fn=<AddmmBackward>)
torch_LeNet output: tensor([[-67777.914062, -45821.472656, 49726.640625, 21334.880859,
1850.843140, 15550.158203, 27844.000000, 87872.054688,
-4373.376465, 10120.725586]], grad_fn=<AddmmBackward>)
As we can see, with identical weights the outputs of the custom operators and the torch operators are almost identical; the small remaining differences are floating-point rounding error.
5.3.2.6 Count the parameters and operations of the LeNet-5 model
Parameter count
Following Formula (5.18):
- first convolutional layer: 6×1×5×5+6=156;
- second convolutional layer: 16×6×5×5+16=2416;
- third convolutional layer: 120×16×5×5+120=48120;
- first fully connected layer: 120×84+84=10164;
- second fully connected layer: 84×10+10=850.
In total, LeNet-5 has 61706 parameters.
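The per-layer arithmetic above (weights plus biases for each layer) can be checked directly:

```python
# Parameters = weights + biases for each layer of LeNet-5
conv1 = 6 * 1 * 5 * 5 + 6        # 156
conv3 = 16 * 6 * 5 * 5 + 16      # 2416
conv5 = 120 * 16 * 5 * 5 + 120   # 48120
linear6 = 120 * 84 + 84          # 10164
linear7 = 84 * 10 + 10           # 850
total = conv1 + conv3 + conv5 + linear6 + linear7
print(total)  # 61706
```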
from pt_lenet import PyTorch_LeNet
import torchsummary

model = PyTorch_LeNet(in_channels=1, num_classes=10)
# torchsummary feeds the model a CUDA tensor by default, so move the model to the GPU
model = model.cuda()
params_info = torchsummary.summary(model, (1, 32, 32))
print(params_info)
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 6, 28, 28] 156
MaxPool2d-2 [-1, 6, 14, 14] 0
Conv2d-3 [-1, 16, 10, 10] 2,416
AvgPool2d-4 [-1, 16, 5, 5] 0
Conv2d-5 [-1, 120, 1, 1] 48,120
Linear-6 [-1, 84] 10,164
Linear-7 [-1, 10] 850
================================================================
Total params: 61,706
Trainable params: 61,706
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.06
Params size (MB): 0.24
Estimated Total Size (MB): 0.30
----------------------------------------------------------------
None
As shown, the result matches the hand calculation.
Note that if
model = model.cuda()
is omitted, the call fails with
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
Operation count
Following Formula (5.19):
- first convolutional layer: 28×28×5×5×6×1+28×28×6=122304;
- second convolutional layer: 10×10×5×5×16×6+10×10×16=241600;
- third convolutional layer: 1×1×5×5×120×16+1×1×120=48120;
- average pooling layer: 16×5×5=400;
- first fully connected layer: 120×84=10080;
- second fully connected layer: 84×10=840.
In total, LeNet-5 performs 423344 operations.
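As with the parameter count, the per-layer operation counts above can be verified directly:

```python
# Operation counts per layer, following the per-layer arithmetic above
conv1_ops = 28 * 28 * 5 * 5 * 6 * 1 + 28 * 28 * 6    # 122304
conv3_ops = 10 * 10 * 5 * 5 * 16 * 6 + 10 * 10 * 16  # 241600
conv5_ops = 1 * 1 * 5 * 5 * 120 * 16 + 1 * 1 * 120   # 48120
avgpool_ops = 16 * 5 * 5                              # 400
linear6_ops = 120 * 84                                # 10080
linear7_ops = 84 * 10                                 # 840
total_ops = conv1_ops + conv3_ops + conv5_ops + avgpool_ops + linear6_ops + linear7_ops
print(total_ops)  # 423344
```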
In PaddlePaddle, the paddle.flops API can count operations automatically. Does PyTorch have an equivalent?
In PyTorch, we can use the torchstat package to count operations.
from pt_lenet import PyTorch_LeNet
from torchstat import stat

model = PyTorch_LeNet(in_channels=1, num_classes=10)
# Pass the model and the size of one input image
stat(model, (1, 32, 32))
module name input shape output shape params memory(MB) MAdd Flops MemRead(B) MemWrite(B) duration[%] MemR+W(B)
0 conv1 1 32 32 6 28 28 156.0 0.02 235,200.0 122,304.0 4720.0 18816.0 66.61% 23536.0
1 pool2 6 28 28 6 14 14 0.0 0.00 3,528.0 4,704.0 18816.0 4704.0 0.00% 23520.0
2 conv3 6 14 14 16 10 10 2416.0 0.01 480,000.0 241,600.0 14368.0 6400.0 0.00% 20768.0
3 pool4 16 10 10 16 5 5 0.0 0.00 1,600.0 1,600.0 6400.0 1600.0 0.00% 8000.0
4 conv5 16 5 5 120 1 1 48120.0 0.00 96,000.0 48,120.0 194080.0 480.0 33.38% 194560.0
5 linear6 120 84 10164.0 0.00 20,076.0 10,080.0 41136.0 336.0 0.00% 41472.0
6 linear7 84 10 850.0 0.00 1,670.0 840.0 3736.0 40.0 0.00% 3776.0
total 61706.0 0.03 838,074.0 429,248.0 3736.0 40.0 100.00% 315632.0
=====================================================================================================================================
Total params: 61,706
-------------------------------------------------------------------------------------------------------------------------------------
Total memory: 0.03MB
Total MAdd: 838.07KMAdd
Total Flops: 429.25KFlops
Total MemR+W: 308.23KB
5.3.3 Model Training
We train the LeNet-5 network with the cross-entropy loss function and stochastic gradient descent as the optimizer.
Using RunnerV3, train on the training set for 6 epochs and save the model with the highest validation accuracy as the best model.
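Cross-entropy on logits is log-softmax followed by the negative log-likelihood of the true class; a minimal NumPy sketch of that computation (values are illustrative):

```python
import numpy as np

logits = np.array([1.0, 2.0, 0.5])  # raw scores for 3 classes
label = 1                           # index of the true class

# log-softmax, computed in a numerically stable way
shifted = logits - logits.max()
log_probs = shifted - np.log(np.exp(shifted).sum())

# negative log-likelihood of the true class
loss = -log_probs[label]
```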
RunnerV3:
class RunnerV3(object):
    def __init__(self, model, optimizer, loss_fn, metric, **kwargs):
        self.model = model
        self.optimizer = optimizer
        self.loss_fn = loss_fn
        self.metric = metric  # only used to compute the evaluation metric

        # Evaluation metric over the course of training
        self.dev_scores = []

        # Loss over the course of training
        self.train_epoch_losses = []  # one record per epoch
        self.train_step_losses = []   # one record per step
        self.dev_losses = []

        # Global best metric
        self.best_score = 0

    def train(self, train_loader, dev_loader=None, **kwargs):
        # Switch the model to training mode
        self.model.train()

        # Number of training epochs; defaults to 0 if not given
        num_epochs = kwargs.get("num_epochs", 0)
        # Logging frequency; defaults to 100 if not given
        log_steps = kwargs.get("log_steps", 100)
        # Evaluation frequency
        eval_steps = kwargs.get("eval_steps", 0)

        # Model save path; defaults to "best_model.pdparams" if not given
        save_path = kwargs.get("save_path", "best_model.pdparams")

        custom_print_log = kwargs.get("custom_print_log", None)

        # Total number of training steps
        num_training_steps = num_epochs * len(train_loader)

        if eval_steps:
            if self.metric is None:
                raise RuntimeError('Error: Metric can not be None!')
            if dev_loader is None:
                raise RuntimeError('Error: dev_loader can not be None!')

        # Number of steps run so far
        global_step = 0

        # Train for num_epochs epochs
        for epoch in range(num_epochs):
            # Accumulate the training loss
            total_loss = 0
            for step, data in enumerate(train_loader):
                X, y = data
                # Model predictions
                logits = self.model(X)
                loss = self.loss_fn(logits, y)  # reduces with mean by default
                total_loss += loss

                # Record the loss of every step during training
                self.train_step_losses.append((global_step, loss.item()))

                if log_steps and global_step % log_steps == 0:
                    print(
                        f"[Train] epoch: {epoch}/{num_epochs}, step: {global_step}/{num_training_steps}, loss: {loss.item():.5f}")

                # Backpropagate to compute the gradient of every parameter
                loss.backward()

                if custom_print_log:
                    custom_print_log(self)

                # Mini-batch gradient descent parameter update
                self.optimizer.step()
                # Reset the gradients
                self.optimizer.zero_grad()

                # Decide whether to evaluate
                if eval_steps > 0 and global_step > 0 and \
                        (global_step % eval_steps == 0 or global_step == (num_training_steps - 1)):
                    dev_score, dev_loss = self.evaluate(dev_loader, global_step=global_step)
                    print(f"[Evaluate] dev score: {dev_score:.5f}, dev loss: {dev_loss:.5f}")

                    # Switch the model back to training mode
                    self.model.train()

                    # If the current metric is the best so far, save the model
                    if dev_score > self.best_score:
                        self.save_model(save_path)
                        print(
                            f"[Evaluate] best accuracy performance has been updated: {self.best_score:.5f} --> {dev_score:.5f}")
                        self.best_score = dev_score

                global_step += 1

            # Accumulated training loss of the current epoch
            trn_loss = (total_loss / len(train_loader)).item()
            # Record the epoch-level training loss
            self.train_epoch_losses.append(trn_loss)

        print("[Train] Training done!")

    # Evaluation: 'torch.no_grad()' disables gradient computation and storage
    @torch.no_grad()
    def evaluate(self, dev_loader, **kwargs):
        assert self.metric is not None

        # Switch the model to evaluation mode
        self.model.eval()

        global_step = kwargs.get("global_step", -1)

        # Accumulate the validation loss
        total_loss = 0

        # Reset the metric
        self.metric.reset()

        # Iterate over the validation batches
        for batch_id, data in enumerate(dev_loader):
            X, y = data

            # Model output
            logits = self.model(X)

            # Compute the loss
            loss = self.loss_fn(logits, y).item()
            # Accumulate the loss
            total_loss += loss

            # Accumulate the metric
            self.metric.update(logits, y)

        dev_loss = (total_loss / len(dev_loader))
        dev_score = self.metric.accumulate()

        # Record the validation loss
        if global_step != -1:
            self.dev_losses.append((global_step, dev_loss))
            self.dev_scores.append(dev_score)

        return dev_score, dev_loss

    # Prediction: 'torch.no_grad()' disables gradient computation and storage
    @torch.no_grad()
    def predict(self, x, **kwargs):
        # Switch the model to evaluation mode
        self.model.eval()
        # Forward pass to get the predictions
        logits = self.model(x)
        return logits

    def save_model(self, save_path):
        torch.save(self.model.state_dict(), save_path)

    def load_model(self, model_path):
        state_dict = torch.load(model_path)
        self.model.load_state_dict(state_dict)
Accuracy:
class Accuracy():
    def __init__(self):
        """
        Input:
            - is_logist: whether outputs are logits or activated values
        """
        # Number of correctly predicted samples
        self.num_correct = 0
        # Total number of samples
        self.num_count = 0
        self.is_logist = True

    def update(self, outputs, labels):
        """
        Inputs:
            - outputs: predictions, shape=[N, class_num]
            - labels: ground-truth labels, shape=[N, 1]
        """
        # shape[1] == 1 means binary classification; shape[1] > 1 means multi-class
        if outputs.shape[1] == 1:  # binary classification
            outputs = torch.squeeze(outputs, dim=-1)
            if self.is_logist:
                # For logits, threshold at 0
                preds = (outputs >= 0).float()
            else:
                # Otherwise threshold the probability at 0.5: class 1 if above, class 0 if not
                preds = (outputs >= 0.5).float()
        else:
            # For multi-class, take 'torch.argmax' over the class dimension
            preds = torch.argmax(outputs, dim=1).int()

        # Number of correct predictions in this batch
        labels = torch.squeeze(labels, dim=-1)
        batch_correct = torch.sum((preds == labels).float()).item()
        batch_count = len(labels)

        # Update num_correct and num_count
        self.num_correct += batch_correct
        self.num_count += batch_count

    def accumulate(self):
        # Compute the overall metric from the accumulated counts
        if self.num_count == 0:
            return 0
        return self.num_correct / self.num_count

    def reset(self):
        # Reset the counts
        self.num_correct = 0
        self.num_count = 0

    def name(self):
        return "Accuracy"
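For the multi-class branch, the metric reduces to comparing argmax predictions against the labels; an equivalent NumPy sketch with made-up values:

```python
import numpy as np

logits = np.array([[2.0, 0.1, -1.0],   # predicted class 0
                   [0.2, 1.5, 0.3],    # predicted class 1
                   [0.1, 0.2, 0.05]])  # predicted class 1
labels = np.array([0, 1, 2])

preds = logits.argmax(axis=1)        # same rule as torch.argmax(outputs, dim=1)
accuracy = (preds == labels).mean()  # 2 of 3 correct
print(accuracy)
```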
Training:
import torch.optim as opti
from torch.utils.data import DataLoader

# Learning rate
lr = 0.2
# Batch size
batch_size = 64

# Load the data
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dev_loader = DataLoader(dev_dataset, batch_size=batch_size)
test_loader = DataLoader(test_dataset, batch_size=batch_size)

# Define the LeNet network
# LeNet-5 built from PyTorch operators
model = PyTorch_LeNet(in_channels=1, num_classes=10)
# Define the optimizer
optimizer = opti.SGD(model.parameters(), lr)
# Define the loss function
loss_fn = F.cross_entropy
# Define the evaluation metric
metric = Accuracy()
# Instantiate the RunnerV3 class and pass in the training configuration
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Start training
log_steps = 15
eval_steps = 15
runner.train(train_loader, dev_loader, num_epochs=6, log_steps=log_steps,
             eval_steps=eval_steps, save_path="best_model.pdparams")
[Train] epoch: 0/6, step: 0/96, loss: 2.31036
[Train] epoch: 0/6, step: 15/96, loss: 2.27804
[Evaluate] dev score: 0.11500, dev loss: 2.27864
[Evaluate] best accuracy performance has been updated: 0.00000 --> 0.11500
[Train] epoch: 1/6, step: 30/96, loss: 2.03024
[Evaluate] dev score: 0.28500, dev loss: 2.05039
[Evaluate] best accuracy performance has been updated: 0.11500 --> 0.28500
[Train] epoch: 2/6, step: 45/96, loss: 1.58493
[Evaluate] dev score: 0.52500, dev loss: 1.42669
[Evaluate] best accuracy performance has been updated: 0.28500 --> 0.52500
[Train] epoch: 3/6, step: 60/96, loss: 2.33754
[Evaluate] dev score: 0.27500, dev loss: 2.07681
[Train] epoch: 4/6, step: 75/96, loss: 1.39636
[Evaluate] dev score: 0.58500, dev loss: 1.30211
[Evaluate] best accuracy performance has been updated: 0.52500 --> 0.58500
[Train] epoch: 5/6, step: 90/96, loss: 0.37585
[Evaluate] dev score: 0.82500, dev loss: 0.54126
[Evaluate] best accuracy performance has been updated: 0.58500 --> 0.82500
[Evaluate] dev score: 0.81000, dev loss: 0.53073
[Train] Training done!
Visualize how the loss changes on the training and validation sets.
import matplotlib.pyplot as plt

# Visualize the loss
def plot(runner, fig_name):
    plt.figure(figsize=(10, 5))

    plt.subplot(1, 2, 1)
    train_items = runner.train_step_losses[::30]
    train_steps = [x[0] for x in train_items]
    train_losses = [x[1] for x in train_items]

    plt.plot(train_steps, train_losses, color='#8E004D', label="Train loss")
    if runner.dev_losses[0][0] != -1:
        dev_steps = [x[0] for x in runner.dev_losses]
        dev_losses = [x[1] for x in runner.dev_losses]
        plt.plot(dev_steps, dev_losses, color='#E20079', linestyle='--', label="Dev loss")
    # Axes and legend
    plt.ylabel("loss", fontsize='x-large')
    plt.xlabel("step", fontsize='x-large')
    plt.legend(loc='upper right', fontsize='x-large')

    plt.subplot(1, 2, 2)
    # Validation accuracy curve
    if runner.dev_losses[0][0] != -1:
        plt.plot(dev_steps, runner.dev_scores,
                 color='#E20079', linestyle="--", label="Dev accuracy")
    else:
        plt.plot(list(range(len(runner.dev_scores))), runner.dev_scores,
                 color='#E20079', linestyle="--", label="Dev accuracy")
    # Axes and legend
    plt.ylabel("score", fontsize='x-large')
    plt.xlabel("step", fontsize='x-large')
    plt.legend(loc='lower right', fontsize='x-large')

    plt.savefig(fig_name)
    plt.show()

runner.load_model('best_model.pdparams')
plot(runner, 'cnn-loss1.pdf')
5.3.4 Model Evaluation
Evaluate the best model saved during training on the test set, and observe the model's accuracy and loss on that set.
# Load the best model
runner.load_model('best_model.pdparams')
# Evaluate the model
score, loss = runner.evaluate(test_loader)
print("[Test] accuracy/loss: {:.4f}/{:.4f}".format(score, loss))
[Test] accuracy/loss: 0.8900/0.4025
5.3.5 Model Prediction
# Take the first batch from the test set
X, label = next(iter(test_loader))
logits = runner.predict(X)
# Multi-class: softmax turns the logits into predicted probabilities
pred = F.softmax(logits, dim=1)
# Pick the third sample (index 2) and take the class with the highest probability
pred_class = torch.argmax(pred[2]).numpy()
label = label[2].numpy()
# Print the true and the predicted classes
print("The true category is {} and the predicted category is {}".format(label, pred_class))

# Visualize the image
plt.figure(figsize=(2, 2))
image, label = test_set[0][2], test_set[1][2]
image = np.array(image).astype('float32')
image = np.reshape(image, [28, 28])
image = Image.fromarray(image.astype('uint8'), mode='L')
plt.imshow(image)
plt.savefig('cnn-number2.pdf')
The true category is 1 and the predicted category is 1
Finally, we implement MNIST recognition with a feedforward network (FNN) and compare its performance against a convolutional network.
import matplotlib.pyplot as plt
import torch
import time
import torch.nn.functional as F
from torch import nn, optim
from torchvision.datasets import MNIST
from torchvision.transforms import Compose, ToTensor, Normalize, Resize
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score

# Hyperparameters
BATCH_SIZE = 64  # batch size
EPOCHS = 5       # number of epochs
# Device
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'

# Data transforms
transformers = Compose(transforms=[ToTensor(), Normalize(mean=(0.1307,), std=(0.3081,))])

# Data loading
dataset_train = MNIST(root=r'./data', train=True, download=False, transform=transformers)
dataset_test = MNIST(root=r'./data', train=False, download=False, transform=transformers)
dataloader_train = DataLoader(dataset=dataset_train, batch_size=BATCH_SIZE, shuffle=True)
dataloader_test = DataLoader(dataset=dataset_test, batch_size=BATCH_SIZE, shuffle=True)

# FNN
class FNN(nn.Module):
    # Network structure
    def __init__(self):
        super(FNN, self).__init__()
        self.layer1 = nn.Linear(28 * 28, 28)  # hidden layer
        self.out = nn.Linear(28, 10)          # output layer

    # Forward pass
    def forward(self, x):
        # Initial shape [batch_size, 1, 28, 28]
        x = x.view(-1, 28 * 28)
        x = torch.relu(self.layer1(x))  # ReLU activation
        x = self.out(x)                 # output layer
        return x

# CNN
class CNN(nn.Module):
    # Network structure
    def __init__(self):
        super(CNN, self).__init__()
        # conv + pooling + conv
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=(3, 3), stride=(1, 1), padding=1)
        self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3), stride=(1, 1), padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        # dropout
        self.dropout = nn.Dropout(p=0.25)
        # fully connected layers
        self.fc1 = nn.Linear(64 * 7 * 7, 512)
        self.fc2 = nn.Linear(512, 64)
        self.fc3 = nn.Linear(64, 10)

    # Forward pass
    def forward(self, x):
        # Initial shape [batch_size, 1, 28, 28]
        x = self.pool(F.relu(self.conv1(x)))
        x = self.dropout(x)
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 64 * 7 * 7)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

loss_func = nn.CrossEntropyLoss()  # cross-entropy loss

# Record the losses and accuracies
loss_list, accuracy_list = [], []

# Compute the accuracy
def get_accuracy(model, datas, labels):
    out = torch.softmax(model(datas), dim=1, dtype=torch.float32)
    predictions = torch.max(input=out, dim=1)[1]  # index of the maximum
    y_predict = predictions.to('cpu').data.numpy()
    y_true = labels.to('cpu').data.numpy()
    # accuracy = float(np.sum(y_predict == y_true)) / float(y_true.size)
    accuracy = accuracy_score(y_true, y_predict)  # accuracy
    return accuracy

# Training
def train(model, optimizer, epoch):
    model.train()  # training mode
    for i, (datas, labels) in enumerate(dataloader_train):
        # Move to the device
        datas = datas.to(DEVICE)
        labels = labels.to(DEVICE)
        # Forward pass
        out = model(datas)
        # Compute the loss
        loss = loss_func(out, labels)
        # Reset the gradients
        optimizer.zero_grad()
        # Backpropagation
        loss.backward()
        # Parameter update
        optimizer.step()
        # Log the loss
        if i % 100 == 0:
            print('Train Epoch:%d Loss:%0.6f' % (epoch, loss.item()))
            loss_list.append(loss.item())

# Testing
def test(model, epoch):
    model.eval()
    with torch.no_grad():
        for i, (datas, labels) in enumerate(dataloader_test):
            # Move to the device
            datas = datas.to(DEVICE)
            labels = labels.to(DEVICE)
            # Log the accuracy
            if i % 20 == 0:
                accuracy = get_accuracy(model, datas, labels)
                print('Test Epoch:%d Accuracy:%0.6f' % (epoch, accuracy))
                accuracy_list.append(accuracy)

# Run
def run(model, optimizer, model_name):
    t1 = time.time()
    for epoch in range(EPOCHS):
        train(model, optimizer, epoch)
        test(model, epoch)
    t2 = time.time()
    print(f'Total time: {t2 - t1} s')
    # Loss curve
    plt.rcParams['figure.figsize'] = (16, 8)
    plt.subplots(1, 2)
    plt.subplot(1, 2, 1)
    plt.plot(range(len(loss_list)), loss_list)
    plt.title('Loss Curve')
    plt.subplot(1, 2, 2)
    plt.plot(range(len(accuracy_list)), accuracy_list)
    plt.title('Accuracy Curve')
    plt.show()

def initialize(model, model_name):
    print(f'Start {model_name}')
    # Allocated GPU memory
    print('GPU_Allocated:%d' % torch.cuda.memory_allocated())
    # Optimizer
    optimizer = optim.Adam(params=model.parameters(), lr=0.001)
    run(model, optimizer, model_name)

if __name__ == '__main__':
    models = [FNN().to(DEVICE),
              CNN().to(DEVICE)]
    model_names = ['FNN', 'CNN']
    for model, model_name in zip(models, model_names):
        initialize(model, model_name)
Start FNN
GPU_Allocated:366592
Train Epoch:0 Loss:2.271758
Train Epoch:0 Loss:0.565575
Train Epoch:0 Loss:0.410824
Train Epoch:0 Loss:0.318564
Train Epoch:0 Loss:0.193945
Train Epoch:0 Loss:0.369123
Train Epoch:0 Loss:0.566176
Train Epoch:0 Loss:0.723985
Train Epoch:0 Loss:0.289433
Train Epoch:0 Loss:0.235763
Test Epoch:0 Accuracy:0.875000
Test Epoch:0 Accuracy:0.906250
Test Epoch:0 Accuracy:0.890625
Test Epoch:0 Accuracy:0.875000
Test Epoch:0 Accuracy:0.828125
Test Epoch:0 Accuracy:0.937500
Test Epoch:0 Accuracy:0.937500
Test Epoch:0 Accuracy:0.953125
Train Epoch:1 Loss:0.382580
Train Epoch:1 Loss:0.200466
Train Epoch:1 Loss:0.290631
Train Epoch:1 Loss:0.208964
Train Epoch:1 Loss:0.118799
Train Epoch:1 Loss:0.134950
Train Epoch:1 Loss:0.085125
Train Epoch:1 Loss:0.282749
Train Epoch:1 Loss:0.192556
Train Epoch:1 Loss:0.237569
Test Epoch:1 Accuracy:0.953125
Test Epoch:1 Accuracy:0.875000
Test Epoch:1 Accuracy:0.921875
Test Epoch:1 Accuracy:0.937500
Test Epoch:1 Accuracy:0.937500
Test Epoch:1 Accuracy:0.937500
Test Epoch:1 Accuracy:0.906250
Test Epoch:1 Accuracy:0.937500
Train Epoch:2 Loss:0.131088
Train Epoch:2 Loss:0.318549
Train Epoch:2 Loss:0.240301
Train Epoch:2 Loss:0.110484
Train Epoch:2 Loss:0.231442
Train Epoch:2 Loss:0.311961
Train Epoch:2 Loss:0.204482
Train Epoch:2 Loss:0.144161
Train Epoch:2 Loss:0.285540
Train Epoch:2 Loss:0.185596
Test Epoch:2 Accuracy:0.953125
Test Epoch:2 Accuracy:0.906250
Test Epoch:2 Accuracy:0.937500
Test Epoch:2 Accuracy:0.953125
Test Epoch:2 Accuracy:0.921875
Test Epoch:2 Accuracy:0.984375
Test Epoch:2 Accuracy:0.953125
Test Epoch:2 Accuracy:0.937500
Train Epoch:3 Loss:0.420543
Train Epoch:3 Loss:0.135249
Train Epoch:3 Loss:0.158703
Train Epoch:3 Loss:0.110891
Train Epoch:3 Loss:0.123255
Train Epoch:3 Loss:0.390499
Train Epoch:3 Loss:0.083378
Train Epoch:3 Loss:0.113428
Train Epoch:3 Loss:0.201215
Train Epoch:3 Loss:0.160588
Test Epoch:3 Accuracy:0.921875
Test Epoch:3 Accuracy:0.937500
Test Epoch:3 Accuracy:0.984375
Test Epoch:3 Accuracy:0.968750
Test Epoch:3 Accuracy:0.937500
Test Epoch:3 Accuracy:1.000000
Test Epoch:3 Accuracy:0.968750
Test Epoch:3 Accuracy:0.953125
Train Epoch:4 Loss:0.085767
Train Epoch:4 Loss:0.260003
Train Epoch:4 Loss:0.306993
Train Epoch:4 Loss:0.058653
Train Epoch:4 Loss:0.085354
Train Epoch:4 Loss:0.131809
Train Epoch:4 Loss:0.238680
Train Epoch:4 Loss:0.097033
Train Epoch:4 Loss:0.275702
Train Epoch:4 Loss:0.149019
Test Epoch:4 Accuracy:0.984375
Test Epoch:4 Accuracy:0.921875
Test Epoch:4 Accuracy:0.937500
Test Epoch:4 Accuracy:1.000000
Test Epoch:4 Accuracy:0.984375
Test Epoch:4 Accuracy:0.937500
Test Epoch:4 Accuracy:0.953125
Test Epoch:4 Accuracy:0.968750
Total time: 57.83448648452759 s
Start CNN
GPU_Allocated:483840
Train Epoch:0 Loss:2.310207
Train Epoch:0 Loss:0.326197
Train Epoch:0 Loss:0.426013
Train Epoch:0 Loss:0.192754
Train Epoch:0 Loss:0.134358
Train Epoch:0 Loss:0.092638
Train Epoch:0 Loss:0.279138
Train Epoch:0 Loss:0.160197
Train Epoch:0 Loss:0.114862
Train Epoch:0 Loss:0.110343
Test Epoch:0 Accuracy:0.984375
Test Epoch:0 Accuracy:0.937500
Test Epoch:0 Accuracy:0.984375
Test Epoch:0 Accuracy:1.000000
Test Epoch:0 Accuracy:0.968750
Test Epoch:0 Accuracy:0.968750
Test Epoch:0 Accuracy:0.968750
Test Epoch:0 Accuracy:0.968750
Train Epoch:1 Loss:0.029998
Train Epoch:1 Loss:0.039023
Train Epoch:1 Loss:0.059408
Train Epoch:1 Loss:0.058501
Train Epoch:1 Loss:0.048157
Train Epoch:1 Loss:0.057743
Train Epoch:1 Loss:0.152309
Train Epoch:1 Loss:0.070936
Train Epoch:1 Loss:0.010771
Train Epoch:1 Loss:0.046841
Test Epoch:1 Accuracy:0.968750
Test Epoch:1 Accuracy:0.953125
Test Epoch:1 Accuracy:1.000000
Test Epoch:1 Accuracy:0.984375
Test Epoch:1 Accuracy:0.984375
Test Epoch:1 Accuracy:0.984375
Test Epoch:1 Accuracy:0.984375
Test Epoch:1 Accuracy:0.953125
Train Epoch:2 Loss:0.015884
Train Epoch:2 Loss:0.058874
Train Epoch:2 Loss:0.077458
Train Epoch:2 Loss:0.085485
Train Epoch:2 Loss:0.131899
Train Epoch:2 Loss:0.016863
Train Epoch:2 Loss:0.023790
Train Epoch:2 Loss:0.047721
Train Epoch:2 Loss:0.017924
Train Epoch:2 Loss:0.007593
Test Epoch:2 Accuracy:1.000000
Test Epoch:2 Accuracy:1.000000
Test Epoch:2 Accuracy:1.000000
Test Epoch:2 Accuracy:0.953125
Test Epoch:2 Accuracy:0.984375
Test Epoch:2 Accuracy:1.000000
Test Epoch:2 Accuracy:0.984375
Test Epoch:2 Accuracy:0.937500
Train Epoch:3 Loss:0.105159
Train Epoch:3 Loss:0.055687
Train Epoch:3 Loss:0.003954
Train Epoch:3 Loss:0.049454
Train Epoch:3 Loss:0.029813
Train Epoch:3 Loss:0.026080
Train Epoch:3 Loss:0.002836
Train Epoch:3 Loss:0.086001
Train Epoch:3 Loss:0.032013
Train Epoch:3 Loss:0.055202
Test Epoch:3 Accuracy:0.984375
Test Epoch:3 Accuracy:1.000000
Test Epoch:3 Accuracy:1.000000
Test Epoch:3 Accuracy:1.000000
Test Epoch:3 Accuracy:1.000000
Test Epoch:3 Accuracy:0.984375
Test Epoch:3 Accuracy:0.984375
Test Epoch:3 Accuracy:1.000000
Train Epoch:4 Loss:0.002030
Train Epoch:4 Loss:0.135096
Train Epoch:4 Loss:0.025416
Train Epoch:4 Loss:0.001851
Train Epoch:4 Loss:0.023493
Train Epoch:4 Loss:0.012784
Train Epoch:4 Loss:0.006215
Train Epoch:4 Loss:0.003105
Train Epoch:4 Loss:0.011806
Train Epoch:4 Loss:0.007130
Test Epoch:4 Accuracy:1.000000
Test Epoch:4 Accuracy:0.984375
Test Epoch:4 Accuracy:0.984375
Test Epoch:4 Accuracy:1.000000
Test Epoch:4 Accuracy:1.000000
Test Epoch:4 Accuracy:1.000000
Test Epoch:4 Accuracy:1.000000
Test Epoch:4 Accuracy:1.000000
Total elapsed time: 61.85371208190918 seconds
As the logs show, LeNet is clearly better than the FNN in both accuracy and loss, while in training time the FNN is slightly faster than LeNet.
Visualize some of LeNet's feature maps and convolution kernels, and discuss your observations.
Loading the trained network
import torch
import LeNet2  # module that defines the Net class used during training

path = '../models/' + 'lenet_3.pth'

def load_model(path):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    net = LeNet2.Net().to(device)
    # map_location lets a checkpoint saved on GPU load on a CPU-only machine
    net.load_state_dict(torch.load(path, map_location=device))
    return net

model = load_model(path)
print(model.features)
Sequential(
(0): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
(1): ReLU(inplace=True)
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(4): ReLU(inplace=True)
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
As the log above shows, LeNet's features Sequential consists of six operations; next we will look at the image produced by each layer's output.
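Before visualizing the outputs as images, the shape changes can be verified by pushing a dummy [1, 1, 32, 32] tensor through an equivalent features stack (a sketch; the layer parameters are copied from the printed Sequential above, not loaded from the checkpoint):

```python
import torch
import torch.nn as nn

# The same six operations as the printed features Sequential
features = nn.Sequential(
    nn.Conv2d(1, 6, 5),                     # C1: 32x32 -> 28x28
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),  # S2: 28x28 -> 14x14
    nn.Conv2d(6, 16, 5),                    # C3: 14x14 -> 10x10
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),  # S4: 10x10 -> 5x5
)

x = torch.randn(1, 1, 32, 32)
for i, layer in enumerate(features):
    x = layer(x)
    print(i, layer.__class__.__name__, list(x.shape))
```

The final shape is [1, 16, 5, 5], which is why the first fully connected layer expects 16 * 5 * 5 = 400 input features.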
Displaying the image before processing
# Define the data loader
import torch
import torchvision
import torchvision.transforms as transforms

resize = 32
transform = transforms.Compose([transforms.Resize(size=(resize, resize)),
                                transforms.ToTensor()
                                ])
test_data = torchvision.datasets.MNIST(root="../datas",
                                       train=False,
                                       transform=transform,
                                       download=False)
test_loader = torch.utils.data.DataLoader(dataset=test_data, batch_size=1, shuffle=False)
def imshow(img):
    # No Normalize was applied in the transform above, so the tensor is already in [0, 1]
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# Fetch the first test image
dataiter = iter(test_loader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch
# Display the image
imshow(torchvision.utils.make_grid(images))
The number of images displayed equals the batch_size set in the DataLoader.
PyTorch provides a method called register_forward_hook, which accepts a function that can extract the output of a specific layer.
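As a minimal sketch of that mechanism (standalone toy layer, not part of this notebook's code), a hook registered on a module is called during the normal forward pass and can stash the layer's output:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 6, 5)
captured = {}

def hook(module, inputs, output):
    # Called right after the module's forward; save a detached copy of the output
    captured['conv'] = output.detach()

handle = conv.register_forward_hook(hook)
y = conv(torch.randn(1, 1, 32, 32))
handle.remove()  # detach the hook once the activation has been captured

print(captured['conv'].shape)
```

This is an alternative to the layer-by-layer forward loop used below: the hook captures intermediate activations without having to call each sub-module manually.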
Displaying the images after each layer
Before actually displaying the processed images, let us first look at each layer of LeNet:
self.features = nn.Sequential(
    nn.Conv2d(1, 6, 5),                     # self.C1
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),  # self.S2
    nn.Conv2d(6, 16, 5),                    # self.C3
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),  # self.S4
)
self.classifier = nn.Sequential(
    nn.Linear(16 * 5 * 5, 120),             # self.fc1
    nn.ReLU(inplace=True),
    nn.Linear(120, 84),                     # self.fc2
    nn.ReLU(inplace=True),
    nn.Linear(84, 10),                      # self.fc3
)
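The 16 * 5 * 5 input size of fc1 follows from the output-size formula for a convolution or pooling layer without padding, out = (in - kernel) / stride + 1. A quick check of the arithmetic for a 32×32 input:

```python
def conv_out(size, kernel, stride=1):
    # Output side length for a valid (no-padding) convolution or pooling
    return (size - kernel) // stride + 1

s = 32
s = conv_out(s, 5)     # C1: 32 -> 28
s = conv_out(s, 2, 2)  # S2: 28 -> 14
s = conv_out(s, 5)     # C3: 14 -> 10
s = conv_out(s, 2, 2)  # S4: 10 -> 5
print(16 * s * s)      # 400 features into fc1
```

This also explains the remark in the reflections below: with 28×28 MNIST inputs the final feature map would be 4×4, so the flattened size would no longer match fc1.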
Images produced by each layer
Code implementation
from torchvision.utils import make_grid

def save_img(tensor, name):
    # Swap the batch and channel dimensions
    tensor = tensor.permute((1, 0, 2, 3))  # e.g. [1, 6, 28, 28] -> [6, 1, 28, 28]
    print('output permute:', tensor.shape)
    im = make_grid(tensor, normalize=True, scale_each=True, nrow=8, padding=2).permute((1, 2, 0))
    im = (im.cpu().data.numpy() * 255.).astype(np.uint8)  # rescale pixel values from [0, 1] to [0, 255]
    Image.fromarray(im).save(name + '.jpg')

def save_img_linear(tensor, name):
    # Swap the batch and feature dimensions
    tensor = tensor.permute((1, 0))
    print('output permute:', tensor.shape)
    im = make_grid(tensor, normalize=True, scale_each=True, nrow=8, padding=2).permute((1, 2, 0))
    im = (im.cpu().data.numpy() * 255.).astype(np.uint8)  # rescale pixel values from [0, 1] to [0, 255]
    Image.fromarray(im).save(name + '.jpg')
# During training the images were loaded onto the GPU, so the forward pass here
# must use the same device as during training
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Output of each features layer
for i in range(6):
    print('------features%d------' % i)
    new_model = model.features[i]
    print('input:', images.shape)
    print('layer:', new_model)
    layer_out = new_model(images.to(device))
    print('output', layer_out.shape)
    save_img(layer_out, 'features' + str(i))
    images = layer_out  # the next layer's input is the previous layer's output
# Flatten the final features output from [1, 16, 5, 5] to [1, 400]
images = images.view(-1, model.num_flat_features(images))
# Output of each classifier layer
for i in range(5):
    print('------classifier%d------' % i)
    new_model = model.classifier[i]
    print('input:', images.shape)
    print('layer:', new_model)
    layer_out = new_model(images.to(device))
    print('output', layer_out.shape)
    save_img_linear(layer_out, 'classifier' + str(i))
    images = layer_out  # the next layer's input is the previous layer's output
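The num_flat_features call above comes from the Net class in LeNet2, which is not shown here; a typical implementation (a sketch in the style of the official PyTorch tutorials, assumed rather than taken from this notebook) multiplies all dimensions except the batch dimension:

```python
import torch

def num_flat_features(x):
    # Product of all dimensions except the batch dimension,
    # i.e. the length of one flattened sample
    n = 1
    for d in x.size()[1:]:
        n *= d
    return n

feat = torch.randn(1, 16, 5, 5)
print(num_flat_features(feat))  # 16 * 5 * 5 = 400
```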
Visualizing the model and the features output by each layer is a great help when designing convolutional neural networks.
Reflections
At first, after modifying the code, the training accuracy was extremely low. After consulting a classmate's code, I found the problem lay with transforms.Normalize: besides changing the input size to 32, the Normalize parameters also had to be adjusted. This shows how much we still have to learn about parameter tuning: hyperparameters should not be adjusted one by one on luck, but with an understanding of the underlying reasons and a clear purpose.