🍨 This post is a learning-record entry for [🔗365天深度学习训练营]
🍖 Original author: [K同学啊 | tutoring and project customization available]
I. Basic Setup
- Language environment: Python 3.8
- IDE: PyCharm
- Deep learning environment:
  - torch==1.12.1+cu113
  - torchvision==0.13.1+cu113
II. Preliminaries
1. Setting up the GPU
import torch
import torch.nn as nn
from torchvision import transforms, datasets
import warnings
warnings.filterwarnings("ignore")  # suppress warning messages
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
Depending on your hardware, choose GPU or CPU for training. In PyCharm you need an explicit print statement to check whether the GPU is actually being used; if a GPU is available, the output is
cuda
The snippet also includes warnings.filterwarnings("ignore") to suppress warning messages that are not needed here.
2. Importing the data
The dataset used in this project is not part of a public collection, so you need to place it in your own file directory and point the code at that directory for the steps that follow.
Run the following code:
import pathlib
data_dir = 'data'
data_dir = pathlib.Path(data_dir)
data_paths = list(data_dir.glob('*'))
classeNames = [str(path).split("\\")[1] for path in data_paths]  # "\\" assumes Windows paths; path.name is the cross-platform pathlib alternative
print(classeNames)
The output:
['Dark', 'Green', 'Light', 'Medium']
Next, we preprocess the whole dataset with transforms.Compose:
- Step 1: resize every input image to a uniform size, [224, 224];
- Step 2: convert to a tensor and scale pixel values into [0, 1];
- Step 3: normalize toward a standard normal (Gaussian) distribution, which helps the model converge.
train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),      # resize input images to a uniform size
    transforms.RandomHorizontalFlip(),  # random horizontal flip (train-time augmentation)
    transforms.ToTensor(),              # convert PIL Image / numpy.ndarray to a tensor scaled to [0, 1]
    transforms.Normalize(               # normalize each channel; helps the model converge
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225])      # the standard ImageNet channel statistics
])

test_transform = transforms.Compose([
    transforms.Resize([224, 224]),      # resize input images to a uniform size
    transforms.ToTensor(),              # convert PIL Image / numpy.ndarray to a tensor scaled to [0, 1]
    transforms.Normalize(               # normalize each channel; helps the model converge
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225])      # the standard ImageNet channel statistics
])
total_data = datasets.ImageFolder("data", transform=train_transforms)  # note: the augmenting train_transforms is applied to the whole dataset, so the test split below inherits it
print(total_data)
The output:
Dataset ImageFolder
Number of datapoints: 1200
Root location: data
StandardTransform
Transform: Compose(
Resize(size=[224, 224], interpolation=bilinear, max_size=None, antialias=None)
RandomHorizontalFlip(p=0.5)
ToTensor()
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
)
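As a quick sanity check of what ToTensor plus Normalize produce, the minimal sketch below (not part of the original pipeline) applies Normalize to a dummy tensor and confirms the per-channel formula x' = (x - mean) / std:

# a dummy 3-channel "image" already scaled to [0, 1], as ToTensor would produce
dummy = torch.full((3, 224, 224), 0.5)

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
out = normalize(dummy)

# channel 0: (0.5 - 0.485) / 0.229 ≈ 0.0655
print(out[0, 0, 0].item())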
Next, to make training and inference convenient, the class_to_idx mapping converts the class labels into the numeric indices the model works with:
total_data.class_to_idx
The output:
{'Dark': 0, 'Green': 1, 'Light': 2, 'Medium': 3}
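If you later need the reverse direction, index back to class name, as the prediction step in Section IV does, the mapping can simply be inverted (a small illustrative snippet):

# invert class -> index into index -> class for readable predictions
idx_to_class = {idx: cls for cls, idx in total_data.class_to_idx.items()}
print(idx_to_class)  # {0: 'Dark', 1: 'Green', 2: 'Light', 3: 'Medium'}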
3. Splitting the dataset
Here the dataset is split into training and test sets by ratio:
train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
print(train_dataset)
print(test_dataset)
As the code shows, the original dataset is split 8:2, and printing the two resulting splits gives:
<torch.utils.data.dataset.Subset object at 0x000001ACDB4237F0>
<torch.utils.data.dataset.Subset object at 0x000001ACDB42E730>
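Note that random_split draws a new random permutation on every run. If a reproducible split is wanted, a seeded generator can be passed in (an optional tweak, not part of the original code):

# reproducible 8:2 split via a fixed seed
generator = torch.Generator().manual_seed(42)
train_dataset, test_dataset = torch.utils.data.random_split(
    total_data, [train_size, test_size], generator=generator)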
Next, wrap the training and test splits into DataLoaders:
batch_size = 32

train_dl = torch.utils.data.DataLoader(train_dataset,
                                       batch_size=batch_size,
                                       shuffle=True,
                                       num_workers=0)
test_dl = torch.utils.data.DataLoader(test_dataset,
                                      batch_size=batch_size,
                                      shuffle=True,
                                      num_workers=0)
Then the following loop:
for X, y in test_dl:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break
prints the shape of one batch of test data:
Shape of X [N, C, H, W]: torch.Size([32, 3, 224, 224])
Shape of y: torch.Size([32]) torch.int64
Tip: on Windows, DataLoader needs num_workers=0 when the script has no entry-point guard; a guarded multi-process alternative is sketched below.
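If multi-process loading on Windows is wanted, the usual pattern is to guard the entry point, since Windows spawns DataLoader workers by re-importing the script (a minimal sketch, assuming the training code lives in a main() function):

# guard the entry point so spawned worker processes can re-import safely
def main():
    train_dl = torch.utils.data.DataLoader(train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True,
                                           num_workers=2)
    # ... training code ...

if __name__ == '__main__':
    main()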
4. Building VGG-16 by Hand
1. Model definition
Structure of VGG-16:
- 13 convolutional layers, named blockX_convX here;
- 3 fully connected layers, grouped under classifier;
- 5 pooling layers.
The 13 convolutional layers plus the 3 fully connected layers give 16 weight layers, hence the name VGG-16.
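Before reading the code, it is worth tracking the spatial dimensions: each of the five max-pool layers halves the 224×224 input, so the tensor entering the classifier has 512 channels of size 7×7, which is exactly the in_features=512 * 7 * 7 below. A quick check of that arithmetic:

# five stride-2 max-pools halve the 224-pixel sides five times
size = 224
for _ in range(5):
    size //= 2
print(size)               # 7
print(512 * size * size)  # 25088, the classifier's in_features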
class vgg16(nn.Module):
    def __init__(self):
        super(vgg16, self).__init__()
        # convolutional block 1
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
        )
        # convolutional block 2
        self.block2 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
        )
        # convolutional block 3
        self.block3 = nn.Sequential(
            nn.Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
        )
        # convolutional block 4
        self.block4 = nn.Sequential(
            nn.Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
        )
        # convolutional block 5
        self.block5 = nn.Sequential(
            nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
        )
        # fully connected layers, used for classification
        self.classifier = nn.Sequential(
            nn.Linear(in_features=512 * 7 * 7, out_features=4096),
            nn.ReLU(),
            nn.Linear(in_features=4096, out_features=4096),
            nn.ReLU(),
            nn.Linear(in_features=4096, out_features=4)
        )

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        x = self.block4(x)
        x = self.block5(x)
        x = torch.flatten(x, start_dim=1)
        x = self.classifier(x)
        return x
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))
model = vgg16().to(device)
print(model)
The output:
Using cuda device
vgg16(
(block1): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
)
(block2): Sequential(
(0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
)
(block3): Sequential(
(0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): ReLU()
(6): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
)
(block4): Sequential(
(0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): ReLU()
(6): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
)
(block5): Sequential(
(0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): ReLU()
(6): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU()
(2): Linear(in_features=4096, out_features=4096, bias=True)
(3): ReLU()
(4): Linear(in_features=4096, out_features=4, bias=True)
)
)
2. Inspecting the model
# report the parameter count and other per-layer statistics
import torchsummary as summary
summary.summary(model, (3, 224, 224))
The output:
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 224, 224] 1,792
ReLU-2 [-1, 64, 224, 224] 0
Conv2d-3 [-1, 64, 224, 224] 36,928
ReLU-4 [-1, 64, 224, 224] 0
MaxPool2d-5 [-1, 64, 112, 112] 0
Conv2d-6 [-1, 128, 112, 112] 73,856
ReLU-7 [-1, 128, 112, 112] 0
Conv2d-8 [-1, 128, 112, 112] 147,584
ReLU-9 [-1, 128, 112, 112] 0
MaxPool2d-10 [-1, 128, 56, 56] 0
Conv2d-11 [-1, 256, 56, 56] 295,168
ReLU-12 [-1, 256, 56, 56] 0
Conv2d-13 [-1, 256, 56, 56] 590,080
ReLU-14 [-1, 256, 56, 56] 0
Conv2d-15 [-1, 256, 56, 56] 590,080
ReLU-16 [-1, 256, 56, 56] 0
MaxPool2d-17 [-1, 256, 28, 28] 0
Conv2d-18 [-1, 512, 28, 28] 1,180,160
ReLU-19 [-1, 512, 28, 28] 0
Conv2d-20 [-1, 512, 28, 28] 2,359,808
ReLU-21 [-1, 512, 28, 28] 0
Conv2d-22 [-1, 512, 28, 28] 2,359,808
ReLU-23 [-1, 512, 28, 28] 0
MaxPool2d-24 [-1, 512, 14, 14] 0
Conv2d-25 [-1, 512, 14, 14] 2,359,808
ReLU-26 [-1, 512, 14, 14] 0
Conv2d-27 [-1, 512, 14, 14] 2,359,808
ReLU-28 [-1, 512, 14, 14] 0
Conv2d-29 [-1, 512, 14, 14] 2,359,808
ReLU-30 [-1, 512, 14, 14] 0
MaxPool2d-31 [-1, 512, 7, 7] 0
Linear-32 [-1, 4096] 102,764,544
ReLU-33 [-1, 4096] 0
Linear-34 [-1, 4096] 16,781,312
ReLU-35 [-1, 4096] 0
Linear-36 [-1, 4] 16,388
================================================================
Total params: 134,276,932
Trainable params: 134,276,932
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 218.52
Params size (MB): 512.23
Estimated Total Size (MB): 731.32
----------------------------------------------------------------
The total parameter count is 134,276,932.
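As a cross-check that does not depend on torchsummary, the same number can be reproduced directly from the model's parameters:

# sum the element counts of all parameter tensors
print(sum(p.numel() for p in model.parameters()))  # 134276932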
III. Training the Model
1. The training function
# training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)   # size of the training set
    num_batches = len(dataloader)    # number of batches (ceil(size / batch_size))

    train_loss, train_acc = 0, 0     # accumulate loss and correct predictions

    for X, y in dataloader:          # fetch images and their labels
        X, y = X.to(device), y.to(device)

        # compute the prediction error
        pred = model(X)              # network output
        loss = loss_fn(pred, y)      # loss between predictions and ground-truth labels

        # backpropagation
        optimizer.zero_grad()        # reset gradients
        loss.backward()              # backpropagate
        optimizer.step()             # update the weights

        # accumulate accuracy and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc /= size
    train_loss /= num_batches

    return train_acc, train_loss
2. The test function
The test function is largely the same as the training function, but since no gradient descent is performed to update the network weights, no optimizer needs to be passed in.
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)   # size of the test set
    num_batches = len(dataloader)    # number of batches (ceil(size / batch_size))
    test_loss, test_acc = 0, 0

    # disable gradient tracking while not training, saving memory and compute
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)

            # compute the loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)

            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches

    return test_acc, test_loss
3. Running the training
import copy

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()  # loss function

epochs = 40

train_loss = []
train_acc = []
test_loss = []
test_acc = []

best_acc = 0  # track the best test accuracy, used to pick the best model

for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    # keep a copy of the best model in best_model
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # read the current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']

    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch + 1, epoch_train_acc * 100, epoch_train_loss,
                          epoch_test_acc * 100, epoch_test_loss, lr))

# save the best model's weights to a file
# (saving best_model rather than model, so the checkpoint matches the best epoch)
PATH = './best_model.pth'  # filename for the saved weights
torch.save(best_model.state_dict(), PATH)

print('Done')
The output:
Epoch: 1, Train_acc:25.9%, Train_loss:1.387, Test_acc:27.1%, Test_loss:1.339, Lr:1.00E-04
Epoch: 2, Train_acc:48.2%, Train_loss:1.009, Test_acc:62.5%, Test_loss:0.872, Lr:1.00E-04
Epoch: 3, Train_acc:65.6%, Train_loss:0.708, Test_acc:72.9%, Test_loss:0.673, Lr:1.00E-04
Epoch: 4, Train_acc:68.2%, Train_loss:0.668, Test_acc:73.3%, Test_loss:0.558, Lr:1.00E-04
Epoch: 5, Train_acc:75.3%, Train_loss:0.517, Test_acc:80.0%, Test_loss:0.436, Lr:1.00E-04
Epoch: 6, Train_acc:75.6%, Train_loss:0.489, Test_acc:84.6%, Test_loss:0.360, Lr:1.00E-04
Epoch: 7, Train_acc:79.0%, Train_loss:0.467, Test_acc:75.8%, Test_loss:0.432, Lr:1.00E-04
Epoch: 8, Train_acc:81.9%, Train_loss:0.409, Test_acc:83.8%, Test_loss:0.299, Lr:1.00E-04
Epoch: 9, Train_acc:88.6%, Train_loss:0.263, Test_acc:91.2%, Test_loss:0.203, Lr:1.00E-04
Epoch:10, Train_acc:95.1%, Train_loss:0.126, Test_acc:93.3%, Test_loss:0.214, Lr:1.00E-04
Epoch:11, Train_acc:94.7%, Train_loss:0.140, Test_acc:92.1%, Test_loss:0.174, Lr:1.00E-04
Epoch:12, Train_acc:96.9%, Train_loss:0.098, Test_acc:98.8%, Test_loss:0.027, Lr:1.00E-04
Epoch:13, Train_acc:98.4%, Train_loss:0.057, Test_acc:98.8%, Test_loss:0.026, Lr:1.00E-04
Epoch:14, Train_acc:96.9%, Train_loss:0.105, Test_acc:98.8%, Test_loss:0.056, Lr:1.00E-04
Epoch:15, Train_acc:97.8%, Train_loss:0.066, Test_acc:99.2%, Test_loss:0.050, Lr:1.00E-04
Epoch:16, Train_acc:94.4%, Train_loss:0.171, Test_acc:92.9%, Test_loss:0.163, Lr:1.00E-04
Epoch:17, Train_acc:95.8%, Train_loss:0.126, Test_acc:93.3%, Test_loss:0.166, Lr:1.00E-04
Epoch:18, Train_acc:95.9%, Train_loss:0.113, Test_acc:95.8%, Test_loss:0.092, Lr:1.00E-04
Epoch:19, Train_acc:97.1%, Train_loss:0.082, Test_acc:98.8%, Test_loss:0.040, Lr:1.00E-04
Epoch:20, Train_acc:99.2%, Train_loss:0.032, Test_acc:100.0%, Test_loss:0.009, Lr:1.00E-04
Epoch:21, Train_acc:98.8%, Train_loss:0.032, Test_acc:93.8%, Test_loss:0.145, Lr:1.00E-04
Epoch:22, Train_acc:98.9%, Train_loss:0.036, Test_acc:100.0%, Test_loss:0.006, Lr:1.00E-04
Epoch:23, Train_acc:99.8%, Train_loss:0.010, Test_acc:96.2%, Test_loss:0.108, Lr:1.00E-04
Epoch:24, Train_acc:97.9%, Train_loss:0.051, Test_acc:97.5%, Test_loss:0.060, Lr:1.00E-04
Epoch:25, Train_acc:96.4%, Train_loss:0.122, Test_acc:99.6%, Test_loss:0.030, Lr:1.00E-04
Epoch:26, Train_acc:98.9%, Train_loss:0.035, Test_acc:99.2%, Test_loss:0.019, Lr:1.00E-04
Epoch:27, Train_acc:98.8%, Train_loss:0.038, Test_acc:97.9%, Test_loss:0.049, Lr:1.00E-04
Epoch:28, Train_acc:99.1%, Train_loss:0.030, Test_acc:98.8%, Test_loss:0.038, Lr:1.00E-04
Epoch:29, Train_acc:99.3%, Train_loss:0.016, Test_acc:99.6%, Test_loss:0.014, Lr:1.00E-04
Epoch:30, Train_acc:99.9%, Train_loss:0.007, Test_acc:99.6%, Test_loss:0.005, Lr:1.00E-04
Epoch:31, Train_acc:99.7%, Train_loss:0.005, Test_acc:96.2%, Test_loss:0.109, Lr:1.00E-04
Epoch:32, Train_acc:97.6%, Train_loss:0.074, Test_acc:98.8%, Test_loss:0.043, Lr:1.00E-04
Epoch:33, Train_acc:98.1%, Train_loss:0.045, Test_acc:99.2%, Test_loss:0.032, Lr:1.00E-04
Epoch:34, Train_acc:99.1%, Train_loss:0.036, Test_acc:98.3%, Test_loss:0.047, Lr:1.00E-04
Epoch:35, Train_acc:99.0%, Train_loss:0.026, Test_acc:98.8%, Test_loss:0.080, Lr:1.00E-04
Epoch:36, Train_acc:98.8%, Train_loss:0.025, Test_acc:99.6%, Test_loss:0.074, Lr:1.00E-04
Epoch:37, Train_acc:99.3%, Train_loss:0.011, Test_acc:100.0%, Test_loss:0.001, Lr:1.00E-04
Epoch:38, Train_acc:99.2%, Train_loss:0.029, Test_acc:99.6%, Test_loss:0.012, Lr:1.00E-04
Epoch:39, Train_acc:98.4%, Train_loss:0.034, Test_acc:97.9%, Test_loss:0.086, Lr:1.00E-04
Epoch:40, Train_acc:99.4%, Train_loss:0.019, Test_acc:99.2%, Test_loss:0.014, Lr:1.00E-04
Done
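To reuse the saved checkpoint later, for example in a separate inference script, the weights can be loaded back into a freshly constructed network (a minimal sketch):

# rebuild the architecture and load the best weights saved above
model = vgg16().to(device)
model.load_state_dict(torch.load('./best_model.pth', map_location=device))
model.eval()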
IV. Visualizing the Results
1. Loss & Accuracy
import matplotlib.pyplot as plt
# hide warnings
import warnings

warnings.filterwarnings("ignore")             # suppress warning messages
plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # render minus signs correctly
plt.rcParams['figure.dpi'] = 100              # figure resolution

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')

plt.show()
The resulting plots show the training/test accuracy and loss curves over the 40 epochs (figure not reproduced here).
2. Predicting a specified image
First, define a prediction function:
from PIL import Image

classes = list(total_data.class_to_idx)

def predict_one_image(image_path, model, transform, classes):
    test_img = Image.open(image_path).convert('RGB')
    plt.imshow(test_img)  # show the image being predicted

    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)  # add the batch dimension

    model.eval()
    with torch.no_grad():  # no gradients needed at inference time
        output = model(img)

    _, pred = torch.max(output, 1)
    pred_class = classes[pred]
    print(f'Predicted class: {pred_class}')
Then call the function on a chosen image:
# predict one image from the training set
predict_one_image(image_path='data/Dark/dark (1).png',
                  model=model,
                  transform=train_transforms,
                  classes=classes)
The result:
Predicted class: Dark
3. Model evaluation
Switch the saved best model to evaluation mode and score it on the test loader:
best_model.eval()
epoch_test_acc, epoch_test_loss = test(test_dl, best_model, loss_fn)
print(epoch_test_acc)
print(epoch_test_loss)
The output:
1.0
0.010690234747016802
which matches the best test accuracy observed during training.
V. Personal Notes
The task in this project is to recognize the roast state of coffee beans (Dark, Green, Light, Medium) from the given images.
The project's basic requirements:
- build the VGG-16 network yourself;
- call the official VGG-16 implementation;
- inspect the model's parameter count and related statistics.
All of these are complete; the main update in this post is the hand-built VGG-16 for coffee-bean recognition.
In particular, for requirement ②, calling the stock VGG can follow the model-construction approach from Day-06; a minimal sketch of one such setup is given next.
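The sketch below is my reading of that setup, not necessarily the exact Day-06 code: it loads torchvision's pretrained VGG-16, freezes the backbone, and swaps in a 4-class head. It is consistent with the parameter summary further down, where only the final layer's 16,388 parameters are trainable:

from torchvision import models

# load VGG-16 with ImageNet weights (torchvision 0.13 weights API)
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)

# freeze the pretrained backbone
for param in model.parameters():
    param.requires_grad = False

# replace the final classifier layer with a 4-class head;
# the new layer is trainable by default (4096 * 4 + 4 = 16,388 parameters)
model.classifier[6] = nn.Linear(in_features=4096, out_features=4)

model = model.to(device)
print(model)

With this setup, the printed network is: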
Using cuda device
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace=True)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace=True)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace=True)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace=True)
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace=True)
(5): Dropout(p=0.5, inplace=False)
(6): Linear(in_features=4096, out_features=4, bias=True)
)
)
The parameter summary:
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 224, 224] 1,792
ReLU-2 [-1, 64, 224, 224] 0
Conv2d-3 [-1, 64, 224, 224] 36,928
ReLU-4 [-1, 64, 224, 224] 0
MaxPool2d-5 [-1, 64, 112, 112] 0
Conv2d-6 [-1, 128, 112, 112] 73,856
ReLU-7 [-1, 128, 112, 112] 0
Conv2d-8 [-1, 128, 112, 112] 147,584
ReLU-9 [-1, 128, 112, 112] 0
MaxPool2d-10 [-1, 128, 56, 56] 0
Conv2d-11 [-1, 256, 56, 56] 295,168
ReLU-12 [-1, 256, 56, 56] 0
Conv2d-13 [-1, 256, 56, 56] 590,080
ReLU-14 [-1, 256, 56, 56] 0
Conv2d-15 [-1, 256, 56, 56] 590,080
ReLU-16 [-1, 256, 56, 56] 0
MaxPool2d-17 [-1, 256, 28, 28] 0
Conv2d-18 [-1, 512, 28, 28] 1,180,160
ReLU-19 [-1, 512, 28, 28] 0
Conv2d-20 [-1, 512, 28, 28] 2,359,808
ReLU-21 [-1, 512, 28, 28] 0
Conv2d-22 [-1, 512, 28, 28] 2,359,808
ReLU-23 [-1, 512, 28, 28] 0
MaxPool2d-24 [-1, 512, 14, 14] 0
Conv2d-25 [-1, 512, 14, 14] 2,359,808
ReLU-26 [-1, 512, 14, 14] 0
Conv2d-27 [-1, 512, 14, 14] 2,359,808
ReLU-28 [-1, 512, 14, 14] 0
Conv2d-29 [-1, 512, 14, 14] 2,359,808
ReLU-30 [-1, 512, 14, 14] 0
MaxPool2d-31 [-1, 512, 7, 7] 0
AdaptiveAvgPool2d-32 [-1, 512, 7, 7] 0
Linear-33 [-1, 4096] 102,764,544
ReLU-34 [-1, 4096] 0
Dropout-35 [-1, 4096] 0
Linear-36 [-1, 4096] 16,781,312
ReLU-37 [-1, 4096] 0
Dropout-38 [-1, 4096] 0
Linear-39 [-1, 4] 16,388
================================================================
Total params: 134,276,932
Trainable params: 16,388
Non-trainable params: 134,260,544
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 218.77
Params size (MB): 512.23
Estimated Total Size (MB): 731.57
----------------------------------------------------------------
The training results:
Epoch: 1, Train_acc:38.1%, Train_loss:1.318, Test_acc:78.3%, Test_loss:1.045, Lr:1.00E-04
Epoch: 2, Train_acc:67.0%, Train_loss:0.946, Test_acc:88.3%, Test_loss:0.791, Lr:1.00E-04
Epoch: 3, Train_acc:78.1%, Train_loss:0.770, Test_acc:90.0%, Test_loss:0.656, Lr:1.00E-04
Epoch: 4, Train_acc:84.3%, Train_loss:0.631, Test_acc:90.4%, Test_loss:0.546, Lr:1.00E-04
Epoch: 5, Train_acc:86.7%, Train_loss:0.535, Test_acc:91.7%, Test_loss:0.495, Lr:1.00E-04
Epoch: 6, Train_acc:87.2%, Train_loss:0.478, Test_acc:91.7%, Test_loss:0.430, Lr:1.00E-04
Epoch: 7, Train_acc:89.2%, Train_loss:0.423, Test_acc:93.8%, Test_loss:0.380, Lr:1.00E-04
Epoch: 8, Train_acc:89.9%, Train_loss:0.406, Test_acc:91.7%, Test_loss:0.377, Lr:1.00E-04
Epoch: 9, Train_acc:90.2%, Train_loss:0.368, Test_acc:91.7%, Test_loss:0.341, Lr:1.00E-04
Epoch:10, Train_acc:91.6%, Train_loss:0.345, Test_acc:95.0%, Test_loss:0.318, Lr:1.00E-04
Epoch:11, Train_acc:90.9%, Train_loss:0.337, Test_acc:94.6%, Test_loss:0.299, Lr:1.00E-04
Epoch:12, Train_acc:92.0%, Train_loss:0.329, Test_acc:94.6%, Test_loss:0.294, Lr:1.00E-04
Epoch:13, Train_acc:91.4%, Train_loss:0.309, Test_acc:94.6%, Test_loss:0.277, Lr:1.00E-04
Epoch:14, Train_acc:92.8%, Train_loss:0.283, Test_acc:92.5%, Test_loss:0.271, Lr:1.00E-04
Epoch:15, Train_acc:92.7%, Train_loss:0.279, Test_acc:96.2%, Test_loss:0.244, Lr:1.00E-04
Epoch:16, Train_acc:94.5%, Train_loss:0.246, Test_acc:95.0%, Test_loss:0.235, Lr:1.00E-04
Epoch:17, Train_acc:92.8%, Train_loss:0.263, Test_acc:94.2%, Test_loss:0.227, Lr:1.00E-04
Epoch:18, Train_acc:95.0%, Train_loss:0.237, Test_acc:94.2%, Test_loss:0.245, Lr:1.00E-04
Epoch:19, Train_acc:93.2%, Train_loss:0.246, Test_acc:96.2%, Test_loss:0.225, Lr:1.00E-04
Epoch:20, Train_acc:94.5%, Train_loss:0.231, Test_acc:94.2%, Test_loss:0.232, Lr:1.00E-04
Epoch:21, Train_acc:94.2%, Train_loss:0.228, Test_acc:94.6%, Test_loss:0.217, Lr:1.00E-04
Epoch:22, Train_acc:94.2%, Train_loss:0.227, Test_acc:93.3%, Test_loss:0.219, Lr:1.00E-04
Epoch:23, Train_acc:94.1%, Train_loss:0.218, Test_acc:95.0%, Test_loss:0.212, Lr:1.00E-04
Epoch:24, Train_acc:93.9%, Train_loss:0.223, Test_acc:95.0%, Test_loss:0.191, Lr:1.00E-04
Epoch:25, Train_acc:93.5%, Train_loss:0.215, Test_acc:95.8%, Test_loss:0.199, Lr:1.00E-04
Epoch:26, Train_acc:96.0%, Train_loss:0.185, Test_acc:96.2%, Test_loss:0.197, Lr:1.00E-04
Epoch:27, Train_acc:94.3%, Train_loss:0.203, Test_acc:94.6%, Test_loss:0.196, Lr:1.00E-04
Epoch:28, Train_acc:94.9%, Train_loss:0.193, Test_acc:97.1%, Test_loss:0.175, Lr:1.00E-04
Epoch:29, Train_acc:95.9%, Train_loss:0.183, Test_acc:95.4%, Test_loss:0.186, Lr:1.00E-04
Epoch:30, Train_acc:95.2%, Train_loss:0.178, Test_acc:94.6%, Test_loss:0.179, Lr:1.00E-04
Epoch:31, Train_acc:94.2%, Train_loss:0.197, Test_acc:95.0%, Test_loss:0.206, Lr:1.00E-04
Epoch:32, Train_acc:95.3%, Train_loss:0.176, Test_acc:94.2%, Test_loss:0.183, Lr:1.00E-04
Epoch:33, Train_acc:94.0%, Train_loss:0.192, Test_acc:93.3%, Test_loss:0.193, Lr:1.00E-04
Epoch:34, Train_acc:94.9%, Train_loss:0.181, Test_acc:95.4%, Test_loss:0.172, Lr:1.00E-04
Epoch:35, Train_acc:94.9%, Train_loss:0.168, Test_acc:95.4%, Test_loss:0.165, Lr:1.00E-04
Epoch:36, Train_acc:95.3%, Train_loss:0.171, Test_acc:95.4%, Test_loss:0.168, Lr:1.00E-04
Epoch:37, Train_acc:95.9%, Train_loss:0.159, Test_acc:95.8%, Test_loss:0.160, Lr:1.00E-04
Epoch:38, Train_acc:95.9%, Train_loss:0.163, Test_acc:95.0%, Test_loss:0.164, Lr:1.00E-04
Epoch:39, Train_acc:95.0%, Train_loss:0.168, Test_acc:96.2%, Test_loss:0.154, Lr:1.00E-04
Epoch:40, Train_acc:95.0%, Train_loss:0.156, Test_acc:95.8%, Test_loss:0.158, Lr:1.00E-04
As the summaries show, the stock VGG and the hand-built VGG are essentially identical in architecture and total parameter count. My initial guess was that the accuracy drop comes from the added Dropout layers; note, however, that in this run only the final layer's 16,388 parameters were trainable (the pretrained backbone was frozen, per the summary above), which likely also limits how well the network can adapt to this dataset. The exact cause still needs closer investigation.
The project's stretch goals:
- reach 100% validation accuracy;
- draw the VGG-16 architecture diagram in PPT;
- slim the model down (currently Total params = 134,276,932);
These have not been completed yet; one possible direction for the last goal is sketched below.
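Since roughly 119.6M of the 134.3M parameters sit in the first two fully connected layers (102,764,544 + 16,781,312 in the summary above), one plausible slimming direction, offered purely as an illustrative sketch rather than a finished solution, is to replace the flatten-plus-4096 classifier with global average pooling and a single small linear head:

# hypothetical lightweight head: global average pooling removes the
# 512*7*7 flatten, shrinking the classifier from ~119.6M to ~2K parameters
lite_head = nn.Sequential(
    nn.AdaptiveAvgPool2d((1, 1)),  # 512 x 7 x 7 -> 512 x 1 x 1
    nn.Flatten(),
    nn.Linear(512, 4)
)
# the five convolutional blocks (about 14.7M parameters) would be kept as-is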