The previous article covered Chapter 6, "PyTorch Basics", of 《深度学习之pytorch实战计算机视觉》 (Deep Learning with PyTorch: Computer Vision in Practice). In this chapter we build a convolutional neural network to classify ordinary everyday photos and introduce transfer learning. To demonstrate how convenient and effective transfer learning is, we first solve the image classification problem with a CNN of our own design, then solve the same problem with models obtained through transfer learning, and compare which approach performs better. (This chapter is fairly long.)
7.1 Getting Started with Transfer Learning
Before we begin, let's look at what transfer learning is. When applying deep neural networks to a problem with a large amount of data, we inevitably spend a great deal of compute and time training the model and optimizing its parameters once the network has been built, yet the model obtained at such cost can solve only that one problem, which is poor value for the resources spent. If a model trained with those resources could solve a whole class of similar problems, the return would be far higher. With transfer learning, a trained model can be applied to a similar problem after only minor adjustments and still achieve good results; it is also well suited to problems where little original data is available.
Note that if we apply transfer learning to two completely unrelated problems, negative transfer can occur, which can be understood as the model's generalization ability getting worse.
Suppose we need to solve a computer-vision image classification problem: given a large dataset of cat and dog pictures, build a model that classifies cats and dogs.
7.2 Dataset Processing
The dataset used in this chapter comes from the "Dogs vs. Cats" competition on Kaggle and can be downloaded online (a free and reasonably fast Microsoft mirror: https://www.microsoft.com/en-us/download/details.aspx?id=54765). The training set contains 25,000 images of cats and dogs: 12,500 cats and 12,500 dogs. The test set contains 12,500 images in which cats and dogs are randomly mixed, with no labels. These data will be used to train the model, optimize its parameters, and finally verify the model's generalization ability.
7.2.1 Validation and Test Datasets
In practice we do not train or tune a model directly on the test set; instead we carve a portion out of the training set to serve as a validation set for assessing the trained model's generalization ability. The reason is that if we used the test set for training and optimization, the model would eventually tend to fit the test set and lose generalization ability. To prevent this, we keep the test set entirely out of the training and optimization process and evaluate on the validation set at the end of each training epoch. If the model achieves both high accuracy and low loss on the validation and test sets, the optimization can be considered successful and the model should generalize well.
For this dataset, we take 2,500 cat images and 2,500 dog images out of the training set to form a validation set of 5,000 images.
Because the test data used in this chapter is unlabeled, and our main goal is to show that transfer learning is more efficient than conventional training, we will not use the provided test set for now and will only compare the models' accuracy on the validation set side by side.
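For reference, here is a minimal sketch (my own, not from the book) of how such a split could be scripted. The folder names follow the layout described in the next section; n_per_class would be 2500 for the full dataset.

import os
import random
import shutil

def split_validation(train_dir, valid_dir, classes=("cat", "dog"),
                     n_per_class=2500, seed=42):
    """Move n_per_class random images per class from train_dir to valid_dir."""
    random.seed(seed)
    for cls in classes:
        src, dst = os.path.join(train_dir, cls), os.path.join(valid_dir, cls)
        os.makedirs(dst, exist_ok=True)
        for name in random.sample(sorted(os.listdir(src)), n_per_class):
            shutil.move(os.path.join(src, name), os.path.join(dst, name))

# split_validation("DogsVSCats/train", "DogsVSCats/valid")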
7.2.2 Data Preview
After splitting the dataset, we first preview the data to get a feel for its basic properties, which helps us decide how best to use it.
The opening code is as follows:
import torch
import torchvision
from torchvision import datasets,transforms
import os # utilities for file path and directory operations
import matplotlib.pyplot as plt
import time
from torch.autograd import Variable # needed by the training loop below (deprecated in modern PyTorch)
Create a folder named DogsVSCats containing train and valid subfolders, each of which in turn contains a cat and a dog folder, then place the corresponding images into the folders with matching names (since this is just for testing, I used only 125 cat images and 125 dog images here). The data is then loaded as follows:
data_dir = "DogsVSCats"
data_transform = {x:transforms.Compose([transforms.Scale([64,64]),
transforms.ToTensor()])
for x in ["train","valid"]}
image_datasets = {x:datasets.ImageFolder(root = os.path.join(data_dir,x),
transform = data_transform[x])
for x in ["train","valid"]}
dataloader = {x:torch.utils.data.DataLoader(dataset = image_datasets[x],
batch_size = 16,
shuffle = True)
for x in ["train","valid"]}
The torchvision.transforms Scale class above rescales every original image to 64x64 (in newer versions of torchvision, Scale is deprecated in favor of Resize). Both the transforms and the dataset loading use dictionaries, which keeps the code compact and makes later calls and operations convenient.
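In practice the training transform is often extended with simple data augmentation. This is my own sketch (not used in the book's experiments), written with transforms.Resize, the modern replacement for the deprecated Scale:

data_transform = {
    "train": transforms.Compose([
        transforms.Resize([64, 64]),
        transforms.RandomHorizontalFlip(),  # randomly mirror half of the training images
        transforms.ToTensor(),
    ]),
    "valid": transforms.Compose([
        transforms.Resize([64, 64]),
        transforms.ToTensor(),
    ]),
}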
os.path.join is a method of the os module mentioned earlier; it joins its input names into a complete file path. Other commonly used os.path methods are listed below, with a short illustration after the list:
- (1) os.path.dirname: returns the directory portion of a path; the input is a file path.
- (2) os.path.exists: tests whether the file specified by the input path exists.
- (3) os.path.isdir: tests whether the input is a directory.
- (4) os.path.isfile: tests whether the input is a file.
- (5) os.path.samefile: tests whether two input paths point to the same file.
- (6) os.path.split: splits the input path and returns a tuple of (directory, file name).
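A quick illustration of these helpers (the paths here are made up for the example):

import os.path
path = os.path.join("DogsVSCats", "train", "cat")  # 'DogsVSCats/train/cat' on Linux/macOS
print(os.path.dirname(path))  # 'DogsVSCats/train'
print(os.path.split(path))    # ('DogsVSCats/train', 'cat')
print(os.path.exists(path))   # True only if the directory actually exists
print(os.path.isdir(path))    # likewise, True for an existing directory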
Next we fetch one batch of data and preview and analyze it:
X_example,y_example = next(iter(dataloader["train"])) # one batch contains 16 images
# confirm that X_example and y_example both have length 16
print("Number of X_example: {}".format(len(X_example)))
print("Number of y_example: {}".format(len(y_example)))
The output is:
Number of X_example: 16
Number of y_example: 16
Here X_example is a Tensor. Because the images were rescaled, they are all 64x64, so X_example has shape (16, 3, 64, 64): 16 is the number of images in the batch; 3 is the number of color channels (the original images are RGB); and 64 is the image width and height.
y_example is also a Tensor, and its elements are all 0s and 1s. Why 0 and 1? Because when the data was loaded, ImageFolder mapped the contents of the cat and dog folders to integer class labels (the book describes this as one-hot encoding, though strictly these are class indices rather than one-hot vectors), so 0 and 1 are each image's label and correspond to cat and dog images respectively. We print the mapping to verify this correspondence:
index_classes = image_datasets["train"].class_to_idx
print(index_classes)#{'cat': 0, 'dog': 1}
To make the plotted image labels easier to read later, we also store the original class names in the variable example_classes:
example_classes = image_datasets["train"].classes
print(example_classes) # a list with only two elements: ['cat', 'dog']
We use Matplotlib to plot one batch of images:
img = torchvision.utils.make_grid(X_example)
img = img.numpy().transpose([1,2,0]) # (C, H, W) -> (H, W, C) for Matplotlib
print([example_classes[i] for i in y_example])
plt.imshow(img)
plt.show()
The output is:
['dog', 'cat', 'cat', 'cat', 'dog', 'dog', 'cat', 'dog', 'cat', 'cat', 'dog', 'cat', 'cat', 'cat', 'cat', 'cat']
The figure produced above displays the batch of images.
7.3 Model Building and Parameter Optimization
In this section we first build and train a CNN based on a simplified VGGNet architecture, then transfer a full VGG16 model, and finally transfer a ResNet50 model, comparing the three models' prediction accuracy and generalization ability.
7.3.1 A Custom VGGNet
We first need to build the CNN. To keep training time manageable, we build a simplified VGGNet based on the VGG16 architecture (a classic network). The simplified model takes images rescaled to 64x64, whereas the standard VGG16 architecture expects 224x224 inputs; it also drops VGG16's last three convolutional layers and the accompanying pooling layer, and changes the sizes of the fully connected layers (the specific channel counts can be read from the code below).
All of these changes reduce the number of parameters the model has to train. The simplified model is built as follows:
class Models(torch.nn.Module):
def __init__(self):
super(Models,self).__init__()
self.Conv = torch.nn.Sequential(
torch.nn.Conv2d(3,64,kernel_size = 3,stride = 1,padding = 1),
torch.nn.ReLU(),
torch.nn.Conv2d(64,64,kernel_size = 3,stride = 1,padding = 1),
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size = 2,stride = 2),
torch.nn.Conv2d(64,128,kernel_size = 3,stride = 1,padding = 1),
torch.nn.ReLU(),
torch.nn.Conv2d(128,128,kernel_size = 3,stride = 1,padding = 1),
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size = 2,stride = 2),
torch.nn.Conv2d(128,256,kernel_size = 3,stride = 1,padding = 1),
torch.nn.ReLU(),
torch.nn.Conv2d(256,256,kernel_size = 3,stride = 1,padding = 1),
torch.nn.ReLU(),
torch.nn.Conv2d(256,256,kernel_size = 3,stride = 1,padding = 1),
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size = 2,stride = 2),
torch.nn.Conv2d(256,512,kernel_size = 3,stride = 1,padding = 1),
torch.nn.ReLU(),
torch.nn.Conv2d(512,512,kernel_size = 3,stride = 1,padding = 1),
torch.nn.ReLU(),
torch.nn.Conv2d(512,512,kernel_size = 3,stride = 1,padding = 1),
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size = 2,stride = 2),
)
self.Classes = torch.nn.Sequential(
torch.nn.Linear(4*4*512,1024),
torch.nn.ReLU(),
torch.nn.Dropout(p = 0.5),
torch.nn.Linear(1024,1024),
torch.nn.ReLU(),
torch.nn.Dropout(p = 0.5),
torch.nn.Linear(1024,2)
)
def forward(self,x):
x = self.Conv(x)
x = x.view(-1,4*4*512) # -1 lets the batch dimension be inferred; the feature dimension is fixed at 4*4*512
x = self.Classes(x)
return x
# After building the model, print its details:
model = Models()
print(model)
The output is:
Models(
(Conv): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU()
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU()
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU()
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU()
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU()
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU()
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU()
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU()
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(Classes): Sequential(
(0): Linear(in_features=8192, out_features=1024, bias=True)
(1): ReLU()
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=1024, out_features=1024, bias=True)
(4): ReLU()
(5): Dropout(p=0.5, inplace=False)
(6): Linear(in_features=1024, out_features=2, bias=True)
)
)
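As a quick sanity check (my own addition, not from the book), we can confirm the 4*4*512 flatten size: four 2x2 max-pool layers reduce a 64x64 input to 64/2^4 = 4.

dummy = torch.randn(1, 3, 64, 64)  # a fake batch with one image
print(model.Conv(dummy).shape)     # torch.Size([1, 512, 4, 4])
print(model(dummy).shape)          # torch.Size([1, 2])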
Next, we define the model's loss function and optimizer, then optimize the model:
loss_f = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(),lr = 0.00001)
epoch_n = 10
time_open = time.time()
for epoch in range(epoch_n):
print("Epoch {}/{}".format(epoch,epoch_n - 1))
print("-"*20)
for phase in ["train","valid"]:
if phase == "train":
print("Training...")
model.train(True)
else:
print("Validing...")
model.train(False)
running_loss = 0.0
running_corrects = 0
for batch,data in enumerate(dataloader[phase],1):
X,y = data
X,y = Variable(X),Variable(y)
y_pred = model(X)
_,pred = torch.max(y_pred.data,1)
optimizer.zero_grad()
loss = loss_f(y_pred,y)
if phase == "train":
loss.backward()
optimizer.step()
running_loss += loss.item()
running_corrects += torch.sum(pred == y.data)
# if batch%500 == 0 and phase == "train":
# print("Batch {},Train Loss:{:.4f},Train ACC:{:.4f}".format(batch,
# running_loss/batch,
# 100*running_corrects.numpy()/(16*batch)))
if batch%5 == 0 and phase == "train":
# print("batch:",batch)
print("Train Loss:{:.4f},Train ACC:{:.4f}".format(running_loss/batch,
100*running_corrects.numpy()/(16*batch)))
epoch_loss = running_loss*16/len(image_datasets[phase])
epoch_acc = 100*running_corrects.numpy()/len(image_datasets[phase])
print("{} Loss:{:.4f} Acc:{:.4f}%".format(phase,epoch_loss,epoch_acc))
time_end = time.time() - time_open
print(time_end)
The optimizer above is Adam and the loss is cross-entropy; training runs for 10 epochs. The final output is:
Epoch 0/9
--------------------
Training...
batch: 5
Train Loss:0.6926,Train ACC:56.2500
batch: 10
Train Loss:0.6928,Train ACC:51.2500
train Loss:0.7205 Acc:52.5000%
Validing...
valid Loss:0.8872 Acc:50.0000%
Epoch 1/9
--------------------
Training...
batch: 5
Train Loss:0.6915,Train ACC:60.0000
batch: 10
Train Loss:0.6938,Train ACC:47.5000
train Loss:0.7217 Acc:45.0000%
Validing...
valid Loss:0.8886 Acc:50.0000%
Epoch 2/9
--------------------
Training...
batch: 5
Train Loss:0.6920,Train ACC:53.7500
batch: 10
Train Loss:0.6925,Train ACC:51.8750
train Loss:0.7200 Acc:52.0000%
Validing...
valid Loss:0.8872 Acc:50.0000%
Epoch 3/9
--------------------
Training...
batch: 5
Train Loss:0.6927,Train ACC:57.5000
batch: 10
Train Loss:0.6931,Train ACC:51.2500
train Loss:0.7208 Acc:51.5000%
Validing...
valid Loss:0.8872 Acc:50.0000%
Epoch 4/9
--------------------
Training...
batch: 5
Train Loss:0.6917,Train ACC:56.2500
batch: 10
Train Loss:0.6920,Train ACC:54.3750
train Loss:0.7202 Acc:51.5000%
Validing...
valid Loss:0.8872 Acc:50.0000%
Epoch 5/9
--------------------
Training...
batch: 5
Train Loss:0.6935,Train ACC:43.7500
batch: 10
Train Loss:0.6933,Train ACC:43.7500
train Loss:0.7210 Acc:43.0000%
Validing...
valid Loss:0.8872 Acc:50.0000%
Epoch 6/9
--------------------
Training...
batch: 5
Train Loss:0.6939,Train ACC:42.5000
batch: 10
Train Loss:0.6928,Train ACC:50.6250
train Loss:0.7212 Acc:49.5000%
Validing...
valid Loss:0.8872 Acc:50.0000%
Epoch 7/9
--------------------
Training...
batch: 5
Train Loss:0.6950,Train ACC:38.7500
batch: 10
Train Loss:0.6937,Train ACC:48.1250
train Loss:0.7212 Acc:49.5000%
Validing...
valid Loss:0.8872 Acc:50.0000%
Epoch 8/9
--------------------
Training...
batch: 5
Train Loss:0.6930,Train ACC:43.7500
batch: 10
Train Loss:0.6931,Train ACC:47.5000
train Loss:0.7213 Acc:46.5000%
Validing...
valid Loss:0.8872 Acc:50.0000%
Epoch 9/9
--------------------
Training...
batch: 5
Train Loss:0.6927,Train ACC:53.7500
batch: 10
Train Loss:0.6929,Train ACC:53.1250
train Loss:0.7207 Acc:51.5000%
Validing...
valid Loss:0.8872 Acc:50.0000%
186.1991412639618
The run took 186.1991412639618 s.
With more training data the accuracy would be higher, but training would also take longer. Below is the book's output when training on all samples (only the last epoch is shown):
Epoch 9/9
Training ...
Batch 500 , Train Loss:0.5086 , Train ACC:75.1250
Batch 1000 , Train Loss: 0.5079 , Train ACC:75.1875
train Loss:0.5051 Acc:75.3450%
Validing ...
valid Loss: 0.4841 Acc:76.660%
29520.38271522522
Although the accuracy is decent, the whole run is very slow because it used the CPU throughout, taking about 492 minutes (492 = 29520/60). Next we modify the code to move all parameters involved in the computation onto the GPU. This is simple and convenient: we only need to convert the types of these tensors. First, of course, we confirm that a GPU is available:
print(torch.cuda.is_available())
Use_gpu = torch.cuda.is_available() # True means a usable GPU is present; False means it is not
A return value of True means a GPU is available; False means the graphics card is not supported for now. If the issue is the driver, the simplest fix is to upgrade the driver to the latest version. With the training parameters moved to the GPU, the new training code is:
if Use_gpu:
model = model.cuda() #use GPU
epoch_n = 10
time_open = time.time()
for epoch in range(epoch_n):
print("Epoch {}/{}".format(epoch,epoch_n - 1))
print("-"*20)
for phase in ["train","valid"]:
if phase == "train":
print("Training...")
model.train(True)
else:
print("Validing...")
model.train(False)
running_loss = 0.0
running_corrects = 0
for batch,data in enumerate(dataloader[phase],1):
X,y = data
if Use_gpu:
X,y = Variable(X.cuda()),Variable(y.cuda()) #use GPU
else:
X,y = Variable(X),Variable(y)
y_pred = model(X)
_,pred = torch.max(y_pred.data,1)
optimizer.zero_grad()
loss = loss_f(y_pred,y)
if phase == "train":
loss.backward()
optimizer.step()
running_loss += loss.item()
running_corrects += torch.sum(pred == y.data)
if batch%5 == 0 and phase == "train":
# print("batch:",batch)
print("Batch {},Train Loss:{:.4f},Train ACC:{:.4f}".format(batch,running_loss/batch,
100*running_corrects.numpy()/(16*batch)))
epoch_loss = running_loss*16/len(image_datasets[phase])
epoch_acc = 100*running_corrects.numpy()/len(image_datasets[phase])
print("{} Loss:{:.4f} Acc:{:.4f}%".format(phase,epoch_loss,epoch_acc))
time_end = time.time() - time_open
print(time_end)
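As an aside, in current PyTorch versions Variable is no longer needed (tensors track gradients directly), and the device-agnostic idiom below is the usual way to write the same thing. A sketch, not the book's code:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
# and inside the batch loop:
# X, y = X.to(device), y.to(device)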
I did not run this on a GPU myself; below is the book's output when training on all samples (only the last epoch is shown):
Epoch 9/9
Training ...
Batch 500 , Train Loss:0.1850 , Train ACC:92.9250
Batch 1000 , Train Loss: 0.1873 , Train ACC:92.6875
train Loss:0.1903 Acc:92.4450%
Validing ...
valid Loss: 0.2874 Acc:88.0400%
855.5901200771332
The results show that not only did the validation accuracy improve by more than 10 percentage points, but the training time also dropped to about 14 minutes (14 ≈ 855/60), a huge reduction: the GPU is clearly far more efficient than the CPU for this computation. Our CNN now achieves fairly high accuracy; next we bring in transfer learning to see how much further the accuracy can be pushed and whether the computation time can be cut even more. With transfer learning we only need to adjust and retrain a small part of the original model, so we expect the final results to improve noticeably.
7.3.2 Transferring VGG16
Let's walk through the transfer learning procedure.
Step 1: Download the model automatically and load it
First we download a model that already has well-optimized parameters. This replaces the earlier model = Models() line, because we no longer build and define the model ourselves; instead the code downloads it and we use it directly:
from torchvision import models
# Setting pretrained=True downloads the VGG16 model together with its pretrained parameters
# model_vgg16 = models.vgg16(prepare=True) # error: the keyword is pretrained, not prepare
model_vgg16 = models.vgg16(pretrained=True)
# inspect the details of the transferred model
print(model_vgg16)
The output is:
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace=True)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace=True)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace=True)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace=True)
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace=True)
(5): Dropout(p=0.5, inplace=False)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
)
(Further reading) Examples of using vgg16: https://www.programcreek.com/python/example/108009/torchvision.models.vgg16
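Note that in torchvision 0.13 and later the pretrained flag is itself deprecated in favor of an explicit weights argument; a sketch of the equivalent call:

from torchvision.models import vgg16, VGG16_Weights
model_vgg16 = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)  # or VGG16_Weights.DEFAULT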
Step 2: Adjust the transferred model
Although transfer learning requires the problems involved to be strongly similar, each problem demands a different final output, and in a CNN the fully connected layers are what produce the classification output, so they are what we adjust most. The basic idea is to freeze every layer that comes before the fully connected part; frozen parameters are not updated during training, and the only optimizable parameters are those of the unfrozen fully connected layers.
Now the concrete code. The transferred VGG16 model produces 1,000 outputs, but we need only two, so the fully connected layers must be adjusted:
for param in model_vgg16.parameters():
param.requires_grad = False # note the spelling: requires_grad (require_grad would silently do nothing)
model_vgg16.classifier = torch.nn.Sequential(torch.nn.Linear(25088,4096),
torch.nn.ReLU(),
torch.nn.Dropout(p=0.5),
torch.nn.Linear(4096,4096),
torch.nn.ReLU(),
torch.nn.Dropout(p=0.5),
torch.nn.Linear(4096,2))
Use_gpu = torch.cuda.is_available()
if Use_gpu:
model_vgg16 = model_vgg16.cuda()
cost = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model_vgg16.classifier.parameters(),lr = 0.00001)
First we iterate over the original model's parameters and set each param.requires_grad to False; those parameters then compute no gradients and receive no updates, which is the freeze operation. Then we define a new fully connected block and assign it to model_vgg16.classifier. The parameters of newly defined layers default to requires_grad=True, so there is no need to iterate over the parameters again to unfreeze them. The loss is still cross-entropy, but the optimizer is given only the fully connected parameters, i.e. model_vgg16.classifier.parameters(). After adjusting the structure, we print the model to compare it with the unadjusted version; the result is:
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace=True)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace=True)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace=True)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU()
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU()
(5): Dropout(p=0.5, inplace=False)
(6): Linear(in_features=4096, out_features=2, bias=True)
)
)
The output shows that the biggest difference is the final fully connected block. A quick check that the freeze took effect is sketched below; after that we train and optimize the new model for 5 epochs to see how it performs.
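This is my own small sketch (not from the book) confirming that only the new classifier contributes trainable parameters:

trainable = sum(p.numel() for p in model_vgg16.parameters() if p.requires_grad)
total = sum(p.numel() for p in model_vgg16.parameters())
print("trainable: {} / total: {}".format(trainable, total))

The training and optimization code is: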
epoch_n = 5
time_open = time.time()
for epoch in range(epoch_n):
print("Epoch {}/{}".format(epoch,epoch_n - 1))
print("-"*10)
for phase in ["train","valid"]:
if phase == "train":
print("Training...")
model_vgg16.train(True)
else:
print("Validing...")
model_vgg16.train(False)
running_loss = 0.0
running_corrects = 0
for batch,data in enumerate(dataloader[phase],1):
X,y = data
if Use_gpu:
X,y = Variable(X.cuda()),Variable(y.cuda()) #use GPU
else:
X,y = Variable(X),Variable(y)
y_pred = model_vgg16(X)
_,pred = torch.max(y_pred.data,1)
optimizer.zero_grad()
loss = cost(y_pred,y)
if phase == "train":
loss.backward()
optimizer.step()
running_loss += loss.item()
running_corrects += torch.sum(pred == y.data)
if batch%5 == 0 and phase == "train":
# print("batch:",batch)
print("Batch {},Train Loss:{:.4f},Train ACC:{:.4f}".format(batch,running_loss/batch,100*running_corrects.cpu().numpy()/(16*batch)))
epoch_loss = running_loss*16/len(image_datasets[phase])
epoch_acc = 100*running_corrects.cpu().numpy()/len(image_datasets[phase])
print("{} Loss:{:.4f} Acc:{:.4f}%".format(phase,epoch_loss,epoch_acc))
time_end = time.time() - time_open
print(time_end)
The output is:
Epoch 0/4
----------
Training...
Batch 5,Train Loss:0.6283,Train ACC:71.2500
Batch 10,Train Loss:0.5813,Train ACC:74.3750
train Loss:0.5882 Acc:75.0000%
Validing...
valid Loss:0.6409 Acc:88.0000%
Epoch 1/4
----------
Training...
Batch 5,Train Loss:0.4509,Train ACC:82.5000
Batch 10,Train Loss:0.4341,Train ACC:86.8750
train Loss:0.4463 Acc:87.0000%
Validing...
valid Loss:0.4536 Acc:88.0000%
Epoch 2/4
----------
Training...
Batch 5,Train Loss:0.3687,Train ACC:92.5000
Batch 10,Train Loss:0.3339,Train ACC:92.5000
train Loss:0.3546 Acc:91.0000%
Validing...
valid Loss:0.3814 Acc:90.0000%
Epoch 3/4
----------
Training...
Batch 5,Train Loss:0.3014,Train ACC:92.5000
Batch 10,Train Loss:0.2899,Train ACC:91.8750
train Loss:0.2824 Acc:92.5000%
Validing...
valid Loss:0.3722 Acc:88.0000%
Epoch 4/4
----------
Training...
Batch 5,Train Loss:0.1977,Train ACC:96.2500
Batch 10,Train Loss:0.2127,Train ACC:95.0000
train Loss:0.2257 Acc:94.0000%
Validing...
valid Loss:0.4214 Acc:90.0000%
4.827706813812256
With transfer learning, the final accuracy improves enormously after only 5 training epochs. Recall the last epoch of the earlier 10-epoch run:
Epoch 9/9
--------------------
Training...
batch: 5
Train Loss:0.6927,Train ACC:53.7500
batch: 10
Train Loss:0.6929,Train ACC:53.1250
train Loss:0.7207 Acc:51.5000%
Validing...
valid Loss:0.8872 Acc:50.0000%
The validation accuracy rose from 50% to 90%, which makes transfer learning a very effective way to improve a model's generalization ability.
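Once training looks good, the fine-tuned weights can be saved and reloaded; the file name below is my own choice:

torch.save(model_vgg16.state_dict(), "vgg16_dogs_vs_cats.pth")
# later, after rebuilding the same architecture:
# model_vgg16.load_state_dict(torch.load("vgg16_dogs_vs_cats.pth"))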
Below is the complete implementation of transfer learning with the VGG16 architecture:
#------------------------------- Imports
import torch
import torchvision
from torchvision import datasets,transforms
import os
# import matplotlib.pyplot as plt
import time
from torch.autograd import Variable
from torchvision import models
#------------------------------- 1. Prepare the dataset
data_dir = "DogsVSCats"
data_transform = {x:transforms.Compose([transforms.Scale([64,64]),
transforms.ToTensor()])
for x in ["train","valid"]}
image_datasets = {x:datasets.ImageFolder(root = os.path.join(data_dir,x),
transform = data_transform[x])
for x in ["train","valid"]}
dataloader = {x:torch.utils.data.DataLoader(dataset = image_datasets[x],
batch_size = 16,
shuffle = True)
for x in ["train","valid"]}
index_classes = image_datasets["train"].class_to_idx#{'cat': 0, 'dog': 1}
example_classes = image_datasets["train"].classes#['cat', 'dog']
#------------------------------- 2. Build the model (transfer learning)
model_vgg16 = models.vgg16(pretrained=True)
for param in model_vgg16.parameters():
param.requires_grad = False
model_vgg16.classifier = torch.nn.Sequential(torch.nn.Linear(25088,4096),
torch.nn.ReLU(),
torch.nn.Dropout(p=0.5),
torch.nn.Linear(4096,4096),
torch.nn.ReLU(),
torch.nn.Dropout(p=0.5),
torch.nn.Linear(4096,2))
Use_gpu = torch.cuda.is_available()
if Use_gpu:
model_vgg16 = model_vgg16.cuda() #use GPU
#------------------------------- 3. Define the loss function and optimizer
loss_f = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model_vgg16.classifier.parameters(),lr = 0.00001)
#------------------------------- 4. Train
epoch_n = 5
time_open = time.time()
for epoch in range(epoch_n):
print("Epoch {}/{}".format(epoch,epoch_n - 1))
print("-"*10)
for phase in ["train","valid"]:
if phase == "train":
print("Training...")
model_vgg16.train(True)
else:
print("Validing...")
model_vgg16.train(False)
running_loss = 0.0
running_corrects = 0
for batch,data in enumerate(dataloader[phase],1):
X,y = data
if Use_gpu:
X,y = Variable(X.cuda()),Variable(y.cuda()) #use GPU
else:
X,y = Variable(X),Variable(y)
y_pred = model_vgg16(X)
_,pred = torch.max(y_pred.data,1)
optimizer.zero_grad()
loss = loss_f(y_pred,y)
if phase == "train":
loss.backward()
optimizer.step()
running_loss += loss.item()
running_corrects += torch.sum(pred == y.data)
if batch%5 == 0 and phase == "train":
# print("batch:",batch)
print("Batch {},Train Loss:{:.4f},Train ACC:{:.4f}".format(batch,running_loss/batch,100*running_corrects.cpu().numpy()/(16*batch)))
epoch_loss = running_loss*16/len(image_datasets[phase])
epoch_acc = 100*running_corrects.cpu().numpy()/len(image_datasets[phase])
print("{} Loss:{:.4f} Acc:{:.4f}%".format(phase,epoch_loss,epoch_acc))
time_end = time.time() - time_open
print(time_end)
7.3.3 Transferring ResNet50
Now let's look at transfer learning with the powerful ResNet architecture. In the example below we transfer the ResNet50 model from the ResNet family; the transfer is done with model = models.resnet50(pretrained=True).
model_resnet50 = models.resnet50(pretrained=True)
print(model_resnet50)
As with transferring VGG16, replacing vgg16 with resnet50 in the code completes the transfer of the corresponding model. Printing the transferred model shows:
ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): Bottleneck(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer2): Sequential(
(0): Bottleneck(
(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer3): Sequential(
(0): Bottleneck(
(conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(4): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(5): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer4): Sequential(
(0): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=2048, out_features=1000, bias=True)
)
Just as when transferring VGG16, we need to adjust ResNet50's fully connected part:
for param in model_resnet50.parameters():
param.requires_grad = False
Because ResNet50 has only one fully connected layer, the change is very simple: we directly replace the final layer
(fc): Linear(in_features=2048, out_features=1000, bias=True)
with the classification output we need:
model_resnet50.fc = torch.nn.Linear(2048,2)
print(model_resnet50)
After the adjustment we print the model structure again; the result is:
ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): Bottleneck(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer2): Sequential(
(0): Bottleneck(
(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer3): Sequential(
(0): Bottleneck(
(conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(4): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(5): Bottleneck(
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer4): Sequential(
(0): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=2048, out_features=2, bias=True)
)
Compared with the model before the adjustment, again only the final fully connected part differs: the fc layer's output size was changed from 1000 to 2:
(fc): Linear(in_features=2048, out_features=2, bias=True)
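Since everything before fc is frozen, the network can also be viewed as a fixed 2048-dimensional feature extractor. A sketch of this idea (my own, not from the book), replacing the head with nn.Identity on a copy of the model:

import copy
feature_extractor = copy.deepcopy(model_resnet50)
feature_extractor.fc = torch.nn.Identity()  # drop the 2-class head
feature_extractor.eval()
with torch.no_grad():
    feats = feature_extractor(torch.randn(4, 3, 224, 224))
print(feats.shape)  # torch.Size([4, 2048])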
Next we train the transferred model for 5 epochs; the final results are:
Epoch 0/4
----------
Training...
Batch 5,Train Loss:0.5877,Train ACC:81.2500
Batch 10,Train Loss:0.5825,Train ACC:82.5000
train Loss:0.6162 Acc:81.5000%
Validing...
valid Loss:0.7227 Acc:88.0000%
Epoch 1/4
----------
Training...
Batch 5,Train Loss:0.6077,Train ACC:75.0000
Batch 10,Train Loss:0.5806,Train ACC:83.1250
train Loss:0.6092 Acc:83.0000%
Validing...
valid Loss:0.7142 Acc:88.0000%
Epoch 2/4
----------
Training...
Batch 5,Train Loss:0.5990,Train ACC:85.0000
Batch 10,Train Loss:0.5784,Train ACC:85.0000
train Loss:0.5937 Acc:85.0000%
Validing...
valid Loss:0.6561 Acc:90.0000%
Epoch 3/4
----------
Training...
Batch 5,Train Loss:0.5757,Train ACC:83.7500
Batch 10,Train Loss:0.5691,Train ACC:85.6250
train Loss:0.5948 Acc:85.5000%
Validing...
valid Loss:0.7272 Acc:86.0000%
Epoch 4/4
----------
Training...
Batch 5,Train Loss:0.5778,Train ACC:82.5000
Batch 10,Train Loss:0.5584,Train ACC:85.0000
train Loss:0.5830 Acc:84.0000%
Validing...
valid Loss:0.7168 Acc:94.0000%
5.697037696838379
As we can see, the validation accuracy is again very good: from the original 50% it rose to 90% with VGG16 and further to 94% here. I trained on only part of the dataset; with the full dataset the results should be even better.
Below is the complete implementation of transfer learning with ResNet50:
#------------------------------- Imports
import torch
import torchvision
from torchvision import datasets,transforms
import os
import time
from torch.autograd import Variable
from torchvision import models
#------------------------------- 1. Prepare the dataset
data_dir = "DogsVSCats"
data_transform = {x:transforms.Compose([transforms.Scale([224,224]), # this listing uses the standard 224x224 input size (Scale is deprecated; use Resize)
transforms.ToTensor(),
transforms.Normalize(mean = [0.5,0.5,0.5],std = [0.5,0.5,0.5])])
for x in ["train","valid"]}
image_datasets = {x:datasets.ImageFolder(root = os.path.join(data_dir,x),
transform = data_transform[x])
for x in ["train","valid"]}
dataloader = {x:torch.utils.data.DataLoader(dataset = image_datasets[x],
batch_size = 16,
shuffle = True)
for x in ["train","valid"]}
index_classes = image_datasets["train"].class_to_idx#{'cat': 0, 'dog': 1}
example_classes = image_datasets["train"].classes#['cat', 'dog']
#------------------------------- 2. Build the model (transfer learning)
model_resnet50 = models.resnet50(pretrained=True)
for param in model_resnet50.parameters():
param.requires_grad = False
model_resnet50.fc = torch.nn.Linear(2048,2)
Use_gpu = torch.cuda.is_available()
if Use_gpu:
model_resnet50 = model_resnet50.cuda()
#------------------------------- 3. Define the loss function and optimizer
loss_f = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model_resnet50.fc.parameters(),lr = 0.00001)
#------------------------------- 4. Train
epoch_n = 5
time_open = time.time()
for epoch in range(epoch_n):
print("Epoch {}/{}".format(epoch,epoch_n - 1))
print("-"*10)
for phase in ["train","valid"]:
if phase == "train":
print("Training...")
model_resnet50.train(True)
else:
print("Validing...")
model_resnet50.train(False)
running_loss = 0.0
running_corrects = 0
for batch,data in enumerate(dataloader[phase],1):
X,y = data
if Use_gpu:
X,y = Variable(X.cuda()),Variable(y.cuda()) #use GPU
else:
X,y = Variable(X),Variable(y)
y_pred = model_resnet50(X)
_,pred = torch.max(y_pred.data,1)
optimizer.zero_grad()
loss = loss_f(y_pred,y)
if phase == "train":
loss.backward()
optimizer.step()
running_loss += loss.item()
running_corrects += torch.sum(pred == y.data)
if batch%5 == 0 and phase == "train":
# print("batch:",batch)
print("Batch {},Train Loss:{:.4f},Train ACC:{:.4f}".format(batch,running_loss/batch,100*running_corrects.cpu().numpy()/(16*batch)))
epoch_loss = running_loss*16/len(image_datasets[phase])
epoch_acc = 100*running_corrects.cpu().numpy()/len(image_datasets[phase])
print("{} Loss:{:.4f} Acc:{:.4f}%".format(phase,epoch_loss,epoch_acc))
time_end = time.time() - time_open
print(time_end)
7.4 Summary
This chapter showed that: ① GPUs are clearly more efficient than CPUs for deep learning optimization. ② Transfer learning is very powerful and can quickly solve problems of the same kind: for a similar problem we no longer need to optimize all of a model's parameters from scratch, which saves a great deal of time. If the training results are unsatisfactory, we can also unfreeze and train more layers and optimize more parameters, rather than blindly training from scratch; a sketch of this idea follows.
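A sketch of training more layers (my own, not from the book): unfreeze ResNet50's last residual block and give it a smaller learning rate than the new head.

for param in model_resnet50.layer4.parameters():
    param.requires_grad = True  # train the last residual block as well
optimizer = torch.optim.Adam([
    {"params": model_resnet50.layer4.parameters(), "lr": 1e-5},
    {"params": model_resnet50.fc.parameters(), "lr": 1e-4},
])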
Note: these are my study notes; corrections are welcome if you spot any mistakes! Writing posts takes effort, so please contact me before reposting.