1. In practice, very few people train an entire convolutional network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images in 1000 categories) and then use that ConvNet either as an initialization or as a fixed feature extractor for the task of interest.
These two major transfer learning scenarios look as follows (a minimal sketch contrasting them follows this list):
- Finetuning the network: instead of random initialization, we initialize the network with a pretrained one, such as a network trained on the ImageNet 1000-class dataset. The rest of the training looks as usual.
- ConvNet as a fixed feature extractor: here, we freeze the weights of the whole network except the final fully connected layer. This last fully connected layer is replaced with a new one with random weights, and only this layer is trained.
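The essential difference between the two scenarios is which parameters the optimizer updates. A minimal sketch of the two setups (assuming the torchvision / torch.nn / torch.optim imports listed below, a resnet18 backbone, and 2 output classes, as in the full examples in sections 6 and 7):
# Scenario 1 - finetuning: every parameter of the network is optimized
model = torchvision.models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)      # replace the final fully connected layer
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# Scenario 2 - fixed feature extractor: only the new final layer is optimized
model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False                    # freeze the pretrained weights
model.fc = nn.Linear(model.fc.in_features, 2)      # the new layer has requires_grad=True by default
optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)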
from __future__ import print_function,division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.autograd import Variable
import numpy as np
import torchvision
from torchvision import datasets,models,transforms
import matplotlib.pyplot as plt
import time
import os
import copy
plt.ion()  # turn on interactive mode
2. Loading the data
We will use the torchvision and torch.utils.data packages to load the data.
The problem we are solving today is training a model to classify ants and bees. We have about 120 training images each for ants and bees, and 75 validation images for each class. Usually this is a very small dataset to train on from scratch, but since we are using transfer learning we should still be able to train a reasonably good model.
This dataset is a very small subset of ImageNet.
data_transforms={
'train':transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0.225])
]),
'val':transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0.225])
]),
}
data_dir='hymenoptera_data'
image_datasets={x:datasets.ImageFolder(os.path.join(data_dir,x),
data_transforms[x])
for x in ['train','val']}
dataloaders={x:torch.utils.data.DataLoader(image_datasets[x],batch_size=4,
shuffle=True,num_workers=4)
for x in ['train','val']}
dataset_sizes={x:len(image_datasets[x]) for x in ['train','val']}
class_names=image_datasets['train'].classes
use_gpu=torch.cuda.is_available()
class torchvision.transforms.CenterCrop(size)
Crops the given PIL.Image at the center to the given size. size can be a tuple (target_height, target_width) or an Integer, in which case the crop is a square.
class torchvision.transforms.RandomCrop(size, padding=0)
Crops the given PIL.Image at a randomly chosen location. size can be a tuple or an Integer.
class torchvision.transforms.RandomHorizontalFlip
Horizontally flips the given PIL.Image with probability 0.5, i.e. half of the time the image is flipped and half of the time it is left unchanged.
class torchvision.transforms.RandomSizedCrop(size, interpolation=2)
Takes a random crop of the given PIL.Image and then resizes it to the given size.
class torchvision.transforms.Pad(padding, fill=0)
Pads all borders of the given PIL.Image with the given value. padding: how many pixels to pad; fill: the value to fill with.
Transforms on Tensors
class torchvision.transforms.Normalize(mean, std)
Given the per-channel means (R, G, B) and standard deviations (R, G, B), normalizes the Tensor, i.e. normalized_image = (image - mean) / std.
Conversion Transforms
class torchvision.transforms.ToTensor
Converts a PIL.Image with values in the range [0, 255], or a numpy.ndarray of shape (H, W, C), into a torch.FloatTensor of shape (C, H, W) with values in the range [0, 1.0].
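A quick sketch of what ToTensor and Normalize actually do to one image (the random uint8 array below is synthetic, only there to show shapes and value ranges):
from PIL import Image

arr = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)  # (H, W, C) uint8 image in [0, 255]
pil_img = Image.fromarray(arr)
t = transforms.ToTensor()(pil_img)                   # FloatTensor of shape (C, H, W), values in [0.0, 1.0]
n = transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225])(t)   # per-channel (x - mean) / std
print(t.size(), t.min(), t.max())                    # torch.Size([3, 32, 32]), roughly 0.0 and 1.0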
ImageFolder
A generic data loader for datasets whose images are organized as follows:
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png
root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
dset.ImageFolder(root="root folder path", [transform, target_transform])
torchvision.datasets
torchvision.datasets contains the following datasets:
- MNIST
- COCO (Captioning and Detection)
- LSUN Classification
- ImageFolder
- Imagenet-12
- CIFAR10 and CIFAR100
- STL10
Datasets have the following API:
__getitem__
__len__
Since all of the above Datasets are subclasses of torch.utils.data.Dataset, they can be passed to a torch.utils.data.DataLoader, which loads samples in parallel with multiple workers (Python multiprocessing).
For example: torch.utils.data.DataLoader(coco_cap, batch_size=args.batchSize, shuffle=True, num_workers=args.nThreads)
In the constructor, each dataset has a slightly different API as needed, but they all take the following keyword arguments:
- transform: a function that takes the original image as input and returns a transformed version (see the torchvision.transforms section above).
- target_transform: a function that takes the target as input and returns a transformed version; for example, the input could be an image caption string and the output the corresponding word indices. A small illustration of both arguments follows.
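As a minimal, hypothetical illustration (the one-hot target_transform here is just an example, not something this tutorial itself needs), both keyword arguments can be passed to ImageFolder like this:
# Hypothetical: load the training split with an explicit transform and a
# target_transform that maps the integer class index to a one-hot vector.
example_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
example_dataset = datasets.ImageFolder(
    os.path.join(data_dir, 'train'),
    transform=example_transform,
    target_transform=lambda t: torch.eye(2)[t],  # class index -> one-hot tensor
)
img, target = example_dataset[0]  # img: 3x224x224 FloatTensor, target: one-hot vector of length 2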
3. Visualizing a few images
Let us display a few training images so as to understand the data augmentations.
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a bit so that plots are updated
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
4. Training the model
Now let us write a generic function to train a model. It will:
# adjust the learning rate with a scheduler
# save the best model; the scheduler argument is an LR scheduler object
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
scheduler.step()
model.train(True) # Set model to training mode
else:
model.train(False) # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for data in dataloaders[phase]:
# get the inputs
inputs, labels = data
# wrap them in Variable
if use_gpu:
inputs = Variable(inputs.cuda())
labels = Variable(labels.cuda())
else:
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.data[0] * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
5. Visualizing the model predictions
def visualize_model(model, num_images=6):
was_training = model.training
model.eval()
images_so_far = 0
fig = plt.figure()
for i, data in enumerate(dataloaders['val']):
inputs, labels = data
if use_gpu:
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
else:
inputs, labels = Variable(inputs), Variable(labels)
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
for j in range(inputs.size()[0]):
images_so_far += 1
ax = plt.subplot(num_images//2, 2, images_so_far)
ax.axis('off')
ax.set_title('predicted: {}'.format(class_names[preds[j]]))
imshow(inputs.cpu().data[j])
if images_so_far == num_images:
model.train(mode=was_training)
return
model.train(mode=was_training)
6. Finetuning the network
model_ft = models.resnet18()
model_ft.load_state_dict(torch.load("/home/yuyangyg/.torch/models/resnet18-5c106cde.pth"))  # load locally downloaded ImageNet weights
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)
if use_gpu:
model_ft = model_ft.cuda()
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
Here we load the model weights from a local file ourselves, because downloading them directly is very slow due to network issues.
Train and evaluate
This should take around 15-25 minutes on CPU, and less than a minute on GPU.
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=25)
visualize_model(model_ft)
7. ConvNet as a fixed feature extractor
Here we need to freeze all of the network except the final layer. We set requires_grad = False to freeze the parameters so that no gradients are computed for them in backward().
model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
param.requires_grad = False
# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)
if use_gpu:
model_conv = model_conv.cuda()
criterion = nn.CrossEntropyLoss()
# Observe that only parameters of final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
model_conv = train_model(model_conv, criterion, optimizer_conv,
exp_lr_scheduler, num_epochs=25)
visualize_model(model_conv)
plt.ioff()
plt.show()
Linear layers
class torch.nn.Linear(in_features, out_features, bias=True)
Applies a linear transformation to the incoming data: y = Ax + b
- in_features - size of each input sample
- out_features - size of each output sample
- bias - if set to False, the layer will not learn an additive bias. Default: True
Shape:
- Input: (N, in_features)
- Output: (N, out_features)
Variables:
- weight - the learnable weights of the module, of shape (out_features x in_features)
- bias - the learnable bias of the module, of shape (out_features)
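A quick sketch of these shapes (the batch size here is arbitrary; 512 matches resnet18's fc.in_features used above):
fc = nn.Linear(512, 2)                      # in_features=512, out_features=2, as in model_ft.fc above
x = Variable(torch.randn(4, 512))           # a batch of N=4 samples, each with 512 features
y = fc(x)                                   # y has shape (N, out_features) = (4, 2)
print(fc.weight.size(), fc.bias.size(), y.size())  # (2 x 512), (2,), (4, 2)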
https://ptorch.com/news/138.html