This blog post summarizes the last section of the following tutorial:
https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
The general pattern is to first define a model class, usually named something like Net or Model,
and then instantiate it with net = Net().
Some extra material is included as well.
The model class
Code example:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # why 5: 32 -> 28 -> 14 -> 10 -> 5
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
Steps:
- Define the model as a class.
- It needs at least two methods: __init__ and forward.
- __init__: call super() first.
- __init__: then define each layer (convolution, pooling, fully connected), usually with the ready-made modules in torch.nn, passing in the structural parameters.
- forward: apply the convolution and fully connected layers defined above, plus functions from torch.nn.functional such as max_pool2d and relu.
- forward: the input is x, every step reassigns x, and x is what gets returned.
What is the difference between nn and nn.functional in PyTorch?
- Functionally and in terms of efficiency they are essentially the same; only the calling convention differs (see the sketch after this list).
- nn does not require you to define and manage the weights yourself. The official recommendation:
- Layers with learnable parameters (e.g. conv2d, linear, batch_norm) should use the nn.Xxx form.
- Operations without learnable parameters (e.g. maxpool, loss functions, activation functions) can use either nn.functional.xxx or nn.Xxx, as a matter of personal preference.
- For dropout, however, the Zhihu user cited strongly recommends the nn.Xxx form, since nn.Dropout is then switched off automatically by model.eval().
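A minimal sketch contrasting the two calling conventions (the tensor shape here is arbitrary); for conv, the functional form requires passing the weights explicitly:

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)
# nn.Xxx: instantiate a module; it creates and manages its own weights
conv = nn.Conv2d(3, 6, 5)
y1 = conv(x)
# nn.functional.xxx: a plain function; you supply the weights yourself
y2 = F.conv2d(x, conv.weight, conv.bias)
print(torch.allclose(y1, y2))  # True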
Commonly used forward-pass building blocks:
https://pytorch.org/docs/stable/nn.html
https://pytorch.org/docs/stable/nn.functional.html
https://pytorch.org/docs/stable/generated/torch.flatten.html
torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
# In practice, usually only the first three arguments are needed.
torch.nn.Linear(in_features, out_features, bias=True, device=None, dtype=None)
torch.nn.functional.max_pool2d(input, kernel_size)
torch.nn.functional.relu(input, inplace=False) → Tensor
torch.flatten(input, start_dim=0, end_dim=-1) → Tensor
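To see where the 16 * 5 * 5 in fc1 comes from (the "32 -> 28 -> 14 -> 10 -> 5" comment above): each 5x5 convolution without padding shrinks the spatial size by 4, and each 2x2 max pool halves it. A quick check with a dummy input:

import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)  # CIFAR-10 sized input
x = nn.MaxPool2d(2, 2)(nn.Conv2d(3, 6, 5)(x))
print(x.shape)  # torch.Size([1, 6, 14, 14])
x = nn.MaxPool2d(2, 2)(nn.Conv2d(6, 16, 5)(x))
print(x.shape)  # torch.Size([1, 16, 5, 5])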
The add_module method
Besides the style above, you can also use add_module to attach a submodule to a module.
The submodule can be individual layers, or another nn.Module.
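A minimal sketch of the add_module style (Net2 is a hypothetical name; the result is equivalent to assigning the layer as an attribute):

import torch.nn as nn

class Net2(nn.Module):
    def __init__(self):
        super().__init__()
        # equivalent to: self.conv1 = nn.Conv2d(3, 6, 5)
        self.add_module('conv1', nn.Conv2d(3, 6, 5))

    def forward(self, x):
        return self.conv1(x)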
Training
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()   # calculate grads to determine how to update the parameters
        optimizer.step()  # update the parameters
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
Steps:
- Define an optimizer.
- Define a loss function.
- Outer for loop: one iteration per epoch.
- Inner for loop: one iteration per minibatch.
- Once the inputs and labels are ready, the actual training step is just:
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()   # calculate grads to determine how to update the parameters
optimizer.step()  # update the parameters
- To print the loss, see the full code below, which prints the average loss over every 2000 minibatches.
Saving and loading a model
One function for each:
torch.save(net.state_dict(), PATH)  # save the model
net = Net()
net.load_state_dict(torch.load(PATH))  # load the model
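As a side note, torch.save can also serialize the whole module rather than just its state_dict; this is less portable because it pickles the class itself, so the state_dict form above is the usual choice:

torch.save(net, PATH)   # saves architecture + weights in one file
net = torch.load(PATH)  # no need to construct Net() first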
Testing individual samples
outputs = net(images)  # net(images) returns raw class scores (logits), not softmax values
_, predicted = torch.max(outputs, 1)  # take the argmax to get the predicted labels
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))
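Since the network returns raw logits (CrossEntropyLoss applies the softmax internally), actual probabilities, if you need them, require an explicit softmax; the argmax is the same either way. A quick sketch:

import torch.nn.functional as F

probs = F.softmax(outputs, dim=1)      # per-class probabilities
conf, predicted = torch.max(probs, 1)  # same predicted labels as on the logits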
Testing the whole test set
Steps:
- Wrap everything in with torch.no_grad(): since no gradients are needed.
- Iterate over the test set, run net() on every batch, and count the total number of samples and the number of correct predictions.
correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
    for data in testloader:
        images, labels = data
        # calculate outputs by running images through the network
        outputs = net(images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
Using the GPU
# define the device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
net.to(device)  # move the network to the device
inputs, labels = data[0].to(device), data[1].to(device)  # move the data to the device
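Putting the pieces together, a device-aware version of the inner training step might look like this (a sketch reusing the names defined earlier):

net.to(device)
for i, data in enumerate(trainloader, 0):
    inputs, labels = data[0].to(device), data[1].to(device)
    optimizer.zero_grad()
    loss = criterion(net(inputs), labels)
    loss.backward()
    optimizer.step()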
Using multiple GPUs
After the single-GPU setup above:
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    model = nn.DataParallel(model)
model.to(device)
Full code
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
"""
@Author: YuQiao
@Date: 2020/12/22 20:26
@File: TrainingAClassifier.py
"""
'''
For this tutorial, we will use the CIFAR10 dataset.
It has the classes:
‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’.
The images in CIFAR-10 are of size 3x32x32,
i.e. 3-channel color images of 32x32 pixels in size.
'''
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
##
print("trainset")
print(trainset)
##
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
print("trainloader")
print(trainloader)
##
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
##
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
##
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # why 5: 32 -> 28 -> 14 -> 10 -> 5
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)  # (n_samples, features per sample)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
##
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
##
# Qiao's test cell
for i, data in enumerate(trainloader, 0):  # the second argument is the initial value of i
    # get the inputs; data is a list of [inputs, labels]
    inputs, labels = data
    print("data", data)
    print("inputs", inputs.shape)
    print("labels", labels.shape)
    break  # inspect only the first batch
##
print(labels)
# inputs torch.Size([4, 3, 32, 32])
# labels torch.Size([4])
# print(len(testloader))
# inputs, labels = testloader[0]
# print(inputs.size)
# print(labels.size)
##
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()   # calculate grads to determine how to update the parameters
        optimizer.step()  # update the parameters
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
##
# Let's quickly save our trained model:
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
##
dataiter = iter(testloader)
images, labels = next(dataiter)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
##
net = Net()
net.load_state_dict(torch.load(PATH))
##
outputs = net(images)
##
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))
##
# Let us look at how the network performs on the whole dataset.
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
##
# what are the classes that performed well, and the classes that did not perform well:
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
##
Other classes
dataset
dataset is one of the parameters of DataLoader.
The Dataset class is responsible for the index -> sample mapping.
Map-style datasets
A custom dataset typically needs to override the following methods (a sketch follows this list):
__init__(self): usually initializes local file paths or a list of file names.
__getitem__(self, index): returns your data; the returned data can contain any number of items.
__len__(self): returns an integer.
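A minimal sketch of a map-style dataset (MyDataset, samples, and labels are hypothetical names):

from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, samples, labels):
        # e.g. file paths, pre-loaded tensors, or a list of file names
        self.samples = samples
        self.labels = labels

    def __getitem__(self, index):
        # the index -> sample mapping; may return any number of items
        return self.samples[index], self.labels[index]

    def __len__(self):
        return len(self.samples)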
Iterable-style datasets
Mainly used when the dataset size is unknown or the data arrives as a stream, i.e. when it is not backed by fixed local files. Rarely needed, so it is skipped here.
dataloader
The DataLoader is responsible for producing batches from a given dataset.
DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
           batch_sampler=None, num_workers=0, collate_fn=None,
           pin_memory=False, drop_last=False, timeout=0,
           worker_init_fn=None)
- dataset
- batch_size
- shuffle: whether to shuffle, i.e. whether the order of the training samples differs between epochs.
- num_workers (python:int, optional): how many worker subprocesses fetch data in parallel.
- sampler: inherits from torch.utils.data.sampler.Sampler. Samplers that ship with torch include (a usage sketch follows this list):
- SequentialSampler
- RandomSampler
- SubsetRandomSampler
- BatchSampler
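A short sketch of passing a sampler (indices is a hypothetical subset of the training set; note that shuffle must stay False when a sampler is given):

from torch.utils.data import DataLoader, SubsetRandomSampler

indices = list(range(1000))  # hypothetical: use only the first 1000 samples
loader = DataLoader(trainset, batch_size=4,
                    sampler=SubsetRandomSampler(indices))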
optimizer
loss