Getting Started with PyTorch - Day 1 (1)
To do a good job, one must first sharpen one's tools.
The notes below are organized into a few areas:
Tools
Resources
Websites
PyTorch official website: it has the most complete documentation and blog tutorials; most of the resources here come from it.
Sub-sites that branch off the main site:
- tutorials: start everything from the official tutorials, mainly the Quickstart
- pytorch-cn Chinese documentation: for reference only; the quality is mediocre
- ApacheCN's Chinese translation of PyTorch on GitHub, currently being proofread against version 1.7. Note: at the time of writing, the official stable release is already 1.8.1, and the nightly build is 1.9.
- 1.7 documentation address. Note: this is the learning resource listed on the official resources page.
- ApacheCN official website
Run the code in Google Colab.
Already installed in the local environment:
- PyTorch 1.8 (GPU build), cudatoolkit 10.2.89
- torchvision 0.9
- torchaudio 0.8.0
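To double-check that this environment actually works, a minimal sketch (the printed values depend on the actual install):

import torch
import torchvision
import torchaudio

print(torch.__version__)          # expect something like 1.8.x
print(torchvision.__version__)    # expect 0.9.x
print(torchaudio.__version__)     # expect 0.8.0
print(torch.version.cuda)         # CUDA version the build was compiled against
print(torch.cuda.is_available())  # True if a usable GPU is visible to PyTorch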
How to learn
Quickstart
Working with data
PyTorch has two primitives for working with data:
torch.utils.data.DataLoader and torch.utils.data.Dataset.
Dataset stores the samples and their corresponding labels, and DataLoader wraps an iterable around the Dataset.
Note: a DataLoader is an iterable that wraps a Dataset; the Dataset holds the samples with their corresponding labels. A minimal sketch of this contract follows.
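The names here (ToyDataset and its random data) are made up for illustration: a Dataset only needs __len__ and __getitem__, and DataLoader takes care of batching and iteration on top of it.

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    # A made-up in-memory dataset: 100 random "images" with integer labels.
    def __init__(self):
        self.samples = torch.randn(100, 1, 28, 28)
        self.labels = torch.randint(0, 10, (100,))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx], self.labels[idx]

loader = DataLoader(ToyDataset(), batch_size=32)
for X, y in loader:
    print(X.shape, y.shape)  # torch.Size([32, 1, 28, 28]) torch.Size([32])
    break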
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda, Compose
import matplotlib.pyplot as plt
PyTorch offers domain-specific libraries such as TorchText, TorchVision, and TorchAudio, all of which include datasets. In this tutorial we will use a TorchVision dataset.
The torchvision.datasets module contains Dataset objects for many real-world vision datasets such as CIFAR and COCO (see the full list in the docs). In this tutorial we use the FashionMNIST dataset. Every TorchVision Dataset includes two arguments, transform and target_transform, which modify the samples and the labels respectively.
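The Lambda and Compose transforms imported above are not actually used in this quickstart. As a sketch of what target_transform is for, the official Transforms tutorial one-hot encodes the integer label like this (the variable name one_hot_data is my own):

one_hot_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
    # turn the integer label y into a length-10 one-hot float vector
    target_transform=Lambda(
        lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)
    ),
)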
# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)
# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
We pass the Dataset as an argument to DataLoader. This wraps an iterable around our dataset and adds support for automatic batching, sampling, shuffling, and multiprocess data loading. Here we define a batch size of 64, i.e. each element in the dataloader iterable will return a batch of 64 features and labels.
batch_size = 64
# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)
for X, y in test_dataloader:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break
output:
Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
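The paragraph above mentions sampling, shuffling, and multiprocess loading, but none of those are switched on in the loaders we just built. A sketch of enabling them (shuffle and num_workers are standard DataLoader parameters; the worker count of 2 is arbitrary):

train_dataloader = DataLoader(
    training_data,
    batch_size=batch_size,
    shuffle=True,    # reshuffle the samples at the start of every epoch
    num_workers=2,   # load batches in 2 background worker processes
)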
Creating models
To define a neural network in PyTorch, we create a class that inherits from nn.Module. We define the layers of the network in the __init__ function and specify how data passes through the network in the forward function. To accelerate operations in the neural network, we move it to the GPU if one is available.
# Get cpu or gpu device for training.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))
# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
            nn.ReLU()
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits
model = NeuralNetwork().to(device)
print(model)
output:
Using cuda device
NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
    (5): ReLU()
  )
)
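Before training, a quick sanity check that is not part of the tutorial (a sketch; the fake batch just mimics FashionMNIST's shape): push a dummy batch through the model and count its parameters.

X = torch.rand(64, 1, 28, 28, device=device)  # fake batch, same shape as FashionMNIST
logits = model(X)
print(logits.shape)  # torch.Size([64, 10]) -- one score per class

# 784*512+512 + 512*512+512 + 512*10+10 = 669,706 trainable parameters
print(sum(p.numel() for p in model.parameters()))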
Optimizing the model parameters
To train a model, we need a loss function and an optimizer.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
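SGD with lr=1e-3 is what the tutorial uses; any other torch.optim optimizer is a drop-in replacement here. For example (a sketch, not part of the tutorial):

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # adaptive alternative to SGD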
In a single training loop, the model makes predictions on the training dataset (fed to it in batches) and backpropagates the prediction error to adjust the model's parameters.
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
We also check the model's performance against the test dataset to make sure it is actually learning.
def test(dataloader, model):
    size = len(dataloader.dataset)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= size
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
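One caveat: test_loss above sums the per-batch mean losses and then divides by the number of samples, so the printed "Avg loss" is much smaller than the training loss. Later revisions of the official quickstart divide by the number of batches instead; a sketch of that variant (test_v2 is my own name):

def test_v2(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches  # average per-batch loss, comparable to the training loss
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")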
The training process runs over several iterations (epochs). During each epoch, the model learns parameters to make better predictions. We print the model's accuracy and loss at each epoch; we would like to see the accuracy go up and the loss go down with every epoch.
epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model)
print("Done!")
output:
Epoch 1
-------------------------------
loss: 2.308457 [ 0/60000]
loss: 2.297256 [ 6400/60000]
loss: 2.278359 [12800/60000]
loss: 2.269392 [19200/60000]
loss: 2.253581 [25600/60000]
loss: 2.241885 [32000/60000]
loss: 2.246830 [38400/60000]
loss: 2.230555 [44800/60000]
loss: 2.213488 [51200/60000]
loss: 2.188743 [57600/60000]
Test Error:
Accuracy: 54.0%, Avg loss: 0.034381
Epoch 2
-------------------------------
loss: 2.202937 [ 0/60000]
loss: 2.185693 [ 6400/60000]
loss: 2.149241 [12800/60000]
loss: 2.154213 [19200/60000]
loss: 2.099240 [25600/60000]
loss: 2.102819 [32000/60000]
loss: 2.114654 [38400/60000]
loss: 2.083009 [44800/60000]
loss: 2.061793 [51200/60000]
loss: 2.008113 [57600/60000]
Test Error:
Accuracy: 57.0%, Avg loss: 0.031586
Epoch 3
-------------------------------
loss: 2.048800 [ 0/60000]
loss: 1.998590 [ 6400/60000]
loss: 1.930384 [12800/60000]
loss: 1.953652 [19200/60000]
loss: 1.835590 [25600/60000]
loss: 1.882969 [32000/60000]
loss: 1.886603 [38400/60000]
loss: 1.849273 [44800/60000]
loss: 1.824851 [51200/60000]
loss: 1.732873 [57600/60000]
Test Error:
Accuracy: 57.2%, Avg loss: 0.027470
Epoch 4
-------------------------------
loss: 1.822591 [ 0/60000]
loss: 1.732545 [ 6400/60000]
loss: 1.642681 [12800/60000]
loss: 1.684417 [19200/60000]
loss: 1.529449 [25600/60000]
loss: 1.654818 [32000/60000]
loss: 1.630370 [38400/60000]
loss: 1.631863 [44800/60000]
loss: 1.588262 [51200/60000]
loss: 1.471685 [57600/60000]
Test Error:
Accuracy: 59.4%, Avg loss: 0.023740
Epoch 5
-------------------------------
loss: 1.611129 [ 0/60000]
loss: 1.502379 [ 6400/60000]
loss: 1.409730 [12800/60000]
loss: 1.466273 [19200/60000]
loss: 1.297876 [25600/60000]
loss: 1.489053 [32000/60000]
loss: 1.437142 [38400/60000]
loss: 1.483427 [44800/60000]
loss: 1.416397 [51200/60000]
loss: 1.301810 [57600/60000]
Test Error:
Accuracy: 61.4%, Avg loss: 0.021151
Done!
Read more about training models in the official docs.
Saving models
A common way to save a model is to serialize the internal state dictionary (which contains the model parameters).
torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")
Saved PyTorch Model State to model.pth
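Besides the state_dict, torch.save can also serialize the entire model object. Note this pickles a reference to the class, so loading requires the NeuralNetwork definition to be importable (a sketch; model_full.pth is an arbitrary file name):

torch.save(model, "model_full.pth")         # save structure + weights together
model_full = torch.load("model_full.pth")   # no need to re-instantiate the class first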
Loading models
The process for loading a model includes re-creating the model structure and loading the state dictionary into it.
model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth"))
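Since this state_dict was saved from a model on the GPU, loading it on a CPU-only machine would fail without remapping; map_location is the standard fix (a sketch):

model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth", map_location=torch.device("cpu")))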
This model can now be used to make predictions.
classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]
model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
    pred = model(x)
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: "{actual}"')
Predicted: "Ankle boot", Actual: "Ankle boot"
Code execution logic
torch-------nn
torchvision.datasets
Dataset
transform
target_transform
--------------------------------- Basic data operations
datasets.FashionMNIST
DataLoader()
torchvision ------------datasets
datasets.xxx( train=True,transform=ToTensor())---------training_data
datasets.xxx( train=False,transform=ToTensor())---------test_data
torch.utils.data ---------- provides DataLoader
DataLoader(training_data,batch_size = ?)
Run the DataLoader iterator inside a for ... in loop
Check the shape and dtype of different samples
torchvision.transforms-------ToTensor, Lambda, Compose
-------------------------类与继承
__init__中定义层
def __init__(self):
    super(xxx, self).__init__()
    self.flatten = nn.Flatten()
    self.linear_relu_stack = nn.Sequential(
        nn.Linear(28*28, 512),
        nn.ReLU(),
        nn.Linear(512, 512),
        nn.ReLU(),
        nn.Linear(512, 10),
        nn.ReLU()
    )
Define the forward method
def forward(self, x):
    x = self.flatten(x)
    logits = self.linear_relu_stack(x)
    return logits
# Get cpu or gpu device for training.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))
.to(device)
model = NeuralNetwork().to(device)
print(model)