PyTorch-Tutorials [Official PyTorch Tutorial, Annotated] - 1 Quickstart

While working through the exercises from Prof. Liu's PyTorch course in my earlier notes, I studied the official PyTorch tutorial via the link given there. I later noticed that an updated version of the tutorial is available, so this series works through the new version as well.

Official link: Quickstart — PyTorch Tutorials 1.10.1+cu102 documentation

This section runs through the API for common tasks in machine learning. Refer to the links in each section to dive deeper.

1 Working with data

PyTorch has two primitives to work with data: torch.utils.data.DataLoader and torch.utils.data.Dataset. Dataset stores the samples and their corresponding labels, and DataLoader wraps an iterable around the Dataset.

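To make that contract concrete, here is a minimal sketch of a custom Dataset built from made-up tensors (hypothetical; the tutorial itself uses a built-in dataset below). A Dataset only has to implement __len__ and __getitem__; DataLoader builds batching and shuffling on top of that.

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    # Any object implementing __len__ and __getitem__ can serve as a Dataset.
    def __init__(self, features, labels):
        self.features = features  # e.g. a float tensor of shape [N, ...]
        self.labels = labels      # e.g. an int64 tensor of shape [N]

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

toy_loader = DataLoader(ToyDataset(torch.randn(100, 4), torch.zeros(100, dtype=torch.long)), batch_size=10)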

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda, Compose
import matplotlib.pyplot as plt

PyTorch offers domain-specific libraries such as TorchText, TorchVision, and TorchAudio, all of which include datasets. For this tutorial, we will be using a TorchVision dataset.

The torchvision.datasets module contains Dataset objects for many real-world vision datasets like CIFAR and COCO (see the torchvision documentation for the full list). In this tutorial, we use the FashionMNIST dataset. Every TorchVision Dataset includes two arguments: transform and target_transform to modify the samples and labels respectively.

# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)

On the first run this prints download progress; once it completes, the dataset is cached under the directory given as root, and subsequent runs skip the download.
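
The code above only uses transform. target_transform works the same way on the labels; for instance, the Lambda imported earlier can one-hot encode the integer targets (a sketch; the rest of this tutorial keeps integer labels):

one_hot_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
    target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)),
)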

We pass the Dataset as an argument to DataLoader. This wraps an iterable over our dataset, and supports automatic batching, sampling, shuffling and multiprocess data loading. Here we define a batch size of 64, i.e. each element in the dataloader iterable will return a batch of 64 features and labels.

batch_size = 64

# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print("Shape of X [N, C, H, W]: ", X.shape)
    print("Shape of y: ", y.shape, y.dtype)
    break

Output:

Shape of X [N, C, H, W]:  torch.Size([64, 1, 28, 28])
Shape of y:  torch.Size([64]) torch.int64
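
The loaders above only set batch_size; the shuffling and multiprocess loading mentioned earlier are opt-in keyword arguments. A sketch with illustrative values:

# Reshuffle the data every epoch and load batches in two worker processes.
shuffled_loader = DataLoader(training_data, batch_size=batch_size, shuffle=True, num_workers=2)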

2 Creating Models

To define a neural network in PyTorch, we create a class that inherits from nn.Module. We define the layers of the network in the __init__ function and specify how data will pass through the network in the forward function. To accelerate operations in the neural network, we move it to the GPU if available.

# Get cpu or gpu device for training.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")

# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)

Output:

Using cuda device
NeuralNetwork(
  (flatten): Flatten()
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)

3 Optimizing the Model Parameters

To train a model, we need a loss function and an optimizer.

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
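
Note that nn.CrossEntropyLoss expects raw logits together with integer class labels and applies log-softmax internally, which is why the network's forward returns logits without a final softmax. A quick sanity check with made-up tensors:

dummy_logits = torch.randn(3, 10)           # batch of 3 samples, 10 classes
dummy_labels = torch.tensor([1, 0, 4])      # class indices, not one-hot vectors
print(loss_fn(dummy_logits, dummy_labels))  # prints a scalar loss tensor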

In a single training loop, the model makes predictions on the training dataset (fed to it in batches), and backpropagates the prediction error to adjust the model’s parameters.

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")

We also check the model’s performance against the test dataset to ensure it is learning.

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

The training process is conducted over several iterations (epochs). During each epoch, the model learns parameters to make better predictions. We print the model’s accuracy and loss at each epoch; we’d like to see the accuracy increase and the loss decrease with every epoch.

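The loop that ties these together calls train and test once per epoch; this is the driver code from the official tutorial that produces the log below:

epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
print("Done!")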

Output:

Epoch 1
-------------------------------
loss: 2.312082  [    0/60000]
loss: 2.308383  [ 6400/60000]
loss: 2.291206  [12800/60000]
loss: 2.278038  [19200/60000]
loss: 2.278187  [25600/60000]
loss: 2.239554  [32000/60000]
loss: 2.243814  [38400/60000]
loss: 2.218009  [44800/60000]
loss: 2.220359  [51200/60000]
loss: 2.179153  [57600/60000]
Test Error: 
 Accuracy: 50.2%, Avg loss: 2.186154 

Epoch 2
-------------------------------
loss: 2.200397  [    0/60000]
loss: 2.193103  [ 6400/60000]
loss: 2.144217  [12800/60000]
loss: 2.145714  [19200/60000]
loss: 2.110848  [25600/60000]
loss: 2.060621  [32000/60000]
loss: 2.072192  [38400/60000]
loss: 2.015157  [44800/60000]
loss: 2.022693  [51200/60000]
loss: 1.941412  [57600/60000]
Test Error: 
 Accuracy: 57.9%, Avg loss: 1.948830 

Epoch 3
-------------------------------
loss: 1.987648  [    0/60000]
loss: 1.960848  [ 6400/60000]
loss: 1.854541  [12800/60000]
loss: 1.870596  [19200/60000]
loss: 1.780776  [25600/60000]
loss: 1.733272  [32000/60000]
loss: 1.734519  [38400/60000]
loss: 1.654310  [44800/60000]
loss: 1.673718  [51200/60000]
loss: 1.556576  [57600/60000]
Test Error: 
 Accuracy: 60.4%, Avg loss: 1.582394 

Epoch 4
-------------------------------
loss: 1.652018  [    0/60000]
loss: 1.618765  [ 6400/60000]
loss: 1.472668  [12800/60000]
loss: 1.521895  [19200/60000]
loss: 1.418463  [25600/60000]
loss: 1.399883  [32000/60000]
loss: 1.403308  [38400/60000]
loss: 1.344995  [44800/60000]
loss: 1.377540  [51200/60000]
loss: 1.262564  [57600/60000]
Test Error: 
 Accuracy: 63.3%, Avg loss: 1.299130 

Epoch 5
-------------------------------
loss: 1.377499  [    0/60000]
loss: 1.361680  [ 6400/60000]
loss: 1.199262  [12800/60000]
loss: 1.285391  [19200/60000]
loss: 1.171470  [25600/60000]
loss: 1.180164  [32000/60000]
loss: 1.192588  [38400/60000]
loss: 1.152072  [44800/60000]
loss: 1.190913  [51200/60000]
loss: 1.087640  [57600/60000]
Test Error: 
 Accuracy: 64.8%, Avg loss: 1.120123 

Done!

The log above shows that as training proceeds, the training loss keeps decreasing while the test accuracy rises and the test loss falls.

4 Saving Models

A common way to save a model is to serialize the internal state dictionary (containing the model parameters).

torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")

5 Loading Models

The process for loading a model includes re-creating the model structure and loading the state dictionary into it.

model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth"))
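
If the checkpoint was written on a GPU machine and is loaded on a CPU-only one (or vice versa), pass map_location to torch.load so the tensors are remapped to an available device; a sketch:

model.load_state_dict(torch.load("model.pth", map_location=device))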

This model can now be used to make predictions.

classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]

model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
    pred = model(x)
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: "{actual}"')

Output:

Predicted: "Ankle boot", Actual: "Ankle boot"
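
Since the network outputs raw logits, applying a softmax turns them into class probabilities when a confidence score is needed; a small follow-up sketch:

pred_probs = nn.Softmax(dim=1)(pred)                    # shape [1, 10], rows sum to 1
print(f"confidence: {pred_probs[0].max().item():.3f}")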

Note: these are my study notes; corrections are welcome if you spot any mistakes! Writing these posts takes real effort, so please contact me before reposting.
