Introduction to Deep Learning with PyTorch (4): Optimizing Model Parameters and Saving & Loading Models

Optimizing Model Parameters

Now that we have a model and data, it's time to train, validate, and test our model by optimizing its parameters on our data. Training a model is an iterative process; in each iteration the model makes a guess about the output, calculates the error in its guess (the loss), collects the derivatives of the error with respect to its parameters (as we saw in the previous section), and optimizes these parameters using gradient descent. For a more detailed walkthrough of this process, check out 3Blue1Brown's video on backpropagation.
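
To make this loop concrete, here is a minimal sketch (our own illustration, not part of the tutorial) of a single gradient-descent step on one toy parameter:

import torch

# Toy illustration of one iteration: guess, measure the error, differentiate, update.
w = torch.tensor(2.0, requires_grad=True)   # a single trainable parameter
pred = w * 3.0                              # the model's "guess"
loss = (pred - 5.0) ** 2                    # squared error against a target of 5
loss.backward()                             # derivative of the loss w.r.t. w
with torch.no_grad():
    w -= 0.1 * w.grad                       # one gradient-descent update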

Prerequisite Code

We load the code from the previous sections on Datasets & DataLoaders and Build Model.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor

training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor()
)

test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor()
)

train_dataloader = DataLoader(training_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork()

Output:

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz

  0%|          | 0/26421880 [00:00<?, ?it/s]
  ...
100%|##########| 26421880/26421880 [00:02<00:00, 12614715.34it/s]
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz

  0%|          | 0/29515 [00:00<?, ?it/s]
100%|##########| 29515/29515 [00:00<00:00, 330608.99it/s]
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz

  0%|          | 0/4422102 [00:00<?, ?it/s]
  1%|1         | 65536/4422102 [00:00<00:11, 364547.95it/s]
  5%|5         | 229376/4422102 [00:00<00:06, 684461.84it/s]
 15%|#4        | 655360/4422102 [00:00<00:02, 1799973.55it/s]
 34%|###4      | 1507328/4422102 [00:00<00:00, 3214932.13it/s]
 73%|#######2  | 3211264/4422102 [00:00<00:00, 6844902.13it/s]
100%|##########| 4422102/4422102 [00:00<00:00, 5987963.83it/s]
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz

  0%|          | 0/5148 [00:00<?, ?it/s]
100%|##########| 5148/5148 [00:00<00:00, 38352179.38it/s]
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw

Hyperparameters

Hyperparameters are adjustable parameters that let you control the model optimization process. Different hyperparameter values can impact model training and convergence rates.

We define the following hyperparameters for training:

Number of Epochs - the number of times to iterate over the dataset

Batch Size - the number of data samples propagated through the network before the parameters are updated

Learning Rate - how much to update the model parameters at each batch/epoch. Smaller values yield slow learning speed, while large values may result in unpredictable behavior during training.

learning_rate = 1e-3
batch_size = 64
epochs = 5
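
As a hedged illustration (the candidate values are our own, not a recommendation from the tutorial), one common way to pick the learning rate is to train briefly with a few candidates and compare validation results, reusing the train_loop defined later in this section:

# Sketch: try a few learning rates and compare how each candidate model learns.
for lr in (1e-1, 1e-2, 1e-3):
    candidate = NeuralNetwork()
    opt = torch.optim.SGD(candidate.parameters(), lr=lr)
    # train `candidate` briefly with `opt` using train_loop (defined below),
    # then compare test_loop results to choose the best value
    train_loop(train_dataloader, candidate, nn.CrossEntropyLoss(), opt)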

Optimization Loop
Once we set our hyperparameters, we can then train and optimize our model with an optimization loop. Each iteration of the optimization loop is called an epoch.

Each epoch consists of two main parts:
The Train Loop - iterate over the training dataset and try to converge to optimal parameters.

The Validation/Test Loop - iterate over the test dataset to check if model performance is improving.

Let's briefly familiarize ourselves with some of the concepts used in the training loop. Jump ahead to see the full implementation of the optimization loop.

Loss Function
When presented with some training data, our untrained network is likely not to give the correct answer. A loss function measures the degree of dissimilarity between the obtained result and the target value, and it is the loss function that we want to minimize during training. To calculate the loss, we make a prediction using the inputs of our given data sample and compare it against the true data label value.

Common loss functions include nn.MSELoss (Mean Square Error) for regression tasks, and nn.NLLLoss (Negative Log Likelihood) for classification. nn.CrossEntropyLoss combines nn.LogSoftmax and nn.NLLLoss.

We pass our model's output logits to nn.CrossEntropyLoss, which will normalize the logits and compute the prediction error.

# Initialize the loss function
loss_fn = nn.CrossEntropyLoss()
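
As a quick sanity check (our own sketch, not from the tutorial), we can verify the claim that nn.CrossEntropyLoss combines nn.LogSoftmax and nn.NLLLoss on a batch of random logits:

logits = torch.randn(3, 10)         # hypothetical batch: 3 samples, 10 classes
targets = torch.tensor([1, 4, 9])   # hypothetical class labels

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
print(torch.allclose(ce, nll))      # True: the two formulations agree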

Optimizer
Optimization is the process of adjusting model parameters to reduce model error in each training step. Optimization algorithms define how this process is performed (in this example we use Stochastic Gradient Descent). All optimization logic is encapsulated in the optimizer object. Here, we use the SGD optimizer; in addition, there are many different optimizers available in PyTorch, such as ADAM and RMSProp, that work better for different kinds of models and data.

We initialize the optimizer by registering the model's parameters that need to be trained, and passing in the learning rate hyperparameter.

optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
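
The other optimizers mentioned above are drop-in replacements here; a hedged sketch, keeping their defaults apart from the learning rate:

# Either line below (both are real torch.optim classes) could replace SGD above.
# optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)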

Inside the training loop, optimization happens in three steps:
Call optimizer.zero_grad() to reset the gradients of the model parameters. Gradients by default add up; to prevent double-counting, we explicitly zero them at each iteration.

Backpropagate the prediction loss with a call to loss.backward(). PyTorch deposits the gradients of the loss with respect to each parameter.

Once we have our gradients, we call optimizer.step() to adjust the parameters by the gradients collected in the backward pass.

We define train_loop, which loops over our optimization code, and test_loop, which evaluates the model's performance against our test data.

def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    # Set the model to training mode - important for batch normalization and dropout layers
    # Unnecessary in this situation but added for best practices
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction and loss
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if batch % 100 == 0:
            loss, current = loss.item(), (batch + 1) * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")


def test_loop(dataloader, model, loss_fn):
    # Set the model to evaluation mode - important for batch normalization and dropout layers
    # Unnecessary in this situation but added for best practices
    model.eval()
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, correct = 0, 0

    # Evaluating the model with torch.no_grad() ensures that no gradients are computed during test mode
    # also serves to reduce unnecessary gradient computations and memory usage for tensors with requires_grad=True
    with torch.no_grad():
        for X, y in dataloader:
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()

    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

We initialize the loss function and optimizer, and pass them to train_loop and test_loop. Feel free to increase the number of epochs to track the model's improving performance.

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

epochs = 10
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train_loop(train_dataloader, model, loss_fn, optimizer)
    test_loop(test_dataloader, model, loss_fn)
print("Done!")

Output:

Epoch 1
-------------------------------
loss: 2.298730  [   64/60000]
loss: 2.289123  [ 6464/60000]
loss: 2.273286  [12864/60000]
loss: 2.269406  [19264/60000]
loss: 2.249603  [25664/60000]
loss: 2.229407  [32064/60000]
loss: 2.227368  [38464/60000]
loss: 2.204261  [44864/60000]
loss: 2.206193  [51264/60000]
loss: 2.166651  [57664/60000]
Test Error:
 Accuracy: 50.9%, Avg loss: 2.166725

Epoch 2
-------------------------------
loss: 2.176750  [   64/60000]
loss: 2.169595  [ 6464/60000]
loss: 2.117500  [12864/60000]
loss: 2.129272  [19264/60000]
loss: 2.079674  [25664/60000]
loss: 2.032928  [32064/60000]
loss: 2.050115  [38464/60000]
loss: 1.985236  [44864/60000]
loss: 1.987887  [51264/60000]
loss: 1.907162  [57664/60000]
Test Error:
 Accuracy: 55.9%, Avg loss: 1.915486

Epoch 3
-------------------------------
loss: 1.951612  [   64/60000]
loss: 1.928685  [ 6464/60000]
loss: 1.815709  [12864/60000]
loss: 1.841552  [19264/60000]
loss: 1.732467  [25664/60000]
loss: 1.692914  [32064/60000]
loss: 1.701714  [38464/60000]
loss: 1.610632  [44864/60000]
loss: 1.632870  [51264/60000]
loss: 1.514263  [57664/60000]
Test Error:
 Accuracy: 58.8%, Avg loss: 1.541525

Epoch 4
-------------------------------
loss: 1.616448  [   64/60000]
loss: 1.582892  [ 6464/60000]
loss: 1.427595  [12864/60000]
loss: 1.487950  [19264/60000]
loss: 1.359332  [25664/60000]
loss: 1.364817  [32064/60000]
loss: 1.371491  [38464/60000]
loss: 1.298706  [44864/60000]
loss: 1.336201  [51264/60000]
loss: 1.232145  [57664/60000]
Test Error:
 Accuracy: 62.2%, Avg loss: 1.260237

Epoch 5
-------------------------------
loss: 1.345538  [   64/60000]
loss: 1.327798  [ 6464/60000]
loss: 1.153802  [12864/60000]
loss: 1.254829  [19264/60000]
loss: 1.117322  [25664/60000]
loss: 1.153248  [32064/60000]
loss: 1.171765  [38464/60000]
loss: 1.110263  [44864/60000]
loss: 1.154467  [51264/60000]
loss: 1.070921  [57664/60000]
Test Error:
 Accuracy: 64.1%, Avg loss: 1.089831

Epoch 6
-------------------------------
loss: 1.166889  [   64/60000]
loss: 1.170514  [ 6464/60000]
loss: 0.979435  [12864/60000]
loss: 1.113774  [19264/60000]
loss: 0.973411  [25664/60000]
loss: 1.015192  [32064/60000]
loss: 1.051113  [38464/60000]
loss: 0.993591  [44864/60000]
loss: 1.039709  [51264/60000]
loss: 0.971077  [57664/60000]
Test Error:
 Accuracy: 65.8%, Avg loss: 0.982440

Epoch 7
-------------------------------
loss: 1.045165  [   64/60000]
loss: 1.070583  [ 6464/60000]
loss: 0.862304  [12864/60000]
loss: 1.022265  [19264/60000]
loss: 0.885213  [25664/60000]
loss: 0.919528  [32064/60000]
loss: 0.972762  [38464/60000]
loss: 0.918728  [44864/60000]
loss: 0.961629  [51264/60000]
loss: 0.904379  [57664/60000]
Test Error:
 Accuracy: 66.9%, Avg loss: 0.910167

Epoch 8
-------------------------------
loss: 0.956964  [   64/60000]
loss: 1.002171  [ 6464/60000]
loss: 0.779057  [12864/60000]
loss: 0.958409  [19264/60000]
loss: 0.827240  [25664/60000]
loss: 0.850262  [32064/60000]
loss: 0.917320  [38464/60000]
loss: 0.868384  [44864/60000]
loss: 0.905506  [51264/60000]
loss: 0.856353  [57664/60000]
Test Error:
 Accuracy: 68.3%, Avg loss: 0.858248

Epoch 9
-------------------------------
loss: 0.889765  [   64/60000]
loss: 0.951220  [ 6464/60000]
loss: 0.717035  [12864/60000]
loss: 0.911042  [19264/60000]
loss: 0.786085  [25664/60000]
loss: 0.798370  [32064/60000]
loss: 0.874939  [38464/60000]
loss: 0.832796  [44864/60000]
loss: 0.863254  [51264/60000]
loss: 0.819742  [57664/60000]
Test Error:
 Accuracy: 69.5%, Avg loss: 0.818780

Epoch 10
-------------------------------
loss: 0.836395  [   64/60000]
loss: 0.910220  [ 6464/60000]
loss: 0.668506  [12864/60000]
loss: 0.874338  [19264/60000]
loss: 0.754805  [25664/60000]
loss: 0.758453  [32064/60000]
loss: 0.840451  [38464/60000]
loss: 0.806153  [44864/60000]
loss: 0.830360  [51264/60000]
loss: 0.790281  [57664/60000]
Test Error:
 Accuracy: 71.0%, Avg loss: 0.787271

Complete Code


import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor

training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor()
)

test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor()
)

train_dataloader = DataLoader(training_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork()

learning_rate = 1e-3
batch_size = 64
epochs = 5

# Initialize the loss function
loss_fn = nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    # Set the model to training mode - important for batch normalization and dropout layers
    # Unnecessary in this situation but added for best practices
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction and loss
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if batch % 100 == 0:
            loss, current = loss.item(), (batch + 1) * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")


def test_loop(dataloader, model, loss_fn):
    # Set the model to evaluation mode - important for batch normalization and dropout layers
    # Unnecessary in this situation but added for best practices
    model.eval()
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, correct = 0, 0

    # Evaluating the model with torch.no_grad() ensures that no gradients are computed during test mode
    # also serves to reduce unnecessary gradient computations and memory usage for tensors with requires_grad=True
    with torch.no_grad():
        for X, y in dataloader:
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()

    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")


loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

epochs = 10
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train_loop(train_dataloader, model, loss_fn, optimizer)
    test_loop(test_dataloader, model, loss_fn)
print("Done!")

Saving and Loading Models

Next, we will look at how to persist model state by saving, loading, and running model predictions.

import torch
import torchvision.models as models

Saving and Loading Model Weights
PyTorch models store the learned parameters in an internal state dictionary, called state_dict. These can be persisted via the torch.save method:

model = models.vgg16(weights='IMAGENET1K_V1')
torch.save(model.state_dict(), 'model_weights.pth')

Output:

Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /var/lib/jenkins/.cache/torch/hub/checkpoints/vgg16-397923af.pth

  0%|          | 0.00/528M [00:00<?, ?B/s]
  ...
100%|##########| 528M/528M [00:04<00:00, 123MB/s]

To load model weights, you need to create an instance of the same model first, and then load the parameters using the load_state_dict() method.

model = models.vgg16() # we do not specify ``weights``, i.e. create untrained model
model.load_state_dict(torch.load('model_weights.pth'))
model.eval()
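
The same pattern applies to the NeuralNetwork trained in the first half of this post (there the variable model held that network; here we use a fresh name since model now holds the VGG). A hedged sketch with an illustrative file name:

fashion_model = NeuralNetwork()        # stand-in for the trained network from earlier
torch.save(fashion_model.state_dict(), 'fashion_weights.pth')

restored = NeuralNetwork()             # same architecture, fresh instance
restored.load_state_dict(torch.load('fashion_weights.pth'))
restored.eval()                        # evaluation mode for inference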

Output:

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

Note

Be sure to call model.eval() before inferencing to set the dropout and batch normalization layers to evaluation mode. Failing to do this will yield inconsistent inference results.
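
A quick hedged demonstration of why this matters: a dropout layer behaves completely differently in the two modes.

import torch
from torch import nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 4)
drop.train()
print(drop(x))   # randomly zeroes elements and rescales the rest by 1/(1-p)
drop.eval()
print(drop(x))   # identity: tensor([[1., 1., 1., 1.]])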

Saving and Loading Models with Shapes
When loading model weights, we needed to instantiate the model class first, because the class defines the structure of the network. We might want to save the structure of this class together with the model, in which case we can pass model (and not model.state_dict()) to the saving function:

torch.save(model, 'model.pth')

We can then load the model like this:

model = torch.load('model.pth')
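
Note that this approach uses the Python pickle module when serializing the model, so it relies on the actual class definition being available when loading. Also, recent PyTorch versions default torch.load to weights_only=True, so loading a full pickled model may require opting out explicitly; a hedged sketch:

model = torch.load('model.pth', weights_only=False)  # opt out of the weights-only default
model.eval()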