While running a demo of a bidirectional recurrent neural network (Bidirectional Recurrent Neural Network, BRNN) today, I ran into an error.
Backpropagation in a BRNN works the same way as in other deep learning models: a loss function evaluates the model's output, and an optimizer backpropagates the resulting loss to update the parameters; a minimal sketch of this pattern follows.
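Before the full demo, here is a minimal sketch of that standard loss-plus-optimizer training step. The model, loss, and tensor shapes here are illustrative only, not taken from the demo below:

import torch
import torch.nn as nn

# Toy bidirectional LSTM, just to illustrate one training step
model = nn.LSTM(input_size=8, hidden_size=16, batch_first=True, bidirectional=True)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

x = torch.randn(4, 5, 8)            # (batch, seq_len, features)
target = torch.randn(4, 5, 32)      # bidirectional: output has hidden_size * 2 features

output, _ = model(x)                # forward pass through both directions
loss = criterion(output, target)    # evaluate the output with a loss function
optimizer.zero_grad()               # clear gradients from the previous step
loss.backward()                     # backpropagate the loss
optimizer.step()                    # update the parameters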
The original code that produced the error:
import torch
import torch.nn as nn
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt

# Define the BRNN model
class BRNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(BRNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden_size * 2, num_classes)  # bidirectional LSTM: forward and backward hidden states are concatenated, hence hidden_size * 2

    def forward(self, x):
        # Set initial hidden and cell states
        h0 = torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size).to(device)  # bidirectional LSTM, so 2x num_layers state tensors
        c0 = torch.zeros(self.num_layers * 2, x.size(0), self.hidden_size).to(device)
        # Forward propagate LSTM
        out, _ = self.lstm(x, (h0, c0))  # out: tensor of shape (batch_size, seq_length, hidden_size*2)
        # Decode the hidden state of the last time step
        out = self.fc(out[:, -1, :])
        return out

# Hyperparameters
input_size = 28
hidden_size = 128
num_layers = 2
num_classes = 10
batch_size = 100
num_epochs = 2
learning_rate = 0.01

# Load the MNIST dataset
train_dataset = datasets.MNIST(root='D:/Python_file/Deep_Learning/pycharm_pytorch/12-LeNet模型/data/', train=True, transform=transforms.ToTensor(), download=False)
test_dataset = datasets.MNIST(root='D:/Python_file/Deep_Learning/pycharm_pytorch/12-LeNet模型/data/', train=False, transform=transforms.ToTensor())

# Data loaders
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

# Device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Instantiate the model
model = BRNN(input_size, hidden_size, num_layers, num_classes).to(device)

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Track losses to visualize the training process
train_losses = []
test_losses = []

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    train_loss = 0.0
    for i, (images, labels) in enumerate(train_loader):
        # Treat each 28x28 image as a sequence of 28 rows with 28 features each
        images = images.view(-1, 28, 28).to(device)
        labels = labels.to(device)
        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))
    train_losses.append(train_loss / total_step)

    # Test the model at the end of each epoch
    model.eval()  # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
    with torch.no_grad():
        test_loss = 0.0
        correct = 0
        total = 0
        for images, labels in test_loader:
            images = images.view(-1, 28, 28).to(device)
            labels = labels.to(device)
            outputs = model(images)
            loss = criterion(outputs, labels)
            test_loss += loss.item()
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
        test_losses.append(test_loss / len(test_loader))
        print('Test Loss: {:.4f}, Test Accuracy: {:.2f}%'.format(test_loss / len(test_loader), 100 * correct / total))

# Visualize the losses
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend()
plt.show()

# Save the model checkpoint
torch.save(model.state_dict(), 'model.ckpt')
Error message: RuntimeError: cudnn RNN backward can only be called in training mode
The message "cudnn RNN backward can only be called in training mode" means that backward() was called on an RNN whose forward pass ran while the model was not in training mode. In PyTorch this usually happens because the model was switched to evaluation mode with model.eval(). A PyTorch module has two modes: in training mode (model.train()), layers behave as they do during training, and a cuDNN-accelerated RNN's forward pass saves the intermediate buffers that its backward pass needs; in evaluation mode (model.eval()), layers switch to inference behavior and the cuDNN RNN skips those buffers, so any subsequent call to backward() through it fails with this error.
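The error is easy to reproduce in isolation. The following is a minimal sketch, assuming a CUDA-capable GPU with cuDNN (on the CPU the backward pass succeeds, because cuDNN is not involved):

import torch
import torch.nn as nn

device = torch.device('cuda')  # the error is cuDNN-specific, so a GPU is required
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True).to(device)

lstm.eval()  # evaluation mode: the cuDNN forward pass skips the buffers backward needs
x = torch.randn(4, 5, 8, device=device)
out, _ = lstm(x)
out.sum().backward()  # RuntimeError: cudnn RNN backward can only be called in training mode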
Solution
Make sure the model is in training mode before backpropagating. You can switch it to training mode by calling the .train() method. For example:
model.train()  # put the model in training mode
Add this line to the original code, at the start of each epoch's training loop (the exact placement is shown in the summary below).
Running the code again, the problem is solved.
Summary:
This error occurs because the training loop sets the model to evaluation mode (model.eval()) at the end of each epoch, and the next epoch's training then runs a forward and backward pass while the model is still in evaluation mode; a cuDNN RNN forward pass executed in evaluation mode cannot be backpropagated through.
Calling model.eval() at the end of each epoch to evaluate the model is correct. What you must also do is switch the model back to training mode before the next epoch's training begins, which is done by calling model.train():
for epoch in range(num_epochs):
    model.train()  # make sure the model is in training mode at the start of each epoch
    train_loss = 0.0
    # training loop...
This guarantees the model is in the correct mode while training, and that during evaluation Dropout is disabled and Batch Normalization uses its running statistics instead of mini-batch statistics.
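To see concretely what the two modes change, here is a toy example with Dropout (illustrative only, not part of the original script):

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()    # training mode: randomly zeroes elements and rescales the rest by 1/(1-p)
print(drop(x))  # roughly half the entries are 0, the others are 2.0

drop.eval()     # evaluation mode: Dropout becomes the identity function
print(drop(x))  # all entries are 1.0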