PyTorch Training Error Log

Contents

Error 1: IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed

1.1 When the elements of the numpy array are lists

1.2 When the element of the numpy array is a dict

Error 2: ValueError: Only one class present in y_true. ROC AUC score is not defined in that case

Error 3: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first

Error 4: RuntimeError: [enforce fail at ..\caffe2\serialize\inline_container.cc:300] . unexpected pos 158720512 vs 158720408



Error 1: IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed

Cause: the elements of the NumPy ndarray have different lengths.

1.1 When the elements of the numpy array are lists

For example:

>>> import numpy as np
>>> b = np.array([[1,2,3,4], [5,6,7,8], [9,10,11]], dtype=object)  # dtype=object required on recent NumPy
>>> b[:,2]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed

Because the third element of b has a different length from the first two, b is stored as a 1-dimensional object array, so the two-index expression b[:,2] raises the error above.

Constructing b itself does succeed, though; NumPy stores it as an object array (recent NumPy versions require passing dtype=object explicitly):

>>> b
array([list([1, 2, 3, 4]), list([5, 6, 7, 8]), list([9, 10, 11])],
      dtype=object)
>>> b.shape
(3,)

b has become a 1-dimensional array of lists.
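Since column slicing is unavailable on such an array, the elements have to be pulled out row by row in Python. A minimal sketch:

```python
import numpy as np

# ragged rows: dtype=object is required on recent NumPy versions
b = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11]], dtype=object)

# b[:, 2] raises IndexError, so index each row (a plain Python list) instead
third_column = [row[2] for row in b]
print(third_column)  # [3, 7, 11]
```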

1.2 When the element of the numpy array is a dict

So if dict-type data has been saved as a numpy array, how do we get the data back out?

For example, with data of the form:

>>> tweet = {k:[] for k in range(3)}
>>> tweet[0].append('The ScreenWeek h15 Daily is out! http://t.co/yi5z7oD9j9')
>>> tweet[0].append('The ScreenWeek ')
>>> tweet[1].append('The nWeek ')
>>> tweet[1].append('The n ')
>>> tweet[2].append('The scscasn ')
>>> tweet[2].append('The scscacsdcsv ')
>>> tweet = np.array(tweet)
>>> tweet
array({0: ['The ScreenWeek h15 Daily is out! http://t.co/yi5z7oD9j9', 'The ScreenWeek '], 1: ['The nWeek ', 'The n '], 2: ['The scscasn ', 'The scscacsdcsv ']},
      dtype=object)

Indexing it like an ordinary numpy array then raises the error above, because np.array has wrapped the dict in a 0-dimensional object array:

>>> tweet[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed

Therefore, to access the data, first convert the array back to a native Python object (.tolist() on a 0-dimensional object array returns the wrapped dict):

>>> tweet = tweet.tolist()
>>> tweet[0]
['The ScreenWeek h15 Daily is out! http://t.co/yi5z7oD9j9', 'The ScreenWeek ']
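This situation typically arises after round-tripping a dict through np.save / np.load; a minimal sketch of that workflow (the file name is illustrative):

```python
import os
import tempfile
import numpy as np

tweet = {0: ['The ScreenWeek '], 1: ['The nWeek ']}

# np.save wraps the dict in a 0-dimensional object array
path = os.path.join(tempfile.mkdtemp(), 'tweet.npy')
np.save(path, tweet)

loaded = np.load(path, allow_pickle=True)
# loaded[0] would raise IndexError: array is 0-dimensional
data = loaded.item()  # .item() (like .tolist()) unwraps the original dict
print(data[0])  # ['The ScreenWeek ']
```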

Error 2: ValueError: Only one class present in y_true. ROC AUC score is not defined in that case

I hit Error 2 while computing AUC with sklearn.metrics' roc_auc_score function.

AUC is only meaningful when y_true contains at least one sample of every class.

This is usually caused by an imbalanced dataset, e.g. an evaluation batch that happens to contain a single class.

A try-except block can be used to guard against the error:

import numpy as np
from sklearn.metrics import roc_auc_score
y_true = np.array([0, 0, 0, 0])
y_scores = np.array([1, 0, 0, 0])
try:
    roc_auc_score(y_true, y_scores)
except ValueError:
    pass

Reference: python - roc_auc_score - Only one class present in y_true - IT工具网
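Instead of silently swallowing the ValueError, one can test the label set first and return an explicit sentinel. The safe_auc helper below is a hypothetical numpy-only sketch that computes AUC via the Mann-Whitney rank statistic (ties are not handled); it is not sklearn's implementation:

```python
import numpy as np

def safe_auc(y_true, y_scores):
    """Return the ROC AUC, or None when y_true contains a single class."""
    y_true = np.asarray(y_true)
    y_scores = np.asarray(y_scores)
    if len(np.unique(y_true)) < 2:
        return None  # AUC undefined: only one class present
    # 1-based ranks of the scores (no tie correction)
    order = np.argsort(y_scores)
    ranks = np.empty(len(y_scores))
    ranks[order] = np.arange(1, len(y_scores) + 1)
    n_pos = (y_true == 1).sum()
    n_neg = (y_true == 0).sum()
    # Mann-Whitney U statistic, normalised to [0, 1]
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(safe_auc([0, 0, 0, 0], [1, 0, 0, 0]))          # None
print(safe_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
```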

Error 3: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first

The elements of the list are CUDA tensors, and the list needs to be converted to tensor-type data.

Conversion: first move each tensor in the list to the CPU, then convert it to numpy, as in the following code:

des_features = [des.cpu().detach().numpy() for des in des_features]
des_features = torch.tensor(des_features)
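If the goal is a single stacked tensor, torch.stack avoids the numpy round-trip entirely; a sketch with CPU tensors standing in for CUDA ones (.cpu() is then a no-op, but the pattern is the same):

```python
import torch

# a list of per-sample feature tensors; on GPU these would be cuda tensors
des_features = [torch.randn(128) for _ in range(3)]

# detach from the graph, move each tensor to host memory,
# then stack along a new batch dimension
stacked = torch.stack([des.detach().cpu() for des in des_features])
print(stacked.shape)  # torch.Size([3, 128])
```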

Error 4: RuntimeError: [enforce fail at ..\caffe2\serialize\inline_container.cc:300] . unexpected pos 158720512 vs 158720408

Insufficient disk space: the file being written was truncated. Delete some data to free up space, then save again.

