RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

1. Problem

While practicing with PyTorch today, I tried to move training onto the GPU and hit the following error:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

I searched for solutions, but most of the posts describe the opposite problem:

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor)

In my case the input (Input) is not a GPU tensor; in those posts the weights (weight) are not GPU tensors, so their fixes did not apply.
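The quickest way to tell which side is on the wrong device is to compare the device of the input batch with the device of the model's parameters. Below is a minimal, self-contained reproduction sketch (the layer and tensor shapes are only illustrative, not taken from the original project):

import torch
from torch import nn

# Hypothetical minimal reproduction: a model on the GPU fed a CPU tensor
model = nn.Conv2d(3, 32, 5).to("cuda:0" if torch.cuda.is_available() else "cpu")
imgs = torch.randn(64, 3, 32, 32)  # created on the CPU by default

print("input device: ", imgs.device)                      # cpu
print("weight device:", next(model.parameters()).device)  # cuda:0 (if available)
# model(imgs) raises the RuntimeError above whenever the two devices differ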

2. Code (already fixed; runs correctly)

import torch.optim
import torchvision.datasets

# Prepare the dataset
from torch import nn
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from time import time

print(torch.cuda.is_available())

train_data = torchvision.datasets.CIFAR10(root="./dataset",
                                          train=True,
                                          transform=torchvision.transforms.ToTensor(),
                                          download=True)

test_data = torchvision.datasets.CIFAR10(root="./dataset",
                                         train=False,
                                         transform=torchvision.transforms.ToTensor(),
                                         download=True)
print("训练集数据长度:%d" % len(train_data))
print("测试集数据长度:%d" % len(test_data))

# Load the data with DataLoader
train_data_loader = DataLoader(train_data, batch_size=64)
test_data_loader = DataLoader(test_data, batch_size=64)


# Build the neural network (originally kept in a separate .py file)
class Net(nn.Module):
    def __init__(self) -> None:
        super(Net, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2, 2),
            nn.Flatten(),
            nn.Linear(1024, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x


# Create the network model
net = Net()
# Only the model, the data, and the loss function need to be moved onto the GPU
# if torch.cuda.is_available():
#     net = net.cuda()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)  # nn.Module.to() moves the parameters in place, so no reassignment is needed


# Loss function
loss_fn = nn.CrossEntropyLoss()
# if torch.cuda.is_available():
#     loss_fn = loss_fn.cuda()
loss_fn.to(device)

# Optimizer
learning_rate = 1e-2  # 0.01
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)

# Training loop settings
# Record the number of training steps
total_train_step = 0
# Record the number of test steps
total_test_step = 0
# Number of training epochs
epoch = 10

start_time = time()
writer = SummaryWriter("./logs/train")
for i in range(epoch):
    print("------第%d轮训练------" % (i + 1))

    # Training phase
    for data in train_data_loader:
        imgs, targets = data
        if torch.cuda.is_available():
            imgs, targets = imgs.cuda(), targets.cuda()
        # imgs.to(device)
        # targets.to(device)
        outputs = net(imgs)
        loss = loss_fn(outputs, targets)

        # Update the model with the optimizer
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step += 1
        if total_train_step % 100 == 0:
            end_time = time()
            print(end_time - start_time)
            print("训练次数:{},loss:{}".format(total_train_step, loss.item()))  # .item()能把tensor类型转化为数字
            writer.add_scalar("train_loss", loss.item(), total_train_step)

    # Evaluation phase
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():  # disable gradient tracking during evaluation; no parameter updates are needed here
        for data in test_data_loader:
            imgs, targets = data
            if torch.cuda.is_available():
                imgs, targets = imgs.cuda(), targets.cuda()
            # imgs.to(device)
            # targets.to(device)
            outputs = net(imgs)
            loss = loss_fn(outputs, targets)

            total_test_loss += loss.item()
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy += accuracy
    print("整体测试集上的loss: {}".format(total_test_loss))
    print("整体测试集上的正确率: {}".format(total_accuracy / len(test_data)))

    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / len(test_data), total_test_step)
    total_test_step += 1

    # Save the model after each epoch
    # torch.save(net.state_dict(), "model_{}.pth".format(i))
    # print("Model for epoch {} saved".format(i))

writer.close()

3. Solution

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
for data in train_data_loader:
    imgs, targets = data
    if torch.cuda.is_available():
        imgs, targets = imgs.cuda(), targets.cuda()
    # imgs.to(device)
    # targets.to(device)
    outputs = net(imgs)

In the code above, imgs and targets cannot simply call .to(device) on their own: unlike nn.Module.to(), Tensor.to() is not an in-place operation and returns a new tensor, so the input stays on the CPU (torch.FloatTensor) and the error occurs. One fix is to move the tensors with .cuda() and reassign the result:

if torch.cuda.is_available():
    imgs, targets = imgs.cuda(), targets.cuda()

This resolves the mismatch between the input type and the weight type.
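If you prefer the device-agnostic style from the commented-out lines, it also works, but only if the result is assigned back. A minimal sketch, reusing the imgs, targets, net, and device names from the training loop above:

# Equivalent fix: Tensor.to() returns a new tensor, so it must be reassigned
imgs = imgs.to(device)
targets = targets.to(device)
outputs = net(imgs)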

4. References

https://stackoverflow.com/questions/59013109/runtimeerror-input-type-torch-floattensor-and-weight-type-torch-cuda-floatte
