Troubleshooting: RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should...

1. The error message

return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

2. Cause of the error

According to a Stack Overflow Q&A, the cause of this error is:
You get this error because your model is on the GPU, but your data is on the CPU. So, you need to send your input tensors to the GPU.
In other words, the input data and the model are not on the same device: the model is on the GPU while the data is still on the CPU, so the data must be moved to the GPU.
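A quick way to confirm this diagnosis is to compare the device of the model's weights with the device of the input batch; they must match before the forward pass. A minimal sketch (the Conv2d model and tensor shapes here are purely illustrative):

```python
import torch

# Illustrative model and batch; any nn.Module behaves the same way
model = torch.nn.Conv2d(1, 8, kernel_size=3)
x = torch.randn(4, 1, 28, 28)

# Both must report the same device, or conv2d raises the RuntimeError above
print(next(model.parameters()).device)
print(x.device)
```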

3. How to fix it

Stack Overflow suggests several approaches:

  • The first is to add a line that sends the input tensors to the GPU
inputs, labels = data                         # this is what you had
inputs, labels = inputs.cuda(), labels.cuda() # add this line
  • Or this variant, which moves the data tensors and labels to the device the model expects
x = x.to(device, dtype=torch.float32)
y = y.to(device, dtype=torch.float32)

What worked in my own tests is the following:

model.to(dev)
data = data.to(dev)

If the data is left where it is, you could remove model.to(dev) instead; but in that case my tests suggest everything falls back to running on the CPU.
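A common device-agnostic pattern (a sketch, assuming a reasonably recent PyTorch) is to pick the device once and send both the model and every batch through `.to(device)`; on a CPU-only machine the transfer is a no-op, so the same code runs everywhere:

```python
import torch

# Use the GPU when one is available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Conv2d(1, 8, kernel_size=3).to(device)  # move the weights
x = torch.randn(4, 1, 28, 28).to(device)                 # move the data

out = model(x)  # no device mismatch: both live on `device`
```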

4. A test case on the MNIST dataset

At this point the root cause is clear: the data comes from PyTorch's DataLoader, which, unlike the snippets above, yields batches one at a time as you iterate over it, so each batch has to be moved to the GPU manually inside the training loop.
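This is easy to verify: batches coming out of a `DataLoader` built from CPU tensors are themselves CPU tensors, regardless of where the model lives. A minimal sketch with a synthetic stand-in for MNIST:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for MNIST: 8 fake 1x28x28 images with integer labels
ds = TensorDataset(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))
loader = DataLoader(ds, batch_size=4)

images, labels = next(iter(loader))
print(images.device)  # cpu — hence the manual .to(device) inside the loop
```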

# MNIST data loading
import torch.utils.data as data
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=(0.5,), std=(0.5,))])
train_set = datasets.MNIST(root='./data', train=True, transform=transform, download=True)
test_set = datasets.MNIST(root='./data', train=False, transform=transform, download=True)
batch_size = 32
num_workers = 1
train_loader = data.DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=num_workers)
test_loader = data.DataLoader(test_set, batch_size=batch_size, shuffle=False, num_workers=num_workers)
loaders = {'train': train_loader,
           'test': test_loader}
#############################################################################################################
# Model training
# (torch.autograd.Variable has been a deprecated no-op since PyTorch 0.4, so it is not used here)
num_epochs = 10

def train(num_epochs, cnn, loaders):
    cnn.train()

    # Train the model
    total_step = len(loaders['train'])

    for epoch in range(num_epochs):
        for i, (images, labels) in enumerate(loaders['train']):
            # move each batch to the same device as the model
            images, labels = images.to(device), labels.to(device)

            output = cnn(images)
            loss = loss_func(output, labels)

            # clear gradients for this training step
            optimizer.zero_grad()

            # backpropagation, compute gradients
            loss.backward()
            # apply gradients
            optimizer.step()

            if (i + 1) % 100 == 0:
                print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                      .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))
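The same per-batch `.to(device)` transfer is needed at evaluation time. A sketch under the same assumed names (`device` and a model passed in; this accuracy helper is not from the original post):

```python
import torch

def evaluate(model, loader, device):
    """Return accuracy on `loader`, moving each batch to `device` first."""
    model.eval()
    correct = total = 0
    with torch.no_grad():  # no gradients needed for evaluation
        for images, labels in loader:
            # same fix as in the training loop: the data must follow the model
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total
```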
  • Training output on the GPU
Epoch [1/10], Step [100/1875], Loss: 1.1673
Epoch [1/10], Step [200/1875], Loss: 0.4501
Epoch [1/10], Step [300/1875], Loss: 0.6198
Epoch [1/10], Step [400/1875], Loss: 0.3469
Epoch [1/10], Step [500/1875], Loss: 0.3964
Epoch [1/10], Step [600/1875], Loss: 0.4239
Epoch [1/10], Step [700/1875], Loss: 0.4740
Epoch [1/10], Step [800/1875], Loss: 0.3085
Epoch [1/10], Step [900/1875], Loss: 0.6178
Epoch [1/10], Step [1000/1875], Loss: 0.6015
Epoch [1/10], Step [1100/1875], Loss: 0.1161
Epoch [1/10], Step [1200/1875], Loss: 0.2946
Epoch [1/10], Step [1300/1875], Loss: 0.2689
Epoch [1/10], Step [1400/1875], Loss: 0.3746
Epoch [1/10], Step [1500/1875], Loss: 0.2356
Epoch [1/10], Step [1600/1875], Loss: 0.0904
Epoch [1/10], Step [1700/1875], Loss: 0.2226

References

【1】Stack Overflow Q&A: RuntimeError: Input type (torch.FloatTensor) and…
