Code written with PyTorch 1.1 often hits the following error when calling the GPU (CUDA):
RuntimeError: Expected object of backend CUDA but got backend CPU for argument
The cause is the same as for the error reported by newer PyTorch versions (e.g. 1.6):
RuntimeError: CUDA error: an illegal memory access was encountered
Solution:
https://blog.csdn.net/weixin_44414948/article/details/109894479
Cause:
The model (model) and the input data (input_image, input_label) were not all moved to the GPU (CUDA).
Solution:
Move model, criterion, input_image, and input_label all to cuda. Example code:
model = model.cuda()
criterion = criterion.cuda()
input_image = input_image.cuda()
input_label = input_label.cuda()
or
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
criterion = criterion.to(device)
input_image = input_image.to(device)
input_label = input_label.to(device)
Note: input_image and input_label must be converted to torch Tensors before being moved to cuda, otherwise an error will be raised!
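The steps above can be put together into one minimal runnable sketch. The layer and variable names here (a single nn.Linear, random numpy inputs) are placeholders for illustration only; the sketch falls back to CPU when no GPU is present, so the same code runs either way:

```python
import numpy as np
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move the model's parameters and the loss module to the chosen device.
model = nn.Linear(4, 2).to(device)
criterion = nn.CrossEntropyLoss().to(device)

# Raw numpy data must first become torch Tensors, then be moved to the device.
image_np = np.random.rand(8, 4).astype(np.float32)
label_np = np.random.randint(0, 2, size=(8,))

input_image = torch.from_numpy(image_np).to(device)
input_label = torch.from_numpy(label_np).long().to(device)

# All tensors now live on the same device, so no backend mismatch occurs.
output = model(input_image)
loss = criterion(output, input_label)
print(output.device, loss.item())
```

If any one of model, input_image, or input_label is left on the CPU, the forward pass or the loss computation raises exactly the backend-mismatch error described above.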