1. First common pitfall: your target label dict contains lists or other non-tensor data types. If you don't handle this, you will get errors like:
(1) 'dict' object has no attribute 'cuda'
(2) 'list' object has no attribute 'cuda'
Solutions:
(1) https://blog.csdn.net/york1996/article/details/103164696
(2) Stack the list into a single tensor with torch.stack() (see the sketch below).
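A minimal sketch of solution (2), assuming the labels form a dict whose values are lists of equally-shaped tensors (the keys "boxes" and "labels" are hypothetical placeholders, not from the original post):

    import torch

    # Hypothetical target dict whose values are Python lists of tensors.
    target = {
        "boxes":  [torch.rand(4), torch.rand(4)],     # one tensor per object
        "labels": [torch.tensor(1), torch.tensor(2)],
    }

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    # A dict has no .cuda() method, so move each value individually;
    # lists must first be stacked into one tensor with torch.stack().
    target = {k: torch.stack(v).to(device) for k, v in target.items()}

    print({k: (v.shape, v.device) for k, v in target.items()})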
2. Second common pitfall: your data has been moved to the GPU, but your network weights are still on the CPU. This raises a device-mismatch error, typically along the lines of: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
Solution:
Define every network layer inside def __init__(self). If a layer is instantiated directly from torch.nn inside forward, that layer's weights stay on the CPU. For example:
Incorrect example:
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 96, 11, stride=4)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(96, 256, 5, stride=2)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.conv3 = nn.Conv2d(256, 384, 3, padding=1)
        self.conv4 = nn.Conv2d(384, 384, 3, padding=1)
        self.conv5 = nn.Conv2d(384, 512, 3, padding=1)
        ...
        ...
    def forward(self, x):
        bn = x.shape[0]
        x = self.pool1(F.relu(self.conv1(x)))
        x = nn.BatchNorm2d(96)(x)  # instantiating nn.BatchNorm2d here leaves its weights stuck on the CPU
        x1_1 = F.relu(self.conv_1_1(x))
        ...
        ...
Corrected example:
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 96, 11, stride=4)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(96, 256, 5, stride=2)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.conv3 = nn.Conv2d(256, 384, 3, padding=1)
        self.conv4 = nn.Conv2d(384, 384, 3, padding=1)
        self.conv5 = nn.Conv2d(384, 512, 3, padding=1)
        self.BatchNorm2d = nn.BatchNorm2d(96)  # to run on the GPU, every nn layer must be defined in __init__
        ...
        ...
    def forward(self, x):
        bn = x.shape[0]
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.BatchNorm2d(x)  # corrected: the layer is registered, so model.to('cuda:0') now works
        x1_1 = F.relu(self.conv_1_1(x))
        ...
        ...
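A short usage sketch for the corrected Net above, assuming the elided layers (e.g. conv_1_1) are also defined in __init__; batch size and image resolution here are arbitrary assumptions:

    import torch

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    model = Net().to(device)  # moves conv1..conv5 and the registered BatchNorm2d weights together
    x = torch.rand(8, 3, 224, 224, device=device)

    # All registered parameters now sit on the same device as the input,
    # so the forward pass no longer raises the CPU/GPU mismatch error.
    print(next(model.parameters()).device)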