pytorch-errors

0.

RuntimeError: save_for_backward can only save input or output tensors, but argument 0 doesn't satisfy this condition


When writing a custom Function & Module where the Module needs backward, the arguments passed to `save_for_backward` must be Variables (inputs or outputs of `forward`), not raw Tensors.
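A minimal sketch of a custom Function that saves an input correctly. This uses the current static-method API (the old 0.x API took `self` and required wrapping inputs in `Variable`); `Square` is a made-up example, not from the original post:

```python
import torch
from torch.autograd import Function

class Square(Function):
    @staticmethod
    def forward(ctx, x):
        # save_for_backward only accepts tensors that are
        # inputs or outputs of forward
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

x = torch.tensor([3.0], requires_grad=True)
y = Square.apply(x)
y.backward()
print(x.grad)  # tensor([6.])
```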


1. 

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed

The target labels fall outside the valid range [0, n_classes). Remap or clamp them, e.g. `lab[lab >= n_classes] = 0`
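A small sketch of the remapping fix (the concrete values here are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

n_classes = 5
logits = torch.randn(4, n_classes)
labels = torch.tensor([0, 2, 7, 4])  # 7 >= n_classes would trip the assertion

# remap out-of-range targets into [0, n_classes)
labels[labels >= n_classes] = 0
loss = F.cross_entropy(logits, labels)
```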


2. RuntimeError: std::bad_cast pytorch

check data types

e.g.

Variable( torch.from_numpy(data) ).float().cuda()

Variable( torch.from_numpy(label) ).long().cuda()
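A CPU-only sketch of the same casts (the `.cuda()` calls above are dropped so it runs anywhere; `Variable` is no longer needed in current PyTorch):

```python
import numpy as np
import torch

data = np.random.rand(2, 3)         # numpy defaults to float64
label = np.array([1, 0])

x = torch.from_numpy(data).float()  # model inputs: FloatTensor (float32)
y = torch.from_numpy(label).long()  # classification targets: LongTensor
```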


3. RuntimeError: tensors are on different GPUs

Some part of the computation (e.g. the model) is not on the GPU while the data is; move both to the same device.
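A minimal sketch of keeping model and data on one device (falls back to CPU when no GPU is present):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(3, 2).to(device)    # parameters on `device`
x = torch.randn(4, 3, device=device)  # data on the same device
out = model(x)
```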


4. RuntimeError: CUDNN_STATUS_BAD_PARAM

Check that the input and output channel counts of each layer match the tensors flowing through it.
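A sketch of the mismatch on CPU, where the same bug surfaces as a plain RuntimeError instead of a cuDNN code (the layer sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
ok = conv(torch.randn(1, 3, 32, 32))   # 3 input channels: matches
try:
    conv(torch.randn(1, 1, 32, 32))    # 1 channel: does not match in_channels
except RuntimeError as e:
    mismatch = str(e)
```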


5. THCudaCheck FAIL file=/b/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.c line=79 error=2 : out of memory
Segmentation fault

Running out of GPU memory while loading weights can crash like this; see:

https://discuss.pytorch.org/t/segmentation-fault-when-loading-weight/1381/8
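One common mitigation (an assumption based on the linked thread, not a quote from it) is to load the checkpoint onto the CPU first with `map_location`, then move the model to the GPU afterwards; an in-memory buffer stands in for the checkpoint file here:

```python
import io
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
buf = io.BytesIO()
torch.save(model.state_dict(), buf)  # stand-in for a saved checkpoint
buf.seek(0)

# load onto the CPU instead of the GPU the weights were saved from
state = torch.load(buf, map_location="cpu")
model.load_state_dict(state)
```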


6. RuntimeError: CHECK_ARG(input->nDimension == output->nDimension) failed at torch/csrc/cudnn/Conv.cpp:275

The input data's number of dimensions differs from what the model expects.
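The classic case is a single image fed to a Conv2d without a batch dimension; a sketch of the fix with `unsqueeze`:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
img = torch.randn(3, 32, 32)   # (C, H, W): no batch dimension
batch = img.unsqueeze(0)       # (1, C, H, W): the NCHW shape Conv2d expects
out = conv(batch)
```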


7. torch.utils.data Dataset / DataLoader collate errors

File "//anaconda3/lib/python3.6/site-packages/torch/functional.py", line 60, in stack
    return torch.cat(inputs, dim, out=out)
TypeError: cat received an invalid combination of arguments - got (list, int, out=torch.ByteTensor), but expected one of:
 * (sequence[torch.ByteTensor] seq)
 * (sequence[torch.ByteTensor] seq, int dim)


TypeError: cat received an invalid combination of arguments - got (list, int), but expected one of:
 * (sequence[torch.ByteTensor] seq)
 * (sequence[torch.ByteTensor] seq, int dim)
      didn't match because some of the arguments have invalid types: (list, int)


Important: each call to `__getitem__` should return the same data types.

Convert everything to a common dtype in the Dataset, then cast to the desired dtype in the training loop.

The concatenate/stack operation in the default collate function only works on items of the same dtype.
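A sketch of a Dataset that batches cleanly because every item has fixed dtypes (`ConsistentDataset` and its contents are made-up for illustration):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ConsistentDataset(Dataset):
    def __len__(self):
        return 4

    def __getitem__(self, i):
        # every item has the same dtypes, so the default collate
        # function can stack them into a batch
        x = torch.full((3,), float(i), dtype=torch.float32)
        y = torch.tensor(i, dtype=torch.long)
        return x, y

xb, yb = next(iter(DataLoader(ConsistentDataset(), batch_size=2)))
```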


8.   File "/home/wenyu/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/wenyu/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)

RuntimeError: CUDNN_STATUS_MAPPING_ERROR


The number of classes produced by the model may not match what the loss function's targets expect.
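A quick sanity check that the output width and the targets agree before the loss is computed (the sizes here are illustrative assumptions):

```python
import torch
import torch.nn as nn

n_classes = 10
head = nn.Linear(16, n_classes)
logits = head(torch.randn(8, 16))
labels = torch.randint(0, n_classes, (8,))

# the output width and the label range must agree
assert logits.shape[1] == n_classes
assert int(labels.max()) < n_classes
loss = nn.CrossEntropyLoss()(logits, labels)
```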


9. RuntimeError: CUDNN_STATUS_INTERNAL_ERROR

The model and the data may be on different GPUs; move both to the same device, as in error 3.



