Problem description: after adding the CBAM attention module, running train.py raises the following error:
RuntimeError: adaptive_max_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.
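The error comes from CBAM's channel-attention branch, which typically uses an adaptive max pool whose CUDA backward pass has no deterministic implementation. Below is a minimal sketch of that branch (an assumption about a typical CBAM implementation; the exact code varies by repo) showing where the offending op appears:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Sketch of CBAM channel attention (hypothetical minimal version)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # The backward of adaptive_max_pool2d is nondeterministic on CUDA,
        # which is what trips torch.use_deterministic_algorithms(True).
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Shared MLP applied to both pooled descriptors, then summed.
        return self.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
```

On CPU this runs without complaint; the determinism error only fires when the backward pass of the max-pool branch executes on a CUDA device.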
Solution:
Locate the error:
In train.py, around line 324, find the following code:
# Backward
scaler.scale(loss).backward()
Add one line above the original code:
# Backward
torch.use_deterministic_algorithms(False)  # added line: disables the global determinism check
scaler.scale(loss).backward()
Re-run train.py.
The fix works!
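Note that the fix above turns determinism off for everything that follows, not just this one op. As the error message itself suggests, a gentler alternative is to keep deterministic algorithms enabled but downgrade unsupported ops to a warning (a sketch, assuming PyTorch >= 1.11 where the `warn_only` keyword is available):

```python
import torch

# Keep requesting deterministic algorithms, but only warn (instead of
# raising RuntimeError) when an op such as adaptive_max_pool2d_backward
# has no deterministic CUDA implementation.
torch.use_deterministic_algorithms(True, warn_only=True)
```

With this setting, ops that do have deterministic implementations still use them, so you lose reproducibility only for the unsupported ops rather than for the whole training run.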
Recommended blog:
【PyTorch】解决RuntimeError: adaptive_max_pool2d_backward_cuda ...(添加注意力机制)_ericdiii的博客-CSDN博客
Keep it up! Learn a little every day!!!