Error:
RuntimeError: adaptive_max_pool2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.
Solution:
Taking YOLOv5 v7.0 as an example: go to line 321 of train.py, or search directly for scaler.scale(loss).backward(), and turn off the deterministic-algorithms setting right before that call. Add code as follows.
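A minimal sketch of the change, assuming the surrounding train.py scope where torch is already imported and scaler and loss are already defined; only the first line is new:

```python
# Disable deterministic algorithms just before the backward pass,
# since adaptive_max_pool2d's CUDA backward has no deterministic kernel.
torch.use_deterministic_algorithms(False)

scaler.scale(loss).backward()  # existing YOLOv5 line, unchanged
```

Alternatively, as the error message itself suggests, you can keep determinism enabled where it is available and only warn on unsupported operations by setting torch.use_deterministic_algorithms(True, warn_only=True) instead.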
Cause:
The problem occurs during backpropagation. Adding the SE attention mechanism earlier did not trigger this error, because SE only applies channel attention (built on average pooling), whereas the added CBAM attention mechanism also applies spatial attention and introduces max-pooling branches. Those branches call adaptive_max_pool2d, whose CUDA backward pass has no deterministic implementation, which is exactly the operation named in the error once torch.use_deterministic_algorithms(True) is set.
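As an illustration (not part of YOLOv5 itself), a few lines are enough to reproduce the error on a CUDA-capable machine:

```python
import torch
import torch.nn as nn

torch.use_deterministic_algorithms(True)

x = torch.randn(1, 8, 16, 16, device="cuda", requires_grad=True)
y = nn.AdaptiveMaxPool2d(1)(x)  # the max-pooling branch CBAM adds on top of SE's average pooling
y.sum().backward()              # raises: adaptive_max_pool2d_backward_cuda does not have a deterministic implementation
```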