RuntimeError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:111 [kernel]
BackendSelect: fallthrough registered at …\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at …\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: fallthrough registered at …\aten\src\ATen\core\VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at …\aten\src\ATen\core\VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at …\aten\src\ATen\core\VariableFallbackKernel.cpp:43 [backend fallback]
AutogradXLA: fallthrough registered at …\aten\src\ATen\core\VariableFallbackKernel.cpp:47 [backend fallback]
Tracer: fallthrough registered at …\torch\csrc\jit\frontend\tracer.cpp:999 [backend fallback]
Autocast: fallthrough registered at …\aten\src\ATen\autocast_mode.cpp:250 [backend fallback]
Batched: registered at …\aten\src\ATen\BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at …\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
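Note what the registration table above actually says: the only real kernel for torchvision::nms is the CPU one (nms_kernel.cpp); every other entry is a backend fallback, and there is no CUDA kernel at all. That pattern usually means the installed torchvision wheel is a CPU-only build being used alongside a CUDA-enabled (or CUDA-expecting) torch. A minimal diagnostic sketch along those lines (the helper name `diagnose_nms_backend` is hypothetical, and the version-tag reasoning is an assumption, not an official API):

```python
import importlib.util

def diagnose_nms_backend():
    """Hypothetical helper: report why torchvision::nms may lack a CUDA
    kernel, based on the installed torch/torchvision builds."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    if not torch.cuda.is_available():
        # If torch itself cannot see CUDA, no CUDA op can be dispatched.
        return "torch has no usable CUDA; nms on the CUDA backend cannot work"
    if importlib.util.find_spec("torchvision") is None:
        return "torchvision is not installed"
    import torchvision
    # A '+cpu' build tag on torchvision (or a CUDA tag that does not
    # match torch's, e.g. +cu111 vs +cu102) points to mismatched wheels.
    return f"torch {torch.__version__} / torchvision {torchvision.__version__}"

print(diagnose_nms_backend())
```

If the build tags disagree, the usual remedy is to reinstall torch and torchvision together from the same CUDA-enabled wheel index so their tags match (the exact versions and CUDA tag depend on your setup; the `+cu111` pair above is only an example).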