Doing deep learning on my laptop's underpowered GPU, I kept hitting this error:
RuntimeError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 2.00 GiB total capacity; 200.61 MiB already allocated; 56.88 MiB free; 216.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I didn't want to compromise by shrinking the batch size, and restarting the Jupyter kernel didn't help much; it kept hanging anyway.
Eventually I found a fix that works reliably:
1. Open cmd.
2. Run nvidia-smi and note the PID of the python process.
3. Run taskkill /pid xxxx -f to kill that process (where xxxx is the PID you found).
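As a small sketch, step 2 can be automated by parsing the process table at the bottom of nvidia-smi's output. The function name, regex, and sample output below are my own illustration, not part of the original post; the sample mimics the usual process-table row layout (GPU, GI, CI, PID, Type, process name, memory):

```python
import re

def python_pids(smi_output: str) -> list[int]:
    """Extract PIDs of processes named 'python' from nvidia-smi process-table text."""
    pids = []
    for line in smi_output.splitlines():
        # Rows look like: |    0   N/A  N/A     12345      C   python    200MiB |
        m = re.search(r"\|\s+\d+\s+\S+\s+\S+\s+(\d+)\s+\S+\s+(\S+)", line)
        if m and "python" in m.group(2).lower():
            pids.append(int(m.group(1)))
    return pids

# Hypothetical sample of the nvidia-smi process table:
sample = """\
|    0   N/A  N/A     12345      C   python                            200MiB |
|    0   N/A  N/A     23456      C   C:\\Windows\\nvcontainer.exe        10MiB |
"""
print(python_pids(sample))  # [12345]
```

Once you have the PID (12345 in this made-up sample), step 3 is just `taskkill /pid 12345 -f` in the same cmd window.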
Example: