Based on this blog post: https://blog.csdn.net/weixin_32393347/article/details/104395179
Problem
The following error is raised during network training:
CUDA out of memory. Tried to allocate 2.10 GiB (GPU 0; 14.76 GiB total capacity; 1.06 GiB already allocated; 138.44 MiB free; 18.32 MiB cached)
Solution
Reduce the batch_size, or release unneeded cached memory by adding the following before the code that raises the error:
if hasattr(torch.cuda, 'empty_cache'):
    torch.cuda.empty_cache()
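The batch_size reduction can also be automated: catch the OOM error and retry with a smaller batch. A minimal sketch of that fallback loop follows; `run_with_fallback` and `fake_step` are hypothetical names, and the simulated step stands in for a real PyTorch training step (PyTorch reports CUDA OOM as a RuntimeError whose message contains "out of memory").

```python
def run_with_fallback(train_step, batch_size, min_batch_size=1):
    """Retry train_step with a halved batch size whenever it runs out of memory."""
    while batch_size >= min_batch_size:
        try:
            return train_step(batch_size)
        except RuntimeError as e:
            # Re-raise anything that is not an out-of-memory error.
            if "out of memory" not in str(e):
                raise
            batch_size //= 2  # halve the batch and retry
    raise RuntimeError("out of memory even at the minimum batch size")

# Simulated train step: pretend any batch above 8 samples exhausts GPU memory.
def fake_step(bs):
    if bs > 8:
        raise RuntimeError("CUDA out of memory. Tried to allocate ...")
    return bs

print(run_with_fallback(fake_step, 32))  # falls back 32 -> 16 -> 8, prints 8
```

In a real training script you would also call torch.cuda.empty_cache() inside the except branch before retrying, so the cached blocks from the failed attempt are returned to the allocator.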