RuntimeError: CUDA out of memory. Tried to allocate 326.00 MiB (GPU 0; 23.70 GiB total capacity; 19.91 GiB already allocated; 140.56 MiB free; 21.20 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation
The fix is to raise max_split_size_mb via the PYTORCH_CUDA_ALLOC_CONF environment variable (note the PYTORCH_ prefix; TORCH_CUDA_ALLOC_CONF is not read by PyTorch), e.g. by adding it to ~/.bashrc:
vim ~/.bashrc
export PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb=512"
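A minimal sketch of applying and verifying the setting (train.py is only a placeholder script name, not from these notes):
source ~/.bashrc
echo $PYTORCH_CUDA_ALLOC_CONF    # should print max_split_size_mb=512
nvidia-smi                       # check current GPU memory usage
PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb=512" python train.py    # or set it for a single run only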
tmux split-pane operations:
https://www.cnblogs.com/longbigbeard/p/9513491.html
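Quick reference for tmux's default pane-splitting bindings (prefix Ctrl-b, assuming it has not been rebound); see the linked post for details:
tmux                        # start a new session
# Ctrl-b %        split the current pane left/right
# Ctrl-b "        split the current pane top/bottom
# Ctrl-b arrows   move between panes
# Ctrl-b x        close the current pane
tmux split-window -h        # same left/right split from the command line
tmux split-window -v        # top/bottom split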