Error when running eval.py:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.80 GiB. GPU 0 has a total capacty of 39.39 GiB of which 671.94 MiB is free. Process 934575 has 38.72 GiB memory in use. Of the allocated memory 34.85 GiB is allocated by PyTorch, and 3.35 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Attempt 1: clear the GPU cache
torch.cuda.empty_cache()
Conclusion: did not help; the error message was unchanged. (empty_cache() only returns cached, reserved-but-unallocated blocks to the driver; the 34.85 GiB held by live tensors is unaffected.)
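To see why clearing the cache alone is not enough, it helps to compare allocated vs. reserved memory before deciding on a fix. A minimal sketch (assuming torch is installed; the function name report_cuda_memory is made up for illustration):

```python
import torch

def report_cuda_memory(device: int = 0) -> dict:
    """Return allocated/reserved byte counts for one GPU (zeros without CUDA)."""
    if not torch.cuda.is_available():
        return {"allocated": 0, "reserved": 0}
    # Release cached (reserved-but-unallocated) blocks back to the driver.
    # Memory held by live tensors stays allocated.
    torch.cuda.empty_cache()
    return {
        "allocated": torch.cuda.memory_allocated(device),
        "reserved": torch.cuda.memory_reserved(device),
    }
```

If "allocated" stays close to the GPU's capacity after empty_cache(), the memory is held by live tensors and only reducing batch size, model size, or fragmentation (as in Attempt 2) will help.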
Attempt 2: set max_split_size_mb to 256 MB (Linux)
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256
Conclusion: eval.py ran successfully.
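Besides exporting the variable in the shell, the same setting can be applied from inside the script itself. A sketch, assuming it runs before torch initializes its CUDA allocator (i.e. before the first CUDA tensor is created; placing it before `import torch` is safest):

```python
import os

# Equivalent to: export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256
# Caps the size of blocks the caching allocator will split,
# which reduces fragmentation at the cost of some allocation speed.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:256"
```

This is convenient when the script is launched by a job scheduler where editing the shell environment is awkward.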