1 Problem
When using a PyTorch DataLoader inside a Docker container, the data-loading worker processes may crash with a shared-memory related error (typically a bus error in a DataLoader worker).
2 Solution
Running df -h inside the container shows that /dev/shm is only 64M (the Docker default). Meanwhile the DataLoader is configured with a fairly large num_workers, and the worker processes exchange data through shared memory, so the 64M segment is quickly exhausted.
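You can also read the container's configured shared-memory limit from the host side. A minimal sketch, where my_container is a placeholder for your container name (the value is reported in bytes; 67108864 corresponds to the 64M default):

docker inspect --format '{{.HostConfig.ShmSize}}' my_container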
As the PyTorch documentation for its Docker images notes: "Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g. for multithreaded data loaders) the default shared memory segment size that container runs with is not enough, and you should increase shared memory size either with --ipc=host or --shm-size command line options to nvidia-docker run."
Solution:
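Recreate the container with a larger shared-memory segment. A minimal sketch, assuming pytorch/pytorch:latest as an example image and 8g as an example size (choose a size that fits your num_workers and batch size; add GPU flags such as --gpus all if your setup needs them):

# Option 1: give the container a dedicated, larger /dev/shm
docker run --shm-size=8g -it pytorch/pytorch:latest bash

# Option 2: share the host's IPC namespace (and thus the host's /dev/shm)
docker run --ipc=host -it pytorch/pytorch:latest bash

# Verify inside the new container
df -h /dev/shm

Note that the shared-memory size is fixed when the container is created, so an already running container typically has to be removed and started again with one of the flags above.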