RuntimeError: Cannot re-initialize CUDA in forked subprocess.

Problem:

When using Python's built-in multiprocessing module to run a torch model on CUDA and speed up prediction, the following error is raised: RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
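For context, a minimal sketch of the failing pattern looks like this (the toy `nn.Linear` model and the `predict` helper are hypothetical stand-ins for the real prediction code; on Linux the default start method is fork, so running this on a CUDA machine reproduces the error):

```python
import torch
import multiprocessing as mp

model = torch.nn.Linear(8, 2).cuda()   # CUDA is initialized in the parent process

def predict(x):
    # Each forked worker touches CUDA again -> "Cannot re-initialize CUDA ..."
    with torch.no_grad():
        return model(x.cuda()).cpu()

if __name__ == '__main__':
    # On Linux, Pool forks its workers by default
    with mp.Pool(2) as pool:
        print(pool.map(predict, [torch.randn(4, 8) for _ in range(4)]))
```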

Cause:

In Python 3, CUDA tensors can only be shared between processes when the spawn or forkserver start method is used. On Linux, multiprocessing creates child processes with fork by default, which the CUDA runtime does not support.
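You can check which start method is in effect on your platform, and which ones are available, with the standard library:

```python
import multiprocessing as mp

print(mp.get_start_method())        # 'fork' on Linux; 'spawn' on Windows and macOS
print(mp.get_all_start_methods())   # e.g. ['fork', 'spawn', 'forkserver'] on Linux
```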

fork and spawn are different ways of creating a child process; the table below summarizes the differences, and the short sketch after the table illustrates them:

| fork | spawn |
| --- | --- |
| The child inherits the parent's state | Nothing is inherited: a new child process is built from scratch, the parent's data is copied into the child's address space, and the child gets its own Python interpreter |
| Variables have the same id as in the parent process | Modules are re-imported at the start of each child process |
| The child sees variables defined in the `if __name__ == '__main__'` block | The child does not see them (that block is not re-run on import) |
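The following small sketch (the `counter` variable is a hypothetical example; it runs on platforms where fork is available) shows the practical difference: a fork child sees the value assigned in the `__main__` block, while a spawn child re-imports the module and only sees the module-level value:

```python
import multiprocessing as mp

counter = 0  # module level: re-created when a spawned child re-imports the module

def show(method):
    # A fork child inherits the parent's memory; a spawn child starts from a fresh import
    print(method, '-> counter =', counter)

if __name__ == '__main__':
    counter = 42  # only the parent (and its fork children) see this assignment
    for method in ('fork', 'spawn'):
        if method not in mp.get_all_start_methods():
            continue  # 'fork' is not available on Windows
        ctx = mp.get_context(method)
        p = ctx.Process(target=show, args=(method,))
        p.start()
        p.join()
```

On Linux this prints `fork -> counter = 42` followed by `spawn -> counter = 0`.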

Solution:

```python
import torch.multiprocessing

if __name__ == '__main__':
    torch.multiprocessing.set_start_method('spawn')
```
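A fuller sketch of how the fix slots into multi-process prediction is given below. The worker function, toy model, and tensor shapes are assumptions for illustration; the essential points are that `set_start_method('spawn')` runs once inside the `__main__` guard before any worker is started, and that each child initializes its own CUDA context:

```python
import torch
import torch.multiprocessing as mp

def predict(rank, state_dict):
    # Each spawned child sets up CUDA for itself
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = torch.nn.Linear(8, 2).to(device)     # toy model stands in for the real one
    model.load_state_dict(state_dict)
    x = torch.randn(4, 8, device=device)
    with torch.no_grad():
        print(f'worker {rank}: output shape {tuple(model(x).shape)}')

if __name__ == '__main__':
    mp.set_start_method('spawn')                 # must run before any worker is created
    state = torch.nn.Linear(8, 2).state_dict()   # CPU state built in the parent
    workers = [mp.Process(target=predict, args=(rank, state)) for rank in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

If changing the global start method is undesirable (for example inside a library), `multiprocessing.get_context('spawn')` gives a spawn-only context without affecting the rest of the program.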

References:

- https://britishgeologicalsurvey.github.io/science/python-forking-vs-spawn/
- https://www.pythonf.cn/read/65459
- https://stackoverflow.com/questions/64095876/multiprocessing-fork-vs-spawn

