PyTorch C++ Deployment: Error Fixes

Contents

1. Only the versions between 2017 and 2019 (inclusive) are supported!

2. Cannot find cuDNN library.  Turning the option off

3. Downloading older versions of libtorch


Below are the companion video tutorials:

PyTorch quick hands-on course: 0_Pytorch实战前言_哔哩哔哩_bilibili

PyTorch segmentation hands-on course: 介绍一个图像分割的网络搭建利器,Segmentation model PyTorch_哔哩哔哩_bilibili


While deploying with C++, once the demo was written, configuring the project produced the following error:

1. Only the versions between 2017 and 2019 (inclusive) are supported!

C:\Qt\Tools\CMake_64\share\cmake-3.21\Modules\CMakeTestCUDACompiler.cmake:56: error: The CUDA compiler
  "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.3/bin/nvcc.exe"
is not able to compile a simple test program.
It fails with the following output:
Change Dir: D:/AI/Learn/engineer/build-cppdemo-Desktop_Qt_5_15_1_MSVC2019_64bit-Release/CMakeFiles/CMakeTmp

Run Build Command(s):C:/PROGRA~1/MICROS~1/2022/COMMUN~1/Common7/IDE/COMMON~1/MICROS~1/CMake/Ninja/ninja.exe cmTC_b9b7c &&
[1/2] Building CUDA object CMakeFiles\cmTC_b9b7c.dir\main.cu.obj
FAILED: CMakeFiles/cmTC_b9b7c.dir/main.cu.obj
C:\PROGRA~1\NVIDIA~2\CUDA\v11.3\bin\nvcc.exe -c D:\AI\Learn\engineer\build-cppdemo-Desktop_Qt_5_15_1_MSVC2019_64bit-Release\CMakeFiles\CMakeTmp\main.cu -o CMakeFiles\cmTC_b9b7c.dir\main.cu.obj
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include\crt/host_config.h(160): fatal error C1189: #error:  -- unsupported Microsoft Visual Studio version! Only the versions between 2017 and 2019 (inclusive) are supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.
main.cu
ninja: build stopped: subcommand failed.

Roughly, this means that this CUDA version only supports Visual Studio 2017 through 2019 as the host compiler, whereas I had VS2022 installed locally. After installing VS2019 and reconfiguring the Qt Kits, the problem was solved.

When reconfiguring, click the Re-detect button in the Compilers section of the Kits settings, and the newly installed VS2019 compilers will be detected.
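As an aside, the error text itself mentions that nvcc's '-allow-unsupported-compiler' flag can override the version check. If you would rather keep VS2022 (at your own risk, as the message says), a minimal, untested sketch is to make sure that flag reaches nvcc before CMake probes the CUDA compiler, for example via CMAKE_CUDA_FLAGS; the project name cppdemo is taken from the log above, everything else here is illustrative:

    # Hypothetical workaround (untested in this post): keep VS2022 and relax
    # nvcc's host-compiler version check, as the error text itself suggests.
    # Reinstalling VS2019, as done above, remains the safer fix.
    cmake_minimum_required(VERSION 3.18)

    # Put the flag in the cache before the CUDA language is enabled, so the
    # CMakeTestCUDACompiler probe already compiles with it.
    set(CMAKE_CUDA_FLAGS "-allow-unsupported-compiler" CACHE STRING "Flags passed to nvcc")

    project(cppdemo LANGUAGES CXX CUDA)

    # The same effect from the command line (or Qt Creator's initial CMake
    # parameters):
    #   cmake -DCMAKE_CUDA_FLAGS=-allow-unsupported-compiler -S <source> -B <build>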

Then, when running CMake again, the configuration still failed. The log is as follows:

2. Cannot find cuDNN library.  Turning the option off

Running C:\Qt\Tools\CMake_64\bin\cmake.exe -S D:/AI/Learn/engineer/cppdemo -B D:/AI/Learn/engineer/build-cppdemo-Desktop_Qt_5_15_1_MSVC2019_64bit-Release in D:\AI\Learn\engineer\build-cppdemo-Desktop_Qt_5_15_1_MSVC2019_64bit-Release.
-- Caffe2: CUDA detected: 11.3
-- Caffe2: CUDA nvcc is: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.3/bin/nvcc.exe
-- Caffe2: CUDA toolkit directory: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.3
-- Caffe2: Header version is: 11.3
-- Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH) 
CMake Warning at D:/AI/Learn/engineer/libtorch_win1_13_1_gpu/share/cmake/Caffe2/public/cuda.cmake:120 (message):
  Caffe2: Cannot find cuDNN library.  Turning the option off
Call Stack (most recent call first):
  D:/AI/Learn/engineer/libtorch_win1_13_1_gpu/share/cmake/Caffe2/Caffe2Config.cmake:92 (include)
  D:/AI/Learn/engineer/libtorch_win1_13_1_gpu/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
  CMakeLists.txt:10 (find_package)


CMake Warning at D:/AI/Learn/engineer/libtorch_win1_13_1_gpu/share/cmake/Caffe2/public/cuda.cmake:214 (message):
  Failed to compute shorthash for libnvrtc.so
Call Stack (most recent call first):
  D:/AI/Learn/engineer/libtorch_win1_13_1_gpu/share/cmake/Caffe2/Caffe2Config.cmake:92 (include)
  D:/AI/Learn/engineer/libtorch_win1_13_1_gpu/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
  CMakeLists.txt:10 (find_package)


-- Autodetected CUDA architecture(s):  7.5
-- Added CUDA NVCC flags for: -gencode;arch=compute_75,code=sm_75
CMake Error at D:/AI/Learn/engineer/libtorch_win1_13_1_gpu/share/cmake/Caffe2/Caffe2Config.cmake:100 (message):
  Your installed Caffe2 version uses cuDNN but I cannot find the cuDNN
  libraries.  Please set the proper cuDNN prefixes and / or install cuDNN.
Call Stack (most recent call first):
  D:/AI/Learn/engineer/libtorch_win1_13_1_gpu/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
  CMakeLists.txt:10 (find_package)


-- Configuring incomplete, errors occurred!
See also "D:/AI/Learn/engineer/build-cppdemo-Desktop_Qt_5_15_1_MSVC2019_64bit-Release/CMakeFiles/CMakeOutput.log".
See also "D:/AI/Learn/engineer/build-cppdemo-Desktop_Qt_5_15_1_MSVC2019_64bit-Release/CMakeFiles/CMakeError.log".
CMake process exited with exit code 1.

Elapsed time: 00:04.

It says cuDNN is not installed, so let's download it. The download page is:

cuDNN Download | NVIDIA Developer

Pick the package that matches your CUDA version, download it, and copy its contents into the CUDA toolkit directory (that is, the cudnn*.dll, cudnn*.h and cudnn*.lib files from the archive go into the toolkit's bin, include and lib\x64 folders respectively, e.g. under C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3).

 

After copying, verify that the installation succeeded.

Seeing two PASS results means the installation succeeded. Re-run CMake on the demo project.

It now configures and runs normally.
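For reference, the warnings and errors above all point at a find_package call on line 10 of the demo's CMakeLists.txt, which this post does not show. Below is a minimal sketch of what such a file might look like for this setup; the project name and libtorch path are taken from the logs, while the cuDNN paths, main.cpp, and the exact version numbers are illustrative assumptions rather than the author's actual file:

    cmake_minimum_required(VERSION 3.18)
    project(cppdemo LANGUAGES CXX CUDA)   # the section 1 log shows CUDA is enabled in the real project too

    # Let find_package(Torch) locate TorchConfig.cmake inside the extracted
    # libtorch package (path taken from the log above).
    list(APPEND CMAKE_PREFIX_PATH "D:/AI/Learn/engineer/libtorch_win1_13_1_gpu")

    # Optional alternative to copying cuDNN into the CUDA toolkit directory:
    # pre-set the two variables reported as missing in the "Could NOT find
    # CUDNN" line above (these paths are illustrative).
    # set(CUDNN_INCLUDE_PATH "D:/cudnn/include" CACHE PATH "cuDNN headers")
    # set(CUDNN_LIBRARY_PATH "D:/cudnn/lib/x64/cudnn.lib" CACHE FILEPATH "cuDNN import library")

    find_package(Torch REQUIRED)          # the call the warnings above point at

    add_executable(cppdemo main.cpp)      # main.cpp: the demo source, not shown here
    target_link_libraries(cppdemo PRIVATE ${TORCH_LIBRARIES})
    set_property(TARGET cppdemo PROPERTY CXX_STANDARD 17)

    # At run time on Windows, the DLLs from libtorch_win1_13_1_gpu/lib must sit
    # next to cppdemo.exe or be on PATH.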

3. Downloading older versions of libtorch

how could i get old version of libtorch , thanks · Issue #40961 · pytorch/pytorch · GitHub 
