- ONNX Runtime official documentation: Python | onnxruntime
- Environment configuration

| Component | Version | Notes |
| --- | --- | --- |
| GPU | RTX 3060 | |
| GPU driver | 531.14 | |
| CUDA | NVCUDA64.DLL 12.1.68 | official package version cuda_12.1.0_531.14_windows |
| cuDNN | 8.9.7.29_cuda12 | |
| onnxruntime | 1.18.0 | |
- Install onnxruntime:
```shell
# onnxruntime GPU inference environment. First install from the Tsinghua mirror,
# which also pulls in onnxruntime-gpu's dependencies automatically:
#   pip install onnxruntime-gpu==1.18.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
# Then uninstall onnxruntime-gpu and reinstall it once from the official
# onnxruntime-cuda-12 feed:
pip install onnxruntime-gpu==1.18.0 -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```
- Test program (replace the path with your own model path):
```python
import onnxruntime

# Confirm the installed version and that the GPU build is active
print(onnxruntime.__version__)
# print(onnxruntime.get_build_info())
print(onnxruntime.get_device())               # should report 'GPU'
print(onnxruntime.get_available_providers())  # should include 'CUDAExecutionProvider'

modelpath = 'G:/YOLO-testPrj/onnxgpu_link/modelfile/yolo8n.onnx'
MYprovider = ['CUDAExecutionProvider']
session = onnxruntime.InferenceSession(path_or_bytes=modelpath, providers=MYprovider)

# Verify that the session actually bound to the CUDA provider
print('use')
print(session.get_providers())
print(session.get_outputs())
print(session.get_provider_options())
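Once the session reports `CUDAExecutionProvider`, the next step is to feed it a real input. A YOLOv8n ONNX export typically expects a 1x3x640x640 float32 tensor in RGB order with values in [0, 1]. The helper below is a minimal, numpy-only sketch of that preprocessing; the function name `preprocess` and the plain nearest-neighbour resize are my own simplifications (real YOLOv8 preprocessing letterboxes the image to preserve aspect ratio). You would then pass the result to `session.run(None, {session.get_inputs()[0].name: blob})`.

```python
import numpy as np

def preprocess(image_bgr: np.ndarray, size: int = 640) -> np.ndarray:
    """Convert an HxWx3 uint8 BGR image into the 1x3xSxS float32 tensor
    a YOLOv8 ONNX export typically expects. Sketch only: uses a plain
    nearest-neighbour resize instead of YOLOv8's letterboxing."""
    h, w = image_bgr.shape[:2]
    # Nearest-neighbour resize via index sampling (avoids a cv2 dependency)
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = image_bgr[ys[:, None], xs[None, :]]            # (size, size, 3)
    rgb = resized[..., ::-1]                                 # BGR -> RGB
    chw = rgb.transpose(2, 0, 1).astype(np.float32) / 255.0  # HWC -> CHW, [0,1]
    return chw[None]                                         # batch dim -> (1,3,S,S)

dummy = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
blob = preprocess(dummy)
print(blob.shape, blob.dtype)  # (1, 3, 640, 640) float32
```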