1. Install the OpenVINO environment
Download the Intel Distribution of OpenVINO Toolkit
Follow the link above to install the OpenVINO model-conversion environment.
2. Change to the directory containing mo_onnx.py and run:
python mo_onnx.py --input_model (path to the .onnx file) --model_name (output path and model name)
(openvino) C:\Users\yangrui.cheng\AppData\Local\Continuum\anaconda3\envs\pad\Lib\site-packages\openvino\tools\mo>python mo_onnx.py --input_model D:\shell\resnet50.onnx --model_name D:\source\openvino
Check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2023_bu_IOTG_OpenVINO-2022-3&content=upg_all&medium=organic or on https://github.com/openvinotoolkit/openvino
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: D:\source\openvino.xml
[ SUCCESS ] BIN file: D:\source\openvino.bin
The output above shows a successful conversion: the Model Optimizer generated the IR model as an .xml (network topology) and .bin (weights) pair.
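Once the IR is generated, it can be loaded for inference with the OpenVINO Runtime API 2.0, as the conversion log recommends. Below is a minimal sketch, assuming the .xml/.bin pair produced above, an installed `openvino` Python package (2022.1 or later), and a standard ResNet-50 input shape; the paths and shape are placeholders to adapt to your setup.

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO API 2.0

# Load the IR produced by mo_onnx.py (placeholder path; the .bin file
# next to the .xml is picked up automatically)
core = Core()
model = core.read_model("D:/source/openvino.xml")

# Compile for a target device ("CPU", "GPU", ...)
compiled = core.compile_model(model, device_name="CPU")

# ResNet-50 typically expects a 1x3x224x224 float32 input; adjust if
# your model was exported with a different shape
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference and fetch the first output tensor
result = compiled([input_tensor])[compiled.output(0)]
print(result.shape)
```

Note that the API 1.0 classes (`IECore`, `ExecutableNetwork`) still work with IR v11, but the v2.0 `Core` interface shown here is the one the conversion log points to.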