Installing OpenVINO 2022 (2.0) on Linux and Running the Samples

        Installing OpenVINO can feel like an endless series of pitfalls. Having now completed the installation two different ways, I'd like to share my experience.

The new OpenVINO release requires installing both the model-conversion tools and the runtime itself. Download link below:

Download the Intel® Distribution of OpenVINO™ Toolkit (intel.cn)

1. Installing the development tools

On the download page, select the framework modules you need and install them with pip.
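As a sketch, assuming you want the ONNX and TensorFlow 2 converters (the extras in brackets are examples only; pick the ones matching your frameworks), the pip install looks like:

```shell
# Install the OpenVINO development tools (Model Optimizer, benchmark_app, etc.).
# The bracketed extras pull in converters for specific frameworks;
# [onnx,tensorflow2] here is just an example -- adjust to your needs.
pip install "openvino-dev[onnx,tensorflow2]"

# Sanity check: the Model Optimizer entry point should now be on PATH.
mo --help
```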

2. Installing the runtime

On the same page, choose a runtime installation method that suits you. (I first installed from the archive file; the second time I installed from git. The git install ran into problems, which I resolved by copying the python folder out of the archive.)
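A minimal sketch of the archive route (the archive name and target directory below are illustrative; check the actual file you downloaded):

```shell
# Extract the downloaded runtime archive and move it into place.
# File name and install path are examples only.
tar -xzf l_openvino_toolkit_*.tgz
sudo mkdir -p /opt/intel
sudo mv l_openvino_toolkit_* /opt/intel/openvino_2022

# Make the OpenVINO libraries and Python bindings visible in this shell.
source /opt/intel/openvino_2022/setupvars.sh
```

Remember that `setupvars.sh` only affects the current shell; add it to your shell profile if you want it applied automatically.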

### OpenVINO on Linux Installation and Usage Guide

#### Prerequisites

Before installing the Intel® Distribution of OpenVINO™ toolkit, make sure the system meets all prerequisites. For Ubuntu-based systems a supported release is required, such as Ubuntu 20.04 LTS or an earlier version like Ubuntu 18.04.3 LTS[^2]. The hardware must also meet the OpenVINO requirements.

#### Docker Image Setup for OpenVINO

For users who prefer containerized environments, the updated Docker image from DockerHub simplifies setup considerably, particularly for those running NVIDIA GPUs alongside CPUs in an Ubuntu environment:

```bash
docker pull openvino/ubuntu20_dev:latest
docker run -it --rm --net=host --name openvino openvino/ubuntu20_dev:latest
```

This sequence pulls the latest available OpenVINO development image built for Ubuntu 20 and runs it interactively; no additional configuration is needed thanks to the automatic detection mechanisms provided by the official images[^1].

#### Installing Directly on the Host Machine

Alternatively, install directly on the host by downloading the installer package from the official website, following the detailed instructions in the [official documentation](https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux.html):

- Download the appropriate offline/online installer.
- Extract the files and run the installation script.
- After installation, configure the environment variables with `source /opt/intel/openvino/bin/setupvars.sh`.
#### Post-Installation Configuration

After a successful installation, configuring the environment correctly ensures seamless integration between the different components, including the model-conversion utilities for TensorFlow*, Caffe*, and ONNX. These are key features of Intel's distribution, which targets inference acceleration across a range of platforms, including Movidius Myriad X VPU devices.

#### Verification Steps

To verify that everything has been set up properly, run the sample applications bundled in the `/opt/intel/openvino/deployment_tools/demo` directory, where multiple pre-configured, ready-to-use demos showcase capabilities ranging from object recognition over video streams to facial landmark estimation.

Related questions:

1. What specific steps need attention when setting up GPU support during an OpenVINO installation?
2. How does the Model Optimizer convert models from third-party frameworks into the IR format used for deployment?
3. Can you give examples of deploying trained neural networks with the OpenVINO runtime APIs?
4. Are there functional differences between running OpenVINO via Docker and a native setup?