Defect Detection with OpenVINO
Collecting and Labeling Data
- Capture images and store them in separate subdirectories according to class:
- good, which holds images of acceptable parts;
- defective, which holds images of parts with defects.
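The directory layout above maps directly to a labeled file list. A minimal sketch, assuming the subdirectory names good/defective from the text and PNG files as in the frames shown later:

```python
from pathlib import Path

def collect_samples(root: str):
    """Walk the good/ and defective/ subdirectories and return (path, label) pairs."""
    samples = []
    for label in ("good", "defective"):
        for img in sorted((Path(root) / label).glob("*.png")):
            samples.append((str(img), label))
    return samples
```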
Building the Deep Learning Network
- Start from a network that has already been trained; here we use SqueezeNet for transfer learning.
- Since we only need two classes, we can keep the early layers of SqueezeNet and rebuild just the last few layers for transfer learning.
Training the Network
- Split the data into training (70%), validation (20%), and test (10%) sets.
- Train the network; a GPU can be used to speed up training.
- Test the trained network.
- Save the trained network as the ONNX model xuDnnet.onnx.
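The 70/20/10 split above can be sketched as a plain random split (the text does not specify stratified sampling per class, which would also be a reasonable choice):

```python
import random

def split_dataset(samples, seed=42):
    """Shuffle and split into 70% training / 20% validation / 10% test."""
    rng = random.Random(seed)            # fixed seed for a reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.7 * n)
    n_val = int(0.2 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```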
Optimizing and Deploying the Network with OpenVINO
- Optimize the xuDnnet.onnx model with the Model Optimizer (mo):
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model=xuDnnet.onnx --output_dir=. --model_name=xuDnnet.fp16 --data_type=FP16
- Write the classification inference program classification_sample.py.
- Test the optimized model with the test data.
Validating with an image of a good part; the result is classified as good:
xu@HPUbuntu:~/openvino_demo/DefectD$ python classification_sample.py -m xuDnnet.fp16.xml -i images/good/frame51.png --labels xuDnnet-labels.txt
Using TensorFlow backend.
[ INFO ] Creating Inference Engine
[ INFO ] Loading network files:
xuDnnet.fp16.xml
xuDnnet.fp16.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image images/good/frame51.png is resized from (128, 128) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ] Processing output blob
[ INFO ] Top 10 results:
Image images/good/frame51.png
classid probability
------- -----------
Good 0.9911501
Defect 0.0088499
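The classid/probability table can be read as a softmax over the network's two raw class scores, paired with the names from xuDnnet-labels.txt. A sketch of that mapping (label strings assumed from the output above; whether the softmax lives in the model or in the script is not shown in the text):

```python
import math

def top_results(scores, labels):
    """Softmax the raw class scores and pair them with labels, highest first."""
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return sorted(zip(labels, (e / total for e in exps)), key=lambda t: -t[1])
```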
Validating with an image of a defective part; the result is classified as defective:
xu@HPUbuntu:~/openvino_demo/DefectD$ python classification_sample.py -m xuDnnet.fp16.xml -i images/defective/frame13.png --labels xuDnnet-labels.txt
Using TensorFlow backend.
[ INFO ] Creating Inference Engine
[ INFO ] Loading network files:
xuDnnet.fp16.xml
xuDnnet.fp16.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image images/defective/frame13.png is resized from (128, 128) to (227, 227)
[ INFO ] Batch size is 1
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ] Processing output blob
[ INFO ] Top 10 results:
Image images/defective/frame13.png
classid probability
------- -----------
Defect 0.9999893
Good 0.0000107
Performance Evaluation
Evaluating the performance of this algorithm on the CPU:
Processor: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
Memory: 16GB
System: Ubuntu 18.04 LTS virtual machine
python /opt/intel/openvino/deployment_tools/tools/benchmark_tool/benchmark_app.py -m xuDnnet.fp16.xml -i testImg1.png -d CPU
[Step 1/11] Parsing and validating input arguments
[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version............. 2.1.42025
[ INFO ] Device info
CPU
MKLDNNPlugin............ version 2.1
Build................... 42025
[Step 3/11] Reading the Intermediate Representation network
[ INFO ] Read network took 29.94 ms
[Step 4/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 5/11] Configuring input of the model
[Step 6/11] Setting device configuration
[Step 7/11] Loading the model to the device
[ INFO ] Load network took 448.04 ms
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'data' precision U8, dimensions (NCHW): 1 3 227 227
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Infer Request 0 filling
[ INFO ] Fill input 'data' with random values (image is expected)
[ INFO ] Infer Request 1 filling
[ INFO ] Fill input 'data' with random values (image is expected)
[ INFO ] Infer Request 2 filling
[ INFO ] Fill input 'data' with random values (image is expected)
[ INFO ] Infer Request 3 filling
[ INFO ] Fill input 'data' with random values (image is expected)
[Step 10/11] Measuring performance (Start inference asyncronously, 4 inference requests using 4 streams for CPU, limits: 60000 ms duration)
[Step 11/11] Dumping statistics report
Count: 13232 iterations
Duration: 60018.62 ms
Latency: 17.27 ms
Throughput: 220.46 FPS
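As a sanity check on the report, the throughput figure is simply the iteration count divided by the wall-clock duration:

```python
count = 13232            # iterations from the benchmark_app report
duration_ms = 60018.62   # measured duration from the report
throughput = count / (duration_ms / 1000.0)
print(f"{throughput:.2f} FPS")  # matches the reported 220.46 FPS
```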