Darknet YOLOv3 to Huawei OM Model Conversion

When converting the darknet model to caffe, the open-source project caffe-yolov3 was used to verify that the conversion was correct. However, caffe-yolov3 requires CUDA support by default. An RTX 3090 was tried first, but the RTX 3090 requires CUDA 11.1, which caffe does not support, so CUDA was downgraded to 10.1 and testing was done on an RTX 2080 Ti GPU. The environment setup and test procedure are recorded below:

  • GPU: RTX 2080 Ti
  • OS: Ubuntu 18.04 LTS
  • CUDA version: 10.1
  • cuDNN version: 7.6.5

This tutorial covers the following steps:

  • 1. Install the required dependencies
  • 2. Build and install caffe
  • 3. Build and install pycaffe
  • 4. Install darknet2caffe
  • 5. Install caffe-yolov3
  • 6. Convert the caffemodel to OM
  • 7. Validate the OM model

Environment Setup

1. Pull the official NVIDIA docker image

docker pull nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04

2. Create a docker launch script: caffe-nvidia.sh

# -v /home:/data mounts the host /home directory to /data inside the container
docker run \
        --gpus all \
        --name caffe-nvidia \
        --shm-size=1g \
        --ulimit memlock=-1 \
        --ulimit stack=67108864 \
        -it \
        -v /home:/data \
        docker.io/nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04

3. Check that the container is running

adlink@ADLINK-2080Ti:~$ docker ps
CONTAINER ID        IMAGE                                       COMMAND       CREATED      STATUS      PORTS   NAMES
520d507f29dc        nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04   "/bin/bash"   3 days ago   Up 3 days           caffe-nvidia-11

4. Enter the docker environment

adlink@ADLINK-2080Ti:~$ docker exec -it 520d507f29dc /bin/bash
root@520d507f29dc:/# ls
bin  boot  data  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@520d507f29dc:/#

5. Install the required dependencies

Run the following commands to install the required packages. If installation is slow, you can first switch apt to a mirror source.

apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
apt-get install --no-install-recommends libboost-all-dev
apt-get install libopenblas-dev liblapack-dev libatlas-base-dev
apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
apt-get install git cmake build-essential wget
apt-get install python-dev python-numpy

6. Set the CUDA environment variables

Run vim ~/.bashrc and append the following lines to the .bashrc file in your home directory:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.1/lib64
export PATH=$PATH:/usr/local/cuda-10.1/bin
export CUDA_HOME=/usr/local/cuda-10.1

Then run source ~/.bashrc in the terminal for the changes to take effect.

Install caffe

1. Download the caffe source

cd /opt                                       # install caffe under /opt
git clone https://github.com/twmht/caffe.git  # the upsample layer has not been merged into the official master branch, so clone a third-party fork that includes it
cd caffe
git checkout -b upsample origin/upsample      # switch to the upsample branch

2. Install dependencies

To fix the "CUDA_cublas_device_LIBRARY (ADVANCED)" error that appears when building caffe, upgrade the default cmake 3.10.2 to 3.14:

wget https://cmake.org/files/v3.14/cmake-3.14.0-Linux-x86_64.tar.gz

tar xzf cmake-3.14.0-Linux-x86_64.tar.gz

mv cmake-3.14.0-Linux-x86_64 /opt/cmake

ln -sf /opt/cmake/bin/* /usr/bin/

cmake --version

3. Build caffe

In the caffe directory, run:

mkdir build
cd build
cmake ..
make all -j8

4. Test caffe

After caffe is built, run the following command to test it:

make runtest -j8
[ RUN      ] NeuronLayerTest/3.TestSigmoid
[       OK ] NeuronLayerTest/3.TestSigmoid (0 ms)
[----------] 58 tests from NeuronLayerTest/3 (1530 ms total)

[----------] Global test environment tear-down
[==========] 2211 tests from 289 test cases ran. (288632 ms total)
[  PASSED  ] 2211 tests.
[100%] Built target runtest

Build and Install pycaffe

1. Build pycaffe

In the caffe directory, run:

make pycaffe -j8

echo 'export PYTHONPATH=/opt/caffe/python:$PYTHONPATH' >> ~/.bashrc

source ~/.bashrc

2. Verify the pycaffe interface

After pycaffe builds successfully, verify that the caffe package can be imported in python. First start the python interpreter:

python
Then import caffe:
>>> import caffe

If no error is raised, the caffe python interface was built correctly.
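If the import fails outside an interactive shell, the usual cause is that the pycaffe directory is not on the python path. The path manipulation that makes `import caffe` resolvable can be sketched as follows (the `/opt/caffe/` prefix matches the install location used in this guide; the actual import only succeeds inside the container where pycaffe was built):

```python
import sys

# caffe install prefix used in this guide
caffe_root = '/opt/caffe/'

# prepend the pycaffe package directory, the same trick darknet2caffe uses later
sys.path.insert(0, caffe_root + 'python')
print(sys.path[0])  # /opt/caffe/python

# the import below only works where pycaffe has been built:
# import caffe
```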

Issue 1: ImportError: No module named skimage.io

Solution:

python -m pip config set global.index-url https://mirrors.aliyun.com/pypi/simple
# if pip is not installed: apt install python-pip
# upgrade pip to the latest version: python -m pip install --upgrade pip
# make sure the pip here matches the python version you are using
python -m pip install -U scikit-image

Issue 2: ImportError: No module named google.protobuf.internal

Solution:

python -m pip install protobuf

Install darknet2caffe

1. Download darknet2caffe

git clone https://github.com/ChenYingpeng/darknet2caffe.git

2. Configure darknet2caffe

Modify the caffe_root path in darknet2caffe.py so that it points to caffe inside the container:

# The caffe module needs to be on the Python path;
#  we'll add it here explicitly.
caffe_root='/opt/caffe/' ## change this line
#os.chdir(caffe_root)
import sys
sys.path.insert(0,caffe_root+'python')

3. Convert the model

Download the original darknet model (extraction code: 28c2).

cd darknet2caffe
python darknet2caffe.py /data/adlink/yolov3colour-V1.4.0.cfg /data/adlink/yolov3colour-V1.4.0.weights yolov3colour-V1.4.0.prototxt yolov3colour-V1.4.0.caffemodel

Issue: ImportError: No module named torch

Solution:

python -m pip install torch future

Download the converted caffe model (extraction code: 4hbc).
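For reference, darknet2caffe works by walking the `[section]` blocks of the darknet .cfg file and emitting a matching caffe layer for each one. A minimal sketch of that first parsing step (the cfg snippet and `parse_cfg` helper below are illustrative, not darknet2caffe's actual code):

```python
def parse_cfg(text):
    """Parse darknet cfg text into a list of (section_name, options) pairs."""
    sections = []
    for line in text.splitlines():
        line = line.split('#')[0].strip()   # drop comments and whitespace
        if not line:
            continue
        if line.startswith('['):            # a new [section] starts a block
            sections.append((line.strip('[]'), {}))
        else:
            key, _, value = line.partition('=')
            sections[-1][1][key.strip()] = value.strip()
    return sections

# tiny illustrative snippet in darknet cfg syntax
cfg = """
[net]
width=768
height=768

[convolutional]
filters=42

[yolo]
classes=9
"""

sections = parse_cfg(cfg)
print([name for name, _ in sections])      # ['net', 'convolutional', 'yolo']
print(sections[2][1]['classes'])           # '9'
```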

Install caffe-yolov3

1. Download caffe-yolov3

git clone https://github.com/ChenYingpeng/caffe-yolov3.git

2. Build caffe-yolov3

Modify the CMakeLists.txt file according to the following diff:

--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -43,8 +43,8 @@ set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${PROJECT_OUTPUT_DIR}/lib)
 # build C/C++ interface
 include_directories(${PROJECT_INCLUDE_DIR} ${GIE_PATH}/include)
 include_directories(${PROJECT_INCLUDE_DIR}
-       /home/chen/caffe/include
-       /home/chen/caffe/build/include
+       /opt/caffe/include
+       /opt/caffe/build/include
 )


@@ -53,11 +53,11 @@ file(GLOB inferenceIncludes src/*.h )

 cuda_add_library(yolov3-plugin SHARED ${inferenceSources})
 target_link_libraries(yolov3-plugin
-       /home/chen/caffe/build/lib/libcaffe.so
+       /opt/caffe/build/lib/libcaffe.so
        /usr/lib/x86_64-linux-gnu/libglog.so
-       /usr/lib/x86_64-linux-gnu/libgflags.so.2
+       /usr/lib/x86_64-linux-gnu/libgflags.so.2.2
        /usr/lib/x86_64-linux-gnu/libboost_system.so
-       /usr/lib/x86_64-linux-gnu/libGLEW.so.1.13
+       /usr/lib/x86_64-linux-gnu/libGLEW.so.2.0
 )

Modify demo/demo.cpp:

--- a/demo/demo.cpp
+++ b/demo/demo.cpp
@@ -53,7 +53,7 @@ int main( int argc, char** argv )
     Mat img = imread(image_path);

     //detect
-    float thresh = 0.3;
+    float thresh = 0.5;
     std::vector<bbox_t> bbox_vec = detector.detect(img,thresh);

     //show detection results
@@ -74,9 +74,10 @@ int main( int argc, char** argv )
     }

     //show with opencv
-    namedWindow("show",CV_WINDOW_AUTOSIZE);
-    imshow("show",img);
-    waitKey(0);
+    //namedWindow("show",CV_WINDOW_AUTOSIZE);
+    //imshow("show",img);
+    //waitKey(0);
+    imwrite("result.jpg", img);

     LOG(INFO) << "done.";
     return 0;

Modify src/detector.h:

diff --git a/src/detector.h b/src/detector.h
index 826630a..1aff5db 100644
--- a/src/detector.h
+++ b/src/detector.h
@@ -38,5 +38,5 @@ private:
     vector<Blob<float>*> m_blobs;

     float m_thresh = 0.001;
-    int m_classes = 80; //coco classes
+    int m_classes = 9; //custom dataset classes
 };

Issue: No rule to make target '/usr/lib/x86_64-linux-gnu/libGLEW.so.2.0', needed by 'x86_64/lib/libyolov3-plugin.so'

Solution:

apt-get install -y libglew-dev

3. Test the caffe model

./build/x86_64/bin/demo ../darknet2caffe/yolov3colour-V1.4.0.prototxt  ../darknet2caffe/yolov3colour-V1.4.0.caffemodel ./images/ceshi.jpg

Convert the Caffemodel to OM

1. Modify yolov3colour-V1.4.0.prototxt

Modify yolov3colour-V1.4.0.prototxt according to the following diff:

--- yolov3colour-V1.4.0.prototxt        2021-01-19 15:04:11.923965612 +0800
+++ /data/adlink/yolov3colour_acl-V1.4.0.prototxt       2021-01-18 18:07:01.000000000 +0800
@@ -1,9 +1,16 @@
 name: "Darkent2Caffe"
 input: "data"
-input_dim: 1
-input_dim: 3
-input_dim: 768
-input_dim: 768
+input_shape {
+       dim: 1
+       dim: 3
+       dim: 768
+       dim: 768
+}
+input: "img_info"
+input_shape {
+       dim: 1
+       dim: 4
+}

 layer {
     bottom: "data"
@@ -3198,3 +3205,99 @@
         bias_term: true
     }
 }
+
+layer {
+       bottom: "layer82-conv"
+       top: "yolo1_coords"
+       top: "yolo1_obj"
+       top: "yolo1_classes"
+       name: "yolo1"
+       type: "Yolo"
+       yolo_param {
+               boxes: 3
+               coords: 4
+               classes: 9
+               yolo_version: "V3"
+               softmax: true
+               background: false
+    }
+}
+
+layer {
+       bottom: "layer94-conv"
+       top: "yolo2_coords"
+       top: "yolo2_obj"
+       top: "yolo2_classes"
+       name: "yolo2"
+       type: "Yolo"
+       yolo_param {
+               boxes: 3
+               coords: 4
+               classes: 9
+               yolo_version: "V3"
+               softmax: true
+               background: false
+       }
+}
+
+layer {
+       bottom: "layer106-conv"
+       top: "yolo3_coords"
+       top: "yolo3_obj"
+       top: "yolo3_classes"
+       name: "yolo3"
+       type: "Yolo"
+       yolo_param {
+               boxes: 3
+               coords: 4
+               classes: 9
+               yolo_version: "V3"
+               softmax: true
+               background: false
+       }
+}
+
+layer {
+       name: "detection_out3"
+       type: "YoloV3DetectionOutput"
+       bottom: "yolo1_coords"
+       bottom: "yolo2_coords"
+       bottom: "yolo3_coords"
+       bottom: "yolo1_obj"
+       bottom: "yolo2_obj"
+       bottom: "yolo3_obj"
+       bottom: "yolo1_classes"
+       bottom: "yolo2_classes"
+       bottom: "yolo3_classes"
+       bottom: "img_info"
+       top: "box_out"
+       top: "box_out_num"
+       yolov3_detection_output_param {
+               boxes: 3
+               classes: 9
+               relative: true
+               obj_threshold: 0.5
+               score_threshold: 0.5
+               iou_threshold: 0.3
+               pre_nms_topn: 512
+               post_nms_topn: 1024
+               biases_high: 17
+               biases_high: 15
+               biases_high: 22
+               biases_high: 18
+               biases_high: 24
+               biases_high: 23
+               biases_mid: 30
+               biases_mid: 27
+               biases_mid: 37
+               biases_mid: 33
+               biases_mid: 188
+               biases_mid: 93
+               biases_low: 225
+               biases_low: 114
+               biases_low: 271
+               biases_low: 135
+               biases_low: 336
+               biases_low: 172
+       }
+}
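The biases_high/mid/low values above are the anchor (prior box) sizes for the three detection scales. Assuming the standard YOLOv3 decoding, a raw network output (tx, ty, tw, th) at grid cell (cx, cy) is turned into a normalized box roughly like this; the helper below is an illustration of the math, not Huawei's operator code:

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, anchor_w, anchor_h,
               grid_w, grid_h, net_w, net_h):
    """Standard YOLOv3 box decoding to normalized center/size coordinates."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    bx = (sig(tx) + cx) / grid_w           # center x, relative to image width
    by = (sig(ty) + cy) / grid_h           # center y, relative to image height
    bw = anchor_w * math.exp(tw) / net_w   # width, scaled by the anchor prior
    bh = anchor_h * math.exp(th) / net_h
    return bx, by, bw, bh

# one anchor from the prototxt above: 17 x 15 pixels, 768x768 network input,
# a 96x96 grid as an example, cell (0, 0), all raw offsets zero
bx, by, bw, bh = decode_box(0, 0, 0, 0, 0, 0, 17, 15, 96, 96, 768, 768)
print(round(bw * 768), round(bh * 768))    # 17 15 -> the anchor itself
```

With zero offsets the decoded width/height reduce to the anchor sizes, which is why anchors matched to the training data matter for the converted model.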

2. Run the model conversion

atc --model=./yolov3colour-V1.4.0.prototxt --weight=./yolov3colour-V1.4.0.caffemodel --framework=0 --output=./yolov3_aipp_768_768 --soc_version=Ascend310 --insert_op_conf=./aipp_yolov3.cfg

The contents of aipp_yolov3.cfg are as follows:

aipp_op {
aipp_mode : static
related_input_rank : 0
input_format : YUV420SP_U8
src_image_size_w : 768
src_image_size_h : 768
crop : false
csc_switch : true
rbuv_swap_switch : false
matrix_r0c0 : 256
matrix_r0c1 : 0
matrix_r0c2 : 359
matrix_r1c0 : 256
matrix_r1c1 : -88
matrix_r1c2 : -183
matrix_r2c0 : 256
matrix_r2c1 : 454
matrix_r2c2 : 0
input_bias_0 : 0
input_bias_1 : 128
input_bias_2 : 128
var_reci_chn_0 : 0.0039216
var_reci_chn_1 : 0.0039216
var_reci_chn_2 : 0.0039216
}
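This static AIPP config takes YUV420SP input, converts it to RGB with the csc matrix, and normalizes each channel by 0.0039216 (about 1/255). Assuming the usual AIPP arithmetic of subtract the input bias, multiply by the fixed-point matrix, shift right by 8, then scale, the per-pixel math can be checked with the sketch below (an illustration of the config values, not Ascend's implementation):

```python
# csc matrix and biases copied from aipp_yolov3.cfg
M = [[256, 0, 359],
     [256, -88, -183],
     [256, 454, 0]]
bias = [0, 128, 128]
var_reci = 0.0039216  # ~1/255

def yuv_to_normalized_rgb(y, u, v):
    """Apply the AIPP csc matrix with a >>8 fixed-point shift, clamp, normalize."""
    src = [y - bias[0], u - bias[1], v - bias[2]]
    rgb = []
    for row in M:
        val = sum(m * s for m, s in zip(row, src)) >> 8   # fixed-point >> 8
        rgb.append(min(255, max(0, val)) * var_reci)
    return rgb

# mid-gray: Y=128 with neutral chroma maps to RGB (128, 128, 128) -> ~0.502
print([round(c, 3) for c in yuv_to_normalized_rgb(128, 128, 128)])
```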

Download the converted OM model (extraction code: 42er).

Validate the OM Model

1. Download the code

wget https://atlas-release.obs.cn-south-1.myhuaweicloud.com/MindX%20SDK/V100R020C10/ApiSamples/Ascend-apisamples_20.1.0_src.tar.gz
tar xvf Ascend-apisamples_20.1.0_src.tar.gz
cd src/Samples/InferObjectDetection

2. Modify the relevant files

Copy the generated om model to the data/models/yolov3 directory, then modify data/config/setup.config according to the following diff:

--- a/InferObjectDetection/data/config/setup.config
+++ b/InferObjectDetection/data/config/setup.config
@@ -5,8 +5,8 @@
 device_id = 0 #use the device to run the program

 #yolov3 model path
-model_path = ./data/models/yolov3/yolov3.om
+model_path = ./data/models/yolov3/yolov3_aipp_768_768.om

 #yolov3 model input width and height
-model_width = 416
-model_height = 416
+model_width = 768
+model_height = 768

Modify the data/models/yolov3/coco.names file:

@@ -1,81 +1,10 @@
-# This file is originally from https://github.com/pjreddie/darknet/blob/master/data/coco.names
-person
-bicycle
-car
-motorbike
-aeroplane
-bus
-train
-truck
-boat
-traffic light
-fire hydrant
-stop sign
-parking meter
-bench
-bird
-cat
-dog
-horse
-sheep
-cow
-elephant
-bear
-zebra
-giraffe
-backpack
-umbrella
-handbag
-tie
-suitcase
-frisbee
-skis
-snowboard
-sports ball
-kite
-baseball bat
-baseball glove
-skateboard
-surfboard
-tennis racket
-bottle
-wine glass
-cup
-fork
-knife
-spoon
-bowl
-banana
-apple
-sandwich
-orange
-broccoli
-carrot
-hot dog
-pizza
-donut
-cake
-chair
-sofa
-pottedplant
-bed
-diningtable
-toilet
-tvmonitor
-laptop
-mouse
-remote
-keyboard
-cell phone
-microwave
-oven
-toaster
-sink
-refrigerator
-book
-clock
-vase
-scissors
-teddy bear
-hair drier
-toothbrush
+# This file is originally from https://github.com/pjreddie/darknet/blob/master/data/coco.names
+green
+yellow
+blue
+red
+white
+orange
+black
+gray
+layer

Modify the InferObjectDetection/include/Yolov3Post.h file, changing the value of CLASS_NUM to 9:

--- a/InferObjectDetection/include/Yolov3Post.h
+++ b/InferObjectDetection/include/Yolov3Post.h
@@ -24,7 +24,7 @@ const int YOLOV3_CAFFE = 0;
 const int YOLOV3_TF = 1;
 const int YOLOV3_MINDSPORE = 2;

-const int CLASS_NUM = 80;
+const int CLASS_NUM = 9;
 const int BIASES_NUM = 18; // Yolov3 anchors, generate from train data, coco dataset
 const float g_biases[BIASES_NUM] = {10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326};
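The detection post-processing compares candidate boxes by intersection-over-union and suppresses duplicates above the iou_threshold (0.3 in the prototxt used here). A minimal IoU reference in python (illustrative, not the sample's C++ code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# two 10x10 boxes overlapping by a 5x5 corner: IoU = 25 / 175 ~= 0.143,
# below the 0.3 threshold, so both detections would be kept
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 0.143
```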

3. Build the code

bash build.sh

4. Run InferObjectDetection

Copy the test files provided by Urovo (优博讯) to the data directory:

cd dist
./main -i ../data/ceshi.jpg
HwHiAiUser@ecs-x86-adlink:~/youboxun_0111/src/Samples/InferObjectDetection/dist$ ./main -i ../data/ceshi.jpg
[Info ][2021-01-19 16:56:36:011932][ResourceManager.cpp InitResource:76] Initialized acl successfully.
[Info ][2021-01-19 16:56:36:497470][ResourceManager.cpp InitResource:85] Open device 0 successfully.
[Info ][2021-01-19 16:56:36:504986][ResourceManager.cpp InitResource:92] Created context for device 0 successfully
[Info ][2021-01-19 16:56:36:505045][ResourceManager.cpp InitResource:103] Init resource successfully.
[Info ][2021-01-19 16:56:36:505072][AclProcess.cpp InitResource:132] Create context successfully
[Info ][2021-01-19 16:56:36:505144][AclProcess.cpp InitResource:138] Create stream successfully
[Info ][2021-01-19 16:56:36:534015][AclProcess.cpp InitModule:100] Initialize dvppCommon_ successfully
[Info ][2021-01-19 16:56:36:534099][ModelProcess.cpp Init:191] ModelProcess:Begin to init instance.
[Info ][2021-01-19 16:56:36:914160][AclProcess.cpp InitModule:110] Initialize ModelProcess_ successfully
[Info ][2021-01-19 16:56:36:914331][AclProcess.cpp InitModule:116] Loaded label successfully.
[Info ][2021-01-19 16:56:36:954428][AclProcess.cpp YoloV3PostProcess:375] The number of output buffers of yolov3 model is 2
[Info ][2021-01-19 16:56:36:954568][AclProcess.cpp YoloV3PostProcess:393] #Obj0, box(931, 483.75, 970, 515)   confidence: 0.999023 label: green
[Info ][2021-01-19 16:56:36:954609][AclProcess.cpp YoloV3PostProcess:393] #Obj1, box(890, 481.25, 930.5, 512)   confidence: 0.999023 label: green
[Info ][2021-01-19 16:56:36:954625][AclProcess.cpp YoloV3PostProcess:393] #Obj2, box(967.5, 480, 1007, 513.5)   confidence: 0.999023 label: green
[Info ][2021-01-19 16:56:36:954639][AclProcess.cpp YoloV3PostProcess:393] #Obj3, box(850, 442.5, 891, 474.75)   confidence: 0.999023 label: green
[Info ][2021-01-19 16:56:36:954653][AclProcess.cpp YoloV3PostProcess:393] #Obj4, box(935, 448.75, 975, 482.25)   confidence: 0.998047 label: green
[Info ][2021-01-19 16:56:36:954667][AclProcess.cpp YoloV3PostProcess:393] #Obj5, box(845, 475, 885.5, 507.5)   confidence: 0.998047 label: green
[Info ][2021-01-19 16:56:36:954681][AclProcess.cpp YoloV3PostProcess:393] #Obj6, box(895, 448.75, 933, 482)   confidence: 0.998047 label: green
[Info ][2021-01-19 16:56:36:954695][AclProcess.cpp YoloV3PostProcess:393] #Obj7, box(939, 413.75, 977.5, 448.5)   confidence: 0.99707 label: green
[Info ][2021-01-19 16:56:36:954709][AclProcess.cpp YoloV3PostProcess:393] #Obj8, box(962.5, 232.5, 1007.5, 279.75)   confidence: 0.996094 label: green
[Info ][2021-01-19 16:56:36:954723][AclProcess.cpp YoloV3PostProcess:393] #Obj9, box(957.5, 280, 999, 323.75)   confidence: 0.996094 label: green
[Info ][2021-01-19 16:56:36:954737][AclProcess.cpp YoloV3PostProcess:393] #Obj10, box(1000, 267.5, 1041, 316.5)   confidence: 0.994141 label: green
[Info ][2021-01-19 16:56:36:954751][AclProcess.cpp YoloV3PostProcess:393] #Obj11, box(790, 577.5, 831, 603)   confidence: 0.999023 label: yellow
[Info ][2021-01-19 16:56:36:954765][AclProcess.cpp YoloV3PostProcess:393] #Obj12, box(790, 604, 833, 631.5)   confidence: 0.998047 label: yellow
[Info ][2021-01-19 16:56:36:954778][AclProcess.cpp YoloV3PostProcess:393] #Obj13, box(749, 599, 784, 621.5)   confidence: 0.998047 label: yellow
[Info ][2021-01-19 16:56:36:954792][AclProcess.cpp YoloV3PostProcess:393] #Obj14, box(712.5, 621, 747, 644)   confidence: 0.998047 label: yellow
[Info ][2021-01-19 16:56:36:954806][AclProcess.cpp YoloV3PostProcess:393] #Obj15, box(750, 622.5, 785.5, 644)   confidence: 0.99707 label: yellow
[Info ][2021-01-19 16:56:36:954819][AclProcess.cpp YoloV3PostProcess:393] #Obj16, box(752.5, 572.5, 785.5, 596)   confidence: 0.994141 label: yellow
[Info ][2021-01-19 16:56:36:954833][AclProcess.cpp YoloV3PostProcess:393] #Obj17, box(787.5, 630, 825.5, 654.5)   confidence: 0.994141 label: yellow
[Info ][2021-01-19 16:56:36:954847][AclProcess.cpp YoloV3PostProcess:393] #Obj18, box(677.5, 620, 710.5, 644)   confidence: 0.993164 label: yellow
[Info ][2021-01-19 16:56:36:954861][AclProcess.cpp YoloV3PostProcess:393] #Obj19, box(675, 592.5, 712.5, 617)   confidence: 0.991211 label: yellow
[Info ][2021-01-19 16:56:36:954874][AclProcess.cpp YoloV3PostProcess:393] #Obj20, box(637.5, 592.5, 675, 620)   confidence: 0.998047 label: blue
[Info ][2021-01-19 16:56:36:954898][AclProcess.cpp YoloV3PostProcess:393] #Obj21, box(636, 564, 674, 591.5)   confidence: 0.99707 label: blue
[Info ][2021-01-19 16:56:36:954912][AclProcess.cpp YoloV3PostProcess:393] #Obj22, box(637.5, 621, 675, 645)   confidence: 0.990234 label: blue
[Info ][2021-01-19 16:56:36:954925][AclProcess.cpp YoloV3PostProcess:393] #Obj23, box(910, 280, 955, 322.25)   confidence: 0.999023 label: red
[Info ][2021-01-19 16:56:36:954939][AclProcess.cpp YoloV3PostProcess:393] #Obj24, box(861, 280, 905, 321)   confidence: 0.999023 label: red
[Info ][2021-01-19 16:56:36:954953][AclProcess.cpp YoloV3PostProcess:393] #Obj25, box(912.5, 231.25, 958, 276.5)   confidence: 0.998047 label: red
[Info ][2021-01-19 16:56:36:954967][AclProcess.cpp YoloV3PostProcess:393] #Obj26, box(709, 275, 757.5, 321)   confidence: 0.998047 label: red
[Info ][2021-01-19 16:56:36:954981][AclProcess.cpp YoloV3PostProcess:393] #Obj27, box(812.5, 283.75, 857, 322.75)   confidence: 0.998047 label: red
[Info ][2021-01-19 16:56:36:954995][AclProcess.cpp YoloV3PostProcess:393] #Obj28, box(759, 182.5, 809, 229.75)   confidence: 0.99707 label: red
[Info ][2021-01-19 16:56:36:955009][AclProcess.cpp YoloV3PostProcess:393] #Obj29, box(862.5, 233.75, 908, 276.5)   confidence: 0.99707 label: red
[Info ][2021-01-19 16:56:36:955023][AclProcess.cpp YoloV3PostProcess:393] #Obj30, box(812.5, 238.75, 859.5, 279)   confidence: 0.99707 label: red
[Info ][2021-01-19 16:56:36:955037][AclProcess.cpp YoloV3PostProcess:393] #Obj31, box(707.5, 225, 755.5, 272)   confidence: 0.996094 label: red
[Info ][2021-01-19 16:56:36:955051][AclProcess.cpp YoloV3PostProcess:393] #Obj32, box(812.5, 195, 863, 236.875)   confidence: 0.996094 label: red
[Info ][2021-01-19 16:56:36:955065][AclProcess.cpp YoloV3PostProcess:393] #Obj33, box(759, 232.5, 809, 277)   confidence: 0.996094 label: red
[Info ][2021-01-19 16:56:36:955079][AclProcess.cpp YoloV3PostProcess:393] #Obj34, box(760, 280, 807, 320.25)   confidence: 0.995117 label: red
[Info ][2021-01-19 16:56:36:955092][AclProcess.cpp YoloV3PostProcess:393] #Obj35, box(710, 465, 750, 499)   confidence: 0.999023 label: white
[Info ][2021-01-19 16:56:36:955106][AclProcess.cpp YoloV3PostProcess:393] #Obj36, box(800, 471.25, 840, 507.5)   confidence: 0.999023 label: white
[Info ][2021-01-19 16:56:36:955120][AclProcess.cpp YoloV3PostProcess:393] #Obj37, box(709, 427.5, 750, 462.75)   confidence: 0.999023 label: white
[Info ][2021-01-19 16:56:36:955134][AclProcess.cpp YoloV3PostProcess:393] #Obj38, box(802.5, 435, 843, 469.5)   confidence: 0.998047 label: white
[Info ][2021-01-19 16:56:36:955148][AclProcess.cpp YoloV3PostProcess:393] #Obj39, box(762.5, 390, 799.5, 426.5)   confidence: 0.99707 label: white
[Info ][2021-01-19 16:56:36:955162][AclProcess.cpp YoloV3PostProcess:393] #Obj40, box(756, 426.25, 795, 460.25)   confidence: 0.995117 label: white
[Info ][2021-01-19 16:56:36:955176][AclProcess.cpp YoloV3PostProcess:393] #Obj41, box(757.5, 460, 795, 496)   confidence: 0.994141 label: white
[Info ][2021-01-19 16:56:36:955201][AclProcess.cpp YoloV3PostProcess:393] #Obj42, box(872.5, 607.5, 905.5, 632)   confidence: 0.996094 label: orange
[Info ][2021-01-19 16:56:36:955216][AclProcess.cpp YoloV3PostProcess:393] #Obj43, box(827.5, 627.5, 862, 655)   confidence: 0.994141 label: orange
[Info ][2021-01-19 16:56:36:955230][AclProcess.cpp YoloV3PostProcess:393] #Obj44, box(937.5, 630, 968, 655.5)   confidence: 0.993164 label: orange
[Info ][2021-01-19 16:56:36:955244][AclProcess.cpp YoloV3PostProcess:393] #Obj45, box(867.5, 632.5, 902, 655.5)   confidence: 0.991211 label: orange
[Info ][2021-01-19 16:56:36:955257][AclProcess.cpp YoloV3PostProcess:393] #Obj46, box(832.5, 599, 868, 626.5)   confidence: 0.988281 label: orange
[Info ][2021-01-19 16:56:36:955271][AclProcess.cpp YoloV3PostProcess:393] #Obj47, box(911, 600, 944, 625.5)   confidence: 0.969727 label: orange
[Info ][2021-01-19 16:56:36:955285][AclProcess.cpp YoloV3PostProcess:393] #Obj48, box(907.5, 624, 939, 650)   confidence: 0.96875 label: orange
[Info ][2021-01-19 16:56:36:955302][AclProcess.cpp YoloV3PostProcess:393] #Obj49, box(835, 572.5, 870, 599)   confidence: 0.96582 label: orange
[Info ][2021-01-19 16:56:36:955315][AclProcess.cpp YoloV3PostProcess:393] #Obj50, box(664, 431.25, 702.5, 469.75)   confidence: 0.998047 label: black
[Info ][2021-01-19 16:56:36:955329][AclProcess.cpp YoloV3PostProcess:393] #Obj51, box(627.5, 427.5, 662.5, 465.5)   confidence: 0.99707 label: black
[Info ][2021-01-19 16:56:36:955344][AclProcess.cpp YoloV3PostProcess:393] #Obj52, box(626, 467.5, 666, 501.5)   confidence: 0.995117 label: black
[Info ][2021-01-19 16:56:36:955368][AclProcess.cpp YoloV3PostProcess:393] #Obj53, box(660, 392.5, 702, 430.25)   confidence: 0.994141 label: black
[Info ][2021-01-19 16:56:36:955389][AclProcess.cpp YoloV3PostProcess:393] #Obj54, box(667.5, 471.25, 705.5, 504)   confidence: 0.992188 label: black
[Info ][2021-01-19 16:56:36:955407][AclProcess.cpp YoloV3PostProcess:393] #Obj55, box(612.5, 251.25, 655, 288)   confidence: 0.999023 label: gray
[Info ][2021-01-19 16:56:36:955426][AclProcess.cpp YoloV3PostProcess:393] #Obj56, box(664, 247.5, 705, 281.75)   confidence: 0.999023 label: gray
[Info ][2021-01-19 16:56:36:955443][AclProcess.cpp YoloV3PostProcess:393] #Obj57, box(660, 205, 702, 243.75)   confidence: 0.999023 label: gray
[Info ][2021-01-19 16:56:36:955462][AclProcess.cpp YoloV3PostProcess:393] #Obj58, box(662.5, 283.75, 700.5, 318.5)   confidence: 0.998047 label: gray
[Info ][2021-01-19 16:56:36:955480][AclProcess.cpp YoloV3PostProcess:393] #Obj59, box(617.5, 287.5, 657, 323)   confidence: 0.992188 label: gray
[Info ][2021-01-19 16:56:36:955499][AclProcess.cpp YoloV3PostProcess:393] #Obj60, box(575, 511.25, 1051, 665.5)   confidence: 0.999023 label: layer
[Info ][2021-01-19 16:56:36:955517][AclProcess.cpp YoloV3PostProcess:393] #Obj61, box(590, 101.25, 1081, 337)   confidence: 0.996094 label: layer
[Info ][2021-01-19 16:56:36:955537][AclProcess.cpp YoloV3PostProcess:393] #Obj62, box(612.5, 328.75, 1042, 521)   confidence: 0.972656 label: layer
[Info ][2021-01-19 16:56:36:955828][main.cpp Process:171] [Process Delay] cost: 41.479ms        fps: 24.1086
[Info ][2021-01-19 16:56:36:956659][AclProcess.cpp Release:50] Destroy stream successfully
[Info ][2021-01-19 16:56:36:956707][ModelProcess.cpp DeInit:146] Model[yolov3][0] deinit begin
[Info ][2021-01-19 16:56:36:974189][ModelProcess.cpp DeInit:185] Model[yolov3][0] deinit success
[Info ][2021-01-19 16:56:37:076075][ResourceManager.cpp Release:44] Finalized acl successfully.
HwHiAiUser@ecs-x86-adlink:~/youboxun_0111/src/Samples/InferObjectDetection/dist$