Model Inference with MNN (Ubuntu)

MNN is an open-source framework from Alibaba for deploying models on mobile devices. In the early stages of porting a model, you usually need to go through a round of code debugging, and that process is much more convenient on a PC. This post records how to set up an MNN runtime environment on Ubuntu, together with the main steps for running inference with MNN.

The model used throughout is an MNIST model trained with TensorFlow and converted to a .mnn file; the inference program depends only on the OpenCV and MNN libraries.

Installing MNN

For an introduction to MNN and how to install and compile it, see: MNN introduction, installation, and compilation.

Environment Configuration

  • If you prefer Qt, you can write a .pro file for qmake along the following lines:
CONFIG += c++11

TARGET = mnist
CONFIG += console
CONFIG -= app_bundle
TEMPLATE = app
SOURCES += main.cpp

#opencv
INCLUDEPATH += /usr/local/include/opencv \
               /usr/local/include

LIBS += -L/home/yinliang/software/opencv-3.4.1/build/lib \
    -lopencv_stitching -lopencv_objdetect \
    -lopencv_superres -lopencv_videostab \
    -lopencv_imgcodecs \
    -lopencv_calib3d -lopencv_features2d -lopencv_highgui \
    -lopencv_video \
    -lopencv_photo -lopencv_ml -lopencv_imgproc -lopencv_flann -lopencv_core
# not linked here: -lippicv -lopencv_shape -lopencv_videoio

#mnn
INCLUDEPATH += /home/yinliang/software/MNN/include \
    /home/yinliang/software/MNN/include/MNN \
    /home/yinliang/software/MNN/schema/current \
    /home/yinliang/software/MNN/tools \
    /home/yinliang/software/MNN/tools/cpp \
    /home/yinliang/software/MNN/source \
    /home/yinliang/software/MNN/source/backend \
    /home/yinliang/software/MNN/source/core \
    /home/yinliang/software/MNN/source/cv \
    /home/yinliang/software/MNN/source/math \
    /home/yinliang/software/MNN/source/shape \
    /home/yinliang/software/MNN/3rd_party \
    /home/yinliang/software/MNN/3rd_party/imageHelper

LIBS += -L/home/yinliang/software/MNN/build
LIBS += -lMNN
  • If you prefer CMake, you can base your CMakeLists.txt on the following:
cmake_minimum_required(VERSION 3.10)
project(mnist)

set(CMAKE_CXX_STANDARD 11)

find_package(OpenCV REQUIRED)
set(MNN_DIR /home/yinliang/software/MNN)
include_directories(${MNN_DIR}/include)
include_directories(${MNN_DIR}/include/MNN)
include_directories(${MNN_DIR}/tools)
include_directories(${MNN_DIR}/tools/cpp)
include_directories(${MNN_DIR}/source)
include_directories(${MNN_DIR}/source/backend)
include_directories(${MNN_DIR}/source/core)

link_directories(${MNN_DIR}/build)
add_executable(mnist main.cpp)
target_link_libraries(mnist MNN ${OpenCV_LIBS})
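
Both configurations assume that libMNN.so has already been built under MNN/build and that OpenCV is installed; adjust the hard-coded paths to match your own machine.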

Code Walkthrough

#include "Backend.hpp"
#include "Interpreter.hpp"
#include "MNNDefine.h"
#include "Interpreter.hpp"
#include "Tensor.hpp"
#include <math.h>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <stdio.h>
using namespace MNN;
using namespace cv;

int main(void)
{
    // fill in your own test image and .mnn model file paths
    std::string image_name = ".../test.jpg";
    const char* model_name = ".../mnist.mnn";
    // scheduling and backend configuration parameters
    int forward = MNN_FORWARD_CPU;
    // int forward = MNN_FORWARD_OPENCL;
    int precision  = 2;
    int power      = 0;
    int memory     = 0;
    int threads    = 1;
    int INPUT_SIZE = 28;
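    // Note: precision/power/memory are cast below to MNN::BackendConfig enums;
    // e.g. precision 2 corresponds to Precision_Low and 0 to Precision_Normal.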

    cv::Mat raw_image    = cv::imread(image_name.c_str());
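    if (raw_image.empty())
    {
        // imread returns an empty Mat when the path is wrong; fail early
        fprintf(stderr, "failed to read image: %s\n", image_name.c_str());
        return -1;
    }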
    //imshow("image", raw_image);
    int raw_image_height = raw_image.rows;
    int raw_image_width  = raw_image.cols;
    cv::Mat image;
    cv::resize(raw_image, image, cv::Size(INPUT_SIZE, INPUT_SIZE));
    // 1. Create the Interpreter from a file on disk: static Interpreter* createFromFile(const char* file);
    std::shared_ptr<Interpreter> net(Interpreter::createFromFile(model_name));
    MNN::ScheduleConfig config;
    // 2. Scheduling configuration.
    // numThread sets the desired concurrency, but the actual thread count and
    // parallel efficiency do not depend on numThread alone.
    // The primary backend used for inference is chosen via type (CPU by default);
    // when it does not support an operator in the model, the fallback backend
    // given by backupType is used instead.
    config.numThread = threads;
    config.type      = static_cast<MNNForwardType>(forward);
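    // config.backupType = MNN_FORWARD_CPU; // optional: the fallback backend mentioned above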
    MNN::BackendConfig backendConfig;
    // 3. Backend configuration:
    // memory, power, and precision express preferences for memory use, power draw,
    // and numeric precision respectively.
    backendConfig.precision = (MNN::BackendConfig::PrecisionMode)precision;
    backendConfig.power = (MNN::BackendConfig::PowerMode) power;
    backendConfig.memory = (MNN::BackendConfig::MemoryMode) memory;
    config.backendConfig = &backendConfig;
    // 4. Create the session
    auto session = net->createSession(config);
    net->releaseModel();
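    // releaseModel() frees the serialized model buffer; the session created above keeps what it needs.
    // (If the input shape had to change at runtime, MNN's net->resizeTensor(tensor, dims)
    // followed by net->resizeSession(session) could be called before running again.)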

    clock_t start = clock();
    // preprocessing
    image.convertTo(image, CV_32FC3);
    image = image / 255.0f;
    // 5. Feed the input.
    // Wrap the image in a host tensor with TensorFlow (NHWC) layout. OpenCV stores
    // CV_32FC3 data as interleaved HWC floats, so a single memcpy fills it correctly;
    // the NHWC -> NCHW conversion happens later inside copyFromHostTensor.
    std::vector<int> dims{1, INPUT_SIZE, INPUT_SIZE, 3};
    auto nhwc_Tensor = MNN::Tensor::create<float>(dims, NULL, MNN::Tensor::TENSORFLOW);
    auto nhwc_data   = nhwc_Tensor->host<float>();
    auto nhwc_size   = nhwc_Tensor->size();
    ::memcpy(nhwc_data, image.data, nhwc_size);

    std::string input_tensor = "data"; // this model's input name (unused below: nullptr fetches the sole input)
    // Get the input tensor and copy the data in. With this copy-based approach the
    // user only needs to care about the layout of the tensor they created themselves;
    // copyFromHostTensor handles layout conversion (if needed) and copies between
    // backends (if needed).
    auto inputTensor  = net->getSessionInput(session, nullptr);
    inputTensor->copyFromHostTensor(nhwc_Tensor);

    // 6. Run the session
    net->runSession(session);

    // 7. Fetch the output tensor by name
    std::string output_tensor_name0 = "dense1_fwd";
    MNN::Tensor *tensor_scores  = net->getSessionOutput(session, output_tensor_name0.c_str());

    // The output may live on a non-CPU backend, so copy it into a host tensor before reading.
    MNN::Tensor tensor_scores_host(tensor_scores, tensor_scores->getDimensionType());
    tensor_scores->copyToHostTensor(&tensor_scores_host);

    // post processing steps
    auto scores_dataPtr  = tensor_scores_host.host<float>();

    // softmax: accumulate the normalizer over the 10 class logits
    float exp_sum = 0.0f;
    for (int i = 0; i < 10; ++i)
    {
        exp_sum += expf(scores_dataPtr[i]);
    }
    // argmax over the softmax probabilities
    int   idx      = 0;
    float max_prob = 0.0f;
    for (int i = 0; i < 10; ++i)
    {
        float prob = expf(scores_dataPtr[i]) / exp_sum;
        if (prob > max_prob)
        {
            max_prob = prob;
            idx      = i;
        }
    }
    printf("the result is %d\n", idx);
    printf("preprocess + inference took %.3f ms\n",
           (clock() - start) * 1000.0 / CLOCKS_PER_SEC);

    return 0;
}
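
The output name "dense1_fwd" (and the input name "data") are specific to this particular converted model. If you are unsure what your model's tensors are called, a small helper along the lines of the sketch below can list them via MNN's getSessionInputAll/getSessionOutputAll; printTensorNames is just a hypothetical name used for illustration:

#include <cstdio>
#include "Interpreter.hpp"
#include "Tensor.hpp"

// Sketch: print every input/output tensor name and shape of a session.
// Assumes net and session were created as in main() above.
static void printTensorNames(MNN::Interpreter* net, MNN::Session* session)
{
    for (auto& kv : net->getSessionInputAll(session))
    {
        printf("input : %s, dims:", kv.first.c_str());
        for (int d : kv.second->shape()) printf(" %d", d);
        printf("\n");
    }
    for (auto& kv : net->getSessionOutputAll(session))
    {
        printf("output: %s, dims:", kv.first.c_str());
        for (int d : kv.second->shape()) printf(" %d", d);
        printf("\n");
    }
}

Calling it right after createSession (and before releaseModel) confirms whether the "data" and "dense1_fwd" names used above match your model.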

References:

  1. MNN official documentation
  2. https://github.com/xindongzhang/MNN-APPLICATIONS/tree/master/applications/mnist