Snapdragon Neural Processing Engine SDK Reference Guide (22)



6.5.2 C Tutorial - Build the Sample

Introduction

This tutorial demonstrates how to build a C sample application that executes a neural network model on a host PC or on a target device.

Note: Although this sample code does not perform any error checking, users are strongly encouraged to check for errors when using the SNPE APIs.

Most applications will follow the pattern below when using a neural network:

  • Get Available Runtime
  • Load Network
  • Set Network Builder Options
  • Load Network Inputs
    - Using ITensors
    - Using User Buffers
  • Execute Network and Process Output
    - Using ITensors
    - Using User Buffers
Snpe_Runtime_t runtime = checkRuntime();
Snpe_DlContainer_Handle_t containerHandle = loadContainerFromFile(dlc);
Snpe_SNPE_Handle_t snpeHandle = setBuilderOptions(containerHandle, runtime, useUserSuppliedBuffers);
Snpe_TensorMap_Handle_t inputTensorMapHandle = loadInputTensor(snpeHandle, fileLine); // ITensor
loadInputUserBuffer(applicationInputBuffers, snpeHandle, fileLine); // User Buffer
executeNetwork(snpeHandle, inputTensorMapHandle, OutputDir, inputListNum); // ITensor
executeNetwork(snpeHandle, inputMapHandle, outputMapHandle, applicationOutputBuffers, OutputDir, inputListNum); // User Buffer
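For orientation, here is a minimal sketch of how these helpers might be combined in a main() function for the ITensor path, including the handle cleanup that the snippets in this tutorial omit. The model path, input line, and output directory are placeholders, and the *_Delete cleanup calls are an assumption based on the C API's per-handle naming convention, not code from the SDK sample.

// Minimal, illustrative main() combining the helpers defined below (ITensor path)
int main()
{
    std::string dlc = "model.dlc";       // assumed model path
    std::string fileLine = "input.raw";  // assumed line from an input list file
    Snpe_Runtime_t runtime = checkRuntime();
    Snpe_DlContainer_Handle_t containerHandle = loadContainerFromFile(dlc);
    Snpe_SNPE_Handle_t snpeHandle = setBuilderOptions(containerHandle, runtime, false);
    Snpe_TensorMap_Handle_t inputTensorMapHandle = loadInputTensor(snpeHandle, fileLine);
    executeNetwork(snpeHandle, inputTensorMapHandle, "output", 0);
    // Release handles once execution is done (assumes the C API's per-handle *_Delete convention)
    Snpe_TensorMap_Delete(inputTensorMapHandle);
    Snpe_SNPE_Delete(snpeHandle);
    Snpe_DlContainer_Delete(containerHandle);
    return 0;
}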

The following sections describe how to implement each of the steps above.

Get Available Runtime

The code excerpt below illustrates how to use the native API to check whether a particular runtime is available (using the GPU runtime as an example).

Snpe_Runtime_t checkRuntime()
{
    Snpe_DlVersion_Handle_t versionHandle = Snpe_Util_GetLibraryVersion();
    Snpe_Runtime_t runtime;
    std::cout << "SNPE Version: " << Snpe_DlVersion_ToString(versionHandle) << std::endl; // print the library version
    Snpe_DlVersion_Delete(versionHandle);
    if (Snpe_Util_IsRuntimeAvailable(SNPE_RUNTIME_GPU)) {
        runtime = SNPE_RUNTIME_GPU;
    } else {
        runtime = SNPE_RUNTIME_CPU;
    }
    return runtime;
}
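If more than one accelerator is acceptable, the same availability check can be extended into a simple fallback chain. A minimal sketch, assuming a DSP > GPU > CPU preference (the ordering here is an illustration, not a recommendation from this guide):

// Sketch: probe runtimes in a preferred order, falling back to CPU
Snpe_Runtime_t pickRuntime()
{
    const Snpe_Runtime_t preferred[] = { SNPE_RUNTIME_DSP, SNPE_RUNTIME_GPU, SNPE_RUNTIME_CPU };
    for (Snpe_Runtime_t r : preferred) {
        if (Snpe_Util_IsRuntimeAvailable(r)) {
            return r;
        }
    }
    return SNPE_RUNTIME_CPU; // the CPU reference runtime is always present
}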

Load Network
The code excerpt below illustrates how to load a network from a SNPE container file (DLC).

Snpe_DlContainer_Handle_t loadContainerFromFile(std::string containerPath)
{
    Snpe_DlContainer_Handle_t containerHandle = Snpe_DlContainer_Open(containerPath.c_str());
    return containerHandle;
}
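Per the note in the introduction, production code should verify that the container actually opened. A minimal sketch of such a check, assuming the C API signals failure by returning a null handle:

// Illustrative variant with a basic error check (null-handle failure mode is an assumption)
Snpe_DlContainer_Handle_t loadContainerFromFileChecked(const std::string& containerPath)
{
    Snpe_DlContainer_Handle_t containerHandle = Snpe_DlContainer_Open(containerPath.c_str());
    if (containerHandle == nullptr) {
        std::cerr << "Failed to open container: " << containerPath << std::endl;
        std::exit(EXIT_FAILURE);
    }
    return containerHandle;
}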

Set Network Builder Options
The following code demonstrates how to instantiate the SNPE builder object, which will be used to execute the network with the given parameters.

Snpe_SNPE_Handle_t setBuilderOptions(Snpe_DlContainer_Handle_t containerHandle,
                                     Snpe_Runtime_t runtime,
                                     bool useUserSuppliedBuffers)
{
    Snpe_SNPE_Handle_t snpeHandle;
    Snpe_SNPEBuilder_Handle_t snpeBuilderHandle = Snpe_SNPEBuilder_Create(containerHandle);
    // Wrap the selected runtime in a runtime list for the builder
    Snpe_RuntimeList_Handle_t runtimeListHandle = Snpe_RuntimeList_Create();
    Snpe_RuntimeList_Add(runtimeListHandle, runtime);
    Snpe_SNPEBuilder_SetRuntimeProcessorOrder(snpeBuilderHandle, runtimeListHandle);
    Snpe_SNPEBuilder_SetUseUserSuppliedBuffers(snpeBuilderHandle, useUserSuppliedBuffers);
    snpeHandle = Snpe_SNPEBuilder_Build(snpeBuilderHandle);
    // The runtime list and builder are no longer needed once the SNPE object is built
    Snpe_RuntimeList_Delete(runtimeListHandle);
    Snpe_SNPEBuilder_Delete(snpeBuilderHandle);
    return snpeHandle;
}
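For reference, a short usage sketch: the same helper serves both I/O modes, selected by the last argument.

// Illustrative only: build one SNPE instance per I/O mode
Snpe_SNPE_Handle_t snpeItensorHandle = setBuilderOptions(containerHandle, runtime, false); // ITensor path
Snpe_SNPE_Handle_t snpeUserBufHandle = setBuilderOptions(containerHandle, runtime, true);  // UserBuffer path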

Load Network Inputs
Network inputs and outputs can be either user-backed buffers or ITensors (built-in SNPE buffers), but not both. The advantage of user-backed buffers is that they eliminate the extra copy needed to create ITensors from user buffers. Both ways of loading network inputs are shown below.

Using User Buffers
SNPE can create its network inputs and outputs from user-backed buffers. Note that SNPE expects the buffer contents to remain present and valid for the duration of network execution.

Here is a function that creates a SNPE UserBuffer from a user-backed buffer and stores it in a UserBufferMap. These maps are convenient collections of all input or output user buffers that can be passed to SNPE to execute the network.

Disclaimer: The buffer strides should already be known to the user and should not be calculated as shown below. The calculation is shown only to make the sample code executable.

void createUserBuffer(Snpe_UserBufferMap_Handle_t userBufferMapHandle,
                      std::unordered_map<std::string, std::vector<uint8_t>>& applicationBuffers,
                      std::vector<Snpe_IUserBuffer_Handle_t>& snpeUserBackedBuffersHandle,
                      Snpe_SNPE_Handle_t snpeHandle,
                      const char * name)
{
   // get attributes of buffer by name
   Snpe_IBufferAttributes_Handle_t bufferAttributesOptHandle = Snpe_SNPE_GetInputOutputBufferAttributes(snpeHandle, name);
   if (bufferAttributesOptHandle == nullptr) throw std::runtime_error(std::string("Error obtaining attributes for input tensor ") + name);
   // calculate the size of buffer required by the input tensor
   Snpe_TensorShape_Handle_t bufferShapeHandle = Snpe_IBufferAttributes_GetDims(bufferAttributesOptHandle);
   // Calculate the stride based on buffer strides, assuming tightly packed.
   // Note: Strides = Number of bytes to advance to the next element in each dimension.
   // For example, if a float tensor of dimension 2x4x3 is tightly packed in a buffer of 96 bytes, then the strides would be (48,12,4)
   // Note: Buffer stride is usually known and does not need to be calculated.
   std::vector<size_t> strides(Snpe_TensorShape_Rank(bufferShapeHandle));
   strides[strides.size() - 1] = sizeof(float);
   size_t stride = strides[strides.size() - 1];
   for (size_t i = Snpe_TensorShape_Rank(bufferShapeHandle) - 1; i > 0; i--)
   {
      stride *= Snpe_TensorShape_At(bufferShapeHandle, i);
      strides[i-1] = stride;
   }
   Snpe_TensorShape_Handle_t stridesHandle = Snpe_TensorShape_CreateDimsSize(strides.data(), Snpe_TensorShape_Rank(bufferShapeHandle));
   size_t bufferElementSize = Snpe_IBufferAttributes_GetElementSize(bufferAttributesOptHandle);
   size_t bufSize = calcSizeFromDims(Snpe_TensorShape_GetDimensions(bufferShapeHandle), Snpe_TensorShape_Rank(bufferShapeHandle), bufferElementSize);
   // set the buffer encoding type
   Snpe_UserBufferEncoding_Handle_t userBufferEncodingFloatHandle = Snpe_UserBufferEncodingFloat_Create();
   // create user-backed storage to load input data onto it
   applicationBuffers.emplace(name, std::vector<uint8_t>(bufSize));
   // create SNPE user buffer from the user-backed buffer
   snpeUserBackedBuffersHandle.push_back(Snpe_Util_CreateUserBuffer(applicationBuffers.at(name).data(),
                                                                    bufSize,
                                                                    stridesHandle,
                                                                    userBufferEncodingFloatHandle));
   // add the user-backed buffer to the userBufferMap, which is later passed to the network for execution
   Snpe_UserBufferMap_Add(userBufferMapHandle, name, snpeUserBackedBuffersHandle.back());
}
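For context, here is a hedged sketch of a caller that builds both the input and output UserBufferMaps by iterating over the network's tensor names. This loop is not part of the excerpt above, and Snpe_SNPE_GetOutputTensorNames is assumed to mirror Snpe_SNPE_GetInputTensorNames.

// Sketch: build input and output UserBufferMaps for all network tensors
Snpe_UserBufferMap_Handle_t inputMapHandle  = Snpe_UserBufferMap_Create();
Snpe_UserBufferMap_Handle_t outputMapHandle = Snpe_UserBufferMap_Create();
std::unordered_map<std::string, std::vector<uint8_t>> applicationInputBuffers, applicationOutputBuffers;
std::vector<Snpe_IUserBuffer_Handle_t> inputBufferHandles, outputBufferHandles;

Snpe_StringList_Handle_t inNamesHandle = Snpe_SNPE_GetInputTensorNames(snpeHandle);
for (size_t i = 0; i < Snpe_StringList_Size(inNamesHandle); ++i)
    createUserBuffer(inputMapHandle, applicationInputBuffers, inputBufferHandles,
                     snpeHandle, Snpe_StringList_At(inNamesHandle, i));
Snpe_StringList_Delete(inNamesHandle);

Snpe_StringList_Handle_t outNamesHandle = Snpe_SNPE_GetOutputTensorNames(snpeHandle); // assumed API
for (size_t i = 0; i < Snpe_StringList_Size(outNamesHandle); ++i)
    createUserBuffer(outputMapHandle, applicationOutputBuffers, outputBufferHandles,
                     snpeHandle, Snpe_StringList_At(outNamesHandle, i));
Snpe_StringList_Delete(outNamesHandle);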

The following function then shows how input data is loaded from files into the user buffers. Note that the input values are simply loaded into the user-backed buffers, on top of which SNPE has created SNPE UserBuffers as shown above.

void loadInputUserBuffer(std::unordered_map<std::string, std::vector<uint8_t>>& applicationBuffers,
                               Snpe_SNPE_Handle_t snpeHandle,
                               const std::string& fileLine)
{
    // get input tensor names of the network that need to be populated
    Snpe_StringList_Handle_t inputNamesHandle = Snpe_SNPE_GetInputTensorNames(snpeHandle);
    if (inputNamesHandle == nullptr) throw std::runtime_error("Error obtaining input tensor names");
    assert(Snpe_StringList_Size(inputNamesHandle) > 0);
    // treat each line as a space-separated list of input files
    std::vector<std::string> filePaths;
    split(filePaths, fileLine, ' ');
    if (Snpe_StringList_Size(inputNamesHandle)) std::cout << "Processing DNN Input: " << std::endl;
    for (size_t i = 0; i < Snpe_StringList_Size(inputNamesHandle); i++) {
        const char* name = Snpe_StringList_At(inputNamesHandle, i);
        std::string filePath(filePaths[i]);
        // print out which file is being processed
        std::cout << "\t" << i + 1 << ") " << filePath << std::endl;
        // load file content onto application storage buffer,
        // on top of which, SNPE has created a user buffer
        loadByteDataFile(filePath, applicationBuffers.at(name));
    }
}
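The split helper used above comes from the SDK sample's utility code. A minimal stand-in, assuming it tokenizes a line on a delimiter:

#include <sstream>
#include <string>
#include <vector>

// Stand-in for the sample's split helper: tokenize a line on a delimiter
void split(std::vector<std::string>& out, const std::string& line, char delim)
{
    out.clear();
    std::istringstream iss(line);
    std::string token;
    while (std::getline(iss, token, delim)) {
        if (!token.empty()) out.push_back(token); // skip runs of delimiters
    }
}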

Using ITensors

Snpe_TensorMap_Handle_t loadInputTensor (Snpe_SNPE_Handle_t snpeHandle, std::string& fileLine)
{
    Snpe_ITensor_Handle_t input;
    Snpe_StringList_Handle_t strListHandle = Snpe_SNPE_GetInputTensorNames(snpeHandle);
    if (strListHandle == nullptr) throw std::runtime_error("Error obtaining Input tensor names");
    // Make sure the network requires only a single input
    assert (Snpe_StringList_Size(strListHandle) == 1);
    // If the network has a single input, each line represents the input file to be loaded for that input
    std::string filePath(fileLine);
    std::cout << "Processing DNN Input: " << filePath << "\n";
    std::vector<float> inputVec = loadFloatDataFile(filePath);
    /* Create an input tensor that is correctly sized to hold the input of the network. Dimensions that have no fixed size will be represented with a value of 0. */
    auto inputDimsHandle = Snpe_SNPE_GetInputDimensions(snpeHandle, Snpe_StringList_At(strListHandle, 0));
    /* With the input dimensions computed, create a tensor to convey the input into the network,
       and check that the loaded file contains the expected number of elements. */
    input = Snpe_Util_CreateITensor(inputDimsHandle);
    assert(inputVec.size() == Snpe_ITensor_GetSize(input));
    /* Copy the loaded input file contents into the network's input tensor. SNPE's ITensor supports C++ STL functions like std::copy() */
    std::copy(inputVec.begin(), inputVec.end(), (float*)Snpe_ITensor_GetData(input));
    Snpe_TensorMap_Handle_t inputTensorMapHandle = Snpe_TensorMap_Create();
    Snpe_TensorMap_Add(inputTensorMapHandle, Snpe_StringList_At(strListHandle, 0), input);
    return inputTensorMapHandle;
}
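loadFloatDataFile is another sample utility. A minimal stand-in, assuming the input file holds raw float32 values:

#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Stand-in for the sample's loadFloatDataFile: read a raw float32 file into a vector
std::vector<float> loadFloatDataFile(const std::string& path)
{
    std::ifstream in(path, std::ifstream::binary | std::ifstream::ate);
    if (!in) throw std::runtime_error("Failed to open input file: " + path);
    std::streamsize numBytes = in.tellg();
    in.seekg(0, std::ifstream::beg);
    std::vector<float> data(numBytes / sizeof(float));
    in.read(reinterpret_cast<char*>(data.data()), data.size() * sizeof(float));
    return data;
}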

Execute Network and Process Output
The code snippets below execute the network using the native API (in UserBuffer or ITensor mode) and show how to iterate through the newly populated output tensors.

Using User Buffers

void executeNetwork(Snpe_SNPE_Handle_t snpeHandle,
                    Snpe_UserBufferMap_Handle_t inputMapHandle,
                    Snpe_UserBufferMap_Handle_t outputMapHandle,
                    std::unordered_map<std::string,std::vector<uint8_t>>& applicationOutputBuffers,
                    const std::string& outputDir,
                    int num)
{
    // Execute the network and store the outputs in user buffers specified in outputMap
    Snpe_SNPE_ExecuteUserBuffers(snpeHandle, inputMapHandle, outputMapHandle);
    // Get all output buffer names from the network
    Snpe_StringList_Handle_t outputBufferNamesHandle = Snpe_UserBufferMap_GetUserBufferNames(outputMapHandle);
    // Iterate through output buffers and print each output to a raw file
    std::for_each(Snpe_StringList_Begin(outputBufferNamesHandle), Snpe_StringList_End(outputBufferNamesHandle), [&](const char* name)
    {
       std::ostringstream path;
       path << outputDir << "/Result_" << num << "/" << name << ".raw";
       SaveUserBuffer(path.str(), applicationOutputBuffers.at(name));
    });
    // Release the name list created above
    Snpe_StringList_Delete(outputBufferNamesHandle);
}
// The following is a partial snippet of the function
void SaveUserBuffer(const std::string& path, const std::vector<uint8_t>& buffer) {
   ...
   std::ofstream os(path, std::ofstream::binary);
   if (!os)
   {
      std::cerr << "Failed to open output file for writing: " << path << "\n";
      std::exit(EXIT_FAILURE);
   }
   if (!os.write((char*)(buffer.data()), buffer.size()))
   {
      std::cerr << "Failed to write data to: " << path << "\n";
      std::exit(EXIT_FAILURE);
   }
}
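Since the outputs above were created with a float encoding, the saved bytes can be reinterpreted as float32 values for post-processing. A small sketch, assuming a buffer name `name` and the applicationOutputBuffers map from the execution step:

// Illustrative: reinterpret a float-encoded user buffer for post-processing
const std::vector<uint8_t>& raw = applicationOutputBuffers.at(name);
const float* values = reinterpret_cast<const float*>(raw.data());
size_t count = raw.size() / sizeof(float);
for (size_t i = 0; i < count; ++i)
    std::cout << values[i] << "\n";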

Using ITensors

void executeNetwork(Snpe_SNPE_Handle_t snpeHandle,
                    Snpe_TensorMap_Handle_t inputTensorMapHandle,
                    std::string OutputDir,
                    int num)
{
    // Execute the network and store the outputs that were specified when creating the network in a TensorMap
    Snpe_TensorMap_Handle_t outputTensorMapHandle = Snpe_TensorMap_Create();
    Snpe_SNPE_ExecuteITensors(snpeHandle, inputTensorMapHandle, outputTensorMapHandle);
    Snpe_StringList_Handle_t tensorNamesHandle = Snpe_TensorMap_GetTensorNames(outputTensorMapHandle);
    // Iterate through the output Tensor map, and print each output layer name
    std::for_each( Snpe_StringList_Begin(tensorNamesHandle), Snpe_StringList_End(tensorNamesHandle), [&](const char* name)
    {
        std::ostringstream path;
        path << OutputDir << "/Result_" << num << "/" << name << ".raw";
        auto tensorHandle = Snpe_TensorMap_GetTensor_Ref(outputTensorMapHandle, name);
        SaveITensor(path.str(), tensorHandle);
    });
    // Clean up created handles
    Snpe_TensorMap_Delete(outputTensorMapHandle);
    Snpe_StringList_Delete(tensorNamesHandle);
}
// The following is a partial snippet of the function
void SaveITensor(const std::string& path, Snpe_ITensor_Handle_t tensorHandle)
{
   ...
   std::ofstream os(path, std::ofstream::binary);
   if (!os)
   {
      std::cerr << "Failed to open output file for writing: " << path << "\n";
      std::exit(EXIT_FAILURE);
   }
   auto begin = static_cast<float*>(Snpe_ITensor_GetData(tensorHandle));
   auto size = Snpe_ITensor_GetSize(tensorHandle);
   for ( auto it = begin; it != begin + size; ++it )
   {
      float f = *it;
      if (!os.write(reinterpret_cast<char*>(&f), sizeof(float)))
      {
         std::cerr << "Failed to write data to: " << path << "\n";
         std::exit(EXIT_FAILURE);
      }
   }
}