sampleOnnxMNIST README (personal notes)

“Hello World” For TensorRT From ONNX

Table Of Contents

- Description
- How does this sample work?
  - Converting the ONNX model to a TensorRT network
  - Building the engine
  - Running inference
  - TensorRT API layers and ops
- Running the sample
  - Sample --help options
- Additional resources
- License
- Changelog
- Known issues

Description

This sample, sampleOnnxMNIST, converts a model trained on the MNIST dataset in Open Neural Network Exchange (ONNX) format to a TensorRT network and runs inference on the network.

ONNX is a standard for representing deep learning models that enables models to be transferred between frameworks.

How does this sample work?

This sample creates and runs the TensorRT engine from an ONNX model of the MNIST network. It demonstrates how TensorRT can consume an ONNX model as input to create a network.

Specifically, this sample:

- Converts the ONNX model to a TensorRT network
- Builds an engine
- Runs inference using the generated TensorRT network

Converting the ONNX model to a TensorRT network

The model file is converted to a TensorRT network using the ONNX parser. The parser is initialized with the network definition that it will populate and with the logger object.

auto parser = nvonnxparser::createParser(*network, sample::gLogger.getTRTLogger());

The ONNX model file is then passed to the parser along with the logging level:

if (!parser->parseFromFile(model_file, static_cast<int>(sample::gLogger.getReportableSeverity())))
{
    // Parsing failed: report the error through the TensorRT logger and abort.
    std::string msg("failed to parse onnx file");
    sample::gLogger.getTRTLogger().log(nvinfer1::ILogger::Severity::kERROR, msg.c_str());
    exit(EXIT_FAILURE);
}

To view additional information about the network, including layer information and individual layer dimensions, issue the following call:

parser->reportParsingInfo();

After the TensorRT network is constructed by parsing the model, the TensorRT engine can be built to run inference.

Building the engine

To build the engine, create the builder and pass it the logger created for TensorRT, which is used for reporting errors, warnings, and informational messages in the network:

IBuilder* builder = createInferBuilder(sample::gLogger.getTRTLogger());

To build the engine from the generated TensorRT network, issue the following call:

nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
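The snippets above fit together roughly as follows. This is a minimal sketch assuming the TensorRT 5.x-era APIs this sample targets; buildEngineFromOnnx is an illustrative helper name, not part of the sample, and error handling is elided:

#include "NvInfer.h"
#include "NvOnnxParser.h"
#include "logger.h" // sample::gLogger, from the samples' common code

nvinfer1::ICudaEngine* buildEngineFromOnnx(const char* modelFile)
{
    // Builder, network, and parser all share the same TensorRT logger.
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(sample::gLogger.getTRTLogger());
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    auto parser = nvonnxparser::createParser(*network, sample::gLogger.getTRTLogger());

    // Populate the network definition from the ONNX file.
    if (!parser->parseFromFile(modelFile, static_cast<int>(sample::gLogger.getReportableSeverity())))
        return nullptr;

    builder->setMaxBatchSize(1);           // MNIST is run one image at a time here
    builder->setMaxWorkspaceSize(1 << 20); // 1 MiB of scratch space; tune as needed

    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);

    // The parser, network, and builder are not needed once the engine is built.
    parser->destroy();
    network->destroy();
    builder->destroy();
    return engine;
}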

After you build the engine, verify that the engine is running properly by confirming the output is what you expected. The output format of this sample should be the same as the output of sampleMNIST.

Running inference

To run inference using the created engine, see Performing Inference In C++.
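As a rough sketch of what that step looks like, assuming the engine built above and pre-allocated CUDA device buffers deviceInput and deviceOutput (the binding names below are assumptions for illustration and should be looked up from the actual model):

nvinfer1::IExecutionContext* context = engine->createExecutionContext();

// Bindings are ordered by the engine; look indices up by tensor name.
void* buffers[2];
buffers[engine->getBindingIndex("Input3")] = deviceInput;            // assumed input binding name
buffers[engine->getBindingIndex("Plus214_Output_0")] = deviceOutput; // assumed output binding name

// Synchronous execution with batch size 1; enqueue() is the asynchronous variant.
context->execute(1, buffers);
context->destroy();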

Note: It’s important to preprocess the data and convert it to the format accepted by the network. In this sample, the input is in PGM (portable graymap) format. The model expects a 1x28x28 image with pixel values scaled to the range [0,1].
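As a sketch of that preprocessing, the following hypothetical helper reads an 8-bit binary PGM file and scales each pixel into [0,1]. loadAndNormalizePGM is not part of the sample, which uses its own PGM reader from the common sample code:

#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

std::vector<float> loadAndNormalizePGM(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    std::string magic;
    int width = 0, height = 0, maxVal = 0;
    in >> magic >> width >> height >> maxVal; // "P5", dimensions, max gray value
    in.get();                                 // consume the whitespace ending the header

    std::vector<unsigned char> raw(static_cast<std::size_t>(width) * height);
    in.read(reinterpret_cast<char*>(raw.data()), static_cast<std::streamsize>(raw.size()));

    std::vector<float> pixels(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i)
        pixels[i] = raw[i] / static_cast<float>(maxVal); // scale into [0,1]
    return pixels;
}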

TensorRT API layers and ops

In this sample, the following layers are used. For more information about these layers, see the TensorRT Developer Guide: Layers documentation.

Activation layer
The Activation layer implements element-wise activation functions. Specifically, this sample uses the Activation layer with the type kRELU.

Convolution layer
The Convolution layer computes a 2D (channel, height, and width) convolution, with or without bias.

FullyConnected layer
The FullyConnected layer implements a matrix-vector product, with or without bias.

Pooling layer
The Pooling layer implements pooling within a channel. Supported pooling types are maximum, average and maximum-average blend.

Scale layer
The Scale layer implements a per-tensor, per-channel, or per-element affine transformation and/or exponentiation by constant values.

Shuffle layer
The Shuffle layer implements a reshape and transpose operator for tensors.
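The ONNX parser emits all of these layers automatically, but the same operations can be added by hand through the INetworkDefinition API. A brief illustrative sketch for two of the layer types above (the tensor name and dimensions are assumptions, not taken from the sample):

// Declare a 1x28x28 input, then attach an Activation and a Pooling layer.
nvinfer1::ITensor* input = network->addInput("input", nvinfer1::DataType::kFLOAT, nvinfer1::Dims3{1, 28, 28});
auto* relu = network->addActivation(*input, nvinfer1::ActivationType::kRELU);
auto* pool = network->addPooling(*relu->getOutput(0), nvinfer1::PoolingType::kMAX, nvinfer1::DimsHW{2, 2});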

Running the sample

  1. Compile this sample by running make in the <TensorRT root directory>/samples/sampleOnnxMNIST directory. The binary named sample_onnx_mnist will be created in the <TensorRT root directory>/bin directory.

    cd <TensorRT root directory>/samples/sampleOnnxMNIST
    make
    

    Where <TensorRT root directory> is where you installed TensorRT.

  2. Run the sample to build and run the MNIST engine from the ONNX model.

    ./sample_onnx_mnist [-h or --help] [-d or --datadir=<path to data directory>] [--useDLACore=<int>] [--int8 or --fp16]
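    For example, to build the engine with FP16 precision and read the data files from a custom location (the path shown is illustrative), you might run:

    ./sample_onnx_mnist --fp16 --datadir=/path/to/mnist/data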
    
  3. Verify that the sample ran successfully. If the sample runs successfully you should see output similar to the following:

    &&&& RUNNING TensorRT.sample_onnx_mnist # ./sample_onnx_mnist
    ----------------------------------------------------------------
    Input filename: ../../../../../../data/samples/mnist/mnist.onnx
    ONNX IR version: 0.0.3
    Opset version: 1
    Producer name: CNTK
    Producer version: 2.4
    Domain:
    Model version: 1
    Doc string:
    ----------------------------------------------------------------
    [I] Input:
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@*.  .*@@@@@@@@@@@
    @@@@@@@@@@*.     +@@@@@@@@@@
    @@@@@@@@@@. :#+   %@@@@@@@@@
    @@@@@@@@@@.:@@@+  +@@@@@@@@@
    @@@@@@@@@@.:@@@@:  +@@@@@@@@
    @@@@@@@@@@=%@@@@:  +@@@@@@@@
    @@@@@@@@@@@@@@@@#  +@@@@@@@@
    @@@@@@@@@@@@@@@@*  +@@@@@@@@
    @@@@@@@@@@@@@@@@:  +@@@@@@@@
    @@@@@@@@@@@@@@@@:  +@@@@@@@@
    @@@@@@@@@@@@@@@*  .@@@@@@@@@
    @@@@@@@@@@%**%@.  *@@@@@@@@@
    @@@@@@@@%+.  .:  .@@@@@@@@@@
    @@@@@@@@=  ..    :@@@@@@@@@@
    @@@@@@@@:  *@@:  :@@@@@@@@@@
    @@@@@@@%   %@*    *@@@@@@@@@
    @@@@@@@%   ++ ++  .%@@@@@@@@
    @@@@@@@@-    +@@-  +@@@@@@@@
    @@@@@@@@=  :*@@@#  .%@@@@@@@
    @@@@@@@@@+*@@@@@%.   %@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@
    
    [I] Output:
    Prob 0 0.0000 Class 0:
    Prob 1 0.0000 Class 1:
    Prob 2 1.0000 Class 2: **********
    Prob 3 0.0000 Class 3:
    Prob 4 0.0000 Class 4:
    Prob 5 0.0000 Class 5:
    Prob 6 0.0000 Class 6:
    Prob 7 0.0000 Class 7:
    Prob 8 0.0000 Class 8:
    Prob 9 0.0000 Class 9:
    
    &&&& PASSED TensorRT.sample_onnx_mnist # ./sample_onnx_mnist
    

    This output shows that the sample ran successfully; PASSED.

Sample --help options

To see the full list of available options and their descriptions, use the -h or --help command line option.

Additional resources

The following resources provide a deeper understanding of the ONNX project and the MNIST model:

ONNX

Models

Documentation

License

For terms and conditions for use, reproduction, and distribution, see the TensorRT Software License Agreement documentation.

Changelog

March 2019
This README.md file was recreated, updated and reviewed.

Known issues

There are no known issues in this sample.
