TensorRT: getting input and output binding indices

Open the ONNX model in Netron; the tensor names are shown in the panel on the right:

```cpp
int input_index = engine->getBindingIndex("inout1.1");
int output_index = engine->getBindingIndex("191");
```

TensorRT is NVIDIA's inference acceleration library for deep learning; it can consume models from frameworks such as TensorFlow, Caffe, and PyTorch. To build an engine with multiple inputs and outputs, the steps are:

1. Define the input tensors:

```cpp
// Define the two input tensors. Outputs are not declared up front:
// TensorRT has no addOutput(); output tensors are marked with
// markOutput() after the layers are defined (step 2).
nvinfer1::ITensor* input1 = network->addInput("input1", nvinfer1::DataType::kFLOAT, nvinfer1::Dims3{3, 224, 224});
nvinfer1::ITensor* input2 = network->addInput("input2", nvinfer1::DataType::kFLOAT, nvinfer1::Dims3{3, 224, 224});
```

2. Define the network structure and mark the outputs:

```cpp
// Backbone: two conv -> relu -> pool stages followed by a shared FC trunk.
// (input2 would feed its own branch; only the input1 path is shown here.)
nvinfer1::IConvolutionLayer* conv1 = network->addConvolution(*input1, 32, nvinfer1::DimsHW{3, 3}, weights["conv1.weight"], weights["conv1.bias"]);
nvinfer1::IActivationLayer* relu1 = network->addActivation(*conv1->getOutput(0), nvinfer1::ActivationType::kRELU);
nvinfer1::IPoolingLayer* pool1 = network->addPooling(*relu1->getOutput(0), nvinfer1::PoolingType::kMAX, nvinfer1::DimsHW{2, 2});
nvinfer1::IConvolutionLayer* conv2 = network->addConvolution(*pool1->getOutput(0), 64, nvinfer1::DimsHW{3, 3}, weights["conv2.weight"], weights["conv2.bias"]);
nvinfer1::IActivationLayer* relu2 = network->addActivation(*conv2->getOutput(0), nvinfer1::ActivationType::kRELU);
nvinfer1::IPoolingLayer* pool2 = network->addPooling(*relu2->getOutput(0), nvinfer1::PoolingType::kMAX, nvinfer1::DimsHW{2, 2});
nvinfer1::IFullyConnectedLayer* fc1 = network->addFullyConnected(*pool2->getOutput(0), 128, weights["fc1.weight"], weights["fc1.bias"]);
nvinfer1::IActivationLayer* relu3 = network->addActivation(*fc1->getOutput(0), nvinfer1::ActivationType::kRELU);

// Two heads branch from the shared trunk, giving the two outputs.
nvinfer1::IFullyConnectedLayer* fc2 = network->addFullyConnected(*relu3->getOutput(0), 10, weights["fc2.weight"], weights["fc2.bias"]);
nvinfer1::ITensor* output1 = fc2->getOutput(0);
nvinfer1::IFullyConnectedLayer* fc3 = network->addFullyConnected(*relu3->getOutput(0), 100, weights["fc3.weight"], weights["fc3.bias"]);
nvinfer1::ITensor* output2 = fc3->getOutput(0);

// Name and mark both tensors as network outputs.
output1->setName("output1");
network->markOutput(*output1);
output2->setName("output2");
network->markOutput(*output2);
```

3. Build the inference engine:

```cpp
builder->setMaxBatchSize(1);
builder->setMaxWorkspaceSize(1 << 30);  // 1 GiB of build workspace
builder->setFp16Mode(true);
builder->setInt8Mode(false);
builder->setStrictTypeConstraints(true);
nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
```

4. Run inference. The pointers in the bindings array must be device pointers (allocated with `cudaMalloc`), ordered by binding index:

```cpp
nvinfer1::IExecutionContext* context = engine->createExecutionContext();
float* input1_data = ...;  // device pointer holding input 1
float* input2_data = ...;  // device pointer holding input 2
float* output1_data;
float* output2_data;
cudaMalloc(&output1_data, 10 * sizeof(float));
cudaMalloc(&output2_data, 100 * sizeof(float));
void* bindings[] = { input1_data, input2_data, output1_data, output2_data };
const int batchSize = 1;
context->execute(batchSize, bindings);
```

5. Release resources:

```cpp
cudaFree(output1_data);
cudaFree(output2_data);
context->destroy();
engine->destroy();
parser->destroy();
network->destroy();
builder->destroy();
```
