TFLite: Code Analysis (3): Interpreter Run

The call flow of Interpreter::Invoke() is simply to execute each op in the order given by the execution plan:

```
TfLiteStatus Interpreter::Invoke() {
  TfLiteStatus status = kTfLiteOk;  // consistency/readiness checks elided here

  // Invocations are always done in node order.
  // Note that calling Invoke repeatedly will cause the original memory plan to
  // be reused, unless either ResizeInputTensor() or AllocateTensors() has been
  // called.
  // TODO(b/71913981): we should force recalculation in the presence of dynamic
  // tensors, because they may have new value which in turn may affect shapes
  // and allocations.
  for (int execution_plan_index = 0;
       execution_plan_index < execution_plan_.size(); execution_plan_index++) {
    if (execution_plan_index == next_execution_plan_index_to_prepare_) {
      TF_LITE_ENSURE_STATUS(PrepareOpsAndTensors());
      TF_LITE_ENSURE(&context_, next_execution_plan_index_to_prepare_ >=
                                    execution_plan_index);
    }
    int node_index = execution_plan_[execution_plan_index];
    TfLiteNode& node = nodes_and_registration_[node_index].first;
    const TfLiteRegistration& registration =
        nodes_and_registration_[node_index].second;

    if (OpInvoke(registration, &node) == kTfLiteError) {
      status = kTfLiteError;
    }
  }
  return status;
}
```
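For context, here is a minimal sketch of the client code that eventually drives Invoke(). This is an illustrative sketch, not code from the article's excerpts: the file name model.tflite and the input value are placeholders, and the header paths follow the modern tensorflow/lite layout (in the TF 1.x contrib era they lived under tensorflow/contrib/lite).

```
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the flatbuffer model ("model.tflite" is a placeholder path).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;

  // Build an interpreter that resolves each op to a TfLiteRegistration.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) return 1;

  // Plan and allocate tensor memory, then fill the first input tensor.
  if (interpreter->AllocateTensors() != kTfLiteOk) return 1;
  float* input = interpreter->typed_input_tensor<float>(0);
  input[0] = 1.0f;  // placeholder input value

  // The call analyzed above: runs every node in execution-plan order.
  if (interpreter->Invoke() != kTfLiteOk) return 1;

  float* output = interpreter->typed_output_tensor<float>(0);
  (void)output;  // read results here
  return 0;
}
```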

```
  // Invoke the operator represented by 'node'. This is the final execution
  // step: it simply calls the kernel's function pointer.
  TfLiteStatus OpInvoke(const TfLiteRegistration& op_reg, TfLiteNode* node) {
    if (op_reg.invoke == nullptr) return kTfLiteError;
    return op_reg.invoke(&context_, node);
  }
```
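The invoke pointer that OpInvoke calls is filled in by each kernel at registration time. Below is a hedged sketch of that wiring; the names MyNoOpPrepare, MyNoOpInvoke, and Register_MY_NO_OP are invented for illustration, and the exact header path varies across TF versions.

```
#include "tensorflow/lite/c/common.h"  // older trees: tensorflow/lite/context.h

namespace {

TfLiteStatus MyNoOpPrepare(TfLiteContext* context, TfLiteNode* node) {
  // Shape checks and output resizing would go here.
  return kTfLiteOk;
}

TfLiteStatus MyNoOpInvoke(TfLiteContext* context, TfLiteNode* node) {
  // The actual computation goes here; this is what OpInvoke ends up calling.
  return kTfLiteOk;
}

}  // namespace

TfLiteRegistration* Register_MY_NO_OP() {
  // Field order is {init, free, prepare, invoke}; init/free are optional and
  // left null here. Invoke() only requires the 'invoke' pointer to be set.
  static TfLiteRegistration r = {nullptr, nullptr, MyNoOpPrepare, MyNoOpInvoke};
  return &r;
}
```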

```
// A structure representing an instance of a node.
// This structure only exhibits the inputs, outputs and user defined data, not
// other features like the type.
typedef struct {
  // Inputs to this node expressed as indices into the simulator's tensors.
  TfLiteIntArray* inputs;

  // Outputs to this node expressed as indices into the simulator's tensors.
  TfLiteIntArray* outputs;

  // Temporary tensors used during the computations. This usually contains no
  // tensors, but ops are allowed to change that if they need scratch space of
  // any sort.
  TfLiteIntArray* temporaries;

  // Opaque data provided by the node implementer through `Registration.init`.
  void* user_data;

  // Opaque data provided to the node if the node is a builtin. This is usually
  // a structure defined in builtin_op_data.h
  void* builtin_data;

  // Custom initial data. This is the opaque data provided in the flatbuffer.
  // WARNING: This is an experimental interface that is subject to change.
  const void* custom_initial_data;
  int custom_initial_data_size;
} TfLiteNode;
```
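Because inputs and outputs are just indices into the interpreter's tensor array, a kernel's invoke function resolves them through the context it receives. A minimal sketch, assuming a hypothetical element-wise copy op (CopyInvoke is not an actual TFLite kernel):

```
#include <cstring>

#include "tensorflow/lite/c/common.h"  // header path varies by TF version

// Resolve node->inputs/outputs (tensor indices) against context->tensors
// and copy input 0 to output 0 byte for byte.
TfLiteStatus CopyInvoke(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteTensor* input = &context->tensors[node->inputs->data[0]];
  TfLiteTensor* output = &context->tensors[node->outputs->data[0]];
  if (input->bytes != output->bytes) return kTfLiteError;
  std::memcpy(output->data.raw, input->data.raw, input->bytes);
  return kTfLiteOk;
}
```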

Below are the detailed code and steps for converting a TensorFlow 1.15 model to a TFLite model.

1. Install TensorFlow 1.15

Run the following in a terminal. The TFLite converter and interpreter ship inside the tensorflow package itself; there is no separate TFLite package to install:

```
pip install tensorflow==1.15        # CPU build
pip install tensorflow-gpu==1.15    # or the GPU build
```

2. Load the TensorFlow model

In a Python script, load the Keras model with the following code (this step is optional for conversion itself, since the TF 1.15 converter reads the model file directly, but it lets you inspect the model first):

```
import tensorflow as tf

# Load the TensorFlow (Keras) model
model = tf.keras.models.load_model('path/to/the/model')
```

3. Convert the TensorFlow model to a TFLite model

In TF 1.15 the converter is created from the saved Keras model file; from_keras_model, which takes a live model object, only exists in TF 2.x:

```
# Convert the TensorFlow model to TFLite
converter = tf.lite.TFLiteConverter.from_keras_model_file('path/to/the/model')
tflite_model = converter.convert()

# Save the TFLite model
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```

The conversion in the code above consists of three steps:

- create a converter from the Keras model file
- use the converter to convert the model to the TFLite format
- write the TFLite model to disk

When saving the TFLite model, you can name the file anything you like.

4. Load the TFLite model

Load the TFLite model with:

```
# Load the TFLite model
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
```

This loads the model into the TFLite interpreter and calls allocate_tensors() to allocate all tensors the interpreter needs.

5. Run the TFLite model

Run inference on the TFLite model with:

```
# Run inference on the TFLite model
input_data = ...  # load input data
interpreter.set_tensor(interpreter.get_input_details()[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
```

Load the input data into input_data and set it as the interpreter's input tensor; then run inference with invoke() and read the result from the interpreter's output tensor.

That completes the conversion of a TensorFlow 1.15 model to a TFLite model.