An introduction to TOCO: TensorFlow Lite model conversion commands

I have recently been working on recognition on mobile devices and needed to convert a trained TensorFlow model into a .tflite model. This post translates part of the usage documented in the TOCO command-line examples; more may be added later as needed.

First, the usages covered by the command-line script:

Table of contents:

*   [Convert a TensorFlow SavedModel to TensorFlow Lite](#savedmodel)
*   [Convert a TensorFlow GraphDef to TensorFlow Lite for float
    inference](#graphdef-float)
*   [Quantization](#quantization)
    *   [Convert a TensorFlow GraphDef to TensorFlow Lite for quantized
        inference](#graphdef-quant)
    *   [Use "dummy-quantization" to try out quantized inference on a float
        graph](#dummy-quant)
*   [Specifying input and output arrays](#specifying-input-and-output-arrays)
    *   [Multiple output arrays](#multiple-output-arrays)
    *   [Multiple input arrays](#multiple-input-arrays)
    *   [Specifying subgraphs](#specifying-subgraphs)
*   [Other conversions supported by TOCO](#other-conversions)
    *   [Optimize a TensorFlow GraphDef](#optimize-graphdef)
    *   [Convert a TensorFlow Lite FlatBuffer back into TensorFlow GraphDef
        format](#to-graphdef)
*   [Logging](#logging)
    *   [Standard logging](#standard-logging)
    *   [Verbose logging](#verbose-logging)
    *   [Graph "video" logging](#graph-video-logging)
*   [Graph visualizations](#graph-visualizations)
    *   [Using --output_format=GRAPHVIZ_DOT](#using-output-formatgraphviz-dot)
    *   [Using --dump_graphviz](#using-dump-graphviz)
    *   [Legend for the graph visualizations](#graphviz-legend)
## Convert a TensorFlow SavedModel to TensorFlow Lite

Converts a model saved in the SavedModel format into the TensorFlow Lite format.

Note: a SavedModel is produced with tf.saved_model.builder.SavedModelBuilder, which bundles the computation graph and the variable values together in one model.

The following example converts a basic TensorFlow SavedModel into a TensorFlow Lite FlatBuffer to perform floating-point inference.
```
bazel run --config=opt \
  third_party/tensorflow/contrib/lite/toco:toco -- \
  --savedmodel_directory=/tmp/saved_model \
  --output_file=/tmp/foo.tflite
```

[SavedModel](https://www.tensorflow.org/programmers_guide/saved_model#using_savedmodel_with_estimators)
has fewer required flags than frozen graphs (described below) due to access to additional data contained within the SavedModel. The values for --input_arrays and --output_arrays are an aggregated, alphabetized list of the inputs and outputs in the [SignatureDefs](https://www.tensorflow.org/serving/signature_defs) within the [MetaGraphDef](https://www.tensorflow.org/programmers_guide/saved_model#apis_to_build_and_load_a_savedmodel) specified by --savedmodel_tagset. The value for input_shapes is automatically determined from the MetaGraphDef whenever possible. The default value for --inference_type for SavedModels is FLOAT.

There is currently no support for MetaGraphDefs without a SignatureDef or for MetaGraphDefs that use the [assets/ directory](https://www.tensorflow.org/programmers_guide/saved_model#structure_of_a_savedmodel_directory).


## Convert a TensorFlow GraphDef to TensorFlow Lite for float inference
The following example converts a basic TensorFlow GraphDef (frozen by [freeze_graph.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py)) into a TensorFlow Lite FlatBuffer to perform floating-point inference. Frozen graphs contain the variables stored in Checkpoint files as Const ops.

In other words, a frozen graph stores the weight variables from the Checkpoint files as constants, merging the model definition and the weight values into a single file.

```
curl https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.50_128_frozen.tgz \
  | tar xzv -C /tmp
bazel run --config=opt \
  //tensorflow/contrib/lite/toco:toco -- \
  --input_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
  --output_file=/tmp/foo.tflite \
  --inference_type=FLOAT \
  --input_shape=1,128,128,3 \
  --input_array=input \
  --output_array=MobilenetV1/Predictions/Reshape_1
```

## Convert a TensorFlow GraphDef to TensorFlow Lite for quantized inference
TOCO is compatible with the fixed-point quantization models described [here](https://www.tensorflow.org/performance/quantization). These are float models with [FakeQuant*](https://www.tensorflow.org/api_guides/python/array_ops#Fake_quantization) ops inserted at the boundaries of fused layers to record min-max range information. This generates a quantized inference workload that reproduces the quantization behavior that was used during training.

The following command generates a quantized TensorFlow Lite FlatBuffer from a "quantized" TensorFlow GraphDef.

```
bazel run --config=opt \
  //tensorflow/contrib/lite/toco:toco -- \
  --input_file=/tmp/some_quantized_graph.pb \
  --output_file=/tmp/foo.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,128,128,3 \
  --input_array=input \
  --output_array=MobilenetV1/Predictions/Reshape_1 \
  --mean_value=128 \
  --std_value=127
```
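The --mean_value and --std_value flags describe how the quantized uint8 input values relate to the float values the model was trained on, via real_value = (quantized_value - mean_value) / std_value. As a rough sketch of that mapping (plain Python for illustration, not part of TOCO; the function name is mine):

```python
# Sketch of the input dequantization that --mean_value / --std_value describe:
#   real_value = (quantized_value - mean_value) / std_value

def dequantize_input(q, mean_value=128.0, std_value=127.0):
    """Map a uint8 input value back to the float domain of the model."""
    return (q - mean_value) / std_value

# With mean_value=128 and std_value=127, the uint8 range [0, 255]
# maps approximately onto [-1.0, 1.0]:
print(dequantize_input(128))  # 0.0
print(dequantize_input(255))  # 1.0
print(dequantize_input(0))    # ~ -1.008
```

So for an image model expecting inputs roughly in [-1, 1], mean_value=128 and std_value=127 are a natural choice for uint8 pixel data.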

## Use "dummy-quantization" to try out quantized inference on a float graph
In order to evaluate the possible benefit of generating a quantized graph, TOCO
allows “dummy-quantization” on float graphs. The flags --default_ranges_min
and --default_ranges_max accept plausible values for the min-max ranges of the
values in all arrays that do not have min-max information. “Dummy-quantization”
will produce lower accuracy but will emulate the performance of a correctly
quantized model.

The example below contains a model using Relu6 activation functions. Therefore,
a reasonable guess is that most activation ranges should be contained in [0, 6].


```
curl https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.50_128_frozen.tgz \
  | tar xzv -C /tmp
bazel run --config=opt \
  //tensorflow/contrib/lite/toco:toco -- \
  --input_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
  --output_file=/tmp/foo.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,128,128,3 \
  --input_array=input \
  --output_array=MobilenetV1/Predictions/Reshape_1 \
  --default_ranges_min=0 \
  --default_ranges_max=6 \
  --mean_value=127.5 \
  --std_value=127.5
```
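To see what assuming a [0, 6] range actually implies, here is a sketch (plain Python for illustration; the function names are mine, not TOCO's) of the standard affine uint8 quantization parameters derived from a min-max range, under the common convention real_value = scale * (quantized_value - zero_point):

```python
# Sketch of deriving uint8 quantization parameters from a min-max range,
# as "dummy-quantization" would when assuming [0, 6] for Relu6 activations.
# Convention: real_value = scale * (quantized_value - zero_point).

def quantization_params(range_min, range_max, qmin=0, qmax=255):
    """Compute (scale, zero_point) mapping [range_min, range_max] onto uint8."""
    scale = (range_max - range_min) / (qmax - qmin)
    zero_point = int(round(qmin - range_min / scale))
    return scale, zero_point

def quantize(x, scale, zero_point):
    """Round a float activation to its clamped uint8 representation."""
    q = int(round(x / scale)) + zero_point
    return max(0, min(255, q))

scale, zero_point = quantization_params(0.0, 6.0)
print(scale)       # 6/255, about 0.0235
print(zero_point)  # 0

# Out-of-range activations are clamped, which is why a too-narrow guessed
# range hurts accuracy:
print(quantize(6.0, scale, zero_point))   # 255
print(quantize(-1.0, scale, zero_point))  # 0
```

This is why dummy-quantization trades accuracy for a realistic performance estimate: any activation whose true range falls outside the guessed [0, 6] gets clamped or coarsely rounded, but the arithmetic cost matches a properly quantized model.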