TensorFlow Lite Notes

These notes cover problems encountered when converting TensorFlow models to the mobile-oriented TensorFlow Lite format, including conversion errors, quantization issues, and the performance impact of the Android NN API on Pixel devices. The conversion workflow is described, along with solutions to the related errors.

Google officially releases its mobile deep learning framework TensorFlow Lite: https://zhuanlan.zhihu.com/p/31063235

https://github.com/tensorflow/tensorflow/labels/comp%3Alite


https://github.com/tensorflow/tensorflow/issues/15387

Converting from a .pb fails the check `tf_dst_dtype == DT_UINT8 || tf_dst_dtype == DT_INT32 || tf_dst_dtype == DT_FLOAT` and aborts with `Abort trap: 6`.

https://github.com/tensorflow/tensorflow/issues/15267

Status: waiting for a response; the issue includes the model's source code.

Tensorflow Lite exhibits longer inference time when built with the Android NN API on Google Pixel 1

https://github.com/tensorflow/tensorflow/issues/15554

Pixel 2016 currently does not have a hardware driver for NN API. So your model is actually running on CPU, plus some overhead for model initialization.

Another source of overhead is the input and output tensors. If a buffer is not already created as shared memory, a copy might occur during invocation. We will keep improving the integration to make it more efficient.
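The shared-memory point in the reply above can be illustrated with a plain-Java sketch: allocating the input as a direct, native-order `ByteBuffer` (as the original demo code does before being modified for float models) is what gives the runtime a chance to hand the buffer over without an extra copy. The class and helper names here are hypothetical; only the `java.nio` calls are real.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DirectInputBuffer {
    static final int BATCH = 1, WIDTH = 224, HEIGHT = 224, CHANNELS = 3;
    static final int BYTES_PER_FLOAT = 4;

    // Allocate the input as a direct, native-order ByteBuffer so the
    // runtime can (in principle) pass it down without copying.
    static ByteBuffer allocateInput() {
        ByteBuffer buf = ByteBuffer.allocateDirect(
                BATCH * WIDTH * HEIGHT * CHANNELS * BYTES_PER_FLOAT);
        buf.order(ByteOrder.nativeOrder());
        return buf;
    }

    // Unpack one ARGB pixel into three floats (R, G, B) in channel order,
    // mirroring the bit-shifting loop in the demo app.
    static void putPixel(ByteBuffer buf, int argb) {
        buf.putFloat((argb >> 16) & 0xFF);
        buf.putFloat((argb >> 8) & 0xFF);
        buf.putFloat(argb & 0xFF);
    }
}
```

A heap-allocated `float[][][][]` (the workaround used below for float models) still works, but may incur the copy the maintainers mention.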



1.  tensorflow lite: convert error for the quantized graphdef

https://stackoverflow.com/questions/47463204/tensorflow-lite-convert-error-for-the-quantized-graphdef


The basic conversion workflow can be used as a reference. There is an official reply explaining why the quantized graphdef fails: roughly, the old and new quantization mechanisms are incompatible.



2. Android tensorflow lite kernel_util.cc:34 input_product_scale < output_scale 

https://github.com/tensorflow/tensorflow/issues/14642


The basic conversion workflow can be used as a reference.


3. Build Tensorflow Lite C++ API into a dynamic library for Android

https://github.com/tensorflow/tensorflow/issues/14758

Kept as a backup; no known use for it yet.


4. tensorflow lite: error when convert frozen model to lite format

https://github.com/tensorflow/tensorflow/issues/14761

Provides links to several usable models; still to be tried.


5. freeze_graph "No variables to save"


https://github.com/tensorflow/tensorflow/issues/14580


6.  Tensorflow Lite demo app with inception-v3/Mobilenet_v1 (float) model crashes

https://github.com/tensorflow/tensorflow/issues/14719

line 46:

//private static final String MODEL_PATH = "mobilenet_quant_v1_224.tflite";
private static final String MODEL_PATH = "mobilenet_v1_1.0_224.tflite";
//private static final String MODEL_PATH = "inceptionv3_slim_2016.tflite";
/** Name of the label file stored in Assets. */
private static final String LABEL_PATH = "labels.txt";
//private static final String LABEL_PATH = "imagenet_slim_labels.txt";

line 61:

//static final int DIM_IMG_SIZE_X = 224;
//static final int DIM_IMG_SIZE_Y = 224;

static final int DIM_IMG_SIZE_X = 224;
static final int DIM_IMG_SIZE_Y = 224;

line 76

  /** A ByteBuffer to hold image data, to be fed into Tensorflow Lite as inputs. */
  //private ByteBuffer imgData = null;
  private float[][][][] imgData = null;

  /** An array to hold inference results, to be fed into Tensorflow Lite as outputs. */
  //private byte[][] labelProbArray = null;
  private float[][] labelProbArray = null;


line 98

/*
    imgData =
        ByteBuffer.allocateDirect(
            DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE );
            */
    imgData = new float[DIM_BATCH_SIZE][DIM_IMG_SIZE_X][DIM_IMG_SIZE_Y][DIM_PIXEL_SIZE];
    //imgData.order(ByteOrder.nativeOrder());
    //labelProbArray = new byte[1][labelList.size()];
    labelProbArray = new float[1][labelList.size()];

line 116

    //imgData.rewind();

line 166

    for (int i = 0; i < DIM_IMG_SIZE_X; ++i) {
      for (int j = 0; j < DIM_IMG_SIZE_Y; ++j) {
        final int val = intValues[pixel++];
        //imgData.put((byte) ((val >> 16) & 0xFF));
        //imgData.put((byte) ((val >> 8) & 0xFF));
        //imgData.put((byte) (val & 0xFF));
        imgData[0][i][j][0] = (float) ((val >> 16) & 0xFF);
        imgData[0][i][j][1] = (float) ((val >> 8) & 0xFF);
        imgData[0][i][j][2] = (float) (val & 0xFF);
      }
    }

line 184

      sortedLabels.add(
         // new AbstractMap.SimpleEntry<>(labelList.get(i), (labelProbArray[0][i] & 0xff) / 255.0f));
              new AbstractMap.SimpleEntry<>(labelList.get(i), (labelProbArray[0][i])));


After the above changes, the mobilenet_v1_1.0_224_float_2017_11_08 model runs successfully.

Mobilenet 0.75 192 Float and Mobilenet 1.0 224 Float: success.

Inception V3 Slim 2016 runs, but the output probabilities are larger than 1.0.
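Probabilities above 1.0 usually mean the output tensor holds raw, un-normalized scores (logits) rather than softmax outputs. As an assumption about this model's output head, not a confirmed diagnosis, applying a softmax over the output array in Java would restore a proper probability distribution before building `sortedLabels`:

```java
public class Softmax {
    // Numerically stable softmax: subtract the max logit before
    // exponentiating so Math.exp never overflows.
    static float[] softmax(float[] logits) {
        float max = Float.NEGATIVE_INFINITY;
        for (float v : logits) max = Math.max(max, v);
        double sum = 0.0;
        double[] exps = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            exps[i] = Math.exp(logits[i] - max);
            sum += exps[i];
        }
        float[] probs = new float[logits.length];
        for (int i = 0; i < logits.length; i++) {
            probs[i] = (float) (exps[i] / sum);
        }
        return probs;
    }
}
```

For example, `labelProbArray[0]` could be passed through `Softmax.softmax` before the `sortedLabels.add` loop shown above.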

Inception V3 2015 does not run successfully, even after applying the modification a user suggested: 'Additionally, I received a tensor length error for the labels, the file had 1001 and (somewhere) it expected 1008, so I filled in 7 lines of foo1, foo2, etc.'



7. Convert a TensorFlow GraphDef to TensorFlow Lite for float inference

curl https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.50_128_frozen.tgz \
  | tar xzv -C /tmp

bazel build --config=opt tensorflow/contrib/lite/toco:toco

Float conversion:
bazel run --config=opt \
  //tensorflow/contrib/lite/toco:toco -- \
  --input_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
  --output_file=/tmp/checkpoints/mobilenet_v1_0.50_128/mobilenet_v1_0.50_128.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --inference_type=FLOAT \
  --input_shape=1,128,128,3 \
  --input_array=input \
  --output_array=MobilenetV1/Predictions/Reshape_1

Output on success:


INFO: Found 1 target...
Target //tensorflow/contrib/lite/toco:toco up-to-date:
  bazel-bin/tensorflow/contrib/lite/toco/toco
INFO: Elapsed time: 31.581s, Critical Path: 3.86s

INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/home/jiao/Downloads/mobilenet_v1_0.50_128/frozen_graph.pb' '--output_file=/tmp/checkpoints/mobilenet_v1_0.50_128/foo.lite' '--input_format=TENSORFLOW_GRAPHDEF' '--output_format=TFLITE' '--input_type=FLOAT' '--inference_type=FLOAT' '--input_shape=1,128,128,3' '--input_array=input' '--output_array=MobilenetV1/Predictions/Reshape_1'
2017-11-28 14:37:30.386852: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 416 operators, 583 arrays (0 quantized)
2017-11-28 14:37:30.404590: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 31 operators, 89 arrays (0 quantized)
2017-11-28 14:37:30.404949: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 31 operators, 89 arrays (0 quantized)
2017-11-28 14:37:30.405179: I tensorflow/contrib/lite/toco/allocate_transient_arrays.cc:312] Total transient array allocated size: 1048576 bytes, theoretical optimal value: 786432 bytes.
2017-11-28 14:37:30.405411: I tensorflow/contrib/lite/toco/toco_tooling.cc:255] Estimated count of arithmetic ops: 0.099218 billion (note that a multiply-add is counted as 2 ops).


quantized:

bazel run --config=opt \
  //tensorflow/contrib/lite/toco:toco -- \
  --input_file=/tmp/mobilenet_v1_0.50_128/quantized_graph.pb \
  --output_file=/tmp/checkpoints/mobilenet_v1_0.50_128/foo_quantized.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_type=QUANTIZED_UINT8 \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,128,128,3 \
  --input_array=input \
  --output_array=MobilenetV1/Predictions/Reshape_1 \
  --default_ranges_min=0 \
  --default_ranges_max=6 \
  --mean_value=128 \
  --std_value=127
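The `--mean_value=128 --std_value=127` flags define how a uint8 input byte maps to the real value the network was trained on: real = (q - mean) / std, so 128 maps to 0.0 and 255 maps to 1.0. A minimal Java sketch of that mapping (class and method names are my own, not toco's):

```java
public class QuantParams {
    // Values passed to toco via --mean_value and --std_value.
    static final float MEAN = 128f, STD = 127f;

    // Map a uint8 input value to its real-valued equivalent:
    // real = (q - mean) / std.
    static float dequantize(int q) {
        return (q - MEAN) / STD;
    }

    // Inverse mapping, rounded and clamped to the uint8 range.
    static int quantize(float real) {
        int q = Math.round(real * STD + MEAN);
        return Math.min(255, Math.max(0, q));
    }
}
```

With these values, the byte range [0, 255] covers roughly [-1.0, 1.0] in real space, which matches a symmetric input normalization.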

Output on failure:

INFO: Found 1 target...
Target //tensorflow/contrib/lite/toco:toco up-to-date:
  bazel-bin/tensorflow/contrib/lite/toco/toco
INFO: Elapsed time: 0.741s, Critical Path: 0.02s

INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/tmp/mobilenet_v1_0.50_128/quantized_graph.pb' '--output_file=/tmp/checkpoints/mobilenet_v1_0.50_128/foo_quantized.tflite' '--input_format=TENSORFLOW_GRAPHDEF' '--output_format=TFLITE' '--input_type=QUANTIZED_UINT8' '--inference_type=QUANTIZED_UINT8' '--input_shape=1,128,128,3' '--input_array=input' '--output_array=MobilenetV1/Predictions/Reshape_1' '--default_ranges_min=0' '--default_ranges_max=6' '--mean_value=128' '--std_value=127'
2017-12-06 14:51:45.883181: W tensorflow/contrib/lite/toco/toco_cmdline_flags.cc:177] --input_type is deprecated. Use --inference_input_type.
2017-12-06 14:51:45.893793: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1046] Converting unsupported operation: Dequantize
2017-12-06 14:51:45.893929: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1046] Converting unsupported operation: Dequantize
2017-12-06 14:51:45.894013: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1046] Converting unsupported operation: Dequantize
2017-12-06 14:51:45.894093: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1046] Converting unsupported operation: Dequantize
2017-12-06 14:51:45.894172: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1046] Converting unsupported operation: Dequantize
2017-12-06 14:51:45.894269: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1046] Converting unsupported operation: Dequantize
2017-12-06 14:51:45.894565: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1046] Converting unsupported operation: Dequantize
2017-12-06 14:51:45.894936: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1046] Converting unsupported operation: Dequantize
2017-12-06 14:51:45.897022: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 138 operators, 226 arrays (0 quantized)
2017-12-06 14:51:45.898811: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 57 operators, 140 arrays (1 quantized)
2017-12-06 14:51:45.899319: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 57 operators, 140 arrays (1 quantized)
2017-12-06 14:51:45.899380: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before quantization graph transformations: 57 operators, 140 arrays (1 quantized)
2017-12-06 14:51:45.899403: W tensorflow/contrib/lite/toco/graph_transformations/quantize.cc:132] Constant array MobilenetV1/Conv2d_0/weights/read/_82__cf__82 lacks MinMax information. To make up for that, we will now compute the MinMax from actual array elements. That will result in quantization parameters that probably do not match whichever arithmetic was used during training, and thus will probably be a cause of poor inference accuracy.
(the same quantize.cc:132 "lacks MinMax information" warning repeats for each subsequent MobilenetV1 Conv2d depthwise/pointwise weight array, through Conv2d_7_depthwise)
2017-12-06 14:51:45.901111: F tensorflow/contrib/lite/toco/graph_transformations/quantize.cc:354] Unimplemented: this graph contains an operator of type (Unsupported TensorFlow op: Dequantize) for which the quantized form is not yet implemented. Sorry, and patches welcome (that's a relatively fun patch to write, mostly providing the actual quantized arithmetic code for this op).

