This tutorial covers: a first look at TensorFlow Lite.
What resources are available?
- Installation
- iOS quickstart
- TensorFlow Lite converter: an overview of the TF Lite converter
- Converter command line examples: how to convert and visualize models with the `tflite_convert` command line tool
- Converter Python API guide: you can also do the conversion from a Python script
- TensorFlow Lite & TensorFlow Compatibility Guide: which layers TF Lite supports
- How to use custom operators: how to define new layers (the guide is not detailed; custom layers need further exploration)

iOS
- TensorFlow Lite example apps: iOS sample code for reference
- Optimized models for common mobile and edge use cases: reference models
How to convert a model?
Single input and output arrays
```shell
tflite_convert \
  --graph_def_file=frozen_model_relu_no_dp.pb \
  --output_file=tflite/converted_dcscn_model_relu_dropout.tflite \
  --input_arrays=x \
  --input_shape=1,200,200,1 \
  --output_arrays=R-CNN1/R-CNN1_conv \
  --dump_graphviz_dir=tflite
```
Multiple input arrays
```shell
tflite_convert \
  --graph_def_file=frozen_model_295.pb \
  --output_file=tflite/prelu_295/converted_dcscn_model_prelu_295.tflite \
  --input_arrays=x,x2 \
  --input_shape=1,200,200,1:1,400,400,1 \
  --output_arrays=output \
  --dump_graphviz_dir=tflite/prelu_295
```
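The flag format used above separates different inputs with `:` and dimensions within one shape with `,`. A plain-Python sketch of that format (an illustrative parser, not part of `tflite_convert`; `parse_input_shapes` is a hypothetical name):

```python
def parse_input_shapes(flag_value):
    """Parse a tflite_convert-style shapes string, e.g. '1,200,200,1:1,400,400,1'.

    Shapes for different inputs are separated by ':';
    dimensions within one shape are separated by ','.
    """
    return [[int(d) for d in shape.split(",")] for shape in flag_value.split(":")]

# The two-input example above:
shapes = parse_input_shapes("1,200,200,1:1,400,400,1")
# shapes == [[1, 200, 200, 1], [1, 400, 400, 1]]
```

Note that the order of shapes must match the order of names in `--input_arrays=x,x2`.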
What should you pay attention to?
1. Set `--input_shape=1,200,200,1`

The input is usually defined as a placeholder, like this:

```python
self.x = tf.placeholder(tf.float32, shape=[None, None, None, self.channels], name="x")
```
If you follow the official MobileNet tutorial:
```shell
tflite_convert \
  --output_file=/tmp/foo.tflite \
  --graph_def_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Predictions/Reshape_1
```
without setting `--input_shape=1,200,200,1`, the conversion fails with the following error:

```
ValueError: None is only supported in the 1st dimension. Tensor 'x' has invalid shape '[None, None, None, 1]'.
```
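The rule behind this error is that the converter accepts `None` only in the batch (first) dimension; every other dimension must be made concrete via `--input_shape`. A standalone sketch of that check (`check_converter_shape` is a hypothetical helper that mirrors the converter's message, not the converter's actual code):

```python
def check_converter_shape(name, shape):
    """Mimic the converter's constraint: None is only supported in the 1st dimension."""
    for i, dim in enumerate(shape):
        if dim is None and i != 0:
            raise ValueError(
                "None is only supported in the 1st dimension. "
                f"Tensor '{name}' has invalid shape '{shape}'."
            )

check_converter_shape("x", [1, 200, 200, 1])       # OK: fully specified by --input_shape
check_converter_shape("x", [None, 200, 200, 1])    # OK: None in batch dim only
# check_converter_shape("x", [None, None, None, 1])  # raises the ValueError above
```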
2. Unsupported layers: avoid them during training

- dropblock
  - LogicalNot
  - LogicalOr
  - Prod
- dropout
  - DEPTH_TO_SPACE
  - RandomUniform
- prelu
  - Abs
  - Stack
- others
Layers the TensorFlow team may add later: https://github.com/tensorflow/tensorflow/issues/21526
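To catch these before conversion, you can screen a graph's op names against the unsupported set. A minimal plain-Python sketch (the op list comes from the section above; actually collecting op names from a frozen GraphDef would require TensorFlow, so the input list here is illustrative):

```python
# Ops listed above that tflite_convert could not handle at the time of writing.
UNSUPPORTED_OPS = {
    "LogicalNot", "LogicalOr", "Prod",   # introduced by dropblock
    "DEPTH_TO_SPACE", "RandomUniform",   # introduced by dropout
    "Abs", "Stack",                      # introduced by prelu
}

def find_unsupported(op_names):
    """Return the ops in a graph that TF Lite cannot convert, sorted for stable output."""
    return sorted(set(op_names) & UNSUPPORTED_OPS)

# e.g. op names collected by walking a frozen graph's nodes:
print(find_unsupported(["Conv2D", "Abs", "Relu", "RandomUniform"]))
# -> ['Abs', 'RandomUniform']
```

If the result is non-empty, retrain without the offending layers (or rewrite them with supported ops) before converting.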
How to deploy to a phone
Path of the test model, for the record:
/Users/kindlehe/Project/tensorflow/dcscn-super-resolution_old/model/pb
Errors
```
Command /bin/sh failed with exit code 7
```
This is a network issue: open Proxifier and retry.
```
Couldn’t find “converted_dcscn_model_relu_no_dp_no_ps.tflite” in bundle.
```
Fix: select the project file → Build Phases → Copy Bundle Resources, check "Create groups" and "Copy items if needed", then drag converted_dcscn_model_relu_no_dp_no_ps.tflite into the ImageClassification/Model directory.
Resources
The Vision image-recognition framework in Swift
CVPixelBuffer
1. Get the width: CVPixelBufferGetWidth
2. Get the pixel format: CVPixelBufferGetPixelFormatType
3. Get the bytes per row: CVPixelBufferGetBytesPerRow (the sample code stores this in a variable named inputImageRowBytes)
4. Get a vImage buffer: vImage_Buffer(data: UnsafeMutableRawPointer!, height: vImagePixelCount, width: vImagePixelCount, rowBytes: Int)
5. Get the base address of the pixel data: CVPixelBufferGetBaseAddress(self)?.advanced(by: originY * inputImageRowBytes + originX * imageChannels)
6. Convert CVPixelBufferRef to UIImage: https://blog.csdn.net/yutaotst/article/details/53520381
7. Convert UIImage to CVPixelBuffer: [imageToBuffer.swift](https://gist.github.com/cristhianleonli/4ef3a6ee359c2d3d5b3e09bb8c7eaef5)
8. Converting between CVPixelBufferRef and UIImage: https://blog.csdn.net/jeffasd/article/details/78181856
9. Picking images with UIImagePickerController in Swift 5: https://theswiftdev.com/2019/01/30/picking-images-with-uiimagepickercontroller-in-swift-5/
10. UIImage extensions: https://github.com/melvitax/ImageHelper/blob/master/Sources/ImageHelper.swift
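The base-address offset in item 5 is plain row-major arithmetic: skip `originY` full rows of `rowBytes` bytes, then `originX` pixels of `channels` bytes each. A small sketch of that arithmetic (the concrete numbers are illustrative, not taken from the sample app):

```python
def pixel_offset(origin_x, origin_y, row_bytes, channels):
    """Byte offset of pixel (origin_x, origin_y) in a packed row-major pixel buffer."""
    return origin_y * row_bytes + origin_x * channels

# A 200x200 BGRA buffer: 4 channels, so 200 * 4 = 800 bytes per row.
print(pixel_offset(10, 20, row_bytes=800, channels=4))
# -> 16040  (20 rows * 800 bytes + 10 pixels * 4 bytes)
```

Note that CVPixelBuffer rows may be padded for alignment, which is why the real code reads `rowBytes` via the API instead of computing `width * channels` itself.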