[Tutorial] How to deploy a Super-Resolution model to TensorFlow Lite?

This tutorial covers:

A first look at TensorFlow Lite

What resources are available?

  1. Installation
  2. iOS quickstart
  3. TensorFlow Lite converter: an overview of the TF Lite converter
  4. Converter command line examples: how to convert and visualize a model with the tflite_convert command line tool
  5. Converter Python API guide: you can also run the conversion from a Python script
  6. TensorFlow Lite & TensorFlow Compatibility Guide: which layers TF Lite supports
  7. How to use custom operators: how to define new layers (the official tutorial is thin; defining custom layers needs more exploration)

iOS

  1. TensorFlow Lite example apps: iOS sample code to use as a reference
  2. Optimized models for common mobile and edge use cases: reference models

How do you convert a model?

Single input and output arrays

tflite_convert \
--graph_def_file=frozen_model_relu_no_dp.pb \
--output_file=tflite/converted_dcscn_model_relu_dropout.tflite \
--input_arrays=x \
--input_shapes=1,200,200,1 \
--output_arrays=R-CNN1/R-CNN1_conv \
--dump_graphviz_dir=tflite

Multiple input arrays

tflite_convert \
--graph_def_file=frozen_model_295.pb \
--output_file=tflite/prelu_295/converted_dcscn_model_prelu_295.tflite \
--input_arrays=x,x2 \
--input_shapes=1,200,200,1:1,400,400,1 \
--output_arrays=output \
--dump_graphviz_dir=tflite/prelu_295
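The same conversions can be scripted through the TF1 Python API (tf.compat.v1.lite), as the "Converter Python API guide" above describes. The snippet below is a self-contained sketch: a toy one-op graph stands in for frozen_model_relu_no_dp.pb, and the commented-out from_frozen_graph call shows what you would use when starting from a real .pb file:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy graph standing in for the frozen super-resolution model.
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [1, 200, 200, 1], name="x")
    out = tf.nn.relu(x, name="output")

with tf.compat.v1.Session(graph=g) as sess:
    # Starting from a real frozen .pb you would instead call:
    # converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    #     "frozen_model_relu_no_dp.pb",
    #     input_arrays=["x"],
    #     output_arrays=["R-CNN1/R-CNN1_conv"],
    #     input_shapes={"x": [1, 200, 200, 1]})
    converter = tf.compat.v1.lite.TFLiteConverter.from_session(sess, [x], [out])
    tflite_model = converter.convert()  # the .tflite flatbuffer, as bytes

with open("converted_toy_model.tflite", "wb") as f:
    f.write(tflite_model)
```

This is equivalent to the tflite_convert invocations above, minus the Graphviz dump.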

What should you pay attention to?

1. Set --input_shapes=1,200,200,1

The input is usually defined as a placeholder, like this:

self.x = tf.placeholder(tf.float32, shape=[None, None, None, self.channels], name="x")

If you follow the official MobileNet tutorial:

tflite_convert \
  --output_file=/tmp/foo.tflite \
  --graph_def_file=/tmp/mobilenet_v1_0.50_128/frozen_graph.pb \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Predictions/Reshape_1

without setting --input_shapes=1,200,200,1, the converter fails with the following error:

ValueError: None is only supported in the 1st dimension. Tensor 'x' has invalid shape '[None, None, None, 1]'.

2. Unsupported layers: avoid them at training time. Each layer below introduces the listed ops, which the converter rejects:

dropblock
  1. LogicalNot
  2. LogicalOr
  3. Prod
dropout
  1. DEPTH_TO_SPACE
  2. RandomUniform
prelu
  1. Abs
  2. Stack
others
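For PReLU specifically there is a workaround: the same formulation tf.keras's PReLU layer uses expresses the activation with Relu ops only, so no Abs or Stack nodes reach the converter. This is a sketch, not the code from this post's model:

```python
import tensorflow as tf

def tflite_friendly_prelu(x, alpha):
    # PReLU(x) = max(0, x) + alpha * min(0, x), written with two Relu ops
    # so the exported graph contains no Abs or Stack nodes.
    return tf.nn.relu(x) - alpha * tf.nn.relu(-x)

y = tflite_friendly_prelu(tf.constant([-2.0, 3.0]), alpha=0.25)
print(y.numpy())  # [-0.5  3. ]
```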

Ops the TensorFlow team may add later: https://github.com/tensorflow/tensorflow/issues/21526
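A quick pre-flight check can catch these ops before tflite_convert does, by scanning the GraphDef's node types against the list above. find_unsupported_ops is a hypothetical helper, and the set reflects this post's list at the time of writing, not an authoritative one:

```python
import tensorflow as tf

# Ops this post reports the converter rejecting (not an exhaustive list).
UNSUPPORTED = {"LogicalNot", "LogicalOr", "Prod", "RandomUniform", "Abs", "Stack"}

def find_unsupported_ops(graph_def):
    """Return the op types in graph_def that appear in UNSUPPORTED, sorted."""
    return sorted({node.op for node in graph_def.node} & UNSUPPORTED)

# Demo: a toy graph whose abs-based branch introduces an Abs op.
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [1, 4], name="x")
    y = 0.5 * (x + tf.abs(x))  # abs-based formulation of relu

print(find_unsupported_ops(g.as_graph_def()))  # ['Abs']
```

Running this on the real frozen GraphDef (loaded with tf.compat.v1.GraphDef.FromString) tells you which layers to replace before retraining.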

How to deploy to the phone

Noting the path of the test model for reference:
/Users/kindlehe/Project/tensorflow/dcscn-super-resolution_old/model/pb

Errors

Command /bin/sh failed with exit code 7
A network problem; open Proxifier.

Couldn’t find “converted_dcscn_model_relu_no_dp_no_ps.tflite” in bundle.
In the project file, open Build Phases > Copy Bundle Resources, check "create groups" and "copy items if needed", then drag converted_dcscn_model_relu_no_dp_no_ps.tflite into the ImageClassification/Model directory.

References

The Vision image recognition framework in Swift
CVPixelBuffer

1. Get the width: CVPixelBufferGetWidth
2. Get the pixel format: CVPixelBufferGetPixelFormatType
3. Get the bytes per row: inputImageRowBytes
4. Get a vImage buffer: vImage_Buffer(data: UnsafeMutableRawPointer!, height: vImagePixelCount, width: vImagePixelCount, rowBytes: Int)
5. Get the base address of the pixels: CVPixelBufferGetBaseAddress(self)?.advanced(by: originY * inputImageRowBytes + originX * imageChannels)
6. Convert CVPixelBufferRef to UIImage: https://blog.csdn.net/yutaotst/article/details/53520381
7. Convert UIImage to CVPixelBuffer: [imageToBuffer.swift](https://gist.github.com/cristhianleonli/4ef3a6ee359c2d3d5b3e09bb8c7eaef5)
8. Converting between CVPixelBufferRef and UIImage: https://blog.csdn.net/jeffasd/article/details/78181856
9. Picking images with UIImagePickerController in Swift 5: https://theswiftdev.com/2019/01/30/picking-images-with-uiimagepickercontroller-in-swift-5/
10. UIImage extensions: https://github.com/melvitax/ImageHelper/blob/master/Sources/ImageHelper.swift