TensorFlow Lite is for mobile and embedded devices.
TensorFlow Lite is the official solution for running machine learning models on mobile and embedded devices. It enables on-device machine learning inference with low latency and a small binary size on Android, iOS, and other operating systems.
Many benefits
On-device ML inference is difficult because of the many constraints of mobile and embedded hardware; TensorFlow Lite addresses these:
- Performance: TF Lite is fast, with no noticeable accuracy loss; see the performance metrics.
- Portability: Android, iOS, and more specialized IoT devices.
- Low latency: Optimized float and fixed-point CPU kernels, operator fusing, and more.
- Acceleration: Integration with GPUs and internal/external accelerators.
- Small model size: Controlled dependencies, quantization, and selective op registration.
- Tooling: Conversion, compression, benchmarking, power consumption measurement, and more.
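The "small model size" point above comes largely from post-training quantization, which the converter exposes directly. A minimal sketch, assuming TensorFlow 2.x is installed; the toy Keras model is a placeholder, not part of the original text:

```python
import tensorflow as tf

# Toy stand-in model; substitute your own trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Post-training quantization: Optimize.DEFAULT quantizes the weights,
# shrinking the flat buffer with little accuracy loss.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_bytes = converter.convert()
```

The same converter flag drives the size/accuracy trade-off mentioned in the benefits list; more aggressive schemes (e.g. full integer quantization) build on it.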
How it works
Build
Build a new model or retrain an existing one, such as using transfer learning.
Convert
Convert a TensorFlow model into a compressed flat buffer with the TensorFlow Lite Converter.
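The Convert step can be sketched with the TensorFlow Lite Converter's Python API. A minimal sketch, assuming TensorFlow 2.x; the toy Keras model stands in for "a new model or a retrained one":

```python
import tensorflow as tf

# Toy model standing in for the model built in the previous step.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# The TensorFlow Lite Converter produces a compressed FlatBuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# This is the .tflite file deployed to the device in the next step.
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

For a model saved on disk, `tf.lite.TFLiteConverter.from_saved_model(path)` works the same way.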
Deploy
Take the compressed .tflite file and load it into a mobile or embedded device.
See the tutorials below to build an app.
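On the device, the deployed flat buffer is loaded and run through an interpreter. A self-contained Python sketch using `tf.lite.Interpreter` (the toy model and input shape are illustrative assumptions; on a real device you would load the shipped `.tflite` file and typically use the platform's native API):

```python
import numpy as np
import tensorflow as tf

# Toy model converted in-process so the sketch is self-contained;
# in practice you would ship and load the .tflite file instead.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(1),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the model and allocate its input/output tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference on one dummy input.
x = np.random.rand(1, 8).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(output_details[0]["index"])
```

The interpreter pattern is the same on Android, iOS, and embedded Linux, only the binding language changes.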