Building and Integrating TensorFlow Lite in the Android Source Tree

I had previously been using TensorFlow Mobile as the runtime for our models. However, TensorFlow Mobile's libtensorflow_inference.so is 19 MB, and once loaded it takes up a fair amount of memory: in my tests, about 20 MB. Together with the memory used by the model itself, our module's footprint grew by about 30 MB. That was too high, so I wanted to optimize it.

 

The first idea was model pruning.

However, only trimming the model's network structure would actually reduce memory usage, and doing that would lower the model's accuracy, so I abandoned this approach.

That left one path: trying the newer TensorFlow Lite and hoping for the best.

Most tutorials online build TensorFlow Lite with Gradle or Bazel, but I wanted to build and use it inside the Android source tree, where neither Gradle nor Bazel applies. After digging through quite a few resources and fiddling for two days, I finally got it working, and I'm sharing the process here.

 

Building a phone-ready jar and .so

This step mostly follows the official TensorFlow guide, but the guide has a pitfall: https://www.tensorflow.org/mobile/tflite/demo_android

Build TensorFlow Lite and the demo app from source

Clone the TensorFlow repo

git clone https://github.com/tensorflow/tensorflow

Install Bazel

If bazel is not installed on your system, see Installing Bazel.

Note: Bazel does not currently support Android builds on Windows. Windows users should download the prebuilt binary.

Install Android NDK and SDK

The Android NDK is required to build the native (C/C++) TensorFlow Lite code. The current recommended version is 14b and can be found on the NDK Archives page.

The Android SDK and build tools can be downloaded separately or used as part of Android Studio. To build the TensorFlow Lite Android demo, build tools require API >= 23 (but it will run on devices with API >= 21).

In the root of the TensorFlow repository, update the WORKSPACE file with the api_level and location of the SDK and NDK. If you installed it with Android Studio, the SDK path can be found in the SDK manager. The default NDK path is {SDK path}/ndk-bundle. For example:

android_sdk_repository (
    name = "androidsdk",
    api_level = 23,
    build_tools_version = "23.0.2",
    path = "/home/xxxx/android-sdk-linux/",
)

android_ndk_repository(
    name = "androidndk",
    path = "/home/xxxx/android-ndk-r10e/",
    api_level = 19,
)

Some additional details are available on the TF Lite Android App page.

Build the source code

To build the demo app, run bazel:

bazel build --cxxopt=--std=c++11 //tensorflow/contrib/lite/java/demo/app/src/main:TfLiteCameraDemo

Caution: Because of a Bazel bug, we only support building the Android demo app within a Python 2 environment.

That sentence, "Some additional details are available on the TF Lite Android App page.", is the one you absolutely must click through and read.

It takes you to https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/java/demo/README.md

Building from Source with Bazel

  1. Follow the Bazel steps for the TF Demo App:

  2. Install Bazel and Android Prerequisites. It's easiest with Android Studio.

    • You'll need at least SDK version 23.
    • Make sure to install the latest version of Bazel. Some distributions ship with Bazel 0.5.4, which is too old.
    • Bazel requires Android Build Tools 26.0.1 or higher.
    • Bazel is incompatible with NDK revisions 15 and above, with revision 16 being a compile-breaking change. Download an older version manually instead of using the SDK Manager.
    • You also need to install the Android Support Repository, available through Android Studio under Android SDK Manager -> SDK Tools -> Android Support Repository.

You must follow the hint in step 1, "Follow the Bazel steps for the TF Demo App:", and use NDK r14; only then does the build succeed. :P

The main guide originally said 19... what a trap. Builds kept failing. I tried NDK 19, 17, and 16, all without success, until I finally followed every link and read every tutorial carefully, and at last found "The current recommended version is 14b, which may be found here."

Bazel

NOTE: Bazel does not currently support building for Android on Windows. Full support for gradle/cmake builds is coming soon, but in the meantime we suggest that Windows users download the prebuilt binaries instead.

Install Bazel and Android Prerequisites

Bazel is the primary build system for TensorFlow. To build with Bazel, it and the Android NDK and SDK must be installed on your system.

  1. Install the latest version of Bazel as per the instructions on the Bazel website.

  2. The Android NDK is required to build the native (C/C++) TensorFlow code. The current recommended version is 14b, which may be found here.

    • NDK 16, the revision released in November 2017, is incompatible with Bazel. See here.
  3. The Android SDK and build tools may be obtained here, or alternatively as part of Android Studio. Build tools API >= 23 is required to build the TF Android demo (though it will run on API >= 21 devices).

    • The Android Studio SDK Manager's NDK installer will install the latest revision of the NDK, which is incompatible with Bazel. You'll need to download an older version manually, as (2) suggests.

Edit WORKSPACE

NOTE: As long as you have the SDK and NDK installed, the ./configure script will create these rules for you. Answer "Yes" when the script asks to automatically configure the ./WORKSPACE.

The Android entries in <workspace_root>/WORKSPACE must be uncommented with the paths filled in appropriately depending on where you installed the NDK and SDK. Otherwise an error such as: "The external label '//external:android/sdk' is not bound to anything" will be reported.

Also edit the API levels for the SDK in WORKSPACE to the highest level you have installed in your SDK. This must be >= 23 (this is completely independent of the API level of the demo, which is defined in AndroidManifest.xml). The NDK API level may remain at 14.

With the Bazel build environment configured, you can start building.

The command recommended by the official guide,

bazel build --cxxopt=--std=c++11 //tensorflow/contrib/lite/java/demo/app/src/main:TfLiteCameraDemo

only builds the TfLiteCameraDemo APK.

But I wanted libtensorflowlite.jar and libtensorflowlite_jni.so, so that TensorFlow Lite could be built and integrated in the Android source tree. To build the TensorFlow Lite library itself, the Bazel command needs to be adjusted:

bazel build --cxxopt='--std=c++11' //tensorflow/contrib/lite/java:tensorflowlite --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cpu=arm64-v8a

This successfully produces libtensorflowlite.jar and libtensorflowlite_jni.so for arm64-v8a.

To build for other CPU architectures, just change the --cpu flag.
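For reference, here are the same invocations for two other common Android ABIs, a sketch assuming the contrib-era source layout used above; the exact set of supported --cpu values depends on your Bazel and NDK versions:

```shell
# 32-bit ARM (most older phones)
bazel build --cxxopt='--std=c++11' //tensorflow/contrib/lite/java:tensorflowlite \
  --crosstool_top=//external:android/crosstool \
  --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
  --cpu=armeabi-v7a

# x86_64 (emulator images)
bazel build --cxxopt='--std=c++11' //tensorflow/contrib/lite/java:tensorflowlite \
  --crosstool_top=//external:android/crosstool \
  --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
  --cpu=x86_64
```

Each build drops its outputs under bazel-bin, so run them separately per ABI you need.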

 

Building and using TensorFlow Lite in the Android source tree

Next, test it in the source tree.

First, copy the TfLiteCameraDemo sources into the Android source tree and add an Android.mk. These two settings are the key:

LOCAL_PREBUILT_STATIC_JAVA_LIBRARIES := tensorflowlite:libs/libtensorflowlite.jar

LOCAL_PREBUILT_LIBS := libtensorflowlite_jni:libs/libtensorflowlite_jni.so
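For context, here is a minimal Android.mk sketch showing where those two lines fit. It assumes the jar and .so were copied into a libs/ directory under the module; the module names and package name are illustrative, not taken from the original project:

```
# Hypothetical Android.mk sketch -- paths and module names are examples.
LOCAL_PATH := $(call my-dir)

# Register the prebuilt TF Lite jar and JNI library.
include $(CLEAR_VARS)
LOCAL_PREBUILT_STATIC_JAVA_LIBRARIES := tensorflowlite:libs/libtensorflowlite.jar
LOCAL_PREBUILT_LIBS := libtensorflowlite_jni:libs/libtensorflowlite_jni.so
include $(BUILD_MULTI_PREBUILT)

# The demo app links against the prebuilts.
include $(CLEAR_VARS)
LOCAL_PACKAGE_NAME := TfLiteCameraDemo
LOCAL_SRC_FILES := $(call all-java-files-under, src)
LOCAL_STATIC_JAVA_LIBRARIES := tensorflowlite
LOCAL_REQUIRED_MODULES := libtensorflowlite_jni
include $(BUILD_PACKAGE)
```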

Back when using TensorFlow Mobile, you had to call System.loadLibrary for libtensorflow_inference.so yourself.

But looking through TfLiteCameraDemo, I found no System.loadLibrary call.

So I read the TensorFlow Lite source and discovered that libtensorflowlite_jni.so is already loaded inside the TensorFlowLite class. (This is also why the .so name configured in Android.mk must be "libtensorflowlite_jni"; otherwise loading the library fails at runtime.)

/** Static utility methods loading the TensorFlowLite runtime. */
public final class TensorFlowLite {

  private static final String LIBNAME = "tensorflowlite_jni";

  private TensorFlowLite() {}

  /** Returns the version of the underlying TensorFlowLite runtime. */
  public static native String version();

  /**
   * Load the TensorFlowLite runtime C library.
   */
  static boolean init() {
    try {
      System.loadLibrary(LIBNAME);
      return true;
    } catch (UnsatisfiedLinkError e) {
      System.err.println("TensorFlowLite: failed to load native library: " + e.getMessage());
      return false;
    }
  }

  static {
    init();
  }
}
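The naming constraint is easy to verify with plain JDK code: System.loadLibrary(LIBNAME) resolves the short name to a platform-specific file name via System.mapLibraryName, which is why the prebuilt file must be named exactly libtensorflowlite_jni.so. A small standalone sketch, with no TF Lite dependency:

```java
// Shows how the short library name used by TensorFlowLite.java
// maps to the actual file name that the native loader looks for.
public class LibNameDemo {
    public static void main(String[] args) {
        String libName = "tensorflowlite_jni"; // LIBNAME in TensorFlowLite.java
        // On Linux/Android this prints "libtensorflowlite_jni.so"
        System.out.println(System.mapLibraryName(libName));
    }
}
```

If the prebuilt .so is renamed, this mapping no longer matches and init() falls into the UnsatisfiedLinkError branch above.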

Then run make in the source tree, and the build succeeds.

After pushing it to the phone, TfLiteCameraDemo successfully recognized a keyboard. Memory usage was about 20 MB in total for the UI activity + bitmap + TensorFlow Lite + MobileNet combined, whereas previously TensorFlow Mobile alone consumed 20 MB.

 

Summary

TensorFlow Mobile's libtensorflow_inference.so is 19 MB, while libtensorflowlite_jni.so is only 1.5 MB, so at least for the native .so that TensorFlow needs to load, Lite saves a good deal of memory compared to Mobile.

Special thanks to these two articles, which gave me a lot of ideas:

https://yq.aliyun.com/articles/608715

https://fucknmb.com/2017/11/17/Tensorflow-Lite%E7%BC%96%E8%AF%91/
