TensorFlow model on Android workflow



I'm trying to figure out the workflow for training and deploying a TensorFlow model on Android. I'm aware of the other similar questions on Stack Overflow, but none of them seem to address the problems I've run into.

After studying the Android example from the TensorFlow repository, this is what I think the workflow should be:

  1. Build and train the TensorFlow model in Python.
  2. Create a new graph and transfer all relevant nodes (i.e. not the nodes responsible for training) into it, importing the trained weight variables as constants so that the C++ API can read them.
  3. Develop the Android GUI in Java, using the native keyword to stub out a call to the TensorFlow model.
  4. Run javah to generate the C/C++ stub code for the TensorFlow native call.
  5. Fill in the stub by using the TensorFlow C++ API to read in and run the trained/serialized model (a sketch of this is shown after the list).
  6. Use Bazel to build both the Java app and the native TensorFlow interface (as a .so file), and to generate the APK.
  7. Use adb to deploy the APK.

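To make step 5 more concrete, here is a minimal sketch (untested, purely for illustration) of what the javah-generated stub could look like once filled in with the TensorFlow C++ Session API. The Java class and method name (org.tensorflowtest.MyActivity.classify), the model path, and the op names "input" and "output" are assumptions of mine; they would need to match the actual Java declaration and the graph exported in step 2.

    #include <jni.h>

    #include <memory>
    #include <vector>

    #include "tensorflow/core/platform/env.h"
    #include "tensorflow/core/public/session.h"

    // Hypothetical JNI entry point for a Java method declared as
    //   public native float[] classify(float[] input);
    // in class org.tensorflowtest.MyActivity.
    extern "C" JNIEXPORT jfloatArray JNICALL
    Java_org_tensorflowtest_MyActivity_classify(JNIEnv* env, jobject /*thiz*/,
                                                jfloatArray input) {
      // Load the frozen GraphDef exported from Python in step 2 (path assumed).
      tensorflow::GraphDef graph_def;
      if (!tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                       "/data/local/tmp/frozen_graph.pb",
                                       &graph_def).ok()) {
        return nullptr;
      }

      // Create a session and load the graph into it.
      std::unique_ptr<tensorflow::Session> session(
          tensorflow::NewSession(tensorflow::SessionOptions()));
      if (!session->Create(graph_def).ok()) {
        return nullptr;
      }

      // Copy the Java float[] into an input tensor (a 1 x N shape is assumed).
      jsize len = env->GetArrayLength(input);
      tensorflow::Tensor in(tensorflow::DT_FLOAT,
                            tensorflow::TensorShape({1, len}));
      env->GetFloatArrayRegion(input, 0, len, in.flat<float>().data());

      // Run the graph; "input" and "output" must match the op names in the
      // exported graph.
      std::vector<tensorflow::Tensor> outputs;
      if (!session->Run({{"input", in}}, {"output"}, {}, &outputs).ok()) {
        return nullptr;
      }

      // Copy the result back into a new Java float[] and return it.
      auto out = outputs[0].flat<float>();
      jfloatArray result = env->NewFloatArray(static_cast<jsize>(out.size()));
      env->SetFloatArrayRegion(result, 0, static_cast<jsize>(out.size()),
                               out.data());
      return result;
    }
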
Step 6 is the problem. Bazel will happily compile a native (to OS X) .dylib that I can call from Java via JNI. Android Studio, likewise, will generate a whole bunch of XML code that makes the GUI I want. However, Bazel wants all of the Java app code to be inside the 'WORKSPACE' top-level directory (in the TensorFlow repo), and Android Studio immediately links in all sorts of external libraries from the SDK to make GUIs (I know because my Bazel build fails when it can't find those resources). The only way I can find to force Bazel to cross-compile a .so file is by making it a dependency of an Android rule, and I'd rather cross-compile a native lib directly than port my Android Studio code into a Bazel project.

How do I square this? Bazel will supposedly compile Android code, but Android Studio generates code that Bazel can't compile. All the examples from Google simply hand you code from a repo without any clue as to how it was generated. As far as I know, the XML that makes up an Android Studio app is supposed to be generated, not written by hand. If it can be written by hand, how do I avoid the need for all those external libraries?

Maybe I'm getting the workflow wrong, or there's some aspect of Bazel/Android Studio that I'm not understanding. Any help is appreciated.

Thanks!

Edit:

There were several things that I ended up doing that might have contributed to the library building successfully:

  1. I upgraded to the latest Bazel.
  2. I rebuilt TensorFlow from source.
  3. I implemented the recommended Bazel BUILD file below, with a few additions (taken from the Android example):

    # JNI shared library wrapping the TensorFlow inference code.
    cc_binary(
        name = "libName.so",
        # javah-generated JNI stub plus the JNI headers copied from the NDK.
        srcs = [
            "org_tensorflowtest_MyActivity.cc",
            "org_tensorflowtest_MyActivity.h",
            "jni.h",
            "jni_md.h",
            ":libpthread.so",
        ],
        copts = [
            "-std=c++11",
            "-mfpu=neon",
            "-O2",
        ],
        linkopts = ["-llog -landroid -lm"],
        linkshared = 1,
        linkstatic = 1,
        deps = [
            "//tensorflow/core:android_tensorflow_lib",
        ],
    )

    # Empty stub so the -lpthread dependency resolves; Android's bionic libc
    # already provides the pthread functions.
    cc_binary(
        name = "libpthread.so",
        srcs = [],
        linkopts = ["-shared"],
        tags = [
            "manual",
            "notap",
        ],
    )
    

I haven't verified that this library can be loaded and used in Android yet; Android Studio 1.5 seems to be very finicky about acknowledging the presence of native libs.
