Following on from the previous post, this one looks at how to load a deep-learning model into an Android app.
Preface
Loading an image-classification model into a phone app puts the trained model to practical use. This post takes SqueezeNet (a typical lightweight model) as the example and records how to deploy a lightweight model in an Android app.
I. Tools
PyCharm and Android Studio are used as the IDEs for Python and Android development, respectively.
II. Steps
1. Converting the model format
Keras is a good library for newcomers to deep learning: its API is simple and easy to understand. Keras normally saves a trained model and its weights in HDF5 (.h5) format, which needs to be converted to the .tflite format so that Android can use it.
import tensorflow as tf
from keras_squeezenet import SqueezeNet

model = SqueezeNet(weights=None)
# Load the SqueezeNet weights downloaded from the web
model.load_weights("./squeezenet_weights_tf_dim_ordering_tf_kernels.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("./squeezenet_model.tflite", "wb") as f:
    f.write(tflite_model)
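Before wiring the file into the app, it can be worth a quick sanity check that conversion really produced a TFLite flatbuffer. TFLite models carry the file identifier "TFL3" at byte offset 4; the helper below (a minimal sketch, not part of the original project) checks for it:

```python
def looks_like_tflite(data):
    # TFLite flatbuffers carry the file identifier "TFL3" at bytes 4..8
    return len(data) >= 8 and data[4:8] == b"TFL3"

# Usage: looks_like_tflite(open("./squeezenet_model.tflite", "rb").read())
```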
The generated .tflite file produces class-probability predictions for an input image.
Loading the trained model on Android requires not only the .tflite model but also a label file.
The label file that matches the model can be downloaded from GitHub. If it is hard to download from there, it can also be obtained here:
Classification label file
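The label file is expected to be plain text with one class name per line (for SqueezeNet, the 1000 ImageNet classes), in the same order as the model's output vector. As a minimal sketch of how it is consumed (the three sample class names below are hypothetical stand-ins for the real file):

```python
import io

def load_label_list(fp):
    # One class name per line; the order must match the model's output vector.
    return [line.rstrip("\n") for line in fp if line.strip()]

# Hypothetical three-line sample standing in for the 1000-line Class_Label.txt:
sample = io.StringIO("tench\ngoldfish\ngreat white shark\n")
labels = load_label_list(sample)
```

The Java `ImageClassifier` shown later reads the same file with `loadLabelList()`.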
Next, create a project in Android Studio; that process is not described in detail here.
2. Modifying the configuration files
In the new project, add the following to build.gradle:
android {
    ......
    aaptOptions {
        noCompress "tflite"
        noCompress "lite"
    }
}
dependencies {
    .......
    implementation 'org.tensorflow:tensorflow-lite:2.2.0'
}
You can write the UI files to suit your own needs; here is the manifest file (AndroidManifest.xml) used in this project:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="gdut.bsx.tensorflowtraining">
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.CAMERA"/>
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity" android:screenOrientation="portrait"
android:launchMode="singleTask">
<intent-filter>
<action android:name="android.intent.action.MAIN"/>
<category android:name="android.intent.category.LAUNCHER"/>
</intent-filter>
</activity>
<provider
android:name="android.support.v4.content.FileProvider"
android:authorities="gdut.bsx.tensorflowtraining.fileprovider"
android:exported="false"
android:grantUriPermissions="true">
<meta-data
android:name="android.support.FILE_PROVIDER_PATHS"
android:resource="@xml/file_paths" />
</provider>
</application>
</manifest>
Place the generated model file squeezenet.tflite and Class_Label.txt into the assets folder.
Note: be sure to use the label file that matches the model.
3. Application code
The main features implemented here are taking photos with the camera, picking images, and image classification.
The code consists of:
ImageClassifier.java
package gdut.bsx.tensorflowtraining;
import android.content.Context;
import android.content.res.AssetFileDescriptor;
import android.graphics.Bitmap;
import android.os.SystemClock;
import android.util.Log;
import org.tensorflow.lite.Interpreter;
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;
public class ImageClassifier {
private static volatile ImageClassifier INSTANCE;
/**
* Tag for the {@link Log}.
*/
private static final String TAG = "TfLiteCameraDemo";
/**
* Name of the model file stored in Assets.
*/
private static final String MODEL_PATH = "squeezenet.tflite";
//private static final String MODEL_PATH = "mobilenet_v1_0.25_128_quant.tflite";
/**
* Name of the label file stored in Assets.
*/
private static final String LABEL_PATH = "Class_Label.txt";
/** Number of results to show in the UI. */
private static final int RESULTS_TO_SHOW = 3;
/** Dimensions of inputs. */
private static final int DIM_BATCH_SIZE = 1;
private static final int DIM_PIXEL_SIZE = 3;
static final int DIM_IMG_SIZE_X = 224;
static final int DIM_IMG_SIZE_Y = 224;
private static final int IMAGE_MEAN = 128;
private static final float IMAGE_STD = 128.0f;
/* Preallocated buffers for storing image data in. */
private int[] intValues = new int[DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y];
/**
* An instance of the driver class to run model inference with Tensorflow Lite.
*/
private Interpreter tflite;
/**
* Labels corresponding to the output of the vision model.
*/
private List<String> labelList;
/**
* A ByteBuffer to hold image data, to be fed into Tensorflow Lite as inputs.
*/
private ByteBuffer imgData ;
/**
* An array to hold inference results, to be filled by Tensorflow Lite as outputs.
*/
private float[][] labelProbArray ;
/**
* multi-stage low pass filter
**/
private float[][] filterLabelProbArray = null;
private static final int FILTER_STAGES = 3;
private static final float FILTER_FACTOR = 0.4f;
/** Priority queue used to keep the top-{@code RESULTS_TO_SHOW} results. */
private PriorityQueue<Map.Entry<String, Float>> sortedLabels =
new PriorityQueue<>(
RESULTS_TO_SHOW,
new Comparator<Map.Entry<String, Float>>() {
@Override
public int compare(Map.Entry<String, Float> o1, Map.Entry<String, Float> o2) {
return (o1.getValue()).compareTo(o2.getValue());
}
});
/**
* Initializes an {@code ImageClassifier}.
*/
private ImageClassifier(Context activity) throws IOException {
tflite = new Interpreter(loadModelFile(activity));
labelList = loadLabelList(activity);
imgData = ByteBuffer.allocateDirect(
4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
imgData.order(ByteOrder.nativeOrder());
labelProbArray = new float[1][labelList.size()];
filterLabelProbArray = new float[FILTER_STAGES][labelList.size()];
Log.d(TAG, "Created a Tensorflow Lite Image Classifier.");
}
public static ImageClassifier getInstance(Context context) {
if (INSTANCE == null) {
try {
INSTANCE = new ImageClassifier(context);
} catch (IOException e) {
e.printStackTrace();
}
}
return INSTANCE;
}
/**
* Classifies a frame from the preview stream.
*/
public String classifyFrame(Bitmap bitmap) {
if (tflite == null) {
Log.e(TAG, "Image classifier has not been initialized; Skipped.");
return "Uninitialized Classifier.";
}
convertBitmapToByteBuffer(bitmap);
long startTime = SystemClock.uptimeMillis();
tflite.run(imgData, labelProbArray);
long endTime = SystemClock.uptimeMillis();
Log.d(TAG, "Timecost to run model inference: " + Long.toString(endTime - startTime));
// Smooth the results across frames.
//applyFilter();
return printTopKLabels();
}
/**
* Smooth the results across frames.
*/
/*
void applyFilter() {
int numLabels = labelList.size();
// Low pass filter `labelProbArray` into the first stage of the filter.
for (int j = 0; j < numLabels; ++j) {
filterLabelProbArray[0][j] +=
FILTER_FACTOR * (labelProbArray[0][j] - filterLabelProbArray[0][j]);
}
// Low pass filter each stage into the next.
for (int i = 1; i < FILTER_STAGES; ++i) {
for (int j = 0; j < numLabels; ++j) {
filterLabelProbArray[i][j] +=
FILTER_FACTOR * (filterLabelProbArray[i - 1][j] - filterLabelProbArray[i][j]);
}
}
// Copy the last stage filter output back to `labelProbArray`.
for (int j = 0; j < numLabels; ++j) {
labelProbArray[0][j] = filterLabelProbArray[FILTER_STAGES - 1][j];
}
}
*/
/** Closes tflite to release resources. */
public void close() {
tflite.close();
tflite = null;
}
/**
* Memory-map the model file in Assets.
*/
private ByteBuffer loadModelFile(Context activity) throws IOException {
AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(MODEL_PATH);
FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
FileChannel fileChannel = inputStream.getChannel();
long startOffset = fileDescriptor.getStartOffset();
long declaredLength = fileDescriptor.getDeclaredLength();
return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength).asReadOnlyBuffer();
}
/**
* Reads label list from Assets.
*/
private List<String> loadLabelList(Context activity) throws IOException {
List<String> labelList = new ArrayList<String>();
BufferedReader reader =
new BufferedReader(new InputStreamReader(activity.getAssets().open(LABEL_PATH)));
String line;
while ((line = reader.readLine()) != null) {
labelList.add(line);
}
reader.close();
return labelList;
}
/**
* Returns the loaded label list.
*/
public List<String> getLabelList() throws IOException {
return labelList;
}
/**
* Writes Image data into a {@code ByteBuffer}.
*/
private void convertBitmapToByteBuffer(Bitmap bitmap) {
if (imgData == null) {
return;
}
imgData.rewind();
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
// Convert the image to floating point.
int pixel = 0;
long startTime = SystemClock.uptimeMillis();
for (int i = 0; i < DIM_IMG_SIZE_X; ++i) {
for (int j = 0; j < DIM_IMG_SIZE_Y; ++j) {
final int val = intValues[pixel++];
imgData.putFloat((((val >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
imgData.putFloat((((val >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
imgData.putFloat((((val) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
// imgData.putFloat(((val >> 16) & 0xFF) / IMAGE_STD);
// imgData.putFloat(((val >> 8) & 0xFF)/ IMAGE_STD );
// imgData.putFloat(((val) & 0xFF)/ IMAGE_STD );
}
}
long endTime = SystemClock.uptimeMillis();
Log.d(TAG, "Timecost to put values into ByteBuffer: " + Long.toString(endTime - startTime));
}
/** Prints top-K labels, to be shown in UI as the results. */
private String printTopKLabels() {
for (int i = 0; i < labelList.size(); ++i) {
sortedLabels.add(
new AbstractMap.SimpleEntry<>(labelList.get(i), labelProbArray[0][i]));
if (sortedLabels.size() > RESULTS_TO_SHOW) {
sortedLabels.poll();
}
}
String textToShow = "";
final int size = sortedLabels.size();
for (int i = 0; i < size; ++i) {
Map.Entry<String, Float> label = sortedLabels.poll();
textToShow = String.format("\n%s: %4.2f", label.getKey(), label.getValue()) + textToShow;
}
return textToShow;
}
}
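The numeric steps inside ImageClassifier are easy to check outside Android. The Python sketch below mirrors the ARGB channel unpacking and the (value - IMAGE_MEAN) / IMAGE_STD normalization from convertBitmapToByteBuffer (mapping 8-bit channels to roughly [-1, 1)), the commented-out multi-stage low-pass filter, and the min-heap top-K selection from printTopKLabels. It is an illustration of the same logic, not code from the project:

```python
import heapq

IMAGE_MEAN = 128
IMAGE_STD = 128.0
FILTER_STAGES = 3
FILTER_FACTOR = 0.4
RESULTS_TO_SHOW = 3

def argb_channels(pixel):
    # Unpack a packed ARGB int the way convertBitmapToByteBuffer does (R, G, B order)
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

def normalize_channel(value):
    # Same scaling as convertBitmapToByteBuffer: 0..255 -> about [-1, 1)
    return (value - IMAGE_MEAN) / IMAGE_STD

def apply_filter(probs, stages):
    # Multi-stage low-pass filter across frames, as in the commented-out applyFilter()
    num_labels = len(probs)
    for j in range(num_labels):
        stages[0][j] += FILTER_FACTOR * (probs[j] - stages[0][j])
    for i in range(1, FILTER_STAGES):
        for j in range(num_labels):
            stages[i][j] += FILTER_FACTOR * (stages[i - 1][j] - stages[i][j])
    return list(stages[FILTER_STAGES - 1])

def top_k(labels, probs, k=RESULTS_TO_SHOW):
    # Keep only the k most probable labels with a size-bounded min-heap,
    # like the PriorityQueue in printTopKLabels
    heap = []
    for label, p in zip(labels, probs):
        heapq.heappush(heap, (p, label))
        if len(heap) > k:
            heapq.heappop(heap)
    return sorted(heap, reverse=True)
```

For example, normalize_channel(255) gives 0.9921875, so the float input buffer never quite reaches 1.0, and top_k returns (probability, label) pairs in descending order.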
MainActivity.java
package gdut.bsx.tensorflowtraining;
import android.Manifest;
import android.content.Context;
import android.content.DialogInterface;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.net.Uri;
import android.os.Build;
import android.os.Bundle;
import android.os.Looper;
import android.os.MessageQueue;
import android.provider.MediaStore;
import android.provider.Settings;
import android.support.annotation.NonNull;
import android.support.annotation.Nullable;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;
import android.support.v4.content.FileProvider;
import android.support.v7.app.AlertDialog;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.app.AppCompatDelegate;
import android.util.Log;
import android.view.View;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import com.bumptech.glide.load.DataSource;
import com.bumptech.glide.load.engine.GlideException;
import com.bumptech.glide.request.RequestListener;
import com.bumptech.glide.request.target.Target;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.ThreadFactory;
public class MainActivity extends AppCompatActivity implements View.OnClickListener {
public static final String TAG = MainActivity.class.getSimpleName();
private static final int OPEN_SETTING_REQUEST_COED = 110;
private static final int TAKE_PHOTO_REQUEST_CODE = 120;
private static final int PICTURE_REQUEST_CODE = 911;
private static final int PERMISSIONS_REQUEST = 108;
private static final int CAMERA_PERMISSIONS_REQUEST_CODE = 119;
private static final String CURRENT_TAKE_PHOTO_URI = "currentTakePhotoUri";
private gdut.bsx.tensorflowtraining.ImageClassifier imageClassifier;
static final int dstWidth = 224;
static final int dstHeight = 224;
private Executor executor;
private Uri currentTakePhotoUri;
private TextView result;
private ImageView ivPicture;
static {
AppCompatDelegate.setCompatVectorFromResourcesEnabled(true);
}
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
if (!isTaskRoot()) {
finish();
}
setContentView(R.layout.activity_main);
findViewById(R.id.iv_choose_picture).setOnClickListener(this);
findViewById(R.id.iv_take_photo).setOnClickListener(this);
ivPicture = findViewById(R.id.iv_picture);
result = findViewById(R.id.tv_classifier_info);
initialiseImageClassifier(this);
// Defer time-consuming work so it does not compete with UI drawing for CPU time, speeding up launch-screen loading
Looper.myQueue().addIdleHandler(idleHandler);
}
@Override
public void onSaveInstanceState(Bundle savedInstanceState){
// Prevent data loss when the activity cannot be returned to after taking a photo
savedInstanceState.putParcelable(CURRENT_TAKE_PHOTO_URI, currentTakePhotoUri);
super.onSaveInstanceState(savedInstanceState);
}
@Override
protected void onRestoreInstanceState(Bundle savedInstanceState) {
super.onRestoreInstanceState(savedInstanceState);
if (savedInstanceState != null) {
currentTakePhotoUri = savedInstanceState.getParcelable(CURRENT_TAKE_PHOTO_URI);
}
}
/**
* Handle time-consuming work when the main-thread message queue is idle (i.e. after the first frame has been drawn).
*/
MessageQueue.IdleHandler idleHandler = new MessageQueue.IdleHandler() {
@Override
public boolean queueIdle() {
// Initialize the thread pool
executor = new ScheduledThreadPoolExecutor(1, new ThreadFactory() {
@Override
public Thread newThread(@NonNull Runnable r) {
Thread thread = new Thread(r);
thread.setDaemon(true);
thread.setName("ThreadPool-ImageClassifier");
return thread;
}
});
// Request permissions
requestMultiplePermissions();
return false;
}
};
/**
* Request storage and camera permissions.
*/
private void requestMultiplePermissions() {
String storagePermission = Manifest.permission.WRITE_EXTERNAL_STORAGE;
String cameraPermission = Manifest.permission.CAMERA;
int hasStoragePermission = ActivityCompat.checkSelfPermission(this, storagePermission);
int hasCameraPermission = ActivityCompat.checkSelfPermission(this, cameraPermission);
List<String> permissions = new ArrayList<>();
if (hasStoragePermission != PackageManager.PERMISSION_GRANTED) {
permissions.add(storagePermission);
}
if (hasCameraPermission != PackageManager.PERMISSION_GRANTED) {
permissions.add(cameraPermission);
}
if (!permissions.isEmpty()) {
String[] params = permissions.toArray(new String[permissions.size()]);
ActivityCompat.requestPermissions(this, params, PERMISSIONS_REQUEST);
}
}
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults)
{
if (requestCode == PERMISSIONS_REQUEST) {
if (Manifest.permission.WRITE_EXTERNAL_STORAGE.equals(permissions[0]) && grantResults[0] != PackageManager.PERMISSION_GRANTED) {
// Permission denied: show a dialog telling the user storage permission is required
// Should we show an explanation?
// shouldShowRequestPermissionRationale() returns false when the app has no further chance of being granted the permission
if (ActivityCompat.shouldShowRequestPermissionRationale(this,
Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
// Ask the system to show the permission prompt again
showNeedStoragePermissionDialog();
} else {
// Already denied permanently, e.g. the user chose "Don't ask again"; show our own explanation dialog
showMissingStoragePermissionDialog();
}
}
} else if (requestCode == CAMERA_PERMISSIONS_REQUEST_CODE) {
if (grantResults[0] != PackageManager.PERMISSION_GRANTED) {
showNeedCameraPermissionDialog();
} else {
openSystemCamera();
}
}
}
/**
* Show a dialog for the missing permission; the dynamic permission can be requested again.
*/
private void showNeedStoragePermissionDialog() {
new AlertDialog.Builder(this)
.setTitle("Permission required")
.setMessage("Storage permission is required to access pictures")
.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
})
.setPositiveButton("OK", new DialogInterface.OnClickListener() {
@Override public void onClick(DialogInterface dialog, int which) {
ActivityCompat.requestPermissions(MainActivity.this,
new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, PERMISSIONS_REQUEST);
}
}).setCancelable(false)
.show();
}
/**
* Show a dialog when the permission has been denied for good; it can only be changed manually in Settings.
*/
private void showMissingStoragePermissionDialog() {
new AlertDialog.Builder(this)
.setTitle("Permission request failed")
.setMessage("Storage permission is required for the app to run")
.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override public void onClick(DialogInterface dialog, int which) {
MainActivity.this.finish();
}
})
.setPositiveButton("Go to Settings", new DialogInterface.OnClickListener() {
@Override public void onClick(DialogInterface dialog, int which) {
startAppSettings();
}
})
.setCancelable(false)
.show();
}
private void showNeedCameraPermissionDialog() {
new AlertDialog.Builder(this)
.setMessage("Camera permission is disabled; please enable it and try again")
.setPositiveButton("OK", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.dismiss();
}
})
.create().show();
}
private static final String PACKAGE_URL_SCHEME = "package:";
/**
* Open the app's settings page so permissions can be granted there.
*/
private void startAppSettings() {
Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
intent.setData(Uri.parse(PACKAGE_URL_SCHEME + getPackageName()));
startActivityForResult(intent, OPEN_SETTING_REQUEST_COED);
}
@Override
public void onClick(View view) {
switch (view.getId()) {
case R.id.iv_choose_picture :
choosePicture();
break;
case R.id.iv_take_photo :
takePhoto();
break;
default:break;
}
}
/**
* Choose a picture and crop it to obtain a small image.
*/
private void choosePicture() {
Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
intent.setType("image/*");
startActivityForResult(intent, PICTURE_REQUEST_CODE);
}
/**
* Take a photo with the system camera.
*/
private void takePhoto() {
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, CAMERA_PERMISSIONS_REQUEST_CODE);
} else {
openSystemCamera();
}
}
/**
* Open the system camera.
*/
private void openSystemCamera() {
// Launch the system camera
Intent takePhotoIntent = new Intent();
takePhotoIntent.setAction(MediaStore.ACTION_IMAGE_CAPTURE);
// Without this check the app would crash when the system has no camera app installed
if (takePhotoIntent.resolveActivity(getPackageManager()) == null) {
Toast.makeText(this, "No camera app is available on this device", Toast.LENGTH_SHORT).show();
return;
}
String fileName = "TF_" + System.currentTimeMillis() + ".jpg";
File photoFile = new File(FileUtil.getPhotoCacheFolder(), fileName);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
// Create a content:// Uri through FileProvider
currentTakePhotoUri = FileProvider.getUriForFile(this, "gdut.bsx.tensorflowtraining.fileprovider", photoFile);
// Temporarily grant the target app read access to the file behind this Uri
takePhotoIntent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
} else {
currentTakePhotoUri = Uri.fromFile(photoFile);
}
// Save the capture to the output Uri instead of keeping it in the gallery
takePhotoIntent.putExtra(MediaStore.EXTRA_OUTPUT, currentTakePhotoUri);
startActivityForResult(takePhotoIntent, TAKE_PHOTO_REQUEST_CODE);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK) {
if (requestCode == PICTURE_REQUEST_CODE) {
// Handle the selected picture
handleInputPhoto(data.getData());
} else if (requestCode == OPEN_SETTING_REQUEST_COED){
requestMultiplePermissions();
} else if (requestCode == TAKE_PHOTO_REQUEST_CODE) {
// If the photo was taken successfully, load the image and classify it
handleInputPhoto(currentTakePhotoUri);
}
}
}
/**
* Handle the input image.
* @param imageUri
*/
private void handleInputPhoto(Uri imageUri) {
// Load the image
GlideApp.with(MainActivity.this).asBitmap().listener(new RequestListener<Bitmap>() {
@Override
public boolean onLoadFailed(@Nullable GlideException e, Object model, Target<Bitmap> target, boolean isFirstResource) {
Log.d(TAG,"handleInputPhoto onLoadFailed");
Toast.makeText(MainActivity.this, "Failed to load image", Toast.LENGTH_SHORT).show();
return false;
}
@Override
public boolean onResourceReady(Bitmap resource, Object model, Target<Bitmap> target, DataSource dataSource, boolean isFirstResource) {
Log.d(TAG,"handleInputPhoto onResourceReady");
startImageClassifier(resource);
return false;
}
}).load(imageUri).into(ivPicture);
//result.setText("Processing...");
}
/**
* Start image classification.
* @param bitmap
*/
private void startImageClassifier(final Bitmap bitmap) {
executor.execute(new Runnable() {
@Override
public void run() {
Log.i(TAG, Thread.currentThread().getName() + " startImageClassifier");
Bitmap reshapeBitmap = Bitmap.createScaledBitmap(bitmap, dstWidth, dstHeight, false);
final String results = imageClassifier.classifyFrame(reshapeBitmap);
Log.i(TAG, "startImageClassifier results: " + results);
runOnUiThread(new Runnable() {
@Override
public void run() {
result.setText(results);
}
});
}
});
}
private void initialiseImageClassifier(Context app) {
try {
imageClassifier = gdut.bsx.tensorflowtraining.ImageClassifier.getInstance(app);
} catch (Exception e) {
Log.e(TAG, "Failed to initialize an image classifier.");
}
}
}
FileUtil.java
package gdut.bsx.tensorflowtraining;
import android.content.Context;
import android.media.MediaScannerConnection;
import android.net.Uri;
import android.os.Environment;
import android.text.TextUtils;
import android.util.Log;
import java.io.File;
public class FileUtil {
/**
* Notify the system to rescan the media library when media files (images, videos, etc.) are added or removed.
* @param filePath the file path, including the extension
*/
public static void notifyScanMediaFile(Context context, String filePath) {
if (context == null || TextUtils.isEmpty(filePath)){
Log.e("FileUtil", "notifyScanMediaFile context is null or filePath is empty.");
return;
}
MediaScannerConnection.scanFile(context, new String[] {filePath}, null, new MediaScannerConnection.OnScanCompletedListener() {
@Override
public void onScanCompleted(String path, Uri uri) {
Log.i("FileUtil", "onScanCompleted");
}
});
}
public static File getPhotoCacheFolder() {
File cacheFolder = new File(Environment.getExternalStorageDirectory(), "TensorFlowPhotos");
if (!cacheFolder.exists()) {
cacheFolder.mkdirs();
}
return cacheFolder;
}
}
The full source code is available via the link below.
Download the complete source code