The old way of generating JNI files with the java command-line tools is outdated. This article shows how to configure and use OpenCV for Android with the CMake plugin; once you understand the CMake file, porting other native algorithms works on the same principle. The configuration steps come first, followed by the demo's files; if the first part is clear you can skip the demo, otherwise download it and study it.
Configuration
Android Studio 2.3 or later
The CMake plugin
OpenCV for Android, which you can download from the OpenCV website
The extracted SDK directory is shown in the screenshot below:
To use OpenCV, first copy the extracted files into your project. The parts to configure are the native and java directories under sdk/; the Java layer is essentially a wrapper around the native layer:
Part 1: Calling OpenCV only from the JNI layer (native C++)
1. In the app module's build.gradle, add the CMake configuration inside the android {} block:
externalNativeBuild {
    cmake {
        path "CMakeLists.txt"
    }
}
2. Configure the CMake file; the key parts are the OpenCV path and the libraries linked at the end:
cmake_minimum_required(VERSION 3.4.1)

# Set the OpenCV path
set(OpenCV_DIR ${CMAKE_SOURCE_DIR}/../OpenCV-android-sdk/sdk/native/jni)
find_package(OpenCV REQUIRED)
include_directories(${CMAKE_SOURCE_DIR}/../OpenCV-android-sdk/sdk/native/jni/include)

# Creates and names a library, sets it as either STATIC or SHARED,
# and provides the relative paths to its source code. You can define
# multiple libraries, and CMake builds them for you. Gradle
# automatically packages shared libraries with your APK.
add_library( # Sets the name of the library.
             lammy-jni
             # Sets the library as a shared library.
             SHARED
             # Provides a relative path to your source file(s).
             src/main/cpp/lammy-jni.cpp )

# Searches for a specified prebuilt library and stores the path as a
# variable. Because CMake includes system libraries in the search path by
# default, you only need to specify the name of the public NDK library
# you want to add. CMake verifies that the library exists before
# completing its build.
find_library( # Sets the name of the path variable.
              log-lib
              # Specifies the name of the NDK library that
              # you want CMake to locate.
              log )

# Specifies libraries CMake should link to your target library. You
# can link multiple libraries, such as libraries you define in this
# build script, prebuilt third-party libraries, or system libraries.
target_link_libraries( # Specifies the target library.
                       lammy-jni
                       # Links the target library to the log library
                       # included in the NDK.
                       ${log-lib}
                       ${OpenCV_LIBS} )
include_directories(<folder path>) must be set near the top so the compiler can find the headers used by the JNI code and by our own cpp files; without it those headers will not be found.
add_library sets, in order, the name of the library to build, its type (SHARED or STATIC), and the cpp files to compile.
target_link_libraries lists the libraries to link: first the target we just defined, then the NDK log library and ${OpenCV_LIBS}.
Note: we can also reference child CMake files from this one, which lets us build several libraries:
# Add a child CMake file
# add_subdirectory(src/main/cpp/<subdirectory>)
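As a sketch (the directory and target names here are hypothetical, not from the demo), a parent CMakeLists.txt can delegate to a child one like this:

```cmake
# Parent CMakeLists.txt
cmake_minimum_required(VERSION 3.4.1)

# Build the child directory, which must contain its own CMakeLists.txt
add_subdirectory(src/main/cpp/imagelib)
```

and the child file at src/main/cpp/imagelib/CMakeLists.txt defines its own target:

```cmake
# Child CMakeLists.txt: builds a second shared library next to the first
add_library(imagelib SHARED imagelib.cpp)
```

Each child target ends up as its own .so packaged into the APK.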
That completes the OpenCV JNI environment. The native-lib generated when you created the project with C++ support can be deleted; a better approach is to copy its contents into your own JNI file and change the package prefix of the function names to match the Java class that loads the library.
Part 2: Using the pre-wrapped JNI from the Java layer
As mentioned above, the OpenCV for Android SDK already wraps many common image-processing routines in Java, so we only need to load the underlying native library before using them.
1. Add the OpenCV native libraries in the app module's build.gradle:
sourceSets.main {
    jniLibs.srcDir '../OpenCV-android-sdk/sdk/native/libs' // OpenCV native libs; needed to load opencv_java3
    jni.srcDirs = [] // disable automatic ndk-build call
}
2. Load the OpenCV library in any class that calls it:
static {
    System.loadLibrary("opencv_java3");
}
3. Import the Java package that wraps the native library. The simplest way is to import the sdk/java project from the OpenCV-android-sdk directory as a module via File -> Import Module (it can also be packaged as a jar and placed in the project's libs folder).
Note: before importing the OpenCV module, edit its build.gradle so that compileSdkVersion and the related version fields match our project; this must be done before the import.
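Concretely (the module name and version numbers below are illustrative, matching the demo's app module), the fields to align in the imported module's build.gradle are:

```gradle
// openCVLibrary310/build.gradle -- these values must match the app module
android {
    compileSdkVersion 26
    defaultConfig {
        minSdkVersion 15
        targetSdkVersion 26
    }
}
```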
Now we can call the SDK's wrapped methods directly to process images.
Part 3: Using OpenCV in the Java layer and the JNI/C++ layer at the same time
With everything above configured, both layers can use OpenCV. A common pattern is to create a Mat in the Java layer and pass its native address down to the JNI layer as a long; after the native code processes the image, the Mat created in Java already contains the result. This avoids shipping pixel arrays across the JNI boundary and converting data in native code, which improves performance.
Below, a single image is blurred in each of the three ways. If anything is still unclear, the demo download link is at the end of the article.
1. The CMake configuration was given above.
2. The app module's build.gradle:
apply plugin: 'com.android.application'

android {
    compileSdkVersion 26
    defaultConfig {
        applicationId "com.example.zhangpeng30.opencvtest"
        minSdkVersion 15
        targetSdkVersion 26
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
        externalNativeBuild {
            cmake {
                cppFlags ""
            }
        }
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
    sourceSets.main {
        jniLibs.srcDir '../OpenCV-android-sdk/sdk/native/libs' // OpenCV native libs; needed to load opencv_java3
        jni.srcDirs = [] // disable automatic ndk-build call
    }
    externalNativeBuild {
        cmake {
            path "CMakeLists.txt"
        }
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'com.android.support:appcompat-v7:26.1.0'
    implementation 'com.android.support.constraint:constraint-layout:1.0.2'
    implementation 'com.android.support:design:26.1.0'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'com.android.support.test:runner:1.0.1'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'
    implementation project(path: ':openCVLibrary310')
}
3. The JNI file:
//
// Created by lammy on 2017/11/27.
//
#include <jni.h>
#include "opencv2/opencv.hpp"

extern "C" {

JNIEXPORT jintArray JNICALL
Java_com_example_zhangpeng30_opencvtest_ImageUtil_getBlurBitmap(
        JNIEnv *env, jobject /* this */, jint w, jint h, jintArray pix) {
    jint *cpix = env->GetIntArrayElements(pix, NULL);
    cv::Mat mat = cv::Mat(h, w, CV_8UC4, (unsigned char *) cpix);
    cv::Mat outMat;
    // Convert to 3 channels. (Android bitmap pixels are actually RGBA,
    // but for a symmetric filter like blur the channel order is irrelevant.)
    cv::cvtColor(mat, outMat, CV_BGRA2BGR);
    // Process the image
    cv::blur(outMat, outMat, cv::Size(20, 20));
    // Convert back to 4 channels after processing
    cv::cvtColor(outMat, outMat, CV_BGR2BGRA);
    // Return the result as an int array
    uchar *ptr = outMat.data;
    int size = w * h;
    jintArray blurArray = env->NewIntArray(size);
    env->SetIntArrayRegion(blurArray, 0, size, (const jint *) ptr);
    env->ReleaseIntArrayElements(pix, cpix, 0);
    return blurArray;
}

JNIEXPORT void JNICALL
Java_com_example_zhangpeng30_opencvtest_ImageUtil_getBlurBitmap2(
        JNIEnv *env, jobject /* this */, jint w, jint h,
        jlong matAddress, jlong outMatAddress) {
    const cv::Mat &srcMat = *((const cv::Mat *) matAddress);
    cv::Mat &outMat = *((cv::Mat *) outMatAddress);
    cv::blur(srcMat, outMat, cv::Size(20, 20));
}

}
4. The Java class that loads the JNI and OpenCV libraries and implements the three approaches:
package com.example.zhangpeng30.opencvtest;
import android.graphics.Bitmap;
import org.opencv.android.Utils;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
/**
* Created by zhangpeng30 on 2017/11/27.
*/
public class ImageUtil {

    /******** Blur via OpenCV in the JNI layer ********/
    static {
        System.loadLibrary("lammy-jni");
    }

    public Bitmap getBlurBitmapJni(Bitmap bitmap) {
        int w = bitmap.getWidth();
        int h = bitmap.getHeight();
        int[] pix = new int[w * h];
        bitmap.getPixels(pix, 0, w, 0, 0, w, h);
        int[] blurArray = getBlurBitmap(w, h, pix);
        return Bitmap.createBitmap(blurArray, w, h, Bitmap.Config.ARGB_8888);
    }

    private native int[] getBlurBitmap(int w, int h, int[] pix);

    /******** OpenCV in the Java layer ********/
    static {
        System.loadLibrary("opencv_java3");
    }

    public Bitmap getBlurBitmapJava(Bitmap bitmap) {
        Mat mat = new Mat();
        Mat matOut = new Mat();
        Utils.bitmapToMat(bitmap, mat);
        Imgproc.blur(mat, matOut, new Size(20, 20));
        Bitmap blurBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(matOut, blurBitmap);
        return blurBitmap;
    }

    /******** OpenCV in both the Java and JNI layers ********/
    public Bitmap getBlurBitmapJavaAndJni(Bitmap bitmap) {
        int w = bitmap.getWidth();
        int h = bitmap.getHeight();
        Mat mat = new Mat();
        Mat outMat = new Mat();
        Utils.bitmapToMat(bitmap, mat);
        getBlurBitmap2(w, h, mat.getNativeObjAddr(), outMat.getNativeObjAddr());
        Bitmap blurBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(outMat, blurBitmap);
        return blurBitmap;
    }

    private native void getBlurBitmap2(int w, int h, long matAddress, long outMatAddress);
}
Finally, here is the demo for download and discussion: OpenCV Android Studio environment configuration demo