Why did this problem get solved so smoothly? For that we have our dear OpenCV to thank! OpenCV provides a very handy class called org.opencv.android.JavaCameraView. With JavaCameraView, we can grab each image frame in the onCameraFrame() callback and then process it however we like!
So how is this actually implemented? Knowing the "what" without the "why" is an attitude no programmer should settle for.
In fact, JavaCameraView extends CameraBridgeViewBase and implements the PreviewCallback interface, while CameraBridgeViewBase in turn extends SurfaceView. As its name suggests, CameraBridgeViewBase is the bridge between the Camera and the View. CameraBridgeViewBase defines the following interface:
public interface CvCameraViewListener2 {
/**
* This method is invoked when camera preview has started. After this method is invoked
* the frames will start to be delivered to client via the onCameraFrame() callback.
* @param width - the width of the frames that will be delivered
* @param height - the height of the frames that will be delivered
*/
public void onCameraViewStarted(int width, int height);
/**
* This method is invoked when camera preview has been stopped for some reason.
* No frames will be delivered via onCameraFrame() callback after this method is called.
*/
public void onCameraViewStopped();
/**
* This method is invoked when delivery of the frame needs to be done.
* The returned values - is a modified frame which needs to be displayed on the screen.
* TODO: pass the parameters specifying the format of the frame (BPP, YUV or RGB and etc)
*/
public Mat onCameraFrame(CvCameraViewFrame inputFrame);
};
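To make the contract concrete, here is a simplified, OpenCV-free stand-in that mirrors the same lifecycle: onCameraViewStarted() once, then one onCameraFrame() per delivered frame (whose return value is what gets displayed), then onCameraViewStopped(). The interface and dispatcher names below are hypothetical, invented purely for illustration, with String standing in for Mat:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for CvCameraViewListener2, with String in place of Mat.
interface FrameListener {
    void onCameraViewStarted(int width, int height);
    void onCameraViewStopped();
    String onCameraFrame(String inputFrame); // return value is what gets drawn
}

public class ListenerDemo {
    // A tiny dispatcher that drives the listener through the lifecycle
    // CameraBridgeViewBase guarantees: started -> frames -> stopped.
    static List<String> runPreview(FrameListener l, String... frames) {
        List<String> displayed = new ArrayList<>();
        l.onCameraViewStarted(640, 480);
        for (String f : frames) {
            displayed.add(l.onCameraFrame(f)); // the listener may modify the frame
        }
        l.onCameraViewStopped();
        return displayed;
    }

    public static void main(String[] args) {
        FrameListener upper = new FrameListener() {
            public void onCameraViewStarted(int w, int h) { }
            public void onCameraViewStopped() { }
            // "Processing" here is just upper-casing; in OpenCV it would be
            // any Mat operation on inputFrame.rgba() / inputFrame.gray().
            public String onCameraFrame(String in) { return in.toUpperCase(); }
        };
        System.out.println(runPreview(upper, "frame1", "frame2"));
    }
}
```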
This interface defines the three states of the CameraView: started, stopped, and frame received. The class's most important drawing method then invokes onCameraFrame():
protected void deliverAndDrawFrame(CvCameraViewFrame frame) {
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch(Exception e) {
Log.e(TAG, "Mat type: " + modified);
Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
bmpValid = false;
}
}
if (bmpValid && mCacheBitmap != null) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
Log.d(TAG, "mStretch value: " + mScale);
if (mScale != 0) {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2),
(int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2 + mScale*mCacheBitmap.getWidth()),
(int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2 + mScale*mCacheBitmap.getHeight())), null);
} else {
canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
new Rect((canvas.getWidth() - mCacheBitmap.getWidth()) / 2,
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2,
(canvas.getWidth() - mCacheBitmap.getWidth()) / 2 + mCacheBitmap.getWidth(),
(canvas.getHeight() - mCacheBitmap.getHeight()) / 2 + mCacheBitmap.getHeight()), null);
}
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
}
}
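The Rect arithmetic in deliverAndDrawFrame() simply centers the (optionally scaled) bitmap on the canvas. A minimal sketch of that computation, pulled out into a plain helper (the method name is mine, not OpenCV's):

```java
public class CenterRect {
    // Mirrors the destination-rectangle math in deliverAndDrawFrame():
    // the bitmap, scaled by 'scale' (or unscaled when scale == 0), is
    // centered on a canvas of size canvasW x canvasH.
    // Returns {left, top, right, bottom}.
    static int[] destRect(int canvasW, int canvasH, int bmpW, int bmpH, float scale) {
        float s = (scale != 0) ? scale : 1f;  // the else-branch is the scale == 1 case
        int left   = (int) ((canvasW - s * bmpW) / 2);
        int top    = (int) ((canvasH - s * bmpH) / 2);
        int right  = (int) ((canvasW - s * bmpW) / 2 + s * bmpW);
        int bottom = (int) ((canvasH - s * bmpH) / 2 + s * bmpH);
        return new int[] { left, top, right, bottom };
    }

    public static void main(String[] args) {
        // A 640x480 bitmap on a 1280x960 canvas at scale 2 fills it exactly.
        int[] r = destRect(1280, 960, 640, 480, 2f);
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]); // 0,0,1280,960
    }
}
```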
The onCameraFrame() method lets you modify the original frame data, produce the frame image you need, and have it drawn in the View.
Note that deliverAndDrawFrame() is never called inside CameraBridgeViewBase itself; it is called from JavaCameraView. So how does that happen?
JavaCameraView first creates a worker thread that loops continuously: if there is a frame in the to-be-drawn buffer, it draws it; otherwise the thread waits. The core code is as follows:
private class CameraWorker implements Runnable {
public void run() {
do {
synchronized (JavaCameraView.this) {
try {
JavaCameraView.this.wait();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
if (!mStopThread) {
if (!mFrameChain[mChainIdx].empty())
deliverAndDrawFrame(mCameraFrame[mChainIdx]);
mChainIdx = 1 - mChainIdx;
}
} while (!mStopThread);
Log.d(TAG, "Finish processing thread");
}
}
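The pattern here is a classic double-buffered producer/consumer handshake: the camera thread fills mFrameChain[1 - mChainIdx] and calls notify(), while CameraWorker wait()s, draws one buffer, and flips mChainIdx. The following self-contained sketch (Strings standing in for Mats, names my own) shows the same handshake, slightly simplified: the index flips before drawing, so each notify draws the just-delivered frame, whereas the snippet above draws first and flips afterward:

```java
public class DoubleBuffer {
    private final String[] chain = new String[2]; // stand-in for mFrameChain
    private int chainIdx = 0;
    private volatile boolean stop = false;        // stand-in for mStopThread
    private final StringBuilder drawn = new StringBuilder();

    // Producer side, like onPreviewFrame(): fill the back buffer, wake the worker.
    synchronized void deliver(String frame) {
        chain[1 - chainIdx] = frame;
        notify();
    }

    synchronized void stopWorker() { stop = true; notify(); }

    // Consumer side, like CameraWorker.run(): wait, then draw the fresh buffer.
    void workerLoop() {
        do {
            synchronized (this) {
                try { wait(); } catch (InterruptedException e) { return; }
                if (!stop) {
                    chainIdx = 1 - chainIdx;                   // flip to the freshly written buffer
                    if (chain[chainIdx] != null)
                        drawn.append(chain[chainIdx]).append(';'); // "deliverAndDrawFrame"
                }
            }
        } while (!stop);
    }

    String drawnFrames() { return drawn.toString(); }

    public static void main(String[] args) throws Exception {
        DoubleBuffer db = new DoubleBuffer();
        Thread worker = new Thread(db::workerLoop);
        worker.start();
        Thread.sleep(50); db.deliver("f1"); // camera delivers a frame
        Thread.sleep(50); db.deliver("f2");
        Thread.sleep(50); db.stopWorker();
        worker.join();
        System.out.println(db.drawnFrames());
    }
}
```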
From here everything is straightforward. When the program starts, it enters initializeCamera(), which creates a Camera and configures some parameters; the key line is:
mCamera.setPreviewCallbackWithBuffer(this);
Thanks to this call, whenever the Camera delivers a preview frame, the framework invokes the onPreviewFrame() method of the PreviewCallback interface:
public void onPreviewFrame(byte[] frame, Camera arg1) {
Log.d(TAG, "Preview Frame received. Frame size: " + frame.length);
synchronized (this) {
mFrameChain[1 - mChainIdx].put(0, 0, frame);
this.notify();
}
if (mCamera != null)
mCamera.addCallbackBuffer(mBuffer);
}
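For setPreviewCallbackWithBuffer() to work, initializeCamera() must first allocate mBuffer large enough for one full preview frame and hand it to the camera via addCallbackBuffer(). One frame takes width * height * bitsPerPixel / 8 bytes; for NV21, Android's default preview format at 12 bits per pixel, that works out to width * height * 3/2. A quick sketch of the calculation (helper name is mine; the commented calls mirror what JavaCameraView does):

```java
public class PreviewBufferSize {
    // Size in bytes of one preview frame: width * height * bitsPerPixel / 8.
    // NV21 uses 12 bits per pixel: a full-resolution Y plane plus a
    // half-resolution interleaved VU plane.
    static int bufferSize(int width, int height, int bitsPerPixel) {
        return width * height * bitsPerPixel / 8;
    }

    public static void main(String[] args) {
        int size = bufferSize(1280, 720, 12); // NV21 at 720p
        System.out.println(size); // 1382400
        // JavaCameraView then does roughly:
        //   mBuffer = new byte[size];
        //   mCamera.addCallbackBuffer(mBuffer);
        //   mCamera.setPreviewCallbackWithBuffer(this);
    }
}
```

Note the last line of onPreviewFrame() above: the same buffer must be re-queued with addCallbackBuffer() after each frame, or the camera stops delivering callbacks.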
Here, every time a preview frame arrives, it is placed into the to-be-drawn buffer and the worker thread is woken up to draw the View.
OK, at this point the whole pipeline is complete. A question naturally arises: why go to all this trouble? Wouldn't grabbing frames directly in onPreviewFrame() be enough? The difference is that this framework gives us a customizable View: you can present different Views to the user according to your needs. In the onCameraFrame() method we can perform arbitrary operations on the frame, and the View will then show exactly the effect we want, for example: