First, at the Java level, drawing to a SurfaceView is done through two methods of the SurfaceHolder class:
Canvas SurfaceHolder.lockCanvas();
SurfaceHolder.unlockCanvasAndPost(Canvas canvas);
For example:
Canvas canvas = holder.lockCanvas();
canvas.drawRGB(rand.nextInt(255), rand.nextInt(255), rand.nextInt(255));
holder.unlockCanvasAndPost(canvas);
The Surface is first locked, which returns a Canvas; drawing is then done on that Canvas; finally the Surface is unlocked and the result is posted so it becomes visible on screen.
So to render into a SurfaceView from FFmpeg, all that is needed is a native method that receives the Surface, for example:
@Override
public void surfaceCreated(final SurfaceHolder holder) {
    // Rendering must not block the UI thread, so hand the
    // Surface to native code from a worker thread.
    new Thread(new Runnable() {
        @Override
        public void run() {
            play(holder.getSurface());
        }
    }).start();
}
...
public native int play(Object surface);
Then, in the thread that performs the rendering:
// Obtain the native window from the Java Surface
ANativeWindow *nativeWindow = ANativeWindow_fromSurface(env, surface);
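Putting the native side together, a minimal sketch of the play() entry point might look like the following. This only compiles in an Android NDK project linked against libandroid (e.g. `target_link_libraries(... android)` in CMake); the JNI function name (the `com.example.player.MainActivity` class), the 1280x720 size, and the RGBA_8888 format are illustrative assumptions, not part of the original post:

```c
#include <jni.h>
#include <android/native_window.h>
#include <android/native_window_jni.h>

// Matches the Java declaration: public native int play(Object surface);
// The class/package in the symbol name below is hypothetical.
JNIEXPORT jint JNICALL
Java_com_example_player_MainActivity_play(JNIEnv *env, jobject thiz,
                                          jobject surface) {
    // Wrap the Java Surface in an ANativeWindow. This acquires a
    // reference that must be released with ANativeWindow_release().
    ANativeWindow *nativeWindow = ANativeWindow_fromSurface(env, surface);
    if (nativeWindow == NULL) {
        return -1;
    }

    // Declare the size and pixel format of the buffers we will draw.
    // In a real player the width/height come from the decoded stream.
    ANativeWindow_setBuffersGeometry(nativeWindow, 1280, 720,
                                     WINDOW_FORMAT_RGBA_8888);

    // ... per-frame loop: lock the buffer, copy pixels, unlock and post ...

    ANativeWindow_release(nativeWindow);
    return 0;
}
```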
Finally, wherever each frame is rendered:
// Lock the native window's drawing buffer
ANativeWindow_lock(nativeWindow, &windowBuffer, NULL);
... // copy the frame's pixels into windowBuffer here
// Unlock and post the buffer to the screen
ANativeWindow_unlockAndPost(nativeWindow);
For details, see:
http://blog.csdn.net/wangdong20/article/details/8572835
http://blog.csdn.net/King1425/article/details/71514318?locationNum=4&fps=1