First, the basic pipeline:
1. The camera calls back NV21 YUV frames;
2. Convert NV21 to YUV420 (I420);
3. Encode to H.264 with x264 and call the result back into the Java layer;
4. Write the data out, producing a .h264 file;
5. Play the file with VLC or another player.
The Android Java-layer code is fairly simple. When the demo starts, a SurfaceView displays the camera preview; the Activity needs to implement the
SurfaceHolder.Callback and Camera.PreviewCallback interfaces.
SurfaceHolder.Callback declares three methods:

void surfaceCreated(SurfaceHolder holder);
void surfaceChanged(SurfaceHolder holder, int format, int width, int height);
void surfaceDestroyed(SurfaceHolder holder);
surfaceCreated is a good place to initialize state; in this example it initializes the x264 encoder and opens the camera:

@Override
public void surfaceCreated(SurfaceHolder holder) {
    x264.initX264Encode(width, height, fps, bitrate);
    camera = getBackCamera();
    startcamera(camera);
}

Since this is just a demo, width, height, fps and bitrate are all hardcoded. A more robust implementation would first query the resolutions the camera supports and choose width and height from that list, then read the camera's frame rate, and only then initialize the x264 encoder.
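The size-selection step mentioned above can be reduced to a few lines of plain logic. A minimal sketch, with sizes represented as width/height int pairs (on a real device the candidate list would come from `parameters.getSupportedPreviewSizes()`; the class and method names here are illustrative, not from the demo):

```java
import java.util.Arrays;
import java.util.List;

public class PreviewSizePicker {
    // Pick the supported size whose pixel count is closest to the target.
    static int[] pickClosest(List<int[]> supported, int targetW, int targetH) {
        long target = (long) targetW * targetH;
        int[] best = supported.get(0);
        long bestDiff = Math.abs((long) best[0] * best[1] - target);
        for (int[] s : supported) {
            long diff = Math.abs((long) s[0] * s[1] - target);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = s;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<int[]> supported = Arrays.asList(
                new int[]{1920, 1080}, new int[]{1280, 720}, new int[]{640, 480});
        int[] chosen = pickClosest(supported, 1300, 700);
        System.out.println(chosen[0] + "x" + chosen[1]); // prints 1280x720
    }
}
```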
surfaceDestroyed is where resources get released; here it stops the camera, closes the encoder, and closes the output file:

@Override
public void surfaceDestroyed(SurfaceHolder holder) {
    if (null != camera) {
        camera.setPreviewCallback(null);
        camera.stopPreview();
        camera.release();
        camera = null;
    }
    x264.CloseX264Encode();
    try {
        outputStream.flush();
        outputStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Camera.PreviewCallback declares onPreviewFrame, which delivers each preview frame as YUV data:

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    time += timespan;
    byte[] yuv420 = new byte[width * height * 3 / 2];
    YUV420SP2YUV420(data, yuv420, width, height);
    x264.PushOriStream(yuv420, yuv420.length, time);
}

The YUV format of data is set in startcamera:
parameters.setPreviewFormat(ImageFormat.NV21);

Strictly speaking you should check which preview formats the phone's camera supports, but virtually every device supports NV21, so the demo just sets it directly.
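The format check described above boils down to a list lookup with a fallback. A hedged sketch, with the ImageFormat constants written out as plain ints so it is self-contained (on a device the list would come from `parameters.getSupportedPreviewFormats()`):

```java
import java.util.Arrays;
import java.util.List;

public class FormatCheck {
    // android.graphics.ImageFormat constants: NV21 = 17, YV12 = 842094169.
    static final int NV21 = 17;
    static final int YV12 = 842094169;

    // Return NV21 if the camera supports it, otherwise fall back to
    // the first supported format.
    static int choosePreviewFormat(List<Integer> supported) {
        return supported.contains(NV21) ? NV21 : supported.get(0);
    }

    public static void main(String[] args) {
        List<Integer> supported = Arrays.asList(NV21, YV12);
        System.out.println(choosePreviewFormat(supported) == NV21); // prints true
    }
}
```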
The NV21 frame is then converted to YUV420 (I420), since that is the input format the x264 encoder is configured for:

private void YUV420SP2YUV420(byte[] yuv420sp, byte[] yuv420, int width, int height) {
    if (yuv420sp == null || yuv420 == null)
        return;
    int framesize = width * height;
    int i = 0, j = 0;
    // copy the Y plane unchanged
    for (i = 0; i < framesize; i++) {
        yuv420[i] = yuv420sp[i];
    }
    // even chroma indices in NV21 are V samples -> V plane at offset framesize*5/4
    i = 0;
    for (j = 0; j < framesize / 2; j += 2) {
        yuv420[i + framesize * 5 / 4] = yuv420sp[j + framesize];
        i++;
    }
    // odd chroma indices are U samples -> U plane at offset framesize
    i = 0;
    for (j = 1; j < framesize / 2; j += 2) {
        yuv420[i + framesize] = yuv420sp[j + framesize];
        i++;
    }
}
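To make the byte shuffling concrete, here is the same de-interleave logic run on a tiny synthetic 4×2 frame (the frame contents are made up purely for illustration): NV21 stores the Y plane followed by interleaved V,U pairs, while I420 wants separate U and V planes after the Y plane.

```java
public class Nv21ToI420Demo {
    // Same logic as YUV420SP2YUV420 above: copy Y, then de-interleave the
    // VU pairs (NV21 stores V first) into separate U and V planes (I420).
    static byte[] convert(byte[] nv21, int width, int height) {
        int framesize = width * height;
        byte[] i420 = new byte[framesize * 3 / 2];
        System.arraycopy(nv21, 0, i420, 0, framesize);               // Y plane
        for (int i = 0, j = 1; j < framesize / 2; j += 2, i++)
            i420[framesize + i] = nv21[framesize + j];               // U plane
        for (int i = 0, j = 0; j < framesize / 2; j += 2, i++)
            i420[framesize + framesize / 4 + i] = nv21[framesize + j]; // V plane
        return i420;
    }

    public static void main(String[] args) {
        // 4x2 frame: 8 Y bytes, then interleaved chroma V0,U0,V1,U1.
        byte[] nv21 = {1, 2, 3, 4, 5, 6, 7, 8, 'V', 'U', 'W', 'X'};
        byte[] i420 = convert(nv21, 4, 2);
        // I420 layout: Y plane, then U plane (U0,U1), then V plane (V0,V1).
        System.out.println((char) i420[8] + "" + (char) i420[9]
                + (char) i420[10] + (char) i420[11]); // prints UXVW
    }
}
```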
Once converted, the frame is pushed to the native layer, which encodes the YUV to H.264; when encoding finishes, the result is thrown back through a callback:

x264.PushOriStream(yuv420, yuv420.length, time);
private x264sdk.listener l = new x264sdk.listener() {
    @Override
    public void h264data(byte[] buffer, int length) {
        try {
            // write only the valid bytes, not the whole buffer
            outputStream.write(buffer, 0, length);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
};
When the encoded data comes back, it is written to the file. That is essentially all of the Java code.
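Since the output is a raw Annex-B H.264 elementary stream with no container, one quick sanity check on the written .h264 file is to count NAL unit start codes. A minimal sketch over an in-memory byte array (this checker is my addition, not part of the demo):

```java
public class NalCounter {
    // Count Annex-B start codes (00 00 01, optionally preceded by another 00).
    static int countNalUnits(byte[] stream) {
        int count = 0;
        for (int i = 0; i + 2 < stream.length; i++) {
            if (stream[i] == 0 && stream[i + 1] == 0 && stream[i + 2] == 1) {
                count++;
                i += 2; // skip past this start code
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Two NAL units: a 4-byte then a 3-byte start code, each with a header byte.
        byte[] sample = {0, 0, 0, 1, 0x67, 0, 0, 1, 0x68};
        System.out.println(countNalUnits(sample)); // prints 2
    }
}
```

If the count stays at zero, the encoder callback or the file writing is broken; a healthy capture should show the count growing with every encoded frame.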
Now a quick look at the x264 encoder side:
1. Initialize the x264 encoder

void x264Encode::initX264Encode(int width, int height, int fps, int bite)

The caller passes in the video resolution, frame rate and bitrate. See the sample project for the full code; I will only call out the important parts here.
x264 supports multithreaded encoding; to enable it, set the _x264_param->i_threads parameter.
2. Encode YUV to H.264

void x264Encode::startEncoder(uint8_t *dataptr, char *&bufdata, int &buflen, int &isKeyFrame)

The first parameter is the YUV input; the second receives the address of the encoded H.264 data; the third returns its length; the fourth returns whether the frame is an I-frame.
3. Callback to Java

The encoder output is delivered through h264callbackFunc into the JNI layer, which forwards it to the Java layer via CallVoidMethod. The JNI callback code looks like this:
void CALLBACK H264DataCallBackFunc(void* pdata, int datalen)
{
    h264datacallback.name = "H264DataCallBackFunc";
    h264datacallback.signature = "([BI)V";
    JavaEnv java;
    if (java.istarch) {
        JNIEnv* menv = NULL;
        VM->AttachCurrentThread(&menv, NULL);
        jbyteArray pcmdata = menv->NewByteArray(datalen);
        menv->SetByteArrayRegion(pcmdata, 0, datalen, (jbyte*)pdata);
        java.env->CallVoidMethod(ehobj, h264datacallback.getMID(java.env, jclz), pcmdata, datalen);
    }
}
4. Write the file -> finish -> close the encoder.
The complete sample project can be downloaded here:
https://github.com/sszhangpengfei/android_x264_encoder