Converting NV21 to ARGB on Android

Each frame delivered by Android's Camera.PreviewCallback is in NV21 format, one of the YUV420 family. The Android documentation, however, does not spell out how the Y, U and V components are actually stored. After a lot of searching online without finding a detailed description (I will spare you the detours), I finally found an authoritative reference here:

http://www.fourcc.org/yuv.php#NV12
The relevant parts are excerpted below:
NV12

YUV 4:2:0 image with a plane of 8 bit Y samples followed by an interleaved U/V plane 
containing 8 bit 2x2 subsampled colour difference samples.

                          Horizontal    Vertical
Y Sample Period               1            1
V (Cr) Sample Period          2            2
U (Cb) Sample Period          2            2

Microsoft defines this format as follows:

 "A format in which all Y samples are found first in memory as an array of 
unsigned char with an even number of lines (possibly with a larger stride for 
memory alignment), followed immediately by an array of unsigned char containing 
interleaved Cb and Cr samples (such that if addressed as a little-endian WORD 
type, Cb would be in the LSBs and Cr would be in the MSBs) with the same total 
stride as the Y samples. This is the preferred 4:2:0 pixel format."

NV21

YUV 4:2:0 image with a plane of 8 bit Y samples followed by an interleaved V/U 
plane containing 8 bit 2x2 subsampled chroma samples. The same as NV12 except 
the interleave order of U and V is reversed.

                          Horizontal    Vertical
Y Sample Period               1            1
V (Cr) Sample Period          2            2
U (Cb) Sample Period          2            2

Microsoft defines this format as follows:

 "The same as NV12, except that Cb and Cr samples are swapped so that the 
chroma array of unsigned char would have Cr followed by Cb for each sample (such 
that if addressed as a little-endian WORD type, Cr would be in the LSBs and Cb 
would be in the MSBs)."

In other words, NV21 is sampled as YUV 4:2:0: the Y component is kept in full for every pixel, while U and V are subsampled once per 2x2 block of pixels. The Y component is stored as its own plane, and U and V are stored interleaved after it. Both NV12 and NV21 put the Y plane first and the packed chroma plane second; the difference is that NV12 interleaves the chroma in U/V order, whereas NV21 interleaves it in V/U order.
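
To make the layout concrete, here is a minimal Java sketch of the indexing (the helper name getNv21Pixel and the even width/height assumption are mine, not part of the camera API). It returns the Y, U and V samples of pixel (x, y) from the NV21 byte[] that onPreviewFrame delivers:

// Minimal NV21 indexing sketch; assumes width and height are even.
// Layout: width*height bytes of Y, then an interleaved V/U plane.
static int[] getNv21Pixel(byte[] nv21, int width, int height, int x, int y) {
    int yIndex = y * width + x;                            // Y plane: one byte per pixel
    int chromaBase = width * height;                       // start of the V/U plane
    int vIndex = chromaBase + (y / 2) * width + (x & ~1);  // V comes first in NV21...
    int uIndex = vIndex + 1;                               // ...immediately followed by U
    // Java bytes are signed, so mask to get the 0..255 sample values.
    return new int[] { nv21[yIndex] & 0xFF, nv21[uIndex] & 0xFF, nv21[vIndex] & 0xFF };
}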

Once the NV21 storage layout is understood, all that is left is the YUV-to-RGB conversion formula.

There is no shortage of YUV-to-RGB formulas online; the one I ended up using comes from Wikipedia. Here is the link:

http://en.wikipedia.org/wiki/YUV#Y.27UV420sp_.28NV21.29_to_RGB_conversion_.28Android.29
Its conversion function is essentially as follows:
void YUVImage::yuv2rgb(uint8_t yValue, uint8_t uValue, uint8_t vValue,
        uint8_t *r, uint8_t *g, uint8_t *b) const {
    // Compute in int first; writing straight into a uint8_t would wrap
    // out-of-range values before the clamp could catch them.
    int rTmp = yValue + (1.370705 * (vValue - 128));
    int gTmp = yValue - (0.698001 * (vValue - 128)) - (0.337633 * (uValue - 128));
    int bTmp = yValue + (1.732446 * (uValue - 128));
    *r = clamp(rTmp, 0, 255);
    *g = clamp(gTmp, 0, 255);
    *b = clamp(bTmp, 0, 255);
}
 

Simplified a little, this becomes:

    r = yValue + (1.370705 * (vValue-128));
    g = yValue - (0.698001 * (vValue-128)) - (0.337633 * (uValue-128));
    b = yValue + (1.732446 * (uValue-128));

    r = r < 0 ? 0 : ( r > 255 ? 255 : r);
    g = g < 0 ? 0 : ( g > 255 ? 255 : g);
    b = b < 0 ? 0 : ( b > 255 ? 255 : b);

At this point the NV21-to-RGB algorithm is clear: for each pixel, fetch its Y, U and V values and compute the corresponding R, G and B values.
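
For reference, the same loop can be written in pure Java against the byte[] from onPreviewFrame, which is handy for quick experiments before dropping down to JNI. This is only a sketch under the assumptions above (even width and height; the method name nv21ToArgb is mine); it packs each pixel into a 0xAARRGGBB int:

// Pure-Java NV21 -> ARGB sketch: one packed ARGB int per pixel.
static int[] nv21ToArgb(byte[] nv21, int width, int height) {
    int[] argb = new int[width * height];
    int chromaBase = width * height;             // V/U plane starts after the Y plane
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int yVal = nv21[row * width + col] & 0xFF;
            int vuIndex = chromaBase + (row / 2) * width + (col & ~1);
            int vVal = nv21[vuIndex] & 0xFF;     // V first in NV21
            int uVal = nv21[vuIndex + 1] & 0xFF; // then U
            int r = (int) (yVal + 1.370705 * (vVal - 128));
            int g = (int) (yVal - 0.698001 * (vVal - 128) - 0.337633 * (uVal - 128));
            int b = (int) (yVal + 1.732446 * (uVal - 128));
            r = Math.max(0, Math.min(255, r));
            g = Math.max(0, Math.min(255, g));
            b = Math.max(0, Math.min(255, b));
            argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}

The resulting int[] can be shown directly with Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888).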

Below is my conversion code in the JNI layer on Android:

    // Pin (or copy) the NV21 byte[] passed in from Java.
    jboolean isCopy = JNI_FALSE;
    unsigned char *buffer = (unsigned char*)env->GetByteArrayElements(data, &isCopy);

    // The output is 4 bytes per pixel: R, G, B, A.
    int length = width * height * 4;
    unsigned char *rgbBuf = (unsigned char*)malloc(length);

    for (int iHeight = 0; iHeight < height; iHeight++)
    {
        for (int iWidth = 0; iWidth < width; iWidth++)
        {
            // Y plane: one byte per pixel.
            unsigned char yValue = buffer[width * iHeight + iWidth];

            // The chroma plane starts right after the Y plane and holds one
            // interleaved V/U pair per 2x2 block; snap the column to even.
            int index = iWidth % 2 == 0 ? iWidth : iWidth - 1;
            unsigned char vValue = buffer[width * height + width * (iHeight / 2) + index];
            unsigned char uValue = buffer[width * height + width * (iHeight / 2) + index + 1];

            double r = yValue + (1.370705 * (vValue - 128));
            double g = yValue - (0.698001 * (vValue - 128)) - (0.337633 * (uValue - 128));
            double b = yValue + (1.732446 * (uValue - 128));

            r = r < 0 ? 0 : (r > 255 ? 255 : r);
            g = g < 0 ? 0 : (g > 255 ? 255 : g);
            b = b < 0 ? 0 : (b > 255 ? 255 : b);

            // Write R, G, B, A for this pixel.
            rgbBuf[width * iHeight * 4 + iWidth * 4 + 0] = (unsigned char)r;
            rgbBuf[width * iHeight * 4 + iWidth * 4 + 1] = (unsigned char)g;
            rgbBuf[width * iHeight * 4 + iWidth * 4 + 2] = (unsigned char)b;
            rgbBuf[width * iHeight * 4 + iWidth * 4 + 3] = 255;
        }
    }
    // Release the Java-side NV21 buffer.
    env->ReleaseByteArrayElements(data, (jbyte*)buffer, 0);

    // Copy the result into a new Java byte[] and free the native buffer.
    jbyteArray arr = env->NewByteArray(length);
    env->SetByteArrayRegion(arr, 0, length, (jbyte*)rgbBuf);
    free(rgbBuf);
    return arr;

Here data, width and height are parameters passed in from the Java layer, and env is the JNIEnv pointer every JNI function receives. The returned arr is a jbyteArray, which arrives in Java as a byte[]. Note that the channel order written per pixel is R, G, B, A, which is exactly the in-memory byte layout of an ARGB_8888 bitmap, so on the Java side the returned byte[] can be processed as ARGB_8888 pixel data.
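
For completeness, here is a hedged sketch of how the returned byte[] might be consumed on the Java side; the class, library and native method names below are assumptions of mine, not taken from the original code. Because ARGB_8888 bitmaps keep their pixels in memory as R, G, B, A bytes, the buffer can be copied in directly with copyPixelsFromBuffer:

import android.graphics.Bitmap;
import java.nio.ByteBuffer;

public class Nv21Converter {
    static { System.loadLibrary("nv21"); }  // assumed native library name

    // Assumed declaration matching the JNI snippet above.
    public static native byte[] nv21ToRgba(byte[] data, int width, int height);

    // ARGB_8888 stores each pixel as R, G, B, A bytes in memory, so the
    // returned buffer can be copied in without any per-pixel swizzling.
    public static Bitmap toBitmap(byte[] nv21, int width, int height) {
        byte[] rgba = nv21ToRgba(nv21, width, height);
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(rgba));
        return bitmap;
    }
}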
