How to capture YUV images with Android Camera2 and save the YUV data as a Bitmap

This article explains how to handle YUV_420_888 image data with the Android Camera2 API: the difference between YUV420P and YUV420SP, and how to obtain the data from Camera2's ImageReader callback and convert it to a Bitmap. Working code snippets and a timing analysis are included.

1. YUV basics
YUV420P vs. YUV420SP
YUV420P (e.g. YV12) is planar: the full Y plane is followed by a separate V plane and a separate U plane. Every 2x2 block of four Y samples shares one U and one V sample.
YUV420SP (e.g. NV21) is semi-planar: the full Y plane is followed by a single chroma plane in which V and U bytes are interleaved. The same 2x2 sharing of one U and one V per four Y samples applies.
Example: YV12 layout for a 6x4 image
YYYYYY
YYYYYY
YYYYYY
YYYYYY
VVVVVV
UUUUUU
Example: NV21 layout for a 6x4 image
YYYYYY
YYYYYY
YYYYYY
YYYYYY
VUVUVU
VUVUVU
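The layouts above boil down to simple index arithmetic. A minimal sketch (class and method names are ours; it assumes no row padding, i.e. rowStride == width) that locates the Y, V, and U bytes of pixel (x, y) in an NV21 buffer:

```java
public class Nv21Index {
    // Total NV21 buffer size: full-resolution Y plane plus a half-size interleaved VU plane.
    static int nv21Size(int width, int height) {
        return width * height + width * height / 2;
    }

    // Y is stored one byte per pixel, row by row.
    static int yIndex(int width, int x, int y) {
        return y * width + x;
    }

    // Each 2x2 block of Y samples shares one V and one U byte, stored as V,U pairs
    // after the Y plane. The pair for pixel (x, y) starts at:
    static int vIndex(int width, int height, int x, int y) {
        return width * height + (y / 2) * width + (x / 2) * 2;
    }

    static int uIndex(int width, int height, int x, int y) {
        return vIndex(width, height, x, y) + 1;
    }
}
```

For the 6x4 NV21 example above, the buffer is 36 bytes, the chroma section starts at offset 24, and pixel (5, 3) shares the V,U pair at offsets 34 and 35.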

2. How the YUV_420_888 data returned by Camera2's ImageReader callback is laid out
image = reader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes(); // planes of the YUV image; plane 0 is the Y component
Image.Plane plane = planes[i];
Buffer buffer = plane.getBuffer();

1. buffer.remaining() returns the number of bytes in the plane's buffer
2. plane.getPixelStride() returns the pixel stride (bytes between consecutive samples) of the plane
3. plane.getRowStride() returns the row stride of the plane
With a preview resolution of 1280x720, the log below shows the plane layout of the YUV_420_888 data. getPixelStride is 2 for plane 1 and plane 2, meaning only every other byte belongs to that plane's own channel:
In plane 1, the bytes at indices 0, 2, 4, 6, ... are U samples, with V samples interleaved between them; the buffer length is 1280*720/2 - 1.
In plane 2, the bytes at indices 0, 2, 4, 6, ... are V samples, with U samples interleaved between them; the length is the same.
2023-03-06 10:36:54.068 31203-31258 Camera2Fragment I getByteFromYuvReader() planes.length:3
2023-03-06 10:36:54.068 31203-31258 Camera2Fragment I getByteFromYuvReader() i:0 buffer.remaining:921600 getPixelStride:1 getRowStride:1280
2023-03-06 10:36:54.068 31203-31258 Camera2Fragment I getByteFromYuvReader() i:1 buffer.remaining:460799 getPixelStride:2 getRowStride:1280
2023-03-06 10:36:54.068 31203-31258 Camera2Fragment I getByteFromYuvReader() i:2 buffer.remaining:460799 getPixelStride:2 getRowStride:1280
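The logged sizes match what the semi-planar layout predicts. A quick sketch of the arithmetic (helper names are ours): plane 0 holds width*height Y bytes, while each interleaved chroma plane view spans width*height/2 bytes minus the one trailing byte that belongs to the other channel:

```java
public class PlaneSizes {
    // Expected buffer sizes for YUV_420_888 when rowStride == width and the
    // chroma planes are interleaved (pixelStride == 2), as in the log above.
    static int yPlaneSize(int width, int height) {
        return width * height;
    }

    // Each chroma plane covers width*height/2 bytes of interleaved data,
    // minus the trailing byte that belongs to the other channel.
    static int chromaPlaneSize(int width, int height) {
        return width * height / 2 - 1;
    }
}
```

For 1280x720 this gives 921600 and 460799, exactly the buffer.remaining values in the log.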

3. Capturing YUV and converting it to a Bitmap
1. Capturing an image in YUV format

mImageReader = ImageReader.newInstance(Config.SHOOT_PIC_WIDTH,
        Config.SHOOT_PIC_HEIGHT, ImageFormat.YUV_420_888, 1);

Note: Camera API 1 defaults to NV21; for Camera API 2, Android recommends YUV_420_888.

2. Handling the imagereader.onImageAvailable callback

if (ImageFormat.YUV_420_888 == reader.getImageFormat()) {
    Bitmap bitmap = getBitmapFromYuvReader(reader);
}

// Read YUV from the ImageReader and convert it to a Bitmap
private synchronized Bitmap getBitmapFromYuvReader(ImageReader reader) {
    if (null == reader) {
        Logger.i(TAG, "getBitmapFromYuvReader() reader is null return null");
        return null;
    }

    Image image = null;
    try {
        byte[] plane0Y;      // plane 0: Y
        byte[] plane1WithU;  // plane 1: contains U (with V interleaved when pixelStride == 2)
        byte[] plane2WithV;  // plane 2: contains V (with U interleaved when pixelStride == 2)
        byte[] u = null;     // the real U samples
        byte[] v = null;     // the real V samples
        // Grab the captured frame
        image = reader.acquireLatestImage();
        if (null == image) {
            Logger.w(TAG, "getBitmapFromYuvReader() image is null");
            return null;
        }
        Image.Plane[] planes = image.getPlanes();
        Logger.i(TAG, "getBitmapFromYuvReader() planes.length:" + planes.length);
        if (planes.length != 3) {
            return null;
        }
        // Allocate one copy per plane (to reuse these buffers across frames
        // and reduce GC pressure, promote them to fields of the class)
        plane0Y = new byte[planes[0].getBuffer().remaining()];
        plane1WithU = new byte[planes[1].getBuffer().remaining()];
        plane2WithV = new byte[planes[2].getBuffer().remaining()];
        for (int i = 0; i < planes.length; i++) {
            Image.Plane plane = planes[i];
            Buffer buffer = plane.getBuffer();
            // e.g. 1280*720
            Logger.i(TAG, "getBitmapFromYuvReader() i:" + i + " buffer.remaining:" + buffer.remaining()
                    + " getPixelStride:" + plane.getPixelStride() + " getRowStride:" + plane.getRowStride());
        }
        if (planes[0].getBuffer().remaining() == plane0Y.length) {
            planes[0].getBuffer().get(plane0Y);
            planes[1].getBuffer().get(plane1WithU);
            planes[2].getBuffer().get(plane2WithV);
            if (planes[1].getPixelStride() == 2) { // semi-planar: chroma is interleaved
                // Extract the U and V samples. The +1 is needed because plane 1
                // and plane 2 each store one byte fewer (width*height/2 - 1 bytes).
                u = new byte[(plane1WithU.length + 1) / 2];
                v = new byte[(plane2WithV.length + 1) / 2];
                int indexU = 0;
                int indexV = 0;
                for (int i = 0; i < plane1WithU.length; i++) {
                    if (0 == (i % 2)) {
                        u[indexU] = plane1WithU[i];
                        indexU++;
                    }
                }
                for (int j = 0; j < plane2WithV.length; j++) {
                    if (0 == (j % 2)) {
                        v[indexV] = plane2WithV[j];
                        indexV++;
                    }
                }
            } else { // planar (pixelStride == 1): the plane buffers already hold pure U and V
                u = plane1WithU;
                v = plane2WithV;
            }
            byte[] arrayNV21 = getArrayNV21FromYuv(plane0Y, u, v);
            if (null == arrayNV21) {
                return null;
            }
            final int WIDTH = Config.SHOOT_PIC_WIDTH;
            final int HEIGHT = Config.SHOOT_PIC_HEIGHT;
            Logger.i(TAG, "getBitmapFromYuvReader() arrayNV21.length:" + arrayNV21.length);
            YuvImage yuvImage = new YuvImage(arrayNV21, ImageFormat.NV21, WIDTH, HEIGHT, null);
            ByteArrayOutputStream stream = new ByteArrayOutputStream();
            yuvImage.compressToJpeg(new Rect(0, 0, WIDTH, HEIGHT), 80, stream);
            Bitmap newBitmap = BitmapFactory.decodeByteArray(stream.toByteArray(), 0, stream.size());
            stream.close();
            return newBitmap;
        }

    } catch (Exception ex) {
        Logger.i(TAG, "getBitmapFromYuvReader() error:" + ex);
    } finally {
        // Always close the image
        if (image != null) {
            image.close();
        }
    }

    return null;
}
// Merge the Y, U and V arrays into an NV21 byte array
private byte[] getArrayNV21FromYuv(byte[] y, byte[] u, byte[] v) {
    // Normally y holds WIDTH*HEIGHT bytes, and u and v each hold WIDTH*HEIGHT/4 bytes
    final int WIDTH = Config.SHOOT_PIC_WIDTH;   // image width
    final int HEIGHT = Config.SHOOT_PIC_HEIGHT; // image height
    if (WIDTH * HEIGHT != y.length) {
        Logger.i(TAG, "getArrayNV21FromYuv() y length is error");
        return null;
    }
    if ((WIDTH * HEIGHT / 4) != u.length || (WIDTH * HEIGHT / 4) != v.length) {
        Logger.i(TAG, "getArrayNV21FromYuv() u or v length is error!");
        return null;
    }
    int lengthY = y.length;
    int lengthU = u.length;
    int lengthV = v.length;
    int newLength = lengthY + lengthU + lengthV;
    byte[] arrayNV21 = new byte[newLength];
    // Copy all the Y data first
    System.arraycopy(y, 0, arrayNV21, 0, y.length);

    // Then interleave the chroma as V,U pairs (U and V have equal length;
    // remember NV21 order is V first, then U: VU VU ...)
    for (int i = 0; i < v.length; i++) {
        arrayNV21[lengthY + i * 2] = v[i];
    }

    for (int i = 0; i < u.length; i++) {
        arrayNV21[lengthY + i * 2 + 1] = u[i];
    }
    Logger.i(TAG, "getArrayNV21FromYuv()");
    return arrayNV21;
}
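The element-by-element loops above are easy to follow but not the fastest option. When, as in the log in section 2, pixelStride is 2 and rowStride equals the width, plane 2's bytes already alternate V,U,V,U... (exactly NV21's chroma order), so the NV21 array can be assembled with two bulk copies plus one byte. A sketch under those stride assumptions (the class and method names are ours; the arguments are the per-plane copies made in getBitmapFromYuvReader):

```java
public class Nv21Fast {
    // Fast NV21 assembly for the semi-planar case (pixelStride == 2, rowStride == width).
    static byte[] fastNV21(byte[] plane0Y, byte[] plane1WithU, byte[] plane2WithV,
                           int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        // The Y plane copies over unchanged.
        System.arraycopy(plane0Y, 0, nv21, 0, width * height);
        // Plane 2 starts at V and alternates V,U,V,U... -- already NV21's chroma order.
        System.arraycopy(plane2WithV, 0, nv21, width * height, plane2WithV.length);
        // Plane 2's view is one byte short (width*height/2 - 1); the missing final U
        // is the last byte of plane 1 (which starts at U and alternates U,V,U,V...).
        nv21[nv21.length - 1] = plane1WithU[plane1WithU.length - 1];
        return nv21;
    }
}
```

Always verify getPixelStride() and getRowStride() at runtime before taking this shortcut; devices with row padding need the per-row handling instead.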

---- Timing analysis ----------------
As the log shows, converting the YUV byte arrays into an NV21 array takes 12 ms; from pulling the YUV bytes out of the reader to finishing the NV21 array takes 17 ms in total; converting the NV21 array into a Bitmap takes 24 ms.
End to end, from the ImageReader callback to the finished Bitmap, takes 43 ms.
2023-01-01 02:01:38.071 10657-10692 ImageUtil org.opencv.cameratest I getBitmapFromYuvReader 22 arrayNV21.length:1382400 diffYuv2Nv21Mills:12 diffNv21FromReaderMills:17
2023-01-01 02:01:38.096 10657-10692 CvUtil org.opencv.cameratest E getCvFaceBitmap mCacheBitmap:android.graphics.Bitmap@45ff5f6 diffMills:24
2023-01-01 02:01:38.098 10657-10692 ImageUtil org.opencv.cameratest I getCheckedBitmap end diffMills:43 format:35

For more background, see the following links:

https://blog.csdn.net/u010126792/article/details/86593199
https://blog.csdn.net/weixin_41937380/article/details/127758173
