Android NDK Development in Depth: Camera, Image Capture
The image capture use case is designed for taking high-resolution, high-quality photos, and it provides auto-white-balance, auto-exposure, and auto-focus (3A) functionality in addition to simple manual camera controls. The caller is responsible for deciding how to use the captured picture, including the following options:
takePicture(Executor, OnImageCapturedCallback): This method provides an in-memory buffer for the captured image (a short sketch follows this list).
takePicture(OutputFileOptions, Executor, OnImageSavedCallback): This method saves the captured image to the provided file location.
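A minimal sketch of the in-memory variant might look like the following (imageCapture and cameraExecutor are assumed to be set up elsewhere, as in the samples later in this article):
Kotlin
imageCapture.takePicture(cameraExecutor,
    object : ImageCapture.OnImageCapturedCallback() {
        override fun onCaptureSuccess(image: ImageProxy) {
            // The frame arrives as an ImageProxy backed by an in-memory buffer.
            // Process it here, then close it to release the buffer.
            image.close()
        }
        override fun onError(exception: ImageCaptureException) {
            // Handle the capture failure.
        }
    })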
There are two types of customizable executors that ImageCapture runs on: the callback executor and the IO executor.
The callback executor is a parameter of the takePicture methods. It is used to execute the user-provided OnImageCapturedCallback().
If the caller chooses to save the image to a file location, you can specify an executor to perform the IO. To set the IO executor, call ImageCapture.Builder.setIoExecutor(Executor). If the executor is absent, CameraX defaults to an internal IO executor for the task.
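As a rough sketch, assuming you want a dedicated single-threaded executor for disk writes (the executor choice here is just an example):
Kotlin
// Route the file-saving IO through your own executor instead of
// the CameraX internal one (java.util.concurrent.Executors).
val ioExecutor = Executors.newSingleThreadExecutor()

val imageCapture = ImageCapture.Builder()
    .setIoExecutor(ioExecutor)
    .build()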
Set up image capture
The image capture use case provides the basic controls needed for taking pictures, such as flash, continuous auto-focus, zero-shutter lag, and so on.
setCaptureMode()
ImageCapture.Builder.setCaptureMode() configures the capture mode used when taking a photo:
CAPTURE_MODE_MINIMIZE_LATENCY: reduces the latency of image capture.
CAPTURE_MODE_MAXIMIZE_QUALITY: prioritizes image quality over latency.
The capture mode defaults to CAPTURE_MODE_MINIMIZE_LATENCY. For more information, see the setCaptureMode() reference documentation.
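For example, a builder configured to prioritize quality might look like this (a minimal sketch):
Kotlin
// Prefer image quality over latency for this ImageCapture instance.
val imageCapture = ImageCapture.Builder()
    .setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
    .build()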
Zero-shutter lag
Note: Zero-shutter lag is still an experimental feature. To provide feedback on zero-shutter lag, join the Android CameraX discussion group.
Starting with 1.2, zero-shutter lag (CAPTURE_MODE_ZERO_SHOT_LAG) is available as a capture mode. Compared with the default capture mode, CAPTURE_MODE_MINIMIZE_LATENCY, enabling zero-shutter lag reduces latency significantly, so you never miss the shot.
Zero-shutter lag uses a ring buffer that stores the three most recently captured frames. When the user presses the capture button, CameraX invokes takePicture(), and the ring buffer retrieves the captured frame whose timestamp is closest to the button press. CameraX then reprocesses the capture session to generate an image from that frame, which is saved to disk in JPEG format.
Prerequisites
Before enabling zero-shutter lag, use isZslSupported() to determine whether the device in question meets the following requirements:
Targets Android 6.0 or higher (API level 23+).
Supports PRIVATE reprocessing.
If the device doesn't meet the minimum requirements, CameraX falls back to CAPTURE_MODE_MINIMIZE_LATENCY.
Zero-shutter lag is only available for the image capture use case. You can't enable it for the video capture use case or for camera extensions. Finally, because using the flash adds latency, zero-shutter lag doesn't work when the flash is on or in auto mode. For more information about setting the flash mode, see setFlashMode().
Enable zero-shutter lag
To enable zero-shutter lag, pass CAPTURE_MODE_ZERO_SHOT_LAG to ImageCapture.Builder.setCaptureMode(). If that fails, setCaptureMode() falls back to CAPTURE_MODE_MINIMIZE_LATENCY.
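A possible sketch, assuming you already hold a CameraInfo for the selected camera (for example, from cameraProvider.availableCameraInfos) and that your module opts in to the experimental zero-shutter-lag API where required:
Kotlin
// Choose zero-shutter lag only when the device reports support for it.
val captureMode = if (cameraInfo.isZslSupported) {
    ImageCapture.CAPTURE_MODE_ZERO_SHOT_LAG
} else {
    ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY
}

val imageCapture = ImageCapture.Builder()
    .setCaptureMode(captureMode)
    .build()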
setFlashMode()
The default flash mode is FLASH_MODE_OFF. To set the flash mode, use ImageCapture.Builder.setFlashMode():
FLASH_MODE_ON: the flash is always on.
FLASH_MODE_AUTO: the flash turns on automatically for captures in low light.
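For example (a minimal sketch):
Kotlin
// Configure auto flash when building the use case...
val imageCapture = ImageCapture.Builder()
    .setFlashMode(ImageCapture.FLASH_MODE_AUTO)
    .build()

// ...or change it later on the built use case.
imageCapture.flashMode = ImageCapture.FLASH_MODE_ON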
Take a picture
The following code sample shows how to configure your app to take a photo:
Kotlin
val imageCapture = ImageCapture.Builder()
    .setTargetRotation(view.display.rotation)
    .build()

cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, imageCapture,
    imageAnalysis, preview)
Java
ImageCapture imageCapture =
        new ImageCapture.Builder()
                .setTargetRotation(view.getDisplay().getRotation())
                .build();

cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, imageCapture, imageAnalysis, preview);
Note that bindToLifecycle() returns a Camera object. See this guide for more information about controlling the camera output, such as zoom and exposure.
Once you have configured the camera, the following code takes a photo in response to a user action:
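As a quick illustration of what that Camera object exposes (a sketch; the zoom and torch values are arbitrary examples):
Kotlin
val camera = cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, imageCapture)

// CameraControl drives the camera; CameraInfo reports its current state.
camera.cameraControl.setZoomRatio(2.0f)
camera.cameraControl.enableTorch(true)
val zoomState = camera.cameraInfo.zoomState.value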
Kotlin
fun onClick() {
    val outputFileOptions = ImageCapture.OutputFileOptions.Builder(File(...)).build()
    imageCapture.takePicture(outputFileOptions, cameraExecutor,
        object : ImageCapture.OnImageSavedCallback {
            override fun onError(error: ImageCaptureException) {
                // insert your code here.
            }
            override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
                // insert your code here.
            }
        })
}
Java
public void onClick() {
    ImageCapture.OutputFileOptions outputFileOptions =
            new ImageCapture.OutputFileOptions.Builder(new File(...)).build();
    imageCapture.takePicture(outputFileOptions, cameraExecutor,
        new ImageCapture.OnImageSavedCallback() {
            @Override
            public void onImageSaved(ImageCapture.OutputFileResults outputFileResults) {
                // insert your code here.
            }
            @Override
            public void onError(ImageCaptureException error) {
                // insert your code here.
            }
        }
    );
}
The image capture method fully supports the JPEG format. For sample code that shows how to convert a Media.Image object from YUV_420_888 format to an RGB Bitmap object, see YuvToRgbConverter.kt:
/*
* Copyright 2020 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.example.android.camera.utils
import android.content.Context
import android.graphics.Bitmap
import android.graphics.ImageFormat
import android.graphics.Rect
import android.media.Image
import android.renderscript.Allocation
import android.renderscript.Element
import android.renderscript.RenderScript
import android.renderscript.ScriptIntrinsicYuvToRGB
import android.renderscript.Type
import java.nio.ByteBuffer
/**
* Helper class used to efficiently convert a [Media.Image] object from
* [ImageFormat.YUV_420_888] format to an RGB [Bitmap] object.
*
* The [yuvToRgb] method is able to achieve the same FPS as the CameraX image
* analysis use case on a Pixel 3 XL device at the default analyzer resolution,
* which is 30 FPS with 640x480.
*
* NOTE: This has been tested in a limited number of devices and is not
* considered production-ready code. It was created for illustration purposes,
* since this is not an efficient camera pipeline due to the multiple copies
* required to convert each frame.
*/
class YuvToRgbConverter(context: Context) {
private val rs = RenderScript.create(context)
private val scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs))
private var pixelCount: Int = -1
private lateinit var yuvBuffer: ByteBuffer
private lateinit var inputAllocation: Allocation
private lateinit var outputAllocation: Allocation
@Synchronized
fun yuvToRgb(image: Image, output: Bitmap) {
// Ensure that the intermediate output byte buffer is allocated
if (!::yuvBuffer.isInitialized) {
pixelCount = image.cropRect.width() * image.cropRect.height()
// Bits per pixel is an average for the whole image, so it's useful to compute the size
// of the full buffer but should not be used to determine pixel offsets
val pixelSizeBits = ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888)
yuvBuffer = ByteBuffer.allocateDirect(pixelCount * pixelSizeBits / 8)
}
// Rewind the buffer; no need to clear it since it will be filled
yuvBuffer.rewind()
// Get the YUV data in byte array form using NV21 format
imageToByteBuffer(image, yuvBuffer.array())
// Ensure that the RenderScript inputs and outputs are allocated
if (!::inputAllocation.isInitialized) {
// Explicitly create an element with type NV21, since that's the pixel format we use
val elemType = Type.Builder(rs, Element.YUV(rs)).setYuvFormat(ImageFormat.NV21).create()
inputAllocation = Allocation.createSized(rs, elemType.element, yuvBuffer.array().size)
}
if (!::outputAllocation.isInitialized) {
outputAllocation = Allocation.createFromBitmap(rs, output)
}
// Convert NV21 format YUV to RGB
inputAllocation.copyFrom(yuvBuffer.array())
scriptYuvToRgb.setInput(inputAllocation)
scriptYuvToRgb.forEach(outputAllocation)
outputAllocation.copyTo(output)
}
private fun imageToByteBuffer(image: Image, outputBuffer: ByteArray) {
assert(image.format == ImageFormat.YUV_420_888)
val imageCrop = image.cropRect
val imagePlanes = image.planes
imagePlanes.forEachIndexed { planeIndex, plane ->
// How many values are read in input for each output value written
// Only the Y plane has a value for every pixel, U and V have half the resolution i.e.
//
// Y Plane U Plane V Plane
// =============== ======= =======
// Y Y Y Y Y Y Y Y U U U U V V V V
// Y Y Y Y Y Y Y Y U U U U V V V V
// Y Y Y Y Y Y Y Y U U U U V V V V
// Y Y Y Y Y Y Y Y U U U U V V V V
// Y Y Y Y Y Y Y Y
// Y Y Y Y Y Y Y Y
// Y Y Y Y Y Y Y Y
val outputStride: Int
// The index in the output buffer the next value will be written at
// For Y it's zero, for U and V we start at the end of Y and interleave them i.e.
//
// First chunk Second chunk
// =============== ===============
// Y Y Y Y Y Y Y Y V U V U V U V U
// Y Y Y Y Y Y Y Y V U V U V U V U
// Y Y Y Y Y Y Y Y V U V U V U V U
// Y Y Y Y Y Y Y Y V U V U V U V U
// Y Y Y Y Y Y Y Y
// Y Y Y Y Y Y Y Y
// Y Y Y Y Y Y Y Y
var outputOffset: Int
when (planeIndex) {
0 -> {
outputStride = 1
outputOffset = 0
}
1 -> {
outputStride = 2
// For NV21 format, U is in odd-numbered indices
outputOffset = pixelCount + 1
}
2 -> {
outputStride = 2
// For NV21 format, V is in even-numbered indices
outputOffset = pixelCount
}
else -> {
// Image contains more than 3 planes, something strange is going on
return@forEachIndexed
}
}
val planeBuffer = plane.buffer
val rowStride = plane.rowStride
val pixelStride = plane.pixelStride
// We have to divide the width and height by two if it's not the Y plane
val planeCrop = if (planeIndex == 0) {
imageCrop
} else {
Rect(
imageCrop.left / 2,
imageCrop.top / 2,
imageCrop.right / 2,
imageCrop.bottom / 2
)
}
val planeWidth = planeCrop.width()
val planeHeight = planeCrop.height()
// Intermediate buffer used to store the bytes of each row
val rowBuffer = ByteArray(plane.rowStride)
// Size of each row in bytes
val rowLength = if (pixelStride == 1 && outputStride == 1) {
planeWidth
} else {
// Take into account that the stride may include data from pixels other than this
// particular plane and row, and that could be between pixels and not after every
// pixel:
//
// |---- Pixel stride ----| Row ends here --> |
// | Pixel 1 | Other Data | Pixel 2 | Other Data | ... | Pixel N |
//
// We need to get (N-1) * (pixel stride bytes) per row + 1 byte for the last pixel
(planeWidth - 1) * pixelStride + 1
}
for (row in 0 until planeHeight) {
// Move buffer position to the beginning of this row
planeBuffer.position(
(row + planeCrop.top) * rowStride + planeCrop.left * pixelStride)
if (pixelStride == 1 && outputStride == 1) {
// When there is a single stride value for pixel and output, we can just copy
// the entire row in a single step
planeBuffer.get(outputBuffer, outputOffset, rowLength)
outputOffset += rowLength
} else {
// When either pixel or output have a stride > 1 we must copy pixel by pixel
planeBuffer.get(rowBuffer, 0, rowLength)
for (col in 0 until planeWidth) {
outputBuffer[outputOffset] = rowBuffer[col * pixelStride]
outputOffset += outputStride
}
}
}
}
}
}
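A hypothetical usage sketch (it assumes image is a YUV_420_888 android.media.Image, for example obtained from an ImageAnalysis ImageProxy via getImage(), which requires the ExperimentalGetImage opt-in, and that context is an Android Context):
Kotlin
// Hypothetical usage: convert one YUV_420_888 frame to an RGB Bitmap.
val converter = YuvToRgbConverter(context)
val bitmap = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)
converter.yuvToRgb(image, bitmap)
// `bitmap` now holds the RGB pixels of the frame.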
Additional resources
To learn more about CameraX, see the following additional resources.
Codelab
Getting started with CameraX
Code samples
CameraX sample apps