Android Source Code: Camera2 Preview Flow Analysis, Part 1

Let's start with a typical piece of preview code and walk through the camera preview flow.

  1. Obtain the SurfaceTexture from the TextureView
  2. Configure the SurfaceTexture's default buffer size to the camera preview size
  3. Create a new Surface as the preview output
  4. Set the Surface on the CaptureRequest.Builder
  5. Create a CameraCaptureSession for camera preview
  6. Once the CameraCaptureSession is created successfully, set auto-focus on the CaptureRequest.Builder and enable the flash when necessary
  7. The preview can now be displayed: set a repeating request for preview data on the CameraCaptureSession
    /**
     * Creates a new [CameraCaptureSession] for camera preview.
     */
    private fun createCameraPreviewSession() {
        try {
            val texture = textureView.surfaceTexture

            // We configure the size of default buffer to be the size of camera preview we want.
            texture.setDefaultBufferSize(previewSize.width, previewSize.height)

            // This is the output Surface we need to start preview.
            val surface = Surface(texture)

            // We set up a CaptureRequest.Builder with the output Surface.
            previewRequestBuilder = cameraDevice!!.createCaptureRequest(
                    CameraDevice.TEMPLATE_PREVIEW
            )
            previewRequestBuilder.addTarget(surface)

            // Here, we create a CameraCaptureSession for camera preview.
            cameraDevice?.createCaptureSession(Arrays.asList(surface),
                    object : CameraCaptureSession.StateCallback() {

                        override fun onConfigured(cameraCaptureSession: CameraCaptureSession) {
                            // The camera is already closed
                            if (cameraDevice == null) return

                            // When the session is ready, we start displaying the preview.
                            captureSession = cameraCaptureSession
                            try {
                                // Auto focus should be continuous for camera preview.
                                previewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
                                // Flash is automatically enabled when necessary.
                                setAutoFlash(previewRequestBuilder)

                                // Finally, we start displaying the camera preview.
                                previewRequest = previewRequestBuilder.build()
                                captureSession?.setRepeatingRequest(previewRequest,
                                        captureCallback, backgroundHandler)
                            } catch (e: CameraAccessException) {
                                Log.e(TAG, e.toString())
                            }

                        }

                        override fun onConfigureFailed(session: CameraCaptureSession) {
                            activity.showToast("Failed")
                        }
                    }, null)
        } catch (e: CameraAccessException) {
            Log.e(TAG, e.toString())
        }

    }

The CameraMetadataNative class is the implementation of camera metadata marshal/unmarshal across Binder to the camera service.

The key logic in CameraDeviceImpl.createCaptureRequest(…), shown below, is the call to createDefaultRequest(…) on the object referenced by mRemoteDevice; templateType is CameraDevice.TEMPLATE_PREVIEW, whose value is 1. The resulting CameraMetadataNative object is then passed as a constructor argument to CaptureRequest.Builder, and the Builder object is returned.

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

public class CameraDeviceImpl extends CameraDevice {
    ......
    @Override
    public CaptureRequest.Builder createCaptureRequest(int templateType)
            throws CameraAccessException {
        synchronized(mInterfaceLock) {
            checkIfCameraClosedOrInError();

            CameraMetadataNative templatedRequest = new CameraMetadataNative();

            try {
                mRemoteDevice.createDefaultRequest(templateType, /*out*/templatedRequest);
            } catch (CameraRuntimeException e) {
                throw e.asChecked();
            } catch (RemoteException e) {
                // impossible
                return null;
            }

            CaptureRequest.Builder builder = new CaptureRequest.Builder(
                    templatedRequest, /*reprocess*/false, CameraCaptureSession.SESSION_ID_NONE);

            return builder;
        }
    }
    ......
}

First, where is mRemoteDevice assigned? In CameraDeviceImpl.setRemoteDevice(…), an object implementing the ICameraDeviceUser interface is created by calling CameraBinderDecorator.newInstance(…).

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

public class CameraDeviceImpl extends CameraDevice {
    ......
    public void setRemoteDevice(ICameraDeviceUser remoteDevice) {
        synchronized(mInterfaceLock) {
            // TODO: Move from decorator to direct binder-mediated exceptions
            // If setRemoteFailure already called, do nothing
            if (mInError) return;

            mRemoteDevice = CameraBinderDecorator.newInstance(remoteDevice);

            mDeviceHandler.post(mCallOnOpened);
            mDeviceHandler.post(mCallOnUnconfigured);
        }
    }
    ......
}

The static CameraBinderDecorator.newInstance(…) is a generic method. Internally it simply delegates to the generic Decorator.newInstance(…) method, passing a CameraBinderDecoratorListener object in addition to the object of type T.

frameworks/base/core/java/android/hardware/camera2/utils/CameraBinderDecorator.java

public class CameraBinderDecorator {
    ......
    static class CameraBinderDecoratorListener implements Decorator.DecoratorListener {

        @Override
        public void onBeforeInvocation(Method m, Object[] args) {
        }

        @Override
        public void onAfterInvocation(Method m, Object[] args, Object result) {
            // int return type => status_t => convert to exception
            if (m.getReturnType() == Integer.TYPE) {
                int returnValue = (Integer) result;
                throwOnError(returnValue);
            }
        }

        @Override
        public boolean onCatchException(Method m, Object[] args, Throwable t) {

            if (t instanceof DeadObjectException) {
                throw new CameraRuntimeException(CAMERA_DISCONNECTED,
                        "Process hosting the camera service has died unexpectedly",
                        t);
            } else if (t instanceof RemoteException) {
                throw new UnsupportedOperationException("An unknown RemoteException was thrown" +
                        " which should never happen.", t);
            }

            return false;
        }

        @Override
        public void onFinally(Method m, Object[] args) {
        }

    }    
    ......
    public static <T> T newInstance(T obj) {
        return Decorator.<T> newInstance(obj, new CameraBinderDecoratorListener());
    }
}

At this point the code is clear: Java's dynamic proxy mechanism is being used. In other words, the returned ICameraDeviceUser object is actually a dynamic proxy that forwards to the object referenced by remoteDevice. Why take this detour? A dynamic proxy lets extra behavior (here, converting error status codes and Binder failures into exceptions) be added without modifying the original object's implementation. A minimal sketch follows the Decorator source below.

frameworks/base/core/java/android/hardware/camera2/utils/Decorator.java

public class Decorator<T> implements InvocationHandler {
    ......
    @SuppressWarnings("unchecked")
    public static<T> T newInstance(T obj, DecoratorListener listener) {
        return (T)java.lang.reflect.Proxy.newProxyInstance(
                obj.getClass().getClassLoader(),
                obj.getClass().getInterfaces(),
                new Decorator<T>(obj, listener));
    }
    ......
}
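
To make the mechanism concrete, here is a minimal, self-contained sketch of java.lang.reflect.Proxy wrapping an interface so that extra checks run around every call, in the same spirit as Decorator plus CameraBinderDecoratorListener. The Greeter interface and all names are purely illustrative:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Illustrative only: a tiny interface standing in for ICameraDeviceUser.
interface Greeter {
    int greet(String name);
}

public final class DynamicProxyDemo {
    public static void main(String[] args) {
        Greeter real = name -> {
            System.out.println("Hello, " + name);
            return 0; // pretend status code, like a status_t
        };

        // Wrap the real object: the handler runs around every interface call,
        // which is where CameraBinderDecoratorListener converts int status codes
        // into exceptions after the invocation.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("before " + method.getName());
            Object result = method.invoke(real, methodArgs);
            if (method.getReturnType() == Integer.TYPE && (Integer) result != 0) {
                throw new IllegalStateException("error status: " + result);
            }
            return result;
        };

        Greeter decorated = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);

        decorated.greet("camera"); // prints "before greet" then "Hello, camera"
    }
}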

Next, when is CameraDeviceImpl.setRemoteDevice(…) called? If you recall the openCamera(…) flow, the assignment happens in CameraManager.openCameraDeviceUserAsync(…). From that analysis, the remote device wrapped by mRemoteDevice (behind the decorator proxy created above) is an ICameraDeviceUser.Stub.Proxy object.

Now we can continue with the createDefaultRequest(…) call flow. ICameraDeviceUser.Stub and ICameraDeviceUser.Stub.Proxy are generated from ICameraDeviceUser.aidl at build time; the Stub.Proxy marshals the arguments and issues a Binder transaction to the camera service. The matching native client-side marshaling is defined in BpCameraDeviceUser::createDefaultRequest(…), shown below, and the service side handles the transaction in BnCameraDeviceUser::onTransact(…).
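
The generated Java Stub.Proxy is not printed in this article, but its marshaling mirrors the native proxy below. Here is a heavily simplified, self-contained sketch of that pattern; the class name, transaction ordinal, and the Parcel-based out-parameter are illustrative assumptions, not the real generated code:

import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

// Simplified sketch of the Stub/Proxy marshaling that aidl generates.
// "DemoDeviceUserProxy" and its transaction ordinal are illustrative stand-ins,
// not the real ICameraDeviceUser.Stub.Proxy.
final class DemoDeviceUserProxy {
    // Interface token; must match what the service side checks (CHECK_INTERFACE).
    private static final String DESCRIPTOR = "android.hardware.camera2.ICameraDeviceUser";
    // aidl numbers transactions starting at FIRST_CALL_TRANSACTION; ordinal assumed here.
    private static final int TRANSACTION_createDefaultRequest = IBinder.FIRST_CALL_TRANSACTION;

    private final IBinder mRemote;

    DemoDeviceUserProxy(IBinder remote) {
        mRemote = remote;
    }

    /** Marshals templateId, transacts, then reads back the status and the out metadata. */
    int createDefaultRequest(int templateId, Parcel outMetadata) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeInterfaceToken(DESCRIPTOR);
            data.writeInt(templateId);
            // Crosses the process boundary; ends up in BnCameraDeviceUser::onTransact().
            mRemote.transact(TRANSACTION_createDefaultRequest, data, reply, 0);
            reply.readException();        // pairs with reply->writeNoException() on the Bn side
            int status = reply.readInt(); // status_t written by the service
            if (reply.readInt() != 0) {   // presence flag for the metadata object
                // The real proxy unmarshals a CameraMetadataNative here; this sketch
                // just copies the remaining reply bytes into the caller's Parcel.
                outMetadata.appendFrom(reply, reply.dataPosition(), reply.dataAvail());
            }
            return status;
        } finally {
            reply.recycle();
            data.recycle();
        }
    }
}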

frameworks/av/camera/camera2/ICameraDeviceUser.cpp

class BpCameraDeviceUser : public BpInterface<ICameraDeviceUser>
{
public:
    ......
    // Create a request object from a template.
    virtual status_t createDefaultRequest(int templateId,
                                          /*out*/
                                          CameraMetadata* request)
    {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraDeviceUser::getInterfaceDescriptor());
        data.writeInt32(templateId);
        remote()->transact(CREATE_DEFAULT_REQUEST, data, &reply);

        reply.readExceptionCode();
        status_t result = reply.readInt32();

        CameraMetadata out;
        if (reply.readInt32() != 0) {
            out.readFromParcel(&reply);
        }

        if (request != NULL) {
            request->swap(out);
        }
        return result;
    }
}

The transact() issued in BpCameraDeviceUser::createDefaultRequest(…) is received on the service side by BnCameraDeviceUser, which dispatches it to the method of the same name.

frameworks/av/include/camera/camera2/ICameraDeviceUser.h

class BnCameraDeviceUser: public BnInterface<ICameraDeviceUser>
{
public:
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
};

BnCameraDeviceUser::onTransact(…) receives the CREATE_DEFAULT_REQUEST transaction and handles it by calling createDefaultRequest(…), which is actually implemented by a subclass of BnCameraDeviceUser.

frameworks/av/camera/camera2/ICameraDeviceUser.cpp

status_t BnCameraDeviceUser::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        ......
        case CREATE_DEFAULT_REQUEST: {
            CHECK_INTERFACE(ICameraDeviceUser, data, reply);

            int templateId = data.readInt32();

            CameraMetadata request;
            status_t ret;
            ret = createDefaultRequest(templateId, &request);

            reply->writeNoException();
            reply->writeInt32(ret);

            // out-variables are after exception and return value
            reply->writeInt32(1); // to mark presence of metadata object
            request.writeToParcel(const_cast<Parcel*>(reply));

            return NO_ERROR;
        } break;
        ......
   }
}

The inheritance chain is as follows.

CameraDeviceClient -> Camera2ClientBase -> CameraDeviceClientBase -> BnCameraDeviceUser

mDevice is initialized in the Camera2ClientBase constructor. Based on the openCamera flow, we can assume mDevice points to a Camera3Device object.

frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp

// Create a request object from a template.
status_t CameraDeviceClient::createDefaultRequest(int templateId,
                                                  /*out*/
                                                  CameraMetadata* request)
{
    ATRACE_CALL();
    ALOGV("%s (templateId = 0x%x)", __FUNCTION__, templateId);

    status_t res;
    if ( (res = checkPid(__FUNCTION__) ) != OK) return res;

    Mutex::Autolock icl(mBinderSerializationLock);

    if (!mDevice.get()) return DEAD_OBJECT;

    CameraMetadata metadata;
    if ( (res = mDevice->createDefaultRequest(templateId, &metadata) ) == OK &&
        request != NULL) {

        request->swap(metadata);
    }

    return res;
}

Camera3Device::createDefaultRequest(…) ultimately calls the construct_default_request_settings function pointer in the camera3_device_ops_t structure referenced by the ops field of camera3_device_t (which the vendor must implement).

The contract of this function pointer is defined as follows:

Create capture settings for standard camera use cases.

The device must return a settings buffer configured to meet the requested use case, which must be one of the CAMERA3_TEMPLATE_* enums. All request control fields must be included.

The HAL retains ownership of this structure, but the pointer to the structure must remain valid until the device is closed. The framework and the HAL may not modify the buffer once it is returned by this call, and the same buffer may be returned for subsequent calls for the same template, or for other templates.

Performance requirements:

This should be a non-blocking call. The HAL should return from this call in 1 ms, and must return from this call in 5 ms.

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::createDefaultRequest(int templateId,
        CameraMetadata *request) {
    ATRACE_CALL();
    ALOGV("%s: for template %d", __FUNCTION__, templateId);
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);

    switch (mStatus) {
        case STATUS_ERROR:
            CLOGE("Device has encountered a serious error");
            return INVALID_OPERATION;
        case STATUS_UNINITIALIZED:
            CLOGE("Device is not initialized!");
            return INVALID_OPERATION;
        case STATUS_UNCONFIGURED:
        case STATUS_CONFIGURED:
        case STATUS_ACTIVE:
            // OK
            break;
        default:
            SET_ERR_L("Unexpected status: %d", mStatus);
            return INVALID_OPERATION;
    }

    if (!mRequestTemplateCache[templateId].isEmpty()) {
        *request = mRequestTemplateCache[templateId];
        return OK;
    }

    const camera_metadata_t *rawRequest;
    ATRACE_BEGIN("camera3->construct_default_request_settings");
    rawRequest = mHal3Device->ops->construct_default_request_settings(
        mHal3Device, templateId);
    ATRACE_END();
    if (rawRequest == NULL) {
        ALOGI("%s: template %d is not supported on this camera device",
              __FUNCTION__, templateId);
        return BAD_VALUE;
    }
    *request = rawRequest;
    mRequestTemplateCache[templateId] = rawRequest;

    return OK;
}

The step of setting the Surface on the CaptureRequest.Builder adds a Surface to the request's target list (a HashSet). When the request is issued to the camera device, the added Surface must be one of the Surfaces included in the most recent call to CameraDevice#createCaptureSession.

frameworks/base/core/java/android/hardware/camera2/CaptureRequest.java

public final class CaptureRequest extends CameraMetadata<CaptureRequest.Key<?>>
        implements Parcelable {
    ......
    private final HashSet<Surface> mSurfaceSet;    
    ......
    private CaptureRequest(CameraMetadataNative settings, boolean isReprocess,
            int reprocessableSessionId) {
        mSettings = CameraMetadataNative.move(settings);
        mSurfaceSet = new HashSet<Surface>();
        mIsReprocess = isReprocess;
        if (isReprocess) {
            if (reprocessableSessionId == CameraCaptureSession.SESSION_ID_NONE) {
                throw new IllegalArgumentException("Create a reprocess capture request with an " +
                        "invalid session ID: " + reprocessableSessionId);
            }
            mReprocessableSessionId = reprocessableSessionId;
        } else {
            mReprocessableSessionId = CameraCaptureSession.SESSION_ID_NONE;
        }
    }
    ......
    public final static class Builder {

        private final CaptureRequest mRequest;  
        
        public Builder(CameraMetadataNative template, boolean reprocess,
                int reprocessableSessionId) {
            mRequest = new CaptureRequest(template, reprocess, reprocessableSessionId);
        }        
        
        public void addTarget(@NonNull Surface outputTarget) {
            mRequest.mSurfaceSet.add(outputTarget);
        }     
        ......
    }
    ......
}

Create the CameraCaptureSession for camera preview:

  1. Wrap each Surface in an OutputConfiguration (a class describing a camera output, holding a Surface and its specific configuration for creating a capture session)
  2. Call the internal method createCaptureSessionInternal(…)

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

public class CameraDeviceImpl extends CameraDevice {
    ......
    @Override
    public void createCaptureSession(List<Surface> outputs,
            CameraCaptureSession.StateCallback callback, Handler handler)
            throws CameraAccessException {
        List<OutputConfiguration> outConfigurations = new ArrayList<>(outputs.size());
        for (Surface surface : outputs) {
            outConfigurations.add(new OutputConfiguration(surface));
        }
        createCaptureSessionInternal(null, outConfigurations, callback, handler,
                /*isConstrainedHighSpeed*/false);
    }    
    ......
}

Core steps:

  1. Call configureStreamsChecked(…) to configure the streams, then block until the device is IDLE
  2. Create the CameraCaptureSessionImpl object

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

public class CameraDeviceImpl extends CameraDevice {
    ......
    private void createCaptureSessionInternal(InputConfiguration inputConfig,
            List<OutputConfiguration> outputConfigurations,
            CameraCaptureSession.StateCallback callback, Handler handler,
            boolean isConstrainedHighSpeed) throws CameraAccessException {
        synchronized(mInterfaceLock) {
            ......
            // TODO: dont block for this
            boolean configureSuccess = true;
            CameraAccessException pendingException = null;
            Surface input = null;
            try {
                // Configure the streams, then block until the device is idle (IDLE)
                configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations,
                        isConstrainedHighSpeed);
                if (configureSuccess == true && inputConfig != null) {
                    input = new Surface();
                    try {
                        mRemoteDevice.getInputSurface(/*out*/input);
                    } catch (CameraRuntimeException e) {
                        e.asChecked();
                    }
                }
            } catch (CameraAccessException e) {
                configureSuccess = false;
                pendingException = e;
                input = null;
                if (DEBUG) {
                    Log.v(TAG, "createCaptureSession - failed with exception ", e);
                }
            } catch (RemoteException e) {
                // impossible
                return;
            }

            List<Surface> outSurfaces = new ArrayList<>(outputConfigurations.size());
            for (OutputConfiguration config : outputConfigurations) {
                outSurfaces.add(config.getSurface());
            }
            // Fire onConfigured if configureOutputs succeeded, otherwise fire onConfigureFailed.
            CameraCaptureSessionCore newSession = null;
            if (isConstrainedHighSpeed) {
                newSession = new CameraConstrainedHighSpeedCaptureSessionImpl(mNextSessionId++,
                        outSurfaces, callback, handler, this, mDeviceHandler, configureSuccess,
                        mCharacteristics);
            } else {
                // This is the branch taken here
                newSession = new CameraCaptureSessionImpl(mNextSessionId++, input,
                        outSurfaces, callback, handler, this, mDeviceHandler,
                        configureSuccess);
            }

            // TODO: wait until current session closes, then create the new session
            mCurrentSession = newSession;

            if (pendingException != null) {
                throw pendingException;
            }

            mSessionStateCallback = mCurrentSession.getDeviceStateCallback();
        }
    }    
    ......
}

configureStreamsChecked(…) attempts to configure the inputs and outputs; the device goes to an idle state, and then, where possible, the new inputs and outputs are configured.

To add all the new streams, the outputs list is traversed and, for each new entry, a cross-process call lands in CameraDeviceClient::createStream(…).

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

public class CameraDeviceImpl extends CameraDevice {
    public boolean configureStreamsChecked(InputConfiguration inputConfig,
            List<OutputConfiguration> outputs, boolean isConstrainedHighSpeed)
                    throws CameraAccessException {
        // Treat a null input the same an empty list
        if (outputs == null) {
            outputs = new ArrayList<OutputConfiguration>();
        }
        if (outputs.size() == 0 && inputConfig != null) {
            throw new IllegalArgumentException("cannot configure an input stream without " +
                    "any output streams");
        }

        checkInputConfiguration(inputConfig);

        boolean success = false;

        synchronized(mInterfaceLock) {
            checkIfCameraClosedOrInError();
            // Streams to create
            HashSet<OutputConfiguration> addSet = new HashSet<OutputConfiguration>(outputs);
            // Streams to delete
            List<Integer> deleteList = new ArrayList<Integer>();

            // Determine which streams need to be created and which need to be deleted
            for (int i = 0; i < mConfiguredOutputs.size(); ++i) {
                int streamId = mConfiguredOutputs.keyAt(i);
                OutputConfiguration outConfig = mConfiguredOutputs.valueAt(i);

                if (!outputs.contains(outConfig)) {
                    deleteList.add(streamId);
                } else {
                    addSet.remove(outConfig);  // Don't create a stream previously created
                }
            }

            mDeviceHandler.post(mCallOnBusy);
            stopRepeating();

            try {
                waitUntilIdle();

                mRemoteDevice.beginConfigure();

                // Reconfigure the input stream if the input configuration is different
                InputConfiguration currentInputConfig = mConfiguredInput.getValue();
                if (inputConfig != currentInputConfig &&
                        (inputConfig == null || !inputConfig.equals(currentInputConfig))) {
                    if (currentInputConfig != null) {
                        mRemoteDevice.deleteStream(mConfiguredInput.getKey());
                        mConfiguredInput = new SimpleEntry<Integer, InputConfiguration>(
                                REQUEST_ID_NONE, null);
                    }
                    if (inputConfig != null) {
                        int streamId = mRemoteDevice.createInputStream(inputConfig.getWidth(),
                                inputConfig.getHeight(), inputConfig.getFormat());
                        mConfiguredInput = new SimpleEntry<Integer, InputConfiguration>(
                                streamId, inputConfig);
                    }
                }

                // Delete all streams first (to free up HW resources)
                for (Integer streamId : deleteList) {
                    mRemoteDevice.deleteStream(streamId);
                    mConfiguredOutputs.delete(streamId);
                }

                // Add all new streams
                for (OutputConfiguration outConfig : outputs) {
                    if (addSet.contains(outConfig)) {
                        int streamId = mRemoteDevice.createStream(outConfig);
                        mConfiguredOutputs.put(streamId, outConfig);
                    }
                }

                try {
                    mRemoteDevice.endConfigure(isConstrainedHighSpeed);
                }
                catch (IllegalArgumentException e) {
                    // OK. camera service can reject stream config if it's not supported by HAL
                    // This is only the result of a programmer misusing the camera2 api.
                    Log.w(TAG, "Stream configuration failed");
                    return false;
                }

                success = true;
            } catch (CameraRuntimeException e) {
                if (e.getReason() == CAMERA_IN_USE) {
                    throw new IllegalStateException("The camera is currently busy." +
                            " You must wait until the previous operation completes.");
                }

                throw e.asChecked();
            } catch (RemoteException e) {
                // impossible
                return false;
            } finally {
                if (success && outputs.size() > 0) {
                    mDeviceHandler.post(mCallOnIdle);
                } else {
                    // Always return to the 'unconfigured' state if we didn't hit a fatal error
                    mDeviceHandler.post(mCallOnUnconfigured);
                }
            }
        }

        return success;
    }    
}

Core steps:

  1. Create a native Surface
  2. Call Camera3Device::createStream(…) to create the stream

frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp

status_t CameraDeviceClient::createStream(const OutputConfiguration &outputConfiguration)
{
    ATRACE_CALL();

    status_t res;
    if ( (res = checkPid(__FUNCTION__) ) != OK) return res;

    Mutex::Autolock icl(mBinderSerializationLock);

    // Get the IGraphicBufferProducer
    sp<IGraphicBufferProducer> bufferProducer = outputConfiguration.getGraphicBufferProducer();
    if (bufferProducer == NULL) {
        ALOGE("%s: bufferProducer must not be null", __FUNCTION__);
        return BAD_VALUE;
    }
    if (!mDevice.get()) return DEAD_OBJECT;

    // Don't create multiple streams for the same target surface
    {
        ssize_t index = mStreamMap.indexOfKey(IInterface::asBinder(bufferProducer));
        if (index != NAME_NOT_FOUND) {
            ALOGW("%s: Camera %d: Buffer producer already has a stream for it "
                  "(ID %zd)",
                  __FUNCTION__, mCameraId, index);
            return ALREADY_EXISTS;
        }
    }

    // HACK b/10949105
    // Query consumer usage bits to set the asynchronous operation mode for the
    // GLConsumer via the controlledByApp parameter
    bool useAsync = false;
    int32_t consumerUsage;
    if ((res = bufferProducer->query(NATIVE_WINDOW_CONSUMER_USAGE_BITS,
            &consumerUsage)) != OK) {
        ALOGE("%s: Camera %d: Failed to query consumer usage", __FUNCTION__,
              mCameraId);
        return res;
    }
    if (consumerUsage & GraphicBuffer::USAGE_HW_TEXTURE) {
        ALOGW("%s: Camera %d: Forcing asynchronous mode for stream",
                __FUNCTION__, mCameraId);
        useAsync = true;
    }

    int32_t disallowedFlags = GraphicBuffer::USAGE_HW_VIDEO_ENCODER |
                              GRALLOC_USAGE_RENDERSCRIPT;
    int32_t allowedFlags = GraphicBuffer::USAGE_SW_READ_MASK |
                           GraphicBuffer::USAGE_HW_TEXTURE |
                           GraphicBuffer::USAGE_HW_COMPOSER;
    bool flexibleConsumer = (consumerUsage & disallowedFlags) == 0 &&
            (consumerUsage & allowedFlags) != 0;

    sp<IBinder> binder = IInterface::asBinder(bufferProducer);
    sp<Surface> surface = new Surface(bufferProducer, useAsync);
    ANativeWindow *anw = surface.get();

    int width, height, format;
    android_dataspace dataSpace;

    if ((res = anw->query(anw, NATIVE_WINDOW_WIDTH, &width)) != OK) {
        ALOGE("%s: Camera %d: Failed to query Surface width", __FUNCTION__,
              mCameraId);
        return res;
    }
    if ((res = anw->query(anw, NATIVE_WINDOW_HEIGHT, &height)) != OK) {
        ALOGE("%s: Camera %d: Failed to query Surface height", __FUNCTION__,
              mCameraId);
        return res;
    }
    if ((res = anw->query(anw, NATIVE_WINDOW_FORMAT, &format)) != OK) {
        ALOGE("%s: Camera %d: Failed to query Surface format", __FUNCTION__,
              mCameraId);
        return res;
    }
    if ((res = anw->query(anw, NATIVE_WINDOW_DEFAULT_DATASPACE,
                            reinterpret_cast<int*>(&dataSpace))) != OK) {
        ALOGE("%s: Camera %d: Failed to query Surface dataSpace", __FUNCTION__,
              mCameraId);
        return res;
    }

    // FIXME: remove this override since the default format should be
    //       IMPLEMENTATION_DEFINED. b/9487482
    if (format >= HAL_PIXEL_FORMAT_RGBA_8888 &&
        format <= HAL_PIXEL_FORMAT_BGRA_8888) {
        ALOGW("%s: Camera %d: Overriding format %#x to IMPLEMENTATION_DEFINED",
              __FUNCTION__, mCameraId, format);
        format = HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED;
    }

    // Round dimensions to the nearest dimensions available for this format
    if (flexibleConsumer && !CameraDeviceClient::roundBufferDimensionNearest(width, height,
            format, dataSpace, mDevice->info(), /*out*/&width, /*out*/&height)) {
        ALOGE("%s: No stream configurations with the format %#x defined, failed to create stream.",
                __FUNCTION__, format);
        return BAD_VALUE;
    }

    int streamId = -1;
    res = mDevice->createStream(surface, width, height, format, dataSpace,
                                static_cast<camera3_stream_rotation_t>
                                        (outputConfiguration.getRotation()),
                                &streamId);

    if (res == OK) {
        mStreamMap.add(binder, streamId);

        ALOGV("%s: Camera %d: Successfully created a new stream ID %d",
              __FUNCTION__, mCameraId, streamId);

        /**
         * Set the stream transform flags to automatically rotate the camera stream for preview use cases.
         */
        int32_t transform = 0;
        res = getRotationTransformLocked(&transform);

        if (res != OK) {
            // Error logged by getRotationTransformLocked.
            return res;
        }

        res = mDevice->setStreamTransform(streamId, transform);
        if (res != OK) {
            ALOGE("%s: Failed to set stream transform (stream id %d)",
                  __FUNCTION__, streamId);
            return res;
        }

        return streamId;
    }

    return res;
}

Now let's focus on Camera3Device::createStream(…). Here a Camera3OutputStream object (which manages a single output data stream from the camera device) is created and added to the KeyedVector referenced by mOutputStreams.

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::createStream(sp<Surface> consumer,
        uint32_t width, uint32_t height, int format, android_dataspace dataSpace,
        camera3_stream_rotation_t rotation, int *id) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);
    ALOGV("Camera %d: Creating new stream %d: %d x %d, format %d, dataspace %d rotation %d",
            mId, mNextStreamId, width, height, format, dataSpace, rotation);

    status_t res;
    bool wasActive = false;

    switch (mStatus) {
        case STATUS_ERROR:
            CLOGE("Device has encountered a serious error");
            return INVALID_OPERATION;
        case STATUS_UNINITIALIZED:
            CLOGE("Device not initialized");
            return INVALID_OPERATION;
        case STATUS_UNCONFIGURED:
        case STATUS_CONFIGURED:
            // OK
            break;
        case STATUS_ACTIVE:
            ALOGV("%s: Stopping activity to reconfigure streams", __FUNCTION__);
            res = internalPauseAndWaitLocked();
            if (res != OK) {
                SET_ERR_L("Can't pause captures to reconfigure streams!");
                return res;
            }
            wasActive = true;
            break;
        default:
            SET_ERR_L("Unexpected status: %d", mStatus);
            return INVALID_OPERATION;
    }
    assert(mStatus != STATUS_ACTIVE);

    sp<Camera3OutputStream> newStream;
    if (format == HAL_PIXEL_FORMAT_BLOB) {
        ssize_t blobBufferSize;
        if (dataSpace != HAL_DATASPACE_DEPTH) {
            blobBufferSize = getJpegBufferSize(width, height);
            if (blobBufferSize <= 0) {
                SET_ERR_L("Invalid jpeg buffer size %zd", blobBufferSize);
                return BAD_VALUE;
            }
        } else {
            blobBufferSize = getPointCloudBufferSize();
            if (blobBufferSize <= 0) {
                SET_ERR_L("Invalid point cloud buffer size %zd", blobBufferSize);
                return BAD_VALUE;
            }
        }
        newStream = new Camera3OutputStream(mNextStreamId, consumer,
                width, height, blobBufferSize, format, dataSpace, rotation);
    } else {
        newStream = new Camera3OutputStream(mNextStreamId, consumer,
                width, height, format, dataSpace, rotation);
    }
    newStream->setStatusTracker(mStatusTracker);

    res = mOutputStreams.add(mNextStreamId, newStream);
    if (res < 0) {
        SET_ERR_L("Can't add new stream to set: %s (%d)", strerror(-res), res);
        return res;
    }

    *id = mNextStreamId++;
    mNeedConfig = true;

    // Continue captures if active at start
    if (wasActive) {
        ALOGV("%s: Restarting activity to reconfigure streams", __FUNCTION__);
        res = configureStreamsLocked();
        if (res != OK) {
            CLOGE("Can't reconfigure device for new stream %d: %s (%d)",
                    mNextStreamId, strerror(-res), res);
            return res;
        }
        internalResumeLocked();
    }
    ALOGV("Camera %d: Created new stream", mId);
    return OK;
}

Next, let's look at the creation of the CameraCaptureSessionImpl object.

This creates a new CameraCaptureSession. When it is invoked, the camera device must already be in the IDLE state, with no pending operations (e.g. no pending captures, no repeating requests, no flush).

frameworks/base/core/java/android/hardware/camera2/impl/CameraCaptureSessionImpl.java

public class CameraCaptureSessionImpl extends CameraCaptureSession
        implements CameraCaptureSessionCore {
    ......
    CameraCaptureSessionImpl(int id, Surface input, List<Surface> outputs,
            CameraCaptureSession.StateCallback callback, Handler stateHandler,
            android.hardware.camera2.impl.CameraDeviceImpl deviceImpl,
            Handler deviceStateHandler, boolean configureSuccess) {
        if (outputs == null || outputs.isEmpty()) {
            throw new IllegalArgumentException("outputs must be a non-null, non-empty list");
        } else if (callback == null) {
            throw new IllegalArgumentException("callback must not be null");
        }

        mId = id;
        mIdString = String.format("Session %d: ", mId);

        // TODO: extra verification of outputs
        mOutputs = outputs;
        mInput = input;
        mStateHandler = checkHandler(stateHandler);
        mStateCallback = createUserStateCallbackProxy(mStateHandler, callback);

        mDeviceHandler = checkNotNull(deviceStateHandler, "deviceStateHandler must not be null");
        mDeviceImpl = checkNotNull(deviceImpl, "deviceImpl must not be null");

        /*
         * Use the same handler as the device's StateCallback for all the
         * internal coming events.
         *
         * This ensures total ordering between CameraDevice.StateCallback and
         * CameraDeviceImpl.CaptureCallback events.
         */
        mSequenceDrainer = new TaskDrainer<>(mDeviceHandler, new SequenceDrainListener(),
                /*name*/"seq");
        mIdleDrainer = new TaskSingleDrainer(mDeviceHandler, new IdleDrainListener(),
                /*name*/"idle");
        mAbortDrainer = new TaskSingleDrainer(mDeviceHandler, new AbortDrainListener(),
                /*name*/"abort");

        // CameraDevice should call configureOutputs and have it finish before constructing us

        if (configureSuccess) {
            mStateCallback.onConfigured(this);
            if (DEBUG) Log.v(TAG, mIdString + "Created session successfully");
            mConfigureSuccess = true;
        } else {
            mStateCallback.onConfigureFailed(this);
            mClosed = true; // do not fire any other callbacks, do not allow any other work
            Log.e(TAG, mIdString + "Failed to create capture session; configuration failed");
            mConfigureSuccess = false;
        }
    }
    ......
}

After the CameraCaptureSession has been created successfully, auto-focus is set on the CaptureRequest.Builder and the flash is enabled when necessary; that step is skipped here. Now let's analyze the last step, displaying the camera preview: the CameraCaptureSession sets a repeating request for preview data. CameraCaptureSession is an abstract class, so the call is actually handled by the implementation class, CameraCaptureSessionImpl.setRepeatingRequest(…).

The key part of this method is the final return statement: it calls CameraDeviceImpl.setRepeatingRequest(…) and registers the returned sequence ID in the pending-sequence queue.

frameworks/base/core/java/android/hardware/camera2/impl/CameraCaptureSessionImpl.java

public class CameraCaptureSessionImpl extends CameraCaptureSession
        implements CameraCaptureSessionCore {
    ......
    @Override
    public synchronized int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
            Handler handler) throws CameraAccessException {
        if (request == null) {
            throw new IllegalArgumentException("request must not be null");
        } else if (request.isReprocess()) {
            throw new IllegalArgumentException("repeating reprocess requests are not supported");
        }

        checkNotClosed();

        handler = checkHandler(handler, callback);

        if (DEBUG) {
            Log.v(TAG, mIdString + "setRepeatingRequest - request " + request + ", callback " +
                    callback + " handler" + " " + handler);
        }

        return addPendingSequence(mDeviceImpl.setRepeatingRequest(request,
                createCaptureCallbackProxy(handler, callback), mDeviceHandler));
    }
    ......
}

CameraDeviceImpl.setRepeatingRequest(…) simply wraps the incoming CaptureRequest in a List and then calls submitCaptureRequest(…) to handle it.

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

public class CameraDeviceImpl extends CameraDevice {
    ......
    public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
            Handler handler) throws CameraAccessException {
        List<CaptureRequest> requestList = new ArrayList<CaptureRequest>();
        requestList.add(request);
        return submitCaptureRequest(requestList, callback, handler, /*streaming*/true);
    }    
    ......
}

CameraDeviceImpl.submitCaptureRequest(…) mainly calls the remote method submitRequestList(…) to submit the request list.

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

public class CameraDeviceImpl extends CameraDevice {
    ......
    private int submitCaptureRequest(List<CaptureRequest> requestList, CaptureCallback callback,
            Handler handler, boolean repeating) throws CameraAccessException {

        // Need a valid handler, or the current thread needs to have a looper, if there is a callback
        handler = checkHandler(handler, callback);

        // Make sure all requests have at least one Surface target and that no Surface is null
        for (CaptureRequest request : requestList) {
            if (request.getTargets().isEmpty()) {
                throw new IllegalArgumentException(
                        "Each request must have at least one Surface target");
            }

            for (Surface surface : request.getTargets()) {
                if (surface == null) {
                    throw new IllegalArgumentException("Null Surface targets are not allowed");
                }
            }
        }

        synchronized(mInterfaceLock) {
            checkIfCameraClosedOrInError();
            int requestId;

            if (repeating) {
                stopRepeating();
            }

            LongParcelable lastFrameNumberRef = new LongParcelable();
            try {
                requestId = mRemoteDevice.submitRequestList(requestList, repeating,
                        /*out*/lastFrameNumberRef);
                if (DEBUG) {
                    Log.v(TAG, "last frame number " + lastFrameNumberRef.getNumber());
                }
            } catch (CameraRuntimeException e) {
                throw e.asChecked();
            } catch (RemoteException e) {
                // impossible
                return -1;
            }

            if (callback != null) {
                mCaptureCallbackMap.put(requestId, new CaptureCallbackHolder(callback,
                        requestList, handler, repeating, mNextSessionId - 1));
            } else {
                if (DEBUG) {
                    Log.d(TAG, "Listen for request " + requestId + " is null");
                }
            }

            long lastFrameNumber = lastFrameNumberRef.getNumber();

            if (repeating) {
                if (mRepeatingRequestId != REQUEST_ID_NONE) {
                    checkEarlyTriggerSequenceComplete(mRepeatingRequestId, lastFrameNumber);
                }
                mRepeatingRequestId = requestId;
            } else {
                mRequestLastFrameNumbersList.add(new RequestLastFrameNumbersHolder(requestList,
                        requestId, lastFrameNumber));
            }

            if (mIdle) {
                mDeviceHandler.post(mCallOnActive);
            }
            mIdle = false;

            return requestId;
        }
    }  
    ......
}

The call above eventually reaches CameraDeviceClient::submitRequestList(…). The key points here are updating ANDROID_REQUEST_OUTPUT_STREAMS in CameraMetadata (a convenience wrapper around the C-based camera_metadata_t library) and the call to setStreamingRequestList(…).

frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp

status_t CameraDeviceClient::submitRequestList(List<sp<CaptureRequest> > requests,
                                               bool streaming, int64_t* lastFrameNumber) {
    ATRACE_CALL();
    ALOGV("%s-start of function. Request list size %zu", __FUNCTION__, requests.size());

    status_t res;
    if ( (res = checkPid(__FUNCTION__) ) != OK) return res;

    Mutex::Autolock icl(mBinderSerializationLock);

    if (!mDevice.get()) return DEAD_OBJECT;

    if (requests.empty()) {
        ALOGE("%s: Camera %d: Sent null request. Rejecting request.",
              __FUNCTION__, mCameraId);
        return BAD_VALUE;
    }

    List<const CameraMetadata> metadataRequestList;
    int32_t requestId = mRequestIdCounter;
    uint32_t loopCounter = 0;

    for (List<sp<CaptureRequest> >::iterator it = requests.begin(); it != requests.end(); ++it) {
        sp<CaptureRequest> request = *it;
        if (request == 0) {
            ALOGE("%s: Camera %d: Sent null request.",
                    __FUNCTION__, mCameraId);
            return BAD_VALUE;
        } else if (request->mIsReprocess) {
            if (!mInputStream.configured) {
                ALOGE("%s: Camera %d: no input stream is configured.", __FUNCTION__, mCameraId);
                return BAD_VALUE;
            } else if (streaming) {
                ALOGE("%s: Camera %d: streaming reprocess requests not supported.", __FUNCTION__,
                        mCameraId);
                return BAD_VALUE;
            }
        }

        CameraMetadata metadata(request->mMetadata);
        if (metadata.isEmpty()) {
            ALOGE("%s: Camera %d: Sent empty metadata packet. Rejecting request.",
                   __FUNCTION__, mCameraId);
            return BAD_VALUE;
        } else if (request->mSurfaceList.isEmpty()) {
            ALOGE("%s: Camera %d: Requests must have at least one surface target. "
                  "Rejecting request.", __FUNCTION__, mCameraId);
            return BAD_VALUE;
        }

        if (!enforceRequestPermissions(metadata)) {
            // Callee logs
            return PERMISSION_DENIED;
        }

        /**
         * Write in the output stream IDs, which we calculate from the capture request's list of surface targets
         */
        Vector<int32_t> outputStreamIds;
        outputStreamIds.setCapacity(request->mSurfaceList.size());
        for (size_t i = 0; i < request->mSurfaceList.size(); ++i) {
            sp<Surface> surface = request->mSurfaceList[i];
            if (surface == 0) continue;

            sp<IGraphicBufferProducer> gbp = surface->getIGraphicBufferProducer();
            int idx = mStreamMap.indexOfKey(IInterface::asBinder(gbp));

            // The request references a surface for which createStream was never called
            if (idx == NAME_NOT_FOUND) {
                ALOGE("%s: Camera %d: Tried to submit a request with a surface that"
                      " we have not called createStream on",
                      __FUNCTION__, mCameraId);
                return BAD_VALUE;
            }

            int streamId = mStreamMap.valueAt(idx);
            outputStreamIds.push_back(streamId);
            ALOGV("%s: Camera %d: Appending output stream %d to request",
                  __FUNCTION__, mCameraId, streamId);
        }

        metadata.update(ANDROID_REQUEST_OUTPUT_STREAMS, &outputStreamIds[0],
                        outputStreamIds.size());

        if (request->mIsReprocess) {
            metadata.update(ANDROID_REQUEST_INPUT_STREAMS, &mInputStream.id, 1);
        }

        metadata.update(ANDROID_REQUEST_ID, &requestId, /*size*/1);
        loopCounter++; // loopCounter starts from 1
        ALOGV("%s: Camera %d: Creating request with ID %d (%d of %zu)",
              __FUNCTION__, mCameraId, requestId, loopCounter, requests.size());

        metadataRequestList.push_back(metadata);
    }
    mRequestIdCounter++;

    if (streaming) {
        res = mDevice->setStreamingRequestList(metadataRequestList, lastFrameNumber);
        if (res != OK) {
            ALOGE("%s: Camera %d:  Got error %d after trying to set streaming "
                  "request", __FUNCTION__, mCameraId, res);
        } else {
            mStreamingRequestList.push_back(requestId);
        }
    } else {
        res = mDevice->captureList(metadataRequestList, lastFrameNumber);
        if (res != OK) {
            ALOGE("%s: Camera %d: Got error %d after trying to set capture",
                __FUNCTION__, mCameraId, res);
        }
        ALOGV("%s: requestId = %d ", __FUNCTION__, requestId);
    }

    ALOGV("%s: Camera %d: End of function", __FUNCTION__, mCameraId);
    if (res == OK) {
        return requestId;
    }

    return res;
}

This updates a metadata entry. If the entry does not yet exist it is created, and if there is not enough space the buffer is reallocated.

frameworks/av/camera/CameraMetadata.cpp

status_t CameraMetadata::update(uint32_t tag,
        const int32_t *data, size_t data_count) {
    status_t res;
    if (mLocked) {
        ALOGE("%s: CameraMetadata is locked", __FUNCTION__);
        return INVALID_OPERATION;
    }
    if ( (res = checkType(tag, TYPE_INT32)) != OK) {
        return res;
    }
    return updateImpl(tag, (const void*)data, data_count);
}

mBuffer is a camera_metadata_t* pointer that holds the various metadata entries. add_camera_metadata_entry and update_camera_metadata_entry are the functions that add and update metadata entries, respectively.

frameworks/av/camera/CameraMetadata.cpp

status_t CameraMetadata::updateImpl(uint32_t tag, const void *data,
        size_t data_count) {
    status_t res;
    if (mLocked) {
        ALOGE("%s: CameraMetadata is locked", __FUNCTION__);
        return INVALID_OPERATION;
    }
    int type = get_camera_metadata_tag_type(tag);
    if (type == -1) {
        ALOGE("%s: Tag %d not found", __FUNCTION__, tag);
        return BAD_VALUE;
    }
    // Safety check - ensure that data isn't pointing into this metadata, since
    // it will be invalidated if a resize is needed
    size_t bufferSize = get_camera_metadata_size(mBuffer);
    uintptr_t bufAddr = reinterpret_cast<uintptr_t>(mBuffer);
    uintptr_t dataAddr = reinterpret_cast<uintptr_t>(data);
    if (dataAddr > bufAddr && dataAddr < (bufAddr + bufferSize)) {
        ALOGE("%s: Update attempted with data from the same metadata buffer!",
                __FUNCTION__);
        return INVALID_OPERATION;
    }

    size_t data_size = calculate_camera_metadata_entry_data_size(type,
            data_count);

    res = resizeIfNeeded(1, data_size);

    if (res == OK) {
        camera_metadata_entry_t entry;
        res = find_camera_metadata_entry(mBuffer, tag, &entry);
        if (res == NAME_NOT_FOUND) {
            res = add_camera_metadata_entry(mBuffer,
                    tag, data, data_count);
        } else if (res == OK) {
            res = update_camera_metadata_entry(mBuffer,
                    entry.index, data, data_count, NULL);
        }
    }

    if (res != OK) {
        ALOGE("%s: Unable to update metadata entry %s.%s (%x): %s (%d)",
                __FUNCTION__, get_camera_metadata_section_name(tag),
                get_camera_metadata_tag_name(tag), tag, strerror(-res), res);
    }

    IF_ALOGV() {
        ALOGE_IF(validate_camera_metadata_structure(mBuffer, /*size*/NULL) !=
                 OK,

                 "%s: Failed to validate metadata structure after update %p",
                 __FUNCTION__, mBuffer);
    }

    return res;
}

The actual work of Camera3Device::setStreamingRequestList(…) is done by submitRequestsHelper(…).

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::setStreamingRequestList(const List<const CameraMetadata> &requests,
                                                int64_t *lastFrameNumber) {
    ATRACE_CALL();

    return submitRequestsHelper(requests, /*repeating*/true, lastFrameNumber);
}

The key point of submitRequestsHelper(…) is setting the repeating requests on the request thread. Converting the metadata List into a RequestList is of course also an important step.

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::submitRequestsHelper(
        const List<const CameraMetadata> &requests, bool repeating,
        /*out*/
        int64_t *lastFrameNumber) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);

    status_t res = checkStatusOkToCaptureLocked();
    if (res != OK) {
        // error logged by previous call
        return res;
    }

    RequestList requestList;

    res = convertMetadataListToRequestListLocked(requests, /*out*/&requestList);
    if (res != OK) {
        // error logged by previous call
        return res;
    }

    if (repeating) {
        res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
    } else {
        res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
    }

    if (res == OK) {
        waitUntilStateThenRelock(/*active*/true, kActiveTimeout);
        if (res != OK) {
            SET_ERR_L("Can't transition to active in %f seconds!",
                    kActiveTimeout/1e9);
        }
        ALOGV("Camera %d: Capture request %" PRId32 " enqueued", mId,
              (*(requestList.begin()))->mResultExtras.requestId);
    } else {
        CLOGE("Cannot queue request. Impossible.");
        return BAD_VALUE;
    }

    return res;
}

Let's first analyze the conversion function convertMetadataListToRequestListLocked(…). It iterates over metadataList and passes each element to setUpRequestLocked(…), which returns a CaptureRequest object.

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::convertMetadataListToRequestListLocked(
        const List<const CameraMetadata> &metadataList, RequestList *requestList) {
    if (requestList == NULL) {
        CLOGE("requestList cannot be NULL.");
        return BAD_VALUE;
    }

    int32_t burstId = 0;
    for (List<const CameraMetadata>::const_iterator it = metadataList.begin();
            it != metadataList.end(); ++it) {
        sp<CaptureRequest> newRequest = setUpRequestLocked(*it);
        if (newRequest == 0) {
            CLOGE("Can't create capture request");
            return BAD_VALUE;
        }

        // Set up the burst ID and request ID
        newRequest->mResultExtras.burstId = burstId++;
        if (it->exists(ANDROID_REQUEST_ID)) {
            if (it->find(ANDROID_REQUEST_ID).count == 0) {
                CLOGE("RequestID entry exists; but must not be empty in metadata");
                return BAD_VALUE;
            }
            newRequest->mResultExtras.requestId = it->find(ANDROID_REQUEST_ID).data.i32[0];
        } else {
            CLOGE("RequestID does not exist in metadata");
            return BAD_VALUE;
        }

        requestList->push_back(newRequest);

        ALOGV("%s: requestId = %" PRId32, __FUNCTION__, newRequest->mResultExtras.requestId);
    }

    // Set up the batch size if this is a high speed video recording request.
    if (mIsConstrainedHighSpeedConfiguration && requestList->size() > 0) {
        auto firstRequest = requestList->begin();
        for (auto& outputStream : (*firstRequest)->mOutputStreams) {
            if (outputStream->isVideoStream()) {
                (*firstRequest)->mBatchSize = requestList->size();
                break;
            }
        }
    }

    return OK;
}

In setUpRequestLocked(…), configureStreamsLocked() is called first to configure the streams (when the device is unconfigured or a reconfiguration is needed), and then createCaptureRequest(…) is called to create the CaptureRequest object.

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

sp<Camera3Device::CaptureRequest> Camera3Device::setUpRequestLocked(
        const CameraMetadata &request) {
    status_t res;

    if (mStatus == STATUS_UNCONFIGURED || mNeedConfig) {
        res = configureStreamsLocked();
        // Stream configuration failed due to an unsupported configuration; the device
        // went back to the unconfigured state, so the client may try a different configuration
        if (res == BAD_VALUE && mStatus == STATUS_UNCONFIGURED) {
            CLOGE("No streams configured");
            return NULL;
        }
        // Stream configuration failed for some other reason; fatal.
        if (res != OK) {
            SET_ERR_L("Can't set up streams: %s (%d)", strerror(-res), res);
            return NULL;
        }
        // Stream configuration successfully configured to an empty stream configuration.
        if (mStatus == STATUS_UNCONFIGURED) {
            CLOGE("No streams configured");
            return NULL;
        }
    }

    sp<CaptureRequest> newRequest = createCaptureRequest(request);
    return newRequest;
}

Here the HAL method is called to configure the streams.

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::configureStreamsLocked() {
    ATRACE_CALL();
    status_t res;

    if (mStatus != STATUS_UNCONFIGURED && mStatus != STATUS_CONFIGURED) {
        CLOGE("Not idle");
        return INVALID_OPERATION;
    }

    if (!mNeedConfig) {
        ALOGV("%s: Skipping config, no stream changes", __FUNCTION__);
        return OK;
    }

    // Workaround for device HALv3.2 or older spec error, which requires a dummy stream
    // to be added when there are zero streams.
    if (mOutputStreams.size() == 0) {
        addDummyStreamLocked();
    } else {
        tryRemoveDummyStreamLocked();
    }

    // Start configuring the streams
    ALOGV("%s: Camera %d: Starting stream configuration", __FUNCTION__, mId);

    camera3_stream_configuration config;
    config.operation_mode = mIsConstrainedHighSpeedConfiguration ?
            CAMERA3_STREAM_CONFIGURATION_CONSTRAINED_HIGH_SPEED_MODE :
            CAMERA3_STREAM_CONFIGURATION_NORMAL_MODE;
    config.num_streams = (mInputStream != NULL) + mOutputStreams.size();

    Vector<camera3_stream_t*> streams;
    streams.setCapacity(config.num_streams);

    if (mInputStream != NULL) {
        camera3_stream_t *inputStream;
        // Start the input stream configuration
        inputStream = mInputStream->startConfiguration();
        if (inputStream == NULL) {
            SET_ERR_L("Can't start input stream configuration");
            return INVALID_OPERATION;
        }
        streams.add(inputStream);
    }

    for (size_t i = 0; i < mOutputStreams.size(); i++) {

        // Don't configure bidi streams twice, nor add them twice to the list
        if (mOutputStreams[i].get() ==
            static_cast<Camera3StreamInterface*>(mInputStream.get())) {

            config.num_streams--;
            continue;
        }

        camera3_stream_t *outputStream;
        // Start the output stream configuration
        outputStream = mOutputStreams.editValueAt(i)->startConfiguration();
        if (outputStream == NULL) {
            SET_ERR_L("Can't start output stream configuration");
            return INVALID_OPERATION;
        }
        streams.add(outputStream);
    }

    config.streams = streams.editArray();

    // Do the HAL configuration
    ATRACE_BEGIN("camera3->configure_streams");
    res = mHal3Device->ops->configure_streams(mHal3Device, &config);
    ATRACE_END();

    if (res == BAD_VALUE) {
        // The HAL rejected this set of streams as unsupported; clean up the configuration
        // attempt and return to the unconfigured state
        if (mInputStream != NULL && mInputStream->isConfiguring()) {
            res = mInputStream->cancelConfiguration();
            if (res != OK) {
                SET_ERR_L("Can't cancel configuring input stream %d: %s (%d)",
                        mInputStream->getId(), strerror(-res), res);
                return res;
            }
        }

        for (size_t i = 0; i < mOutputStreams.size(); i++) {
            sp<Camera3OutputStreamInterface> outputStream =
                    mOutputStreams.editValueAt(i);
            if (outputStream->isConfiguring()) {
                res = outputStream->cancelConfiguration();
                if (res != OK) {
                    SET_ERR_L(
                        "Can't cancel configuring output stream %d: %s (%d)",
                        outputStream->getId(), strerror(-res), res);
                    return res;
                }
            }
        }

        // Return to the state the call started in, so that a future configuration cleans things up properly
        internalUpdateStatusLocked(STATUS_UNCONFIGURED);
        mNeedConfig = true;

        ALOGV("%s: Camera %d: Stream configuration failed", __FUNCTION__, mId);
        return BAD_VALUE;
    } else if (res != OK) {
        // Some other kind of error from configure_streams - this is not expected
        SET_ERR_L("Unable to configure streams with HAL: %s (%d)",
                strerror(-res), res);
        return res;
    }

    // Finish all stream configuration immediately
    if (mInputStream != NULL && mInputStream->isConfiguring()) {
        res = mInputStream->finishConfiguration(mHal3Device);
        if (res != OK) {
            SET_ERR_L("Can't finish configuring input stream %d: %s (%d)",
                    mInputStream->getId(), strerror(-res), res);
            return res;
        }
    }

    for (size_t i = 0; i < mOutputStreams.size(); i++) {
        sp<Camera3OutputStreamInterface> outputStream =
            mOutputStreams.editValueAt(i);
        if (outputStream->isConfiguring()) {
            res = outputStream->finishConfiguration(mHal3Device);
            if (res != OK) {
                SET_ERR_L("Can't finish configuring output stream %d: %s (%d)",
                        outputStream->getId(), strerror(-res), res);
                return res;
            }
        }
    }

    // The request thread needs to know, to avoid using the repeat-last-settings protocol
    // across configure_streams() calls
    mRequestThread->configurationComplete();

    // Boost the request thread to SCHED_FIFO priority for high speed recording
    if (mIsConstrainedHighSpeedConfiguration) {
        pid_t requestThreadTid = mRequestThread->getTid();
        res = requestPriority(getpid(), requestThreadTid,
                kConstrainedHighSpeedThreadPriority, true);
        if (res != OK) {
            ALOGW("Can't set realtime priority for request processing thread: %s (%d)",
                    strerror(-res), res);
        } else {
            ALOGD("Set real time priority for request queue thread (tid %d)", requestThreadTid);
        }
    } else {
        // TODO: Set/restore normal priority for normal use cases
    }

    // Update the device state
    mNeedConfig = false;

    internalUpdateStatusLocked((mDummyStreamId == NO_STREAM) ?
            STATUS_CONFIGURED : STATUS_UNCONFIGURED);

    ALOGV("%s: Camera %d: Stream configuration complete", __FUNCTION__, mId);

    // Tear down the deleted streams after configuring the streams
    mDeletedStreams.clear();

    return OK;
}

Next, let's analyze createCaptureRequest(…), which creates the CaptureRequest object.

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

sp<Camera3Device::CaptureRequest> Camera3Device::createCaptureRequest(
        const CameraMetadata &request) {
    ATRACE_CALL();
    status_t res;

    sp<CaptureRequest> newRequest = new CaptureRequest;
    newRequest->mSettings = request;

    camera_metadata_entry_t inputStreams =
            newRequest->mSettings.find(ANDROID_REQUEST_INPUT_STREAMS);
    if (inputStreams.count > 0) {
        if (mInputStream == NULL ||
                mInputStream->getId() != inputStreams.data.i32[0]) {
            CLOGE("Request references unknown input stream %d",
                    inputStreams.data.u8[0]);
            return NULL;
        }
        // Lazily complete the stream configuration (allocation/registration) on first use
        if (mInputStream->isConfiguring()) {
            res = mInputStream->finishConfiguration(mHal3Device);
            if (res != OK) {
                SET_ERR_L("Unable to finish configuring input stream %d:"
                        " %s (%d)",
                        mInputStream->getId(), strerror(-res), res);
                return NULL;
            }
        }
        // Check whether the stream is being prepared
        if (mInputStream->isPreparing()) {
            CLOGE("Request references an input stream that's being prepared!");
            return NULL;
        }

        newRequest->mInputStream = mInputStream;
        newRequest->mSettings.erase(ANDROID_REQUEST_INPUT_STREAMS);
    }

    camera_metadata_entry_t streams =
            newRequest->mSettings.find(ANDROID_REQUEST_OUTPUT_STREAMS);
    if (streams.count == 0) {
        CLOGE("Zero output streams specified!");
        return NULL;
    }

    for (size_t i = 0; i < streams.count; i++) {
        int idx = mOutputStreams.indexOfKey(streams.data.i32[i]);
        if (idx == NAME_NOT_FOUND) {
            CLOGE("Request references unknown stream %d",
                    streams.data.u8[i]);
            return NULL;
        }
        sp<Camera3OutputStreamInterface> stream =
                mOutputStreams.editValueAt(idx);

        // Lazily complete the stream configuration (allocation/registration) on first use
        if (stream->isConfiguring()) {
            res = stream->finishConfiguration(mHal3Device);
            if (res != OK) {
                SET_ERR_L("Unable to finish configuring stream %d: %s (%d)",
                        stream->getId(), strerror(-res), res);
                return NULL;
            }
        }
        // Check whether the stream is being prepared
        if (stream->isPreparing()) {
            CLOGE("Request references an output stream that's being prepared!");
            return NULL;
        }
        // Add the output stream to the CaptureRequest's mOutputStreams vector
        newRequest->mOutputStreams.push(stream);
    }
    newRequest->mSettings.erase(ANDROID_REQUEST_OUTPUT_STREAMS);
    newRequest->mBatchSize = 1;

    return newRequest;
}

The RequestThread class is the thread that manages submitting capture requests to the HAL device.

setRepeatingRequests(…) sets or clears the repeating request list; neither operation blocks. waitUntilPaused can be used to wait until the request queue has emptied out.

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::RequestThread::setRepeatingRequests(
        const RequestList &requests,
        /*out*/
        int64_t *lastFrameNumber) {
    Mutex::Autolock l(mRequestLock);
    if (lastFrameNumber != NULL) {
        *lastFrameNumber = mRepeatingLastFrameNumber;
    }
    mRepeatingRequests.clear();
    mRepeatingRequests.insert(mRepeatingRequests.begin(),
            requests.begin(), requests.end());

    unpauseForNewRequests();

    mRepeatingLastFrameNumber = NO_IN_FLIGHT_REPEATING_FRAMES;
    return OK;
}
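
Conceptually, this is what will keep the preview "wheel" turning: whenever the request thread has no one-shot captures queued, it falls back to re-submitting the repeating (preview) request list to the HAL. As a simplified illustration of that idea only (not the actual RequestThread::threadLoop implementation), consider this small Java sketch:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Conceptual sketch only: a request source that serves one-shot captures first
// and otherwise falls back to the repeating (preview) request list.
final class RequestLoopSketch {
    private final Deque<String> oneShotQueue = new ArrayDeque<>();     // queued capture requests
    private final List<String> repeatingRequests = new ArrayList<>();  // e.g. the preview request

    synchronized void setRepeatingRequests(List<String> requests) {
        repeatingRequests.clear();
        repeatingRequests.addAll(requests);
    }

    synchronized void queueRequest(String request) {
        oneShotQueue.add(request);
    }

    /** Returns the next request to hand to the HAL, or null if there is nothing to do. */
    synchronized String nextRequest() {
        if (!oneShotQueue.isEmpty()) {
            return oneShotQueue.poll();      // explicit captures are served first
        }
        if (!repeatingRequests.isEmpty()) {
            return repeatingRequests.get(0); // otherwise keep re-issuing the repeating request
        }
        return null;                         // idle: nothing queued, no repeating request
    }
}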

At this point the basic logic has all been walked through, but the preview "wheel" is still not actually turning; the analysis continues in the next part.
