Android Camera HAL3: Inter-Module Interaction and the Frame-Result / Frame-Data Callbacks in Still-Capture Mode

This article is a summary of my own source-code reading. Please credit the source when reposting, thanks.

Discussion is welcome. QQ: 1037701636  email: gzzaigcn2009@163.com

Software: Android 5.1 system source code


Preface:

    The previous two posts documented in some detail the preview flow under the Camera3/HAL3 architecture, which is completely different from HAL1, including the main control and data streams involved, and described what the StreamingProcessor, CallbackProcessor, CaptureSequencer and other modules under Camera2Client do in the Camera3 architecture. The analysis showed that each module exists inside Camera3Device as a stream, each stream is in turn backed by several buffers, and requests and results serve as the transport for all data exchanged with HAL3. Building on that, this post describes the data and control flow of still-capture (Capture) mode, mainly covering how the JpegProcessor and CaptureSequencer modules work. Since the capture-mode data flow is more complex, the focus here is on how each module responds to and processes the result as it travels back, filling the gap left by the previous post.


1. The takePicture entry point under Camera2Client in HAL3

As the standard entry point of the still-capture feature, it mainly does the following two things:

updateProcessorStream(mJpegProcessor, l.mParameters);

mCaptureSequencer->startCapture(msgType)

    The JpegProcessor stream was first created and initialized during the preview stage. JpegProcessor::updateStream is called again here to check whether the existing stream's width and height still match the requested picture resolution; if the resolution has changed, the old stream is deleted and a new one is created.

    Within JpegProcessor, note the producer/consumer arrangement between CpuConsumer and Surface, which the source comments call "Create CPU buffer queue endpoint".
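
For reference, the endpoint creation inside JpegProcessor::updateStream looks roughly like this (quoted from memory of the Android 5.1 source, so the buffer count and the consumer name string should be treated as approximate):

    if (mCaptureConsumer == 0) {
        // Create CPU buffer queue endpoint
        sp<IGraphicBufferProducer> producer;
        sp<IGraphicBufferConsumer> consumer;
        BufferQueue::createBufferQueue(&producer, &consumer);
        mCaptureConsumer = new CpuConsumer(consumer, 1);    // CPU-side consumer of the JPEG stream
        mCaptureConsumer->setFrameAvailableListener(this);  // fires JpegProcessor::onFrameAvailable
        mCaptureConsumer->setName(String8("Camera2Client::CaptureConsumer"));
        mCaptureWindow = new Surface(producer);             // producer side, handed to the stream
    }

The stream created from mCaptureWindow is therefore filled by the HAL as producer and drained on the CPU by JpegProcessor as consumer.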


2. The CaptureSequencer module

    CaptureSequencer is the centerpiece of the takePicture operation; it is created in Camera2Client. Start with the CaptureSequencer thread's threadLoop function:

bool CaptureSequencer::threadLoop() {

    sp<Camera2Client> client = mClient.promote();
    if (client == 0) return false;

    CaptureState currentState;
    {
        Mutex::Autolock l(mStateMutex);
        currentState = mCaptureState;
    }

    currentState = (this->*kStateManagers[currentState])(client);

    Mutex::Autolock l(mStateMutex);
    if (currentState != mCaptureState) {
        if (mCaptureState != IDLE) {
            ATRACE_ASYNC_END(kStateNames[mCaptureState], mStateTransitionCount);
        }
        mCaptureState = currentState; // store the new state
        mStateTransitionCount++;
        if (mCaptureState != IDLE) {
            ATRACE_ASYNC_BEGIN(kStateNames[mCaptureState], mStateTransitionCount);
        }
        ALOGV("Camera %d: New capture state %s",
                client->getCameraId(), kStateNames[mCaptureState]);
        mStateChanged.signal();
    }

    if (mCaptureState == ERROR) {
        ALOGE("Camera %d: Stopping capture sequencer due to error",
                client->getCameraId());
        return false;
    }

    return true;
}
CaptureSequencer is a module that loops through a state machine of different states. The call currentState = (this->*kStateManagers[currentState])(client) executes the handler for the current state; the possible states are:

const CaptureSequencer::StateManager
        CaptureSequencer::kStateManagers[CaptureSequencer::NUM_CAPTURE_STATES-1] = {
    &CaptureSequencer::manageIdle,
    &CaptureSequencer::manageStart,
    &CaptureSequencer::manageZslStart,
    &CaptureSequencer::manageZslWaiting,
    &CaptureSequencer::manageZslReprocessing,
    &CaptureSequencer::manageStandardStart,
    &CaptureSequencer::manageStandardPrecaptureWait,
    &CaptureSequencer::manageStandardCapture,
    &CaptureSequencer::manageStandardCaptureWait,
    &CaptureSequencer::manageBurstCaptureStart,
    &CaptureSequencer::manageBurstCaptureWait,
    &CaptureSequencer::manageDone,
};
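To see the dispatch mechanism in isolation, here is a minimal, self-contained sketch of the same table-of-member-function-pointers pattern (my own illustration, not AOSP code):

#include <cstdio>

class Sequencer {
  public:
    enum State { IDLE, START, DONE, NUM_STATES };
    typedef State (Sequencer::*StateManager)();

    void run() {
        State s = IDLE;
        while (s != DONE) {
            s = (this->*kStateManagers[s])();  // same call form as CaptureSequencer
        }
    }

  private:
    State manageIdle()  { printf("IDLE -> START\n"); return START; }
    State manageStart() { printf("START -> DONE\n"); return DONE;  }
    static const StateManager kStateManagers[NUM_STATES];
};

const Sequencer::StateManager Sequencer::kStateManagers[Sequencer::NUM_STATES] = {
    &Sequencer::manageIdle,
    &Sequencer::manageStart,
    NULL,  // DONE is terminal and never dispatched
};

int main() { Sequencer().run(); return 0; }

Each handler returns the next state, so the whole threadLoop body reduces to one indexed indirect call per iteration.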
Taking the standard-capture path, let's walk through one complete takePicture. The initial state is mCaptureState(IDLE), so the entry handler is manageIdle:

CaptureSequencer::CaptureState CaptureSequencer::manageIdle(
        sp<Camera2Client> &/*client*/) {
    status_t res;
    Mutex::Autolock l(mInputMutex);
    while (!mStartCapture) {
        res = mStartCaptureSignal.waitRelative(mInputMutex,
                kWaitDuration);
        if (res == TIMED_OUT) break;
    }
    if (mStartCapture) {
        mStartCapture = false;
        mBusy = true;
        return START;
    }
    return IDLE;
}
The function polls mStartCapture, which gets set when the takePicture call arrives from the CameraService side:

status_t CaptureSequencer::startCapture(int msgType) {
    ALOGV("%s", __FUNCTION__);
    ATRACE_CALL();
    Mutex::Autolock l(mInputMutex);
    if (mBusy) {
        ALOGE("%s: Already busy capturing!", __FUNCTION__);
        return INVALID_OPERATION;
    }
    if (!mStartCapture) {
        mMsgType = msgType;
        mStartCapture = true;
        mStartCaptureSignal.signal(); // kick the CaptureSequencer thread
    }
    return OK;
}
Compare this against the CaptureSequencer threadLoop: manageIdle blocks waiting for mStartCapture to become true, and startCapture signals the thread right after setting mStartCapture. Once woken, the handler returns the new state, mCaptureState = START:


2.1 The START state

Its handler mainly calls updateCaptureRequest(l.mParameters, client):

status_t CaptureSequencer::updateCaptureRequest(const Parameters &params,
        sp<Camera2Client> &client) {
    ATRACE_CALL();
    status_t res;
    if (mCaptureRequest.entryCount() == 0) {
        res = client->getCameraDevice()->createDefaultRequest(
                CAMERA2_TEMPLATE_STILL_CAPTURE,
                &mCaptureRequest);
        if (res != OK) {
            ALOGE("%s: Camera %d: Unable to create default still image request:"
                    " %s (%d)", __FUNCTION__, client->getCameraId(),
                    strerror(-res), res);
            return res;
        }
    }

    res = params.updateRequest(&mCaptureRequest);
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to update common entries of capture "
                "request: %s (%d)", __FUNCTION__, client->getCameraId(),
                strerror(-res), res);
        return res;
    }

    res = params.updateRequestJpeg(&mCaptureRequest); // update the JPEG-specific parameters
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to update JPEG entries of capture "
                "request: %s (%d)", __FUNCTION__, client->getCameraId(),
                strerror(-res), res);
        return res;
    }

    return OK;
}
    This function closely resembles updatePreviewRequest in preview mode. It first checks whether mCaptureRequest is an empty CameraMetadata; if so, createDefaultRequest asks HAL3 to create a request, with the corresponding type CAMERA2_TEMPLATE_STILL_CAPTURE. It then uses the current configuration parameters to update the values of the various tags in the CameraMetadata mCaptureRequest before it is handed to HAL3, a process analogous to the old string-based setParameters path in Camera HAL1.
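
To give a flavor of what params.updateRequestJpeg does, the tag writes go through CameraMetadata::update; a small hedged sketch (the tags are real, the values are made up for illustration):

    // Illustrative only: how JPEG-related tags end up in the request.
    uint8_t jpegQuality = 95;             // hypothetical quality value
    mCaptureRequest.update(ANDROID_JPEG_QUALITY, &jpegQuality, 1);

    int32_t thumbSize[2] = { 320, 240 };  // hypothetical thumbnail size
    mCaptureRequest.update(ANDROID_JPEG_THUMBNAIL_SIZE, thumbSize, 2);

Each update adds or overwrites the entry for its tag, which is how the old setParameters key/value semantics map onto metadata tags.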


2.2 The STANDARD_CAPTURE state: manageStandardCapture

This state is where the actual takePicture is kicked off:

CaptureSequencer::CaptureState CaptureSequencer::manageStandardCapture(
        sp<Camera2Client> &client) {
    status_t res;
    ATRACE_CALL();
    SharedParameters::Lock l(client->getParameters());
    Vector<int32_t> outputStreams;
    uint8_t captureIntent = static_cast<uint8_t>(ANDROID_CONTROL_CAPTURE_INTENT_STILL_CAPTURE);

    /**
     * Set up output streams in the request
     *  - preview
     *  - capture/jpeg
     *  - callback (if preview callbacks enabled)
     *  - recording (if recording enabled)
     */
    outputStreams.push(client->getPreviewStreamId());//preview Stream
    outputStreams.push(client->getCaptureStreamId());//capture Stream

    if (l.mParameters.previewCallbackFlags &
            CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK) {
        outputStreams.push(client->getCallbackStreamId());//capture callback
    }

    if (l.mParameters.state == Parameters::VIDEO_SNAPSHOT) {
        outputStreams.push(client->getRecordingStreamId());
        captureIntent = static_cast<uint8_t>(ANDROID_CONTROL_CAPTURE_INTENT_VIDEO_SNAPSHOT);
    }

    res = mCaptureRequest.update(ANDROID_REQUEST_OUTPUT_STREAMS,
            outputStreams);
    if (res == OK) {
        res = mCaptureRequest.update(ANDROID_REQUEST_ID,
                &mCaptureId, 1); // the ID associated with this request
    }
    if (res == OK) {
        res = mCaptureRequest.update(ANDROID_CONTROL_CAPTURE_INTENT,
                &captureIntent, 1);
    }
    if (res == OK) {
        res = mCaptureRequest.sort();
    }

    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to set up still capture request: %s (%d)",
                __FUNCTION__, client->getCameraId(), strerror(-res), res);
        return DONE;
    }

    // Create a capture copy since CameraDeviceBase#capture takes ownership
    CameraMetadata captureCopy = mCaptureRequest;
    if (captureCopy.entryCount() == 0) {
        ALOGE("%s: Camera %d: Unable to copy capture request for HAL device",
                __FUNCTION__, client->getCameraId());
        return DONE;
    }

    /**
     * Clear the streaming request for still-capture pictures
     *   (as opposed to i.e. video snapshots)
     */
    if (l.mParameters.state == Parameters::STILL_CAPTURE) {
        // API definition of takePicture() - stop preview before taking pic
        res = client->stopStream();
        if (res != OK) {
            ALOGE("%s: Camera %d: Unable to stop preview for still capture: "
                    "%s (%d)",
                    __FUNCTION__, client->getCameraId(), strerror(-res), res);
            return DONE;
        }
    }
    // TODO: Capture should be atomic with setStreamingRequest here
    res = client->getCameraDevice()->capture(captureCopy); // start Camera3Device capture: submit the capture request
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to submit still image capture request: "
                "%s (%d)",
                __FUNCTION__, client->getCameraId(), strerror(-res), res);
        return DONE;
    }

    mTimeoutCount = kMaxTimeoutsForCaptureEnd;
    return STANDARD_CAPTURE_WAIT;
}

The processing in this function breaks down into the following points:

a. Vector<int32_t> outputStreams;

    outputStreams.push(client->getPreviewStreamId());   // preview stream
    outputStreams.push(client->getCaptureStreamId());   // capture (JPEG) stream
    outputStreams.push(client->getCallbackStreamId());  // callback stream

These calls make it clear which streams takePicture uses; the owning modules are:

StreamingProcessor, JpegProcessor, and CallbackProcessor.

As in preview mode, all the streams under the current Camera2Client are collected and told apart by their stream IDs.


b. Adding all the stream information for this operation into the CameraMetadata mCaptureRequest

    res = mCaptureRequest.update(ANDROID_REQUEST_OUTPUT_STREAMS,
            outputStreams);
    if (res == OK) {
        res = mCaptureRequest.update(ANDROID_REQUEST_ID,
                &mCaptureId, 1); // the ID associated with this request
    }

The ANDROID_REQUEST_ID entry shows that only three request types exist:

Preview request mPreviewRequest:     mPreviewRequestId(Camera2Client::kPreviewRequestIdStart)
Capture request mCaptureRequest:     mCaptureId(Camera2Client::kCaptureRequestIdStart)
Recording request mRecordingRequest: mRecordingRequestId(Camera2Client::kRecordingRequestIdStart)
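
These start values are constants in Camera2Client.h, laid out as disjoint ranges so that a request ID alone identifies its type (quoted from memory of Android 5.1, worth double-checking):

    static const int32_t kPreviewRequestIdStart   = 10000000;
    static const int32_t kPreviewRequestIdEnd     = 20000000;

    static const int32_t kRecordingRequestIdStart = 20000000;
    static const int32_t kRecordingRequestIdEnd   = 30000000;

    static const int32_t kCaptureRequestIdStart   = 30000000;
    static const int32_t kCaptureRequestIdEnd     = 40000000;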


c. For STILL_CAPTURE pictures

client->stopStream() boils down to res = device->clearStreamingRequest(), which in turn calls mRequestThread->clearRepeatingRequests(lastFrameNumber).

This deletes the CaptureRequests that preview mode had set up earlier. In preview, the generated request list was placed into mRepeatingRequests; clearing it here leaves that list empty, so no further preview requests are sent down to HAL3 for data exchange.
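
A condensed sketch of the clearing path (simplified from Camera3Device::RequestThread in Android 5.1; error handling omitted, so treat details as approximate):

    status_t Camera3Device::RequestThread::clearRepeatingRequests(
            /*out*/ int64_t *lastFrameNumber) {
        Mutex::Autolock l(mRequestLock);
        mRepeatingRequests.clear();  // stop re-enqueuing preview requests
        if (lastFrameNumber != NULL) {
            *lastFrameNumber = mRepeatingLastFrameNumber;
        }
        return OK;
    }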


d. The Camera3Device capture function

First note that the argument passed into capture is captureCopy, a copy of the CameraMetadata mCaptureRequest.

status_t Camera3Device::capture(CameraMetadata &request, int64_t* /*lastFrameNumber*/) {
    ATRACE_CALL();

    List<const CameraMetadata> requests;
    requests.push_back(request); // wrap the single CameraMetadata in a list
    return captureList(requests, /*lastFrameNumber*/NULL);
}

status_t Camera3Device::captureList(const List<const CameraMetadata> &requests,
                                    int64_t *lastFrameNumber) {
    ATRACE_CALL();

    return submitRequestsHelper(requests, /*repeating*/false, lastFrameNumber); // non-repeating: specific to still capture
}

capture is handled by Camera3Device: the incoming mCaptureRequest is wrapped in a list and handed to submitRequestsHelper. Compare the preview path, whose stream-start entry chain is setStreamingRequest -> setStreamingRequestList -> submitRequestsHelper.
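
A condensed sketch of submitRequestsHelper shows the fork between the two modes (simplified from Android 5.1; status checks and error logging trimmed):

status_t Camera3Device::submitRequestsHelper(
        const List<const CameraMetadata> &requests, bool repeating,
        /*out*/ int64_t *lastFrameNumber) {
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);

    RequestList requestList;
    status_t res = convertMetadataListToRequestListLocked(requests, &requestList);
    if (res != OK) return res;

    if (repeating) {
        // Preview path: the list is re-issued until cleared
        res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
    } else {
        // Capture path: the list is consumed exactly once
        res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
    }
    return res;
}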

This shows that every CameraMetadata request ultimately goes through submitRequestsHelper, so convertMetadataListToRequestListLocked, which converts the CameraMetadata list into a List<sp<CaptureRequest>> RequestList, is shared by both paths. The difference comes afterwards: a picture-mode request is not repeating, and goes through mRequestThread->queueRequestList():

status_t Camera3Device::RequestThread::queueRequestList(
        List<sp<CaptureRequest> > &requests,
        /*out*/
        int64_t *lastFrameNumber) {
    Mutex::Autolock l(mRequestLock);
    for (List<sp<CaptureRequest> >::iterator it = requests.begin(); it != requests.end();
            ++it) {
        mRequestQueue.push_back(*it);
    }
    // ......
    unpauseForNewRequests();

    return OK;
}
Here the CaptureRequest is appended straight onto the mRequestQueue. Preview mode differs: there the CaptureRequest goes into mRepeatingRequests, whose entries are re-enqueued onto the mRequestQueue over and over.

The simplest way to see it: picture mode grabs just a few frames of data, a one-shot operation, while preview mode fetches frames continuously.
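
The refill mechanism lives in the request thread's dequeue routine. A condensed sketch (simplified from Camera3Device::RequestThread::waitForNextRequest() in Android 5.1; pause and timeout handling omitted):

sp<Camera3Device::CaptureRequest> Camera3Device::RequestThread::waitForNextRequest() {
    sp<CaptureRequest> nextRequest;
    Mutex::Autolock l(mRequestLock);

    while (mRequestQueue.empty()) {
        if (!mRepeatingRequests.empty()) {
            // Preview: atomically re-enqueue the whole repeating list,
            // so the queue never runs dry.
            const RequestList &requests = mRepeatingRequests;
            RequestList::const_iterator firstRequest = requests.begin();
            nextRequest = *firstRequest;
            mRequestQueue.insert(mRequestQueue.end(),
                    ++firstRequest, requests.end());
            break;
        }
        // Capture-only: block until queueRequestList() signals new work.
        mRequestSignal.waitRelative(mRequestLock, kRequestTimeout);
    }

    if (nextRequest == NULL && !mRequestQueue.empty()) {
        nextRequest = *(mRequestQueue.begin());
        mRequestQueue.erase(mRequestQueue.begin());
    }
    return nextRequest;
}

A one-shot capture request is therefore erased as soon as it is dequeued, while repeating requests are copied back in each time the queue empties.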

At this point the CaptureSequencer can be said to have finished the capture-start states.


e. From STANDARD_CAPTURE to STANDARD_CAPTURE_WAIT

The handler for this state is manageStandardCaptureWait:

CaptureSequencer::CaptureState CaptureSequencer::manageStandardCaptureWait(
        sp<Camera2Client> &client) {
    status_t res;
    ATRACE_CALL();
    Mutex::Autolock l(mInputMutex);

    // Wait for new metadata result (mNewFrame)
    while (!mNewFrameReceived) {
        res = mNewFrameSignal.waitRelative(mInputMutex, kWaitDuration); // wait for a new frame of metadata
        if (res == TIMED_OUT) {
            mTimeoutCount--;
            break;
        }
    }

    // Approximation of the shutter being closed
    // - TODO: use the hal3 exposure callback in Camera3Device instead
    if (mNewFrameReceived && !mShutterNotified) {
        SharedParameters::Lock l(client->getParameters());
        /* warning: this also locks a SharedCameraCallbacks */
        shutterNotifyLocked(l.mParameters, client, mMsgType);
        mShutterNotified = true;
    }

    // Wait until jpeg was captured by JpegProcessor
    while (mNewFrameReceived && !mNewCaptureReceived) {
        res = mNewCaptureSignal.waitRelative(mInputMutex, kWaitDuration); // wait for the JPEG data
        if (res == TIMED_OUT) {
            mTimeoutCount--;
            break;
        }
    }
    if (mTimeoutCount <= 0) {
        ALOGW("Timed out waiting for capture to complete");
        return DONE;
    }
    if (mNewFrameReceived && mNewCaptureReceived) { // both conditions satisfied
        if (mNewFrameId != mCaptureId) {
            ALOGW("Mismatched capture frame IDs: Expected %d, got %d",
                    mCaptureId, mNewFrameId);
        }
        camera_metadata_entry_t entry;
        entry = mNewFrame.find(ANDROID_SENSOR_TIMESTAMP);
        if (entry.count == 0) {
            ALOGE("No timestamp field in capture frame!");
        } else if (entry.count == 1) {
            if (entry.data.i64[0] != mCaptureTimestamp) {
                ALOGW("Mismatched capture timestamps: Metadata frame %" PRId64 ","
                        " captured buffer %" PRId64,
                        entry.data.i64[0],
                        mCaptureTimestamp);
            }
        } else {
            ALOGE("Timestamp metadata is malformed!");
        }
        client->removeFrameListener(mCaptureId, mCaptureId + 1, this);

        mNewFrameReceived = false;
        mNewCaptureReceived = false;
        return DONE;
    }
    return STANDARD_CAPTURE_WAIT;
}
Examining this function shows that it sleeps in two successive waits, on the two condition signals mNewFrameSignal and mNewCaptureSignal, each with a 100 ms wait period (kWaitDuration). Only when mNewFrameReceived and mNewCaptureReceived are both true has one picture truly been captured.


f. The DONE state

Assuming the wait state has completed, the DONE handler manageDone() runs; its most important part is:

  if (mCaptureBuffer != 0 && res == OK) {
        ATRACE_ASYNC_END(Camera2Client::kTakepictureLabel, takePictureCounter);

        Camera2Client::SharedCameraCallbacks::Lock
            l(client->mSharedCameraCallbacks);
        ALOGV("%s: Sending still image to client", __FUNCTION__);
        if (l.mRemoteCallback != 0) {
            l.mRemoteCallback->dataCallback(CAMERA_MSG_COMPRESSED_IMAGE,
                    mCaptureBuffer, NULL); // send the compressed JPEG data up to the application
        } else {
            ALOGV("%s: No client!", __FUNCTION__);
        }
    }
    mCaptureBuffer.clear();

It delivers the captured JPEG-compressed image frame up to the APP layer, where it can then be written to a file and so on. In the old Camera HAL1.0 this data return was usually done by the HAL layer itself, which added coding complexity and hurt efficiency. In Camera3.0 Google cleanly encapsulated the dataCallback and notifyCallback handling, routing the callbacks to the appropriate modules under Camera2Client.

mCaptureBuffer holds the actual JPEG image data that came back: in essence a buffer extracted from the stream and copied into a heap; once the APP callback completes, it is released.

After the DONE state, the CaptureSequencer re-enters IDLE, waiting for the next takePicture.


3. Camera3Device request and result handling in picture mode

    The request side in picture mode can be understood by referring to RequestThread::threadLoop in preview mode; here we mainly analyze how the result comes back:

    As noted earlier, the CaptureSequencer waits on two signals, each normally fired by another module's callback. First, let's locate where these two signals are emitted:

void CaptureSequencer::onResultAvailable(const CaptureResult &result) {
    ATRACE_CALL();
    ALOGV("%s: New result available.", __FUNCTION__);
    Mutex::Autolock l(mInputMutex);
    mNewFrameId = result.mResultExtras.requestId; // request id that this result frame belongs to
    mNewFrame = result.mMetadata;
    if (!mNewFrameReceived) {
        mNewFrameReceived = true;
        mNewFrameSignal.signal(); // metadata result for the frame, fired via the FrameProcessor listener path
    }
}

void CaptureSequencer::onCaptureAvailable(nsecs_t timestamp,
        sp<MemoryBase> captureBuffer) {
    ATRACE_CALL();
    ALOGV("%s", __FUNCTION__);
    Mutex::Autolock l(mInputMutex);
    mCaptureTimestamp = timestamp;
    mCaptureBuffer = captureBuffer;
    if (!mNewCaptureReceived) {
        mNewCaptureReceived = true;
        mNewCaptureSignal.signal(); // an actual JPEG image frame
    }
}
So how do these two callbacks get triggered? Let's analyze in detail:


3.1 The streams needed for one picture-mode operation

Be clear that one takePicture involves three streams, belonging to JpegProcessor, CallbackProcessor, and StreamingProcessor: the first receives the JPEG-encoded image frame, the second receives the preview-style frame called back to the APP, and the third receives the frame used directly for display.


3.2 Where the frame-data callbacks originate: processCaptureResult

    Whichever module is involved, the data callback path starts at HAL3's process_capture_result hook, i.e. Camera3Device::processCaptureResult(). This function is complicated because HAL3.0 allows the data in one returned result to be incomplete, mostly with respect to the 3A-related CameraMetadata. Note that every frame's returned camera3_capture_result carries a camera_metadata_t containing the frame's various tag fields, dominated by 3A information. Three functions are central inside processCaptureResult:

processPartial3AResult(): handles the partial CameraMetadata result data returned so far;

returnOutputBuffers(): returns the buffer data of each stream contained in this result;

sendCaptureResult(): handles a complete CameraMetadata result;
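
Put together, the control flow is roughly as follows (a heavily condensed outline of Camera3Device::processCaptureResult() in Android 5.1, written from memory; error paths, device-version branches, and the in-flight bookkeeping are omitted):

void Camera3Device::processCaptureResult(const camera3_capture_result *result) {
    uint32_t frameNumber = result->frame_number;
    nsecs_t shutterTimestamp;
    {
        Mutex::Autolock l(mInFlightLock);
        InFlightRequest &request = mInFlightMap.editValueFor(frameNumber);
        shutterTimestamp = request.captureTimestamp;  // recorded by the shutter notify()

        bool isPartialResult = (result->partial_result < mNumPartialResults);
        if (isPartialResult) {
            // Accumulate the pieces; emit a 3A-only result as soon as possible
            request.partialResult.collectedResult.append(result->result);
            if (!request.partialResult.haveSent3A) {
                request.partialResult.haveSent3A = processPartial3AResult(
                        frameNumber, request.partialResult.collectedResult,
                        request.resultExtras);
            }
        } else if (result->result != NULL) {
            // Complete metadata: merge earlier partials, queue for listeners
            CameraMetadata metadata;
            metadata = result->result;
            sendCaptureResult(metadata, request.resultExtras,
                    request.partialResult.collectedResult, frameNumber);
        }
    }
    // Hand each filled output buffer back to its stream's consumer
    returnOutputBuffers(result->output_buffers,
            result->num_output_buffers, shutterTimestamp);
}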


3.3 FrameProcessor's response to the frame result, mainly the 3A callbacks

Both processPartial3AResult() and sendCaptureResult() send the 3A result on to FrameProcessor for processing. Since every request and every result inherently carries a stream-like CameraMetadata, this module differs from the others in that it needs no dedicated stream to exchange its data.

            if (isPartialResult) {
                // Fire off a 3A-only result if possible
                if (!request.partialResult.haveSent3A) { // only 3A data has come back so far
                    request.partialResult.haveSent3A =
                            processPartial3AResult(frameNumber,
                                    request.partialResult.collectedResult,
                                    request.resultExtras); // if the frame carries the 3A states, notify and process them
                }
            }
processPartial3AResult processes the partialResult collected so far for the current frame; to be clear, partialResult means the newest result assembled from everything returned so far under the given frame number:

Internally, the collected result must contain at least the following tags before the 3A data counts as complete:

    gotAllStates &= get3AResult(partial, ANDROID_CONTROL_AF_MODE,
            &afMode, frameNumber);

    gotAllStates &= get3AResult(partial, ANDROID_CONTROL_AWB_MODE,
            &awbMode, frameNumber);

    gotAllStates &= get3AResult(partial, ANDROID_CONTROL_AE_STATE,
            &aeState, frameNumber);

    gotAllStates &= get3AResult(partial, ANDROID_CONTROL_AF_STATE,
            &afState, frameNumber);

    gotAllStates &= get3AResult(partial, ANDROID_CONTROL_AWB_STATE,
            &awbState, frameNumber);

    if (!gotAllStates) return false;
Only then are the requirements for building a CaptureResult minResult satisfied; in other words, a partial CaptureResult is constructed from the accumulated result only once the AE, AF, and AWB states are all present.

Now compare sendCaptureResult:

void Camera3Device::sendCaptureResult(CameraMetadata &pendingMetadata,
        CaptureResultExtras &resultExtras,
        CameraMetadata &collectedPartialResult,
        uint32_t frameNumber) {
    if (pendingMetadata.isEmpty())
        return;

    Mutex::Autolock l(mOutputLock);

    // TODO: need to track errors for tighter bounds on expected frame number
    if (frameNumber < mNextResultFrameNumber) {
        SET_ERR("Out-of-order capture result metadata submitted! "
                "(got frame number %d, expecting %d)",
                frameNumber, mNextResultFrameNumber);
        return;
    }
    mNextResultFrameNumber = frameNumber + 1; // expect the next frame number

    CaptureResult captureResult;
    captureResult.mResultExtras = resultExtras;
    captureResult.mMetadata = pendingMetadata;

    if (captureResult.mMetadata.update(ANDROID_REQUEST_FRAME_COUNT,
            (int32_t*)&frameNumber, 1) != OK) {
        SET_ERR("Failed to set frame# in metadata (%d)",
                frameNumber);
        return;
    } else {
        ALOGVV("%s: Camera %d: Set frame# in metadata (%d)",
                __FUNCTION__, mId, frameNumber);
    }

    // Append any previous partials to form a complete result
    if (mUsePartialResult && !collectedPartialResult.isEmpty()) {
        captureResult.mMetadata.append(collectedPartialResult); // merge the earlier partial results
    }

    captureResult.mMetadata.sort();

    // Check that there's a timestamp in the result metadata
    camera_metadata_entry entry =
            captureResult.mMetadata.find(ANDROID_SENSOR_TIMESTAMP);
    if (entry.count == 0) {
        SET_ERR("No timestamp provided by HAL for frame %d!",
                frameNumber);
        return;
    }

    // Valid result, insert into queue
    List<CaptureResult>::iterator queuedResult =
            mResultQueue.insert(mResultQueue.end(), CaptureResult(captureResult));
    ALOGVV("%s: result requestId = %" PRId32 ", frameNumber = %" PRId64
           ", burstId = %" PRId32, __FUNCTION__,
           queuedResult->mResultExtras.requestId,
           queuedResult->mResultExtras.frameNumber,
           queuedResult->mResultExtras.burstId);

    mResultSignal.signal(); // wake up threads waiting in waitForNextFrame()
}

This function's main job is to build a CaptureResult: the partial results returned earlier for this frame are merged here into one complete result. collectedPartialResult reflects the fact that, for a single submitted request, the result may come back in several pieces. The first return may carry only part of the information; once a later return is flagged as fully delivered back to the threadLoop, the earlier pieces, which are stored in the frame's request entry, must be appended. The whole request is indexed by a unique frame number, ensuring that the merged result corresponds to the same request.

My understanding of the partial-result mechanism is that each returned result does not necessarily contain all the tag information for the given frame number, and the per-frame mNumPartialResults count is decided by the HAL3.0 layer.

isPartialResult = (result->partial_result < mNumPartialResults) determines whether the current result is still in partial mode. If it is, each piece is collected via collectedResult; in addition, the 3A tags are gathered in this mode and processPartial3AResult is invoked to handle the 3A values as a separate path. Once a returned result is no longer partial, the previously collected pieces are combined with it into a new complete capture result, and the resulting CaptureResult is appended to the mResultQueue.

That completes the processing of a capture result returned by HAL3; at the end, mResultSignal.signal() wakes the waiting thread, and that wait is serviced by the FrameProcessor module.

FrameProcessorBase, the base class of FrameProcessor, runs a threadLoop:

bool FrameProcessorBase::threadLoop() {
    status_t res;

    sp<CameraDeviceBase> device;
    {
        device = mDevice.promote();
        if (device == 0) return false;
    }

    res = device->waitForNextFrame(kWaitDuration);
    if (res == OK) {
        processNewFrames(device); // process the new frames (3A handling and listener dispatch)
    } else if (res != TIMED_OUT) {
        ALOGE("FrameProcessorBase: Error waiting for new "
                "frames: %s (%d)", strerror(-res), res);
    }

    return true;
}
It calls Camera3Device's waitForNextFrame, with a 10 ms wait period:

status_t Camera3Device::waitForNextFrame(nsecs_t timeout) {
    status_t res;
    Mutex::Autolock l(mOutputLock);

    while (mResultQueue.empty()) { // keep waiting until the capture-result queue is non-empty
        res = mResultSignal.waitRelative(mOutputLock, timeout);
        if (res == TIMED_OUT) {
            return res;
        } else if (res != OK) {
            ALOGW("%s: Camera %d: No frame in %" PRId64 " ns: %s (%d)",
                    __FUNCTION__, mId, timeout, strerror(-res), res);
            return res;
        }
    }
    return OK;
}
Here we meet both mResultQueue and mResultSignal, matching the mOutputLock and signal used in Camera3Device::sendCaptureResult().

Once woken, the thread calls processNewFrames to handle the pending frames:


void FrameProcessorBase::processNewFrames(const sp<CameraDeviceBase> &device) {
    status_t res;
    ATRACE_CALL();
    CaptureResult result;

    ALOGV("%s: Camera %d: Process new frames", __FUNCTION__, device->getId());

    while ( (res = device->getNextResult(&result)) == OK) {

        // TODO: instead of getting frame number from metadata, we should read
        // this from result.mResultExtras when CameraDeviceBase interface is fixed.
        camera_metadata_entry_t entry;

        entry = result.mMetadata.find(ANDROID_REQUEST_FRAME_COUNT);
        if (entry.count == 0) {
            ALOGE("%s: Camera %d: Error reading frame number",
                    __FUNCTION__, device->getId());
            break;
        }
        ATRACE_INT("cam2_frame", entry.data.i32[0]);

        if (!processSingleFrame(result, device)) { // process one frame at a time
            break;
        }

        if (!result.mMetadata.isEmpty()) {
            Mutex::Autolock al(mLastFrameMutex);
            mLastFrame.acquire(result.mMetadata);
        }
    }
    if (res != NOT_ENOUGH_DATA) {
        ALOGE("%s: Camera %d: Error getting next frame: %s (%d)",
                __FUNCTION__, device->getId(), strerror(-res), res);
        return;
    }

    return;
}
device->getNextResult(&result) takes one available CaptureResult off mResultQueue (erasing it after extraction). The frame number is then read out of the result, and processSingleFrame does the real work:

bool FrameProcessor::processSingleFrame(CaptureResult &frame,
                                        const sp<CameraDeviceBase> &device) { // process one frame
    sp<Camera2Client> client = mClient.promote();
    if (!client.get()) {
        return false;
    }

    bool isPartialResult = false;
    if (mUsePartialResult) {
        if (client->getCameraDeviceVersion() >= CAMERA_DEVICE_API_VERSION_3_2) {
            isPartialResult = frame.mResultExtras.partialResultCount < mNumPartialResults;
        } else {
            camera_metadata_entry_t entry;
            entry = frame.mMetadata.find(ANDROID_QUIRKS_PARTIAL_RESULT);
            if (entry.count > 0 &&
                    entry.data.u8[0] == ANDROID_QUIRKS_PARTIAL_RESULT_PARTIAL) {
                isPartialResult = true;
            }
        }
    }

    if (!isPartialResult && processFaceDetect(frame.mMetadata, client) != OK) {
        return false;
    }

    if (mSynthesize3ANotify) {
        process3aState(frame, client);
    }

    return FrameProcessorBase::processSingleFrame(frame, device);
}
bool FrameProcessorBase::processSingleFrame(CaptureResult &result,
                                            const sp<CameraDeviceBase> &device) {
    ALOGV("%s: Camera %d: Process single frame (is empty? %d)",
          __FUNCTION__, device->getId(), result.mMetadata.isEmpty());
    return processListeners(result, device) == OK; // dispatch to all registered listeners
}
status_t FrameProcessorBase::processListeners(const CaptureResult &result,
        const sp<CameraDeviceBase> &device) {
    ATRACE_CALL();

    camera_metadata_ro_entry_t entry;

    // Check if this result is partial.
    bool isPartialResult = false;
    if (device->getDeviceVersion() >= CAMERA_DEVICE_API_VERSION_3_2) {
        isPartialResult = result.mResultExtras.partialResultCount < mNumPartialResults;
    } else {
        entry = result.mMetadata.find(ANDROID_QUIRKS_PARTIAL_RESULT);
        if (entry.count != 0 &&
                entry.data.u8[0] == ANDROID_QUIRKS_PARTIAL_RESULT_PARTIAL) {
            ALOGV("%s: Camera %d: This is a partial result",
                    __FUNCTION__, device->getId());
            isPartialResult = true;
        }
    }

    // TODO: instead of getting requestID from CameraMetadata, we should get it
    // from CaptureResultExtras. This will require changing Camera2Device.
    // Currently Camera2Device uses MetadataQueue to store results, which does not
    // include CaptureResultExtras.
    entry = result.mMetadata.find(ANDROID_REQUEST_ID);
    if (entry.count == 0) {
        ALOGE("%s: Camera %d: Error reading frame id", __FUNCTION__, device->getId());
        return BAD_VALUE;
    }
    int32_t requestId = entry.data.i32[0];

    List<sp<FilteredListener> > listeners;
    {
        Mutex::Autolock l(mInputMutex);

        List<RangeListener>::iterator item = mRangeListeners.begin();
        // Don't deliver partial results to listeners that don't want them
        while (item != mRangeListeners.end()) {
            if (requestId >= item->minId && requestId < item->maxId &&
                    (!isPartialResult || item->sendPartials)) {
                sp<FilteredListener> listener = item->listener.promote();
                if (listener == 0) {
                    item = mRangeListeners.erase(item);
                    continue;
                } else {
                    listeners.push_back(listener);
                }
            }
            item++;
        }
    }
    ALOGV("%s: Camera %d: Got %zu range listeners out of %zu", __FUNCTION__,
          device->getId(), listeners.size(), mRangeListeners.size());

    List<sp<FilteredListener> >::iterator item = listeners.begin();
    for (; item != listeners.end(); item++) {
        (*item)->onResultAvailable(result); // tell every registered listener a result is available
    }
    return OK;
}
The simple way to understand this: once a valid CaptureResult is obtained, it must be distributed to whichever modules are interested in it, and a FilteredListener carries that out:

A module that wants to listen to FrameProcessor calls registerListener; the registration is stored in mRangeListeners. The concrete interface is:

status_t Camera2Client::registerFrameListener(int32_t minId, int32_t maxId,
        wp<camera2::FrameProcessor::FilteredListener> listener, bool sendPartials) {
    return mFrameProcessor->registerListener(minId, maxId, listener, sendPartials);
}

When handling a complete result, focus on FrameProcessor's 3A and face-detection callbacks: the AF state is returned as CAMERA_MSG_FOCUS through notifyCallback, and face detection returns the face-location data as a camera_frame_metadata_t through dataCallback, with message type CAMERA_MSG_PREVIEW_METADATA.


CaptureSequencer::manageStandardStart() calls registerFrameListener to complete this listener registration.
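
The registration call looks approximately like this (quoted from memory of Android 5.1; notably, partial results are not requested here because the sequencer needs ANDROID_SENSOR_TIMESTAMP, which a partial result may lack):

    client->registerFrameListener(mCaptureId, mCaptureId + 1,
            this,
            /*sendPartials*/false);

so only results whose request ID equals mCaptureId are delivered to the CaptureSequencer.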

With these listeners in place, processListeners walks mRangeListeners and checks that the request ID of the current CaptureResult falls within the range given at registration. After selecting the listeners that should handle the current result, it calls their onResultAvailable().

At this point CaptureSequencer::onResultAvailable(), the override, gets invoked, and we have located the path that sets mNewFrameReceived = true.


3.4 The frame-data callback

The analysis above dealt only with the CameraMetadata result; no actual frame data has appeared yet. As mentioned, operating on image buffers does require streams, unlike FrameProcessor, which needs no stream for its data transfer.

The entry point for the data callbacks is returnOutputBuffers, already analyzed in preview mode. Its core job is to extract the buffer information carried back by the current result and distribute it to the Camera3Stream maintained by each module. In essence, the buffer_handle is extracted from the returned camera3_stream_buffer and, after a queue_buffer operation, handed over to the corresponding consumer.
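
For reference, the function itself is short (quoted from memory of Android 5.1, so minor details may differ):

void Camera3Device::returnOutputBuffers(
        const camera3_stream_buffer_t *outputBuffers, size_t numBuffers,
        nsecs_t timestamp) {
    for (size_t i = 0; i < numBuffers; i++) {
        Camera3Stream *stream = Camera3Stream::cast(outputBuffers[i].stream);
        status_t res = stream->returnBuffer(outputBuffers[i], timestamp);
        // Note: stream may be deallocated at this point, if this buffer was
        // the last reference to it.
        if (res != OK) {
            ALOGE("Can't return buffer to its stream: %s (%d)",
                    strerror(-res), res);
        }
    }
}

returnBuffer is what queues the filled buffer back to the stream's BufferQueue, and that is what wakes the stream's consumer.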

For the direct-display module StreamingProcessor, the consumer is SurfaceFlinger, for live display; CallbackProcessor uses a CpuConsumer to return frame data to the APP. These paths work just as in preview mode and must also run during takePicture. JpegProcessor, however, belongs to picture mode alone, so let's look at how it handles an incoming HAL3 JPEG buffer:

void JpegProcessor::onFrameAvailable(const BufferItem& /*item*/) {
    Mutex::Autolock l(mInputMutex);
    if (!mCaptureAvailable) {
        mCaptureAvailable = true;
        mCaptureAvailableSignal.signal(); // a JPEG frame has been captured
    }
}
For the call path above, see my earlier post on the producer/consumer framework between Surface and CpuConsumer in Android 5.1. As the module unique to takePicture mode, its threadLoop thread then gets to respond:

bool JpegProcessor::threadLoop() {
    status_t res;

    {
        Mutex::Autolock l(mInputMutex);
        while (!mCaptureAvailable) {
            res = mCaptureAvailableSignal.waitRelative(mInputMutex,
                    kWaitDuration);
            if (res == TIMED_OUT) return true;
        }
        mCaptureAvailable = false;
    }

    do {
        res = processNewCapture(); // process the newly captured JPEG frame
    } while (res == OK);

    return true;
}
processNewCapture() mainly covers the following:

mCaptureConsumer->lockNextBuffer(&imgBuffer) fetches an already-queued buffer from the CpuConsumer; the most important part of locking is that the buffer is mmap-ed into the current process.

Through that mapped address the buffer contents are then copied into a heap belonging to this process, after which the buffer is unmapped (unlocked).
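
The copy itself is a plain memcpy into an ashmem-backed heap. A sketch of the core steps (condensed from JpegProcessor::processNewCapture() in Android 5.1; the JPEG-size scan details and error paths are omitted):

    CpuConsumer::LockedBuffer imgBuffer;
    mCaptureConsumer->lockNextBuffer(&imgBuffer);   // mmap the filled buffer

    // The blob buffer is larger than the JPEG; find the actual size first
    size_t jpegSize = findJpegSize(imgBuffer.data, imgBuffer.width);
    sp<MemoryBase> captureBuffer = new MemoryBase(mCaptureHeap, 0, jpegSize);
    memcpy(mCaptureHeap->getBase(), imgBuffer.data, jpegSize);

    mCaptureConsumer->unlockBuffer(imgBuffer);      // unmap and return the buffer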

Finally the code below is called to hand the local JPEG buffer over to the CaptureSequencer. So although the CaptureSequencer collects the JPEG and other data and drives and controls the whole takePicture, the actual extraction of the JPEG data is done by modules such as JpegProcessor and ZslProcessor:

    sp<CaptureSequencer> sequencer = mSequencer.promote();
    if (sequencer != 0) {
        sequencer->onCaptureAvailable(imgBuffer.timestamp, captureBuffer); // tell the CaptureSequencer a JPEG buffer has arrived
    }

This resolves the other signal that the CaptureSequencer's wait state was blocked on.


At this point both onResultAvailable() and onCaptureAvailable() have been traced: the former is triggered by FrameProcessor, the latter by JpegProcessor; the former returns a JPEG frame's side information such as the timestamp and 3A state, while the latter returns an actual JPEG image frame.

Below is my summary diagram of the data interaction between the modules in takePicture mode; in essence it is the response and processing flow among several threadLoops.

As the diagram shows, in JPEG (capture) mode the data that can be returned to the APP on each shot includes the raw callback data stream, the JPEG image stream from JpegProcessor, and extras such as the AF state and the raw face coordinates from face detection, delivered as camera_frame_metadata_t.
