Camera framework configure flow analysis

The application layer configures streams through createCaptureSession:

mCameraDevice.createCaptureSession(Arrays.asList(mImageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {
                        @Override
                        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                            Log.d(TAG, "onConfigured: ");
                            if (null == mCameraDevice) {
                                return;
                            }
                            // Once configure succeeds, start the preview
                            mCaptureSession = cameraCaptureSession;
                            try {
                                mPreviewRequest = mPreviewRequestBuilder.build();
                                mCaptureSession.setRepeatingRequest(mPreviewRequest, null, mBackgroundHandler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }
                        @Override
                        public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
                            Log.e(TAG, "CameraCaptureSession.StateCallback onConfigureFailed");
                        }

                    }, null);

First, let's look at the implementation of createCaptureSession:

public void createCaptureSession(List<Surface> outputs,
            CameraCaptureSession.StateCallback callback, Handler handler)
            throws CameraAccessException {
        List<OutputConfiguration> outConfigurations = new ArrayList<>(outputs.size());
        for (Surface surface : outputs) {
            outConfigurations.add(new OutputConfiguration(surface));
        }
        createCaptureSessionInternal(null, outConfigurations, callback,
                checkAndWrapHandler(handler), /*operatingMode*/ICameraDeviceUser.NORMAL_MODE,
                /*sessionParams*/ null);
    }

Here a list of OutputConfiguration objects is created. The OutputConfiguration class mainly holds stream-related configuration; it has an mSurfaces member that receives the surface passed in as a parameter:

public OutputConfiguration(int surfaceGroupId, @NonNull Surface surface, int rotation) {
        checkNotNull(surface, "Surface must not be null");
        checkArgumentInRange(rotation, ROTATION_0, ROTATION_270, "Rotation constant");
        mSurfaceGroupId = surfaceGroupId;
        mSurfaceType = SURFACE_TYPE_UNKNOWN;
        mSurfaces = new ArrayList<Surface>();
        mSurfaces.add(surface);
        mRotation = rotation;
        mConfiguredSize = SurfaceUtils.getSurfaceSize(surface);
        mConfiguredFormat = SurfaceUtils.getSurfaceFormat(surface);
        mConfiguredDataspace = SurfaceUtils.getSurfaceDataspace(surface);
        mConfiguredGenerationId = surface.getGenerationId();
        mIsDeferredConfig = false;
        mIsShared = false;
        mPhysicalCameraId = null;
    }
Next, let's look at the implementation of createCaptureSessionInternal:
private void createCaptureSessionInternal(InputConfiguration inputConfig,
            List<OutputConfiguration> outputConfigurations,
            CameraCaptureSession.StateCallback callback, Executor executor,
            int operatingMode, CaptureRequest sessionParams) throws CameraAccessException {
         ……
         Surface input = null;
         try {
                // configure streams and then block until IDLE
                configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations, operatingMode, sessionParams);
                if (configureSuccess == true && inputConfig != null) {
                    input = mRemoteDevice.getInputSurface();
                }
            } catch (CameraAccessException e) {
                configureSuccess = false;
                pendingException = e;
                input = null;
                if (DEBUG) {
                    Log.v(TAG, "createCaptureSession - failed with exception ", e);
                }
            }
            // Fire onConfigured if configureOutputs succeeded, fire onConfigureFailed otherwise.
            CameraCaptureSessionCore newSession = null;
            if (isConstrainedHighSpeed) {
                ArrayList<Surface> surfaces = new ArrayList<>(outputConfigurations.size());
                for (OutputConfiguration outConfig : outputConfigurations) {
                    surfaces.add(outConfig.getSurface());
                }
                StreamConfigurationMap config =
                        getCharacteristics().get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                SurfaceUtils.checkConstrainedHighSpeedSurfaces(surfaces, /*fpsRange*/null, config);
                newSession = new CameraConstrainedHighSpeedCaptureSessionImpl(mNextSessionId++,
                        callback, executor, this, mDeviceExecutor, configureSuccess,
                        mCharacteristics);
            } else {
                // A CameraCaptureSessionImpl object is created here
                newSession = new CameraCaptureSessionImpl(mNextSessionId++, input,
                        callback, executor, this, mDeviceExecutor, configureSuccess);
            }
            // TODO: wait until current session closes, then create the new session
            mCurrentSession = newSession;
            if (pendingException != null) {
                throw pendingException;
            }
            mSessionStateCallback = mCurrentSession.getDeviceStateCallback();
        }

Clearly, configureStreamsChecked is used to check whether the streams were configured successfully. Once configuration succeeds, a new CameraCaptureSessionImpl is created for further processing. Let's first look at the CameraCaptureSessionImpl constructor:

CameraCaptureSessionImpl(int id, Surface input,
            CameraCaptureSession.StateCallback callback, Executor stateExecutor,
            android.hardware.camera2.impl.CameraDeviceImpl deviceImpl,
            Executor deviceStateExecutor, boolean configureSuccess) {
        if (callback == null) {
            throw new IllegalArgumentException("callback must not be null");
        }
        mId = id;
        mIdString = String.format("Session %d: ", mId);
        mInput = input;
        mStateExecutor = checkNotNull(stateExecutor, "stateExecutor must not be null");
        // Wrap the callback passed in from the app layer into the member mStateCallback
        mStateCallback = createUserStateCallbackProxy(mStateExecutor, callback);
        mDeviceExecutor = checkNotNull(deviceStateExecutor,
                "deviceStateExecutor must not be null");
        mDeviceImpl = checkNotNull(deviceImpl, "deviceImpl must not be null");
        // CameraDevice should call configureOutputs and have it finish before constructing us
        if (configureSuccess) {
            // Here the session passes itself up to the app layer as the callback argument
            mStateCallback.onConfigured(this);
            if (DEBUG) Log.v(TAG, mIdString + "Created session successfully");
            mConfigureSuccess = true;
        } else {
            mStateCallback.onConfigureFailed(this);
            mClosed = true; // do not fire any other callbacks, do not allow any other work
            Log.e(TAG, mIdString + "Failed to create capture session; configuration failed");
            mConfigureSuccess = false;
        }
 }
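The deferred callback dispatch used by this constructor can be sketched in plain Java. This is a toy model, not the AOSP implementation; the interface and class names below are made up for illustration, with a direct executor standing in for the handler-backed executor the real code uses:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;

// Toy model of createUserStateCallbackProxy: the framework never invokes the
// app's StateCallback directly; it posts each callback as a task on an Executor.
public class CallbackProxyDemo {
    // Stand-in for CameraCaptureSession.StateCallback.
    interface StateCallback {
        void onConfigured(Object session);
        void onConfigureFailed(Object session);
    }

    // Proxy that forwards each callback as a task on the given executor,
    // mirroring what mStateCallback does in CameraCaptureSessionImpl.
    static StateCallback proxy(Executor executor, StateCallback inner) {
        return new StateCallback() {
            @Override public void onConfigured(Object s) {
                executor.execute(() -> inner.onConfigured(s));
            }
            @Override public void onConfigureFailed(Object s) {
                executor.execute(() -> inner.onConfigureFailed(s));
            }
        };
    }

    static final List<String> events = new ArrayList<>();

    public static void main(String[] args) {
        // A direct executor keeps the demo single-threaded; the real code uses a
        // handler-backed executor so callbacks land on the app's chosen thread.
        Executor direct = Runnable::run;
        StateCallback app = new StateCallback() {
            @Override public void onConfigured(Object s) { events.add("onConfigured"); }
            @Override public void onConfigureFailed(Object s) { events.add("onConfigureFailed"); }
        };
        StateCallback cb = proxy(direct, app);
        boolean configureSuccess = true;  // outcome of configureStreamsChecked
        if (configureSuccess) cb.onConfigured(new Object());
        else cb.onConfigureFailed(new Object());
        System.out.println(events);  // [onConfigured]
    }
}
```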

The formal-parameter callback is handed to mStateCallback for management; that is, the callback dispatch is deferred onto a thread/executor. Depending on whether configure succeeded, either onConfigured or onConfigureFailed is sent up to the app layer. If configuration succeeded, the app can then perform other actions, such as preview and capture requests. Next, let's focus on how configure actually performs the configuration:

public boolean configureStreamsChecked(InputConfiguration inputConfig,
            List<OutputConfiguration> outputs, int operatingMode, CaptureRequest sessionParams)
            throws CameraAccessException {
           // The outputs are copied into another collection here
                HashSet<OutputConfiguration> addSet = new HashSet<OutputConfiguration>(outputs);
            …..   
                waitUntilIdle();
                mRemoteDevice.beginConfigure();
                // This is the key part: createStream is executed once for each surface
                for (OutputConfiguration outConfig : outputs) {
                    if (addSet.contains(outConfig)) {
                        int streamId = mRemoteDevice.createStream(outConfig);
                        mConfiguredOutputs.put(streamId, outConfig);
                    }
                }
                if (sessionParams != null) {
                    mRemoteDevice.endConfigure(operatingMode, sessionParams.getNativeCopy());
                } else {
                    mRemoteDevice.endConfigure(operatingMode, null);
                }
                success = true;
        return success;
}
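The per-surface loop in configureStreamsChecked can be modeled with ordinary collections. A minimal sketch with hypothetical names (a String stands in for a surface, and createStream below stands in for the mRemoteDevice.createStream binder call):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the "one stream per surface" rule: every distinct output surface
// gets its own stream id, and asking for a second stream on the same surface
// is rejected.
public class StreamPerSurfaceDemo {
    static final Map<String, Integer> streamMap = new HashMap<>(); // surface -> streamId
    static int nextStreamId = 0;

    // Stand-in for mRemoteDevice.createStream(outConfig).
    static int createStream(String surface) {
        if (streamMap.containsKey(surface)) {
            throw new IllegalStateException("Surface already has a stream: " + surface);
        }
        int id = nextStreamId++;
        streamMap.put(surface, id);
        return id;
    }

    public static void main(String[] args) {
        // Two surfaces (e.g. preview + ImageReader) => two independent streams.
        for (String surface : List.of("previewSurface", "readerSurface")) {
            System.out.println(surface + " -> stream " + createStream(surface));
        }
    }
}
```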

Here inputConfig is null, so we don't need to worry about it for now. Notice that createStream is executed once for each OutputConfiguration in outputs; in other words, one stream must be configured for every surface passed into createCaptureSession at the app layer, because each surface may request a different data stream, so each one gets its own createStream. Two important functions appear here, createStream and endConfigure; let's analyze them step by step:

① The implementation of CameraDeviceClient::createStream:

binder::Status CameraDeviceClient::createStream(
        const hardware::camera2::params::OutputConfiguration &outputConfiguration,
        /*out*/ int32_t* newStreamId) {
    ATRACE_CALL();
    // The bufferProducers are obtained from the parameter. OutputConfiguration is a
    // wrapper around the surface (plus some stream attributes), and a gbp is the key
    // ingredient for constructing a Surface; a gbp is usually obtained precisely to
    // create a surface. ImageReader_init creates one gbp per surface, so
    // bufferProducers has size 1 here.
    const std::vector<sp<IGraphicBufferProducer>>& bufferProducers =
            outputConfiguration.getGraphicBufferProducers();
    std::vector<sp<Surface>> surfaces;
    std::vector<sp<IBinder>> binders;
    status_t err;
    // Create stream for deferred surface case.
    if (deferredConsumerOnly) {
        return createDeferredSurfaceStreamLocked(outputConfiguration, isShared, newStreamId);
    }
    OutputStreamInfo streamInfo;
    bool isStreamInfoValid = false;
    for (auto& bufferProducer : bufferProducers) {
        // Don't create multiple streams for the same target surface
        sp<IBinder> binder = IInterface::asBinder(bufferProducer);
        // Check whether this bufferProducer is already in mStreamMap (when a request
        // is sent later, the gbp is fetched from this map)
        ssize_t index = mStreamMap.indexOfKey(binder);
        // If found, a stream has already been created for this surface, and a
        // surface is only allowed one stream
        if (index != NAME_NOT_FOUND) {
            String8 msg = String8::format("Camera %s: Surface already has a stream created for it (ID %zd)", mCameraIdStr.string(), index);
            ALOGW("%s: %s", __FUNCTION__, msg.string());
            return STATUS_ERROR(CameraService::ERROR_ALREADY_EXISTS, msg.string());
        }
        sp<Surface> surface;
        // Create the surface from the gbp
        res = createSurfaceFromGbp(streamInfo, isStreamInfoValid, surface, bufferProducer);
        binders.push_back(IInterface::asBinder(bufferProducer));
        surfaces.push_back(surface);
    }
    int streamId = camera3::CAMERA3_STREAM_ID_INVALID;
    std::vector<int> surfaceIds;
    // Create a stream for the surfaces
    err = mDevice->createStream(surfaces, deferredConsumer, streamInfo.width,
            streamInfo.height, streamInfo.format, streamInfo.dataSpace,
            static_cast<camera3_stream_rotation_t>(outputConfiguration.getRotation()),
            &streamId, physicalCameraId, &surfaceIds, 
            outputConfiguration.getSurfaceSetID(), isShared);
    ……
    for (auto& binder : binders) {
            ALOGV("%s: mStreamMap add binder %p streamId %d, surfaceId %d",
                    __FUNCTION__, binder.get(), streamId, i);
            // This is important: the BufferQueueProducer is bound to its streamId and
            // surfaceId here; when a request is sent later, the corresponding
            // streamId and surfaceId are looked up through the gbp.
            mStreamMap.add(binder, StreamSurfaceId(streamId, surfaceIds[i]));
            i++;
     }
    // Save the streamId together with its outputConfiguration and streamInfo
    mConfiguredOutputs.add(streamId, outputConfiguration);
    mStreamInfoMap[streamId] = streamInfo;
}

There are two important member variables above, mConfiguredOutputs and mStreamMap: mConfiguredOutputs stores streamId → OutputConfiguration (which carries the gbp) pairs, and mStreamMap binds each gbp's binder to its surfaceId and streamId. When a request is sent, the gbp can first be obtained through CameraDeviceClient::mConfiguredOutputs, then the gbp's IBinder object is looked up in mStreamMap to find the corresponding surfaceId and streamId, and with the streamId the configured stream can be found in Camera3Device::mOutputStreams.
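This lookup chain can be modeled with two plain maps. The sketch below is illustrative only: a String stands in for the binder and for the OutputConfiguration, and StreamSurfaceId is reduced to a simple value class:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the two bookkeeping tables: mStreamMap binds a producer's
// binder to its (streamId, surfaceId) pair, and mConfiguredOutputs maps streamId
// to its configuration. At request time the target's binder leads to the stream
// id, and the stream id leads to the configured stream.
public class StreamMapDemo {
    static class StreamSurfaceId {
        final int streamId, surfaceId;
        StreamSurfaceId(int streamId, int surfaceId) {
            this.streamId = streamId;
            this.surfaceId = surfaceId;
        }
    }

    static final Map<String, StreamSurfaceId> mStreamMap = new HashMap<>();
    static final Map<Integer, String> mConfiguredOutputs = new HashMap<>();

    // Mirrors the bookkeeping at the end of createStream.
    static void bind(String binder, int streamId, int surfaceId, String outConfig) {
        mStreamMap.put(binder, new StreamSurfaceId(streamId, surfaceId));
        mConfiguredOutputs.put(streamId, outConfig);
    }

    public static void main(String[] args) {
        bind("binder-A", 0, 0, "preview OutputConfiguration");
        // Request time: binder -> (streamId, surfaceId) -> configured stream.
        StreamSurfaceId ids = mStreamMap.get("binder-A");
        System.out.println("streamId=" + ids.streamId
                + " -> " + mConfiguredOutputs.get(ids.streamId));
    }
}
```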

First, the important parts of CameraDeviceClient::createSurfaceFromGbp:

{
    surface = new Surface(gbp, useAsync);
    ANativeWindow *anw = surface.get();
    err = anw->query(anw, NATIVE_WINDOW_WIDTH, &width);
    if (!isStreamInfoValid) {
        // First surface in this configuration: record its properties
        streamInfo.width = width;
        streamInfo.height = height;
        streamInfo.format = format;
    } else if (width != streamInfo.width) {
        // Subsequent (shared) surfaces must match the recorded streamInfo
        String8 msg = String8::format("Camera %s:Surface width doesn't match: %d vs %d",
                mCameraIdStr.string(), width, streamInfo.width);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT, msg.string());
    }
}

1. Get the GraphicBufferProducer from the output and create a Surface from it;

2. Then call the query function to query the gbp's width, height, and format. The query flow is:

ANativeWindow::query->Surface::query->BufferQueueProducer::query (binder communication omitted). The width, height, and format in the gbp were set in ImageReader_init:

{
   res = bufferConsumer->setDefaultBufferSize(width, height);
   res = bufferConsumer->setDefaultBufferFormat(nativeFormat);
}

This eventually reaches BufferQueueConsumer, which executes the corresponding methods and sets the width, height, and format in mCore (the heart of the BufferQueue). Since the gbp and gbc share the same mCore for buffer management, this also sets the width, height, and format of the gbp's buffers.

3. Store the gbp's width, height, and format obtained from the query above in streamInfo;

4. Finally, call isPublicFormat and roundBufferDimensionNearest to check whether any stream supports the above width, height, and format.
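The shared-core relationship from step 2 can be modeled as two facades over one object. This is a toy model, not the real BufferQueue; the class names are made up, and the setters stand in for setDefaultBufferSize/setDefaultBufferFormat:

```java
// Toy model of a BufferQueue: producer and consumer are two views over one
// shared core, so defaults set through the consumer side (as ImageReader_init
// does) become visible to the producer's query().
public class SharedCoreDemo {
    // Stand-in for BufferQueueCore (mCore).
    static class Core { int width, height, format; }

    static class Consumer {
        final Core core;
        Consumer(Core c) { core = c; }
        void setDefaultBufferSize(int w, int h) { core.width = w; core.height = h; }
        void setDefaultBufferFormat(int f) { core.format = f; }
    }

    static class Producer {
        final Core core;
        Producer(Core c) { core = c; }
        int queryWidth() { return core.width; }
        int queryFormat() { return core.format; }
    }

    public static void main(String[] args) {
        Core core = new Core();
        Consumer gbc = new Consumer(core);  // consumer side (ImageReader)
        Producer gbp = new Producer(core);  // producer side (queried by camera)
        gbc.setDefaultBufferSize(1920, 1080);
        gbc.setDefaultBufferFormat(0x21);   // example format constant
        // The producer sees what the consumer set, through the shared core.
        System.out.println(gbp.queryWidth() + "x" + core.height
                + " 0x" + Integer.toHexString(gbp.queryFormat()));
    }
}
```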

A question here: why create a surface at this point? Doesn't the app layer pass one in? Because the camera framework talks to cameraserver over AIDL, and sending a full Surface through the Parcel would add an extra inter-process round trip; creating the surface from the gbp on the server side saves that communication cost.

Next, the important parts of Camera3Device::createStream:

{
   sp<Camera3OutputStream> newStream;
    if (format == HAL_PIXEL_FORMAT_BLOB) {
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, blobBufferSize, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
   } else if (format == HAL_PIXEL_FORMAT_RAW_OPAQUE) {
     newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, rawOpaqueBufferSize, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
  }
  // Put newStream and its streamId into mOutputStreams for management
  res = mOutputStreams.add(mNextStreamId, newStream);
  // Ensure that every surface gets a distinct streamId
   *id = mNextStreamId++;

   if (wasActive) {
        ALOGV("%s: Restarting activity to reconfigure streams", __FUNCTION__);
        // Reuse current operating mode and session parameters for new stream config
        res = configureStreamsLocked(mOperatingMode, mSessionParams);
        if (res != OK) {
            CLOGE("Can't reconfigure device for new stream %d: %s (%d)",
                    mNextStreamId, strerror(-res), res);
            return res;
        }
        internalResumeLocked();
    }
}

This mainly does the following:

1. Create a Camera3OutputStream object according to the gbp's format, i.e. configure an output stream for the surface created in createSurfaceFromGbp;

2. Since the app layer may request multiple streams, multiple Camera3OutputStream objects may be created, so the member variable mOutputStreams keeps track of all of them;

3. Each requested output stream is tagged with a streamId (monotonically increasing, so every streamId is unique);

4. If the device was already active (wasActive), configureStreamsLocked is called to reconfigure the streams for the new configuration, and activity is resumed.
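The id bookkeeping in steps 2 and 3 can be sketched as follows (plain Java, hypothetical names; a String stands in for the Camera3OutputStream object):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the bookkeeping in Camera3Device::createStream: each new output
// stream is stored in mOutputStreams under a monotonically increasing id
// (mNextStreamId++), so no two surfaces ever share a stream id.
public class StreamIdAllocDemo {
    static final Map<Integer, String> mOutputStreams = new HashMap<>();
    static int mNextStreamId = 0;

    static int createStream(String streamDesc) {
        mOutputStreams.put(mNextStreamId, streamDesc);
        return mNextStreamId++;  // caller receives this id; the next stream gets id+1
    }

    public static void main(String[] args) {
        int preview = createStream("Camera3OutputStream(preview)");
        int jpeg = createStream("Camera3OutputStream(BLOB/jpeg)");
        System.out.println(preview + " " + jpeg);  // 0 1
    }
}
```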

To summarize so far: a surface is created from the gbp, some of the gbp's information is stored in streamInfo, Camera3Device::createStream then assigns a streamId for each streamInfo, and finally the gbp, surfaceId, and streamId are bound together in mStreamMap. Later, when a request is sent, mStreamMap tells us which surfaceId and streamId each gbp corresponds to.

That concludes the createStream analysis; now let's go back and analyze CameraDeviceClient::endConfigure.

② The execution flow of CameraDeviceClient::endConfigure:

CameraDeviceClient::endConfigure->Camera3Device::configureStreams->filterParamsAndConfigureLocked->configureStreamsLocked.

Let's look at the important parts of Camera3Device::configureStreamsLocked:

for (size_t i = 0; i < mOutputStreams.size(); i++) {
        // Don't configure bidi streams twice, nor add them twice to the list
        if (mOutputStreams[i].get() ==
            static_cast<Camera3StreamInterface*>(mInputStream.get())) {
            config.num_streams--;
            continue;
        }
        camera3_stream_t *outputStream;
        // Returns this, i.e. a Camera3Stream base-class pointer; the actual object is a Camera3OutputStream
        outputStream = mOutputStreams.editValueAt(i)->startConfiguration();
        if (outputStream == NULL) {
            CLOGE("Can't start output stream configuration");
            cancelStreamsConfigurationLocked();
            return INVALID_OPERATION;
        }
        // Save it
        streams.add(outputStream);
        if (outputStream->format == HAL_PIXEL_FORMAT_BLOB &&
                outputStream->data_space == HAL_DATASPACE_V0_JFIF) {
            size_t k = i + ((mInputStream != nullptr) ? 1 : 0);
            bufferSizes[k] = static_cast<uint32_t>(
                    getJpegBufferSize(outputStream->width, outputStream->height));
        }
        // ifdef VENDOR_EDIT
        // added by jianlin3.liu@tcl.com, 2021/07/29, support h264 request
        else if (outputStream->format == HAL_PIXEL_FORMAT_H264) {
            size_t k = i + ((mInputStream != nullptr) ? 1 : 0); 
            bufferSizes[k] = static_cast<uint32_t>(
                    getH264BufferSize(outputStream->width, outputStream->height));
        }
        // endif VENDOR_EDIT
}
const camera_metadata_t *sessionBuffer = sessionParams.getAndLock();
// From here we step down, layer by layer, into the HAL
res = mInterface->configureStreams(sessionBuffer, &config, bufferSizes);
for (size_t i = 0; i < mOutputStreams.size(); i++) {
        sp<Camera3OutputStreamInterface> outputStream =
            mOutputStreams.editValueAt(i);
        if (outputStream->isConfiguring() && !outputStream->isConsumerConfigurationDeferred()) {
            res = outputStream->finishConfiguration();
            if (res != OK) {
                CLOGE("Can't finish configuring output stream %d: %s (%d)",
                        outputStream->getId(), strerror(-res), res);
                cancelStreamsConfigurationLocked();
                return BAD_VALUE;
            }
        }
}

Here mInterface->configureStreams is called to go down into the HAL layer for further configuration, for example making the following V4L2 preparations:

VIDIOC_S_FMT

VIDIOC_REQBUFS

VIDIOC_QUERYBUF

VIDIOC_QBUF

VIDIOC_STREAMON

Next, outputStream->finishConfiguration() is executed for each outputStream. It goes through the following flow:

Camera3Stream::finishConfiguration->Camera3OutputStream::configureQueueLocked->Camera3OutputStream::configureConsumerQueueLocked. Let's look at the important parts:

{
    if (mMaxSize == 0) {
        // For buffers of known size
        res = native_window_set_buffers_dimensions(mConsumer.get(),
               camera3_stream::width, camera3_stream::height);
    } else {
        // For buffers with bounded size
        res = native_window_set_buffers_dimensions(mConsumer.get(), mMaxSize, 1);
    }
    if (res != OK) {
        ALOGE("%s: Unable to configure stream buffer dimensions"
                " %d x %d (maxSize %zu) for stream %d",
                __FUNCTION__, camera3_stream::width, camera3_stream::height,
                mMaxSize, mId);
        return res;
    }
    res = native_window_set_buffers_format(mConsumer.get(), camera3_stream::format);
    int maxConsumerBuffers;
    res = static_cast<ANativeWindow*>(mConsumer.get())->query(
       mConsumer.get(), NATIVE_WINDOW_MIN_UNDEQUEUED_BUFFERS, &maxConsumerBuffers);
    ALOGV("%s: Consumer wants %d buffers, HAL wants %d", __FUNCTION__,
            maxConsumerBuffers, camera3_stream::max_buffers);
     mTotalBufferCount = maxConsumerBuffers + camera3_stream::max_buffers;
     res = native_window_set_buffer_count(mConsumer.get(), mTotalBufferCount);
}

This mainly does the following:

1. Re-set the surface's width, height, format, etc.

Why set them again, when they were already set in createSurfaceFromGbp?

Because the HAL layer may have modified them during configure, so they need to be set once more.

2. Query the surface (via the gbp) for the consumer's minimum undequeued buffer count (NATIVE_WINDOW_MIN_UNDEQUEUED_BUFFERS);

3. Set the buffer count to maxConsumerBuffers + camera3_stream::max_buffers.
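The buffer budget from step 3 is simple arithmetic; the sketch below uses made-up example values, since the real numbers come from the consumer query and the HAL's stream configuration:

```java
// The buffer budget computed in configureConsumerQueueLocked: the consumer's
// minimum undequeued buffer count plus the HAL's max_buffers gives the total
// passed to native_window_set_buffer_count.
public class BufferCountDemo {
    static int totalBufferCount(int maxConsumerBuffers, int halMaxBuffers) {
        return maxConsumerBuffers + halMaxBuffers;
    }

    public static void main(String[] args) {
        int maxConsumerBuffers = 2;  // NATIVE_WINDOW_MIN_UNDEQUEUED_BUFFERS (example value)
        int halMaxBuffers = 4;       // camera3_stream::max_buffers (example value)
        System.out.println(totalBufferCount(maxConsumerBuffers, halMaxBuffers)); // 6
    }
}
```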

To summarize:

① configure sets up a stream for each surface passed in by the app, then binds the gbp, streamId, and surfaceId together so they can be retrieved when a request is sent;

② It tells the HAL layer what format of buffers to fetch from the camera. Done!

That wraps up this configure flow analysis; I'll extend it later based on issues encountered in real projects. Bye~
