Camera2 Preview Flow

1. Preview API Calls

1.1 Obtaining the Preview Size

The supported preview sizes are queried from CameraCharacteristics.
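As a minimal sketch (assuming a cameraManager and cameraId are already in scope), the sizes supported for a SurfaceTexture output can be listed via the StreamConfigurationMap:

// Query the output sizes supported for a SurfaceTexture-backed preview.
val characteristics = cameraManager.getCameraCharacteristics(cameraId)
val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
val previewSizes: Array<Size> = map.getOutputSizes(SurfaceTexture::class.java)
// A real app matches the view's aspect ratio; picking the largest is a placeholder.
val previewSize = previewSizes.maxByOrNull { it.width * it.height }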

1.2 Configuring the Preview Size

Camera2 attaches the size information to the Surface itself, for example the SurfaceTexture that receives the preview frames, or the ImageReader that receives still-capture images. When the camera outputs image data, it produces frames whose size matches the buffer size configured on the Surface.
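A minimal sketch of both cases (previewSize and captureSize are assumed to be sizes chosen from the StreamConfigurationMap in 1.1):

// Preview: the buffer size is configured on the SurfaceTexture behind the Surface.
surfaceTexture.setDefaultBufferSize(previewSize.width, previewSize.height)
val previewSurface = Surface(surfaceTexture)

// Still capture: the ImageReader is created with the desired size,
// and its Surface is handed to the camera in the same way.
val imageReader = ImageReader.newInstance(
        captureSize.width, captureSize.height, ImageFormat.JPEG, /*maxImages*/ 2)
val captureSurface = imageReader.surface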

1.3 Creating the CameraCaptureSession

Once the Surface that receives the preview frames is ready, the next step is to use it to create a CameraCaptureSession instance via CameraDevice.createCaptureSession(), which takes the following three parameters:
outputs: all the Surfaces that will receive image data, e.g. the preview Surface in this chapter and, later, the Surface used for still capture. These Surfaces must be ready before the session is created, and they are passed down at session-creation time so the pipeline can be configured.
callback: a CameraCaptureSession.StateCallback object that monitors the session state. Just as with opening and closing the camera, creating and destroying a session requires registering a state listener.
handler: the Handler on which the CameraCaptureSession.StateCallback runs; it can belong to a background thread or to the main thread.

1.4 Creating the CaptureRequest

A CaptureRequest is the information carrier for a capture request submitted to a CameraCaptureSession; it contains the parameter configuration for this capture and the Surfaces that receive the image data. A CaptureRequest can carry a great deal of configuration: image format, image resolution, sensor control, flash control, 3A control, and so on. A CaptureRequest.Builder is created via CameraDevice.createCaptureRequest(), whose single parameter templateType specifies which template the builder is based on.
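A minimal sketch of building a preview request (cameraDevice and surface come from the preceding steps; the 3A keys are just examples of what the builder can configure):

val builder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
builder.addTarget(surface)  // the Surface that will receive the frames
// Example 3A controls: continuous autofocus and auto-exposure.
builder.set(CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
builder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON)
val previewRequest = builder.build()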

1.5 Starting and Stopping the Preview

In Camera2, a preview is essentially a capture operation that repeats continuously; each capture outputs one preview frame to the corresponding Surface. The method involved is CameraCaptureSession.setRepeatingRequest(), which takes three parameters:
request: the CaptureRequest object used by the repeating captures.
callback: a CameraCaptureSession.CaptureCallback object that monitors the state of each capture; for example, onCaptureStarted() marks the start of a capture and onCaptureCompleted() marks its end.
handler: the Handler on which the CameraCaptureSession.CaptureCallback runs; it can belong to a background thread or to the main thread. Our demo uses the main-thread Handler.
To stop the preview, call CameraCaptureSession.stopRepeating() to cancel the repeating capture: captureSession.stopRepeating(). At this point, if everything went well, the preview should already be on screen.
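A minimal sketch of the start/stop pair (captureSession, previewRequest, captureCallback and backgroundHandler as in the code below):

// Start: repeat the capture until told otherwise.
captureSession?.setRepeatingRequest(previewRequest, captureCallback, backgroundHandler)
// Stop: cancel the repeating capture; close the session/device to tear down fully.
captureSession?.stopRepeating()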

1.6 API Call Code Walkthrough

/**
 * Creates a new [CameraCaptureSession] for the camera preview.
 */
private fun createCameraPreviewSession() {
   try {
       val texture = textureView.surfaceTexture
       // Configure the default buffer size to the desired preview size.
       texture.setDefaultBufferSize(previewSize.width, previewSize.height)
       // This is the output Surface we need to start the preview on.
       val surface = Surface(texture)
       // Associate the CaptureRequest.Builder with the output Surface.
       previewRequestBuilder = cameraDevice!!.createCaptureRequest(
               CameraDevice.TEMPLATE_PREVIEW
       )
       previewRequestBuilder.addTarget(surface)
       // Create a CameraCaptureSession for the camera preview.
       cameraDevice?.createCaptureSession(Arrays.asList(surface),
               object : CameraCaptureSession.StateCallback() {
                   override fun onConfigured(cameraCaptureSession: CameraCaptureSession) {
                       // The session is ready; start displaying the preview.
                       captureSession = cameraCaptureSession
                       try {
                           // Auto-focus should be continuous for a camera preview.
                           previewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                   CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
                           // Finally, start displaying the camera preview.
                           previewRequest = previewRequestBuilder.build()
                           captureSession?.setRepeatingRequest(previewRequest,
                                   captureCallback, backgroundHandler)
                       } catch (e: CameraAccessException) {
                           Log.e(TAG, e.toString())
                       }
                   }

                   override fun onConfigureFailed(session: CameraCaptureSession) {
                       activity.showToast("Failed")
                   }
               }, null)
   } catch (e: CameraAccessException) {
       Log.e(TAG, e.toString())
   }
}

2. configureStreams Flow

3. Preview Flow

Camera2 Preview Flow Analysis (Part 1)

1. Outline of the camera preview flow.

1). Get the SurfaceTexture from the TextureView;
2). Configure the SurfaceTexture's default buffer size to the camera preview size;
3). Create a new Surface as the preview output;
4). Set the Surface on the CaptureRequest.Builder;
5). Create a CameraCaptureSession for the camera preview;
6). After the CameraCaptureSession is created successfully, set auto-focus on the CaptureRequest.Builder, and enable the flash when necessary;
7). The camera preview can now be displayed: the CameraCaptureSession sets a repeating request for preview data;

CameraCaptureSession.java

When the camera starts the preview, CameraCaptureSession.setRepeatingRequest() is called; its implementation is provided by CameraCaptureSessionImpl.

File location: frameworks/base/core/java/android/hardware/camera2

// The actual implementation is provided by CameraCaptureSessionImpl
public abstract int setRepeatingRequest(@NonNull CaptureRequest request,
        @Nullable CaptureCallback listener, @Nullable Handler handler)

CameraCaptureSessionImpl.java

As noted above, the concrete implementation of setRepeatingRequest lives in CameraCaptureSessionImpl.

File location: frameworks/base/core/java/android/hardware/camera2/impl

@Override
public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
            Handler handler) throws CameraAccessException {
    synchronized (mDeviceImpl.mInterfaceLock) {
        // addPendingSequence tracks the sequence returned by
        // CameraDeviceImpl.setRepeatingRequest()
        return addPendingSequence(mDeviceImpl.setRepeatingRequest(request,
                createCaptureCallbackProxy(handler, callback), mDeviceExecutor));
    }
}

The first parameter is a single CaptureRequest; further down the call chain it is wrapped into a List.

2.CameraDeviceImpl.java–>createCaptureRequest()

The CameraMetadataNative class is the implementation of camera metadata that is marshaled across Binder to the camera service.

The key logic here is the call to createDefaultRequest(...) on the object referenced by mRemoteDevice, with templateType = CameraDevice.TEMPLATE_PREVIEW (whose value is 1). The resulting CameraMetadataNative object is then passed into the CaptureRequest.Builder constructor, and the builder is returned to the caller.

File path: frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

@Override
public CaptureRequest.Builder createCaptureRequest(int templateType,
        Set<String> physicalCameraIdSet)
        throws CameraAccessException {
    synchronized(mInterfaceLock) {
        CameraMetadataNative templatedRequest = null;
        // Ask the remote device (in the camera service) for a default request template
        templatedRequest = mRemoteDevice.createDefaultRequest(templateType);
        CaptureRequest.Builder builder = new CaptureRequest.Builder(
                templatedRequest, /*reprocess*/false, CameraCaptureSession.SESSION_ID_NONE,
                getId(), physicalCameraIdSet);
        return builder;
    }
}

First, where does mRemoteDevice get assigned?
In CameraDeviceImpl.setRemoteDevice(...), an object implementing the ICameraDeviceUser interface is created by calling CameraBinderDecorator.newInstance(...).

3.CameraDeviceImpl.java–>setRemoteDevice()

public class CameraDeviceImpl extends CameraDevice {
    ......
    public void setRemoteDevice(ICameraDeviceUser remoteDevice) {
        synchronized(mInterfaceLock) {
            mRemoteDevice = CameraBinderDecorator.newInstance(remoteDevice);
            mDeviceHandler.post(mCallOnOpened);
            mDeviceHandler.post(mCallOnUnconfigured);
        }
    }
    ......
}

The static method CameraBinderDecorator.newInstance(...) is a generic method. Internally it simply delegates to the generic class Decorator's newInstance(...), whose parameters are the T-typed object plus a CameraBinderDecoratorListener object.

4.CameraBinderDecorator.java

File path: frameworks/base/core/java/android/hardware/camera2/utils/CameraBinderDecorator.java

public class CameraBinderDecorator {
    ......
    static class CameraBinderDecoratorListener implements Decorator.DecoratorListener {

        @Override
        public void onBeforeInvocation(Method m, Object[] args) {
        }

        @Override
        public void onAfterInvocation(Method m, Object[] args, Object result) {
            // int return type => status_t => convert to exception
            if (m.getReturnType() == Integer.TYPE) {
                int returnValue = (Integer) result;
                throwOnError(returnValue);
            }
        }

        @Override
        public boolean onCatchException(Method m, Object[] args, Throwable t) {

            if (t instanceof DeadObjectException) {
                throw new CameraRuntimeException(CAMERA_DISCONNECTED,
                        "Process hosting the camera service has died unexpectedly",
                        t);
            } else if (t instanceof RemoteException) {
                throw new UnsupportedOperationException("An unknown RemoteException was thrown" +
                        " which should never happen.", t);
            }

            return false;
        }

        @Override
        public void onFinally(Method m, Object[] args) {
        }

    }    
    ......
    public static <T> T newInstance(T obj) {
        return Decorator.<T> newInstance(obj, new CameraBinderDecoratorListener());
    }
}

Followed to this point, the code becomes clear: it uses Java's dynamic proxy mechanism. The object implementing ICameraDeviceUser that is finally returned is actually a dynamic proxy that forwards to the object referenced by remoteDevice. Why take such a detour? A dynamic proxy can add extra behavior (here, converting status codes into exceptions) without changing the methods of the original object.

5.Decorator.java–>newInstance()

File path: frameworks/base/core/java/android/hardware/camera2/utils/Decorator.java

public class Decorator<T> implements InvocationHandler {
    ......
    @SuppressWarnings("unchecked")
    public static<T> T newInstance(T obj, DecoratorListener listener) {
        return (T)java.lang.reflect.Proxy.newProxyInstance(
                obj.getClass().getClassLoader(),
                obj.getClass().getInterfaces(),
                new Decorator<T>(obj, listener));
    }
    ......
}
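The invoke(...) body of Decorator is elided above. The following Kotlin sketch illustrates the same hook pattern (HookListener mirrors the DecoratorListener shown earlier; this is illustrative, not the AOSP source):

import java.lang.reflect.InvocationHandler
import java.lang.reflect.InvocationTargetException
import java.lang.reflect.Method
import java.lang.reflect.Proxy

// Hypothetical listener mirroring the DecoratorListener hooks shown above.
interface HookListener {
    fun onBeforeInvocation(m: Method, args: Array<Any?>)
    fun onAfterInvocation(m: Method, args: Array<Any?>, result: Any?)
    fun onCatchException(m: Method, args: Array<Any?>, t: Throwable): Boolean
    fun onFinally(m: Method, args: Array<Any?>)
}

// Every call on the proxy funnels through invoke(), which wraps the real call
// with the listener hooks -- extra behavior without touching the target object.
class DecoratorSketch<T : Any>(
        private val target: T,
        private val listener: HookListener
) : InvocationHandler {
    override fun invoke(proxy: Any?, method: Method, args: Array<Any?>?): Any? {
        val a = args ?: emptyArray()
        listener.onBeforeInvocation(method, a)
        try {
            val result = method.invoke(target, *a)  // forward to the wrapped object
            listener.onAfterInvocation(method, a, result)
            return result
        } catch (e: InvocationTargetException) {
            val cause = e.cause ?: e
            if (!listener.onCatchException(method, a, cause)) throw cause
            return null
        } finally {
            listener.onFinally(method, a)
        }
    }

    companion object {
        @Suppress("UNCHECKED_CAST")
        fun <T : Any> newInstance(obj: T, listener: HookListener): T =
                Proxy.newProxyInstance(
                        obj.javaClass.classLoader,
                        obj.javaClass.interfaces,
                        DecoratorSketch(obj, listener)) as T
    }
}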

Next: when is CameraDeviceImpl.setRemoteDevice(...) called?

If you remember the openCamera(...) flow, the assignment happens in CameraManager.openCameraDeviceUserAsync(...). From that analysis, mRemoteDevice actually points to an object of type ICameraDeviceUser.Stub.Proxy.

Now we can continue analyzing the createDefaultRequest(...) call flow. ICameraDeviceUser.Stub and ICameraDeviceUser.Stub.Proxy are generated by compiling ICameraDeviceUser.aidl. The call eventually reaches BpCameraDeviceUser::createDefaultRequest(...) on the native side.

6.ICameraDeviceUser.cpp–>createDefaultRequest()

File path: frameworks/av/camera/camera2/ICameraDeviceUser.cpp

class BpCameraDeviceUser : public BpInterface<ICameraDeviceUser>
{
public:
    ......
    // Create a request object from a template
    virtual status_t createDefaultRequest(int templateId,
                                          /*out*/
                                          CameraMetadata* request)
    {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraDeviceUser::getInterfaceDescriptor());
        data.writeInt32(templateId);
        remote()->transact(CREATE_DEFAULT_REQUEST, data, &reply);

        reply.readExceptionCode();
        status_t result = reply.readInt32();

        CameraMetadata out;
        if (reply.readInt32() != 0) {
            out.readFromParcel(&reply);
        }

        if (request != NULL) {
            request->swap(out);
        }
        return result;
    }
}

BpCameraDeviceUser::createDefaultRequest(...) transacts to the method of the same name on BnCameraDeviceUser.

7.ICameraDeviceUser.h–>onTransact()

File path: frameworks/av/include/camera/camera2/ICameraDeviceUser.h

class BnCameraDeviceUser: public BnInterface<ICameraDeviceUser>
{
public:
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
};

BnCameraDeviceUser::onTransact(...) receives the CREATE_DEFAULT_REQUEST message and handles it by calling createDefaultRequest(...), which is actually implemented by a subclass of BnCameraDeviceUser.

8.ICameraDeviceUser.cpp–>onTransact()

status_t BnCameraDeviceUser::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        ......
        case CREATE_DEFAULT_REQUEST: {
            CHECK_INTERFACE(ICameraDeviceUser, data, reply);
            int templateId = data.readInt32();
            CameraMetadata request;
            status_t ret;
            ret = createDefaultRequest(templateId, &request);
            reply->writeNoException();
            reply->writeInt32(ret);
            // out-variables are after exception and return value
            reply->writeInt32(1); // to mark presence of metadata object
            request.writeToParcel(const_cast<Parcel*>(reply));
            return NO_ERROR;
        } break;
        ......
   }
}

The inheritance chain is:

CameraDeviceClient -> Camera2ClientBase -> CameraDeviceClientBase -> BnCameraDeviceUser

mDevice is initialized in the Camera2ClientBase constructor; combined with the openCamera flow, we can assume mDevice points to a Camera3Device object.

9.CameraDeviceClient.cpp–>createDefaultRequest()

File path: frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp

// Create a request object from a template.
status_t CameraDeviceClient::createDefaultRequest(int templateId,
                                                  /*out*/
                                                  CameraMetadata* request)
{
    ATRACE_CALL();
    status_t res;
    if ( (res = checkPid(__FUNCTION__) ) != OK) return res;
    Mutex::Autolock icl(mBinderSerializationLock);
    if (!mDevice.get()) return DEAD_OBJECT;
    CameraMetadata metadata;
    if ( (res = mDevice->createDefaultRequest(templateId, &metadata) ) == OK &&
        request != NULL) {
        request->swap(metadata);
    }
    return res;
}

Camera3Device::createDefaultRequest(...) ultimately invokes the construct_default_request_settings function pointer in the camera3_device_ops_t structure that the ops field of camera3_device_t points to (vendors must implement it).
The contract of this function pointer is defined as follows:
1). Create capture settings for standard camera use cases.
2). The device must return a settings buffer configured to satisfy the requested use case, which must be one of the CAMERA3_TEMPLATE_* enums. All request control fields must be included.
3). The HAL retains ownership of this structure, but the pointer to the structure must remain valid until the device is closed. The framework and the HAL may not modify the buffer once it is returned by this call. The same buffer may be returned for subsequent calls for the same template, or for other templates.

10.Camera3Device.cpp–>createDefaultRequest()

File path: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::createDefaultRequest(int templateId,
        CameraMetadata *request) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);

    switch (mStatus) {
        case STATUS_ERROR:
            CLOGE("Device has encountered a serious error");
            return INVALID_OPERATION;
        case STATUS_UNINITIALIZED:
            CLOGE("Device is not initialized!");
            return INVALID_OPERATION;
        case STATUS_UNCONFIGURED:
        case STATUS_CONFIGURED:
        case STATUS_ACTIVE:
            // OK
            break;
        default:
            SET_ERR_L("Unexpected status: %d", mStatus);
            return INVALID_OPERATION;
    }

    if (!mRequestTemplateCache[templateId].isEmpty()) {
        *request = mRequestTemplateCache[templateId];
        return OK;
    }

    const camera_metadata_t *rawRequest;
    ATRACE_BEGIN("camera3->construct_default_request_settings");
    rawRequest = mHal3Device->ops->construct_default_request_settings(
        mHal3Device, templateId);
    ATRACE_END();
    if (rawRequest == NULL) {
        ALOGI("%s: template %d is not supported on this camera device",
              __FUNCTION__, templateId);
        return BAD_VALUE;
    }
    *request = rawRequest;
    mRequestTemplateCache[templateId] = rawRequest;

    return OK;
}

Setting the Surface on the CaptureRequest.Builder adds a Surface to the request's target list (a HashSet). When the request is issued to the camera device, every added Surface must be one of the Surfaces included in the most recent call to CameraDevice#createCaptureSession.

11.CaptureRequest.java–>CaptureRequest()

File path: frameworks/base/core/java/android/hardware/camera2/CaptureRequest.java

public final class CaptureRequest extends CameraMetadata<CaptureRequest.Key<?>>
        implements Parcelable {
    ......
    private final HashSet<Surface> mSurfaceSet;    
    ......
    private CaptureRequest(CameraMetadataNative settings, boolean isReprocess,
            int reprocessableSessionId) {
        mSettings = CameraMetadataNative.move(settings);
        mSurfaceSet = new HashSet<Surface>();
        mIsReprocess = isReprocess;
        if (isReprocess) {
            mReprocessableSessionId = reprocessableSessionId;
        } else {
            mReprocessableSessionId = CameraCaptureSession.SESSION_ID_NONE;
        }
    }
    ......
    public final static class Builder {

        private final CaptureRequest mRequest;  
        
        public Builder(CameraMetadataNative template, boolean reprocess,
                int reprocessableSessionId) {
            mRequest = new CaptureRequest(template, reprocess, reprocessableSessionId);
        }        
        
        public void addTarget(@NonNull Surface outputTarget) {
            mRequest.mSurfaceSet.add(outputTarget);
        }     
        ......
    }
    ......
}

Creating the CameraCaptureSession for the camera preview:

  • Each Surface is wrapped into an OutputConfiguration (a class describing a camera output, holding a Surface and its specific configuration for creating a capture session)
  • The internal method createCaptureSessionInternal(...) is then called

12.CameraDeviceImpl.java–>createCaptureSession()

This creates the preview session; the output destinations are Surfaces. There are typically two: a Surface that displays the preview image, and a Surface that receives the preview data.

File path: frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

public class CameraDeviceImpl extends CameraDevice {
    ......
    @Override
    public void createCaptureSession(List<Surface> outputs,
            CameraCaptureSession.StateCallback callback, Handler handler)
            throws CameraAccessException {
        List<OutputConfiguration> outConfigurations = new ArrayList<>(outputs.size());
        for (Surface surface : outputs) {
            outConfigurations.add(new OutputConfiguration(surface));
        }
        // createCaptureSessionInternal
        createCaptureSessionInternal(null, outConfigurations, callback, handler,
                /*isConstrainedHighSpeed*/false);
    }    
    ......
}
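At the application layer this is simply a matter of putting both Surfaces into the outputs list; a minimal sketch (previewSurface and imageReader as set up in section 1):

// One Surface displays the preview; the ImageReader's Surface receives the data.
val outputs = listOf(previewSurface, imageReader.surface)
cameraDevice.createCaptureSession(outputs, stateCallback, backgroundHandler)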

Core steps:

  • Call configureStreamsChecked(...) to configure the streams, then block until IDLE
  • Create the CameraCaptureSessionImpl object

13.CameraDeviceImpl.java–>createCaptureSessionInternal()

File path: frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

public class CameraDeviceImpl extends CameraDevice {
    ......
    private void createCaptureSessionInternal(InputConfiguration inputConfig,
            List<OutputConfiguration> outputConfigurations,
            CameraCaptureSession.StateCallback callback, Handler handler,
            boolean isConstrainedHighSpeed) throws CameraAccessException {
        synchronized(mInterfaceLock) {
            ......
            // TODO: dont block for this
            boolean configureSuccess = true;
            CameraAccessException pendingException = null;
            Surface input = null;
            try {
                // Configure the streams via configureStreamsChecked, then block until IDLE
                configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations,
                        isConstrainedHighSpeed);
                if (configureSuccess == true && inputConfig != null) {
                    input = new Surface();
                    try {
                        mRemoteDevice.getInputSurface(/*out*/input);
                    } catch (CameraRuntimeException e) {
                        e.asChecked();
                    }
                }
            } catch (CameraAccessException e) {
                configureSuccess = false;
                pendingException = e;
                input = null;
            } catch (RemoteException e) {
                // impossible
                return;
            }

            List<Surface> outSurfaces = new ArrayList<>(outputConfigurations.size());
            for (OutputConfiguration config : outputConfigurations) {
                outSurfaces.add(config.getSurface());
            }
            // Fire onConfigured if configureOutputs succeeded, otherwise fire onConfigureFailed.
            CameraCaptureSessionCore newSession = null;
            if (isConstrainedHighSpeed) {
                newSession = new CameraConstrainedHighSpeedCaptureSessionImpl(mNextSessionId++,
                        outSurfaces, callback, handler, this, mDeviceHandler, configureSuccess,
                        mCharacteristics);
            } else {
                // This branch is taken for a normal (non-high-speed) session
                newSession = new CameraCaptureSessionImpl(mNextSessionId++, input,
                        outSurfaces, callback, handler, this, mDeviceHandler,
                        configureSuccess);
            }
            // TODO: wait until current session closes, then create the new session
            mCurrentSession = newSession;
            mSessionStateCallback = mCurrentSession.getDeviceStateCallback();
        }
    }    
    ......
}

14.CameraDeviceImpl.java–>configureStreamsChecked()

configureStreamsChecked(...) attempts to configure the inputs and outputs:
the device goes idle, and then the new inputs and outputs are configured where possible.
To add all new streams, the outputs list is traversed, and for each entry a cross-process call is made to CameraDeviceClient::createStream(...).

File path: frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

public boolean configureStreamsChecked(InputConfiguration inputConfig,
        List<OutputConfiguration> outputs, boolean isConstrainedHighSpeed)
                throws CameraAccessException {
    // Treat a null input the same as an empty list
    if (outputs == null) {
        outputs = new ArrayList<OutputConfiguration>();
    }
    checkInputConfiguration(inputConfig);
    boolean success = false;
    synchronized(mInterfaceLock) {
        checkIfCameraClosedOrInError();
        // Streams to create
        HashSet<OutputConfiguration> addSet = new HashSet<OutputConfiguration>(outputs);
        // Streams to delete
        List<Integer> deleteList = new ArrayList<Integer>();
        // Determine which streams need to be created, and which deleted
        for (int i = 0; i < mConfiguredOutputs.size(); ++i) {
            int streamId = mConfiguredOutputs.keyAt(i);
            OutputConfiguration outConfig = mConfiguredOutputs.valueAt(i);

            if (!outputs.contains(outConfig)) {
                deleteList.add(streamId);
            } else {
                addSet.remove(outConfig);  // Don't create streams that already exist
            }
        }
        mDeviceHandler.post(mCallOnBusy);
        stopRepeating();
        try {
            waitUntilIdle();
            mRemoteDevice.beginConfigure();
            // Reconfigure the input stream if the input configuration is different.
            InputConfiguration currentInputConfig = mConfiguredInput.getValue();
            if (inputConfig != currentInputConfig &&
                    (inputConfig == null || !inputConfig.equals(currentInputConfig))) {
                if (currentInputConfig != null) {
                    mRemoteDevice.deleteStream(mConfiguredInput.getKey());
                    mConfiguredInput = new SimpleEntry<Integer, InputConfiguration>(
                            REQUEST_ID_NONE, null);
                }
                if (inputConfig != null) {
                    int streamId = mRemoteDevice.createInputStream(inputConfig.getWidth(),
                            inputConfig.getHeight(), inputConfig.getFormat());
                    mConfiguredInput = new SimpleEntry<Integer, InputConfiguration>(
                            streamId, inputConfig);
                }
            }
            // First, delete all streams (to free up HW resources)
            for (Integer streamId : deleteList) {
                mRemoteDevice.deleteStream(streamId);
                mConfiguredOutputs.delete(streamId);
            }
            // Add all new streams
            for (OutputConfiguration outConfig : outputs) {
                if (addSet.contains(outConfig)) {
                    /* The output stream is created and configured here; note
                     * that outConfig contains the display surface.
                     */
                    int streamId = mRemoteDevice.createStream(outConfig);
                    mConfiguredOutputs.put(streamId, outConfig);
                }
            }
            try {
                /* The calls above only record the configuration; this is where it is actually pushed down to the HAL */
                mRemoteDevice.endConfigure(isConstrainedHighSpeed);
            }
            catch (IllegalArgumentException e) {
                /**
                The camera service can reject the stream configuration if the HAL
                doesn't support it; this only happens when the camera2 API is misused.
                */
                Log.w(TAG, "Stream configuration failed");
                return false;
            }
            success = true;
        } catch (CameraRuntimeException e) {
            if (e.getReason() == CAMERA_IN_USE) {
                throw new IllegalStateException("The camera is currently busy." +
                        " You must wait until the previous operation completes.");
            }
            throw e.asChecked();
        } catch (RemoteException e) {
            // impossible
            return false;
        } finally {
            if (success && outputs.size() > 0) {
                mDeviceHandler.post(mCallOnIdle);
            } else {
                // Always return to the 'unconfigured' state if we didn't hit a fatal error
                mDeviceHandler.post(mCallOnUnconfigured);
            }
        }
    }
    return success;
}    

Core steps:

  • Create the native Surface
  • Call Camera3Device::createStream(...) to create the stream

15.CameraDeviceClient.cpp–>createStream()

1. We focus on the mRemoteDevice.createStream(outConfig) call: the configuration is transferred over Binder to the CameraDeviceClient on the CameraService side.

2. After crossing Binder, the createStream() initiated from CameraDeviceImpl is dispatched by CameraService to CameraDeviceClient::createStream(). Note that CameraDeviceClient::createStream() takes two parameters, but the second may default to NULL.

File path: frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp

binder::Status CameraDeviceClient::createStream(
        const hardware::camera2::params::OutputConfiguration &outputConfiguration,
        /*out*/
        int32_t* newStreamId) {
    const std::vector<sp<IGraphicBufferProducer>>& bufferProducers =
            outputConfiguration.getGraphicBufferProducers();

    ...

    OutputStreamInfo streamInfo;
    bool isStreamInfoValid = false;
    
    ...

        sp<Surface> surface;
        /* Convert and save the configuration from OutputConfiguration into streamInfo */
        res = createSurfaceFromGbp(streamInfo, isStreamInfoValid, surface, bufferProducer);
    ...
        /* Extract the surface from outputConfiguration and append it to surfaces */
        surfaces.push_back(surface);
    ...

    int streamId = camera3::CAMERA3_STREAM_ID_INVALID;
    std::vector<int> surfaceIds;
    /* surfaces is passed along as a parameter to create the stream */
    /* This again calls the device's createStream(); where does that land? See the analysis below */
    err = mDevice->createStream(surfaces, deferredConsumer, streamInfo.width,
            streamInfo.height, streamInfo.format, streamInfo.dataSpace,
            static_cast<camera3_stream_rotation_t>(outputConfiguration.getRotation()),
            &streamId, physicalCameraId, &surfaceIds, outputConfiguration.getSurfaceSetID(),
            isShared);
    
    ...
        /* Save the corresponding bookkeeping info */
        mStreamMap.add(binder, StreamSurfaceId(streamId, surfaceIds[i]));
    
        mConfiguredOutputs.add(streamId, outputConfiguration);
        mStreamInfoMap[streamId] = streamInfo;
}

Where does the mDevice->createStream() above end up?

CameraService —> libcameraservice

16.Camera3Device.cpp–>createStream()

Now let's focus on Camera3Device::createStream(...). It creates a Camera3OutputStream object (which manages a single stream of output data from the camera device) and adds it to the KeyedVector referenced by mOutputStreams.

File path: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::createStream(sp<Surface> consumer,
        uint32_t width, uint32_t height, int format, android_dataspace dataSpace,
        camera3_stream_rotation_t rotation, int *id) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);
    ALOGV("Camera %d: Creating new stream %d: %d x %d, format %d, dataspace %d rotation %d",
            mId, mNextStreamId, width, height, format, dataSpace, rotation);

    status_t res;
    bool wasActive = false;

    switch (mStatus) {
        case STATUS_ERROR:
            CLOGE("Device has encountered a serious error");
            return INVALID_OPERATION;
        case STATUS_UNINITIALIZED:
            CLOGE("Device not initialized");
            return INVALID_OPERATION;
        case STATUS_UNCONFIGURED:
        case STATUS_CONFIGURED:
            // OK
            break;
        case STATUS_ACTIVE:
            ALOGV("%s: Stopping activity to reconfigure streams", __FUNCTION__);
            res = internalPauseAndWaitLocked();
            if (res != OK) {
                SET_ERR_L("Can't pause captures to reconfigure streams!");
                return res;
            }
            wasActive = true;
            break;
        default:
            SET_ERR_L("Unexpected status: %d", mStatus);
            return INVALID_OPERATION;
    }
    sp<Camera3OutputStream> newStream;
    if (format == HAL_PIXEL_FORMAT_BLOB) {
        ssize_t blobBufferSize;
        if (dataSpace != HAL_DATASPACE_DEPTH) {
            blobBufferSize = getJpegBufferSize(width, height);
        } else {
            blobBufferSize = getPointCloudBufferSize();
        }
       /* Create the Camera3OutputStream instance. Note the second parameter:
        * it is the Surface, which will be stored in Camera3OutputStream's mConsumer member */
        newStream = new Camera3OutputStream(mNextStreamId, consumer,
                width, height, blobBufferSize, format, dataSpace, rotation);
    } else {
        newStream = new Camera3OutputStream(mNextStreamId, consumer,
                width, height, format, dataSpace, rotation);
    }
    newStream->setStatusTracker(mStatusTracker);
    /* Store the newStream instance in mOutputStreams */
    res = mOutputStreams.add(mNextStreamId, newStream);
    *id = mNextStreamId++;
    mNeedConfig = true;
    // Continue capturing if the device was active when we started
    if (wasActive) {
        ALOGV("%s: Restarting activity to reconfigure streams", __FUNCTION__);
        res = configureStreamsLocked();
        if (res != OK) {
            CLOGE("Can't reconfigure device for new stream %d: %s (%d)",
                    mNextStreamId, strerror(-res), res);
            return res;
        }
        internalResumeLocked();
    }
    // Stream creation complete
    return OK;
}

At this point CameraDeviceClient::createStream() has been traced to the end, yet all we have seen is some stream configuration being saved; nothing has been pushed down to the HAL. So where does that happen?
Looking back at CameraDeviceImpl::configureStreamsChecked(), the mRemoteDevice operations reach CameraService, and after createStream() it also calls endConfigure(). Since createStream() does not touch the HAL, could endConfigure() be the one? Let's look at the CameraDeviceClient::endConfigure() implementation.

This completes the configuration for a new CameraCaptureSession. When it is called, the camera device must already be in the IDLE state, with no pending operations (e.g. no pending captures, no repeating requests, no flushes).

binder::Status CameraDeviceClient::endConfigure(int operatingMode,
        const hardware::camera2::impl::CameraMetadataNative& sessionParams) {
    ...
    /* As above, this ends up calling Camera3Device::configureStreams() */
    status_t err = mDevice->configureStreams(sessionParams, operatingMode);
    ...
}

17.CameraMetadata.cpp–>update()

File path: frameworks/av/camera/CameraMetadata.cpp

status_t CameraMetadata::update(uint32_t tag,
        const int32_t *data, size_t data_count) {
    status_t res;
    if (mLocked) {
        return INVALID_OPERATION;
    }
    if ( (res = checkType(tag, TYPE_INT32)) != OK) {
        return res;
    }
    return updateImpl(tag, (const void*)data, data_count);
}

mBuffer is a camera_metadata_t* pointer that holds all kinds of metadata. add_camera_metadata_entry and update_camera_metadata_entry are the functions that add and update metadata entries, respectively.

18.CameraMetadata.cpp–>updateImpl()

status_t CameraMetadata::updateImpl(uint32_t tag, const void *data,
        size_t data_count) {
    status_t res;
    if (mLocked) {
        return INVALID_OPERATION;
    }
    int type = get_camera_metadata_tag_type(tag);
    // Safety check: make sure the data doesn't point into this metadata buffer, since it would be invalidated by a resize
    size_t bufferSize = get_camera_metadata_size(mBuffer);
    uintptr_t bufAddr = reinterpret_cast<uintptr_t>(mBuffer);
    uintptr_t dataAddr = reinterpret_cast<uintptr_t>(data);
    if (dataAddr > bufAddr && dataAddr < (bufAddr + bufferSize)) {
        ALOGE("%s: Update attempted with data from the same metadata buffer!",
                __FUNCTION__);
        return INVALID_OPERATION;
    }
    size_t data_size = calculate_camera_metadata_entry_data_size(type,
            data_count);
    res = resizeIfNeeded(1, data_size);
    if (res == OK) {
        camera_metadata_entry_t entry;
        res = find_camera_metadata_entry(mBuffer, tag, &entry);
        if (res == NAME_NOT_FOUND) {
            res = add_camera_metadata_entry(mBuffer,
                    tag, data, data_count);
        } else if (res == OK) {
            res = update_camera_metadata_entry(mBuffer,
                    entry.index, data, data_count, NULL);
        }
    }
    if (res != OK) {
        ALOGE("%s: Unable to update metadata entry %s.%s (%x): %s (%d)",
                __FUNCTION__, get_camera_metadata_section_name(tag),
                get_camera_metadata_tag_name(tag), tag, strerror(-res), res);
    }

    IF_ALOGV() {
        ALOGE_IF(validate_camera_metadata_structure(mBuffer, /*size*/NULL) !=
                 OK,
                 "%s: Failed to validate metadata structure after update %p",
                 __FUNCTION__, mBuffer);
    }
    return res;
}

The actual work of Camera3Device::setStreamingRequestList(...) is done by submitRequestsHelper(...).

19.Camera3Device.cpp–>setStreamingRequestList()

status_t Camera3Device::setStreamingRequestList(const List<const CameraMetadata> &requests,
                                                int64_t *lastFrameNumber) {
    // Both preview and still capture end up calling submitRequestsHelper
    return submitRequestsHelper(requests, /*repeating*/true, lastFrameNumber);
}

What submitRequestsHelper(...) does:
1). The key step: hand the repeating requests to the request thread;
2). Convert the metadata List into a RequestList;

20.Camera3Device.cpp–>submitRequestsHelper()

status_t Camera3Device::submitRequestsHelper(
        const List<const CameraMetadata> &requests, bool repeating,
        /*out*/
        int64_t *lastFrameNumber) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);
    status_t res = checkStatusOkToCaptureLocked();
    RequestList requestList;
    res = convertMetadataListToRequestListLocked(requests, /*out*/&requestList);
    if (repeating) {
        // Preview flow: handled via setRepeatingRequests
        res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
    } else {
        // Still-capture flow: handled via queueRequestList
        res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
    }

    if (res == OK) {
        waitUntilStateThenRelock(/*active*/true, kActiveTimeout);
        if (res != OK) {
            SET_ERR_L("Can't transition to active in %f seconds!",
                    kActiveTimeout/1e9);
        }
        ALOGV("Camera %d: Capture request %" PRId32 " enqueued", mId,
              (*(requestList.begin()))->mResultExtras.requestId);
    } else {
        CLOGE("Cannot queue request. Impossible.");
        return BAD_VALUE;
    }

    return res;
}

First let's analyze the conversion function convertMetadataListToRequestListLocked(...). It iterates over metadataList, calls setUpRequestLocked(...) with each element, and gets back a CaptureRequest object.

21.Camera3Device.cpp–>convertMetadataListToRequestListLocked()

status_t Camera3Device::convertMetadataListToRequestListLocked(
        const List<const CameraMetadata> &metadataList, RequestList *requestList) {
    int32_t burstId = 0;
    for (List<const CameraMetadata>::const_iterator it = metadataList.begin();
            it != metadataList.end(); ++it) {
        sp<CaptureRequest> newRequest = setUpRequestLocked(*it);
        // Set the burst ID and request ID
        newRequest->mResultExtras.burstId = burstId++;
        if (it->exists(ANDROID_REQUEST_ID)) {
            if (it->find(ANDROID_REQUEST_ID).count == 0) {
                CLOGE("RequestID entry exists; but must not be empty in metadata");
                return BAD_VALUE;
            }
            newRequest->mResultExtras.requestId = it->find(ANDROID_REQUEST_ID).data.i32[0];
        } else {
            CLOGE("RequestID does not exist in metadata");
            return BAD_VALUE;
        }
        requestList->push_back(newRequest);
        ALOGV("%s: requestId = %" PRId32, __FUNCTION__, newRequest->mResultExtras.requestId);
    }
    // If this is a high-speed video recording request, set the batch size.
    if (mIsConstrainedHighSpeedConfiguration && requestList->size() > 0) {
        auto firstRequest = requestList->begin();
        for (auto& outputStream : (*firstRequest)->mOutputStreams) {
            if (outputStream->isVideoStream()) {
                (*firstRequest)->mBatchSize = requestList->size();
                break;
            }
        }
    }
    return OK;
}

In setUpRequestLocked(...), configureStreamsLocked() is called first to configure the streams, then createCaptureRequest(...) creates the CaptureRequest object.

22.Camera3Device.cpp–>setUpRequestLocked()

sp<Camera3Device::CaptureRequest> Camera3Device::setUpRequestLocked(
        const CameraMetadata &request) {
    status_t res;
    if (mStatus == STATUS_UNCONFIGURED || mNeedConfig) {
        // Configure the streams
        res = configureStreamsLocked();
    }
    sp<CaptureRequest> newRequest = createCaptureRequest(request);
    return newRequest;
}

Camera3Device::configureStreams() in turn ends up calling Camera3Device::configureStreamsLocked(), which mainly performs the following operations:

This is where the HAL method is called to configure the streams.

23.Camera3Device.cpp–>configureStreamsLocked()

File path: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::configureStreamsLocked() {
    ATRACE_CALL();
    status_t res;
    if (mStatus != STATUS_UNCONFIGURED && mStatus != STATUS_CONFIGURED) {
        CLOGE("Not idle");
        return INVALID_OPERATION;
    }
    if (!mNeedConfig) {
        ALOGV("%s: Skipping config, no stream changes", __FUNCTION__);
        return OK;
    }
    // Workaround for device HALv3.2 or older spec bug: zero streams requires adding a dummy stream.
    if (mOutputStreams.size() == 0) {
        addDummyStreamLocked();
    } else {
        tryRemoveDummyStreamLocked();
    }
    // Start the stream configuration
    camera3_stream_configuration config;
    config.operation_mode = mIsConstrainedHighSpeedConfiguration ?
            CAMERA3_STREAM_CONFIGURATION_CONSTRAINED_HIGH_SPEED_MODE :
            CAMERA3_STREAM_CONFIGURATION_NORMAL_MODE;
    config.num_streams = (mInputStream != NULL) + mOutputStreams.size();

    Vector<camera3_stream_t*> streams;
    streams.setCapacity(config.num_streams);
    if (mInputStream != NULL) {
        camera3_stream_t *inputStream;
        // Start the input stream configuration
        inputStream = mInputStream->startConfiguration();
        streams.add(inputStream);
    }
    for (size_t i = 0; i < mOutputStreams.size(); i++) {
        // Don't configure bidi streams twice, nor add them to the list twice
        if (mOutputStreams[i].get() ==
            static_cast<Camera3StreamInterface*>(mInputStream.get())) {
            config.num_streams--;
            continue;
        }
        camera3_stream_t *outputStream;
        // Start the output stream configuration
        outputStream = mOutputStreams.editValueAt(i)->startConfiguration();
        streams.add(outputStream);
    }
    config.streams = streams.editArray();
    // Perform the HAL configuration
    ATRACE_BEGIN("camera3->configure_streams");
    res = mHal3Device->ops->configure_streams(mHal3Device, &config);
    ATRACE_END();
    if (res == BAD_VALUE) {
        // The HAL rejected this set of streams as unsupported; clean up the configuration attempt and return to the unconfigured state
        if (mInputStream != NULL && mInputStream->isConfiguring()) {
            res = mInputStream->cancelConfiguration();
            if (res != OK) {
                SET_ERR_L("Can't cancel configuring input stream %d: %s (%d)",
                        mInputStream->getId(), strerror(-res), res);
                return res;
            }
        }
        for (size_t i = 0; i < mOutputStreams.size(); i++) {
            sp<Camera3OutputStreamInterface> outputStream =
                    mOutputStreams.editValueAt(i);
            if (outputStream->isConfiguring()) {
            // Cancel the output stream configuration
                res = outputStream->cancelConfiguration();
            }
        }
        // Return to the state at the start of the call so future configurations can clean up properly
        internalUpdateStatusLocked(STATUS_UNCONFIGURED);
        mNeedConfig = true;
        return BAD_VALUE;
    } else if (res != OK) {
        // Some other kind of error from configure_streams; this is not expected
        SET_ERR_L("Unable to configure streams with HAL: %s (%d)",
                strerror(-res), res);
        return res;
    }
    // Finish all input stream configuration immediately
    if (mInputStream != NULL && mInputStream->isConfiguring()) {
        res = mInputStream->finishConfiguration(mHal3Device);
    }
    for (size_t i = 0; i < mOutputStreams.size(); i++) {
        sp<Camera3OutputStreamInterface> outputStream =
            mOutputStreams.editValueAt(i);
        // Finish all output stream configuration immediately
        if (outputStream->isConfiguring()) {
            res = outputStream->finishConfiguration(mHal3Device);
        }
    }
    // The request thread needs to know so it can avoid using the repeat-latest-settings protocol across configure_streams() calls
    mRequestThread->configurationComplete();
    // Boost the request thread to SCHED_FIFO priority for high-speed recording
    if (mIsConstrainedHighSpeedConfiguration) {
        pid_t requestThreadTid = mRequestThread->getTid();
        res = requestPriority(getpid(), requestThreadTid,
                kConstrainedHighSpeedThreadPriority, true);
        if (res != OK) {
            ALOGW("Can't set realtime priority for request processing thread: %s (%d)",
                    strerror(-res), res);
        } else {
            ALOGD("Set real time priority for request queue thread (tid %d)", requestThreadTid);
        }
    } else {
        // TODO: Set/restore normal priority for normal use cases
    }
    // Update the device state
    mNeedConfig = false;
    internalUpdateStatusLocked((mDummyStreamId == NO_STREAM) ?
            STATUS_CONFIGURED : STATUS_UNCONFIGURED);
    ALOGV("%s: Camera %d: Stream configuration complete", __FUNCTION__, mId);
    // Now that the streams are configured, drop the deleted streams
    mDeletedStreams.clear();
    return OK;
}

Next, let's analyze the createCaptureRequest(...) call that creates the CaptureRequest object.

Camera2 Preview Flow Analysis (Part 2)

RequestThread is started during the camera open flow. It is the thread that manages submitting capture requests to the HAL device.

1.Camera3Device.cpp–>initializeCommonLocked()

File path: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::initializeCommonLocked()
{
    ......
    /** Start up request queue thread */    
    mRequestThread = new RequestThread(this, mStatusTracker, mInterface, sessionParamKeys, mUseHalBufManager);
    res = mRequestThread->run(String8::format("C3Dev-%s-ReqQueue", mId.string()).string());
    ......
}

When run() is called on the RequestThread, the threadLoop() function starts executing: if it returns true, the thread enters loop mode and keeps calling threadLoop(); if it returns false, it is called only once.
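In Kotlin terms, the contract looks roughly like this (an illustrative sketch of the native Thread/threadLoop semantics, not AOSP code):

// run() keeps calling threadLoop() while it returns true and no exit was requested.
abstract class LoopThread : Thread() {
    @Volatile private var exitRequested = false

    /** Return true to be called again; false to stop after this pass. */
    protected abstract fun threadLoop(): Boolean

    override fun run() {
        while (threadLoop() && !exitRequested) {
            // keep looping
        }
    }

    fun requestExit() { exitRequested = true }
}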

Core steps of threadLoop():

  • Call waitForNextRequestBatch() to wait for the next batch of requests;
  • Call prepareHalRequests() to prepare a batch of HAL requests and output buffers;
  • Submit the batch of requests to the HAL;

2.Camera3Device.cpp–>threadLoop()

bool Camera3Device::RequestThread::threadLoop() {
    status_t res;
    /**
    No function called from threadLoop() may hold mInterfaceLock, since that could
    lead to deadlock (disconnect() -> holds mInterfaceMutex -> waits for request
    thread to finish -> request thread blocks on mInterfaceMutex)
    */
    // Handle the paused state
    if (waitIfPaused()) {
        return true;
    }
    // Wait for the next batch of requests
    waitForNextRequestBatch();
    // Get the latest request ID, if any
    int latestRequestId;
    camera_metadata_entry_t requestIdEntry = mNextRequests[mNextRequests.size() - 1].
            captureRequest->mSettings.find(ANDROID_REQUEST_ID);
    if (requestIdEntry.count > 0) {
        latestRequestId = requestIdEntry.data.i32[0];
    } else {
        latestRequestId = NAME_NOT_FOUND;
    }
    // Prepare a batch of HAL requests and output buffers
    res = prepareHalRequests();
    if (res == TIMED_OUT) {
        // Not a fatal error if getting the output buffers timed out
        cleanUpFailedRequests(/*sendRequestError*/ true);
        return true;
    } else if (res != OK) {
        cleanUpFailedRequests(/*sendRequestError*/ false);
        return false;
    }
    // Inform the waitUntilRequestProcessed thread of a new request ID
    {
        Mutex::Autolock al(mLatestRequestMutex);
        mLatestRequestId = latestRequestId;
        mLatestRequestSignal.signal();
    }
    // Submit a batch of requests to the HAL.
    // Use the flush lock only when a batch of more than one request is submitted.
    bool useFlushLock = mNextRequests.size() > 1;
    if (useFlushLock) {
        mFlushLock.lock();
    }

    sp<Camera3Device> parent = mParent.promote();
    if (parent != nullptr) {
        parent->mRequestBufferSM.onSubmittingRequest();
    }

    bool submitRequestSuccess = false;
    submitRequestSuccess = sendRequestsBatch();
    mRequestLatency.add(tRequestStart, tRequestEnd);
    
    if (useFlushLock) {
        mFlushLock.unlock();
    }
    // Unset as current request
    {
        Mutex::Autolock l(mRequestLock);
        mNextRequests.clear();
    }
    return submitRequestSuccess;
}

void Camera3Device::RequestBufferStateMachine::onSubmittingRequest() {
    std::lock_guard<std::mutex> lock(mLock);
    mRequestThreadPaused = false;
    /** The inflight map registration now actually happens in prepareHalRequest, but it's close enough. */
    mInflightMapEmpty = false;
    if (mStatus == RB_STATUS_STOPPED) {
        mStatus = RB_STATUS_READY;
    }
    return;
}

bool Camera3Device::RequestThread::sendRequestsBatch() {
    ATRACE_CALL();
    status_t res;
    size_t batchSize = mNextRequests.size();
    std::vector<camera3_capture_request_t*> requests(batchSize);
    uint32_t numRequestProcessed = 0;
    for (size_t i = 0; i < batchSize; i++) {
        requests[i] = &mNextRequests.editItemAt(i).halRequest;
        ATRACE_ASYNC_BEGIN("frame capture", mNextRequests[i].halRequest.frame_number);
    }

    res = mInterface->processBatchCaptureRequests(requests, &numRequestProcessed);

    bool triggerRemoveFailed = false;
    NextRequest& triggerFailedRequest = mNextRequests.editItemAt(0);
    for (size_t i = 0; i < numRequestProcessed; i++) {
        NextRequest& nextRequest = mNextRequests.editItemAt(i);
        nextRequest.submitted = true;

        updateNextRequest(nextRequest);

        if (!triggerRemoveFailed) {
            // Remove any previously queued triggers (after unlock)
            status_t removeTriggerRes = removeTriggers(mPrevRequest);
            if (removeTriggerRes != OK) {
                triggerRemoveFailed = true;
                triggerFailedRequest = nextRequest;
            }
        }
    }

    if (triggerRemoveFailed) {
        cleanUpFailedRequests(/*sendRequestError*/ false);
        return false;
    }

    if (res != OK) {
/** This should only fail due to a malformed request or a device-level error, so consider all errors fatal. Bad metadata failures should come through notifications instead. */
        SET_ERR("RequestThread: Unable to submit capture request %d to HAL device: %s (%d)",
                mNextRequests[numRequestProcessed].halRequest.frame_number,
                strerror(-res), res);
        cleanUpFailedRequests(/*sendRequestError*/ false);
        return false;
    }
    return true;
}

This waits for the next batch of requests and puts them into mNextRequests; if the wait times out, mNextRequests will be empty. It mainly calls waitForNextRequestLocked() to obtain a CaptureRequest, fills in the NextRequest fields, and finally adds nextRequest to mNextRequests. If additional requests remain in the batch, waitForNextRequestLocked() keeps being called to fetch CaptureRequests one by one, each filled into a NextRequest and appended to mNextRequests.

The camera3_capture_request_t structure:

A single request for image capture / buffer reprocessing, sent by the framework to the camera HAL device via process_capture_request().

The request contains the settings to be used for this capture, together with the set of output buffers the resulting image data is written into. It may optionally contain an input buffer, in which case the request reprocesses that input buffer instead of capturing a new image with the camera sensor. The capture is identified by frame_number.

In response, the camera HAL device must asynchronously send a camera3_capture_result structure back to the framework via the process_capture_result() callback.

3.Camera3Device.cpp–>waitForNextRequestBatch()

void Camera3Device::RequestThread::waitForNextRequestBatch() {
    // Optimized for the single repeating request case, to avoid putting the request temporarily on the queue.
    Mutex::Autolock l(mRequestLock);
    NextRequest nextRequest;
    nextRequest.captureRequest = waitForNextRequestLocked();
    nextRequest.halRequest = camera3_capture_request_t();
    nextRequest.submitted = false;
    mNextRequests.add(nextRequest);
    // Wait for additional requests
    const size_t batchSize = nextRequest.captureRequest->mBatchSize;
    for (size_t i = 1; i < batchSize; i++) {
        NextRequest additionalRequest;
        additionalRequest.captureRequest = waitForNextRequestLocked();
        additionalRequest.halRequest = camera3_capture_request_t();
        additionalRequest.submitted = false;
        mNextRequests.add(additionalRequest);
    }
    if (mNextRequests.size() < batchSize) {
        ALOGE("RequestThread: only get %d out of %d requests. Skipping requests.",
                mNextRequests.size(), batchSize);
        cleanUpFailedRequests(/*sendRequestError*/true);
    }
    return;
}

This waits for a request, returning NULL on timeout; it must be called with mRequestLock held. waitForNextRequestLocked() mainly fetches the next CaptureRequest: it first walks mRepeatingRequests, takes the first element as nextRequest, and inserts the remaining elements into mRequestQueue. Subsequent calls to waitForNextRequestLocked() then take elements from mRequestQueue as nextRequest.

4.Camera3Device.cpp–>waitForNextRequestLocked()

sp<Camera3Device::CaptureRequest>
        Camera3Device::RequestThread::waitForNextRequestLocked() {
    status_t res;
    sp<CaptureRequest> nextRequest;
    while (mRequestQueue.empty()) {
        if (!mRepeatingRequests.empty()) {
            /** Always atomically enqueue all requests in the repeating request list.
                Guarantees a complete in-sequence set of captures to the application. */
            const RequestList &requests = mRepeatingRequests;
            RequestList::const_iterator firstRequest =
                    requests.begin();
            nextRequest = *firstRequest;
            mRequestQueue.insert(mRequestQueue.end(),
                    ++firstRequest,
                    requests.end());
            // No need to wait any longer
            mRepeatingLastFrameNumber = mFrameNumber + requests.size() - 1;
            break;
        }
        res = mRequestSignal.waitRelative(mRequestLock, kRequestTimeout);
        if ((mRequestQueue.empty() && mRepeatingRequests.empty()) ||
                exitPending()) {
            Mutex::Autolock pl(mPauseLock);
            if (mPaused == false) {
                ALOGV("%s: RequestThread: Going idle", __FUNCTION__);
                mPaused = true;
                // Let the status tracker know
                sp<StatusTracker> statusTracker = mStatusTracker.promote();
                if (statusTracker != 0) {
                    statusTracker->markComponentIdle(mStatusId, Fence::NO_FENCE);
                }
            }
            // Stop waiting for now; let thread management happen
            return NULL;
        }
    }
    if (nextRequest == NULL) {
        // No repeating request yet, so the queue must have an entry now.
        RequestList::iterator firstRequest =
                mRequestQueue.begin();
        nextRequest = *firstRequest;
        mRequestQueue.erase(firstRequest);
    }
    /** In case we've been unpaused by setPaused clearing mDoPause, update the
        internal pause state (capture/setRepeatingRequest unpause directly). */
    Mutex::Autolock pl(mPauseLock);
    if (mPaused) {
        sp<StatusTracker> statusTracker = mStatusTracker.promote();
        if (statusTracker != 0) {
            statusTracker->markComponentActive(mStatusId);
        }
    }
    mPaused = false;
    /** Check if we've reconfigured since last time, and reset the preview request
        if so. Can't use 'NULL request == repeat' across configure calls. */
    if (mReconfigured) {
        mPrevRequest.clear();
        mReconfigured = false;
    }
    if (nextRequest != NULL) {
        nextRequest->mResultExtras.frameNumber = mFrameNumber++;
        nextRequest->mResultExtras.afTriggerId = mCurrentAfTriggerId;
        nextRequest->mResultExtras.precaptureTriggerId = mCurrentPreCaptureTriggerId;
        /** Since RequestThread::clear() removes buffers from the input stream,
            get the right buffer here before unlocking mRequestLock */
        if (nextRequest->mInputStream != NULL) {
            res = nextRequest->mInputStream->getInputBuffer(&nextRequest->mInputBuffer);
            if (res != OK) {
                // Can't get an input buffer from the gralloc queue; this may be due to a disconnected queue or other producer misbehavior, so it's not a fatal error.
                if (mListener != NULL) {
                    mListener->notifyError(
                            ICameraDeviceCallbacks::ERROR_CAMERA_REQUEST,
                            nextRequest->mResultExtras);
                }
                return NULL;
            }
        }
    }
    handleAePrecaptureCancelRequest(nextRequest);
    return nextRequest;
}

Now let's see what prepareHalRequests() does to prepare a batch of HAL requests and output buffers.

It prepares the HAL requests and output buffers in mNextRequests, returning TIMED_OUT if any output buffer times out. If an error is returned, the caller should clear the pending batch of requests.
The logic for preparing the output buffers is a bit convoluted; it is analyzed in detail below.

The camera3_stream_buffer_t structure:
A single buffer from a camera3 stream. It includes a handle to its parent stream, the handle to the gralloc buffer itself, and the sync fences. A buffer does not specify whether it is used for input or output;
that depends on its parent stream's type and on how the buffer is passed to the HAL device.

5.Camera3Device.cpp–>prepareHalRequests()

status_t Camera3Device::RequestThread::prepareHalRequests() {
    ATRACE_CALL();
    /**
    Vector<NextRequest> mNextRequests;
    The next batch of requests being prepared for submission to the HAL; no longer
    on the request queue. Read-only even with mRequestLock held, outside the thread loop
    */
    for (size_t i = 0; i < mNextRequests.size(); i++) {
        // (Elided in the original listing) the NextRequest being prepared:
        NextRequest& nextRequest = mNextRequests.editItemAt(i);
        sp<CaptureRequest> captureRequest = nextRequest.captureRequest;
        camera3_capture_request_t* halRequest = &nextRequest.halRequest;
        Vector<camera3_stream_buffer_t>* outputBuffers = &nextRequest.outputBuffers;
        // Prepare a request to send to the HAL
        halRequest->frame_number = captureRequest->mResultExtras.frameNumber;
        // Insert any queued triggers (before the metadata is locked)
        status_t res = insertTriggers(captureRequest);
        int triggerCount = res;
        bool triggersMixedIn = (triggerCount > 0 || mPrevTriggers > 0);
        mPrevTriggers = triggerCount;
        // If the request is the same as last time, or we had triggers last time
        if (mPrevRequest != captureRequest || triggersMixedIn) {
            /** Insert dummy trigger IDs if triggers are set but trigger IDs are not */
            res = addDummyTriggerIds(captureRequest);
            /**
             * The request should be presorted
             */
            captureRequest->mSettings.sort();
            halRequest->settings = captureRequest->mSettings.getAndLock();
            mPrevRequest = captureRequest;
            IF_ALOGV() {
                camera_metadata_ro_entry_t e = camera_metadata_ro_entry_t();
                find_camera_metadata_ro_entry(
                        halRequest->settings,
                        ANDROID_CONTROL_AF_TRIGGER,
                        &e
                );
                if (e.count > 0) {
                    ALOGV("%s: Request (frame num %d) had AF trigger 0x%x",
                          __FUNCTION__,
                          halRequest->frame_number,
                          e.data.u8[0]);
                }
            }
        } else {
            // leave request.settings NULL to indicate 'reuse latest given'
            ALOGVV("%s: Request settings are REUSED",
                   __FUNCTION__);
        }
        uint32_t totalNumBuffers = 0;
        // Fill in the buffers
        if (captureRequest->mInputStream != NULL) {
            halRequest->input_buffer = &captureRequest->mInputBuffer;
            totalNumBuffers += 1;
        } else {
            halRequest->input_buffer = NULL;
        }
        outputBuffers->insertAt(camera3_stream_buffer_t(), 0,
                captureRequest->mOutputStreams.size());
        halRequest->output_buffers = outputBuffers->array();
        for (size_t j = 0; j < captureRequest->mOutputStreams.size(); j++) {
            res = captureRequest->mOutputStreams.editItemAt(j)->
                    getBuffer(&outputBuffers->editItemAt(j));
            if (res != OK) {
                /** Can't get an output buffer from the gralloc queue; this may be due to
                an abandoned queue or other consumer misbehavior, so it's not a fatal error */
                ALOGE("RequestThread: Can't get output buffer, skipping request:"
                        " %s (%d)", strerror(-res), res);
                return TIMED_OUT;
            }
            halRequest->num_output_buffers++;
        }
        totalNumBuffers += halRequest->num_output_buffers;
        // Log the request in the in-flight queue
        sp<Camera3Device> parent = mParent.promote();
        res = parent->registerInFlight(halRequest->frame_number,
                totalNumBuffers, captureRequest->mResultExtras,
                /*hasInput*/halRequest->input_buffer != NULL,
                captureRequest->mAeTriggerCancelOverride);
    }
    return OK;
}

First, a look at the CaptureRequest class, which is implemented inside Camera3Device.

6.Camera3Device.h

class Camera3Device :
            public CameraDeviceBase,
            private camera3_callback_ops {
    ......
  private:   
    class CaptureRequest : public LightRefBase<CaptureRequest> {
      public:
        CameraMetadata                      mSettings;
        sp<camera3::Camera3Stream>          mInputStream;
        camera3_stream_buffer_t             mInputBuffer;
        Vector<sp<camera3::Camera3OutputStreamInterface> >
                                            mOutputStreams;
        CaptureResultExtras                 mResultExtras;
        // Used to cancel the AE precapture trigger for devices that don't support CONTROL_AE_PRECAPTURE_TRIGGER_CANCEL
        AeTriggerCancelOverride_t           mAeTriggerCancelOverride;
        // The number of requests that should be submitted to the HAL at a time.
        // For example, if the batch size is 8, this request and the following
        // 7 requests will be submitted to the HAL at the same time. The batch
        // sizes of the following 7 requests are ignored by the request thread.
        int                                 mBatchSize;
    };
    ......
}

The CaptureRequest member mOutputStreams is a Vector whose elements are strong references to camera3::Camera3OutputStreamInterface. Camera3OutputStreamInterface is only an interface; the concrete objects are added in Camera3Device::createStream(...), which creates a Camera3OutputStream (managing a single stream of output data from the camera device) and adds it to the KeyedVector referenced by Camera3Device's own mOutputStreams member. Later in the flow, Camera3Device::createCaptureRequest(...) looks the output streams up in that member and pushes them into the CaptureRequest's mOutputStreams vector.

The Camera3OutputStreamInterface interface (managing a single stream of output data from the camera device) inherits from the Camera3StreamInterface interface.

7.Camera3OutputStreamInterface.h

class Camera3OutputStreamInterface : public virtual Camera3StreamInterface {
    ......
}

The Camera3StreamInterface interface manages a single stream of input and/or output data from the camera device. It declares the pure virtual function getBuffer(...).

getBuffer(...) fills the camera3_stream_buffer with the stream's next valid buffer, to be handed over to the HAL. This method may only be called after finishConfiguration has been called. For bidirectional streams, this method applies to the output-side buffers.

8.Camera3StreamInterface.h

class Camera3StreamInterface : public virtual RefBase {
    ......
    virtual status_t getBuffer(camera3_stream_buffer *buffer) = 0;
    ......
}

It is now clear that when RequestThread::prepareHalRequests() calls getBuffer(...) on the objects in the CaptureRequest's mOutputStreams vector, it is actually invoking the concrete implementation used by Camera3OutputStream.

Camera3OutputStream.h

class Camera3OutputStream :
        public Camera3IOStreamBase,
        public Camera3OutputStreamInterface {
    ......
}

Looking for getBuffer(...) in Camera3OutputStream reveals that its implementation actually lives in Camera3Stream. Camera3OutputStream does not inherit Camera3Stream directly: it inherits from Camera3IOStreamBase, which in turn inherits from Camera3Stream.

9.Camera3IOStreamBase.h

class Camera3IOStreamBase :
        public Camera3Stream {
    ......
}

The function that actually obtains the buffer is getBufferLocked(...), which lives in the Camera3OutputStream class.

10.Camera3Stream.cpp

status_t Camera3Stream::getBuffer(camera3_stream_buffer *buffer) {
    ATRACE_CALL();
    Mutex::Autolock l(mLock);
    status_t res = OK;
    // This function should only be called when the stream is configured.
    if (mState != STATE_CONFIGURED) {
        ALOGE("%s: Stream %d: Can't get buffers if stream is not in CONFIGURED state %d",
                __FUNCTION__, mId, mState);
        return INVALID_OPERATION;
    }
    // If we are about to hit the limit, wait for a new buffer to be returned
    if (getHandoutOutputBufferCountLocked() == camera3_stream::max_buffers) {
        ALOGV("%s: Already dequeued max output buffers (%d), wait for next returned one.",
                __FUNCTION__, camera3_stream::max_buffers);
        res = mOutputBufferReturnedSignal.waitRelative(mLock, kWaitForBufferDuration);
        if (res != OK) {
            if (res == TIMED_OUT) {
                ALOGE("%s: wait for output buffer return timed out after %lldms (max_buffers %d)",
                        __FUNCTION__, kWaitForBufferDuration / 1000000LL,
                        camera3_stream::max_buffers);
            }
            return res;
        }
    }
    // The function that actually fetches the buffer
    res = getBufferLocked(buffer);
    if (res == OK) {
        // Fire the BufferListener callbacks
        fireBufferListenersLocked(*buffer, /*acquired*/true, /*output*/true);
    }
    return res;
}

mConsumer points to a Surface, and Surface derives from ANativeWindow.

11.Camera3OutputStream.cpp

status_t Camera3OutputStream::getBufferLocked(camera3_stream_buffer *buffer) {
    ATRACE_CALL();
    status_t res;

    if ((res = getBufferPreconditionCheckLocked()) != OK) {
        return res;
    }

    ANativeWindowBuffer* anb;
    int fenceFd;

    /**
     * Release the lock briefly to avoid deadlock in the following scenario:
     * Thread 1: StreamingProcessor::startStream -> Camera3Stream::isConfiguring().
     * This thread acquired StreamingProcessor lock and try to lock Camera3Stream lock.
     * Thread 2: Camera3Stream::returnBuffer->StreamingProcessor::onFrameAvailable().
     * This thread acquired Camera3Stream lock and bufferQueue lock, and try to lock
     * StreamingProcessor lock.
     * Thread 3: Camera3Stream::getBuffer(). This thread acquired Camera3Stream lock
     * and try to lock bufferQueue lock.
     * Then there is circular locking dependency.
     */
    sp<ANativeWindow> currentConsumer = mConsumer;
    mLock.unlock();

    res = currentConsumer->dequeueBuffer(currentConsumer.get(), &anb, &fenceFd);
    mLock.lock();
    if (res != OK) {
        ALOGE("%s: Stream %d: Can't dequeue next output buffer: %s (%d)",
                __FUNCTION__, mId, strerror(-res), res);
        return res;
    }

    /**
     * The fence FD is now owned by the HAL, except in the case of an error,
     * in which case we reassign it to acquire_fence
     */
    handoutBufferLocked(*buffer, &(anb->handle), /*acquireFence*/fenceFd,
                        /*releaseFence*/-1, CAMERA3_BUFFER_STATUS_OK, /*output*/true);

    return OK;
}

First, look at the definition of the ANativeWindow struct. The key member here is the dequeueBuffer function pointer, the hook EGL calls to obtain a buffer; this call may block if no buffer is available.

12.system/core/include/system/window.h

struct ANativeWindow
{
#ifdef __cplusplus
    ANativeWindow()
        : flags(0), minSwapInterval(0), maxSwapInterval(0), xdpi(0), ydpi(0)
    {
        common.magic = ANDROID_NATIVE_WINDOW_MAGIC;
        common.version = sizeof(ANativeWindow);
        memset(common.reserved, 0, sizeof(common.reserved));
    }

    /* Implement the methods that sp<ANativeWindow> expects so that it
       can be used to automatically refcount ANativeWindow's. */
    void incStrong(const void* /*id*/) const {
        common.incRef(const_cast<android_native_base_t*>(&common));
    }
    void decStrong(const void* /*id*/) const {
        common.decRef(const_cast<android_native_base_t*>(&common));
    }
#endif

    ......
    int     (*dequeueBuffer)(struct ANativeWindow* window,
                struct ANativeWindowBuffer** buffer, int* fenceFd);
    ......
};
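
Every caller goes through these function pointers, so the concrete window type stays invisible at the call site. A hedged sketch of the call Camera3OutputStream::getBufferLocked makes (the helper name dequeueOne is ours):

#include <system/window.h>

// Dequeue one buffer through the ANativeWindow v-table. This may block until
// the BufferQueue has a free slot; on success the caller owns *outFenceFd and
// must wait on that fence before writing into the buffer.
static int dequeueOne(ANativeWindow* window,
                      ANativeWindowBuffer** outBuf, int* outFenceFd) {
    return window->dequeueBuffer(window, outBuf, outFenceFd);
}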

The Surface class is the implementation of ANativeWindow that feeds graphics buffers into a BufferQueue.

13.frameworks/native/include/gui/Surface.h

class Surface
    : public ANativeObjectBase<ANativeWindow, Surface, RefBase>
{
    ......
};

The Surface constructor initializes the ANativeWindow function pointers; ANativeWindow::dequeueBuffer is assigned hook_dequeueBuffer.

frameworks/native/libs/gui/Surface.cpp

Surface::Surface(
        const sp<IGraphicBufferProducer>& bufferProducer,
        bool controlledByApp)
    : mGraphicBufferProducer(bufferProducer),
      mGenerationNumber(0)
{
    ANativeWindow::setSwapInterval  = hook_setSwapInterval;
    ANativeWindow::dequeueBuffer    = hook_dequeueBuffer;
    ANativeWindow::cancelBuffer     = hook_cancelBuffer;
    ANativeWindow::queueBuffer      = hook_queueBuffer;
    ANativeWindow::query            = hook_query;
    ANativeWindow::perform          = hook_perform;

    ANativeWindow::dequeueBuffer_DEPRECATED = hook_dequeueBuffer_DEPRECATED;
    ANativeWindow::cancelBuffer_DEPRECATED  = hook_cancelBuffer_DEPRECATED;
    ANativeWindow::lockBuffer_DEPRECATED    = hook_lockBuffer_DEPRECATED;
    ANativeWindow::queueBuffer_DEPRECATED   = hook_queueBuffer_DEPRECATED;

    ......
}

The hook first converts the ANativeWindow back to a Surface and then calls the two-argument Surface::dequeueBuffer(…).

14.frameworks/native/libs/gui/Surface.cpp


int Surface::hook_dequeueBuffer(ANativeWindow* window,
        ANativeWindowBuffer** buffer, int* fenceFd) {
    Surface* c = getSelf(window);
    return c->dequeueBuffer(buffer, fenceFd);
}
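
The hook works because Surface derives from ANativeWindow (via ANativeObjectBase), so the base pointer can be converted back to the owning Surface; that is all getSelf does. The idiom as a self-contained sketch (the names Window and MySurface are ours):

#include <cstdio>

// Simplified stand-ins for ANativeWindow and Surface.
struct Window {
    int (*dequeueBuffer)(Window* window, int* outBuf);
};

struct MySurface : Window {
    MySurface() { dequeueBuffer = hook_dequeueBuffer; }

    // Static hook: recover the derived object from the base pointer
    // (the getSelf pattern), then forward to the member function.
    static int hook_dequeueBuffer(Window* window, int* outBuf) {
        MySurface* self = static_cast<MySurface*>(window);
        return self->dequeueBufferImpl(outBuf);
    }

    int dequeueBufferImpl(int* outBuf) {
        *outBuf = 42;  // pretend a free slot was found
        return 0;
    }
};

int main() {
    MySurface s;
    Window* w = &s;  // callers only see the base struct
    int buf = -1;
    w->dequeueBuffer(w, &buf);
    printf("dequeued slot %d\n", buf);
    return 0;
}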

Surface::dequeueBuffer(…) first calls IGraphicBufferProducer::dequeueBuffer, then fetches the buffer from the GraphicBuffer slot.

15.frameworks/native/libs/gui/Surface.cpp

int Surface::dequeueBuffer(android_native_buffer_t** buffer, int* fenceFd) {
    ATRACE_CALL();
    ALOGV("Surface::dequeueBuffer");

    uint32_t reqWidth;
    uint32_t reqHeight;
    bool swapIntervalZero;
    PixelFormat reqFormat;
    uint32_t reqUsage;

    {
        Mutex::Autolock lock(mMutex);

        reqWidth = mReqWidth ? mReqWidth : mUserWidth;
        reqHeight = mReqHeight ? mReqHeight : mUserHeight;

        swapIntervalZero = mSwapIntervalZero;
        reqFormat = mReqFormat;
        reqUsage = mReqUsage;
    } // Drop the lock so that we can still touch the Surface while blocking in IGBP::dequeueBuffer

    int buf = -1;
    sp<Fence> fence;
    status_t result = mGraphicBufferProducer->dequeueBuffer(&buf, &fence, swapIntervalZero,
            reqWidth, reqHeight, reqFormat, reqUsage);

    if (result < 0) {
        ALOGV("dequeueBuffer: IGraphicBufferProducer::dequeueBuffer(%d, %d, %d, %d, %d)"
             "failed: %d", swapIntervalZero, reqWidth, reqHeight, reqFormat,
             reqUsage, result);
        return result;
    }

    Mutex::Autolock lock(mMutex);

    sp<GraphicBuffer>& gbuf(mSlots[buf].buffer);

    // this should never happen
    ALOGE_IF(fence == NULL, "Surface::dequeueBuffer: received null Fence! buf=%d", buf);

    if (result & IGraphicBufferProducer::RELEASE_ALL_BUFFERS) {
        freeAllBuffers();
    }

    if ((result & IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION) || gbuf == 0) {
        result = mGraphicBufferProducer->requestBuffer(buf, &gbuf);
        if (result != NO_ERROR) {
            ALOGE("dequeueBuffer: IGraphicBufferProducer::requestBuffer failed: %d", result);
            mGraphicBufferProducer->cancelBuffer(buf, fence);
            return result;
        }
    }

    if (fence->isValid()) {
        *fenceFd = fence->dup();
        if (*fenceFd == -1) {
            ALOGE("dequeueBuffer: error duping fence: %d", errno);
            // dup() should never fail; something is badly wrong. Soldier on
            // and hope for the best; the worst that should happen is some
            // visible corruption that lasts until the next frame.
        }
    } else {
        *fenceFd = -1;
    }

    *buffer = gbuf.get();
    return OK;
}

mGraphicBufferProducer is initialized in the Surface constructor and actually points to a BpGraphicBufferProducer object. Calling BpGraphicBufferProducer::dequeueBuffer(…) causes the remote BnGraphicBufferProducer::dequeueBuffer(…) to respond.

16.frameworks/native/libs/gui/IGraphicBufferProducer.cpp

class BpGraphicBufferProducer : public BpInterface<IGraphicBufferProducer>
{
public:
    ......
    virtual status_t dequeueBuffer(int *buf, sp<Fence>* fence, bool async,
            uint32_t width, uint32_t height, PixelFormat format,
            uint32_t usage) {
        Parcel data, reply;
        data.writeInterfaceToken(IGraphicBufferProducer::getInterfaceDescriptor());
        data.writeInt32(static_cast<int32_t>(async));
        data.writeUint32(width);
        data.writeUint32(height);
        data.writeInt32(static_cast<int32_t>(format));
        data.writeUint32(usage);
        status_t result = remote()->transact(DEQUEUE_BUFFER, data, &reply);
        if (result != NO_ERROR) {
            return result;
        }
        *buf = reply.readInt32();
        bool nonNull = reply.readInt32();
        if (nonNull) {
            *fence = new Fence();
            reply.read(**fence);
        }
        result = reply.readInt32();
        return result;
    }    
    ......
};

BufferQueueProducer inherits from BnGraphicBufferProducer, so the concrete implementation of the remote BnGraphicBufferProducer::dequeueBuffer(…) lives in BufferQueueProducer. The while loop calls waitForFreeSlotThenRelock(…) to find a buffer slot, after which the GraphicBuffer can be obtained.

17.frameworks/native/libs/gui/BufferQueueProducer.cpp

status_t BufferQueueProducer::dequeueBuffer(int *outSlot,
        sp<android::Fence> *outFence, bool async,
        uint32_t width, uint32_t height, PixelFormat format, uint32_t usage) {
    ATRACE_CALL();
    { // Autolock scope
        Mutex::Autolock lock(mCore->mMutex);
        mConsumerName = mCore->mConsumerName;
    } // Autolock scope

    BQ_LOGV("dequeueBuffer: async=%s w=%u h=%u format=%#x, usage=%#x",
            async ? "true" : "false", width, height, format, usage);

    if ((width && !height) || (!width && height)) {
        BQ_LOGE("dequeueBuffer: invalid size: w=%u h=%u", width, height);
        return BAD_VALUE;
    }

    status_t returnFlags = NO_ERROR;
    EGLDisplay eglDisplay = EGL_NO_DISPLAY;
    EGLSyncKHR eglFence = EGL_NO_SYNC_KHR;
    bool attachedByConsumer = false;

    { // Autolock scope
        Mutex::Autolock lock(mCore->mMutex);
        mCore->waitWhileAllocatingLocked();

        if (format == 0) {
            format = mCore->mDefaultBufferFormat;
        }

        // Enable the usage bits the consumer requested
        usage |= mCore->mConsumerUsageBits;

        const bool useDefaultSize = !width && !height;
        if (useDefaultSize) {
            width = mCore->mDefaultWidth;
            height = mCore->mDefaultHeight;
        }

        int found = BufferItem::INVALID_BUFFER_SLOT;
        while (found == BufferItem::INVALID_BUFFER_SLOT) {
            status_t status = waitForFreeSlotThenRelock("dequeueBuffer", async,
                    &found, &returnFlags);
            if (status != NO_ERROR) {
                return status;
            }

            // This should not happen
            if (found == BufferQueueCore::INVALID_BUFFER_SLOT) {
                BQ_LOGE("dequeueBuffer: no available buffer slots");
                return -EBUSY;
            }

            const sp<GraphicBuffer>& buffer(mSlots[found].mGraphicBuffer);

            // If we are not allowed to allocate new buffers, then
            // waitForFreeSlotThenRelock must have returned a slot containing a
            // buffer. If this buffer would require reallocation to meet the
            // requested attributes, we free it and attempt to get another one.
            if (!mCore->mAllowAllocation) {
                if (buffer->needsReallocation(width, height, format, usage)) {
                    mCore->freeBufferLocked(found);
                    found = BufferItem::INVALID_BUFFER_SLOT;
                    continue;
                }
            }
        }

        *outSlot = found;
        ATRACE_BUFFER_INDEX(found);

        attachedByConsumer = mSlots[found].mAttachedByConsumer;

        mSlots[found].mBufferState = BufferSlot::DEQUEUED;

        const sp<GraphicBuffer>& buffer(mSlots[found].mGraphicBuffer);
        if ((buffer == NULL) ||
                buffer->needsReallocation(width, height, format, usage))
        {
            mSlots[found].mAcquireCalled = false;
            mSlots[found].mGraphicBuffer = NULL;
            mSlots[found].mRequestBufferCalled = false;
            mSlots[found].mEglDisplay = EGL_NO_DISPLAY;
            mSlots[found].mEglFence = EGL_NO_SYNC_KHR;
            mSlots[found].mFence = Fence::NO_FENCE;
            mCore->mBufferAge = 0;

            returnFlags |= BUFFER_NEEDS_REALLOCATION;
        } else {
            // We add 1 because that will be the frame number when this buffer is queued
            mCore->mBufferAge =
                    mCore->mFrameCounter + 1 - mSlots[found].mFrameNumber;
        }

        BQ_LOGV("dequeueBuffer: setting buffer age to %" PRIu64,
                mCore->mBufferAge);

        if (CC_UNLIKELY(mSlots[found].mFence == NULL)) {
            BQ_LOGE("dequeueBuffer: about to return a NULL fence - "
                    "slot=%d w=%d h=%d format=%u",
                    found, buffer->width, buffer->height, buffer->format);
        }

        eglDisplay = mSlots[found].mEglDisplay;
        eglFence = mSlots[found].mEglFence;
        *outFence = mSlots[found].mFence;
        mSlots[found].mEglFence = EGL_NO_SYNC_KHR;
        mSlots[found].mFence = Fence::NO_FENCE;

        mCore->validateConsistencyLocked();
    } // Autolock scope

    if (returnFlags & BUFFER_NEEDS_REALLOCATION) {
        status_t error;
        BQ_LOGV("dequeueBuffer: allocating a new buffer for slot %d", *outSlot);
        sp<GraphicBuffer> graphicBuffer(mCore->mAllocator->createGraphicBuffer(
                width, height, format, usage, &error));
        if (graphicBuffer == NULL) {
            BQ_LOGE("dequeueBuffer: createGraphicBuffer failed");
            return error;
        }

        { // Autolock scope
            Mutex::Autolock lock(mCore->mMutex);

            if (mCore->mIsAbandoned) {
                BQ_LOGE("dequeueBuffer: BufferQueue has been abandoned");
                return NO_INIT;
            }

            graphicBuffer->setGenerationNumber(mCore->mGenerationNumber);
            mSlots[*outSlot].mGraphicBuffer = graphicBuffer;
        } // Autolock scope
    }

    if (attachedByConsumer) {
        returnFlags |= BUFFER_NEEDS_REALLOCATION;
    }

    if (eglFence != EGL_NO_SYNC_KHR) {
        EGLint result = eglClientWaitSyncKHR(eglDisplay, eglFence, 0,
                1000000000);
        // If something goes wrong, log the error, but return the buffer without
        // synchronizing access to it. It's too late at this point to abort the
        // dequeue operation.
        if (result == EGL_FALSE) {
            BQ_LOGE("dequeueBuffer: error %#x waiting for fence",
                    eglGetError());
        } else if (result == EGL_TIMEOUT_EXPIRED_KHR) {
            BQ_LOGE("dequeueBuffer: timeout waiting for fence");
        }
        eglDestroySyncKHR(eglDisplay, eglFence);
    }

    BQ_LOGV("dequeueBuffer: returning slot=%d/%" PRIu64 " buf=%p flags=%#x",
            *outSlot,
            mSlots[*outSlot].mFrameNumber,
            mSlots[*outSlot].mGraphicBuffer->handle, returnFlags);

    return returnFlags;
}

Finally, let's analyze how a batch of requests is submitted to the HAL. This mainly comes down to calling the HAL's process_capture_request(…). mHal3Device points to the camera3_device_t type, which is really a camera3_device struct.

common.version must equal CAMERA_DEVICE_API_VERSION_3_0 for the device to identify itself as implementing version 3.0 of the camera device HAL.

Performance requirements:

Camera open (common.module->common.methods->open) should return in 200 ms, and must return in 500 ms.
Camera close (common.close) should return in 200 ms, and must return in 500 ms.

18.hardware/libhardware/include/hardware/camera3.h

/**********************************************************************
 *
 * Camera device definitions
 *
 */
typedef struct camera3_device {
    hw_device_t common;
    camera3_device_ops_t *ops;
    void *priv;
} camera3_device_t;

Next, look at the camera3_device_ops_t struct, which defines the process_capture_request function pointer.

Semantics of the process_capture_request function pointer:

Send a new capture request to the HAL. The HAL should not return from this call until it is ready to accept the next request to process. The framework makes only one call to process_capture_request() at a time, and all calls come from the same thread. The next call is made as soon as a new request and its associated buffers are available; in a normal preview scenario this means the framework will call the function again almost immediately.

The actual request processing is asynchronous, with the capture results returned by the HAL through process_capture_result() calls. This call requires the result metadata to be available, but output buffers may simply provide sync fences to wait on. Multiple requests are expected to be in flight at once to maintain the full output frame rate.

The framework retains ownership of the request structure, which is only guaranteed to be valid during this call. The HAL device must make copies of whatever information it needs to retain for its capture processing. The HAL is responsible for waiting on and closing the buffers' fences and returning the buffer handles to the framework.
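
Since the request struct is only valid for the duration of the call, a HAL typically snapshots what it needs before returning. A hedged illustration (PendingCapture and snapshotRequest are our names, not the QCamera3 implementation):

#include <hardware/camera3.h>

#include <vector>

// Copy the parts of the request that must outlive process_capture_request().
struct PendingCapture {
    uint32_t frameNumber;
    std::vector<camera3_stream_buffer_t> outputs;
};

static PendingCapture snapshotRequest(const camera3_capture_request_t* req) {
    PendingCapture p;
    p.frameNumber = req->frame_number;
    p.outputs.assign(req->output_buffers,
                     req->output_buffers + req->num_output_buffers);
    return p;
}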

19.hardware/libhardware/include/hardware/camera3.h

typedef struct camera3_device_ops {
    ......
    int (*process_capture_request)(const struct camera3_device *,
            camera3_capture_request_t *request);
    ......
} camera3_device_ops_t;
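
On the framework side, Camera3Device submits through this v-table; a hedged sketch of the call site (which in AOSP lives in Camera3Device's request thread):

#include <hardware/camera3.h>

// process_capture_request blocks only until the HAL can accept the next
// request; the capture itself completes later via process_capture_result().
static int submitToHal(camera3_device_t* dev, camera3_capture_request_t* req) {
    return dev->ops->process_capture_request(dev, req);
}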

Taking the Moto Nexus 6 HAL as an example, the process_capture_request function pointer points to the QCamera3HardwareInterface::process_capture_request method in QCamera3HWI.cpp.

It first takes the private data out of the camera3_device struct's priv field, casts it to a QCamera3HardwareInterface* pointer, and then calls its processCaptureRequest(…) method.

20.QCamera3HWI.cpp

int QCamera3HardwareInterface::process_capture_request(
                    const struct camera3_device *device,
                    camera3_capture_request_t *request)
{
    CDBG("%s: E", __func__);
    QCamera3HardwareInterface *hw =
        reinterpret_cast<QCamera3HardwareInterface *>(device->priv);
    if (!hw) {
        ALOGE("%s: NULL camera device", __func__);
        return -EINVAL;
    }

    int rc = hw->processCaptureRequest(request);
    CDBG("%s: X", __func__);
    return rc;
}

Process a capture request from the camera service:

The first call initializes all streams;
start all streams;
update the pending request list and the pending buffers map, then issue the request on the other streams.

21.QCamera3HWI.cpp

int QCamera3HardwareInterface::processCaptureRequest(
                    camera3_capture_request_t *request)
{
    ATRACE_CALL();
    int rc = NO_ERROR;
    int32_t request_id;
    CameraMetadata meta;

    pthread_mutex_lock(&mMutex);
    // Validate the incoming request
    rc = validateCaptureRequest(request);
    if (rc != NO_ERROR) {
        ALOGE("%s: incoming request is not valid", __func__);
        pthread_mutex_unlock(&mMutex);
        return rc;
    }

    meta = request->settings;

    // For the first capture request, send the capture intent, then stream on all streams
    if (mFirstRequest) {

        /* get eis information for stream configuration */
        cam_is_type_t is_type;
        char is_type_value[PROPERTY_VALUE_MAX];
        property_get("camera.is_type", is_type_value, "0");
        is_type = static_cast<cam_is_type_t>(atoi(is_type_value));

        if (meta.exists(ANDROID_CONTROL_CAPTURE_INTENT)) {
            int32_t hal_version = CAM_HAL_V3;
            uint8_t captureIntent =
                meta.find(ANDROID_CONTROL_CAPTURE_INTENT).data.u8[0];
            mCaptureIntent = captureIntent;
            memset(mParameters, 0, sizeof(parm_buffer_t));
            AddSetParmEntryToBatch(mParameters, CAM_INTF_PARM_HAL_VERSION,
                sizeof(hal_version), &hal_version);
            AddSetParmEntryToBatch(mParameters, CAM_INTF_META_CAPTURE_INTENT,
                sizeof(captureIntent), &captureIntent);
        }

        // If EIS is enabled, turn it on for video recording;
        // there is no EIS for the front camera or for 4K video
        bool setEis = mEisEnable && (gCamCapability[mCameraId]->position == CAM_POSITION_BACK &&
            (mCaptureIntent ==  CAMERA3_TEMPLATE_VIDEO_RECORD ||
             mCaptureIntent == CAMERA3_TEMPLATE_VIDEO_SNAPSHOT));
        int32_t vsMode;
        vsMode = (setEis)? DIS_ENABLE: DIS_DISABLE;
        rc = AddSetParmEntryToBatch(mParameters,
                CAM_INTF_PARM_DIS_ENABLE,
                sizeof(vsMode), &vsMode);

        // IS type will be 0 unless EIS is supported. If EIS is supported, it can be 1 or 4 depending on the stream and video size
        if (setEis){
            if (m_bIs4KVideo) {
                is_type = IS_TYPE_DIS;
            } else {
                is_type = IS_TYPE_EIS_2_0;
            }
        }

        for (size_t i = 0; i < request->num_output_buffers; i++) {
            const camera3_stream_buffer_t& output = request->output_buffers[i];
            QCamera3Channel *channel = (QCamera3Channel *)output.stream->priv;
            /*for livesnapshot stream is_type will be DIS*/
            if (setEis && output.stream->format == HAL_PIXEL_FORMAT_BLOB) {
                rc = channel->registerBuffer(output.buffer, IS_TYPE_DIS);
            } else {
                rc = channel->registerBuffer(output.buffer, is_type);
            }
            if (rc < 0) {
                ALOGE("%s: registerBuffer failed",
                        __func__);
                pthread_mutex_unlock(&mMutex);
                return -ENODEV;
            }
        }

        /* set the capture intent, hal version and dis enable parms to the backend */
        mCameraHandle->ops->set_parms(mCameraHandle->camera_handle,
                    mParameters);


        // First initialize all streams
        for (List<stream_info_t *>::iterator it = mStreamInfo.begin();
            it != mStreamInfo.end(); it++) {
            QCamera3Channel *channel = (QCamera3Channel *)(*it)->stream->priv;
            if (setEis && (*it)->stream->format == HAL_PIXEL_FORMAT_BLOB) {
                rc = channel->initialize(IS_TYPE_DIS);
            } else {
                rc = channel->initialize(is_type);
            }
            if (NO_ERROR != rc) {
                ALOGE("%s : Channel initialization failed %d", __func__, rc);
                pthread_mutex_unlock(&mMutex);
                return rc;
            }
        }

        if (mRawDumpChannel) {
            rc = mRawDumpChannel->initialize(is_type);
            if (rc != NO_ERROR) {
                ALOGE("%s: Error: Raw Dump Channel init failed", __func__);
                pthread_mutex_unlock(&mMutex);
                return rc;
            }
        }
        if (mSupportChannel) {
            rc = mSupportChannel->initialize(is_type);
            if (rc < 0) {
                ALOGE("%s: Support channel initialization failed", __func__);
                pthread_mutex_unlock(&mMutex);
                return rc;
            }
        }

        // Then start them
        CDBG_HIGH("%s: Start META Channel", __func__);
        rc = mMetadataChannel->start();
        if (rc < 0) {
            ALOGE("%s: META channel start failed", __func__);
            pthread_mutex_unlock(&mMutex);
            return rc;
        }

        if (mSupportChannel) {
            rc = mSupportChannel->start();
            if (rc < 0) {
                ALOGE("%s: Support channel start failed", __func__);
                mMetadataChannel->stop();
                pthread_mutex_unlock(&mMutex);
                return rc;
            }
        }
        for (List<stream_info_t *>::iterator it = mStreamInfo.begin();
            it != mStreamInfo.end(); it++) {
            QCamera3Channel *channel = (QCamera3Channel *)(*it)->stream->priv;
            CDBG_HIGH("%s: Start Regular Channel mask=%d", __func__, channel->getStreamTypeMask());
            rc = channel->start();
            if (rc < 0) {
                ALOGE("%s: channel start failed", __func__);
                pthread_mutex_unlock(&mMutex);
                return rc;
            }
        }

        if (mRawDumpChannel) {
            CDBG("%s: Starting raw dump stream",__func__);
            rc = mRawDumpChannel->start();
            if (rc != NO_ERROR) {
                ALOGE("%s: Error Starting Raw Dump Channel", __func__);
                for (List<stream_info_t *>::iterator it = mStreamInfo.begin();
                      it != mStreamInfo.end(); it++) {
                    QCamera3Channel *channel =
                        (QCamera3Channel *)(*it)->stream->priv;
                    ALOGE("%s: Stopping Regular Channel mask=%d", __func__,
                        channel->getStreamTypeMask());
                    channel->stop();
                }
                if (mSupportChannel)
                    mSupportChannel->stop();
                mMetadataChannel->stop();
                pthread_mutex_unlock(&mMutex);
                return rc;
            }
        }
        mWokenUpByDaemon = false;
        mPendingRequest = 0;
    }

    uint32_t frameNumber = request->frame_number;
    cam_stream_ID_t streamID;

    if (meta.exists(ANDROID_REQUEST_ID)) {
        request_id = meta.find(ANDROID_REQUEST_ID).data.i32[0];
        mCurrentRequestId = request_id;
        CDBG("%s: Received request with id: %d",__func__, request_id);
    } else if (mFirstRequest || mCurrentRequestId == -1){
        ALOGE("%s: Unable to find request id field, \
                & no previous id available", __func__);
        return NAME_NOT_FOUND;
    } else {
        CDBG("%s: Re-using old request id", __func__);
        request_id = mCurrentRequestId;
    }

    CDBG("%s: %d, num_output_buffers = %d input_buffer = %p frame_number = %d",
                                    __func__, __LINE__,
                                    request->num_output_buffers,
                                    request->input_buffer,
                                    frameNumber);
    // Acquire all request buffers first
    streamID.num_streams = 0;
    int blob_request = 0;
    uint32_t snapshotStreamId = 0;
    for (size_t i = 0; i < request->num_output_buffers; i++) {
        const camera3_stream_buffer_t& output = request->output_buffers[i];
        QCamera3Channel *channel = (QCamera3Channel *)output.stream->priv;

        if (output.stream->format == HAL_PIXEL_FORMAT_BLOB) {
            // Store a local copy of the jpeg data for the encode parameters
            blob_request = 1;
            snapshotStreamId = channel->getStreamID(channel->getStreamTypeMask());
        }

        if (output.acquire_fence != -1) {
           rc = sync_wait(output.acquire_fence, TIMEOUT_NEVER);
           close(output.acquire_fence);
           if (rc != OK) {
              ALOGE("%s: sync wait failed %d", __func__, rc);
              pthread_mutex_unlock(&mMutex);
              return rc;
           }
        }

        streamID.streamID[streamID.num_streams] =
            channel->getStreamID(channel->getStreamTypeMask());
        streamID.num_streams++;


    }

    if (blob_request && mRawDumpChannel) {
        CDBG("%s: Trigger Raw based on blob request if Raw dump is enabled", __func__);
        streamID.streamID[streamID.num_streams] =
            mRawDumpChannel->getStreamID(mRawDumpChannel->getStreamTypeMask());
        streamID.num_streams++;
    }

    if(request->input_buffer == NULL) {
       rc = setFrameParameters(request, streamID, snapshotStreamId);
        if (rc < 0) {
            ALOGE("%s: fail to set frame parameters", __func__);
            pthread_mutex_unlock(&mMutex);
            return rc;
        }
    } else {

        if (request->input_buffer->acquire_fence != -1) {
           rc = sync_wait(request->input_buffer->acquire_fence, TIMEOUT_NEVER);
           close(request->input_buffer->acquire_fence);
           if (rc != OK) {
              ALOGE("%s: input buffer sync wait failed %d", __func__, rc);
              pthread_mutex_unlock(&mMutex);
              return rc;
           }
        }
    }

    /* Update the pending request list and the pending buffers map */
    PendingRequestInfo pendingRequest;
    pendingRequest.frame_number = frameNumber;
    pendingRequest.num_buffers = request->num_output_buffers;
    pendingRequest.request_id = request_id;
    pendingRequest.blob_request = blob_request;
    pendingRequest.bUrgentReceived = 0;

    pendingRequest.input_buffer = request->input_buffer;
    pendingRequest.settings = request->settings;
    pendingRequest.pipeline_depth = 0;
    pendingRequest.partial_result_cnt = 0;
    extractJpegMetadata(pendingRequest.jpegMetadata, request);

    // Extract the capture intent
    if (meta.exists(ANDROID_CONTROL_CAPTURE_INTENT)) {
        mCaptureIntent =
                meta.find(ANDROID_CONTROL_CAPTURE_INTENT).data.u8[0];
    }
    pendingRequest.capture_intent = mCaptureIntent;

    for (size_t i = 0; i < request->num_output_buffers; i++) {
        RequestedBufferInfo requestedBuf;
        requestedBuf.stream = request->output_buffers[i].stream;
        requestedBuf.buffer = NULL;
        pendingRequest.buffers.push_back(requestedBuf);

        // Add the buffer handle to the pending buffers list
        PendingBufferInfo bufferInfo;
        bufferInfo.frame_number = frameNumber;
        bufferInfo.buffer = request->output_buffers[i].buffer;
        bufferInfo.stream = request->output_buffers[i].stream;
        mPendingBuffersMap.mPendingBufferList.push_back(bufferInfo);
        mPendingBuffersMap.num_buffers++;
        CDBG("%s: frame = %d, buffer = %p, stream = %p, stream format = %d",
          __func__, frameNumber, bufferInfo.buffer, bufferInfo.stream,
          bufferInfo.stream->format);
    }
    CDBG("%s: mPendingBuffersMap.num_buffers = %d",
          __func__, mPendingBuffersMap.num_buffers);

    mPendingRequestsList.push_back(pendingRequest);

    if(mFlush) {
        pthread_mutex_unlock(&mMutex);
        return NO_ERROR;
    }

    // Notify the metadata channel that we received a request
    mMetadataChannel->request(NULL, frameNumber);

    metadata_buffer_t reproc_meta;
    memset(&reproc_meta, 0, sizeof(metadata_buffer_t));

    if(request->input_buffer != NULL){
        rc = setReprocParameters(request, &reproc_meta, snapshotStreamId);
        if (NO_ERROR != rc) {
            ALOGE("%s: fail to set reproc parameters", __func__);
            pthread_mutex_unlock(&mMutex);
            return rc;
        }
    }

    // Call request on the other streams
    for (size_t i = 0; i < request->num_output_buffers; i++) {
        const camera3_stream_buffer_t& output = request->output_buffers[i];
        QCamera3Channel *channel = (QCamera3Channel *)output.stream->priv;

        if (channel == NULL) {
            ALOGE("%s: invalid channel pointer for stream", __func__);
            continue;
        }

        if (output.stream->format == HAL_PIXEL_FORMAT_BLOB) {
            rc = channel->request(output.buffer, frameNumber,
                    request->input_buffer, (request->input_buffer)? &reproc_meta : mParameters);
            if (rc < 0) {
                ALOGE("%s: Fail to request on picture channel", __func__);
                pthread_mutex_unlock(&mMutex);
                return rc;
            }
        } else {
            CDBG("%s: %d, request with buffer %p, frame_number %d", __func__,
                __LINE__, output.buffer, frameNumber);
            rc = channel->request(output.buffer, frameNumber);
        }
        if (rc < 0)
            ALOGE("%s: request failed", __func__);
    }

    if(request->input_buffer == NULL) {
        /* Set the parameters to the backend */
        mCameraHandle->ops->set_parms(mCameraHandle->camera_handle, mParameters);
    }

    mFirstRequest = false;
    // Use a timed condition wait
    struct timespec ts;
    uint8_t isValidTimeout = 1;
    rc = clock_gettime(CLOCK_REALTIME, &ts);
    if (rc < 0) {
      isValidTimeout = 0;
      ALOGE("%s: Error reading the real time clock!!", __func__);
    }
    else {
      // Set the timeout to 5 seconds
      ts.tv_sec += 5;
    }
    // Block on the condition variable

    mPendingRequest++;
    while (mPendingRequest >= MIN_INFLIGHT_REQUESTS) {
        if (!isValidTimeout) {
            CDBG("%s: Blocking on conditional wait", __func__);
            pthread_cond_wait(&mRequestCond, &mMutex);
        }
        else {
            CDBG("%s: Blocking on timed conditional wait", __func__);
            rc = pthread_cond_timedwait(&mRequestCond, &mMutex, &ts);
            if (rc == ETIMEDOUT) {
                rc = -ENODEV;
                ALOGE("%s: Unblocked on timeout!!!!", __func__);
                break;
            }
        }
        CDBG("%s: Unblocked", __func__);
        if (mWokenUpByDaemon) {
            mWokenUpByDaemon = false;
            if (mPendingRequest < MAX_INFLIGHT_REQUESTS)
                break;
        }
    }
    pthread_mutex_unlock(&mMutex);

    return rc;
}

Camera2 Preview Flow Analysis (Part 3)

Let's first analyze QCamera3Channel initialization. From the "Android Source: Camera2 HAL3 Stream Configuration" article we know that for the HAL_PIXEL_FORMAT_YCbCr_420_888 format, the concrete QCamera3Channel implementation created is QCamera3RegularChannel.

  • Call init(…) to initialize;
  • determine the stream format;
  • call addStream(…) to add the stream.

QCamera3Channel.cpp–>initialize()

File path: hardware/qcom/camera/QCamera2/HAL3/QCamera3Channel.cpp

int32_t QCamera3RegularChannel::initialize(cam_is_type_t isType)
{
    ATRACE_CALL();
    int32_t rc = NO_ERROR;
    cam_format_t streamFormat;
    cam_dimension_t streamDim;

    if (NULL == mCamera3Stream) {
        ALOGE("%s: Camera stream uninitialized", __func__);
        return NO_INIT;
    }

    if (1 <= m_numStreams) {
        // Only one stream per channel is supported in HAL v3
        return NO_ERROR;
    }

    rc = init(NULL, NULL);
    if (rc < 0) {
        ALOGE("%s: init failed", __func__);
        return rc;
    }

    mNumBufs = CAM_MAX_NUM_BUFS_PER_STREAM;
    mIsType  = isType;

    if (mCamera3Stream->format == HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED) {
        if (mStreamType ==  CAM_STREAM_TYPE_VIDEO) {
            streamFormat = VIDEO_FORMAT;
        } else if (mStreamType == CAM_STREAM_TYPE_PREVIEW) {
            streamFormat = PREVIEW_FORMAT;
        } else {
            //TODO: Add a new flag in libgralloc for ZSL buffers,
            //and its size needs to be properly aligned and padded.
            streamFormat = DEFAULT_FORMAT;
        }
    } else if(mCamera3Stream->format == HAL_PIXEL_FORMAT_YCbCr_420_888) {
         streamFormat = CALLBACK_FORMAT;
    } else if (mCamera3Stream->format == HAL_PIXEL_FORMAT_RAW_OPAQUE ||
         mCamera3Stream->format == HAL_PIXEL_FORMAT_RAW10 ||
         mCamera3Stream->format == HAL_PIXEL_FORMAT_RAW16) {
         // Bayer pattern doesn't matter here.
         // All CAMIF raw format uses 10bit.
         streamFormat = RAW_FORMAT;
    } else {
        //TODO: Fail for other types of streams for now
        ALOGE("%s: format is not IMPLEMENTATION_DEFINED or flexible", __func__);
        return -EINVAL;
    }

    streamDim.width = mCamera3Stream->width;
    streamDim.height = mCamera3Stream->height;

    rc = QCamera3Channel::addStream(mStreamType,
            streamFormat,
            streamDim,
            mNumBufs,
            mPostProcMask,
            mIsType);

    return rc;
}

Initialize the channel. m_camOps points to the mm_camera_ops_t table of camera operations; the actual implementation of add_channel is mm_camera_intf_add_channel.

QCamera3Channel.cpp–>init()

int32_t QCamera3Channel::init(mm_camera_channel_attr_t *attr,
                             mm_camera_buf_notify_t dataCB)
{
    m_handle = m_camOps->add_channel(m_camHandle,
                                      attr,
                                      dataCB,
                                      this);
    if (m_handle == 0) {
        ALOGE("%s: Add channel failed", __func__);
        return UNKNOWN_ERROR;
    }
    return NO_ERROR;
}

mm_camera_interface.c–>mm_camera_ops()

hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_interface.c

/* camera ops v-table */
static mm_camera_ops_t mm_camera_ops = {
    .query_capability = mm_camera_intf_query_capability,
    .register_event_notify = mm_camera_intf_register_event_notify,
    .close_camera = mm_camera_intf_close,
    .error_close_camera = mm_camera_intf_error_close,
    .set_parms = mm_camera_intf_set_parms,
    .get_parms = mm_camera_intf_get_parms,
    .do_auto_focus = mm_camera_intf_do_auto_focus,
    .cancel_auto_focus = mm_camera_intf_cancel_auto_focus,
    .prepare_snapshot = mm_camera_intf_prepare_snapshot,
    .start_zsl_snapshot = mm_camera_intf_start_zsl_snapshot,
    .stop_zsl_snapshot = mm_camera_intf_stop_zsl_snapshot,
    .map_buf = mm_camera_intf_map_buf,
    .unmap_buf = mm_camera_intf_unmap_buf,
    .add_channel = mm_camera_intf_add_channel,
    .delete_channel = mm_camera_intf_del_channel,
    .get_bundle_info = mm_camera_intf_get_bundle_info,
    .add_stream = mm_camera_intf_add_stream,
    .delete_stream = mm_camera_intf_del_stream,
    .config_stream = mm_camera_intf_config_stream,
    .qbuf = mm_camera_intf_qbuf,
    .map_stream_buf = mm_camera_intf_map_stream_buf,
    .unmap_stream_buf = mm_camera_intf_unmap_stream_buf,
    .set_stream_parms = mm_camera_intf_set_stream_parms,
    .get_stream_parms = mm_camera_intf_get_stream_parms,
    .start_channel = mm_camera_intf_start_channel,
    .stop_channel = mm_camera_intf_stop_channel,
    .request_super_buf = mm_camera_intf_request_super_buf,
    .cancel_super_buf_request = mm_camera_intf_cancel_super_buf_request,
    .flush_super_buf_queue = mm_camera_intf_flush_super_buf_queue,
    .configure_notify_mode = mm_camera_intf_configure_notify_mode,
    .process_advanced_capture = mm_camera_intf_process_advanced_capture
};

Add a channel, which mainly calls the mm_camera_add_channel function.

camera_handle: camera handle

attr: bundle attribute of the channel, if needed

channel_cb: callback function for bundle data notification

userdata: user data pointer

mm_camera_interface.c–>mm_camera_intf_add_channel()

static uint32_t mm_camera_intf_add_channel(uint32_t camera_handle,
                                           mm_camera_channel_attr_t *attr,
                                           mm_camera_buf_notify_t channel_cb,
                                           void *userdata)
{
    uint32_t ch_id = 0;
    mm_camera_obj_t * my_obj = NULL;

    CDBG("%s :E camera_handler = %d", __func__, camera_handle);
    pthread_mutex_lock(&g_intf_lock);
    my_obj = mm_camera_util_get_camera_by_handler(camera_handle);

    if(my_obj) {
        pthread_mutex_lock(&my_obj->cam_lock);
        pthread_mutex_unlock(&g_intf_lock);
        ch_id = mm_camera_add_channel(my_obj, attr, channel_cb, userdata);
    } else {
        pthread_mutex_unlock(&g_intf_lock);
    }
    CDBG("%s :X ch_id = %d", __func__, ch_id);
    return ch_id;
}

It first finds an unused slot and then calls mm_channel_init(…) to initialize it.

mm_camera.c–>mm_camera_add_channel()

hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera.c

uint32_t mm_camera_add_channel(mm_camera_obj_t *my_obj,
                               mm_camera_channel_attr_t *attr,
                               mm_camera_buf_notify_t channel_cb,
                               void *userdata)
{
    mm_channel_t *ch_obj = NULL;
    uint8_t ch_idx = 0;
    uint32_t ch_hdl = 0;

    for(ch_idx = 0; ch_idx < MM_CAMERA_CHANNEL_MAX; ch_idx++) {
        if (MM_CHANNEL_STATE_NOTUSED == my_obj->ch[ch_idx].state) {
            ch_obj = &my_obj->ch[ch_idx];
            break;
        }
    }

    if (NULL != ch_obj) {
        /* initialize channel obj */
        memset(ch_obj, 0, sizeof(mm_channel_t));
        ch_hdl = mm_camera_util_generate_handler(ch_idx);
        ch_obj->my_hdl = ch_hdl;
        ch_obj->state = MM_CHANNEL_STATE_STOPPED;
        ch_obj->cam_obj = my_obj;
        pthread_mutex_init(&ch_obj->ch_lock, NULL);
        mm_channel_init(ch_obj, attr, channel_cb, userdata);
    }

    pthread_mutex_unlock(&my_obj->cam_lock);

    return ch_hdl;
}
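
mm_camera_util_generate_handler packs the slot index into the returned handle so that later lookups (mm_camera_util_get_channel_by_handler and friends) can recover the slot. A hedged sketch of that scheme (the exact bit layout is our assumption):

#include <cstdint>

// Assumed layout: an increasing counter in the high bits, the slot index in
// the low byte; a handle is therefore non-zero and not reused across sessions.
static uint32_t gHandlerCounter = 0;

static uint32_t generateHandler(uint8_t index) {
    return (++gHandlerCounter) << 8 | index;
}

static uint8_t indexFromHandler(uint32_t handler) {
    return static_cast<uint8_t>(handler & 0xff);
}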

The data polling thread is launched when the channel is opened.

mm_camera_channel.c–>mm_channel_init()

hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_channel.c

int32_t mm_channel_init(mm_channel_t *my_obj,
                        mm_camera_channel_attr_t *attr,
                        mm_camera_buf_notify_t channel_cb,
                        void *userdata)
{
    int32_t rc = 0;

    my_obj->bundle.super_buf_notify_cb = channel_cb;
    my_obj->bundle.user_data = userdata;
    if (NULL != attr) {
        my_obj->bundle.superbuf_queue.attr = *attr;
    }

    CDBG("%s : Launch data poll thread in channel open", __func__);
    mm_camera_poll_thread_launch(&my_obj->poll_thread[0],
                                 MM_CAMERA_POLL_TYPE_DATA);

    /* change the state to stopped */
    my_obj->state = MM_CHANNEL_STATE_STOPPED;
    return rc;
}

pthread_create is called to create the thread; the thread entry function is mm_camera_poll_thread (whose argument poll_cb points to an mm_camera_poll_thread_t struct).

5.mm_camera_thread.c–>mm_camera_poll_thread_launch()

int32_t mm_camera_poll_thread_launch(mm_camera_poll_thread_t * poll_cb,
                                     mm_camera_poll_thread_type_t poll_type)
{
    int32_t rc = 0;
    poll_cb->poll_type = poll_type;

    poll_cb->pfds[0] = -1;
    poll_cb->pfds[1] = -1;
    rc = pipe(poll_cb->pfds);
    if(rc < 0) {
        CDBG_ERROR("%s: pipe open rc=%d\n", __func__, rc);
        return -1;
    }

    poll_cb->timeoutms = -1;  /* Infinite seconds */

    CDBG("%s: poll_type = %d, read fd = %d, write fd = %d timeout = %d",
        __func__, poll_cb->poll_type,
        poll_cb->pfds[0], poll_cb->pfds[1],poll_cb->timeoutms);

    pthread_mutex_init(&poll_cb->mutex, NULL);
    pthread_cond_init(&poll_cb->cond_v, NULL);

    /* launch the thread */
    pthread_mutex_lock(&poll_cb->mutex);
    poll_cb->status = 0;
    pthread_create(&poll_cb->pid, NULL, mm_camera_poll_thread, (void *)poll_cb);
    if(!poll_cb->status) {
        pthread_cond_wait(&poll_cb->cond_v, &poll_cb->mutex);
    }
    pthread_mutex_unlock(&poll_cb->mutex);
    CDBG("%s: End",__func__);
    return rc;
}

The polling thread entry function.

6.mm_camera_thread.c–>mm_camera_poll_thread()

static void *mm_camera_poll_thread(void *data)
{
    // Set the thread name
    prctl(PR_SET_NAME, (unsigned long)"mm_cam_poll_th", 0, 0, 0);
    mm_camera_poll_thread_t *poll_cb = (mm_camera_poll_thread_t *)data;

    /* add the pipe read fd to the poll set first */
    poll_cb->poll_fds[poll_cb->num_fds++].fd = poll_cb->pfds[0];

    mm_camera_poll_sig_done(poll_cb);
    mm_camera_poll_set_state(poll_cb, MM_CAMERA_POLL_TASK_STATE_POLL);
    return mm_camera_poll_fn(poll_cb);
}

The polling thread routine. mm_camera_poll_proc_pipe implements the routine's handling of the pipe.

7.mm_camera_thread.c–>mm_camera_poll_fn()

static void *mm_camera_poll_fn(mm_camera_poll_thread_t *poll_cb)
{
    int rc = 0, i;

    if (NULL == poll_cb) {
        CDBG_ERROR("%s: poll_cb is NULL!\n", __func__);
        return NULL;
    }
    CDBG("%s: poll type = %d, num_fd = %d poll_cb = %p\n",
         __func__, poll_cb->poll_type, poll_cb->num_fds,poll_cb);
    do {
         for(i = 0; i < poll_cb->num_fds; i++) {
            // normal or priority-band data readable | normal data readable | high-priority data readable
            poll_cb->poll_fds[i].events = POLLIN|POLLRDNORM|POLLPRI;
         }

         rc = poll(poll_cb->poll_fds, poll_cb->num_fds, poll_cb->timeoutms);
         if(rc > 0) {
            if ((poll_cb->poll_fds[0].revents & POLLIN) &&
                (poll_cb->poll_fds[0].revents & POLLRDNORM)) {
                /* if we have data on the pipe, we only process the pipe in this iteration */
                CDBG("%s: cmd received on pipe\n", __func__);
                mm_camera_poll_proc_pipe(poll_cb);
            } else {
                for(i=1; i<poll_cb->num_fds; i++) {
                    /* check for ctrl events */
                    if ((poll_cb->poll_type == MM_CAMERA_POLL_TYPE_EVT) &&
                        (poll_cb->poll_fds[i].revents & POLLPRI)) {
                        CDBG("%s: mm_camera_evt_notify\n", __func__);
                        if (NULL != poll_cb->poll_entries[i-1].notify_cb) {
                            poll_cb->poll_entries[i-1].notify_cb(poll_cb->poll_entries[i-1].user_data);
                        }
                    }

                    if ((MM_CAMERA_POLL_TYPE_DATA == poll_cb->poll_type) &&
                        (poll_cb->poll_fds[i].revents & POLLIN) &&
                        (poll_cb->poll_fds[i].revents & POLLRDNORM)) {
                        CDBG("%s: mm_stream_data_notify\n", __func__);
                        if (NULL != poll_cb->poll_entries[i-1].notify_cb) {
                            poll_cb->poll_entries[i-1].notify_cb(poll_cb->poll_entries[i-1].user_data);
                        }
                    }
                }
            }
        } else {
            /* in error case sleep 10 us and then continue. hard coded here */
            usleep(10);
            continue;
        }
    } while ((poll_cb != NULL) && (poll_cb->state == MM_CAMERA_POLL_TASK_STATE_POLL));
    return NULL;
}
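
The pipe installed as poll_fds[0] is the classic self-pipe wake-up: other threads write a command byte to pfds[1], which makes the read end readable and interrupts poll() so the fd set can be updated. A minimal self-contained sketch of the pattern (not the mm_camera code itself):

#include <poll.h>
#include <unistd.h>

#include <cstdio>

int main() {
    int pfds[2];
    if (pipe(pfds) != 0) return 1;

    // Simulate another thread asking the poll loop to wake up:
    // one command byte makes pfds[0] readable.
    unsigned char cmd = 1;
    write(pfds[1], &cmd, 1);

    struct pollfd fds[1];
    fds[0].fd = pfds[0];
    fds[0].events = POLLIN;  // mm_camera also sets POLLRDNORM | POLLPRI

    int rc = poll(fds, 1, -1);  // would block forever if nothing was written
    if (rc > 0 && (fds[0].revents & POLLIN)) {
        read(pfds[0], &cmd, 1);  // drain the command byte
        printf("poll loop woken by pipe, cmd=%d\n", cmd);
    }
    close(pfds[0]);
    close(pfds[1]);
    return 0;
}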

Next, let's analyze adding a stream via addStream(…).

Add a stream to the channel: first create a QCamera3Stream object, then call its init method to initialize it.

8.QCamera3Channel.cpp–>addStream()

int32_t QCamera3Channel::addStream(cam_stream_type_t streamType,
                                  cam_format_t streamFormat,
                                  cam_dimension_t streamDim,
                                  uint8_t minStreamBufNum,
                                  uint32_t postprocessMask,
                                  cam_is_type_t isType)
{
    int32_t rc = NO_ERROR;

    if (m_numStreams >= 1) {
        ALOGE("%s: Only one stream per channel supported in v3 Hal", __func__);
        return BAD_VALUE;
    }

    if (m_numStreams >= MAX_STREAM_NUM_IN_BUNDLE) {
        ALOGE("%s: stream number (%d) exceeds max limit (%d)",
              __func__, m_numStreams, MAX_STREAM_NUM_IN_BUNDLE);
        return BAD_VALUE;
    }
    QCamera3Stream *pStream = new QCamera3Stream(m_camHandle,
                                               m_handle,
                                               m_camOps,
                                               mPaddingInfo,
                                               this);
    if (pStream == NULL) {
        ALOGE("%s: No mem for Stream", __func__);
        return NO_MEMORY;
    }

    rc = pStream->init(streamType, streamFormat, streamDim, NULL, minStreamBufNum,
                       postprocessMask, isType, streamCbRoutine, this);
    if (rc == 0) {
        mStreams[m_numStreams] = pStream;
        m_numStreams++;
    } else {
        delete pStream;
    }
    return rc;
}

  • Add the stream by calling the mm_camera_intf_add_stream function
  • Allocate and map the stream info memory by calling the mm_camera_intf_map_stream_buf function
  • Configure the stream by calling the mm_camera_intf_config_stream function

9.QCamera3Stream.cpp–>init()


int32_t QCamera3Stream::init(cam_stream_type_t streamType,
                            cam_format_t streamFormat,
                            cam_dimension_t streamDim,
                            cam_stream_reproc_config_t* reprocess_config,
                            uint8_t minNumBuffers,
                            uint32_t postprocess_mask,
                            cam_is_type_t is_type,
                            hal3_stream_cb_routine stream_cb,
                            void *userdata)
{
    int32_t rc = OK;
    mm_camera_stream_config_t stream_config;

    mHandle = mCamOps->add_stream(mCamHandle, mChannelHandle);
    if (!mHandle) {
        ALOGE("add_stream failed");
        rc = UNKNOWN_ERROR;
        goto done;
    }

    // allocate and map stream info memory
    mStreamInfoBuf = new QCamera3HeapMemory();
    if (mStreamInfoBuf == NULL) {
        ALOGE("%s: no memory for stream info buf obj", __func__);
        rc = -ENOMEM;
        goto err1;
    }
    rc = mStreamInfoBuf->allocate(1, sizeof(cam_stream_info_t), false);
    if (rc < 0) {
        ALOGE("%s: no memory for stream info", __func__);
        rc = -ENOMEM;
        goto err2;
    }

    mStreamInfo =
        reinterpret_cast<cam_stream_info_t *>(mStreamInfoBuf->getPtr(0));
    memset(mStreamInfo, 0, sizeof(cam_stream_info_t));
    mStreamInfo->stream_type = streamType;
    mStreamInfo->fmt = streamFormat;
    mStreamInfo->dim = streamDim;
    mStreamInfo->num_bufs = minNumBuffers;
    mStreamInfo->pp_config.feature_mask = postprocess_mask;
    ALOGV("%s: stream_type is %d, feature_mask is %d",
          __func__, mStreamInfo->stream_type, mStreamInfo->pp_config.feature_mask);
    mStreamInfo->is_type = is_type;
    rc = mCamOps->map_stream_buf(mCamHandle,
            mChannelHandle, mHandle, CAM_MAPPING_BUF_TYPE_STREAM_INFO,
            0, -1, mStreamInfoBuf->getFd(0), mStreamInfoBuf->getSize(0));
    if (rc < 0) {
        ALOGE("Failed to map stream info buffer");
        goto err3;
    }

    mNumBufs = minNumBuffers;
    if (reprocess_config != NULL) {
       mStreamInfo->reprocess_config = *reprocess_config;
       mStreamInfo->streaming_mode = CAM_STREAMING_MODE_BURST;
       //mStreamInfo->num_of_burst = reprocess_config->offline.num_of_bufs;
       mStreamInfo->num_of_burst = 1;
       ALOGI("%s: num_of_burst is %d", __func__, mStreamInfo->num_of_burst);
    } else {
       mStreamInfo->streaming_mode = CAM_STREAMING_MODE_CONTINUOUS;
    }

    // Configure the stream
    stream_config.stream_info = mStreamInfo;
    stream_config.mem_vtbl = mMemVtbl;
    stream_config.padding_info = mPaddingInfo;
    stream_config.userdata = this;
    stream_config.stream_cb = dataNotifyCB;

    rc = mCamOps->config_stream(mCamHandle,
            mChannelHandle, mHandle, &stream_config);
    if (rc < 0) {
        ALOGE("Failed to config stream, rc = %d", rc);
        goto err4;
    }

    mDataCB = stream_cb;
    mUserData = userdata;
    return 0;

err4:
    mCamOps->unmap_stream_buf(mCamHandle,
            mChannelHandle, mHandle, CAM_MAPPING_BUF_TYPE_STREAM_INFO, 0, -1);
err3:
    mStreamInfoBuf->deallocate();
err2:
    delete mStreamInfoBuf;
    mStreamInfoBuf = NULL;
    mStreamInfo = NULL;
err1:
    mCamOps->delete_stream(mCamHandle, mChannelHandle, mHandle);
    mHandle = 0;
    mNumBufs = 0;
done:
    return rc;
}

  • Look up the mm_camera_obj_t object from the camera handle (camera_handle);
  • call mm_camera_add_stream(…) to add the stream to the channel.

10.mm_camera_interface.c–>mm_camera_intf_add_stream()

static uint32_t mm_camera_intf_add_stream(uint32_t camera_handle,
                                          uint32_t ch_id)
{
    uint32_t stream_id = 0;
    mm_camera_obj_t * my_obj = NULL;

    CDBG("%s : E handle = %d ch_id = %d",
         __func__, camera_handle, ch_id);

    pthread_mutex_lock(&g_intf_lock);
    my_obj = mm_camera_util_get_camera_by_handler(camera_handle);

    if(my_obj) {
        pthread_mutex_lock(&my_obj->cam_lock);
        pthread_mutex_unlock(&g_intf_lock);
        stream_id = mm_camera_add_stream(my_obj, ch_id);
    } else {
        pthread_mutex_unlock(&g_intf_lock);
    }
    CDBG("%s :X stream_id = %d", __func__, stream_id);
    return stream_id;
}

  • Look up the mm_channel_t object from the mm_camera_obj_t object and the channel id;
  • call the mm_channel_fsm_fn function to further add the stream to the channel.

11.mm_camera.c–>mm_camera_add_stream()

uint32_t mm_camera_add_stream(mm_camera_obj_t *my_obj,
                              uint32_t ch_id)
{
    uint32_t s_hdl = 0;
    mm_channel_t * ch_obj =
        mm_camera_util_get_channel_by_handler(my_obj, ch_id);

    if (NULL != ch_obj) {
        pthread_mutex_lock(&ch_obj->ch_lock);
        pthread_mutex_unlock(&my_obj->cam_lock);

        mm_channel_fsm_fn(ch_obj,
                          MM_CHANNEL_EVT_ADD_STREAM,
                          NULL,
                          (void*)&s_hdl);
    } else {
        pthread_mutex_unlock(&my_obj->cam_lock);
    }

    return s_hdl;
}


The channel finite state machine entry function. Depending on the channel state, the incoming event is handled differently. Looking back at mm_channel_init, it is easy to see that the mm_channel_t state field here is MM_CHANNEL_STATE_STOPPED.

12.mm_camera_channel.c–>mm_channel_fsm_fn()

int32_t mm_channel_fsm_fn(mm_channel_t *my_obj,
                          mm_channel_evt_type_t evt,
                          void * in_val,
                          void * out_val)
{
    int32_t rc = -1;

    CDBG("%s : E state = %d", __func__, my_obj->state);
    switch (my_obj->state) {
    case MM_CHANNEL_STATE_NOTUSED:
        rc = mm_channel_fsm_fn_notused(my_obj, evt, in_val, out_val);
        break;
    case MM_CHANNEL_STATE_STOPPED:
        rc = mm_channel_fsm_fn_stopped(my_obj, evt, in_val, out_val);
        break;
    case MM_CHANNEL_STATE_ACTIVE:
        rc = mm_channel_fsm_fn_active(my_obj, evt, in_val, out_val);
        break;
    case MM_CHANNEL_STATE_PAUSED:
        rc = mm_channel_fsm_fn_paused(my_obj, evt, in_val, out_val);
        break;
    default:
        CDBG("%s: Not a valid state (%d)", __func__, my_obj->state);
        break;
    }

    /* unlock ch_lock */
    pthread_mutex_unlock(&my_obj->ch_lock);
    CDBG("%s : X rc = %d", __func__, rc);
    return rc;
}

The channel finite state machine function that handles events in the STOPPED state. Here the mm_channel_evt_type_t is MM_CHANNEL_EVT_ADD_STREAM.

13.mm_camera_channel.c–>mm_channel_fsm_fn_stopped()

int32_t mm_channel_fsm_fn_stopped(mm_channel_t *my_obj,
                                  mm_channel_evt_type_t evt,
                                  void * in_val,
                                  void * out_val)
{
    int32_t rc = 0;
    CDBG("%s : E evt = %d", __func__, evt);
    switch (evt) {
    case MM_CHANNEL_EVT_ADD_STREAM:
        {
            uint32_t s_hdl = 0;
            s_hdl = mm_channel_add_stream(my_obj);
            *((uint32_t*)out_val) = s_hdl;
            rc = 0;
        }
        break;
    ......
    default:
        CDBG_ERROR("%s: invalid state (%d) for evt (%d)",
                   __func__, my_obj->state, evt);
        break;
    }
    CDBG("%s : E rc = %d", __func__, rc);
    return rc;
}

  • Check for an available stream object
  • Initialize the stream object
  • Acquire the stream

14.mm_camera_channel.c–>mm_channel_add_stream()

uint32_t mm_channel_add_stream(mm_channel_t *my_obj)
{
    int32_t rc = 0;
    uint8_t idx = 0;
    uint32_t s_hdl = 0;
    mm_stream_t *stream_obj = NULL;

    CDBG("%s : E", __func__);
    /* check for an available stream object */
    for (idx = 0; idx < MAX_STREAM_NUM_IN_BUNDLE; idx++) {
        if (MM_STREAM_STATE_NOTUSED == my_obj->streams[idx].state) {
            stream_obj = &my_obj->streams[idx];
            break;
        }
    }
    if (NULL == stream_obj) {
        CDBG_ERROR("%s: streams reach max, no more stream allowed to add", __func__);
        return s_hdl;
    }

    /* initialize the stream object */
    memset(stream_obj, 0, sizeof(mm_stream_t));
    stream_obj->fd = -1;
    stream_obj->my_hdl = mm_camera_util_generate_handler(idx);
    stream_obj->ch_obj = my_obj;
    pthread_mutex_init(&stream_obj->buf_lock, NULL);
    pthread_mutex_init(&stream_obj->cb_lock, NULL);
    stream_obj->state = MM_STREAM_STATE_INITED;

    /* acquire the stream */
    rc = mm_stream_fsm_fn(stream_obj, MM_STREAM_EVT_ACQUIRE, NULL, NULL);
    if (0 == rc) {
        s_hdl = stream_obj->my_hdl;
    } else {
        /* error during acquire, de-init */
        pthread_mutex_destroy(&stream_obj->buf_lock);
        pthread_mutex_destroy(&stream_obj->cb_lock);
        memset(stream_obj, 0, sizeof(mm_stream_t));
    }
    CDBG("%s : stream handle = %d", __func__, s_hdl);
    return s_hdl;
}

The stream finite state machine entry function. Depending on the stream state, the incoming event is handled differently.

15.mm_camera_stream.c–>mm_stream_fsm_fn()

int32_t mm_stream_fsm_fn(mm_stream_t *my_obj,
                         mm_stream_evt_type_t evt,
                         void * in_val,
                         void * out_val)
{
    int32_t rc = -1;

    CDBG("%s: E, my_handle = 0x%x, fd = %d, state = %d",
         __func__, my_obj->my_hdl, my_obj->fd, my_obj->state);
    switch (my_obj->state) {
    ......
    case MM_STREAM_STATE_INITED:
        rc = mm_stream_fsm_inited(my_obj, evt, in_val, out_val);
        break;
    ......
    default:
        CDBG("%s: Not a valid state (%d)", __func__, my_obj->state);
        break;
    }
    CDBG("%s : X rc =%d",__func__,rc);
    return rc;
}

The stream finite state machine function that handles events in the INITED state. Here the incoming mm_stream_evt_type_t is MM_STREAM_EVT_ACQUIRE.

  • Open the device node;
  • set the stream extended mode on the server side via a v4l2 ioctl (see the sketch after the listing below);
  • set the mm_stream_t state field to MM_STREAM_STATE_ACQUIRED.

16.mm_camera_stream.c–>mm_stream_fsm_inited()

int32_t mm_stream_fsm_inited(mm_stream_t *my_obj,
                             mm_stream_evt_type_t evt,
                             void * in_val,
                             void * out_val)
{
    int32_t rc = 0;
    char dev_name[MM_CAMERA_DEV_NAME_LEN];

    CDBG("%s: E, my_handle = 0x%x, fd = %d, state = %d",
         __func__, my_obj->my_hdl, my_obj->fd, my_obj->state);
    switch(evt) {
    case MM_STREAM_EVT_ACQUIRE:
        if ((NULL == my_obj->ch_obj) || (NULL == my_obj->ch_obj->cam_obj)) {
            CDBG_ERROR("%s: NULL channel or camera obj\n", __func__);
            rc = -1;
            break;
        }

        if (NULL == my_obj) {
            CDBG_ERROR("%s: NULL camera object\n", __func__);
            rc = -1;
            break;
        }
        snprintf(dev_name, sizeof(dev_name), "/dev/%s",
                 mm_camera_util_get_dev_name(my_obj->ch_obj->cam_obj->my_hdl));

        my_obj->fd = open(dev_name, O_RDWR | O_NONBLOCK);
        if (my_obj->fd < 0) {
            CDBG_ERROR("%s: open dev returned %d\n", __func__, my_obj->fd);
            rc = -1;
            break;
        }
        CDBG("%s: open dev fd = %d\n", __func__, my_obj->fd);
        rc = mm_stream_set_ext_mode(my_obj);
        if (0 == rc) {
            my_obj->state = MM_STREAM_STATE_ACQUIRED;
        } else {
            /* failed setting ext_mode
             * close fd */
            close(my_obj->fd);
            my_obj->fd = -1;
            break;
        }
        break;
    default:
        CDBG_ERROR("%s: invalid state (%d) for evt (%d), in(%p), out(%p)",
                   __func__, my_obj->state, evt, in_val, out_val);
        break;
    }
    return rc;
}
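
mm_stream_set_ext_mode itself is a thin v4l2 ioctl. A hedged sketch of its core, simplified from the QCamera2 mm-camera-interface sources (setExtMode is our name):

#include <linux/videodev2.h>
#include <sys/ioctl.h>

#include <cstring>

// Negotiate the extended streaming mode with the server side; the daemon
// returns its stream id in parm.capture.extendedmode on success.
static int setExtMode(int fd, unsigned int* serverStreamId) {
    struct v4l2_streamparm s_parm;
    memset(&s_parm, 0, sizeof(s_parm));
    s_parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;

    int rc = ioctl(fd, VIDIOC_S_PARM, &s_parm);
    if (rc == 0) {
        *serverStreamId = s_parm.parm.capture.extendedmode;
    }
    return rc;
}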

Next, let's analyze mapping the stream info memory by calling the mm_camera_intf_map_stream_buf function, which maps the stream buffer to the server side over a domain socket.

  • Look up the mm_camera_obj_t object from the camera_handle camera handle
  • Call the mm_camera_map_stream_buf function for further processing

17.mm_camera_interface.c–>mm_camera_intf_map_stream_buf()

static int32_t mm_camera_intf_map_stream_buf(uint32_t camera_handle,
                                             uint32_t ch_id,
                                             uint32_t stream_id,
                                             uint8_t buf_type,
                                             uint32_t buf_idx,
                                             int32_t plane_idx,
                                             int fd,
                                             uint32_t size)
{
    int32_t rc = -1;
    mm_camera_obj_t * my_obj = NULL;

    pthread_mutex_lock(&g_intf_lock);
    my_obj = mm_camera_util_get_camera_by_handler(camera_handle);

    CDBG("%s :E camera_handle = %d, ch_id = %d, s_id = %d, buf_idx = %d, plane_idx = %d",
         __func__, camera_handle, ch_id, stream_id, buf_idx, plane_idx);

    if(my_obj) {
        pthread_mutex_lock(&my_obj->cam_lock);
        pthread_mutex_unlock(&g_intf_lock);
        rc = mm_camera_map_stream_buf(my_obj, ch_id, stream_id,
                                      buf_type, buf_idx, plane_idx,
                                      fd, size);
    }else{
        pthread_mutex_unlock(&g_intf_lock);
    }

    CDBG("%s :X rc = %d", __func__, rc);
    return rc;
}

  • Look up the mm_channel_t object from the mm_camera_obj_t object and the channel id
  • Call the mm_channel_fsm_fn function for further processing

Finally, the packet is sent to the server side (see the fd-passing sketch after the listing below).

18.mm_camera.c–>mm_camera_map_stream_buf()

int32_t mm_camera_map_stream_buf(mm_camera_obj_t *my_obj,
                                 uint32_t ch_id,
                                 uint32_t stream_id,
                                 uint8_t buf_type,
                                 uint32_t buf_idx,
                                 int32_t plane_idx,
                                 int fd,
                                 uint32_t size)
{
    int32_t rc = -1;
    mm_evt_paylod_map_stream_buf_t payload;
    mm_channel_t * ch_obj =
        mm_camera_util_get_channel_by_handler(my_obj, ch_id);

    if (NULL != ch_obj) {
        pthread_mutex_lock(&ch_obj->ch_lock);
        pthread_mutex_unlock(&my_obj->cam_lock);

        memset(&payload, 0, sizeof(payload));
        payload.stream_id = stream_id;
        payload.buf_type = buf_type;
        payload.buf_idx = buf_idx;
        payload.plane_idx = plane_idx;
        payload.fd = fd;
        payload.size = size;
        rc = mm_channel_fsm_fn(ch_obj,
                               MM_CHANNEL_EVT_MAP_STREAM_BUF,
                               (void*)&payload,
                               NULL);
    } else {
        pthread_mutex_unlock(&my_obj->cam_lock);
    }

    return rc;
}

Now let's analyze configuring the stream, which calls the mm_camera_intf_config_stream function.

  • Look up the mm_camera_obj_t object;
  • Call mm_camera_config_stream to configure the stream further;

17.mm_camera_interface.c–>mm_camera_intf_config_stream()

static int32_t mm_camera_intf_config_stream(uint32_t camera_handle,
                                            uint32_t ch_id,
                                            uint32_t stream_id,
                                            mm_camera_stream_config_t *config)
{
    int32_t rc = -1;
    mm_camera_obj_t * my_obj = NULL;

    CDBG("%s :E handle = %d, ch_id = %d,stream_id = %d",
         __func__, camera_handle, ch_id, stream_id);

    pthread_mutex_lock(&g_intf_lock);
    my_obj = mm_camera_util_get_camera_by_handler(camera_handle);

    CDBG("%s :mm_camera_intf_config_stream stream_id = %d",__func__,stream_id);

    if(my_obj) {
        pthread_mutex_lock(&my_obj->cam_lock);
        pthread_mutex_unlock(&g_intf_lock);
        rc = mm_camera_config_stream(my_obj, ch_id, stream_id, config);
    } else {
        pthread_mutex_unlock(&g_intf_lock);
    }
    CDBG("%s :X rc = %d", __func__, rc);
    return rc;
}

  • Look up the mm_channel_t object
  • Call the mm_channel_fsm_fn function for further processing

18.mm_camera.c–>mm_camera_config_stream()

int32_t mm_camera_config_stream(mm_camera_obj_t *my_obj,
                                uint32_t ch_id,
                                uint32_t stream_id,
                                mm_camera_stream_config_t *config)
{
    int32_t rc = -1;
    mm_channel_t * ch_obj =
        mm_camera_util_get_channel_by_handler(my_obj, ch_id);
    mm_evt_paylod_config_stream_t payload;

    if (NULL != ch_obj) {
        pthread_mutex_lock(&ch_obj->ch_lock);
        pthread_mutex_unlock(&my_obj->cam_lock);

        memset(&payload, 0, sizeof(mm_evt_paylod_config_stream_t));
        payload.stream_id = stream_id;
        payload.config = config;
        rc = mm_channel_fsm_fn(ch_obj,
                               MM_CHANNEL_EVT_CONFIG_STREAM,
                               (void*)&payload,
                               NULL);
    } else {
        pthread_mutex_unlock(&my_obj->cam_lock);
    }

    return rc;
}

At this point the state field of mm_channel_t is MM_CHANNEL_STATE_STOPPED.

19.mm_camera_channel.c–>mm_channel_fsm_fn()

int32_t mm_channel_fsm_fn(mm_channel_t *my_obj,
                          mm_channel_evt_type_t evt,
                          void * in_val,
                          void * out_val)
{
    int32_t rc = -1;

    CDBG("%s : E state = %d", __func__, my_obj->state);
    switch (my_obj->state) {
    ......
    case MM_CHANNEL_STATE_STOPPED:
        rc = mm_channel_fsm_fn_stopped(my_obj, evt, in_val, out_val);
        break;
    ......
    }

    /* unlock ch_lock */
    pthread_mutex_unlock(&my_obj->ch_lock);
    CDBG("%s : X rc = %d", __func__, rc);
    return rc;
}

Here mm_channel_config_stream is called to continue processing.

20.mm_camera_channel.c–>mm_channel_fsm_fn_stopped()

int32_t mm_channel_fsm_fn_stopped(mm_channel_t *my_obj,
                                  mm_channel_evt_type_t evt,
                                  void * in_val,
                                  void * out_val)
{
    int32_t rc = 0;
    CDBG("%s : E evt = %d", __func__, evt);
    switch (evt) {
    ......
    case MM_CHANNEL_EVT_CONFIG_STREAM:
        {
            mm_evt_paylod_config_stream_t *payload =
                (mm_evt_paylod_config_stream_t *)in_val;
            rc = mm_channel_config_stream(my_obj,
                                          payload->stream_id,
                                          payload->config);
        }
        break;
    ......
    default:
        CDBG_ERROR("%s: invalid state (%d) for evt (%d)",
                   __func__, my_obj->state, evt);
        break;
    }
    CDBG("%s : E rc = %d", __func__, rc);
    return rc;
}

  • Look up the mm_stream_t object;
  • Call the mm_stream_fsm_fn function to set the stream format.

21.mm_camera_channel.c–>mm_channel_config_stream()

int32_t mm_channel_config_stream(mm_channel_t *my_obj,
                                   uint32_t stream_id,
                                   mm_camera_stream_config_t *config)
{
    int rc = -1;
    mm_stream_t * stream_obj = NULL;
    CDBG("%s : E stream ID = %d", __func__, stream_id);
    stream_obj = mm_channel_util_get_stream_by_handler(my_obj, stream_id);

    if (NULL == stream_obj) {
        CDBG_ERROR("%s :Invalid Stream Object for stream_id = %d", __func__, stream_id);
        return rc;
    }

    /* set stream fmt */
    rc = mm_stream_fsm_fn(stream_obj,
                          MM_STREAM_EVT_SET_FMT,
                          (void *)config,
                          NULL);
    CDBG("%s : X rc = %d",__func__,rc);
    return rc;
}

Here the mm_stream_t state was set to MM_STREAM_STATE_ACQUIRED in the mm_stream_fsm_inited function.

22.mm_camera_stream.c–>mm_stream_fsm_fn()

int32_t mm_stream_fsm_fn(mm_stream_t *my_obj,
                         mm_stream_evt_type_t evt,
                         void * in_val,
                         void * out_val)
{
    int32_t rc = -1;

    CDBG("%s: E, my_handle = 0x%x, fd = %d, state = %d",
         __func__, my_obj->my_hdl, my_obj->fd, my_obj->state);
    switch (my_obj->state) {
    ......
    case MM_STREAM_STATE_ACQUIRED:
        rc = mm_stream_fsm_acquired(my_obj, evt, in_val, out_val);
        break;
    ......
    default:
        CDBG("%s: Not a valid state (%d)", __func__, my_obj->state);
        break;
    }
    CDBG("%s : X rc =%d",__func__,rc);
    return rc;
}

The stream finite-state-machine handler for events in the ACQUIRED state.

Here the mm_stream_evt_type_t is MM_STREAM_EVT_SET_FMT.

23.mm_camera_stream.c–>mm_stream_fsm_acquired()

int32_t mm_stream_fsm_acquired(mm_stream_t *my_obj,
                               mm_stream_evt_type_t evt,
                               void * in_val,
                               void * out_val)
{
    int32_t rc = 0;

    CDBG("%s: E, my_handle = 0x%x, fd = %d, state = %d",
         __func__, my_obj->my_hdl, my_obj->fd, my_obj->state);
    switch(evt) {
    case MM_STREAM_EVT_SET_FMT:
        {
            mm_camera_stream_config_t *config =
                (mm_camera_stream_config_t *)in_val;

            rc = mm_stream_config(my_obj, config);

            /* change state to configured */
            my_obj->state = MM_STREAM_STATE_CFG;

            break;
        }
    ......
    default:
        CDBG_ERROR("%s: invalid state (%d) for evt (%d), in(%p), out(%p)",
                   __func__, my_obj->state, evt, in_val, out_val);
    }
    CDBG("%s :X rc = %d", __func__, rc);
    return rc;
}

mm_stream_config calls mm_stream_sync_info and mm_stream_set_fmt to perform the actual stream configuration against the server side; a sketch of what the set-format step boils down to follows the code below.

24.mm_camera_stream.c–>mm_stream_config()

int32_t mm_stream_config(mm_stream_t *my_obj,
                         mm_camera_stream_config_t *config)
{
    int32_t rc = 0;
    CDBG("%s: E, my_handle = 0x%x, fd = %d, state = %d",
         __func__, my_obj->my_hdl, my_obj->fd, my_obj->state);
    my_obj->stream_info = config->stream_info;
    my_obj->buf_num = config->stream_info->num_bufs;
    my_obj->mem_vtbl = config->mem_vtbl;
    my_obj->padding_info = config->padding_info;
    /* cb through intf is always placed at idx 0 of buf_cb */
    my_obj->buf_cb[0].cb = config->stream_cb;
    my_obj->buf_cb[0].user_data = config->userdata;
    my_obj->buf_cb[0].cb_count = -1; /* infinite by default */

    rc = mm_stream_sync_info(my_obj);
    if (rc == 0) {
        rc = mm_stream_set_fmt(my_obj);
    }
    return rc;
}
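
mm_stream_set_fmt ultimately issues a V4L2 set-format ioctl on the stream's fd that was opened in MM_STREAM_EVT_ACQUIRE. Below is a minimal sketch of that kind of call, assuming a standard V4L2 multi-planar device; the actual pixel format and the private extensions the Qualcomm stack uses differ:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Hypothetical sketch: set a multi-planar capture format on an open
 * /dev/videoX fd; the driver may adjust fields it cannot satisfy. */
static int set_stream_fmt(int fd, uint32_t width, uint32_t height)
{
    struct v4l2_format fmt;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    fmt.fmt.pix_mp.width = width;
    fmt.fmt.pix_mp.height = height;
    fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV21; /* e.g. a preview YUV format */
    fmt.fmt.pix_mp.num_planes = 1;                  /* single-plane example */

    return ioctl(fd, VIDIOC_S_FMT, &fmt);
}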

Camera2 Preview Flow Analysis (Part 4)

**《Camera2 Preview Flow Analysis (Part 2)》** covered starting the stream, which is done through the QCamera3Channel start() method; for the channel created for the HAL_PIXEL_FORMAT_YCbCr_420_888 format, the QCamera3Channel implementation class is QCamera3RegularChannel.

1.QCamera3Channel.cpp–>start()

hardware/qcom/camera/QCamera2/HAL3/QCamera3Channel.cpp

int32_t QCamera3RegularChannel::start()
{
    ATRACE_CALL();
    int32_t rc = NO_ERROR;

    if (0 < mMemory.getCnt()) {
        rc = QCamera3Channel::start();
    }
    return rc;
}

  • Start the stream; the stream type is QCamera3Stream, which was added in addStream;
  • Start the channel.

Start the stream: this launches the main stream thread to handle the stream-related operations.

  • Initialize the QCameraQueue;
  • Launch the stream thread, which runs the dataProcRoutine routine.

2.QCamera3Stream.cpp–>start()

hardware/qcom/camera/QCamera2/HAL3/QCamera3Stream.cpp

int32_t QCamera3Stream::start()
{
   int32_t rc = 0;
    mDataQ.init();
    mTimeoutFrameQ.clear();
    if (mBatchSize)
        mFreeBatchBufQ.init();
    rc = mProcTh.launch(dataProcRoutine, this);
    return rc;
}

launch() calls pthread_create to create and start the thread. Commands reach the thread through a semaphore-guarded queue; a sketch of that delivery half follows the code below.

3.QCameraCmdThread.cpp–>launch()

int32_t QCameraCmdThread::launch(void *(*start_routine)(void *),
                                 void* user_data)
{
    /* launch the thread */
    pthread_create(&cmd_pid,
                   NULL,
                   start_routine,
                   user_data);
    return NO_ERROR;
}
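
launch() only creates the thread; work is handed to it via a command queue plus the cam_sem semaphore that dataProcRoutine() blocks on below. A sketch of the delivery half, modeled on QCameraCmdThread::sendCmd(); treat the exact field names as assumptions:

/* Sketch: enqueue a command node and post the semaphore so the
 * cam_sem_wait() in the processing routine wakes up. */
int32_t QCameraCmdThread::sendCmd(camera_cmd_type_t cmd,
                                  uint8_t sync_cmd, uint8_t priority)
{
    camera_cmd_t *node = (camera_cmd_t *)malloc(sizeof(camera_cmd_t));
    if (NULL == node) {
        return NO_MEMORY;
    }
    memset(node, 0, sizeof(camera_cmd_t));
    node->cmd = cmd;
    (void)sync_cmd; /* the real implementation can also block on a sync semaphore */

    if (priority) {
        cmd_queue.enqueueWithPriority((void *)node);
    } else {
        cmd_queue.enqueue((void *)node);
    }
    cam_sem_post(&cmd_sem);   /* wakes the waiting proc thread */
    return NO_ERROR;
}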

This is the function that processes data in the main stream thread. It handles each new notification from the cmd queue: if the camera_cmd_type_t is CAMERA_CMD_TYPE_DO_NEXT_JOB, it dequeues a frame from the QCameraQueue and invokes the function pointed to by mDataCB. (The producer half that fills mDataQ is sketched after the code.)

4.QCamera3Stream.cpp–>dataProcRoutine()

void *QCamera3Stream::dataProcRoutine(void *data)
{
    int running = 1;
    int ret;
    QCamera3Stream *pme = (QCamera3Stream *)data;
    QCameraCmdThread *cmdThread = &pme->mProcTh;
    cmdThread->setName("cam_stream_proc");

    CDBG("%s: E", __func__);
    do {
        do {
            ret = cam_sem_wait(&cmdThread->cmd_sem);
            if (ret != 0 && errno != EINVAL) {
                ALOGE("%s: cam_sem_wait error (%s)",
                      __func__, strerror(errno));
                return NULL;
            }
        } while (ret != 0);

        // Notified that a new cmd is available in the cmd queue
        camera_cmd_type_t cmd = cmdThread->getCmd();
        switch (cmd) {
        case CAMERA_CMD_TYPE_DO_NEXT_JOB:
            {
                CDBG("%s: Do next job", __func__);
                mm_camera_super_buf_t *frame =
                    (mm_camera_super_buf_t *)pme->mDataQ.dequeue();
                if (NULL != frame) {
                    if (pme->mDataCB != NULL) {
                        pme->mDataCB(frame, pme, pme->mUserData);
                    } else {
                        // no data cb routine, return the buf here
                        pme->bufDone(frame->bufs[0]->buf_idx);
                    }
                }
            }
            break;
        case CAMERA_CMD_TYPE_EXIT:
            CDBG_HIGH("%s: Exit", __func__);
            /* flush the data buf queue */
            pme->mDataQ.flush();
            running = 0;
            break;
        default:
            break;
        }
    } while (running);
    CDBG("%s: X", __func__);
    return NULL;
}
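
For dataProcRoutine() to have anything to dequeue, the mm-camera-interface buffer callback must first enqueue the frame and kick the thread. A sketch of that producer half, modeled on QCamera3Stream's dataNotifyCB()/processDataNotify() path; the member names follow the code above, and the details are assumptions:

/* Sketch: the buf callback registered with mm-camera-interface copies the
 * super buffer, queues it on mDataQ, and posts CAMERA_CMD_TYPE_DO_NEXT_JOB
 * so that dataProcRoutine() dequeues and dispatches it. */
void QCamera3Stream::dataNotifyCB(mm_camera_super_buf_t *recvd_frame,
                                  void *userdata)
{
    QCamera3Stream *pme = (QCamera3Stream *)userdata;
    if (NULL == pme || NULL == recvd_frame) {
        return;
    }

    /* Copy the super buffer so it outlives the mm-camera callback. */
    mm_camera_super_buf_t *frame =
            (mm_camera_super_buf_t *)malloc(sizeof(mm_camera_super_buf_t));
    if (NULL == frame) {
        return;
    }
    *frame = *recvd_frame;

    pme->mDataQ.enqueue((void *)frame);                 /* producer side of mDataQ */
    pme->mProcTh.sendCmd(CAMERA_CMD_TYPE_DO_NEXT_JOB,   /* enqueue cmd + sem post */
                         FALSE, FALSE);
}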

The stream's callback routine.

5.QCamera3Channel.cpp–>streamCbRoutine()

void QCamera3Channel::streamCbRoutine(mm_camera_super_buf_t *super_frame,
                QCamera3Stream *stream, void *userdata)
{
    QCamera3Channel *channel = (QCamera3Channel *)userdata;
    if (channel == NULL) {
        ALOGE("%s: invalid channel pointer", __func__);
        return;
    }
    channel->streamCbRoutine(super_frame, stream);
}

  • Validate the arguments;
  • Fill in the camera3_stream_buffer_t structure, ready to be handed back to the framework;
  • Invoke the function pointed to by mChannelCB, which actually points to QCamera3HardwareInterface::captureResultCb;

6.QCamera3Channel.cpp–>streamCbRoutine()

void QCamera3RegularChannel::streamCbRoutine(
                            mm_camera_super_buf_t *super_frame,
                            QCamera3Stream *stream)
{
    ATRACE_CALL();
    //FIXME Q Buf back in case of error?
    uint8_t frameIndex;
    buffer_handle_t *resultBuffer;
    int32_t resultFrameNumber;
    camera3_stream_buffer_t result;

    if (NULL == stream) {
        ALOGE("%s: Invalid stream", __func__);
        return;
    }

    if(!super_frame) {
         ALOGE("%s: Invalid Super buffer",__func__);
         return;
    }

    if(super_frame->num_bufs != 1) {
         ALOGE("%s: Multiple streams are not supported",__func__);
         return;
    }
    if(super_frame->bufs[0] == NULL ) {
         ALOGE("%s: Error, Super buffer frame does not contain valid buffer",
                  __func__);
         return;
    }

    frameIndex = (uint8_t)super_frame->bufs[0]->buf_idx;
    if(frameIndex >= mNumBufs) {
         ALOGE("%s: Error, Invalid index for buffer",__func__);
         stream->bufDone(frameIndex);
         return;
    }

    // Issue the framework callback with the following data
    resultBuffer = (buffer_handle_t *)mMemory.getBufferHandle(frameIndex);
    resultFrameNumber = mMemory.getFrameNumber(frameIndex);

    result.stream = mCamera3Stream;
    result.buffer = resultBuffer;
    result.status = CAMERA3_BUFFER_STATUS_OK;
    result.acquire_fence = -1;
    result.release_fence = -1;
    int32_t rc = stream->bufRelease(frameIndex);
    if (NO_ERROR != rc) {
        ALOGE("%s: Error %d releasing stream buffer %d",
                __func__, rc, frameIndex);
    }

    rc = mMemory.unregisterBuffer(frameIndex);
    if (NO_ERROR != rc) {
        ALOGE("%s: Error %d unregistering stream buffer %d",
                __func__, rc, frameIndex);
    }

    if (0 <= resultFrameNumber){
        mChannelCB(NULL, &result, (uint32_t)resultFrameNumber, mUserData);
    } else {
        ALOGE("%s: Bad frame number", __func__);
    }

    free(super_frame);
    return;
}

The callback handler for all channels (streams as well as metadata).

7.QCamera3HWI.cpp–>captureResultCb()

void QCamera3HardwareInterface::captureResultCb(mm_camera_super_buf_t *metadata,
                camera3_stream_buffer_t *buffer,
                uint32_t frame_number, void *userdata)
{
    QCamera3HardwareInterface *hw = (QCamera3HardwareInterface *)userdata;
    if (hw == NULL) {
        ALOGE("%s: Invalid hw %p", __func__, hw);
        return;
    }

    hw->captureResultCb(metadata, buffer, frame_number);
    return;
}

The key function to follow here is handleBufferWithLock, which handles the image buffer callback while holding the mMutex lock.

8.QCamera3HWI.cpp–>captureResultCb()

void QCamera3HardwareInterface::captureResultCb(mm_camera_super_buf_t *metadata_buf,
                camera3_stream_buffer_t *buffer, uint32_t frame_number)
{
    pthread_mutex_lock(&mMutex);

    /* Assume flush() is called before any reprocessing. Send the notify
     * and result immediately upon receipt of any callback */
    if (mLoopBackResult) {
        /* send notify */
        camera3_notify_msg_t notify_msg;
        notify_msg.type = CAMERA3_MSG_SHUTTER;
        notify_msg.message.shutter.frame_number = mLoopBackResult->frame_number;
        notify_msg.message.shutter.timestamp = mLoopBackTimestamp;
        mCallbackOps->notify(mCallbackOps, &notify_msg);
        /* send capture result */
        mCallbackOps->process_capture_result(mCallbackOps, mLoopBackResult);
        free_camera_metadata((camera_metadata_t *)mLoopBackResult->result);
        free(mLoopBackResult);
        mLoopBackResult = NULL;
    }

    if (metadata_buf)
        handleMetadataWithLock(metadata_buf);
    else
        handleBufferWithLock(buffer, frame_number);
    pthread_mutex_unlock(&mMutex);
}

If the frame number is not in the pending-request list, process_capture_result is invoked directly.

Callbacks from the HAL source

First, after the Camera HAL obtains image data via VIDIOC_DQBUF, it notifies the CameraProvider through the mCallbackOps->notify() and mCallbackOps->process_capture_result() callbacks. mCallbackOps is assigned when the CameraProvider passes its this pointer down in CameraDeviceSession::initialize(); the process_capture_result() pointer points to the implementation's CameraDeviceSession::sProcessCaptureResult(), and notify() points to the implementation's CameraDeviceSession::sNotify().

Typically, after obtaining image data, the camera HAL copies the image data information into request->output_buffers[0], then calls mCallbackOps->notify() to send the request-completed message to the CameraProvider, and then calls mCallbackOps->process_capture_result() to send back the corresponding data for the request.

File: hardware/libhardware/modules/camera/3_4/camera.cpp

void Camera::notifyShutter(uint32_t frame_number, uint64_t timestamp)
{
    camera3_notify_msg_t message;
    memset(&message, 0, sizeof(message));
    message.type = CAMERA3_MSG_SHUTTER;
    message.message.shutter.frame_number = frame_number;
    message.message.shutter.timestamp = timestamp;
    mCallbackOps->notify(mCallbackOps, &message);
}

void Camera::sendResult(std::shared_ptr<CaptureRequest> request) {
    // Fill in the result struct
    // (it only needs to live until the end of the framework callback).
    camera3_capture_result_t result {
        request->frame_number,
        request->settings.getAndLock(),
        static_cast<uint32_t>(request->output_buffers.size()),
        request->output_buffers.data(),
        request->input_buffer.get(),
        1,  // Total result; only 1 part.
        0,  // Number of physical camera metadata.
        nullptr,
        nullptr
    };
    // Make the framework callback.
    mCallbackOps->process_capture_result(mCallbackOps, &result);
}

CameraProvider

File: hardware/interfaces/camera/device/3.2/default/CameraDeviceSession.cpp

void CameraDeviceSession::sNotify(
        const camera3_callback_ops *cb,
        const camera3_notify_msg *msg) {
    CameraDeviceSession *d =
            const_cast<CameraDeviceSession*>(static_cast<const CameraDeviceSession*>(cb));
    NotifyMsg hidlMsg;
    convertToHidl(msg, &hidlMsg);
    ...
	/* After some parameter checks, the callback's notify() function is
     * invoked. mResultBatcher was set up when the CameraDeviceSession
     * instance was created at open-camera time; reading the code shows
     * that its notifications ultimately land in Camera3Device on the
     * CameraService side */
    d->mResultBatcher.notify(hidlMsg);
}

9.QCamera3HWI.cpp–>handleBufferWithLock()

void QCamera3HardwareInterface::handleBufferWithLock(
    camera3_stream_buffer_t *buffer, uint32_t frame_number)
{
    ATRACE_CALL();
    // If the frame number is not present in the pending request list, send
    // the buffer straight to the framework and update the pending buffer
    // map; otherwise, cache the buffer.
    List<PendingRequestInfo>::iterator i = mPendingRequestsList.begin();
    while (i != mPendingRequestsList.end() && i->frame_number != frame_number){
        i++;
    }
    if (i == mPendingRequestsList.end()) {
        // Verify that the frame_number of every pending request is greater
        for (List<PendingRequestInfo>::iterator j = mPendingRequestsList.begin();
                j != mPendingRequestsList.end(); j++) {
            if (j->frame_number < frame_number) {
                ALOGE("%s: Error: pending frame number %d is smaller than %d",
                        __func__, j->frame_number, frame_number);
            }
        }
        camera3_capture_result_t result;
        memset(&result, 0, sizeof(camera3_capture_result_t));
        result.result = NULL;
        result.frame_number = frame_number;
        result.num_output_buffers = 1;
        result.partial_result = 0;
        for (List<PendingFrameDropInfo>::iterator m = mPendingFrameDropList.begin();
                m != mPendingFrameDropList.end(); m++) {
            QCamera3Channel *channel = (QCamera3Channel *)buffer->stream->priv;
            uint32_t streamID = channel->getStreamID(channel->getStreamTypeMask());
            if((m->stream_ID == streamID) && (m->frame_number==frame_number) ) {
                buffer->status=CAMERA3_BUFFER_STATUS_ERROR;
                CDBG("%s: Stream STATUS_ERROR frame_number=%d, streamID=%d",
                        __func__, frame_number, streamID);
                m = mPendingFrameDropList.erase(m);
                break;
            }
        }
        result.output_buffers = buffer;
        CDBG("%s: result frame_number = %d, buffer = %p",
                __func__, frame_number, buffer->buffer);

        for (List<PendingBufferInfo>::iterator k =
                mPendingBuffersMap.mPendingBufferList.begin();
                k != mPendingBuffersMap.mPendingBufferList.end(); k++ ) {
            if (k->buffer == buffer->buffer) {
                CDBG("%s: Found Frame buffer, take it out from list",
                        __func__);

                mPendingBuffersMap.num_buffers--;
                k = mPendingBuffersMap.mPendingBufferList.erase(k);
                break;
            }
        }
        CDBG("%s: mPendingBuffersMap.num_buffers = %d",
            __func__, mPendingBuffersMap.num_buffers);

        mCallbackOps->process_capture_result(mCallbackOps, &result);
    } else {
        if (i->input_buffer) {
            CameraMetadata settings;
            camera3_notify_msg_t notify_msg;
            memset(&notify_msg, 0, sizeof(camera3_notify_msg_t));
            nsecs_t capture_time = systemTime(CLOCK_MONOTONIC);
            if(i->settings) {
                settings = i->settings;
                if (settings.exists(ANDROID_SENSOR_TIMESTAMP)) {
                    capture_time = settings.find(ANDROID_SENSOR_TIMESTAMP).data.i64[0];
                } else {
                    ALOGE("%s: No timestamp in input settings! Using current one.",
                            __func__);
                }
            } else {
                ALOGE("%s: Input settings missing!", __func__);
            }

            notify_msg.type = CAMERA3_MSG_SHUTTER;
            notify_msg.message.shutter.frame_number = frame_number;
            notify_msg.message.shutter.timestamp = capture_time;

            if (i->input_buffer->release_fence != -1) {
               int32_t rc = sync_wait(i->input_buffer->release_fence, TIMEOUT_NEVER);
               close(i->input_buffer->release_fence);
               if (rc != OK) {
               ALOGE("%s: input buffer sync wait failed %d", __func__, rc);
               }
            }

            for (List<PendingBufferInfo>::iterator k =
                    mPendingBuffersMap.mPendingBufferList.begin();
                    k != mPendingBuffersMap.mPendingBufferList.end(); k++ ) {
                if (k->buffer == buffer->buffer) {
                    CDBG("%s: Found Frame buffer, take it out from list",
                            __func__);

                    mPendingBuffersMap.num_buffers--;
                    k = mPendingBuffersMap.mPendingBufferList.erase(k);
                    break;
                }
            }
            CDBG("%s: mPendingBuffersMap.num_buffers = %d",
                __func__, mPendingBuffersMap.num_buffers);

            bool notifyNow = true;
            for (List<PendingRequestInfo>::iterator j = mPendingRequestsList.begin();
                    j != mPendingRequestsList.end(); j++) {
                if (j->frame_number < frame_number) {
                    notifyNow = false;
                    break;
                }
            }

            if (notifyNow) {
                camera3_capture_result result;
                memset(&result, 0, sizeof(camera3_capture_result));
                result.frame_number = frame_number;
                result.result = i->settings;
                result.input_buffer = i->input_buffer;
                result.num_output_buffers = 1;
                result.output_buffers = buffer;
                result.partial_result = PARTIAL_RESULT_COUNT;

                mCallbackOps->notify(mCallbackOps, &notify_msg);
                mCallbackOps->process_capture_result(mCallbackOps, &result);
                CDBG("%s: Notify reprocess now %d!", __func__, frame_number);
                i = mPendingRequestsList.erase(i);
                mPendingRequest--;
            } else {
                // Cache the reprocess result for later use
                PendingReprocessResult pendingResult;
                memset(&pendingResult, 0, sizeof(PendingReprocessResult));
                pendingResult.notify_msg = notify_msg;
                pendingResult.buffer = *buffer;
                pendingResult.frame_number = frame_number;
                mPendingReprocessResultList.push_back(pendingResult);
                CDBG("%s: Cache reprocess result %d!", __func__, frame_number);
            }
        } else {
            for (List<RequestedBufferInfo>::iterator j = i->buffers.begin();
                j != i->buffers.end(); j++) {
                if (j->stream == buffer->stream) {
                    if (j->buffer != NULL) {
                        ALOGE("%s: Error: buffer is already set", __func__);
                    } else {
                        j->buffer = (camera3_stream_buffer_t *)malloc(
                            sizeof(camera3_stream_buffer_t));
                        *(j->buffer) = *buffer;
                        CDBG("%s: cache buffer %p at result frame_number %d",
                            __func__, buffer, frame_number);
                    }
                }
            }
        }
    }
}

CameraDeviceSession::sProcessCaptureResult()

/**
 * Static callback forwarding methods from HAL to instance
 */
void CameraDeviceSession::sProcessCaptureResult(
        const camera3_callback_ops *cb,
        const camera3_capture_result *hal_result) {
    CameraDeviceSession *d =
            const_cast<CameraDeviceSession*>(static_cast<const CameraDeviceSession*>(cb));

    CaptureResult result = {};
    camera3_capture_result shadowResult;
    bool handlePhysCam = (d->mDeviceVersion >= CAMERA_DEVICE_API_VERSION_3_5);
    std::vector<::android::hardware::camera::common::V1_0::helper::CameraMetadata> compactMds;
    std::vector<const camera_metadata_t*> physCamMdArray;
    sShrinkCaptureResult(&shadowResult, hal_result, &compactMds, &physCamMdArray, handlePhysCam);

    /* Check the completeness of the data and erase some of the parameter
     * info that was recorded when the request was submitted earlier */
    status_t ret = d->constructCaptureResult(result, &shadowResult);
    if (ret == OK) {
        /* As before, this calls into Camera3Device on the CameraService side */
        d->mResultBatcher.processCaptureResult(result);
    }
}

When the camera device is opened, camera3_callback_ops::process_capture_result is assigned, so the function call above actually lands in the sProcessCaptureResult function.

10.Camera3Device.cpp–>Camera3Device()

Camera3Device::Camera3Device(int id):
        mId(id),
        mIsConstrainedHighSpeedConfiguration(false),
        mHal3Device(NULL),
        mStatus(STATUS_UNINITIALIZED),
        mStatusWaiters(0),
        mUsePartialResult(false),
        mNumPartialResults(1),
        mNextResultFrameNumber(0),
        mNextReprocessResultFrameNumber(0),
        mNextShutterFrameNumber(0),
        mNextReprocessShutterFrameNumber(0),
        mListener(NULL)
{
    ATRACE_CALL();
    camera3_callback_ops::notify = &sNotify;
    camera3_callback_ops::process_capture_result = &sProcessCaptureResult;
    ALOGV("%s: Created device for camera %d", __FUNCTION__, id);
}

The static callback-forwarding method from the HAL to the instance.

11.Camera3Device.cpp–>sProcessCaptureResult()

void Camera3Device::sProcessCaptureResult(const camera3_callback_ops *cb,
        const camera3_capture_result *result) {
    Camera3Device *d =
            const_cast<Camera3Device*>(static_cast<const Camera3Device*>(cb));

    d->processCaptureResult(result);
}

The callback method of the camera HAL device. The function to focus on is returnOutputBuffers(…).

12.Camera3Device.cpp–>processCaptureResult()

void Camera3Device::processCaptureResult(const camera3_capture_result *result) {
     /* This function again extracts the frame number. isPartialResult
     * refers to a partial result. One possible case: an HDR shot needs
     * three frames that share the same FrameNumber and are later merged
     * into a single image, so each of the three frames is a "partial"
     * result. (This understanding comes from the referenced article.)
     */
    uint32_t frameNumber = result->frame_number;
    // For HAL3.2 or above, if the HAL does not support partial results,
    // partial_result must always be set to 1 when metadata is included in this result.
    if (!mUsePartialResult &&
            mDeviceVersion >= CAMERA_DEVICE_API_VERSION_3_2 &&
            result->result != NULL &&
            result->partial_result != 1) {
        return;
    }
    bool isPartialResult = false;
    CameraMetadata collectedPartialResult;
    CaptureResultExtras resultExtras;
    bool hasInputBufferInRequest = false;
    status_t res;

    /** Get the shutter timestamp and resultExtras from the list of in-flight
    requests and attach them to this frame's shutter notify. If the shutter
    timestamp has not been received yet, append the output buffers to the
    in-flight request; they will be returned when the shutter timestamp
    arrives. If all result data and the shutter timestamp have been received,
    update the in-flight status and remove the in-flight entry. */
    nsecs_t shutterTimestamp = 0;

    {
        Mutex::Autolock l(mInFlightLock);
        ssize_t idx = mInFlightMap.indexOfKey(frameNumber);
        InFlightRequest &request = mInFlightMap.editValueAt(idx);
        // Always update the partial count to the latest one if it's not 0
        // (buffers only). When the framework aggregates adjacent partial
        // results into one, the latest partial count will be used.
        if (result->partial_result != 0)
            request.resultExtras.partialResultCount = result->partial_result;
        // Check if this result carries only partial metadata
        if (mUsePartialResult && result->result != NULL) {
            if (mDeviceVersion >= CAMERA_DEVICE_API_VERSION_3_2) {
                isPartialResult = (result->partial_result < mNumPartialResults);
                if (isPartialResult) {
                    request.partialResult.collectedResult.append(result->result);
                }
            } else {
                camera_metadata_ro_entry_t partialResultEntry;
                res = find_camera_metadata_ro_entry(result->result,
                        ANDROID_QUIRKS_PARTIAL_RESULT, &partialResultEntry);
                if (res != NAME_NOT_FOUND &&
                        partialResultEntry.count > 0 &&
                        partialResultEntry.data.u8[0] ==
                        ANDROID_QUIRKS_PARTIAL_RESULT_PARTIAL) {
                    // A partial result. Flag this as such, and collect this
                    // set of metadata into the in-flight entry.
                    isPartialResult = true;
                    request.partialResult.collectedResult.append(
                        result->result);
                    request.partialResult.collectedResult.erase(
                        ANDROID_QUIRKS_PARTIAL_RESULT);
                }
            }

            if (isPartialResult) {
                // Fire off a 3A-only result if possible
                if (!request.partialResult.haveSent3A) {
                    request.partialResult.haveSent3A =
                            processPartial3AResult(frameNumber,
                                    request.partialResult.collectedResult,
                                    request.resultExtras);
                }
            }
        }

        shutterTimestamp = request.shutterTimestamp;
        hasInputBufferInRequest = request.hasInputBuffer;

        // Did we get the (final) result metadata for this capture?
        if (result->result != NULL && !isPartialResult) {
            if (request.haveResultMetadata) {
                SET_ERR("Called multiple times with metadata for frame %d",
                        frameNumber);
                return;
            }
            if (mUsePartialResult &&
                    !request.partialResult.collectedResult.isEmpty()) {
                collectedPartialResult.acquire(
                    request.partialResult.collectedResult);
            }
            request.haveResultMetadata = true;
        }

        uint32_t numBuffersReturned = result->num_output_buffers;
        if (result->input_buffer != NULL) {
            if (hasInputBufferInRequest) {
                numBuffersReturned += 1;
            }
        }
        request.numBuffersLeft -= numBuffersReturned;
        camera_metadata_ro_entry_t entry;
        res = find_camera_metadata_ro_entry(result->result,
                ANDROID_SENSOR_TIMESTAMP, &entry);
        if (res == OK && entry.count == 1) {
            request.sensorTimestamp = entry.data.i64[0];
        }
        // If the shutter event has not been received yet, append the output
        // buffers to the in-flight request; otherwise, return them to their streams.
        if (shutterTimestamp == 0) {
            request.pendingOutputBuffers.appendArray(result->output_buffers,
                result->num_output_buffers);
        } else {
            /* Here the buffers are returned to the Surface for display */
            returnOutputBuffers(result->output_buffers,
                result->num_output_buffers, shutterTimestamp);
        }

        if (result->result != NULL && !isPartialResult) {
            if (shutterTimestamp == 0) {
                request.pendingMetadata = result->result;
                request.partialResult.collectedResult = collectedPartialResult;
            } else {
                CameraMetadata metadata;
                metadata = result->result;
                /* Return the result to the APP */
                sendCaptureResult(metadata, request.resultExtras,
                    collectedPartialResult, frameNumber, hasInputBufferInRequest,
                    request.aeTriggerCancelOverride);
            }
        }
        removeInFlightRequestIfReadyLocked(idx);
    } // scope for mInFlightLock

    if (result->input_buffer != NULL) {
        if (hasInputBufferInRequest) {
            Camera3Stream *stream =
                Camera3Stream::cast(result->input_buffer->stream);
            res = stream->returnInputBuffer(*(result->input_buffer));
            // Note: stream may be deallocated at this point, if this buffer was the
            // last reference to it.
            if (res != OK) {
                ALOGE("%s: RequestThread: Can't return input buffer for frame %d to"
                      "  its stream:%s (%d)",  __FUNCTION__,
                      frameNumber, strerror(-res), res);
            }
        } 
    }
}

returnOutputBuffers first obtains the Camera3Stream object and then calls its returnBuffer method.

13.Camera3Device.cpp–>returnOutputBuffers()

Camera3Device::processCaptureResult() calls the returnOutputBuffers() function, whose call chain is as follows:

Camera3Device::processCaptureResult() ---> stream->returnBuffer()
    Camera3Stream::returnBuffer() ---> returnBufferLocked()
    	Camera3OutputStream::returnBufferLocked() ---> returnAnyBufferLocked( , , true)
    		Camera3IOStreamBase::returnAnyBufferLocked() ---> returnBufferCheckedLocked()
    			Camera3OutputStream::returnBufferCheckedLocked()

void Camera3Device::returnOutputBuffers(
        const camera3_stream_buffer_t *outputBuffers, size_t numBuffers,
        nsecs_t timestamp) {
    for (size_t i = 0; i < numBuffers; i++)
    {
        Camera3Stream *stream = Camera3Stream::cast(outputBuffers[i].stream);
        status_t res = stream->returnBuffer(outputBuffers[i], timestamp);
        // Note: the stream may be deallocated at this point if this buffer
        // was the last reference to it.
        if (res != OK) {
            ALOGE("Can't return buffer to its stream: %s (%d)",
                strerror(-res), res);
        }
    }
}

Here returnBufferLocked is called to continue returning the buffer.

14.Camera3Stream.cpp–>returnBuffer()

status_t Camera3Stream::returnBuffer(const camera3_stream_buffer &buffer,
        nsecs_t timestamp) {
    ATRACE_CALL();
    Mutex::Autolock l(mLock);
    /**
     * TODO: Check that the state is valid first.
     *
     * <HAL3.2: IN_CONFIG and IN_RECONFIG in addition to CONFIGURED.
     * >=HAL3.2: CONFIGURED only.
     *
     * Do this for getBuffer as well.
     */
    status_t res = returnBufferLocked(buffer, timestamp);
    if (res == OK) {
        fireBufferListenersLocked(buffer, /*acquired*/false, /*output*/true);
    }

    // Even if the buffer return failed, we still want to signal whoever is
    // waiting for the buffer to be returned.
    mOutputBufferReturnedSignal.signal();

    return res;
}

The Camera3OutputStream object was created in **《Camera2 Preview Flow Analysis (Part 1)》**. Here the returnAnyBufferLocked function is called.

15.Camera3OutputStream.cpp–>returnBufferLocked()

status_t Camera3OutputStream::returnBufferLocked(
        const camera3_stream_buffer &buffer,
        nsecs_t timestamp) {
    ATRACE_CALL();

    status_t res = returnAnyBufferLocked(buffer, timestamp, /*output*/true);

    if (res != OK) {
        return res;
    }

    mLastTimestamp = timestamp;

    return OK;
}

16.Camera3IOStreamBase.cpp–>returnAnyBufferLocked()

The method to focus on here is returnBufferCheckedLocked.

status_t Camera3IOStreamBase::returnAnyBufferLocked(
        const camera3_stream_buffer &buffer,
        nsecs_t timestamp,
        bool output) {
    status_t res;

    // returnBuffer may be called from a raw pointer, not a sp<>, and we'll be
    // decrementing the internal refcount next. In case this is the last ref, we
    // might get destructed on the decStrong(), so keep an sp around until the
    // end of the call - otherwise have to sprinkle the decStrong on all exit
    // points.
    sp<Camera3IOStreamBase> keepAlive(this);
    decStrong(this);

    if ((res = returnBufferPreconditionCheckLocked()) != OK) {
        return res;
    }

    sp<Fence> releaseFence;
    res = returnBufferCheckedLocked(buffer, timestamp, output,
                                    &releaseFence);
    // Res may be an error, but we still want to decrement our owned count
    // to enable clean shutdown. So we'll just return the error but otherwise
    // carry on

    if (releaseFence != 0) {
        mCombinedFence = Fence::merge(mName, mCombinedFence, releaseFence);
    }

    if (output) {
        mHandoutOutputBufferCount--;
    }

    mHandoutTotalBufferCount--;
    if (mHandoutTotalBufferCount == 0 && mState != STATE_IN_CONFIG &&
            mState != STATE_IN_RECONFIG && mState != STATE_PREPARING) {
        /**
         * Avoid a spurious IDLE->ACTIVE->IDLE transition when using buffers
         * before/after register_stream_buffers during initial configuration
         * or re-configuration, or during prepare pre-allocation
         */
        ALOGV("%s: Stream %d: All buffers returned; now idle", __FUNCTION__,
                mId);
        sp<StatusTracker> statusTracker = mStatusTracker.promote();
        if (statusTracker != 0) {
            statusTracker->markComponentIdle(mStatusId, mCombinedFence);
        }
    }

    mBufferReturnedSignal.signal();

    if (output) {
        mLastTimestamp = timestamp;
    }

    return res;
}

Looking back at **《Camera2 Preview Flow Analysis (Part 2)》**: this is where the consumer queues the buffer (queueBuffer) and the Camera frame truly starts to be consumed.

17.Camera3OutputStream.cpp–>returnBufferCheckedLocked()

File: frameworks/av/services/camera/libcameraservice/device3/Camera3OutputStream.cpp

status_t Camera3OutputStream::returnBufferCheckedLocked(
            const camera3_stream_buffer &buffer,
            nsecs_t timestamp,
            bool output,
            /*out*/
            sp<Fence> *releaseFenceOut) {

    (void)output;
    status_t res;
    // Fence management - always honor release fence from HAL
    sp<Fence> releaseFence = new Fence(buffer.release_fence);
    int anwReleaseFence = releaseFence->dup();

    /**
     * Release the lock briefly to avoid a deadlock with
     * StreamingProcessor::startStream -> Camera3Stream::isConfiguring
     * during queueBuffer (this thread will enter
     * StreamingProcessor::onFrameAvailable)
     */
     /* mConsumer is the parameter passed in when the Camera3OutputStream
     * instance was created, and that instance was created in
     * Camera3Device::createStream() (see the earlier sections); mConsumer
     * is the Surface handed down when the stream was created */

    sp<ANativeWindow> currentConsumer = mConsumer;
    mLock.unlock();

    /**
     * Return the buffer to the ANativeWindow
     */
    if (buffer.status == CAMERA3_BUFFER_STATUS_ERROR) {
        // cancel the buffer
        res = currentConsumer->cancelBuffer(currentConsumer.get(),
                container_of(buffer.buffer, ANativeWindowBuffer, handle),
                anwReleaseFence);
    } else {
        if (mTraceFirstBuffer && (stream_type == CAMERA3_STREAM_OUTPUT)) {
            {
                char traceLog[48];
                snprintf(traceLog, sizeof(traceLog), "Stream %d: first full buffer\n", mId);
                ATRACE_NAME(traceLog);
            }
            mTraceFirstBuffer = false;
        }
        /* Certain consumers (such as AudioSource or HardwareComposer) use
         * MONOTONIC time, causing time misalignment if camera timestamp is
         * in BOOTTIME. Do the conversion if necessary. */
        res = native_window_set_buffers_timestamp(mConsumer.get(),
                mUseMonoTimestamp ? timestamp - mTimestampOffset : timestamp);
        /* Call queueBufferToConsumer() to hand the buffer back; by now the
         * buffer has been processed and filled by the ISP/HAL and is ready
         * for display */
        res = queueBufferToConsumer(currentConsumer,
                container_of(buffer.buffer, ANativeWindowBuffer, handle),
                anwReleaseFence);
    }
    mLock.lock();
    // Once a valid buffer has been returned to the queue, it can no longer
    // be dequeued all over again for pre-allocation.
    if (buffer.status != CAMERA3_BUFFER_STATUS_ERROR) {
        mStreamUnpreparable = true;
    }
    *releaseFenceOut = releaseFence;
    return res;
}

status_t Camera3OutputStream::queueBufferToConsumer(sp<ANativeWindow>& consumer,
            ANativeWindowBuffer* buffer, int anwReleaseFence) {
    /* Directly call the ANativeWindow object's queueBuffer method to hand the
     * buffer back. ANativeWindow is the graphics interface defined for OpenGL;
     * on Android it is implemented by Surface and SurfaceFlinger, one
     * producing buffers and the other consuming them.
     */
    return consumer->queueBuffer(consumer.get(), buffer, anwReleaseFence);
}

With this queueBuffer() done, the buffer can now be displayed through the Surface.
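
For reference, the producer cycle on an ANativeWindow looks roughly like the sketch below; this is a generic illustration, not the camera-specific wiring above:

#include <system/window.h>

/* Sketch: dequeue an empty buffer, fill it, queue it back. queueBuffer()
 * is what makes the frame visible to the consumer (SurfaceFlinger). */
static int produce_one_frame(ANativeWindow *win)
{
    ANativeWindowBuffer *buf = NULL;
    int fence_fd = -1;

    int rc = win->dequeueBuffer(win, &buf, &fence_fd);  /* get an empty buffer */
    if (rc != 0) {
        return rc;
    }
    /* ... wait on fence_fd, then fill buf->handle with image data ... */
    return win->queueBuffer(win, buf, -1);              /* hand it to the consumer */
}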

Camera3Device::sendCaptureResult()

Now that we have seen how the display buffer gets filled, let's look at how CameraService informs the APP through sendCaptureResult().

void Camera3Device::sendCaptureResult(CameraMetadata &pendingMetadata,
        CaptureResultExtras &resultExtras,
        CameraMetadata &collectedPartialResult,
        uint32_t frameNumber,
        bool reprocess,
        const std::vector<PhysicalCaptureResultInfo>& physicalMetadatas) {

    ...
    /* After captureResult has been filled in, the result is added to the
     * mResultQueue via insertResultLocked() */
    insertResultLocked(&captureResult, frameNumber);
}

void Camera3Device::insertResultLocked(CaptureResult *result,
        uint32_t frameNumber) {
    ...

    // Valid result, insert into queue
    List<CaptureResult>::iterator queuedResult =
            mResultQueue.insert(mResultQueue.end(), CaptureResult(*result));

    /* After inserting, raise the signal */
    mResultSignal.signal();
}

So who exactly receives the mResultSignal raised in Camera3Device::insertResultLocked()?

Searching the code shows that the FrameProcessorBase frame-processing thread waits for this signal. That thread is started in CameraDeviceClient::initializeImpl() of the CameraDeviceClient object (called during the open-camera phase). The wait side of the signal is sketched below.
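
The wait side sits in Camera3Device::waitForNextFrame(); roughly the following, as a sketch with error handling trimmed:

/* Sketch: block on mResultSignal until a result shows up in mResultQueue
 * or the timeout expires. */
status_t Camera3Device::waitForNextFrame(nsecs_t timeout) {
    Mutex::Autolock l(mOutputLock);

    while (mResultQueue.empty()) {
        status_t res = mResultSignal.waitRelative(mOutputLock, timeout);
        if (res == TIMED_OUT) {
            return res;               /* no result arrived within the timeout */
        } else if (res != OK) {
            return res;               /* unexpected wait error */
        }
    }
    return OK;                        /* a result is ready in mResultQueue */
}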

So what does the frame-processing thread do?

bool FrameProcessorBase::threadLoop() {
    status_t res;

    sp<CameraDeviceBase> device;
    {
        device = mDevice.promote();
        if (device == 0) return false;
    }

    /* This is where it waits for the mResultSignal */
    res = device->waitForNextFrame(kWaitDuration);
    if (res == OK) {
        /* process the frame */
        processNewFrames(device);
    }
    
    ...
        
    return true;
}

void FrameProcessorBase::processNewFrames(const sp<CameraDeviceBase> &device) {
    status_t res;
    CaptureResult result;

    /* Read the captureResult entries from the queue */
    while ( (res = device->getNextResult(&result)) == OK) {
        ...
        if (!processSingleFrame(result, device)) {
            break;
        }
        ...
    }
}

bool FrameProcessorBase::processSingleFrame(CaptureResult &result,
                                            const sp<CameraDeviceBase> &device) {

    return processListeners(result, device) == OK;
}
    
status_t FrameProcessorBase::processListeners(const CaptureResult &result,
        const sp<CameraDeviceBase> &device) {
    ...
    List<sp<FilteredListener> > listeners;
    {
        /* Collect the listeners into `listeners`. mRangeListeners is
         * registered through FrameProcessorBase::registerListener(): when the
         * FrameProcessorBase instance is created in
         * CameraDeviceClient::initializeImpl(), the CameraDeviceClient
         * registers itself as a listener in mRangeListeners */
        List<RangeListener>::iterator item = mRangeListeners.begin();
        while (item != mRangeListeners.end()) {
            ...
            listeners.push_back(listener);
        }
    }
    
    List<sp<FilteredListener> >::iterator item = listeners.begin();
    for (; item != listeners.end(); item++) {
        /* For each listener, onResultAvailable() will be invoked; here this
         * calls CameraDeviceClient::onResultAvailable() */
        (*item)->onResultAvailable(result);
    }
    return OK;
}
    
/** Device-related methods */
void CameraDeviceClient::onResultAvailable(const CaptureResult& result) {

    /* mRemoteCallback is the remoteCb passed in when CameraService created
     * the CameraDeviceClient instance, so it belongs to the Camera APP: it is
     * the CameraDeviceCallbacks inner-class object of CameraDeviceImpl */
    sp<hardware::camera2::ICameraDeviceCallbacks> remoteCb = mRemoteCallback;
    if (remoteCb != NULL) {
        remoteCb->onResultReceived(result.mMetadata, result.mResultExtras,
                result.mPhysicalMetadatas);
    }
}
