Android: Camera1 open, preview, and takePicture Flow Analysis

I. Camera Architecture

NOTE: This covers Android Camera API 1. The Camera architecture mirrors the overall Android architecture:
Framework : Camera.java
Android Runtime : android_hardware_Camera.cpp
Library :
         Client (Camera.cpp, ICameraClient.cpp, etc...)
         Server (CameraService.cpp, ICameraService.cpp, etc...)
HAL : CameraHardwareInterface.h

  Architecture diagram:

       

  As the architecture diagram shows, the Camera stack maps one-to-one onto the Android architecture: when the application calls a Camera method, the command passes in turn through the framework, the runtime, the native libraries, and the hardware abstraction layer, finally reaching the physical device. Once the device has acted, the resulting data travels back up through the same layers in reverse (a minimal application-level sketch of these calls follows the list below).
Note that the native library layer involves a client/server (C/S) structure:

  • Commands and data are exchanged through interaction between a client and a server.
  • Only the server side actually communicates with the HAL.
  • Because the client and the server run in different processes, they communicate via the Binder IPC mechanism.
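
  To make this top-down control flow concrete, the following is a minimal application-level sketch of the API 1 calls whose paths are traced in the rest of this article. The class name CameraFlowSketch and the SurfaceHolder parameter are illustrative only; error handling and permission checks are omitted.

import java.io.IOException;

import android.hardware.Camera;
import android.view.SurfaceHolder;

// Minimal sketch of the application-level calls analyzed below.
// Each call travels down Framework -> JNI -> libcamera_client -> cameraserver -> HAL,
// and results/data come back up through the same layers.
class CameraFlowSketch {
    void run(SurfaceHolder holder) throws IOException {
        Camera camera = Camera.open();        // open flow
        camera.setPreviewDisplay(holder);     // attach a surface for preview output
        camera.startPreview();                // startPreview flow
        // ...preview runs; frames and picture data flow back up via callbacks...
        camera.stopPreview();
        camera.release();                     // hand the device back to the camera service
    }
}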

  Relevant source locations (Android 7.1 source):

Application: (not the focus of this study)
    packages/apps/Camera2/src/com/android/camera/***

Framework:
    frameworks/base/core/java/android/hardware/Camera.java

Android Runtime:
    frameworks/base/core/jni/android_hardware_Camera.cpp

C/C++ Libraries:
    Client:
        frameworks/av/camera/CameraBase.cpp
        frameworks/av/camera/Camera.cpp
        frameworks/av/camera/ICamera.cpp
        frameworks/av/camera/aidl/android/hardware/ICamera.aidl
        frameworks/av/camera/aidl/android/hardware/ICameraClient.aidl
    Server:
        frameworks/av/camera/cameraserver/main_cameraserver.cpp
        frameworks/av/services/camera/libcameraservice/CameraService.cpp
        frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp
        frameworks/av/camera/aidl/android/hardware/ICameraService.aidl

HAL:
    HAL 1: (analyzed in this article)
        frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h
    HAL 3: (see: https://www.cnblogs.com/blogs-of-lxl/p/10651611.html )
        frameworks/av/services/camera/libcameraservice/device3/***

II. Camera Open Call Flow

 1. Camera.java (Framework layer): frameworks/base/core/java/android/hardware/Camera.java

/***    
* Creates a new Camera object to access 
* the first back-facing camera on the     
* device. If the device does not have a back-facing camera,
* this returns null.     
* @see #open(int)     
*/
public static Camera open() {
    int numberOfCameras = getNumberOfCameras();  // Get the number of camera devices.
    CameraInfo cameraInfo = new CameraInfo(); 
    for (int i = 0; i < numberOfCameras; i++) {
        getCameraInfo(i, cameraInfo); // Query each device's info in turn.
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) { // If the back-facing camera (the default) is found, construct the corresponding instance via new Camera(int).
            return new Camera(i);
        } 
    }        
    return null;    
}
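
  Using the same enumeration shown above, an application can also pick a specific camera itself, for example the front-facing one. A small sketch (the class name FrontCameraOpener is illustrative):

import android.hardware.Camera;
import android.hardware.Camera.CameraInfo;

// Sketch: enumerate the cameras exactly as open() does, but select the front-facing one.
class FrontCameraOpener {
    static Camera openFrontCamera() {
        int numberOfCameras = Camera.getNumberOfCameras();
        CameraInfo cameraInfo = new CameraInfo();
        for (int i = 0; i < numberOfCameras; i++) {
            Camera.getCameraInfo(i, cameraInfo);
            if (cameraInfo.facing == CameraInfo.CAMERA_FACING_FRONT) {
                return Camera.open(i);   // goes through the same Camera(int) constructor
            }
        }
        return null;                     // no front-facing camera on this device
    }
}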

  Camera(int cameraId)

/** used by Camera#open, Camera#open(int) */    
Camera(int cameraId) {        
    int err = cameraInitNormal(cameraId); // Initialize the specified camera by calling cameraInitNormal(cameraId). 
    if (checkInitErrors(err)) {            
        if (err == -EACCES) {                
            throw new RuntimeException("Fail to connect to camera service");            
        } else if (err == -ENODEV) {                
            throw new RuntimeException("Camera initialization failed");            
        }            
    // Should never hit this.            
    throw new RuntimeException("Unknown camera error");        
    }    
}

  cameraInitNormal(int cameraId)

private int cameraInitNormal(int cameraId) {        
    return cameraInitVersion(cameraId, 
            CAMERA_HAL_API_VERSION_NORMAL_CONNECT); // Call cameraInitVersion(int cameraId, int halVersion) with the halVersion argument specified.
}

  cameraInitVersion(int cameraId, int halVersion)

private int cameraInitVersion(int cameraId,
                              int halVersion) {      
    // Clear all the callbacks
    mShutterCallback = null;        
    mRawImageCallback = null;        
    mJpegCallback = null;        
    mPreviewCallback = null;        
    mPostviewCallback = null;        
    mUsingPreviewAllocation = false;        
    mZoomListener = null;        

    Looper looper;    // After instantiating the event handler with a Looper, native_setup is called to enter the JNI (Java Native Interface) library and invoke the corresponding native function.
    if ((looper = Looper.myLooper()) != null) {            
        mEventHandler = new EventHandler(this, looper);        
    } else if ((looper = Looper.getMainLooper()) != null) {            
        mEventHandler = new EventHandler(this, looper);        
    } else {            
        mEventHandler = null;        
    }        
    return native_setup(new WeakReference<Camera>(this),
                cameraId, halVersion,
                ActivityThread.currentOpPackageName());    
}
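
  A practical consequence of the Looper-based EventHandler set up above: Camera callbacks are dispatched on the event loop of the thread that opened the camera. The sketch below (class name BackgroundCameraOpener is illustrative) opens the camera from a HandlerThread so that callbacks are not delivered on the UI thread:

import android.hardware.Camera;
import android.os.Handler;
import android.os.HandlerThread;

// Sketch: since the EventHandler is bound to the opening thread's Looper,
// opening the camera on a HandlerThread makes callbacks arrive on that thread.
class BackgroundCameraOpener {
    private final HandlerThread cameraThread = new HandlerThread("CameraBackground");
    private Camera camera;

    void openAsync() {
        cameraThread.start();
        new Handler(cameraThread.getLooper()).post(new Runnable() {
            @Override public void run() {
                camera = Camera.open();  // the EventHandler now uses this thread's Looper
            }
        });
    }
}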

  At this point, the open() call enters the Android Runtime layer.

  Summary diagram:

  

 

  2. android_hardware_Camera.cpp (Android Runtime): frameworks/base/core/jni/android_hardware_Camera.cpp

// connect to camera service
static jint android_hardware_Camera_native_setup(JNIEnv *env,
    jobject thiz, jobject weak_this,
    jint cameraId, jint halVersion,
    jstring clientPackageName)
{
    // convert jstring to String16 (clientPackageName -> clientName)  // First, clientPackageName is converted into clientName
    ......
    ......

    sp<Camera> camera;  // Create a strong pointer (sp) to a Camera, then connect the client to the server via Camera::connect() or Camera::connectLegacy(), which returns the corresponding Camera instance
    if (halVersion == CAMERA_HAL_API_VERSION_NORMAL_CONNECT) {
        /***** NOTE THIS *****/
        // Default path: hal version is don't care, do normal camera connect.
        camera = Camera::connect(cameraId, clientName, // Enter the C/S structure of the C/C++ libraries; the Camera class belongs to the client side
                Camera::USE_CALLING_UID, 
                Camera::USE_CALLING_PID);
    } else {
        jint status = Camera::connectLegacy(cameraId,
                halVersion, clientName,
                Camera::USE_CALLING_UID, camera);
        if (status != NO_ERROR) {
            return status;
        }
    }

    if (camera == NULL) {
        return -EACCES;
    }

  // Finally, perform some basic checks on the returned instance and save the context
    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        return NO_INIT;
    }

    // save context in opaque field
    ......
    ......
}

  Summary diagram:

  

 


 3. Camera (C/C++ Libraries):

  (1) frameworks/av/include/camera/Camera.h

template <>
struct CameraTraits<Camera>
{
    typedef CameraListener                     TCamListener;
    typedef ::android::hardware::ICamera       TCamUser;
    typedef ::android::hardware::ICameraClient TCamCallbacks;
    typedef ::android::binder::Status(::android::hardware::ICameraService::*TCamConnectService)
        (const sp<::android::hardware::ICameraClient>&,
        int, const String16&, int, int,
        /*out*/
        sp<::android::hardware::ICamera>*);
    static TCamConnectService     fnConnectService;
};

  (2) frameworks/av/camera/Camera.cpp

CameraTraits<Camera>::TCamConnectService CameraTraits<Camera>::fnConnectService =  
    &::android::hardware::ICameraService::connect; // Note: fnConnectService is bound to the ICameraService::connect function

    Camera::connect :

sp<Camera> Camera::connect(int cameraId, 
    const String16& clientPackageName,
    int clientUid, int clientPid)
{
    return CameraBaseT::connect(cameraId,  
        clientPackageName, clientUid, clientPid); // Simply calls CameraBaseT::connect(), a function defined in CameraBase.cpp.
}

  (3) frameworks/av/include/camera/CameraBase.h

template <typename TCam, typename TCamTraits = CameraTraits<TCam> >  // TCam corresponds to Camera; TCamTraits corresponds to CameraTraits<Camera>

    Note the class member declarations: CameraBaseT corresponds to CameraBase<Camera>

sp<TCamUser>                     mCamera;
status_t                         mStatus;
sp<TCamListener>                 mListener;
const int                        mCameraId;

/***** NOTE THIS *****/    
typedef CameraBase<TCam> CameraBaseT;

  (4) frameworks/av/camera/CameraBase.cpp

template <typename TCam, typename TCamTraits>
sp<TCam> CameraBase<TCam, TCamTraits>::connect(int cameraId,
           const String16& clientPackageName,
           int clientUid, int clientPid)
{
    ALOGV("%s: connect", __FUNCTION__);
    /***** NOTE THIS *****/
    sp<TCam> c = new TCam(cameraId); // Instantiate a Camera, from which the ICameraClient pointer is obtained.
    sp<TCamCallbacks> cl = c;
    const sp<::android::hardware::ICameraService> cs = getCameraService();

    binder::Status ret;
    if (cs != nullptr) {
        /***** NOTE THIS *****/
        TCamConnectService fnConnectService = TCamTraits::fnConnectService; // The ICameraService was obtained above via getCameraService().
        ret = (cs.get()->*fnConnectService)(cl, cameraId, 
                  clientPackageName, clientUid,
                  clientPid, /*out*/ &c->mCamera); // Obtain mCamera, an ICamera instance, via ICameraService::connect().
    }
    if (ret.isOk() && c->mCamera != nullptr) {
        /***** NOTE THIS *****/
        IInterface::asBinder(c->mCamera)->linkToDeath(c); // Associate the ICamera instance with its binder (death notification).
        c->mStatus = NO_ERROR;
    } else {
        ALOGW("An error occurred while connecting to camera %d: %s", cameraId,
            (cs != nullptr) ? "Service not available" : ret.toString8().string());
        c.clear();
    }
    return c;
}
// establish binder interface to camera service
template <typename TCam, typename TCamTraits>
const sp<::android::hardware::ICameraService> CameraBase<TCam, TCamTraits>::getCameraService()
{
    Mutex::Autolock _l(gLock);

    /***** NOTE THIS *****/
    if (gCameraService.get() == 0) { // gCameraService is an ICameraService; if it has already been obtained, it is simply returned at the end, otherwise it is fetched below.
        char value[PROPERTY_VALUE_MAX];
        property_get("config.disable_cameraservice", value, "0");
        if (strncmp(value, "0", 2) != 0 && strncasecmp(value, "false", 6) != 0) {
            return gCameraService; // camera service disabled by the property; return the (null) gCameraService
        }
      
        /***** NOTE THIS *****/
    // Otherwise, obtain an ICameraService through the IServiceManager; the underlying exchange happens over an IBinder.
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16(kCameraServiceName));
            if (binder != 0) {
                break;
            }
            ALOGW("CameraService not published, waiting...");
            usleep(kCameraServicePollDelay);
        } while(true);

        if (gDeathNotifier == NULL) {
            gDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(gDeathNotifier);
        /***** NOTE THIS *****/
        gCameraService = interface_cast<::android::hardware::ICameraService>(binder);
    }
    ALOGE_IF(gCameraService == 0, "no CameraService!?");
    return gCameraService;
}

 4. ICameraService:

  • This part looks at some of the internal logic of the Binder communication.
  • The connect called from CameraBase actually corresponds to CameraService::connect().

 (1) frameworks/av/camera/aidl/android/hardware/ICameraService.aidl

/**
 * Open a camera device through the old camera API
 */
ICamera connect(ICameraClient client,
        int cameraId,
        String opPackageName,
        int clientUid, int clientPid);

 (2) out/target/product/generic/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/src/aidl/android/hardware/ICameraService.cpp

// This ICameraService.cpp and its header ICameraService.h are both auto-generated from the corresponding .aidl file.

::android::binder::Status BpCameraService::connect(const ::android::sp<::android::hardware::ICameraClient>& client,  
    int32_t cameraId, const ::android::String16& opPackageName, 
    int32_t clientUid, int32_t clientPid, 
    ::android::sp<::android::hardware::ICamera>* _aidl_return) 
 // This is BpCameraService, which inherits from ICameraService as well as from BpInterface.
{
::android::Parcel _aidl_data; // A Parcel is the message carrier in Binder IPC; the request data is written into it below
::android::Parcel _aidl_reply; // The reply Parcel is checked afterwards for errors
::android::status_t _aidl_ret_status = ::android::OK;
::android::binder::Status _aidl_status;
_aidl_ret_status = _aidl_data.writeInterfaceToken(getInterfaceDescriptor()); // First write the interface token.

/***** NOTE THIS *****/
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
_aidl_ret_status = _aidl_data.writeStrongBinder(::android::hardware::ICameraClient::asBinder(client));
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
_aidl_ret_status = _aidl_data.writeInt32(cameraId);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
_aidl_ret_status = _aidl_data.writeString16(opPackageName);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
_aidl_ret_status = _aidl_data.writeInt32(clientUid);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
_aidl_ret_status = _aidl_data.writeInt32(clientPid);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}

/***** NOTE THIS *****/
_aidl_ret_status = remote()->transact(ICameraService::CONNECT, _aidl_data, &_aidl_reply); 
// Call transact() on the remote() proxy to send the request to the server.
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
_aidl_ret_status = _aidl_status.readFromParcel(_aidl_reply);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
if (!_aidl_status.isOk()) {
return _aidl_status;
}
_aidl_ret_status = _aidl_reply.readStrongBinder(_aidl_return);
if (((_aidl_ret_status) != (::android::OK))) {
goto _aidl_error;
}
_aidl_error:
_aidl_status.setFromStatusT(_aidl_ret_status);
return _aidl_status;
}

  BnCameraService::onTransact(): the message-handling function

case Call::CONNECT:
{
::android::sp<::android::hardware::ICameraClient> in_client;
int32_t in_cameraId;
::android::String16 in_opPackageName;
int32_t in_clientUid;
int32_t in_clientPid;
/***** NOTE THIS *****/
::android::sp<::android::hardware::ICamera> _aidl_return;

if (!(_aidl_data.checkInterface(this))) {
_aidl_ret_status = ::android::BAD_TYPE;
break;
}

// Read the data sent from the Bp (proxy) side
_aidl_ret_status = _aidl_data.readStrongBinder(&in_client);
if (((_aidl_ret_status) != (::android::OK))) {
break;
}
_aidl_ret_status = _aidl_data.readInt32(&in_cameraId);
if (((_aidl_ret_status) != (::android::OK))) {
break;
}
_aidl_ret_status = _aidl_data.readString16(&in_opPackageName);
if (((_aidl_ret_status) != (::android::OK))) {
break;
}
_aidl_ret_status = _aidl_data.readInt32(&in_clientUid);
if (((_aidl_ret_status) != (::android::OK))) {
break;
}
_aidl_ret_status = _aidl_data.readInt32(&in_clientPid);
if (((_aidl_ret_status) != (::android::OK))) {
break;
}

/***** NOTE THIS *****/
::android::binder::Status _aidl_status(connect(in_client, in_cameraId, in_opPackageName, in_clientUid, in_clientPid, &_aidl_return)); // Call the concrete connect() implementation to obtain the ICamera; it is written back to the reply below
_aidl_ret_status = _aidl_status.writeToParcel(_aidl_reply);
if (((_aidl_ret_status) != (::android::OK))) {
break;
}
if (!_aidl_status.isOk()) {
break;
}

/***** NOTE THIS *****/
_aidl_ret_status = _aidl_reply->writeStrongBinder(::android::hardware::ICamera::asBinder(_aidl_return));
if (((_aidl_ret_status) != (::android::OK))) {
break;
}
}
break;

 5. ICamera:

  (1) frameworks/av/camera/ICamera.cpp

virtual status_t connect(const sp<ICameraClient>& cameraClient) // connect() in the BpCamera (proxy) class
{
    Parcel data, reply;
    data.writeInterfaceToken(ICamera::getInterfaceDescriptor());
    data.writeStrongBinder(IInterface::asBinder(cameraClient));
    remote()->transact(CONNECT, data, &reply);
    return reply.readInt32();
}

  In the BnCamera class, the onTransact() function has the corresponding handling:

case CONNECT: {
        CHECK_INTERFACE(ICamera, data, reply);
        sp<ICameraClient> cameraClient = interface_cast<ICameraClient>(data.readStrongBinder());
        reply->writeInt32(connect(cameraClient));
        return NO_ERROR;
    } break;

  (2) frameworks/av/services/camera/libcameraservice/CameraService.cpp

Status CameraService::connect( const sp<ICameraClient>& cameraClient,
        int cameraId,
        const String16& clientPackageName,
        int clientUid,
        int clientPid,
        /*out*/
        sp<ICamera>* device) {

    ATRACE_CALL();
    Status ret = Status::ok();
    String8 id = String8::format("%d", cameraId);
    sp<Client> client = nullptr;
// The real implementation logic is in connectHelper()
    ret = connectHelper<ICameraClient,Client>(cameraClient, id,
            CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName, clientUid, clientPid, API_1,
            /*legacyMode*/ false, /*shimUpdateOnly*/ false,
            /*out*/client);

    if(!ret.isOk()) {
        logRejected(id, getCallingPid(), String8(clientPackageName),
                ret.toString8());
        return ret;
    }

    *device = client; // The client instance obtained is returned through *device
    return ret;
}

  (3) frameworks/av/services/camera/libcameraservice/CameraService.h (connectHelper())

sp<BasicClient> clientTmp = nullptr;
        std::shared_ptr<resource_policy::ClientDescriptor<String8, sp<BasicClient>>> partial;
        if ((err = handleEvictionsLocked(cameraId, 
            originalClientPid, effectiveApiLevel,
            IInterface::asBinder(cameraCb), clientName8, 
            /*out*/&clientTmp,
            /*out*/&partial)) != NO_ERROR) {
            /***** do something *****/
        }

        /***** NOTE THIS *****/
        if (clientTmp.get() != nullptr) { // If a client instance already exists (the API1 MediaRecorder case), return it directly
            // Handle special case for API1 MediaRecorder where the existing client is returned
            device = static_cast<CLIENT*>(clientTmp.get()); 
            return ret;
        }

        // give flashlight a chance to close devices if necessary.
        mFlashlight->prepareDeviceOpen(cameraId);

        // TODO: Update getDeviceVersion + HAL interface to use strings for Camera IDs
        int id = cameraIdToInt(cameraId);
        if (id == -1) {
            ALOGE("%s: Invalid camera ID %s, cannot get device version from HAL.", __FUNCTION__,
                    cameraId.string());
            return STATUS_ERROR_FMT(ERROR_ILLEGAL_ARGUMENT,
                    "Bad camera ID \"%s\" passed to camera open", cameraId.string());
        }

        int facing = -1;
        /***** NOTE THIS *****/
        int deviceVersion = getDeviceVersion(id, /*out*/&facing); // Get the deviceVersion, then call makeClient() to create a client.
        sp<BasicClient> tmp = nullptr;
        if(!(ret = makeClient(this, cameraCb, 
          clientPackageName, id, facing, clientPid,
          clientUid, getpid(), legacyMode, halVersion, 
          deviceVersion, effectiveApiLevel,
          /*out*/&tmp)).isOk()) {
            return ret;
        }
        client = static_cast<CLIENT*>(tmp.get());

        LOG_ALWAYS_FATAL_IF(client.get() == nullptr, "%s: CameraService in invalid state",
                __FUNCTION__);

        /***** NOTE THIS *****/
        if ((err = client->initialize(mModule)) != OK) { 
        // Call its initialize() function; note that the argument passed in is mModule, the key link between the libraries layer and the HAL.
            /***** do something *****/
        }

        // Update shim paremeters for legacy clients
        if (effectiveApiLevel == API_1) {
            // Assume we have always received a Client subclass for API1
            sp<Client> shimClient = reinterpret_cast<Client*>(client.get());
            String8 rawParams = shimClient->getParameters();
            CameraParameters params(rawParams);

            auto cameraState = getCameraState(cameraId);
            if (cameraState != nullptr) {
                cameraState->setShimParams(params);
            } else {
                ALOGE("%s: Cannot update shim parameters for camera %s, no such device exists.",
                        __FUNCTION__, cameraId.string());
            }
        }

        if (shimUpdateOnly) {
            // If only updating legacy shim parameters, immediately disconnect client
            mServiceLock.unlock();
            client->disconnect();
            mServiceLock.lock();
        } else {
            // Otherwise, add client to active clients list
            finishConnectLocked(client, partial);
        }
    } // lock is destroyed, allow further connect calls

    // Important: release the mutex here so the client can call back into the service from its
    // destructor (can be at the end of the call)
    device = client;

  (4) frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp

status_t CameraClient::initialize(CameraModule *module) {
    int callingPid = getCallingPid();
    status_t res;

    LOG1("CameraClient::initialize E (pid %d, id %d)", callingPid, mCameraId);

    // Verify ops permissions
    res = startCameraOps();
    if (res != OK) {
        return res;
    }

    char camera_device_name[10];
    snprintf(camera_device_name, sizeof(camera_device_name), "%d", mCameraId);

    /***** NOTE THIS *****/
    mHardware = new CameraHardwareInterface(camera_device_name); // Create the CameraHardwareInterface instance.
    res = mHardware->initialize(module);
    if (res != OK) {
        ALOGE("%s: Camera %d: unable to initialize device: %s (%d)",
                __FUNCTION__, mCameraId, strerror(-res), res);
        mHardware.clear();
        return res;
    }

    // Set the three callbacks (closely tied to the data flow)
    mHardware->setCallbacks(notifyCallback,
            dataCallback,
            dataCallbackTimestamp,
            (void *)(uintptr_t)mCameraId);

    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
                  CAMERA_MSG_PREVIEW_METADATA | CAMERA_MSG_FOCUS_MOVE);

    LOG1("CameraClient::initialize X (pid %d, id %d)", callingPid, mCameraId);
    return OK;
}

 6. HAL: frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h

status_t initialize(CameraModule *module)
{
    ALOGI("Opening camera %s", mName.string());
    camera_info info;
    status_t res = module->getCameraInfo(atoi(mName.string()), &info); // Use the module to call into the HAL library and fetch the camera device info
    if (res != OK) {
        return res;
    }

    int rc = OK;
  // Depending on the module API version, choose between open and openLegacy; after open, the HAL library interacts with the Linux kernel
    if (module->getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_3 &&
        info.device_version > CAMERA_DEVICE_API_VERSION_1_0) {
        // Open higher version camera device as HAL1.0 device.
        rc = module->openLegacy(mName.string(),
                                 CAMERA_DEVICE_API_VERSION_1_0,
                                 (hw_device_t **)&mDevice);
    } else {
        rc = module->open(mName.string(), (hw_device_t **)&mDevice);
    }
    if (rc != OK) {
        ALOGE("Could not open camera %s: %d", mName.string(), rc);
        return rc;
    }
    initHalPreviewWindow();
    return rc;
}

   This completes the analysis of the Camera1 open call flow.

III. Camera hw_get_module() Logic

  This section starts from the hw_get_module() function to explore how the libraries layer calls into the HAL libraries. CameraService is started at boot and, at that point, calls a member function named onFirstRef(); the analysis below starts from there.

   1. CameraService: frameworks/av/services/camera/libcameraservice/CameraService.cpp

BnCameraService::onFirstRef(); // First call the base class's onFirstRef()

    // Update battery life tracking if service is restarting
    BatteryNotifier& notifier(BatteryNotifier::getInstance()); 
   // Update the notifier (judging from the name BatteryNotifier this is battery related; presumably the flashlight or camera is not enabled when the battery is too low).
    notifier.noteResetCamera();
    notifier.noteResetFlashlight();

    camera_module_t *rawModule;
    /*** NOTE THIS ***/
    int err = hw_get_module(CAMERA_HARDWARE_MODULE_ID, 
            (const hw_module_t **)&rawModule);
    // Obtain rawModule via the hw_get_module() function.
    if (err < 0) {
        ALOGE("Could not load camera HAL module: %d (%s)", err, strerror(-err));
        logServiceError("Could not load camera HAL module", err);
        return;
    }

    /*** NOTE THIS ***/
    mModule = new CameraModule(rawModule); // Use rawModule to construct mModule, an instance of the CameraModule class.
    err = mModule->init();

 2. hardware:

  (1) hardware/libhardware/include/hardware/hardware.h

/**
 * Name of the hal_module_info
 */
#define HAL_MODULE_INFO_SYM         HMI

/**
 * Name of the hal_module_info as a string
 */
#define HAL_MODULE_INFO_SYM_AS_STR  "HMI"
/**
 * Get the module info associated with a module by id.
 *
 * @return: 0 == success, <0 == error and *module == NULL
 */ 
int hw_get_module(const char *id, const struct hw_module_t **module); 
// Gets the module information for the given id (returns 0 on success; on error the return value is < 0 and *module == NULL)

/**
 * Get the module info associated with a module instance by class 'class_id'
 * and instance 'inst'.
 *
 * Some modules types necessitate multiple instances. For example audio supports
 * multiple concurrent interfaces and thus 'audio' is the module class
 * and 'primary' or 'a2dp' are module interfaces. This implies that the files
 * providing these modules would be named audio.primary.<variant>.so and
 * audio.a2dp.<variant>.so
 *
 * @return: 0 == success, <0 == error and *module == NULL
 */
int hw_get_module_by_class(const char *class_id, const char *inst, 
                           const struct hw_module_t **module);

// Gets the information for a module instance by class_id and instance name.

  (2) hardware/libhardware/hardware.c

static const char *variant_keys[] = {
    "ro.hardware",  /* This goes first so that it can pick up a different
                       file on the emulator. */
    "ro.product.board",
    "ro.board.platform",
    "ro.arch"
};
int hw_get_module(const char *id, const struct hw_module_t **module)
{
   return hw_get_module_by_class(id, NULL, module);
// The library file is looked up by trying, in order: ro.hardware, ro.product.board, ro.board.platform, ro.arch, default
}

...
// The module is then loaded via the load() function
    /* First try a property specific to the class and possibly instance */
    snprintf(prop_name, sizeof(prop_name), "ro.hardware.%s", name);
    if (property_get(prop_name, prop, NULL) > 0) {
        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
            goto found;
        }
    }

    /* Loop through the configuration variants looking for a module */
    for (i=0 ; i<HAL_VARIANT_KEYS_COUNT; i++) {
        if (property_get(variant_keys[i], prop, NULL) == 0) {
            continue;
        }
        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
            goto found;
        }
    }

    /* Nothing found, try the default */
    if (hw_module_exists(path, sizeof(path), name, "default") == 0) {
        goto found;
    }

    return -ENOENT;

found:
    /* load the module, if this fails, we're doomed, and we should not try
     * to load a different variant. */
    /*** NOTE THIS ***/
    return load(class_id, path, module);

  load()

  NOTE:
        To locate the structure inside the shared library, a symbol string sym is used; it corresponds to the macro HAL_MODULE_INFO_SYM_AS_STR, i.e. "HMI".
        A shared library (.so file) is an ELF file.
        ELF: Executable and Linkable Format; the ELF header acts as a road map describing how the file is organized.
        With the readelf -s command we can inspect the .so file's symbol table and find a symbol whose Name is HMI; its location holds the hw_module_t structure we need.
        So through the HMI symbol, the structure can be read out of the shared library, which is what allows the libraries layer to call the functions provided by the HAL.

static int load(const char *id,
        const char *path,
        const struct hw_module_t **pHmi)
{
    int status = -EINVAL;
    void *handle = NULL;
    struct hw_module_t *hmi = NULL;

    /*
     * load the symbols resolving undefined symbols before
     * dlopen returns. Since RTLD_GLOBAL is not or'd in with
     * RTLD_NOW the external symbols will not be global
     */
    /*** NOTE THIS ***/
    handle = dlopen(path, RTLD_NOW); // Call dlopen() to obtain a handle.
    if (handle == NULL) {
        char const *err_str = dlerror();
        ALOGE("load: module=%s\n%s", path, err_str?err_str:"unknown");
        status = -EINVAL;
        goto done;
    }

    /* Get the address of the struct hal_module_info. */
    /*** NOTE THIS ***/
    const char *sym = HAL_MODULE_INFO_SYM_AS_STR;
    hmi = (struct hw_module_t *)dlsym(handle, sym); // Call dlsym() to fetch hmi, a hw_module_t, from the shared library.
    if (hmi == NULL) {
        ALOGE("load: couldn't find symbol %s", sym);
        status = -EINVAL;
        goto done;
    }

    /* Check that the id matches */
    if (strcmp(id, hmi->id) != 0) {
        ALOGE("load: id=%s != hmi->id=%s", id, hmi->id);
        status = -EINVAL;
        goto done;
    }

    hmi->dso = handle;
        /* success */
    status = 0;

/*** NOTE THIS ***/
done:
    if (status != 0) {
        hmi = NULL;
        if (handle != NULL) {
            dlclose(handle);
            handle = NULL;
        }
    } else {
        ALOGV("loaded HAL id=%s path=%s hmi=%p handle=%p",
                id, path, *pHmi, handle);
    }

    *pHmi = hmi;

    return status;
}

  At this point the final rawModule has been obtained; we now return to onFirstRef() and continue the analysis.

 3. CameraModule:

  (1) frameworks/av/services/camera/libcameraservice/common/CameraModule.cpp

CameraModule::CameraModule(camera_module_t *module) {
    if (module == NULL) {
        ALOGE("%s: camera hardware module must not be null", 
                __FUNCTION__);
        assert(0);
    }
    mModule = module; // mModule is of type camera_module_t.
}

  init()

  • Calls mModule's init() function.
  • If no module-specific init() function is provided, the init flow ends here.

int CameraModule::init() {
    ATRACE_CALL();
    int res = OK;
    if (getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_4 &&
            mModule->init != NULL) {
        ATRACE_BEGIN("camera_module->init");
        res = mModule->init();
        ATRACE_END();
    }
    mCameraInfoMap.setCapacity(getNumberOfCameras());
    return res;
}

  (2) Related code: 

hardware/libhardware/include/hardware/camera_common.h
hardware/qcom/camera/QCamera2/QCamera2Factory.h
hardware/qcom/camera/QCamera2/QCamera2Factory.cpp
hardware/qcom/camera/QCamera2/QCamera2Hal.cpp

 

  Summary diagram:

  

 

   Starting from CameraService::onFirstRef(), we have gradually worked out the call logic centered on hw_get_module(). In fact, the Android HAL layer has a common entry point, the macro HAL_MODULE_INFO_SYM, through which the module instance in the HAL can be obtained and the functions it provides can be called. With the HAL entry point understood, we can now analyze the control flow of Camera.startPreview() to further deepen the understanding of the Camera control flow.
 

IV. Camera.startPreview() Flow

 1. Frameworks: frameworks/base/core/java/android/hardware/Camera.java

/**
     * Starts capturing and drawing preview frames to the screen.
     * Preview will not actually start until a surface is supplied
     * with {@link #setPreviewDisplay(SurfaceHolder)} or
     * {@link #setPreviewTexture(SurfaceTexture)}.
     *
     * <p>If {@link #setPreviewCallback(Camera.PreviewCallback)},
     * {@link #setOneShotPreviewCallback(Camera.PreviewCallback)}, or
     * {@link #setPreviewCallbackWithBuffer(Camera.PreviewCallback)} were
     * called, {@link Camera.PreviewCallback#onPreviewFrame(byte[], Camera)}
     * will be called when preview data becomes available.
     */
    public native final void startPreview();  // Provides the interface to the upper-layer application; from here the call enters the Runtime layer.
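
  As the Javadoc above notes, once preview is running the application can receive the frames through Camera.PreviewCallback. A minimal sketch of the buffer-based variant (it assumes the camera is already open and a preview surface has been set; previewBufferSize would typically be width * height * 3 / 2 for the default NV21 format):

import android.hardware.Camera;

// Sketch: receive preview frames via the buffer-based preview callback.
class PreviewFrameReceiver implements Camera.PreviewCallback {
    void start(Camera camera, int previewBufferSize) {
        camera.addCallbackBuffer(new byte[previewBufferSize]);
        camera.setPreviewCallbackWithBuffer(this);
        camera.startPreview();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // data holds one preview frame (NV21 by default); process it, then
        // return the buffer so it can be reused for the next frame.
        camera.addCallbackBuffer(data);
    }
}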

 2. Android Runtime: frameworks/base/core/jni/android_hardware_Camera.cpp

static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz)
{
    ALOGV("startPreview");
    sp<Camera> camera = get_native_camera(env, thiz, NULL); // Call get_native_camera() to obtain a Camera instance.
    if (camera == 0) return;

    if (camera->startPreview() != NO_ERROR) {
        jniThrowRuntimeException(env, "startPreview failed");
        return;
    }
}
sp<Camera> get_native_camera(JNIEnv *env, jobject thiz, JNICameraContext** pContext)
{
    sp<Camera> camera;
    Mutex::Autolock _l(sLock);
    JNICameraContext* context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context)); 
// Fetch the Camera context previously saved in the Java object (from the VM).
    if (context != NULL) {
        camera = context->getCamera(); // Get the Camera instance from the context.
    }
    ALOGV("get_native_camera: context=%p, camera=%p", context, camera.get());
    if (camera == 0) {
        jniThrowRuntimeException(env,
                "Camera is being used after Camera.release() was called");
    }

    if (pContext != NULL) *pContext = context;
    return camera;
}

 3. Libraries:

  (1) frameworks/av/camera/Camera.cpp

// start preview mode
status_t Camera::startPreview()
{
    ALOGV("startPreview");
    sp <::android::hardware::ICamera> c = mCamera; 
    // mCamera is the CameraClient (ICamera) returned during connect; it is what concretely implements the startPreview() interface.
    if (c == 0) return NO_INIT;
    return c->startPreview();
}

  (2) frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp

// start preview mode
status_t CameraClient::startPreview() {
    LOG1("startPreview (pid %d)", getCallingPid());
    return startCameraMode(CAMERA_PREVIEW_MODE); // Enter the concrete implementation via startCameraMode().
}

  startCameraMode()

// start preview or recording
status_t CameraClient::startCameraMode(camera_mode mode) { // The CAMERA_PREVIEW_MODE argument determines which branch is taken.
    LOG1("startCameraMode(%d)", mode);
    Mutex::Autolock lock(mLock);
    status_t result = checkPidAndHardware();
    if (result != NO_ERROR) return result;

    switch(mode) {
        case CAMERA_PREVIEW_MODE:
            if (mSurface == 0 && mPreviewWindow == 0) {
                LOG1("mSurface is not set yet.");
                // still able to start preview in this case.
            }
            return startPreviewMode(); // Call startPreviewMode().
        case CAMERA_RECORDING_MODE:
            if (mSurface == 0 && mPreviewWindow == 0) {
                ALOGE("mSurface or mPreviewWindow must be set before startRecordingMode.");
                return INVALID_OPERATION;
            }
            return startRecordingMode();
        default:
            return UNKNOWN_ERROR;
    }
}

  startPreviewMode()

status_t CameraClient::startPreviewMode() {
    LOG1("startPreviewMode");
    status_t result = NO_ERROR;

    // if preview has been enabled, nothing needs to be done
    if (mHardware->previewEnabled()) { 
        return NO_ERROR; // If preview is already running, simply report success.
    }

    if (mPreviewWindow != 0) {
        mHardware->setPreviewScalingMode( 
        // mHardware is the CameraHardwareInterface instance, initialized at the end of the connect flow.
            NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW);
        mHardware->setPreviewTransform(mOrientation);
    }
    mHardware->setPreviewWindow(mPreviewWindow);  
    // Call the setPreviewWindow() and startPreview() interfaces through mHardware.
    result = mHardware->startPreview();
    if (result == NO_ERROR) {
        mCameraService->updateProxyDeviceState( // the startPreview() call above enters the HAL layer
            ICameraServiceProxy::CAMERA_STATE_ACTIVE,
            String8::format("%d", mCameraId));
    }
    return result;
}

 4. HAL:

  (1) frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h

/**
     * Returns true if preview is enabled.
     */
    int previewEnabled()
    {
        ALOGV("%s(%s)", __FUNCTION__, mName.string());
        if (mDevice->ops->preview_enabled) // mDevice is the device instance initialized via the hw_get_module() flow; its type is camera_device_t.
            return mDevice->ops->preview_enabled(mDevice); // Returns true if preview is enabled.
        return false;
    }

  setPreviewWindow()

/** Set the ANativeWindow to which preview frames are sent */
    status_t setPreviewWindow(const sp<ANativeWindow>& buf)
    {
        ALOGV("%s(%s) buf %p", __FUNCTION__, mName.string(), buf.get());
        if (mDevice->ops->set_preview_window) { // Continue the call downward through mDevice->ops
            mPreviewWindow = buf;
            if (buf != nullptr) {
                if (mPreviewScalingMode != NOT_SET) {
                    setPreviewScalingMode(mPreviewScalingMode);
                }
                if (mPreviewTransform != NOT_SET) {
                    setPreviewTransform(mPreviewTransform);
                }
            }
            mHalPreviewWindow.user = this;
            ALOGV("%s &mHalPreviewWindow %p mHalPreviewWindow.user %p", __FUNCTION__,
                    &mHalPreviewWindow, mHalPreviewWindow.user);
            return mDevice->ops->set_preview_window(mDevice,
                    buf.get() ? &mHalPreviewWindow.nw : 0);
        }
        return INVALID_OPERATION;
    }

  startPreview()

/**
   Regarding mDevice: combining the Camera.open() flow with the hw_get_module() logic, it works as follows:
        When CameraService starts, onFirstRef() initializes the module and obtains the module instance.
        During open, once the CameraClient has connected to the camera server, a CameraHardwareInterface is instantiated and initialized with that module instance.
        During that initialization, the module's open method yields a device instance, mDevice, which corresponds to the actual camera device.
        Through mDevice, commands can be delivered to the hardware device.
     */
    status_t startPreview()
    {
        ALOGV("%s(%s)", __FUNCTION__, mName.string());
        if (mDevice->ops->start_preview)
            return mDevice->ops->start_preview(mDevice);
        return INVALID_OPERATION;
    }

  (2) hardware/libhardware/include/hardware/camera.h

typedef struct camera_device { // This declares the camera_device_t we have been tracking.
    /**
     * camera_device.common.version must be in the range
     * HARDWARE_DEVICE_API_VERSION(0,0)-(1,FF). CAMERA_DEVICE_API_VERSION_1_0 is
     * recommended.
     */
    hw_device_t common;
    camera_device_ops_t *ops;
    void *priv;
} camera_device_t;

  struct camera_device_ops: the function pointers for all camera device operations are declared here.

typedef struct camera_device_ops {
    int (*set_preview_window)(struct camera_device *,
            struct preview_stream_ops *window);

    void (*set_callbacks)(struct camera_device *,
            camera_notify_callback notify_cb,
            camera_data_callback data_cb,
            camera_data_timestamp_callback data_cb_timestamp,
            camera_request_memory get_memory,
            void *user);

    void (*enable_msg_type)(struct camera_device *, int32_t msg_type);

    void (*disable_msg_type)(struct camera_device *, int32_t msg_type);

    int (*msg_type_enabled)(struct camera_device *, int32_t msg_type);

    /**
     * Start preview mode.
     */
    int (*start_preview)(struct camera_device *);

    void (*stop_preview)(struct camera_device *);

    int (*preview_enabled)(struct camera_device *);

    int (*store_meta_data_in_buffers)(struct camera_device *, int enable);

    int (*start_recording)(struct camera_device *);

    void (*stop_recording)(struct camera_device *);

    int (*recording_enabled)(struct camera_device *);

    void (*release_recording_frame)(struct camera_device *,
                    const void *opaque);

    int (*auto_focus)(struct camera_device *);

    int (*cancel_auto_focus)(struct camera_device *);

    int (*take_picture)(struct camera_device *);

    int (*cancel_picture)(struct camera_device *);

    int (*set_parameters)(struct camera_device *, const char *parms);

    char *(*get_parameters)(struct camera_device *);

    void (*put_parameters)(struct camera_device *, char *);

    int (*send_command)(struct camera_device *,
                int32_t cmd, int32_t arg1, int32_t arg2);

    void (*release)(struct camera_device *);

    int (*dump)(struct camera_device *, int fd);
} camera_device_ops_t;

  (3) hardware/ti/omap4-aah/camera/CameraHal_Module.cpp: during the open flow, the mapping of the ops function pointers is set up.

     memset(camera_device, 0, sizeof(*camera_device));
        memset(camera_ops, 0, sizeof(*camera_ops));

        camera_device->base.common.tag = HARDWARE_DEVICE_TAG;
        camera_device->base.common.version = 0;
        camera_device->base.common.module = (hw_module_t *)(module);
        camera_device->base.common.close = camera_device_close;
        camera_device->base.ops = camera_ops;

        camera_ops->set_preview_window = camera_set_preview_window;
        camera_ops->set_callbacks = camera_set_callbacks;
        camera_ops->enable_msg_type = camera_enable_msg_type;
        camera_ops->disable_msg_type = camera_disable_msg_type;
        camera_ops->msg_type_enabled = camera_msg_type_enabled;
        camera_ops->start_preview = camera_start_preview;
        camera_ops->stop_preview = camera_stop_preview;
        camera_ops->preview_enabled = camera_preview_enabled;
        camera_ops->store_meta_data_in_buffers = camera_store_meta_data_in_buffers;
        camera_ops->start_recording = camera_start_recording;
        camera_ops->stop_recording = camera_stop_recording;
        camera_ops->recording_enabled = camera_recording_enabled;
        camera_ops->release_recording_frame = camera_release_recording_frame;
        camera_ops->auto_focus = camera_auto_focus;
        camera_ops->cancel_auto_focus = camera_cancel_auto_focus;
        camera_ops->take_picture = camera_take_picture;
        camera_ops->cancel_picture = camera_cancel_picture;
        camera_ops->set_parameters = camera_set_parameters;
        camera_ops->get_parameters = camera_get_parameters;
        camera_ops->put_parameters = camera_put_parameters;
        camera_ops->send_command = camera_send_command;
        camera_ops->release = camera_release;
        camera_ops->dump = camera_dump;

        *device = &camera_device->base.common;

        // -------- TI specific stuff --------

        camera_device->cameraid = cameraid;

  camera_start_preview()

int camera_start_preview(struct camera_device * device)
{
    CAMHAL_LOG_MODULE_FUNCTION_NAME;

    int rv = -EINVAL;
    ti_camera_device_t* ti_dev = NULL;

    if(!device)
        return rv;

    ti_dev = (ti_camera_device_t*) device;

    rv = gCameraHals[ti_dev->cameraid]->startPreview(); // gCameraHals holds CameraHal * instances, indexed by camera id.

    return rv;
}

  (4) hardware/ti/omap4-aah/camera/CameraHal.cpp: maps the Camera Hardware Interface onto V4L2

status_t CameraHal::startPreview() {
    LOG_FUNCTION_NAME;

    status_t ret = cameraPreviewInitialization(); // First call cameraPreviewInitialization() to perform initialization.

    if (!mPreviewInitializationDone) return ret;

    mPreviewInitializationDone = false;

    if(mDisplayAdapter.get() != NULL) {
        CAMHAL_LOGDA("Enabling display");
        int width, height;
        mParameters.getPreviewSize(&width, &height);

#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
        ret = mDisplayAdapter->enableDisplay(width, height, &mStartPreview);
#else
        ret = mDisplayAdapter->enableDisplay(width, height, NULL);
#endif

        if ( ret != NO_ERROR ) {
            CAMHAL_LOGEA("Couldn't enable display");
            CAMHAL_ASSERT_X(false,
                "At this stage mCameraAdapter->mStateSwitchLock is still locked, "
                "deadlock is guaranteed");

            goto error;
        }
    }

    CAMHAL_LOGDA("Starting CameraAdapter preview mode");

    ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_PREVIEW); 
    // Send the CAMERA_START_PREVIEW command through the CameraAdapter; if it executes successfully, the flow is complete.

    if(ret!=NO_ERROR) {
        CAMHAL_LOGEA("Couldn't start preview w/ CameraAdapter");
        goto error;
    }
    CAMHAL_LOGDA("Started preview");

    mPreviewEnabled = true;
    mPreviewStartInProgress = false;
    return ret;

    error:

        CAMHAL_LOGEA("Performing cleanup after error");

        //Do all the cleanup
        freePreviewBufs();
        mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_PREVIEW);
        if(mDisplayAdapter.get() != NULL) {
            mDisplayAdapter->disableDisplay(false);
        }
        mAppCallbackNotifier->stop();
        mPreviewStartInProgress = false;
        mPreviewEnabled = false;
        LOG_FUNCTION_NAME_EXIT;

        return ret;
}

  cameraPreviewInitialization()

  • The code repeatedly uses mCameraAdapter->sendCommand() to issue commands and to fetch data.
  • Once a command reaches the corresponding adapter (e.g. the V4L adapter), the matching function is invoked to handle it.

/**
   @brief Set preview mode related initialization
          -> Camera Adapter set params
          -> Allocate buffers
          -> Set use buffers for preview
   @param none
   @return NO_ERROR
   @todo Update function header with the different errors that are possible
*/
status_t CameraHal::cameraPreviewInitialization()
{

    status_t ret = NO_ERROR;
    CameraAdapter::BuffersDescriptor desc;
    CameraFrame frame;
    unsigned int required_buffer_count;
    unsigned int max_queueble_buffers;

#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
        gettimeofday(&mStartPreview, NULL);
#endif

    LOG_FUNCTION_NAME;

    if (mPreviewInitializationDone) {
        return NO_ERROR;
    }

    if ( mPreviewEnabled ){
      CAMHAL_LOGDA("Preview already running");
      LOG_FUNCTION_NAME_EXIT;
      return ALREADY_EXISTS;
    }

    if ( NULL != mCameraAdapter ) {
      ret = mCameraAdapter->setParameters(mParameters); 
    // Set the relevant parameters through the adapter
    }

    if ((mPreviewStartInProgress == false) && (mDisplayPaused == false)){
      ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_RESOLUTION_PREVIEW,( int ) &frame);
      if ( NO_ERROR != ret ){
        CAMHAL_LOGEB("Error: CAMERA_QUERY_RESOLUTION_PREVIEW %d", ret);
        return ret;
      }

      ///Update the current preview width and height
      mPreviewWidth = frame.mWidth;
      mPreviewHeight = frame.mHeight;
    }

    ///If we don't have the preview callback enabled and display adapter,
    if(!mSetPreviewWindowCalled || (mDisplayAdapter.get() == NULL)){
      CAMHAL_LOGD("Preview not started. Preview in progress flag set");
      mPreviewStartInProgress = true;
      ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_SWITCH_TO_EXECUTING);
      if ( NO_ERROR != ret ){
        CAMHAL_LOGEB("Error: CAMERA_SWITCH_TO_EXECUTING %d", ret);
        return ret;
      }
      return NO_ERROR;
    }

    if( (mDisplayAdapter.get() != NULL) && ( !mPreviewEnabled ) && ( mDisplayPaused ) )
        {
        CAMHAL_LOGDA("Preview is in paused state");

        mDisplayPaused = false;
        mPreviewEnabled = true;
        if ( NO_ERROR == ret )
            {
            ret = mDisplayAdapter->pauseDisplay(mDisplayPaused);

            if ( NO_ERROR != ret )
                {
                CAMHAL_LOGEB("Display adapter resume failed %x", ret);
                }
            }
        //restart preview callbacks
        if(mMsgEnabled & CAMERA_MSG_PREVIEW_FRAME)
        {
            mAppCallbackNotifier->enableMsgType (CAMERA_MSG_PREVIEW_FRAME);
        }

        signalEndImageCapture();
        return ret;
        }

    required_buffer_count = atoi(mCameraProperties->get(CameraProperties::REQUIRED_PREVIEW_BUFS));

    ///Allocate the preview buffers
    ret = allocPreviewBufs(mPreviewWidth, mPreviewHeight, mParameters.getPreviewFormat(), required_buffer_count, max_queueble_buffers); // Allocate the preview buffers

    if ( NO_ERROR != ret )
        {
        CAMHAL_LOGEA("Couldn't allocate buffers for Preview");
        goto error;
        }

    if ( mMeasurementEnabled )
        {

        ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_BUFFER_SIZE_PREVIEW_DATA,
                                          ( int ) &frame,
                                          required_buffer_count);
        if ( NO_ERROR != ret )
            {
            return ret;
            }

         ///Allocate the preview data buffers
        ret = allocPreviewDataBufs(frame.mLength, required_buffer_count);
        if ( NO_ERROR != ret ) {
            CAMHAL_LOGEA("Couldn't allocate preview data buffers");
            goto error;
           }

        if ( NO_ERROR == ret )
            {
       // After the buffers have been allocated successfully, fill in the corresponding descriptor members.
            desc.mBuffers = mPreviewDataBuffers;
            desc.mOffsets = mPreviewDataOffsets;
            desc.mFd = mPreviewDataFd;
            desc.mLength = mPreviewDataLength;
            desc.mCount = ( size_t ) required_buffer_count;
            desc.mMaxQueueable = (size_t) required_buffer_count;

            mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW_DATA,
                                        ( int ) &desc);
            }

        }

    ///Pass the buffers to Camera Adapter
    desc.mBuffers = mPreviewBuffers;
    desc.mOffsets = mPreviewOffsets;
    desc.mFd = mPreviewFd;
    desc.mLength = mPreviewLength;
    desc.mCount = ( size_t ) required_buffer_count;
    desc.mMaxQueueable = (size_t) max_queueble_buffers;

    ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW,
                                      ( int ) &desc);

    if ( NO_ERROR != ret )
        {
        CAMHAL_LOGEB("Failed to register preview buffers: 0x%x", ret);
        freePreviewBufs();
        return ret;
        }

    ///Start the callback notifier
    ret = mAppCallbackNotifier->start(); // Start the callback notifier.

    if( ALREADY_EXISTS == ret )
        {
        //Already running, do nothing
        CAMHAL_LOGDA("AppCallbackNotifier already running");
        ret = NO_ERROR;
        }
    else if ( NO_ERROR == ret ) {
        CAMHAL_LOGDA("Started AppCallbackNotifier..");
        mAppCallbackNotifier->setMeasurements(mMeasurementEnabled);
        }
    else
        {
        CAMHAL_LOGDA("Couldn't start AppCallbackNotifier");
        goto error;
        }

    if (ret == NO_ERROR) mPreviewInitializationDone = true;
    // Hook the buffers up to the corresponding callbacks so the upper-layer app can obtain the preview data it needs.
    mAppCallbackNotifier->startPreviewCallbacks(mParameters, mPreviewBuffers, mPreviewOffsets, mPreviewFd, mPreviewLength, required_buffer_count);
    return ret;
    error:
        CAMHAL_LOGEA("Performing cleanup after error");
        //Do all the cleanup
        freePreviewBufs();
        mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_PREVIEW);
        if(mDisplayAdapter.get() != NULL)
            {
            mDisplayAdapter->disableDisplay(false);
            }
        mAppCallbackNotifier->stop();
        mPreviewStartInProgress = false;
        mPreviewEnabled = false;
        LOG_FUNCTION_NAME_EXIT;
        return ret;
}

  (5) hardware/ti/omap4-aah/camera/BaseCameraAdapter.cpp: each constant corresponds to a different command.

const LUT cameraCommandsUserToHAL[] = {
    { "CAMERA_START_PREVIEW",                   CameraAdapter::CAMERA_START_PREVIEW },
    { "CAMERA_STOP_PREVIEW",                    CameraAdapter::CAMERA_STOP_PREVIEW },
    { "CAMERA_START_VIDEO",                     CameraAdapter::CAMERA_START_VIDEO },
    { "CAMERA_STOP_VIDEO",                      CameraAdapter::CAMERA_STOP_VIDEO },
    { "CAMERA_START_IMAGE_CAPTURE",             CameraAdapter::CAMERA_START_IMAGE_CAPTURE },
    { "CAMERA_STOP_IMAGE_CAPTURE",              CameraAdapter::CAMERA_STOP_IMAGE_CAPTURE },
    { "CAMERA_PERFORM_AUTOFOCUS",               CameraAdapter::CAMERA_PERFORM_AUTOFOCUS },
    { "CAMERA_CANCEL_AUTOFOCUS",                CameraAdapter::CAMERA_CANCEL_AUTOFOCUS },
    { "CAMERA_PREVIEW_FLUSH_BUFFERS",           CameraAdapter::CAMERA_PREVIEW_FLUSH_BUFFERS },
    { "CAMERA_START_SMOOTH_ZOOM",               CameraAdapter::CAMERA_START_SMOOTH_ZOOM },
    { "CAMERA_STOP_SMOOTH_ZOOM",                CameraAdapter::CAMERA_STOP_SMOOTH_ZOOM },
    { "CAMERA_USE_BUFFERS_PREVIEW",             CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW },
    { "CAMERA_SET_TIMEOUT",                     CameraAdapter::CAMERA_SET_TIMEOUT },
    { "CAMERA_CANCEL_TIMEOUT",                  CameraAdapter::CAMERA_CANCEL_TIMEOUT },
    { "CAMERA_START_BRACKET_CAPTURE",           CameraAdapter::CAMERA_START_BRACKET_CAPTURE },
    { "CAMERA_STOP_BRACKET_CAPTURE",            CameraAdapter::CAMERA_STOP_BRACKET_CAPTURE },
    { "CAMERA_QUERY_RESOLUTION_PREVIEW",        CameraAdapter::CAMERA_QUERY_RESOLUTION_PREVIEW },
    { "CAMERA_QUERY_BUFFER_SIZE_IMAGE_CAPTURE", CameraAdapter::CAMERA_QUERY_BUFFER_SIZE_IMAGE_CAPTURE },
    { "CAMERA_QUERY_BUFFER_SIZE_PREVIEW_DATA",  CameraAdapter::CAMERA_QUERY_BUFFER_SIZE_PREVIEW_DATA },
    { "CAMERA_USE_BUFFERS_IMAGE_CAPTURE",       CameraAdapter::CAMERA_USE_BUFFERS_IMAGE_CAPTURE },
    { "CAMERA_USE_BUFFERS_PREVIEW_DATA",        CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW_DATA },
    { "CAMERA_TIMEOUT_EXPIRED",                 CameraAdapter::CAMERA_TIMEOUT_EXPIRED },
    { "CAMERA_START_FD",                        CameraAdapter::CAMERA_START_FD },
    { "CAMERA_STOP_FD",                         CameraAdapter::CAMERA_STOP_FD },
    { "CAMERA_SWITCH_TO_EXECUTING",             CameraAdapter::CAMERA_SWITCH_TO_EXECUTING },
    { "CAMERA_USE_BUFFERS_VIDEO_CAPTURE",       CameraAdapter::CAMERA_USE_BUFFERS_VIDEO_CAPTURE },
#ifdef OMAP_ENHANCEMENT_CPCAM
    { "CAMERA_USE_BUFFERS_REPROCESS",           CameraAdapter::CAMERA_USE_BUFFERS_REPROCESS },
    { "CAMERA_START_REPROCESS",                 CameraAdapter::CAMERA_START_REPROCESS },
#endif
};

  BaseCameraAdapter::sendCommand() uses a switch to dispatch each command to its own handling logic.

case CameraAdapter::CAMERA_START_PREVIEW: 
// BaseCameraAdapter::startPreview(): the concrete work is implemented in a subclass; the analysis continues below with the V4LCameraAdapter subclass.
        {

            CAMHAL_LOGDA("Start Preview");

        if ( ret == NO_ERROR )
            {
            ret = setState(operation);
            }

        if ( ret == NO_ERROR )
            {
            ret = startPreview();
            }

        if ( ret == NO_ERROR )
            {
            ret = commitState();
            }
        else
            {
            ret |= rollbackState();
            }

        break;

        }

  (6) hardware/ti/omap4-aah/camera/inc/V4LCameraAdapter/V4LCameraAdapter.h

private:

    class PreviewThread : public android::Thread {
            V4LCameraAdapter* mAdapter; // The V4LCameraAdapter class inherits from BaseCameraAdapter
        public:
            PreviewThread(V4LCameraAdapter* hw) : // This thread repeatedly runs the adapter's previewThread() function.
                    Thread(false), mAdapter(hw) { }
            virtual void onFirstRef() {
                run("CameraPreviewThread", android::PRIORITY_URGENT_DISPLAY);
            }
            virtual bool threadLoop() {
                mAdapter->previewThread();
                // loop until we need to quit
                return true;
            }
        };

    //Used for calculation of the average frame rate during preview
    status_t recalculateFPS();

    char * GetFrame(int &index);

    int previewThread();

  (7) hardware/ti/omap4-aah/camera/V4LCameraAdapter/V4LCameraAdapter.cpp

  startPreview()

status_t V4LCameraAdapter::startPreview()
{
    status_t ret = NO_ERROR;

    LOG_FUNCTION_NAME;
    android::AutoMutex lock(mPreviewBufsLock);

    if(mPreviewing) {
        ret = BAD_VALUE;
        goto EXIT;
    }

    for (int i = 0; i < mPreviewBufferCountQueueable; i++) {

        mVideoInfo->buf.index = i;
        mVideoInfo->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        mVideoInfo->buf.memory = V4L2_MEMORY_MMAP;

        ret = v4lIoctl(mCameraHandle, VIDIOC_QBUF, &mVideoInfo->buf); // Queue the buffers to the driver via v4lIoctl() (VIDIOC_QBUF) so the hardware can fill them with frame data.
        if (ret < 0) {
            CAMHAL_LOGEA("VIDIOC_QBUF Failed");
            goto EXIT;
        }
        nQueued++;
    }

    ret = v4lStartStreaming();// Create and start preview thread for receiving buffers from V4L Camera
    if(!mCapturing) {
        mPreviewThread = new PreviewThread(this);  // Start a PreviewThread to receive the data coming back from the V4L camera device.
        CAMHAL_LOGDA("Created preview thread");
    }

    // Update the flag to indicate we are previewing
    mPreviewing = true;
    mCapturing = false;

EXIT:
    LOG_FUNCTION_NAME_EXIT;
    return ret;
}

  previewThread()

int V4LCameraAdapter::previewThread()
{
    status_t ret = NO_ERROR;
    int width, height;
    CameraFrame frame;
    void *y_uv[2];
    int index = 0;
    int stride = 4096;
    char *fp = NULL;

    mParams.getPreviewSize(&width, &height);

    if (mPreviewing) {

        fp = this->GetFrame(index);
        if(!fp) {
            ret = BAD_VALUE;
            goto EXIT;
        }
        CameraBuffer *buffer = mPreviewBufs.keyAt(index);
        CameraFrame *lframe = (CameraFrame *)mFrameQueue.valueFor(buffer);
        if (!lframe) {
            ret = BAD_VALUE;
            goto EXIT;
        }

        debugShowFPS();

        if ( mFrameSubscribers.size() == 0 ) {
            ret = BAD_VALUE;
            goto EXIT;
        }
        y_uv[0] = (void*) lframe->mYuv[0];
        //y_uv[1] = (void*) lframe->mYuv[1];
        //y_uv[1] = (void*) (lframe->mYuv[0] + height*stride);
        convertYUV422ToNV12Tiler ( (unsigned char*)fp, (unsigned char*)y_uv[0], width, height); // Take the data returned by the device and convert its format (YUV422 -> NV12).
        CAMHAL_LOGVB("##...index= %d.;camera buffer= 0x%x; y= 0x%x; UV= 0x%x.",index, buffer, y_uv[0], y_uv[1] );

#ifdef SAVE_RAW_FRAMES
        unsigned char* nv12_buff = (unsigned char*) malloc(width*height*3/2);
        //Convert yuv422i to yuv420sp(NV12) & dump the frame to a file
        convertYUV422ToNV12 ( (unsigned char*)fp, nv12_buff, width, height);
        saveFile( nv12_buff, ((width*height)*3/2) );
        free (nv12_buff);
#endif
        // Fill in the necessary frame parameters, e.g. frame size, timestamp, etc.
        frame.mFrameType = CameraFrame::PREVIEW_FRAME_SYNC;
        frame.mBuffer = buffer;
        frame.mLength = width*height*3/2;
        frame.mAlignment = stride;
        frame.mOffset = 0;
        frame.mTimestamp = systemTime(SYSTEM_TIME_MONOTONIC);
        frame.mFrameMask = (unsigned int)CameraFrame::PREVIEW_FRAME_SYNC;

        if (mRecording)
        {
            frame.mFrameMask |= (unsigned int)CameraFrame::VIDEO_FRAME_SYNC;
            mFramesWithEncoder++;
        }

        ret = setInitFrameRefCount(frame.mBuffer, frame.mFrameMask);
        if (ret != NO_ERROR) {
            CAMHAL_LOGDB("Error in setInitFrameRefCount %d", ret);
        } else {
            ret = sendFrameToSubscribers(&frame); // Send the frame data to the subscribers
        }
    }
EXIT:

    return ret;
}

  (8) hardware/ti/omap4-aah/camera/AppCallbackNotifier.cpp: the preview-initialization path calls into the AppCallbackNotifier class.

status_t AppCallbackNotifier::startPreviewCallbacks(android::CameraParameters &params, CameraBuffer *buffers, uint32_t *offsets, int fd, size_t length, size_t count)
{
    unsigned int *bufArr;
    int size = 0;

    LOG_FUNCTION_NAME;

    android::AutoMutex lock(mLock);

    if ( NULL == mFrameProvider )
        {
        CAMHAL_LOGEA("Trying to start video recording without FrameProvider");
        return -EINVAL;
        }

    if ( mPreviewing )
        {
        CAMHAL_LOGDA("+Already previewing");
        return NO_INIT;
        }

    int w,h;
    ///Get preview size
    params.getPreviewSize(&w, &h);

    // save preview pixel format, size and stride
    mPreviewWidth = w;
    mPreviewHeight = h;
    mPreviewStride = 4096;
    mPreviewPixelFormat = CameraHal::getPixelFormatConstant(params.getPreviewFormat());
    size = CameraHal::calculateBufferSize(mPreviewPixelFormat, w, h);

    mPreviewMemory = mRequestMemory(-1, size, AppCallbackNotifier::MAX_BUFFERS, NULL);
    if (!mPreviewMemory) {
        return NO_MEMORY;
    }

    for (int i=0; i < AppCallbackNotifier::MAX_BUFFERS; i++) {
        mPreviewBuffers[i].type = CAMERA_BUFFER_MEMORY;
        mPreviewBuffers[i].opaque = (unsigned char*) mPreviewMemory->data + (i*size);
        mPreviewBuffers[i].mapped = mPreviewBuffers[i].opaque;
    }

    if ( mCameraHal->msgTypeEnabled(CAMERA_MSG_PREVIEW_FRAME ) ) {
         mFrameProvider->enableFrameNotification(CameraFrame::PREVIEW_FRAME_SYNC);
    }

    if ( mCameraHal->msgTypeEnabled(CAMERA_MSG_POSTVIEW_FRAME) ) {
         mFrameProvider->enableFrameNotification(CameraFrame::SNAPSHOT_FRAME); // Enable snapshot (postview) frame notifications.
    }

    mPreviewBufCount = 0;

    mPreviewing = true;

    LOG_FUNCTION_NAME_EXIT;

    return NO_ERROR;
}
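
  startPreviewCallbacks() allocates MAX_BUFFERS callback buffers of size bytes each, where size depends on the preview pixel format and resolution. A hedged sketch of the arithmetic for the common 4:2:0 preview formats (the real value comes from CameraHal::calculateBufferSize() for the negotiated format):

// Hedged sketch of the per-buffer size arithmetic for a 4:2:0 preview format (NV21/NV12).
public final class PreviewBufferMath {
    static int bufferSize420(int width, int height) {
        return width * height * 3 / 2;  // full-resolution Y plane + quarter-resolution chroma
    }

    public static void main(String[] args) {
        // Example: a 1280x720 preview frame needs 1280 * 720 * 3 / 2 = 1,382,400 bytes,
        // and startPreviewCallbacks() requests MAX_BUFFERS such buffers from mRequestMemory.
        System.out.println(bufferSize420(1280, 720));
    }
}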

  The callback functions used above are registered here:

void AppCallbackNotifier::setCallbacks(CameraHal* cameraHal,
                                        camera_notify_callback notify_cb,
                                        camera_data_callback data_cb,
                                        camera_data_timestamp_callback data_cb_timestamp,
                                        camera_request_memory get_memory,
                                        void *user)
{
    android::AutoMutex lock(mLock);

    LOG_FUNCTION_NAME;

    mCameraHal = cameraHal;
    mNotifyCb = notify_cb;
    mDataCb = data_cb;
    mDataCbTimestamp = data_cb_timestamp;
    mRequestMemory = get_memory;
    mCallbackCookie = user;

    LOG_FUNCTION_NAME_EXIT;
}

  notifyEvent()

case CameraHalEvent::EVENT_METADATA: // preview metadata

     metaEvtData = evt->mEventData->metadataEvent;

     if ( ( NULL != mCameraHal ) &&
          ( NULL != mNotifyCb) &&
          ( mCameraHal->msgTypeEnabled(CAMERA_MSG_PREVIEW_METADATA) ) )
         {
         // WA for an issue inside CameraService
         camera_memory_t *tmpBuffer = mRequestMemory(-1, 1, 1, NULL);
       // After allocating a camera_memory_t buffer, the data callback is invoked to pass the metadata up to the upper layers.

         mDataCb(CAMERA_MSG_PREVIEW_METADATA,
                 tmpBuffer,
                 0,
                 metaEvtData->getMetadataResult(),
                 mCallbackCookie);

         metaEvtData.clear();

         if ( NULL != tmpBuffer ) {
             tmpBuffer->release(tmpBuffer);
         }

         }

     break;

  Summary diagram (image omitted): in the HAL layer, CameraHardwareInterface is the common entry point, while the actual bridge to the driver layer is platform-specific; different platforms provide different implementations.

 五、Camera.takePicture() flow

  In Camera API 1, the data flow relies mainly on callback functions: data is returned layer by layer, from the bottom up, until it reaches the application.

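  Before diving into the sources, here is a minimal application-side sketch of that callback-driven data flow using the public Camera API 1 interface (a hedged illustration only: surface setup, threading and release() are omitted):

import android.hardware.Camera;

// Minimal sketch (hedged): open the default back-facing camera and receive preview frames
// through the callback path analyzed in this article. setPreviewDisplay() and lifecycle
// handling are omitted for brevity.
public final class PreviewFlowSketch {
    public static Camera startCallbackPreview() {
        Camera camera = Camera.open();            // may return null if there is no back camera
        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // 'data' arrives here after travelling HAL -> CameraClient -> Binder -> JNI
                // -> EventHandler; by default it is an NV21 buffer of the preview size.
            }
        });
        camera.startPreview();                    // preview frames start flowing upward
        return camera;
    }
}
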
 1. Callbacks registered at open time: frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h

/** Set the notification and data callbacks */
void setCallbacks(notify_callback notify_cb,
                  data_callback data_cb,
                  data_callback_timestamp data_cb_timestamp,
                  void* user)
{
    mNotifyCb = notify_cb;
    mDataCb = data_cb; // Save the notify/data/dataTimestamp callbacks in the function pointers mNotifyCb, mDataCb and mDataCbTimestamp.
    mDataCbTimestamp = data_cb_timestamp;
    mCbUser = user;

    ALOGV("%s(%s)", __FUNCTION__, mName.string());

    if (mDevice->ops->set_callbacks) { 
// Note: what gets registered on mDevice->ops are not the function pointers saved above,
// but wrapper functions such as __data_cb, which are implemented in this same file and
// add a thin layer around the original callbacks.
        mDevice->ops->set_callbacks(mDevice,
                               __notify_cb,
                               __data_cb,
                               __data_cb_timestamp,
                               __get_memory,
                               this);
    }
}

  __data_cb(): a thin wrapper around the original callback that adds a check against out-of-range buffer indices.

static void __data_cb(int32_t msg_type,
                      const camera_memory_t *data, unsigned int index,
                      camera_frame_metadata_t *metadata,
                      void *user)
{
    ALOGV("%s", __FUNCTION__);
    CameraHardwareInterface *__this =
            static_cast<CameraHardwareInterface *>(user);
    sp<CameraHeapMemory> mem(static_cast<CameraHeapMemory *>(data->handle));
    if (index >= mem->mNumBufs) {
        ALOGE("%s: invalid buffer index %d, max allowed is %d", __FUNCTION__,
             index, mem->mNumBufs);
        return;
    }
    __this->mDataCb(msg_type, mem->mBuffers[index], metadata, __this->mCbUser);
}

 2. Control flow:

  (1)frameworks/base/core/java/android/hardware/Camera.java

  takePicture()

public final void takePicture(ShutterCallback shutter, PictureCallback raw,
        PictureCallback postview, PictureCallback jpeg) {
    mShutterCallback = shutter; // Set the shutter callback.
    mRawImageCallback = raw; // Set the image data callbacks for each picture type.
    mPostviewCallback = postview;
    mJpegCallback = jpeg;

    // If callback is not set, do not send me callbacks.
    int msgType = 0;
    if (mShutterCallback != null) {
        msgType |= CAMERA_MSG_SHUTTER;
    }
    if (mRawImageCallback != null) {
        msgType |= CAMERA_MSG_RAW_IMAGE;
    }
    if (mPostviewCallback != null) {
        msgType |= CAMERA_MSG_POSTVIEW_FRAME;
    }
    if (mJpegCallback != null) {
        msgType |= CAMERA_MSG_COMPRESSED_IMAGE;
    }

    native_takePicture(msgType);
    // Call the native (JNI) takePicture method. msgType is built from whichever callbacks were
    // supplied; each callback type corresponds to one bit of the mask (e.g. 0b1, 0b10, 0b100).
    mFaceDetectionRunning = false;
}
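
  For reference, a hedged application-side sketch of how these four callbacks are typically supplied; passing null for a callback simply leaves the corresponding bit out of msgType:

import android.hardware.Camera;

// Minimal sketch (hedged): trigger a capture and receive the JPEG bytes through the
// callback chain analyzed below. Restarting the preview afterwards is up to the caller.
public final class TakePictureSketch {
    public static void capture(Camera camera) {
        camera.takePicture(
                new Camera.ShutterCallback() {               // sets CAMERA_MSG_SHUTTER
                    @Override public void onShutter() { /* play a sound, flash the UI, ... */ }
                },
                null,                                        // raw: CAMERA_MSG_RAW_IMAGE not requested
                null,                                        // postview: CAMERA_MSG_POSTVIEW_FRAME not requested
                new Camera.PictureCallback() {               // jpeg: sets CAMERA_MSG_COMPRESSED_IMAGE
                    @Override public void onPictureTaken(byte[] jpegData, Camera cam) {
                        // jpegData is the compressed image delivered via handleMessage()
                        // (CAMERA_MSG_COMPRESSED_IMAGE) at the end of this article.
                    }
                });
    }
}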

 3.Android Runtime:frameworks/base/core/jni/android_hardware_Camera.cpp

static void android_hardware_Camera_takePicture(JNIEnv *env, jobject thiz, jint msgType)
{
    ALOGV("takePicture");
    JNICameraContext* context;
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    // Obtain the already-opened native camera instance and call its takePicture() interface.
    if (camera == 0) return;

    /*
     * When CAMERA_MSG_RAW_IMAGE is requested, if the raw image callback
     * buffer is available, CAMERA_MSG_RAW_IMAGE is enabled to get the
     * notification _and_ the data; otherwise, CAMERA_MSG_RAW_IMAGE_NOTIFY
     * is enabled to receive the callback notification but no data.
     *
     * Note that CAMERA_MSG_RAW_IMAGE_NOTIFY is not exposed to the
     * Java application.
     *
     Note: this function does some extra handling for RAW_IMAGE:
        If a RAW callback was set, check whether a matching callback buffer exists in the context.
        If no buffer is found, clear CAMERA_MSG_RAW_IMAGE and replace it with CAMERA_MSG_RAW_IMAGE_NOTIFY.
        After the substitution, only the notification is delivered, with no image data attached.
    */
    if (msgType & CAMERA_MSG_RAW_IMAGE) {
        ALOGV("Enable raw image callback buffer");
        if (!context->isRawImageCallbackBufferAvailable()) {
            ALOGV("Enable raw image notification, since no callback buffer exists");
            msgType &= ~CAMERA_MSG_RAW_IMAGE;
            msgType |= CAMERA_MSG_RAW_IMAGE_NOTIFY;
        }
    }

    if (camera->takePicture(msgType) != NO_ERROR) {
        jniThrowRuntimeException(env, "takePicture failed");
        return;
    }
}

 4.C/C++ Libraries

  (1)frameworks/av/camera/Camera.cpp

// take a picture
status_t Camera::takePicture(int msgType)
{
    ALOGV("takePicture: 0x%x", msgType);
    sp <::android::hardware::ICamera> c = mCamera;
    if (c == 0) return NO_INIT;
    return c->takePicture(msgType); // Get the ICamera handle and call its takePicture interface.
}

  (2)frameworks/av/camera/ICamera.cpp

// take a picture - returns an IMemory (ref-counted mmap)
status_t takePicture(int msgType)
{
    ALOGV("takePicture: 0x%x", msgType);
    Parcel data, reply;
    data.writeInterfaceToken(ICamera::getInterfaceDescriptor());
    data.writeInt32(msgType);
    remote()->transact(TAKE_PICTURE, data, &reply);
    // Send the command to the server side over Binder; on the server it ends up in CameraClient::takePicture().
    status_t ret = reply.readInt32();
    return ret;
}
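
  The proxy pattern above — write an interface token and the arguments into a Parcel, call transact(), read the status from the reply — is the standard client-side Binder idiom. A hedged Java analogue, purely for illustration: the TAKE_PICTURE transaction code below is hypothetical, and real applications use the generated AIDL proxies rather than raw transact():

import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

// Hedged Java analogue of the proxy-side call above: marshal the arguments into a Parcel,
// issue a synchronous transact() to the remote service, and read the status from the reply.
public final class CameraProxySketch {
    private static final int TAKE_PICTURE = IBinder.FIRST_CALL_TRANSACTION + 9; // hypothetical code
    private final IBinder remote;

    public CameraProxySketch(IBinder remote) {
        this.remote = remote;
    }

    public int takePicture(int msgType) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeInterfaceToken("android.hardware.ICamera"); // interface descriptor
            data.writeInt(msgType);
            remote.transact(TAKE_PICTURE, data, reply, 0);        // blocks until the server returns
            return reply.readInt();                               // status written by the server
        } finally {
            data.recycle();
            reply.recycle();
        }
    }
}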

  (3)frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp

// take a picture - image is returned in callback
status_t CameraClient::takePicture(int msgType) {
    LOG1("takePicture (pid %d): 0x%x", getCallingPid(), msgType);

    Mutex::Autolock lock(mLock);
    status_t result = checkPidAndHardware();
    if (result != NO_ERROR) return result;

    if ((msgType & CAMERA_MSG_RAW_IMAGE) && 
        (msgType & CAMERA_MSG_RAW_IMAGE_NOTIFY)) {
        ALOGE("CAMERA_MSG_RAW_IMAGE and CAMERA_MSG_RAW_IMAGE_NOTIFY"
                " cannot be both enabled");
        return BAD_VALUE;
    }
// Note: CAMERA_MSG_RAW_IMAGE and CAMERA_MSG_RAW_IMAGE_NOTIFY must not both be enabled, hence the check above.
// We only accept picture related message types
// and ignore other types of messages for takePicture().
    int picMsgType = msgType // Filter the incoming mask, keeping only the bits relevant to takePicture().
                        & (CAMERA_MSG_SHUTTER |
                           CAMERA_MSG_POSTVIEW_FRAME |
                           CAMERA_MSG_RAW_IMAGE |
                           CAMERA_MSG_RAW_IMAGE_NOTIFY |
                           CAMERA_MSG_COMPRESSED_IMAGE);

    enableMsgType(picMsgType);

    return mHardware->takePicture(); // Call the takePicture() interface of CameraHardwareInterface.
}

 5. Data flow: since the data travels upward through callbacks, the analysis below proceeds from the bottom layer up.

  (1)HAL:frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h

/**
 * Take a picture.
 */
status_t takePicture()
{
    ALOGV("%s(%s)", __FUNCTION__, mName.string());
    if (mDevice->ops->take_picture)
        return mDevice->ops->take_picture(mDevice); // Use the function pointer stored in mDevice to invoke the platform-specific HAL implementation of takePicture.
    return INVALID_OPERATION;
}

  __data_cb(): this callback was registered by setCallbacks() in the same file. Once the camera device produces data, the data is passed upward and this callback is invoked at the HAL layer.

static void __data_cb(int32_t msg_type,
                      const camera_memory_t *data, unsigned int index,
                      camera_frame_metadata_t *metadata,
                      void *user)
{
    ALOGV("%s", __FUNCTION__);
    CameraHardwareInterface *__this =
            static_cast<CameraHardwareInterface *>(user);
    sp<CameraHeapMemory> mem(static_cast<CameraHeapMemory *>(data->handle));
    if (index >= mem->mNumBufs) {
        ALOGE("%s: invalid buffer index %d, max allowed is %d", __FUNCTION__,
             index, mem->mNumBufs);
        return;
    }
    __this->mDataCb(msg_type, mem->mBuffers[index], metadata, __this->mCbUser);
    // The mDataCb pointer refers to dataCallback(), implemented in the CameraClient class.
}

  (2)C/C++ Libraries:

    frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp

void CameraClient::dataCallback(int32_t msgType, 
        const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
    LOG2("dataCallback(%d)", msgType);
//该回调在initialize() 函数中设置到 CameraHardwareInterface 中。
    sp<CameraClient> client = static_cast<CameraClient*>(getClientFromCookie(user).get()); 
 //启动这个回调后,就从 Cookie 中获取已连接的客户端。
    if (client.get() == nullptr) return;

    if (!client->lockIfMessageWanted(msgType)) return;
    if (dataPtr == 0 && metadata == NULL) {
        ALOGE("Null data returned in data callback");
        client->handleGenericNotify(CAMERA_MSG_ERROR, UNKNOWN_ERROR, 0);
        return;
    }

    switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) { // Dispatch to the matching handler according to msgType.
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(msgType, dataPtr, metadata);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:
            client->handleCompressedPicture(dataPtr);
            break;
        default:
            client->handleGenericData(msgType, dataPtr, metadata);
            break;
    }
}

  handleRawPicture()

// picture callback - raw image ready
void CameraClient::handleRawPicture(const sp<IMemory>& mem) {
    disableMsgType(CAMERA_MSG_RAW_IMAGE);

    ssize_t offset;
    size_t size;
    sp<IMemoryHeap> heap = mem->getMemory(&offset, &size);

    sp<hardware::ICameraClient> c = mRemoteCallback;
    // During the open flow, connect() set mRemoteCallback to a client instance, a strong pointer to ICameraClient.
    mLock.unlock();
    if (c != 0) {
        c->dataCallback(CAMERA_MSG_RAW_IMAGE, mem, NULL);
        // Invoke the client's dataCallback over Binder; dataCallback is implemented in the Camera class.
    }
}

  frameworks/av/camera/Camera.cpp

// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
        // Call the CameraListener's postData interface (implemented in android_hardware_Camera.cpp) to pass the data further up.
    }
}

  (3)Android Runtime: frameworks/base/core/jni/android_hardware_Camera.cpp

void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                camera_frame_metadata_t *metadata)
// postData is a member function of JNICameraContext, which inherits from CameraListener.
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv(); // First obtain the JNI environment pointer.
    if (mCameraJObjectWeak == NULL) {
        ALOGW("callback on dead camera object");
        return;
    }

    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;
    // Then strip the CAMERA_MSG_PREVIEW_METADATA bit.
// return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;

        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            ALOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
                // The key step is the copyAndPost() function.
            }
            break;

        // There is no data.
        case 0:
            break;

        default:
            ALOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }

    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
    }
}

  copyAndPost()

void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) { // First confirm that the IMemory actually contains data.
        ssize_t offset;
        size_t size;
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size); 
        ALOGV("copyAndPost: off=%zd, size=%zu", offset, size);
        uint8_t *heapBase = (uint8_t*)heap->base();

        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);

            if (msgType == CAMERA_MSG_RAW_IMAGE) {
                obj = getCallbackBuffer(env, &mRawImageCallbackBuffers, size);
            } else if (msgType == CAMERA_MSG_PREVIEW_FRAME && mManualBufferMode) {
                obj = getCallbackBuffer(env, &mCallbackBuffers, size);

                if (mCallbackBuffers.isEmpty()) {
                    ALOGV("Out of buffers, clearing callback!");
                    mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
                    mManualCameraCallbackSet = false;

                    if (obj == NULL) {
                        return;
                    }
                }
            } else {
                ALOGV("Allocating callback buffer");
                obj = env->NewByteArray(size);
            }

            if (obj == NULL) {
                ALOGE("Couldn't allocate byte array for JPEG data");
                env->ExceptionClear();
            } else {
                env->SetByteArrayRegion(obj, 0, size, data);
            }
        } else {
            ALOGE("image heap is NULL");
        }
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj); // Hand the image data over to the Java side.
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}

  (4)frameworks/base/core/java/android/hardware/Camera.java

private static void postEventFromNative(Object camera_ref,
                                        int what, int arg1, int arg2, Object obj)
// Called from native code via fields.post_event (see copyAndPost() above).
{
    Camera c = (Camera)((WeakReference)camera_ref).get();
    // First make sure the Camera object still exists.
    if (c == null)
        return;

    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        // Wrap the data received from the native layer into a Message via mEventHandler.obtainMessage(),
        // then hand it off with sendMessage().
        c.mEventHandler.sendMessage(m);
    }
}
@Override
public void handleMessage(Message msg) { // member of EventHandler, which extends Handler
    switch(msg.what) {
    case CAMERA_MSG_SHUTTER:
        if (mShutterCallback != null) {
            mShutterCallback.onShutter();
        }
        return;

    case CAMERA_MSG_RAW_IMAGE:
        if (mRawImageCallback != null) {
            mRawImageCallback.onPictureTaken((byte[])msg.obj, mCamera);
        }
        return;

    case CAMERA_MSG_COMPRESSED_IMAGE:
        if (mJpegCallback != null) {
            mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
        }
        return;

    case CAMERA_MSG_PREVIEW_FRAME:
        PreviewCallback pCb = mPreviewCallback;
        if (pCb != null) {
            if (mOneShot) {
                // Clear the callback variable before the callback
                // in case the app calls setPreviewCallback from
                // the callback function
                mPreviewCallback = null;
            } else if (!mWithBuffer) {
                // We're faking the camera preview mode to prevent
                // the app from being flooded with preview frames.
                // Set to oneshot mode again.
                setHasPreviewCallback(true, false);
            }
            pCb.onPreviewFrame((byte[])msg.obj, mCamera);
        }
        return;

    case CAMERA_MSG_POSTVIEW_FRAME:
        if (mPostviewCallback != null) {
            mPostviewCallback.onPictureTaken((byte[])msg.obj, mCamera);
            // Through these onPictureTaken() calls, the data passed up from the lower layers finally
            // reaches the Java application, which extracts the image bytes from the Message.
        }
        return;

    case CAMERA_MSG_FOCUS:
        AutoFocusCallback cb = null;
        synchronized (mAutoFocusCallbackLock) {
            cb = mAutoFocusCallback;
        }
        if (cb != null) {
            boolean success = msg.arg1 == 0 ? false : true;
            cb.onAutoFocus(success, mCamera);
        }
        return;

    case CAMERA_MSG_ZOOM:
        if (mZoomListener != null) {
            mZoomListener.onZoomChange(msg.arg1, msg.arg2 != 0, mCamera);
        }
        return;

    case CAMERA_MSG_PREVIEW_METADATA:
        if (mFaceListener != null) {
            mFaceListener.onFaceDetection((Face[])msg.obj, mCamera);
        }
        return;

    case CAMERA_MSG_ERROR :
        Log.e(TAG, "Error " + msg.arg1);
        if (mErrorCallback != null) {
            mErrorCallback.onError(msg.arg1, mCamera);
        }
        return;

    case CAMERA_MSG_FOCUS_MOVE:
        if (mAutoFocusMoveCallback != null) {
            mAutoFocusMoveCallback.onAutoFocusMoving(msg.arg1 == 0 ? false : true, mCamera);
        }
        return;

    default:
        Log.e(TAG, "Unknown message type " + msg.what);
        return;
    }
}
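
  At this point the image bytes have reached the application. A hedged sketch of what a typical JPEG PictureCallback does with them (the output file is an arbitrary choice made by the app):

import android.hardware.Camera;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Hedged sketch of the final consumer of the data flow: the application's JPEG callback
// receives the byte[] unpacked from msg.obj above and persists it.
public final class JpegSaver implements Camera.PictureCallback {
    private final File outputFile;   // arbitrary destination chosen by the app

    public JpegSaver(File outputFile) {
        this.outputFile = outputFile;
    }

    @Override
    public void onPictureTaken(byte[] jpegData, Camera camera) {
        try (FileOutputStream out = new FileOutputStream(outputFile)) {
            out.write(jpegData);             // compressed image delivered by CAMERA_MSG_COMPRESSED_IMAGE
        } catch (IOException e) {
            // real code would surface the error to the user
        }
        camera.startPreview();               // takePicture() stops the preview; restart it for the next shot
    }
}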

  Summary diagram (image omitted).

 Summary:

  Both the control flow and the data flow pass through the five layers in turn: the control flow carries commands from the top layer down to the bottom, while the data flow carries data from the bottom layer back up to the top. To plug a custom C++ processing library into the camera, one option is to modify the HAL layer so that RAW images are routed into the library, return the processed RAW images to the HAL (some handling of the RAW format is needed in the HAL before the image can be passed up), and finally deliver the result to the top-level application through the normal callback path.
