Android 8.1 Qualcomm msm8950 Platform Camera HAL Layer Architecture

Before diving into the source code, if you are not familiar with the basics of the HAL layer and the kernel driver layer, you can skim these two articles:
Writing a Linux kernel driver for Android on Ubuntu
Adding a hardware abstraction layer (HAL) module to Android to access a Linux kernel driver

A brief summary of the two articles above:
Kernel driver layer: we define a global static file_operations variable, fill in its read/write operation callbacks in the prescribed format, and implement those callbacks, for example:

static struct file_operations hello_fops = {
	.owner = THIS_MODULE,
	.open = hello_open,
	.release = hello_release,
	.read = hello_read,
	.write = hello_write,
};
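
For completeness, here is a minimal, illustrative sketch of how such a file_operations table typically gets registered as a character device so that a node can appear under /dev. The names (hello_major, the "hello" device name) are assumptions for illustration, not taken from the articles above:

#include <linux/fs.h>
#include <linux/device.h>
#include <linux/module.h>

/* Illustrative sketch only: wiring hello_fops up as a char device. */
static int hello_major;
static struct class *hello_class;

static int __init hello_init(void)
{
    /* passing 0 lets the kernel pick a free major number */
    hello_major = register_chrdev(0, "hello", &hello_fops);
    if (hello_major < 0)
        return hello_major;

    /* create /sys/class/hello so that udev can create /dev/hello */
    hello_class = class_create(THIS_MODULE, "hello");
    device_create(hello_class, NULL, MKDEV(hello_major, 0), NULL, "hello");
    return 0;
}
module_init(hello_init);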

HAL layer: similarly, we create the corresponding structures in the format the system prescribes: the HAL module, device, and methods. Taking the module as an example:

/* hardware module structure */
struct hello_module_t {
	/* The hw_module_t member must come first: when the system handles this
	 * structure it assigns the structure pointer to an hw_module_t pointer,
	 * and because it is the first member the cast needs no offset adjustment. */
	struct hw_module_t common;
};

At runtime the system creates a virtual device node under /dev; calling the system function open("/dev/xxx") on that node is enough to talk to the kernel driver. Likewise, the HAL exposes a global module symbol: layers above the HAL call the system function hw_get_module("module_name") to obtain that global module and can then access the properties and methods the module is configured with.
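
As a minimal sketch of the caller side (the "hello" module id here is illustrative, matching the example above rather than any real HAL):

#include <hardware/hardware.h>

/* Illustrative sketch only: loading a HAL module and opening its device. */
const struct hw_module_t *module = NULL;
struct hw_device_t *device = NULL;

if (hw_get_module("hello", &module) == 0) {
    /* the module's methods->open fills in the device handle */
    module->methods->open(module, "hello", &device);
}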


The HAL module entry point

The hw_module_t node (think of it as a global structure variable) marks the entry point from the framework into the HAL; only once you know the entry point can you follow the thread and work out the logic.

Let's study this using the Camera HAL as an example:

In the Android 8.1.0 source, the camera module structures are declared in the two headers camera_common.h and camera3.h under the hardware directory; they declare the contents of the camera module structure, the related methods and callbacks, and a number of camera parameter types. Qualcomm's definitions of these module instances live in QCamera2Hal.cpp, as follows:

// the layout below is the fixed, required format
static hw_module_t camera_common = { 
    .tag                    = HARDWARE_MODULE_TAG,
    .module_api_version     = CAMERA_MODULE_API_VERSION_2_4,
    .hal_api_version        = HARDWARE_HAL_API_VERSION,
    .id                     = CAMERA_HARDWARE_MODULE_ID,                                                                                                                                                                                                                                 
    .name                   = "QCamera Module",
    .author                 = "Qualcomm Innovation Center Inc",
    .methods                = &qcamera::QCamera2Factory::mModuleMethods,
    .dso                    = NULL,
    .reserved               = {0} 
};

camera_module_t HAL_MODULE_INFO_SYM = { 
	// common must be the first member and of type hw_module_t, so the pointer cast works
    .common                 = camera_common,
    // the methods this module provides
    .get_number_of_cameras  = qcamera::QCamera2Factory::get_number_of_cameras,
    .get_camera_info        = qcamera::QCamera2Factory::get_camera_info,
    .set_callbacks          = qcamera::QCamera2Factory::set_callbacks,
    .get_vendor_tag_ops     = qcamera::QCamera3VendorTags::get_vendor_tag_ops,
    .open_legacy            = qcamera::QCamera2Factory::open_legacy,
    .set_torch_mode         = qcamera::QCamera2Factory::set_torch_mode,
    .init                   = NULL,
    .reserved               = {0} 
};

QCamera2Factory.cpp
struct hw_module_methods_t QCamera2Factory::mModuleMethods = {                                                                                                                                                                                                                           
    .open = QCamera2Factory::camera_device_open,
};

So how does the framework obtain this module? As I understand it, the module above is a global object; the system adds it to a list of all global modules keyed by the declared ID, and whoever wants the module retrieves it with the system function hw_get_module. Below is the flow of how CameraProvider obtains the module:
[Figure: flow of CameraProvider obtaining the camera module]
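
A hedged sketch of roughly what that lookup looks like from a provider-style caller (simplified; error handling and the real provider plumbing are omitted):

#include <hardware/camera_common.h>

/* Illustrative sketch only: obtaining the camera module and making the
 * first call the provider makes afterwards. */
camera_module_t *rawModule = NULL;
int err = hw_get_module(CAMERA_HARDWARE_MODULE_ID,
                        (const hw_module_t **)&rawModule);
if (err == 0 && rawModule != NULL) {
    int numCameras = rawModule->get_number_of_cameras();
}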


Taking the module's get_number_of_cameras as an example

Since every call into the HAL goes through the HAL module structure, and CameraProvider's first call after obtaining the module is get_number_of_cameras, that is where we start. Before that, we need to meet an important structure:

typedef struct {
    int8_t num_cam;  // number of cameras
    // cam_shim_ops holds three function pointers used to open, close and
    // release camera sessions
    mm_camera_shim_ops_t cam_shim_ops;
    // 2-D array: the first dimension is the camera index, the second holds the
    // device name such as video0, video1, ...; note that videoX does not
    // necessarily match the first-dimension index
    char video_dev_name[MM_CAMERA_MAX_NUM_SENSORS][MM_CAMERA_DEV_NAME_LEN];
    // array of camera objects; MM_CAMERA_MAX_NUM_SENSORS is defined as 5, and
    // each physical camera device maps to one of these structures
    mm_camera_obj_t *cam_obj[MM_CAMERA_MAX_NUM_SENSORS];
    // info stores each camera's facing, type, orientation, etc.
    struct camera_info info[MM_CAMERA_MAX_NUM_SENSORS];
    // each camera's type: STANDALONE, AUX or MAIN
    cam_sync_type_t cam_type[MM_CAMERA_MAX_NUM_SENSORS];
    cam_sync_mode_t cam_mode[MM_CAMERA_MAX_NUM_SENSORS];
    // whether each camera is a YUV sensor
    uint8_t is_yuv[MM_CAMERA_MAX_NUM_SENSORS]; // 1=CAM_SENSOR_YUV, 0=CAM_SENSOR_RAW
} mm_camera_ctrl_t;

There is one mm_camera_obj_t per camera; see the comments above for the other members.
The flow of get_number_of_cameras is shown below; the call chain is fairly simple, but the logic inside the function is relatively involved:
[Figure: call flow of get_number_of_cameras]

First, QCamera2Factory's get_number_of_cameras:

// global variable
QCamera2Factory *gQCamera2Factory = NULL;
// only declared when HAL1 support is compiled in; we are using HAL3 here
#ifdef QCAMERA_HAL1_SUPPORT
QCameraMuxer *gQCameraMuxer = NULL;
#endif

int QCamera2Factory::get_number_of_cameras()
{
    int numCameras = 0;
    // lazily create the global factory object; all cameras share it
    if (!gQCamera2Factory) {
        gQCamera2Factory = new QCamera2Factory();
        if (!gQCamera2Factory) {
            LOGE("Failed to allocate Camera2Factory object");
            return 0;
        }
    }

    numCameras = gQCamera2Factory->getNumberOfCameras();
    // return the camera count, which was computed in the QCamera2Factory constructor
    return numCameras;
}

QCamera2Factory::QCamera2Factory()
{
    mHalDescriptors = NULL;
    mCallbacks = NULL;
    // this function lives in mm_camera_interface.c; it talks to the driver
    // to find out how many cameras there are
    mNumOfCameras = get_num_of_cameras();
    // hal_desc is a structure declared in the header, holding a camera_id
    // and a HAL version number
    mHalDescriptors = new hal_desc[mNumOfCameras];
    if ( NULL != mHalDescriptors) {
        uint32_t cameraId = 0;
        for (int i = 0; i < mNumOfCameras ; i++, cameraId++) {
            // at the HAL level camera ids simply start at 0, 1, ...;
            // in the kernel that is not necessarily the case
            mHalDescriptors[i].cameraId = cameraId;
            // Set Device version to 3.x when HAL3 is enabled and it is a BAYER sensor
            if (isHAL3Enabled && !(is_yuv_sensor(cameraId))) {
                mHalDescriptors[i].device_version =
                        CAMERA_DEVICE_API_VERSION_3_0;
            } else {
                mHalDescriptors[i].device_version =
                        CAMERA_DEVICE_API_VERSION_1_0;
            }
        }
    } else {
        LOGE("Not enough resources to allocate HAL descriptor table!");
    }
}

mm_camera_interface is the code where the HAL talks to the kernel driver layer. This part is rather obscure, so let's take it slowly.

static mm_camera_ctrl_t g_cam_ctrl;
uint8_t get_num_of_cameras()
{
    int rc = 0;
    int i = 0;
    int dev_fd = -1;
    struct media_device_info mdev_info;
    int num_media_devices = 0;
    int8_t num_cameras = 0;
    char subdev_name[32];
    int32_t sd_fd = -1;
    struct sensor_init_cfg_data cfg;
    char prop[PROPERTY_VALUE_MAX];

    if (g_shim_initialized == FALSE) {
        // This is the key step: the shim lib loads Qualcomm's mct library.
        // Much of the HAL's data exchange does not go to the kernel directly
        // but through the shim; here the shim is initialized.
        if (mm_camera_load_shim_lib() < 0) {
            LOGE("Failed to module shim library");
            return 0;
        } else {
            g_shim_initialized = TRUE;
        }
    } else {
        LOGH("module shim layer already intialized");
    }
    // get_num_of_cameras is the first function on the HAL/driver path, so the
    // global g_cam_ctrl is filled in here; reset it first
    memset (&g_cam_ctrl, 0, sizeof (g_cam_ctrl));
    while (1) {
        uint32_t num_entities = 1U;
        char dev_name[32];
        // build dev_name as /dev/media0, /dev/media1, ... until open() fails
        snprintf(dev_name, sizeof(dev_name), "/dev/media%d", num_media_devices);
        // open the virtual device under /dev/mediaX to talk to the driver
        dev_fd = open(dev_name, O_RDWR | O_NONBLOCK);
        if (dev_fd < 0) {
            LOGD("Done discovering media devices\n");
            break;
        }
        num_media_devices++;
        // ioctl with MEDIA_IOC_DEVICE_INFO queries the device info into mdev_info
        rc = ioctl(dev_fd, MEDIA_IOC_DEVICE_INFO, &mdev_info);
        if (rc < 0) {
            LOGE("Error: ioctl media_dev failed: %s\n", strerror(errno));
            close(dev_fd);
            dev_fd = -1;
            break;
        }

        // only /dev/mediaX devices whose model matches MSM_CONFIGURATION_NAME
        // (msm_config) are of interest in this pass
        if (strncmp(mdev_info.model, MSM_CONFIGURATION_NAME,
          sizeof(mdev_info.model)) != 0) {
            close(dev_fd);
            dev_fd = -1;
            continue;
        }

        // reaching this inner while loop means this media device is camera related
        while (1) {
            struct media_entity_desc entity;
            memset(&entity, 0, sizeof(entity));
            entity.id = num_entities++;
            LOGD("entity id %d", entity.id);
            // MEDIA_IOC_ENUM_ENTITIES (_IOWR('|', 0x01, struct media_entity_desc))
            // enumerates the media device's entities
            rc = ioctl(dev_fd, MEDIA_IOC_ENUM_ENTITIES, &entity);
            if (rc < 0) {
                ...
                break;
            }
            LOGD("entity name %s type %d group id %d",
                entity.name, entity.type, entity.group_id);
            // if the entity is a v4l2 subdev in the camera sensor-init group,
            // we have found what we are looking for
            if (entity.type == MEDIA_ENT_T_V4L2_SUBDEV &&
                entity.group_id == MSM_CAMERA_SUBDEV_SENSOR_INIT) {
                snprintf(subdev_name, sizeof(dev_name), "/dev/%s", entity.name);
                break;
            }
        }
        close(dev_fd);
        dev_fd = -1;
    }
    num_media_devices = 0;
    // a second pass over the media devices
    while (1) {
        uint32_t num_entities = 1U;
        char dev_name[32];

        // same as in the previous loop
        snprintf(dev_name, sizeof(dev_name), "/dev/media%d", num_media_devices);
        dev_fd = open(dev_name, O_RDWR | O_NONBLOCK);
        if (dev_fd < 0) {
            LOGD("Done discovering media devices: %s\n", strerror(errno));
            break;
        }
        num_media_devices++;
        memset(&mdev_info, 0, sizeof(mdev_info));
        rc = ioctl(dev_fd, MEDIA_IOC_DEVICE_INFO, &mdev_info);
        if (rc < 0) {
            LOGE("Error: ioctl media_dev failed: %s\n", strerror(errno));
            close(dev_fd);
            dev_fd = -1;
            num_cameras = 0;
            break;
        }
        // this time look for devices whose model is msm_camera, a more precise match
        if(strncmp(mdev_info.model, MSM_CAMERA_NAME, sizeof(mdev_info.model)) != 0) {
            close(dev_fd);
            dev_fd = -1;
            continue;
        }

        while (1) {
            struct media_entity_desc entity;
            memset(&entity, 0, sizeof(entity));
            entity.id = num_entities++;
            // enumerate this media device's entities
            rc = ioctl(dev_fd, MEDIA_IOC_ENUM_ENTITIES, &entity);
            if (rc < 0) {
                LOGD("Done enumerating media entities\n");
                rc = 0;
                break;
            }
            // for a v4l2 device node entity, write its name into the global
            // g_cam_ctrl.video_dev_name array; the name is videoX, where X is
            // an index that may not be 0, 1, ...
            if(entity.type == MEDIA_ENT_T_DEVNODE_V4L && entity.group_id == QCAMERA_VNODE_GROUP_ID) {
                strlcpy(g_cam_ctrl.video_dev_name[num_cameras],
                     entity.name, sizeof(entity.name));
                LOGE("dev_info[id=%d,name='%s']\n",
                    (int)num_cameras, g_cam_ctrl.video_dev_name[num_cameras]);
                // found one more camera
                num_cameras++;
                break;
            }
        }
        close(dev_fd);
        dev_fd = -1;
        if (num_cameras >= MM_CAMERA_MAX_NUM_SENSORS) {
            LOGW("Maximum number of camera reached %d", num_cameras);
            break;
        }
    }
    // record the camera count
    g_cam_ctrl.num_cam = num_cameras;
    // this call also walks each camera's details (facing, YUV support,
    // orientation, ...) and stores them in g_cam_ctrl
    get_sensor_info();
    sort_camera_info(g_cam_ctrl.num_cam);
    LOGE("num_cameras=%d\n", (int)g_cam_ctrl.num_cam);
    return(uint8_t)g_cam_ctrl.num_cam;
}

The code above mainly loops over the /dev/mediaX nodes, finds the devices whose model is msm_camera, records them in the global g_cam_ctrl, and returns the count. Other details are covered in the comments in the source above; we will not go further into the kernel driver layer here, but feel free to analyze it yourself.
There is also the crucial shim above, which pulls in the mct library; if that is unfamiliar, see my article analyzing the Qualcomm vendor-layer mct framework.

Taking the module's get_camera_info as an example

[Figure: call flow of get_camera_info]
The flow is shown above: camera_module_t's get_camera_info pointer points to QCamera2Factory::get_camera_info, which calls getCameraInfo on the global gQCamera2Factory, which in turn reaches QCamera3HWI's getCamInfo to fetch the camera attributes. On the first pass we only know the camera id; to obtain the attributes the camera has to be opened via camera_open. Let's first look at getCamInfo:

int QCamera3HardwareInterface::getCamInfo(uint32_t cameraId,
        struct camera_info *info)
{
    ATRACE_CALL();
    int rc = 0;

    pthread_mutex_lock(&gCamLock);
    if (NULL == gCamCapability[cameraId]) {
        // the camera is opened during this initialization
        rc = initCapabilities(cameraId);
        if (rc < 0) {
            pthread_mutex_unlock(&gCamLock);
            return rc;
        }
    }

    if (NULL == gStaticMetadata[cameraId]) {
        rc = initStaticMetadata(cameraId);
        if (rc < 0) {
            pthread_mutex_unlock(&gCamLock);
            return rc;
        }
    }

    switch(gCamCapability[cameraId]->position) {
    case CAM_POSITION_BACK:
    case CAM_POSITION_BACK_AUX:
        info->facing = CAMERA_FACING_BACK;
        break;

    case CAM_POSITION_FRONT:
    case CAM_POSITION_FRONT_AUX:
        info->facing = CAMERA_FACING_FRONT;
        break;

    default:
        LOGE("Unknown position type %d for camera id:%d",
                gCamCapability[cameraId]->position, cameraId);
        rc = -1;
        break;
    }

We will not analyze initCapabilities yet; let's go straight to camera_open, and come back to this function afterwards.

camera_open

Before walking through the code, look at the key structures used by camera_open.
There is one of the following structures per camera:

typedef struct mm_camera_obj {
    // camera handle exposed to the HAL: the low 8 bits are the camera id,
    // the upper bits hold the global g_handler_history_count
    uint32_t my_hdl;
    int ref_count;              // how many times this camera obj is referenced
    int32_t ctrl_fd;            // fd returned by opening the virtual device /dev/videoX
    int32_t ds_fd;              /* fd of the socket to the driver layer; this path is obsolete in newer versions */
    pthread_mutex_t cam_lock;
    pthread_mutex_t cb_lock; /* lock for evt cb */
    mm_channel_t ch[MM_CAMERA_CHANNEL_MAX];
    mm_camera_evt_obj_t evt;                 // callbacks delivered back up to the upper layer
    mm_camera_poll_thread_t evt_poll_thread; /* event poll thread */
    mm_camera_cmd_thread_t evt_thread;       /* event dispatch thread */
    mm_camera_vtbl_t vtbl;

    pthread_mutex_t evt_lock;
    pthread_cond_t evt_cond;
    mm_camera_event_t evt_rcvd;

    pthread_mutex_t msg_lock; /* lock for sending msg through socket */
    uint32_t sessionid; /* session id on the camera "server" side, treating the kernel-side camera as the server */
} mm_camera_obj_t;

/* virtual table for camera operations */
typedef struct {
    uint32_t camera_handle;  // unique handle of the camera
    mm_camera_ops_t *ops;    // the camera's operation table
} mm_camera_vtbl_t;

Back to camera_open: the functions below all revolve around this structure. camera_open lives in mm_camera_interface.c; let's look at it:

int32_t camera_open(uint8_t camera_idx, mm_camera_vtbl_t **camera_vtbl)
{
    int32_t rc = 0;
    mm_camera_obj_t *cam_obj = NULL;

#ifdef QCAMERA_REDEFINE_LOG
    mm_camera_set_dbg_log_properties();
#endif

    LOGD("E camera_idx = %d\n", camera_idx);
    // if the requested id is beyond the number of cameras the driver reported,
    // the argument is invalid
    if (camera_idx >= g_cam_ctrl.num_cam) {
        LOGE("Invalid camera_idx (%d)", camera_idx);
        return -EINVAL;
    }

    pthread_mutex_lock(&g_intf_lock);
    // this camera id has already been opened
    if(NULL != g_cam_ctrl.cam_obj[camera_idx]) {
        // bump the camera object's reference count; g_cam_ctrl.cam_obj is an
        // array of mm_camera_obj_t pointers
        g_cam_ctrl.cam_obj[camera_idx]->ref_count++;
        pthread_mutex_unlock(&g_intf_lock);
        LOGD("opened alreadyn");
        *camera_vtbl = &g_cam_ctrl.cam_obj[camera_idx]->vtbl;
        return rc;
    }

    // this camera has not been opened yet
    cam_obj = (mm_camera_obj_t *)malloc(sizeof(mm_camera_obj_t));
    if(NULL == cam_obj) {
        pthread_mutex_unlock(&g_intf_lock);
        LOGE("no mem");
        return -EINVAL;
    }

    /* initialize camera obj */
    memset(cam_obj, 0, sizeof(mm_camera_obj_t));
    cam_obj->ctrl_fd = -1;
    cam_obj->ds_fd = -1;
    cam_obj->ref_count++;
    // mm_camera_util_generate_handler is simply (g_handler_history_count++ << 8) | camera_idx
    cam_obj->my_hdl = mm_camera_util_generate_handler(camera_idx);
    // my_hdl is the camera's unique handle
    cam_obj->vtbl.camera_handle = cam_obj->my_hdl; /* set handler */
    // the camera's operation table; mm_camera_ops is a global structure
    // holding many of the camera's operation functions
    cam_obj->vtbl.ops = &mm_camera_ops;
    pthread_mutex_init(&cam_obj->cam_lock, NULL);
    /* unlock global interface lock, if not, in dual camera use case,
      * current open will block operation of another opened camera obj*/
    pthread_mutex_lock(&cam_obj->cam_lock);
    // hold this camera's lock and release the global lock, so that opening
    // another camera is not blocked in the meantime
    pthread_mutex_unlock(&g_intf_lock);

    // the real open
    rc = mm_camera_open(cam_obj);
    pthread_mutex_lock(&g_intf_lock);
    if (rc != 0) {
        LOGE("mm_camera_open err = %d", rc);
        pthread_mutex_destroy(&cam_obj->cam_lock);
        g_cam_ctrl.cam_obj[camera_idx] = NULL;
        free(cam_obj);
        cam_obj = NULL;
        pthread_mutex_unlock(&g_intf_lock);
        *camera_vtbl = NULL;
        return rc;
    } else {
        LOGD("Open succeded\n");
        g_cam_ctrl.cam_obj[camera_idx] = cam_obj;
        pthread_mutex_unlock(&g_intf_lock);
        *camera_vtbl = &cam_obj->vtbl;
        return 0;
    }
}

In the step above the vtbl member of mm_camera_obj_t has already been assigned and could simply be returned; but the camera has not really been opened yet and cannot be talked to, so the real open is still needed.
mm_camera_open is in the source file mm_camera.c; this is where the camera's interaction mode and various configurations come in:

int32_t mm_camera_open(mm_camera_obj_t *my_obj)
{
    char dev_name[MM_CAMERA_DEV_NAME_LEN];
    int32_t rc = 0;
    // at most 20 attempts, per the macro
    int8_t n_try=MM_CAMERA_DEV_OPEN_TRIES;
    uint8_t sleep_msec=MM_CAMERA_DEV_OPEN_RETRY_SLEEP;
    int cam_idx = 0;
    const char *dev_name_value = NULL;
    int l_errno = 0;
    pthread_condattr_t cond_attr;

    LOGD("begin\n");

    if (NULL == my_obj) {
        goto on_error;
    }
    // my_hdl encodes the camera index
    dev_name_value = mm_camera_util_get_dev_name(my_obj->my_hdl);
    if (NULL == dev_name_value) {
        goto on_error;
    }
    snprintf(dev_name, sizeof(dev_name), "/dev/%s",
             dev_name_value);
    // parse the number out of dev_name into cam_idx
    sscanf(dev_name, "/dev/video%d", &cam_idx);
    LOGD("dev name = %s, cam_idx = %d", dev_name, cam_idx);

    do{
        n_try--;
        errno = 0;
        // open the camera device /dev/videoX and keep its fd; O_NONBLOCK means
        // a read returns -1 immediately when nothing is available
        my_obj->ctrl_fd = open(dev_name, O_RDWR | O_NONBLOCK);
        l_errno = errno;
        LOGD("ctrl_fd = %d, errno == %d", my_obj->ctrl_fd, l_errno);
        if((my_obj->ctrl_fd >= 0) || (errno != EIO && errno != ETIMEDOUT) || (n_try <= 0 )) {
            break;
        }
        LOGE("Failed with %s error, retrying after %d milli-seconds",
              strerror(errno), sleep_msec);
        usleep(sleep_msec * 1000U);
    }while (n_try > 0);

    if (my_obj->ctrl_fd < 0) {
        ....
    } else {
        // this issues an ioctl with the VIDIOC_G_CTRL command to fetch the
        // session id of the /dev/videoX device that was just opened
        mm_camera_get_session_id(my_obj, &my_obj->sessionid);
        LOGH("Camera Opened id = %d sessionid = %d", cam_idx, my_obj->sessionid);
    }
#ifdef DAEMON_PRESENT
    /* open domain socket*/
    n_try = MM_CAMERA_DEV_OPEN_TRIES;
    do {
        n_try--;
        // create a local UDP socket server named /data/vendor/camera/cam_socketX;
        // this path is obsolete
        my_obj->ds_fd = mm_camera_socket_create(cam_idx, MM_CAMERA_SOCK_TYPE_UDP);
        l_errno = errno;
        ......
        usleep(sleep_msec * 1000U);
    } while (n_try > 0);

   .......
#else /* DAEMON_PRESENT */
    cam_status_t cam_status;
    // the session-callback way of talking to the kernel layer (the current path);
    // mm_camera_module_event_handler is a callback function pointer defined in
    // mm_camera_interface, and it forwards events into the evt_thread
    cam_status = mm_camera_module_open_session(my_obj->sessionid,
            mm_camera_module_event_handler);
    if (cam_status < 0) {
        ......
        goto on_error;
    }
#endif /* DAEMON_PRESENT */

    pthread_condattr_init(&cond_attr);
    pthread_condattr_setclock(&cond_attr, CLOCK_MONOTONIC);

    pthread_mutex_init(&my_obj->msg_lock, NULL);
    pthread_mutex_init(&my_obj->cb_lock, NULL);
    pthread_mutex_init(&my_obj->evt_lock, NULL);
    pthread_cond_init(&my_obj->evt_cond, &cond_attr);
    pthread_condattr_destroy(&cond_attr);
    LOGD("Launch evt Thread in Cam Open");
    snprintf(my_obj->evt_thread.threadName, THREAD_NAME_SIZE, "CAM_Dispatch");
    // create the event thread; its body is mm_camera_dispatch_app_event and
    // its argument is my_obj
    mm_camera_cmd_thread_launch(&my_obj->evt_thread,
                                mm_camera_dispatch_app_event,
                                (void *)my_obj);

    /* launch event poll thread
     * we will add evt fd into event poll thread upon user first register for evt */
    LOGD("Launch evt Poll Thread in Cam Open");
    snprintf(my_obj->evt_poll_thread.threadName, THREAD_NAME_SIZE, "CAM_evntPoll");
    // create the poll thread
    mm_camera_poll_thread_launch(&my_obj->evt_poll_thread,
                                 MM_CAMERA_POLL_TYPE_EVT);
    // subscribe to events from the kernel layer
    mm_camera_evt_sub(my_obj, TRUE);

    /* unlock cam_lock, we need release global intf_lock in camera_open(),
     * in order not block operation of other Camera in dual camera use case.*/
    pthread_mutex_unlock(&my_obj->cam_lock);
    LOGD("end (rc = %d)\n", rc);
    return rc;
    .... remaining code omitted ....
}

A quick summary:
mm_camera_open mainly opens the driver's virtual device /dev/videoX and keeps the file descriptor, obtains the session id of this open via ioctl, and registers mm_camera_module_event_handler as the kernel-to-HAL callback; finally it creates two threads and hangs their thread nodes off the mm_camera_obj structure. Let's look at these two threads, the evt thread and the poll thread, first:


The evt thread

As usual, first the key structure, mm_camera_cmd_thread_t:

typedef struct {
    uint8_t is_active;           /* thread state */
    cam_queue_t cmd_queue;       /* command queue */
    pthread_t cmd_pid;           /* thread id */
    cam_semaphore_t cmd_sem;     /* command semaphore */
    cam_semaphore_t sync_sem;    /* sync semaphore */
    mm_camera_cmd_cb_t cb;       /* callback that executes each command */
    void* user_data;             /* argument passed through to cb */
    char threadName[THREAD_NAME_SIZE];
} mm_camera_cmd_thread_t;

The structure contains semaphores, which coordinate access across threads. A semaphore starts at the number of available resources: a thread that needs a resource performs a wait() on it (decrementing the count), and a thread that releases a resource performs a signal() (incrementing the count). When the count reaches 0, every resource is in use and the next thread that needs one blocks until the count becomes positive again. If semaphores are new to you, see an introductory article on them.
Here user_data is in fact the mm_camera_obj.
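
A minimal C sketch of that wait/post pattern using plain POSIX semaphores; the HAL's cam_sem_* helpers implement the same idea:

#include <semaphore.h>

/* Illustrative sketch only: producer/consumer hand-off with a semaphore. */
static sem_t cmd_sem;          /* sem_init(&cmd_sem, 0, 0) during setup */

static void consumer_loop(void)
{
    for (;;) {
        sem_wait(&cmd_sem);    /* block until at least one command is queued */
        /* ... dequeue and handle one command ... */
    }
}

static void producer_enqueue(void)
{
    /* ... enqueue a command node ... */
    sem_post(&cmd_sem);        /* wake the consumer */
}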

Back to the main topic: how is this evt thread created?

// user_data is the mm_camera_obj; cb is the mm_camera_dispatch_app_event function pointer
int32_t mm_camera_cmd_thread_launch(mm_camera_cmd_thread_t * cmd_thread,
                                    mm_camera_cmd_cb_t cb,
                                    void* user_data)
{
    int32_t rc = 0;
    // initialize the semaphores and the command queue
    cam_sem_init(&cmd_thread->cmd_sem, 0);
    cam_sem_init(&cmd_thread->sync_sem, 0);
    cam_queue_init(&cmd_thread->cmd_queue);
    cmd_thread->cb = cb;
    cmd_thread->user_data = user_data;
    cmd_thread->is_active = TRUE;

    // create the thread; its body is the mm_camera_cmd_thread function
    pthread_create(&cmd_thread->cmd_pid,
                   NULL,
                   mm_camera_cmd_thread,
                   (void *)cmd_thread);
    return rc;
}

Enqueueing
mm_camera_cmd_thread is the thread's main body; it drains the cmd_queue of mm_camera_cmd_thread_t, taking tasks out and executing them. So where are tasks added? Remember the callback configured when the session was opened above: that callback does the writing. Let's look at the session callback first.

// callback from the kernel driver; in testing it fires when taking a picture, recording video, and so on
int mm_camera_module_event_handler(uint32_t session_id, cam_event_t *event)
{
    mm_camera_event_t evt;

    LOGD("session_id:%d, cmd:0x%x", session_id, event->server_event_type);
    memset(&evt, 0, sizeof(mm_camera_event_t));

    evt = *event;
    // mm_camera_util_get_camera_by_session_id walks all camera objects in
    // g_cam_ctrl and returns the one whose session id matches
    mm_camera_obj_t *my_obj =
         mm_camera_util_get_camera_by_session_id(session_id);
    if (!my_obj) {
        LOGE("my_obj:%p", my_obj);
        return FALSE;
    }
    // the type of event this driver callback is reporting
    switch( evt.server_event_type) {
       case CAM_EVENT_TYPE_DAEMON_PULL_REQ:
       case CAM_EVENT_TYPE_CAC_DONE:
       case CAM_EVENT_TYPE_DAEMON_DIED:
       case CAM_EVENT_TYPE_INT_TAKE_JPEG:
       case CAM_EVENT_TYPE_INT_TAKE_RAW:
           mm_camera_enqueue_evt(my_obj, &evt);
           break;
       default:
           LOGE("cmd:%x from shim layer is not handled", evt.server_event_type);
           break;
   }
   return TRUE;
}
// enqueue into the evt thread's queue
int32_t mm_camera_enqueue_evt(mm_camera_obj_t *my_obj,
                              mm_camera_event_t *event)
{
    int32_t rc = 0;
    // create a queue node
    mm_camera_cmdcb_t *node = NULL;
    node = (mm_camera_cmdcb_t *)malloc(sizeof(mm_camera_cmdcb_t));
    if (NULL != node) {
        memset(node, 0, sizeof(mm_camera_cmdcb_t));
        node->cmd_type = MM_CAMERA_CMD_TYPE_EVT_CB;
        node->u.evt = *event;

        // enqueue
        cam_queue_enq(&(my_obj->evt_thread.cmd_queue), node);
        // wake the thread
        cam_sem_post(&(my_obj->evt_thread.cmd_sem));
    } else {
        LOGE("No memory for mm_camera_node_t");
        rc = -1;
    }
    return rc;
}

The openSession callback adds data to the evt thread, and there is one more place that does so: mm_camera_event_notify. Where is that function called from? We will not reveal it yet; keep reading.

Dequeueing
The dequeue side of the evt thread is its main body, mm_camera_cmd_thread:

static void *mm_camera_cmd_thread(void *data)
{
    int running = 1;
    int ret;
    mm_camera_cmd_thread_t *cmd_thread =
                (mm_camera_cmd_thread_t *)data;
    mm_camera_cmdcb_t* node = NULL;

    mm_camera_cmd_thread_name(cmd_thread->threadName);
    do {
        do {
            // wait on the semaphore
            ret = cam_sem_wait(&cmd_thread->cmd_sem);
            if (ret != 0 && errno != EINVAL) {
                LOGE("cam_sem_wait error (%s)",
                            strerror(errno));
                return NULL;
            }
        } while (ret != 0);

        // take a node out of the queue
        node = (mm_camera_cmdcb_t*)cam_queue_deq(&cmd_thread->cmd_queue);
        while (node != NULL) {
            switch (node->cmd_type) {
            case MM_CAMERA_CMD_TYPE_EVT_CB:
            case MM_CAMERA_CMD_TYPE_DATA_CB:
            case MM_CAMERA_CMD_TYPE_REQ_DATA_CB:
            case MM_CAMERA_CMD_TYPE_SUPER_BUF_DATA_CB:
            case MM_CAMERA_CMD_TYPE_CONFIG_NOTIFY:
            case MM_CAMERA_CMD_TYPE_START_ZSL:
            case MM_CAMERA_CMD_TYPE_STOP_ZSL:
            case MM_CAMERA_CMD_TYPE_GENERAL:
            case MM_CAMERA_CMD_TYPE_FLUSH_QUEUE:
                if (NULL != cmd_thread->cb) {
                    // invoke the cb callback with the node and user_data;
                    // cb was passed in at launch time as mm_camera_dispatch_app_event
                    cmd_thread->cb(node, cmd_thread->user_data);
                }
                break;
            case MM_CAMERA_CMD_TYPE_EXIT:
            default:
                running = 0;
                break;
            }
            // free the node
            free(node);
            node = (mm_camera_cmdcb_t*)cam_queue_deq(&cmd_thread->cmd_queue);
        } /* (node != NULL) */
    } while (running);
    return NULL;
}

Finally, how does mm_camera_dispatch_app_event handle what is in the node?

static void mm_camera_dispatch_app_event(mm_camera_cmdcb_t *cmd_cb,
                                         void* user_data)
{
    int i;
    mm_camera_event_t *event = &cmd_cb->u.evt;
    mm_camera_obj_t * my_obj = (mm_camera_obj_t *)user_data;
    if (NULL != my_obj) {
        mm_camera_cmd_thread_name(my_obj->evt_thread.threadName);
        pthread_mutex_lock(&my_obj->cb_lock);
        for(i = 0; i < MM_CAMERA_EVT_ENTRY_MAX; i++) {
            // forward the data once more through my_obj's evt callbacks
            // (my_obj is an mm_camera_obj_t)
            if(my_obj->evt.evt[i].evt_cb) {
                my_obj->evt.evt[i].evt_cb(
                    my_obj->my_hdl,
                    event,
                    my_obj->evt.evt[i].user_data);
            }
        }
        pthread_mutex_unlock(&my_obj->cb_lock);
    }
}

Summary:
The evt thread's data comes from the kernel, with semaphores coordinating the multi-threaded access. The session callback mm_camera_module_event_handler enqueues kernel events into the evt thread's cmd queue; the evt thread dequeues them and hands them to the evt callbacks stored in mm_camera_obj. Those evt callbacks belong to the upper layer; we have not yet seen where they are registered into evt, and will get to that later.


The poll thread

As usual, the key structure mm_camera_poll_thread_t first:

typedef struct {
    mm_camera_poll_thread_type_t poll_type;
    // array holding the entries for this poll_type:
    //  - for MM_CAMERA_POLL_TYPE_EVT only index 0 is valid
    //  - for MM_CAMERA_POLL_TYPE_DATA validity depends on the stream fd
    mm_camera_poll_entry_t poll_entries[MAX_STREAM_NUM_IN_BUNDLE];
    // pipe: p[0] is the read end, p[1] the write end
    int32_t pfds[2];
    pthread_t pid;

    int32_t state;
    int timeoutms;
    uint32_t cmd;
    // poll_fds has one more slot than poll_entries, because index 0 is
    // reserved for the pipe's read end
    struct pollfd poll_fds[MAX_STREAM_NUM_IN_BUNDLE + 1];
    // number of fds currently in use
    uint8_t num_fds;
    pthread_mutex_t mutex;
    pthread_cond_t cond_v;
    int32_t status;
    char threadName[THREAD_NAME_SIZE];
} mm_camera_poll_thread_t;

typedef struct {
    // a file descriptor; do not confuse it with the pipe fds above
    int32_t fd;
    // yet another callback up to the caller
    mm_camera_poll_notify_t notify_cb;
    uint32_t handler;
    void* user_data;
} mm_camera_poll_entry_t;

From the structure above you can roughly guess that this thread uses a pipe for inter-thread communication. Creating a pipe yields two fds: one writes into the pipe, the other reads from it, and the pipe is unidirectional; for details on using it, see the referenced article.
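
A minimal sketch of the pipe-plus-poll pattern this thread is built on (no error handling, for illustration only):

#include <poll.h>
#include <unistd.h>

/* Illustrative sketch only: waking a poll loop through a pipe. */
int pfds[2];
pipe(pfds);                        /* pfds[0] = read end, pfds[1] = write end */

struct pollfd fds[1];
fds[0].fd = pfds[0];
fds[0].events = POLLIN;

/* the poll loop blocks here until another thread does write(pfds[1], ...) */
if (poll(fds, 1, -1) > 0 && (fds[0].revents & POLLIN)) {
    char cmd;
    read(pfds[0], &cmd, sizeof(cmd));  /* consume the wake-up byte */
}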
So how is this poll thread launched?

// the poll_type passed in from camera open is MM_CAMERA_POLL_TYPE_EVT
int32_t mm_camera_poll_thread_launch(mm_camera_poll_thread_t * poll_cb,
                                     mm_camera_poll_thread_type_t poll_type)
{
    int32_t rc = 0;
    size_t i = 0, cnt = 0;
    poll_cb->poll_type = poll_type;
    pthread_condattr_t cond_attr;

    // initialize every stream's fd to -1
    cnt = sizeof(poll_cb->poll_fds) / sizeof(poll_cb->poll_fds[0]);
    for (i = 0; i < cnt; i++) {
        poll_cb->poll_fds[i].fd = -1;
    }
    cnt = sizeof(poll_cb->poll_entries) / sizeof(poll_cb->poll_entries[0]);
    for (i = 0; i < cnt; i++) {
        poll_cb->poll_entries[i].fd = -1;
    }
    poll_cb->pfds[0] = -1;
    poll_cb->pfds[1] = -1;
    // create the pipe
    rc = pipe(poll_cb->pfds);
    if(rc < 0) {
        LOGE("pipe open rc=%d\n", rc);
        return -1;
    }
    poll_cb->timeoutms = -1;  /* Infinite seconds */
    ....
    poll_cb->status = 0;
    // the poll thread's body is mm_camera_poll_thread
    pthread_create(&poll_cb->pid, NULL, mm_camera_poll_thread, (void *)poll_cb);
    // wait until mm_camera_poll_thread has started up successfully
    if(!poll_cb->status) {
        pthread_cond_wait(&poll_cb->cond_v, &poll_cb->mutex);
    }
    pthread_mutex_unlock(&poll_cb->mutex);
    LOGD("End");
    return rc;
}

Before looking at the thread body, first look at the function that adds data to this thread:

// the arguments passed in can be seen from this call site:
rc = mm_camera_poll_thread_add_poll_fd(&my_obj->evt_poll_thread,
                                               my_obj->my_hdl,
                                               my_obj->ctrl_fd,
                                               mm_camera_event_notify,
                                               (void*)my_obj,
                                               mm_camera_sync_call);
// keeping the call site above in mind makes the function easier to follow
int32_t mm_camera_poll_thread_add_poll_fd(mm_camera_poll_thread_t * poll_cb,
                                          uint32_t handler,
                                          int32_t fd,
                                          mm_camera_poll_notify_t notify_cb,
                                          void* userdata,
                                          mm_camera_call_type_t call_type)
{
    int32_t rc = -1;
    uint8_t idx = 0;

    if (MM_CAMERA_POLL_TYPE_DATA == poll_cb->poll_type) {
        // extract the camera index from the handler
        idx = mm_camera_util_get_index_by_handler(handler);
    } else {
        // for the EVT type, idx can only be 0
        idx = 0;
    }

    if (MAX_STREAM_NUM_IN_BUNDLE > idx) {
        // store the entry at the slot for this camera index
        poll_cb->poll_entries[idx].fd = fd;
        poll_cb->poll_entries[idx].handler = handler;
        // the notify callback here is mm_camera_event_notify
        poll_cb->poll_entries[idx].notify_cb = notify_cb;
        poll_cb->poll_entries[idx].user_data = userdata;
        /* send poll entries updated signal to poll thread */
        // synchronous call: we must wait for the poll thread before returning
        if (call_type == mm_camera_sync_call ) {
            rc = mm_camera_poll_sig(poll_cb, MM_CAMERA_PIPE_CMD_POLL_ENTRIES_UPDATED);
        } else {
            // asynchronous call
            rc = mm_camera_poll_sig_async(poll_cb, MM_CAMERA_PIPE_CMD_POLL_ENTRIES_UPDATED_ASYNC );
        }
    } else {
        LOGE("invalid handler %d (%d)", handler, idx);
    }
    return rc;
}
// tell the other end of the pipe that there is something to read
static int32_t mm_camera_poll_sig(mm_camera_poll_thread_t *poll_cb,
                                  uint32_t cmd)
{
    /* send through pipe */
    /* get the mutex */
    mm_camera_sig_evt_t cmd_evt;

    LOGD("E cmd = %d",cmd);
    memset(&cmd_evt, 0, sizeof(cmd_evt));
    cmd_evt.cmd = cmd;
    pthread_mutex_lock(&poll_cb->mutex);
    /* reset the statue to false */
    poll_cb->status = FALSE;
    /* send cmd to worker */

    // write the command into the pipe's write end
    ssize_t len = write(poll_cb->pfds[1], &cmd_evt, sizeof(cmd_evt));
    if(len < 1) {
        // the write failed, return
        return 0;
    }
    LOGD("begin IN mutex write done, len = %lld",
            (long long int)len);
    /* wait till worker task gives positive signal */
    // synchronous mode: only return once the data has been consumed
    if (FALSE == poll_cb->status) {
        LOGD("wait");
        // wait on the cond_v condition variable, i.e. until the read end
        // has read the data
        pthread_cond_wait(&poll_cb->cond_v, &poll_cb->mutex);
    }
    /* done */
    pthread_mutex_unlock(&poll_cb->mutex);
    return 0;
}

The function above has already written the data into poll_cb's poll_entries; now the poll thread's body:

static void *mm_camera_poll_thread(void *data)
{
    mm_camera_poll_thread_t *poll_cb = (mm_camera_poll_thread_t *)data;
    mm_camera_cmd_thread_name(poll_cb->threadName);
    // put the pipe's read end into poll_fds and bump num_fds
    poll_cb->poll_fds[poll_cb->num_fds++].fd = poll_cb->pfds[0];
    // signal the cond_v condition variable so that the launch function above
    // can continue
    mm_camera_poll_sig_done(poll_cb);
    // set mm_camera_poll_thread_t's state to MM_CAMERA_POLL_TASK_STATE_POLL
    mm_camera_poll_set_state(poll_cb, MM_CAMERA_POLL_TASK_STATE_POLL);
    return mm_camera_poll_fn(poll_cb);
}

static void *mm_camera_poll_fn(mm_camera_poll_thread_t *poll_cb)
{
    int rc = 0, i;
    do {
         for(i = 0; i < poll_cb->num_fds; i++) {
            // events selects which kinds of activity we are interested in
            poll_cb->poll_fds[i].events = POLLIN|POLLRDNORM|POLLPRI;
         }

         // poll the fds that have been filled in; num_fds is how many there are
         rc = poll(poll_cb->poll_fds, poll_cb->num_fds, poll_cb->timeoutms);
         if(rc > 0) {
            // poll_fds[0] is the pipe's read end; its revents are POLLIN and
            // POLLRDNORM when a pipe command arrives
            if ((poll_cb->poll_fds[0].revents & POLLIN) &&
                (poll_cb->poll_fds[0].revents & POLLRDNORM)) {
                // the add-poll-fd path wrote its data into poll_entries; the
                // function below copies each entry's fd (and events) into the
                // corresponding poll_fds slot
                mm_camera_poll_proc_pipe(poll_cb);
            } else {
                for(i=1; i<poll_cb->num_fds; i++) {
                    // event-type entries
                    if ((poll_cb->poll_type == MM_CAMERA_POLL_TYPE_EVT) &&
                        (poll_cb->poll_fds[i].revents & POLLPRI)) {
                        // poll_entries uses i-1 because it is one slot shorter
                        // than poll_fds, so i-1 lines up with the right entry
                        if (NULL != poll_cb->poll_entries[i-1].notify_cb) {
                            poll_cb->poll_entries[i-1].notify_cb(poll_cb->poll_entries[i-1].user_data);
                        }
                    }
                    // data-type entries
                    if ((MM_CAMERA_POLL_TYPE_DATA == poll_cb->poll_type) &&
                        (poll_cb->poll_fds[i].revents & POLLIN) &&
                        (poll_cb->poll_fds[i].revents & POLLRDNORM)) {
                        // same as above: invoke the notify_cb registered when the
                        // fd was added, which here is mm_camera_event_notify
                        if (NULL != poll_cb->poll_entries[i-1].notify_cb) {
                            poll_cb->poll_entries[i-1].notify_cb(poll_cb->poll_entries[i-1].user_data);
                        }
                    }
                }
            }
        } else {
            /* in error case sleep 10 us and then continue. hard coded here */
            usleep(10);
            continue;
        }
    } while ((poll_cb != NULL) && (poll_cb->state == MM_CAMERA_POLL_TASK_STATE_POLL));
    return NULL;
}

This is again a data callback, and the callback is mm_camera_event_notify. Look familiar? This callback connects straight into the evt thread's entry point.
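
mm_camera_event_notify itself is not listed in this article; based on the flow just described (the poll thread sees POLLPRI on ctrl_fd, the callback pulls the event and feeds the evt thread), a simplified sketch of what it is expected to do looks like this. The VIDIOC_DQEVENT dequeue and the field mapping are assumptions, not the verbatim source:

/* Illustrative sketch only: the bridge from the poll thread to the evt thread. */
static void mm_camera_event_notify(void *user_data)
{
    struct v4l2_event ev;
    mm_camera_event_t evt;
    mm_camera_obj_t *my_obj = (mm_camera_obj_t *)user_data;

    memset(&ev, 0, sizeof(ev));
    memset(&evt, 0, sizeof(evt));

    /* dequeue the V4L2 event we subscribed to with VIDIOC_SUBSCRIBE_EVENT */
    if (ioctl(my_obj->ctrl_fd, VIDIOC_DQEVENT, &ev) >= 0) {
        /* translate it into an mm_camera_event_t and hand it to the
         * evt thread's command queue */
        mm_camera_enqueue_evt(my_obj, &evt);
    }
}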


Now let's connect these two threads with an example.

Subscribing to events with VIDIOC_SUBSCRIBE_EVENT

Besides starting the two threads, camera_open also subscribes to events from the kernel layer via VIDIOC_SUBSCRIBE_EVENT; the last few lines of mm_camera_open call the following function:

// reg_flag is TRUE when called from mm_camera_open
int32_t mm_camera_evt_sub(mm_camera_obj_t * my_obj,
                          uint8_t reg_flag)
{
    int32_t rc = 0;
    struct v4l2_event_subscription sub;

    memset(&sub, 0, sizeof(sub));
    sub.type = MSM_CAMERA_V4L2_EVENT_TYPE;
    sub.id = MSM_CAMERA_MSM_NOTIFY;
    if(FALSE == reg_flag) {
        /* unsubscribe */
        rc = ioctl(my_obj->ctrl_fd, VIDIOC_UNSUBSCRIBE_EVENT, &sub);
        if (rc < 0) {
            LOGE("unsubscribe event rc = %d, errno %d",
                     rc, errno);
            return rc;
        }
        // after unsubscribing, remove the fd from the poll thread
        rc = mm_camera_poll_thread_del_poll_fd(&my_obj->evt_poll_thread,
                                               my_obj->my_hdl,
                                               mm_camera_sync_call);
    } else {
        // subscribe to MSM_CAMERA_V4L2_EVENT_TYPE events
        rc = ioctl(my_obj->ctrl_fd, VIDIOC_SUBSCRIBE_EVENT, &sub);
        if (rc < 0) {
            LOGE("subscribe event rc = %d, errno %d",
             rc, errno);
            return rc;
        }
        // on success, add a poll task to the poll thread
        rc = mm_camera_poll_thread_add_poll_fd(&my_obj->evt_poll_thread,
                                               my_obj->my_hdl,
                                               my_obj->ctrl_fd,
                                               mm_camera_event_notify,
                                               (void*)my_obj,
                                               mm_camera_sync_call);
    }
    return rc;
}

By now the event interaction model is roughly clear: after opening the kernel camera we subscribe to VIDIOC_SUBSCRIBE_EVENT-type events with the kernel and, at the same time, add an fd task to the poll thread whose callback is mm_camera_event_notify. Adding the fd task writes the fd and its events of interest into poll_cb's poll_fds according to the camera index, and the callback into the matching poll_entries slot. The poll thread then keeps polling the valid fds in poll_fds; when an event occurs, the callback saved in poll_entries, mm_camera_event_notify, is invoked, which writes the event into the evt thread, which in turn passes it back up to the upper-layer callback.

At this point the camera has been opened successfully. To summarize mm_camera_open:

  1. Open the /dev/videoX device and the session, and store them in mm_camera_obj
  2. Create the evt thread and the poll thread, setting up the kernel-to-HAL data path for this camera

One question remains: where in the upper layer does the final evt callback land? Here is a quick answer.
When camera_device_open, the method configured in the camera HAL module, is called, the call chain eventually reaches cameraOpen in QCamera3HWI.cpp, which contains the following line:

// mCameraHandle is an mm_camera_vtbl_t, the camera object returned when the camera was
// opened earlier; ops is the set of operations this camera can invoke
rc = mCameraHandle->ops->register_event_notify(mCameraHandle->camera_handle,
        camEvtHandle, (void *)this);

mm_camera_interface.c
static mm_camera_ops_t mm_camera_ops = {
    .query_capability = mm_camera_intf_query_capability,
    .register_event_notify = mm_camera_intf_register_event_notify,  
};

As the code above shows, camEvtHandle is the callback being registered. Each upper-layer openCamera corresponds to one QCamera3HardwareInterface object in the HAL; that object contains an hw_device_t device structure which is returned to the upper layer, tying the two together.
register_event_notify maps to mm_camera_intf_register_event_notify, which in turn calls down to mm_camera_register_event_notify and its _internal helper shown below.
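
The interface-level wrapper itself is not listed here; a hedged sketch of how it is expected to route the call, following the same handle-to-object lookup pattern as mm_camera_intf_query_capability further down:

/* Illustrative sketch only: routing register_event_notify down to the
 * per-camera object; the real implementation may differ in detail. */
static int32_t mm_camera_intf_register_event_notify(uint32_t camera_handle,
                                                    mm_camera_event_notify_t evt_cb,
                                                    void *user_data)
{
    int32_t rc = -1;
    mm_camera_obj_t *my_obj = NULL;

    pthread_mutex_lock(&g_intf_lock);
    /* resolve the opaque handle back to the mm_camera_obj_t */
    my_obj = mm_camera_util_get_camera_by_handler(camera_handle);
    if (my_obj) {
        pthread_mutex_lock(&my_obj->cam_lock);
        pthread_mutex_unlock(&g_intf_lock);
        rc = mm_camera_register_event_notify(my_obj, evt_cb, user_data);
    } else {
        pthread_mutex_unlock(&g_intf_lock);
    }
    return rc;
}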

int32_t mm_camera_register_event_notify_internal(mm_camera_obj_t *my_obj,
                                                 mm_camera_event_notify_t evt_cb,
                                                 void * user_data)
{
    int i;
    int rc = -1;
    mm_camera_evt_obj_t *evt_array = NULL;

    pthread_mutex_lock(&my_obj->cb_lock);
    evt_array = &my_obj->evt;
    if(evt_cb) {
        // register the callback into the first free slot
        for(i = 0; i < MM_CAMERA_EVT_ENTRY_MAX; i++) {
            if(evt_array->evt[i].user_data == NULL) {
                evt_array->evt[i].evt_cb = evt_cb;
                evt_array->evt[i].user_data = user_data;
                evt_array->reg_count++;
                rc = 0;
                break;
            }
        }
    } else {
        // if evt_cb is NULL, unregister instead
        for(i = 0; i < MM_CAMERA_EVT_ENTRY_MAX; i++) {
            if(evt_array->evt[i].user_data == user_data) {
                evt_array->evt[i].evt_cb = NULL;
                evt_array->evt[i].user_data = NULL;
                evt_array->reg_count--;
                rc = 0;
                break;
            }
        }
    }
    pthread_mutex_unlock(&my_obj->cb_lock);
    return rc;
}

Finally, a diagram summarizing this interaction:
[Figure: kernel-to-HAL event flow through the poll thread and evt thread]


Continuing the getCamInfo analysis

Let's go straight into initCapabilities:

int QCamera3HardwareInterface::initCapabilities(uint32_t cameraId)
{
    int rc = 0;
    mm_camera_vtbl_t *cameraHandle = NULL;
    QCamera3HeapMemory *capabilityHeap = NULL;
    rc = camera_open((uint8_t)cameraId, &cameraHandle);
    if (rc) {
        LOGE("camera_open failed. rc = %d", rc);
        goto open_failed;
    }
    // query the camera's capabilities through the ops table
    rc = cameraHandle->ops->query_capability(cameraHandle->camera_handle);
    if(rc < 0) {
        LOGE("failed to query capability");
        goto query_failed;
    }

Where does this ops come from? It was handed back by camera_open, which we analyzed above; in mm_camera_interface.c the ops member is assigned as follows:

static mm_camera_ops_t mm_camera_ops = {
    .query_capability = mm_camera_intf_query_capability,                                                                                                                                                                                                                                 
    .register_event_notify = mm_camera_intf_register_event_notify,
    .....
};
static int32_t mm_camera_intf_query_capability(uint32_t camera_handle)
{
    int32_t rc = -1;
    mm_camera_obj_t * my_obj = NULL;

    LOGD("E: camera_handler = %d ", camera_handle);

    pthread_mutex_lock(&g_intf_lock);
    my_obj = mm_camera_util_get_camera_by_handler(camera_handle);

    if(my_obj) {
        pthread_mutex_lock(&my_obj->cam_lock);
        pthread_mutex_unlock(&g_intf_lock);
        // the lower-level call that queries the camera's capabilities
        rc = mm_camera_query_capability(my_obj);
    } else {
        pthread_mutex_unlock(&g_intf_lock);
    }
    LOGD("X rc = %d", rc);
    return rc;
}

The mm_camera_query_capability function is in mm_camera.c:

int32_t mm_camera_query_capability(mm_camera_obj_t *my_obj)
{
    int32_t rc = 0;

    cam_shim_packet_t *shim_cmd;
    cam_shim_cmd_data shim_cmd_data;
    memset(&shim_cmd_data, 0, sizeof(shim_cmd_data));
    // fill in the query request
    shim_cmd_data.command = MSM_CAMERA_PRIV_QUERY_CAP;
    shim_cmd_data.stream_id = 0;
    shim_cmd_data.value = NULL;
    // build the request packet
    shim_cmd = mm_camera_create_shim_cmd_packet(CAM_SHIM_GET_PARM,
            my_obj->sessionid,&shim_cmd_data);
    // send the request
    rc = mm_camera_module_send_cmd(shim_cmd);
    mm_camera_destroy_shim_cmd_packet(shim_cmd);
    pthread_mutex_unlock(&my_obj->cam_lock);
    return rc;
}

This is the entry point where the HAL talks to the vendor layer: almost every HAL request goes through this shim, which sends the event to the vendor layer; the vendor layer then talks to the kernel layer.

int32_t mm_camera_module_send_cmd(cam_shim_packet_t *event)
{
    int32_t rc = -1;
    if(g_cam_ctrl.cam_shim_ops.mm_camera_shim_send_cmd) {
        // invoke the shim's configured mm_camera_shim_send_cmd method
        rc = g_cam_ctrl.cam_shim_ops.mm_camera_shim_send_cmd(event);
    }
    return rc;
}

The question is: when was this mm_camera_shim_send_cmd configured, and to which function?

The answer is in the shim initialization: in get_num_of_cameras in mm_camera_interface.c above we called mm_camera_load_shim_lib; here it is:

int32_t mm_camera_load_shim_lib()
{
    const char* error = NULL;
    void *qdaemon_lib = NULL;

    LOGD("E");
    qdaemon_lib = dlopen(SHIMLAYER_LIB, RTLD_NOW);
    .....
    // similar to reflection: fetch the vendor-layer function pointer by name
    *(void **)&mm_camera_shim_module_init =
            dlsym(qdaemon_lib, "mct_shimlayer_process_module_init");
    if (!mm_camera_shim_module_init) {
        error = dlerror();
        LOGE("dlsym failed with error code %s", error ? error: "");
        dlclose(qdaemon_lib);
        return -1;
    }
    // call it; the result is stored in cam_shim_ops
    return mm_camera_shim_module_init(&g_cam_ctrl.cam_shim_ops);
}

Passing cam_shim_ops in is what gets mm_camera_shim_send_cmd configured for us.
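
For reference, the rough shape of that ops table, reconstructed from how it is used in this article (the exact typedef and parameter types in the headers may differ; treat this as an assumption):

/* Illustrative sketch only: the shim ops table the HAL hands to the vendor
 * library; member names follow the assignments shown below, the parameter
 * types are assumptions based on the calls seen earlier. */
typedef struct {
    /* open a session and register the event callback (signature assumed) */
    int32_t (*mm_camera_shim_open_session)(int session, void *evt_cb);
    int32_t (*mm_camera_shim_close_session)(int session);
    /* send one command packet down to the vendor layer */
    int32_t (*mm_camera_shim_send_cmd)(cam_shim_packet_t *event);
} mm_camera_shim_ops_t;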

Into the vendor layer

This is where we reach the initialization flow described in my article analyzing the Qualcomm vendor-layer mct framework, initializing each module:

int mct_shimlayer_process_module_init(mm_camera_shim_ops_t
  *shim_ops_tbl)
{
  mct_module_t *temp = NULL;
  int32_t enabled_savemem = 0;
  char savemem[128];
  char config_node_name[MAX_DEV_NAME_SIZE];
  char dev_name[MAX_DEV_NAME_SIZE];
  int rc = 0;
  struct msm_v4l2_event_data event_data;

  if (!shim_ops_tbl) {
    CLOGE(CAM_SHIM_LAYER, "Ops table NULL");
    return FALSE;
  }

  #if defined(LOG_DEBUG)
    cam_debug_open();
  #endif

  pthread_mutex_init(&session_mutex, NULL);
  // config_node_name comes back as video0, video1, ...
  if (get_config_node_name(config_node_name) == FALSE) {
    CLOGE(CAM_SHIM_LAYER, "Failed to get config node name");
  }

  snprintf(dev_name, sizeof(dev_name), "/dev/%s", config_node_name);
  // open this virtual device in non-blocking mode
  config_fd = open(dev_name, O_RDWR | O_NONBLOCK);
  if (config_fd < 0) {
    CLOGE(CAM_SHIM_LAYER, "Failed to open config node");
  }

  property_get("cameradaemon.SaveMemAtBoot", savemem, "0");
  enabled_savemem = atoi(savemem);
  // initialize every sensor
  if (mct_shimlayer_module_sensor_init() == FALSE) {
    CLOGE(CAM_SHIM_LAYER, "sensor module init failed");
    return FALSE;
  }
  if (enabled_savemem != 1) {
    // initialize all the other modules
    if (mct_shimlayer_module_init() == FALSE) {
      CLOGE(CAM_SHIM_LAYER, "Module init failed");
      return FALSE;
    }
  }
  /*Sending IOCTL to inform kernel that daemon is not present */
  // disable the daemon mode
  rc = ioctl(config_fd, MSM_CAM_V4L2_IOCTL_DAEMON_DISABLED, &event_data);
  if (rc < 0) {
    CLOGE(CAM_SHIM_LAYER, "Failed to send Daemon disabled IOCTL to kernel");
  }
  // fill in the function pointers that are handed back to the HAL
  shim_ops_tbl->mm_camera_shim_open_session = mct_shimlayer_start_session;
  shim_ops_tbl->mm_camera_shim_close_session = mct_shimlayer_stop_session;
  shim_ops_tbl->mm_camera_shim_send_cmd = mct_shimlayer_process_event;
  return TRUE;
}

The key is the last few lines: the function pointer we were looking for points to mct_shimlayer_process_event:

int mct_shimlayer_process_event(cam_shim_packet_t *packet)
{

  cam_shim_cmd_data *parm_event;
  cam_reg_buf_t *reg_buf;
  cam_shim_stream_cmd_packet_t *bundle_event;
  boolean rc = FALSE;
  uint32_t session_id;
  uint32_t cmd_type;
  .....
  switch (packet->cmd_type) {
    case CAM_SHIM_SET_PARM:
    case CAM_SHIM_GET_PARM: {
      session_id = packet->session_id;
      parm_event = &packet->cmd_data;
      // pass the parameter request on
      rc = mct_shimlayer_handle_parm(packet->cmd_type, session_id, parm_event);
      ....
    }
      break;

    case CAM_SHIM_REG_BUF: {
      session_id = packet->session_id;
      reg_buf = &packet->reg_buf;
      // register buffers
      rc = mct_shimlayer_reg_buffer(session_id, reg_buf);
      .....
    }
      break;

    case CAM_SHIM_BUNDLE_CMD: {
      session_id = packet->session_id;
      bundle_event = &packet->bundle_cmd;
      // forward the bundled event
      rc = mct_shimlayer_handle_bundle_event(session_id, bundle_event);
      ....
    }
    ....
  }
  return CAM_STATUS_SUCCESS;
}

static boolean mct_shimlayer_handle_parm(cam_shim_cmd_type cmd_type,
  uint32_t session_id, cam_shim_cmd_data *parm_event)
{
  mct_serv_msg_t serv_msg;
  struct v4l2_event msg;
  struct msm_v4l2_event_data *data =
    (struct msm_v4l2_event_data*)&msg.u.data[0];

  data->command = parm_event->command;
  data->stream_id = parm_event->stream_id;

  if (parm_event->value) {
    data->arg_value = *(unsigned int *)parm_event->value;
  }
  data->session_id = session_id;

  if (cmd_type == CAM_SHIM_GET_PARM) {
    cmd_type = MSM_CAMERA_GET_PARM;
  } else {
    cmd_type = MSM_CAMERA_SET_PARM;
  }

  serv_msg.msg_type  = SERV_MSG_HAL;
  serv_msg.u.hal_msg = msg;
  serv_msg.u.hal_msg.id = cmd_type;
  // this posts the event into the pipeline's server_cmd queue, where the
  // server thread picks it up
  if (mct_controller_proc_serv_msg(&serv_msg) == FALSE) {
    CLOGE(CAM_SHIM_LAYER, "HAL event processing failed");
    return FALSE;
  }

  return TRUE;
}

To put it simply: the HAL hands the request parameters to the vendor layer through the shim; the vendor layer wraps the request into the pipeline's server_cmd queue, where a server thread polls the queue and forwards the work to the kernel layer for processing; when a result comes back later, it is delivered through the event callback configured when the session was started.
