Android Camera Source Code Analysis: Device Driver, HAL, Framework


1 Preface

The Android source code analyzed in this article comes from Android-x86, version 5.1, so it may differ slightly from the Android builds found on phones.

2 V4L2

V4L2 is the Linux driver framework for video devices such as cameras. An application simply opens the camera device to get a file descriptor, and can then drive the camera with read, write, and ioctl calls. The Android HAL does exactly the same thing; the code that operates on the device directly lives in /hardware/libcamera/V4L2Camera.cpp. My project uses a virtual camera device, v4l2loopback, and some of the ioctls Android relies on do not behave as expected on that device and needed modification, so this section gives an overview of V4L2, drawn mainly from the official V4L2 documentation.

V4L2 is a large, all-encompassing device API, but much of it is irrelevant to a camera. The ioctls used in the Android HAL are:

  • VIDIOC_QUERYCAP
  • VIDIOC_ENUM_FMT
  • VIDIOC_ENUM_FRAMESIZES
  • VIDIOC_ENUM_FRAMEINTERVALS
  • VIDIOC_TRY_FMT / VIDIOC_S_FMT / VIDIOC_G_FMT
  • VIDIOC_S_PARM / VIDIOC_G_PARM
  • VIDIOC_S_JPEGCOMP / VIDIOC_G_JPEGCOMP
  • VIDIOC_REQBUFS
  • VIDIOC_QUERYBUF
  • VIDIOC_QBUF / VIDIOC_DQBUF
  • VIDIOC_STREAMON / VIDIOC_STREAMOFF

Each of these ioctls is described below.

2.1 ioctls

VIDIOC_QUERYCAP

Queries the device's capabilities and type; virtually every V4L2 application issues this ioctl right after open() to determine what kind of device it is talking to. The caller passes a v4l2_capability structure to receive the result. For a camera device, the V4L2_CAP_VIDEO_CAPTURE and V4L2_CAP_STREAMING bits of v4l2_capability.capabilities must be set.

V4L2_CAP_VIDEO_CAPTURE means the device supports video capture, the basic function of a camera.

V4L2_CAP_STREAMING means the device supports streaming I/O, a memory-mapping mechanism for transferring data between the kernel and the application.
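
To make this concrete, here is a minimal stand-alone sketch of exactly this check (my own illustration, not code from the HAL; the /dev/video0 path is an assumption):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);   /* hypothetical device node */
        if (fd < 0) { perror("open"); return 1; }

        struct v4l2_capability cap;
        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) { perror("VIDIOC_QUERYCAP"); return 1; }

        /* A camera usable by the HAL must report both bits. */
        if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE))
            fprintf(stderr, "not a capture device\n");
        if (!(cap.capabilities & V4L2_CAP_STREAMING))
            fprintf(stderr, "streaming I/O not supported\n");

        printf("driver: %s, card: %s\n", cap.driver, cap.card);
        return 0;
    }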

VIDIOC_ENUM_FMT

Enumerates the image formats the camera supports. The caller passes a v4l2_fmtdesc structure as the output parameter. For a device that supports several formats, set v4l2_fmtdesc->index and call the ioctl repeatedly until it returns EINVAL. After a successful call, v4l2_fmtdesc->pixelformat holds a format the device supports; pixelformat can be:

  • V4L2_PIX_FMT_MJPEG
  • V4L2_PIX_FMT_JPEG
  • V4L2_PIX_FMT_YUYV
  • V4L2_PIX_FMT_YVYU, and so on.
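
The enumeration loop can be sketched as follows (assuming the same includes and an already-opened descriptor fd as in the previous example):

    /* Sketch: list every pixel format; the loop stops when the driver
     * returns -1 with errno == EINVAL. */
    void enum_formats(int fd)
    {
        struct v4l2_fmtdesc fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        for (fmt.index = 0; ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0; fmt.index++)
            printf("format %u: %.4s (%s)\n", fmt.index,
                   (const char *)&fmt.pixelformat,   /* fourcc, e.g. "YUYV" */
                   fmt.description);
    }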

VIDIOC_ENUM_FRAMESIZES

Once a format is known, the next step is to enumerate the resolutions the device supports in that format, which is what this ioctl does. The caller passes a v4l2_frmsizeenum structure with v4l2_frmsizeenum->pixel_format set to the format of interest and v4l2_frmsizeenum->index set to 0.

After a successful call, v4l2_frmsizeenum->type can be one of three values:

  1. V4L2_FRMSIZE_TYPE_DISCRETE: increment v4l2_frmsizeenum->index and call the ioctl repeatedly until it returns EINVAL to enumerate every resolution supported in this format. For each result, the width and height are read from v4l2_frmsizeenum->discrete.
  2. V4L2_FRMSIZE_TYPE_STEPWISE: only v4l2_frmsizeenum->stepwise is valid, and the ioctl must not be called again with other index values.
  3. V4L2_FRMSIZE_TYPE_CONTINUOUS: a special case of STEPWISE; again only stepwise is valid, with stepwise.step_width and stepwise.step_height both equal to 1.

The first case is easy to understand: it lists the supported resolutions one by one. STEPWISE and CONTINUOUS describe a range instead of a list: the device accepts any resolution between stepwise.min and stepwise.max, stepping by step_width/step_height (with a step of 1, i.e. effectively any size in the range, in the CONTINUOUS case).
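
The enumeration pattern, including the early exit for the two range-based types, looks roughly like this sketch (same assumptions as above):

    /* Sketch: list the frame sizes for one pixel format. */
    void enum_sizes(int fd, unsigned int pixelformat)
    {
        struct v4l2_frmsizeenum fsz;
        memset(&fsz, 0, sizeof(fsz));
        fsz.pixel_format = pixelformat;

        for (fsz.index = 0; ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &fsz) == 0; fsz.index++) {
            if (fsz.type == V4L2_FRMSIZE_TYPE_DISCRETE) {
                printf("%ux%u\n", fsz.discrete.width, fsz.discrete.height);
            } else { /* STEPWISE or CONTINUOUS: a single range, do not iterate further */
                printf("%ux%u .. %ux%u step %ux%u\n",
                       fsz.stepwise.min_width,  fsz.stepwise.min_height,
                       fsz.stepwise.max_width,  fsz.stepwise.max_height,
                       fsz.stepwise.step_width, fsz.stepwise.step_height);
                break;
            }
        }
    }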

VIDIOC_ENUM_FRAMEINTERVALS

With the format and resolution in hand, you can further query the frame rates (fps) the device supports for that combination. The caller passes a v4l2_frmivalenum structure with index set to 0 and pixel_format, width, and height filled in.

After the call, check v4l2_frmivalenum.type; as before there are DISCRETE, STEPWISE, and CONTINUOUS cases. Note that what is enumerated is the frame interval, the reciprocal of the frame rate.

VIDIOC_TRY_FMT/VIDIOC_S_FMT/VIDIOC_G_FMT

These three ioctls set and get the image format. The difference between TRY_FMT and S_FMT is that the former does not change the driver's state.

The usual pattern for changing the format is: read the current format with G_FMT, modify the fields of interest, then commit with S_FMT (or probe with TRY_FMT).
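
In code, that read-modify-write pattern looks roughly like this (a sketch under the same assumptions; error handling trimmed):

    /* Sketch: switch the capture format to 640x480 YUYV. */
    int set_format(int fd)
    {
        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)      /* read the current format */
            return -1;

        fmt.fmt.pix.width       = 640;              /* modify the fields of interest */
        fmt.fmt.pix.height      = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;

        if (ioctl(fd, VIDIOC_TRY_FMT, &fmt) < 0)    /* probe without committing */
            return -1;
        return ioctl(fd, VIDIOC_S_FMT, &fmt);       /* commit; the driver may adjust fields */
    }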

VIDIOC_S_PARM/VIDIOC_G_PARM

Set and get streaming I/O parameters. The caller passes a v4l2_streamparm structure.

VIDIOC_S_JPEGCOMP/VIDIOC_G_JPEGCOMP

Set and get parameters related to the JPEG format.

VIDIOC_REQBUFS

To exchange image data between a user program and the kernel driver, memory must be allocated. The buffers can be allocated inside the kernel and mapped into user space with mmap, or allocated in user space with the driver switched to user-pointer I/O. Both setups are initialized with this ioctl. The caller passes a v4l2_requestbuffers with type, memory, and count filled in. The ioctl may be called again to renegotiate; a count of 0 frees all the buffers.
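
For the memory-mapped case used by the HAL, the negotiation is a single call (sketch, same assumptions):

    /* Sketch: ask the driver for `count` mmap-able capture buffers. The
     * driver may grant fewer; the actual number comes back in rb.count. */
    int request_buffers(int fd, unsigned int count)
    {
        struct v4l2_requestbuffers rb;
        memset(&rb, 0, sizeof(rb));
        rb.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        rb.memory = V4L2_MEMORY_MMAP;
        rb.count  = count;                 /* 0 would free all buffers instead */

        if (ioctl(fd, VIDIOC_REQBUFS, &rb) < 0)
            return -1;
        return rb.count;
    }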

VIDIOC_QUERYBUF

After VIDIOC_REQBUFS, this ioctl can be used at any time to query a buffer's current state. The caller passes a v4l2_buffer structure with type and index set; valid indices are [0, count-1], where count is the value returned by VIDIOC_REQBUFS.

VIDIOC_QBUF/VIDIOC_DQBUF

Even after VIDIOC_REQBUFS has allocated the memory, the V4L2 driver cannot use it directly: a buffer must first be pushed onto the driver's incoming queue with VIDIOC_QBUF, and a buffer holding a frame is later popped with VIDIOC_DQBUF. For a CAPTURE device such as a camera, what gets enqueued is an empty buffer; once the camera has filled it with captured data, a frame of valid image data can be dequeued. If the camera has not finished filling a buffer yet, VIDIOC_DQBUF blocks, unless the device was opened with the O_NONBLOCK flag. (The combined streaming cycle is sketched after the next entry.)

VIDIOC_STREAMON/VIDIOC_STREAMOFF

STREAMON starts the device: only after this ioctl does the camera begin capturing images and filling buffers. Conversely, STREAMOFF stops it, and any frames still inside the driver that have not been dequeued with DQBUF are lost.
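
Putting QBUF/DQBUF and STREAMON together, the canonical capture cycle looks like this sketch (mmap setup omitted; assumes the buffers granted in the previous step were mapped into bufs[i]):

    /* Sketch: queue all buffers, start streaming, then grab n frames. */
    int capture_loop(int fd, unsigned int nbufs, void *bufs[], int n)
    {
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        struct v4l2_buffer buf;

        for (unsigned int i = 0; i < nbufs; i++) {   /* enqueue every empty buffer */
            memset(&buf, 0, sizeof(buf));
            buf.type = type; buf.memory = V4L2_MEMORY_MMAP; buf.index = i;
            if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) return -1;
        }
        if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) return -1;

        while (n-- > 0) {
            memset(&buf, 0, sizeof(buf));
            buf.type = type; buf.memory = V4L2_MEMORY_MMAP;
            if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) return -1; /* blocks until a frame is ready */
            /* process bufs[buf.index], which holds buf.bytesused bytes ... */
            if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) return -1;  /* hand the buffer back */
        }
        return ioctl(fd, VIDIOC_STREAMOFF, &type);   /* undequeued frames are dropped */
    }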

3 Pixel Encoding Formats

Because the lower layers deal with raw images, quite a few pixel encoding formats show up in the camera code, so this section briefly reviews the formats a camera is likely to produce.

3.1 RGB

RGB is simple: a pixel is composed of the three primaries R, G, and B, each taking 8 bits, so a pixel needs 3 bytes of storage. Some RGB encodings spend fewer bits on certain channels; in RGB844, for example, green and blue each get 4 bits, so a pixel fits in 2 bytes. There is also RGBA, which adds an alpha channel, bringing a pixel to 32 bits.

3.2 YUV

YUV likewise has three channels: Y is luminance, a weighted combination of the R, G, and B components, while U and V are the chroma differences (blue minus luma and red minus luma, respectively). Y normally takes 8 bits, whereas U and V can be subsampled, giving rise to the YUV444, YUV420, YUV411 family of encodings. The three digits in the name describe the sampling ratio of the three YUV channels: YUV444 means 1:1:1, so with Y at 8 bits a pixel takes 24 bits. YUV420 does not mean V is dropped entirely; rather, one row is sampled 4:1:0 and the next 4:0:1.

Android's camera preview defaults to YUV420sp. YUV420 comes in two layouts, YUV420p and YUV420sp, which differ in the order in which the U and V data are stored:

[Figure 1: YUV420p vs. YUV420sp memory layout; image from http://blog.csdn.net/jefry_xdz/article/details/7931018]
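
The difference is easy to state in code. This sketch (my illustration, not from the source) computes, for a W x H image, where the chroma samples of pixel (x, y) live under each layout:

    #include <stddef.h>

    /* Planar YUV420 (I420; YV12 swaps the U and V planes): the Y plane,
     * then one full U plane, then one full V plane.
     * Semi-planar YUV420 (NV21; NV12 swaps U and V): the Y plane, then a
     * single plane of interleaved V/U pairs. */
    void yuv420_chroma_offsets(int w, int h, int x, int y,
                               size_t *u_planar, size_t *v_planar, size_t *vu_semiplanar)
    {
        size_t ysize = (size_t)w * h;                       /* Y plane always comes first */
        size_t cidx  = (size_t)(y / 2) * (w / 2) + (x / 2); /* index in the subsampled grid */

        *u_planar      = ysize + cidx;                 /* U plane right after Y */
        *v_planar      = ysize + ysize / 4 + cidx;     /* V plane after the U plane */
        *vu_semiplanar = ysize + cidx * 2;             /* V at this offset, U at +1 (NV21) */
    }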

4 Android Camera

4.1 Hardware

In Android-x86 5.1, the relationships between the camera-related classes in the HAL layer look roughly as shown below:

[Figure: android_camera_uml.png — UML of the HAL-layer camera classes]

SurfaceSize wraps the width and height of a surface. SurfaceDesc wraps a surface's width, height, and fps.

The V4L2Camera class is a wrapper around the V4L2 device driver and controls the device directly through ioctls.

CameraParameters encapsulates the camera's parameters; its flatten and unflatten methods amount to serializing and deserializing a CameraParameters object.

camera_device is essentially an abstract-class-like struct that defines the set of interfaces a camera must implement. CameraHardware inherits from camera_device and represents one camera; it implements the actions Android expects of a camera, such as startPreview. Underneath, it operates the device through a V4L2Camera object: every CameraHardware instance owns a V4L2Camera object, plus a CameraParameters object holding the camera's settings.

CameraFactory is the camera manager; the whole Android system has exactly one instance, and it creates the CameraHardware instances by reading a configuration file.

CameraFactory

The CameraFactory class plays the role of camera-device administrator: it determines how many cameras the machine has, their device paths, their rotation angles, and their facing (front or back). Android-x86 learns how many cameras exist by reading a configuration file:

hardware/libcamera/CameraFactory.cpp

 
    void CameraFactory::parseConfig(const char* configFile)
    {
        ALOGD("CameraFactory::parseConfig: configFile = %s", configFile);

        FILE* config = fopen(configFile, "r");
        if (config != NULL) {
            char line[128];
            char arg1[128];
            char arg2[128];
            int  arg3;

            while (fgets(line, sizeof line, config) != NULL) {
                int lineStart = strspn(line, " \t\n\v");

                if (line[lineStart] == '#')
                    continue;

                sscanf(line, "%s %s %d", arg1, arg2, &arg3);
                if (arg3 != 0 && arg3 != 90 && arg3 != 180 && arg3 != 270)
                    arg3 = 0;

                if (strcmp(arg1, "front") == 0) {
                    newCameraConfig(CAMERA_FACING_FRONT, arg2, arg3);
                } else if (strcmp(arg1, "back") == 0) {
                    newCameraConfig(CAMERA_FACING_BACK, arg2, arg3);
                } else {
                    ALOGD("CameraFactory::parseConfig: Unrecognized config line '%s'", line);
                }
            }
        } else {
            ALOGD("%s not found, using camera configuration defaults", CONFIG_FILE);
            if (access(DEFAULT_DEVICE_BACK, F_OK) != -1) {
                ALOGD("Found device %s", DEFAULT_DEVICE_BACK);
                newCameraConfig(CAMERA_FACING_BACK, DEFAULT_DEVICE_BACK, 0);
            }
            if (access(DEFAULT_DEVICE_FRONT, F_OK) != -1) {
                ALOGD("Found device %s", DEFAULT_DEVICE_FRONT);
                newCameraConfig(CAMERA_FACING_FRONT, DEFAULT_DEVICE_FRONT, 0);
            }
        }
    }

The configuration file lives at /etc/camera.cfg; each line has the form "front/back path_to_device orientation", for example "front /dev/video0 0".
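
A hypothetical /etc/camera.cfg matching the parser above (lines starting with '#' are skipped, and any orientation other than 0/90/180/270 is forced to 0):

    # facing  device        orientation
    front     /dev/video0   0
    back      /dev/video1   90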

Another function worth mentioning is cameraDeviceOpen; when an app opens a camera, this is the function through which the camera is obtained:

 
     1: int CameraFactory::cameraDeviceOpen(const hw_module_t* module, int camera_id, hw_device_t** device)
     2: {
     3:     ALOGD("CameraFactory::cameraDeviceOpen: id = %d", camera_id);
     4:
     5:     *device = NULL;
     6:
     7:     if (!mCamera || camera_id < 0 || camera_id >= getCameraNum()) {
     8:         ALOGE("%s: Camera id %d is out of bounds (%d)",
     9:               __FUNCTION__, camera_id, getCameraNum());
    10:         return -EINVAL;
    11:     }
    12:
    13:     if (!mCamera[camera_id]) {
    14:         mCamera[camera_id] = new CameraHardware(module, mCameraDevices[camera_id]);
    15:     }
    16:     return mCamera[camera_id]->connectCamera(device);
    17: }

Line 13 shows that by the time the Android system has booted, the CameraFactory object has already been constructed and has read from the configuration file how many cameras the machine has and what their device paths are; however, no CameraHardware object is created at that point — creation is deferred until the corresponding camera is opened for the first time.

The whole Android system has only one CameraFactory instance: gCameraFactory, defined in CameraFactory.cpp. camera_module_t defines several function pointers that point at static member functions of CameraFactory; calling through those pointers effectively invokes the corresponding methods of the gCameraFactory object:

hardware/libcamera/CameraHal.cpp

 
    camera_module_t HAL_MODULE_INFO_SYM = {
        common: {
            tag: HARDWARE_MODULE_TAG,
            version_major: 1,
            version_minor: 0,
            id: CAMERA_HARDWARE_MODULE_ID,
            name: "Camera Module",
            author: "The Android Open Source Project",
            methods: &android::CameraFactory::mCameraModuleMethods,
            dso: NULL,
            reserved: {0},
        },
        get_number_of_cameras: android::CameraFactory::get_number_of_cameras,
        get_camera_info: android::CameraFactory::get_camera_info,
    };

The code above defines a camera_module_t; the corresponding functions are defined as follows:

hardware/libcamera/CameraFactory.cpp

 
    int CameraFactory2::device_open(const hw_module_t* module,
                                    const char* name,
                                    hw_device_t** device)
    {
        ALOGD("CameraFactory2::device_open: name = %s", name);

        /*
         * Simply verify the parameters, and dispatch the call inside the
         * CameraFactory instance.
         */

        if (module != &HAL_MODULE_INFO_SYM.common) {
            ALOGE("%s: Invalid module %p expected %p",
                  __FUNCTION__, module, &HAL_MODULE_INFO_SYM.common);
            return -EINVAL;
        }
        if (name == NULL) {
            ALOGE("%s: NULL name is not expected here", __FUNCTION__);
            return -EINVAL;
        }

        int camera_id = atoi(name);
        return gCameraFactory.cameraDeviceOpen(module, camera_id, device);
    }

    int CameraFactory2::get_number_of_cameras(void)
    {
        ALOGD("CameraFactory2::get_number_of_cameras");
        return gCameraFactory.getCameraNum();
    }

    int CameraFactory2::get_camera_info(int camera_id,
                                        struct camera_info* info)
    {
        ALOGD("CameraFactory2::get_camera_info");
        return gCameraFactory.getCameraInfo(camera_id, info);
    }

camera_device

camera_device is also typedef'ed as camera_device_t, which brings us to the HAL extension convention. The Android HAL defines three data types — struct hw_module_t, struct hw_module_methods_t, and struct hw_device_t — representing the module type, the module methods, and the device type. To extend the HAL with a new kind of device, you implement these three structures; for a camera, that means defining camera_module_t and camera_device_t and filling in the function pointers of hw_module_methods_t, which contains a single open function that effectively initializes the module. The HAL further requires that the first member of camera_module_t be an hw_module_t and the first member of camera_device_t be an hw_device_t; the remaining members are up to the implementer.

A more detailed explanation of the underlying mechanism is available elsewhere.

In Android-x86, camera_device_t is defined in hardware/libhardware/include/hardware/hardware.h:

hardware/libhardware/include/hardware/hardware.h

 
    typedef struct camera_device {
        hw_device_t common;
        camera_device_ops_t *ops;
        void *priv;
    } camera_device_t;

camera_device_ops_t is the set of function interfaces the camera module defines for itself, in the same file; it is too long to reproduce here.

camera_module_t lives in camera_common.h in the same directory:

hardware/libhardware/include/hardware/camera_common.h

 
    typedef struct camera_module {
        hw_module_t common;
        int (*get_number_of_cameras)(void);
        int (*get_camera_info)(int camera_id, struct camera_info *info);
        int (*set_callbacks)(const camera_module_callbacks_t *callbacks);
        void (*get_vendor_tag_ops)(vendor_tag_ops_t* ops);
        int (*open_legacy)(const struct hw_module_t* module, const char* id,
                           uint32_t halVersion, struct hw_device_t** device);

        /* reserved for future use */
        void* reserved[7];
    } camera_module_t;

When the framework calls into the HAL, it obtains an hw_device_t via hw_module_t->methods->open, casts it to camera_device_t, and can then call the camera functions in camera_device_t->ops. For the Android-x86 camera, those ops pointers are assigned in the CameraHardware class, which inherits from camera_device.
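
As a sketch of that call sequence (standard libhardware usage, not code from this tree; error handling trimmed):

    #include <stdio.h>
    #include <hardware/hardware.h>
    #include <hardware/camera_common.h>

    // Open camera `id` through the HAL module and return its device struct.
    int open_camera(int id, camera_device_t **out)
    {
        const hw_module_t *module;
        hw_device_t *device;
        char name[12];

        // Locate the camera HAL module by its well-known id.
        if (hw_get_module(CAMERA_HARDWARE_MODULE_ID, &module) != 0)
            return -1;

        // device_open() above does atoi(name), so the "name" is the camera id.
        snprintf(name, sizeof(name), "%d", id);
        if (module->methods->open(module, name, &device) != 0)
            return -1;

        // Safe because hw_device_t is the first member of camera_device_t.
        *out = reinterpret_cast<camera_device_t *>(device);
        return 0;
    }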

CameraHardware

CameraHardware's many interfaces mainly serve three actions: preview, recording, and taking pictures. Most of the remaining functions are preparation such as parameter setup. This subsection walks through the code path of preview as an example.

First the CameraHardware object's parameters are initialized, in initDefaultParameters. That function calls V4L2Camera's getBestPreviewFmt, getBestPictureFmt, getAvailableSizes, and getAvailableFps to obtain, respectively, the default preview format, the default picture format, and the resolutions and frame rates the camera supports:

hardware/libcamera/CameraHardware.cpp

 
    int pw = MIN_WIDTH;
    int ph = MIN_HEIGHT;
    int pfps = 30;
    int fw = MIN_WIDTH;
    int fh = MIN_HEIGHT;
    SortedVector<SurfaceSize> avSizes;
    SortedVector<int> avFps;

    if (camera.Open(mVideoDevice) != NO_ERROR) {
        ALOGE("cannot open device.");
    } else {

        // Get the default preview format
        pw = camera.getBestPreviewFmt().getWidth();
        ph = camera.getBestPreviewFmt().getHeight();
        pfps = camera.getBestPreviewFmt().getFps();

        // Get the default picture format
        fw = camera.getBestPictureFmt().getWidth();
        fh = camera.getBestPictureFmt().getHeight();

        // Get all the available sizes
        avSizes = camera.getAvailableSizes();

        // Add some sizes that some specific apps expect to find:
        //  GTalk expects 320x200
        //  Fring expects 240x160
        // And also add standard resolutions found in low end cameras, as
        //  android apps could be expecting to find them
        // The V4LCamera handles those resolutions by choosing the next
        //  larger one and cropping the captured frames to the requested size

        avSizes.add(SurfaceSize(480,320)); // HVGA
        avSizes.add(SurfaceSize(432,320)); // 1.35-to-1, for photos. (Rounded up from 1.3333 to 1)
        avSizes.add(SurfaceSize(352,288)); // CIF
        avSizes.add(SurfaceSize(320,240)); // QVGA
        avSizes.add(SurfaceSize(320,200));
        avSizes.add(SurfaceSize(240,160)); // SQVGA
        avSizes.add(SurfaceSize(176,144)); // QCIF

        // Get all the available Fps
        avFps = camera.getAvailableFps();
    }

These values are then converted to text form and set on the CameraParameters object:

hardware/libcamera/CameraHardware.cpp

 
    // Antibanding
    p.set(CameraParameters::KEY_SUPPORTED_ANTIBANDING,"auto");
    p.set(CameraParameters::KEY_ANTIBANDING,"auto");

    // Effects
    p.set(CameraParameters::KEY_SUPPORTED_EFFECTS,"none"); // "none,mono,sepia,negative,solarize"
    p.set(CameraParameters::KEY_EFFECT,"none");

    // Flash modes
    p.set(CameraParameters::KEY_SUPPORTED_FLASH_MODES,"off");
    p.set(CameraParameters::KEY_FLASH_MODE,"off");

    // Focus modes
    p.set(CameraParameters::KEY_SUPPORTED_FOCUS_MODES,"fixed");
    p.set(CameraParameters::KEY_FOCUS_MODE,"fixed");

    #if 0
    p.set(CameraParameters::KEY_JPEG_THUMBNAIL_HEIGHT,0);
    p.set(CameraParameters::KEY_JPEG_THUMBNAIL_QUALITY,75);
    p.set(CameraParameters::KEY_SUPPORTED_JPEG_THUMBNAIL_SIZES,"0x0");
    p.set("jpeg-thumbnail-size","0x0");
    p.set(CameraParameters::KEY_JPEG_THUMBNAIL_WIDTH,0);
    #endif

    // Picture - Only JPEG supported
    p.set(CameraParameters::KEY_SUPPORTED_PICTURE_FORMATS,CameraParameters::PIXEL_FORMAT_JPEG); // ONLY jpeg
    p.setPictureFormat(CameraParameters::PIXEL_FORMAT_JPEG);
    p.set(CameraParameters::KEY_SUPPORTED_PICTURE_SIZES, szs);
    p.setPictureSize(fw,fh);
    p.set(CameraParameters::KEY_JPEG_QUALITY, 85);

    // Preview - Supporting yuv422i-yuyv,yuv422sp,yuv420sp, defaulting to yuv420sp, as that is the android Defacto default
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FORMATS,"yuv422i-yuyv,yuv422sp,yuv420sp,yuv420p"); // All supported preview formats
    p.setPreviewFormat(CameraParameters::PIXEL_FORMAT_YUV422SP); // For compatibility sake ... Default to the android standard
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FPS_RANGE, fpsranges);
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FRAME_RATES, fps);
    p.setPreviewFrameRate( pfps );
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_SIZES, szs);
    p.setPreviewSize(pw,ph);

    // Video - Supporting yuv422i-yuyv,yuv422sp,yuv420sp and defaulting to yuv420p
    p.set("video-size-values"/*CameraParameters::KEY_SUPPORTED_VIDEO_SIZES*/, szs);
    p.setVideoSize(pw,ph);
    p.set(CameraParameters::KEY_VIDEO_FRAME_FORMAT, CameraParameters::PIXEL_FORMAT_YUV420P);
    p.set("preferred-preview-size-for-video", "640x480");

    // supported rotations
    p.set("rotation-values","0");
    p.set(CameraParameters::KEY_ROTATION,"0");

    // scenes modes
    p.set(CameraParameters::KEY_SUPPORTED_SCENE_MODES,"auto");
    p.set(CameraParameters::KEY_SCENE_MODE,"auto");

    // white balance
    p.set(CameraParameters::KEY_SUPPORTED_WHITE_BALANCE,"auto");
    p.set(CameraParameters::KEY_WHITE_BALANCE,"auto");

    // zoom
    p.set(CameraParameters::KEY_SMOOTH_ZOOM_SUPPORTED,"false");
    p.set("max-video-continuous-zoom", 0 );
    p.set(CameraParameters::KEY_ZOOM, "0");
    p.set(CameraParameters::KEY_MAX_ZOOM, "100");
    p.set(CameraParameters::KEY_ZOOM_RATIOS, "100");
    p.set(CameraParameters::KEY_ZOOM_SUPPORTED, "false");

    // missing parameters for Camera2
    p.set(CameraParameters::KEY_FOCAL_LENGTH, 4.31);
    p.set(CameraParameters::KEY_HORIZONTAL_VIEW_ANGLE, 90);
    p.set(CameraParameters::KEY_VERTICAL_VIEW_ANGLE, 90);
    p.set(CameraParameters::KEY_SUPPORTED_JPEG_THUMBNAIL_SIZES, "640x480,0x0");
Once this preparation is done, CameraHardware::startPreview is called. The function itself is only three lines: it takes a lock and then calls startPreviewLocked, which does all the actual preview work.

hardware/libcamera/CameraHardware.cpp

    status_t CameraHardware::startPreviewLocked()
    {
        ALOGD("CameraHardware::startPreviewLocked");

        // Preview runs on a dedicated thread; these lines check whether it is
        // already running. Normally this branch is not taken.
        if (mPreviewThread != 0) {
            ALOGD("CameraHardware::startPreviewLocked: preview already running");
            return NO_ERROR;
        }

        // Fetch the preview dimensions from CameraParameters.
        int width, height;
        // If we are recording, use the recording video size instead of the preview size
        if (mRecordingEnabled && mMsgEnabled & CAMERA_MSG_VIDEO_FRAME) {
            mParameters.getVideoSize(&width, &height);
        } else {
            mParameters.getPreviewSize(&width, &height);
        }

        // Fetch the preview fps from CameraParameters.
        int fps = mParameters.getPreviewFrameRate();
        ALOGD("CameraHardware::startPreviewLocked: Open, %dx%d", width, height);

        // Open the camera device via V4L2Camera's Open function.
        status_t ret = camera.Open(mVideoDevice);
        if (ret != NO_ERROR) {
            ALOGE("Failed to initialize Camera");
            return ret;
        }
        ALOGD("CameraHardware::startPreviewLocked: Init");

        // Initialize the device via V4L2Camera's Init function.
        ret = camera.Init(width, height, fps);
        if (ret != NO_ERROR) {
            ALOGE("Failed to setup streaming");
            return ret;
        }

        // The requested preview size may not be supported by the device;
        // the size the camera actually works at is retrieved here.
        /* Retrieve the real size being used */
        camera.getSize(width, height);
        ALOGD("CameraHardware::startPreviewLocked: effective size: %dx%d",width, height);

        // Store the size actually in use.
        // If we are recording, use the recording video size instead of the preview size
        if (mRecordingEnabled && mMsgEnabled & CAMERA_MSG_VIDEO_FRAME) {
            /* Store it as the video size to use */
            mParameters.setVideoSize(width, height);
        } else {
            /* Store it as the preview size to use */
            mParameters.setPreviewSize(width, height);
        }

        // ???
        /* And reinit the memory heaps to reflect the real used size if needed */
        initHeapLocked();
        ALOGD("CameraHardware::startPreviewLocked: StartStreaming");

        // Make the device start capturing via V4L2Camera.StartStreaming.
        ret = camera.StartStreaming();
        if (ret != NO_ERROR) {
            ALOGE("Failed to start streaming");
            return ret;
        }

        // Initialize the preview window.
        // setup the preview window geometry in order to use it to zoom the image
        if (mWin != 0) {
            ALOGD("CameraHardware::setPreviewWindow - Negotiating preview format");
            NegotiatePreviewFormat(mWin);
        }

        ALOGD("CameraHardware::startPreviewLocked: starting PreviewThread");

        // Spawn a thread to do the preview work.
        mPreviewThread = new PreviewThread(this);

        ALOGD("CameraHardware::startPreviewLocked: O - this:0x%p",this);

        return NO_ERROR;
    }

Now for PreviewThread. The class is very simple: it just calls CameraHardware's previewThread method, which computes a wait time from the fps, calls V4L2Camera's GrabRawFrame to fetch an image from the device, converts it to a supported image format, and finally hands it to the display window.
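
The loop body is roughly the following (a schematic sketch, not the actual source; mPreviewThreadRunning and convertAndPost are hypothetical stand-ins for the real run flag and for the conversion-plus-display work previewThread does):

    // Schematic shape of CameraHardware::previewThread().
    void previewLoopSketch()
    {
        // Pace the loop from the configured preview frame rate.
        int delayUs = 1000000 / mParameters.getPreviewFrameRate();

        while (mPreviewThreadRunning) {                      // hypothetical run flag
            camera.GrabRawFrame(mRawBuffer, mRawBufferSize); // blocks in VIDIOC_DQBUF
            convertAndPost(mRawBuffer, mWin);                // hypothetical: convert + display
            usleep(delayUs);
        }
    }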

V4L2Camera

The V4L2Camera class is mainly a wrapper around the V4L2 device. Below we analyze its commonly used interfaces: Open, Init, StartStreaming, GrabRawFrame, EnumFrameIntervals, EnumFrameSizes, and EnumFrameFormats.

  • Open

    The logic of Open is straightforward: it obtains a file descriptor for the camera device via the open system call, then issues the VIDIOC_QUERYCAP ioctl to query the device's capabilities. Since this must be a camera, the code checks that the device's V4L2_CAP_VIDEO_CAPTURE bit is set (and that streaming I/O is supported). Finally it calls EnumFrameFormats to discover the image formats the camera supports.

    hardware/libcamera/V4L2Camera.cpp

     
    int V4L2Camera::Open (const char *device)
    {
        int ret;

        /* Close the previous instance, if any */
        Close();

        memset(videoIn, 0, sizeof (struct vdIn));

        if ((fd = open(device, O_RDWR)) == -1) {
            ALOGE("ERROR opening V4L interface: %s", strerror(errno));
            return -1;
        }

        ret = ioctl (fd, VIDIOC_QUERYCAP, &videoIn->cap);
        if (ret < 0) {
            ALOGE("Error opening device: unable to query device.");
            return -1;
        }

        if ((videoIn->cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) == 0) {
            ALOGE("Error opening device: video capture not supported.");
            return -1;
        }

        if (!(videoIn->cap.capabilities & V4L2_CAP_STREAMING)) {
            ALOGE("Capture device does not support streaming i/o");
            return -1;
        }

        /* Enumerate all available frame formats */
        EnumFrameFormats();

        return ret;
    }

  • EnumFrameFormats

    This function discovers the device's image formats, resolutions, and frame rates. As section 2.1 explained, supported resolutions are only meaningful relative to a given image format — without naming a format you cannot query resolutions — and likewise fps is tied to a specific format and resolution. So once this function returns, everything the device supports is known. The function also fills in m_BestPreviewFmt and m_BestPictureFmt, which are later used as the default preview and picture formats.

    hardware/libcamera/V4L2Camera.cpp

     
    bool V4L2Camera::EnumFrameFormats()
    {
        ALOGD("V4L2Camera::EnumFrameFormats");
        struct v4l2_fmtdesc fmt;

        // Start with no modes
        m_AllFmts.clear();

        memset(&fmt, 0, sizeof(fmt));
        fmt.index = 0;
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        // Iterate over every image format the device supports.
        while (ioctl(fd,VIDIOC_ENUM_FMT, &fmt) >= 0) {
            fmt.index++;
            ALOGD("{ pixelformat = '%c%c%c%c', description = '%s' }",
                  fmt.pixelformat & 0xFF, (fmt.pixelformat >> 8) & 0xFF,
                  (fmt.pixelformat >> 16) & 0xFF, (fmt.pixelformat >> 24) & 0xFF,
                  fmt.description);

            // Fetch the resolutions and frame rates supported in this format.
            //enumerate frame sizes for this pixel format
            if (!EnumFrameSizes(fmt.pixelformat)) {
                ALOGE("  Unable to enumerate frame sizes.");
            }
        };

        // At this point all formats, resolutions, and frame rates are known.

        // Now, select the best preview format and the best PictureFormat
        m_BestPreviewFmt = SurfaceDesc();
        m_BestPictureFmt = SurfaceDesc();

        unsigned int i;
        for (i=0; i<m_AllFmts.size(); i++) {
            SurfaceDesc s = m_AllFmts[i];

            // Pick the best picture mode: for still pictures fps hardly
            // matters, only resolution does, so the SurfaceDesc with the
            // largest size is assigned to m_BestPictureFmt.
            // Prioritize size over everything else when taking pictures. use the
            // least fps possible, as that usually means better quality
            if ((s.getSize() > m_BestPictureFmt.getSize()) ||
                (s.getSize() == m_BestPictureFmt.getSize() && s.getFps() < m_BestPictureFmt.getFps() )
                ) {
                m_BestPictureFmt = s;
            }

            // Pick the best preview mode: for preview, fps carries more
            // weight, so the SurfaceDesc with the highest fps is assigned
            // to m_BestPreviewFmt.
            // Prioritize fps, then size when doing preview
            if ((s.getFps() > m_BestPreviewFmt.getFps()) ||
                (s.getFps() == m_BestPreviewFmt.getFps() && s.getSize() > m_BestPreviewFmt.getSize() )
                ) {
                m_BestPreviewFmt = s;
            }

        }

        return true;
    }

  • EnumFrameSizes

    Given a pixfmt, this function queries the resolutions the device supports in that format.

    hardware/libcamera/V4L2Camera.cpp

     
    bool V4L2Camera::EnumFrameSizes(int pixfmt)
    {
        ALOGD("V4L2Camera::EnumFrameSizes: pixfmt: 0x%08x",pixfmt);
        int ret=0;
        int fsizeind = 0;
        struct v4l2_frmsizeenum fsize;

        // Prepare the v4l2_frmsizeenum argument.
        memset(&fsize, 0, sizeof(fsize));
        fsize.index = 0;
        fsize.pixel_format = pixfmt;
        // Call the VIDIOC_ENUM_FRAMESIZES ioctl in a loop to list every supported size.
        while (ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &fsize) >= 0) {
            fsize.index++;
            // Branch on the type of the result.
            if (fsize.type == V4L2_FRMSIZE_TYPE_DISCRETE) {
                ALOGD("{ discrete: width = %u, height = %u }",
                      fsize.discrete.width, fsize.discrete.height);

                // Counts how many DISCRETE resolutions the device reports.
                fsizeind++;

                // Go on to query the frame rates supported at this size.
                if (!EnumFrameIntervals(pixfmt,fsize.discrete.width, fsize.discrete.height))
                    ALOGD("  Unable to enumerate frame intervals");
            } else if (fsize.type == V4L2_FRMSIZE_TYPE_CONTINUOUS) { // For CONTINUOUS and STEPWISE, do nothing but log.
                ALOGD("{ continuous: min { width = %u, height = %u } .. "
                      "max { width = %u, height = %u } }",
                      fsize.stepwise.min_width, fsize.stepwise.min_height,
                      fsize.stepwise.max_width, fsize.stepwise.max_height);
                ALOGD("  will not enumerate frame intervals.\n");
            } else if (fsize.type == V4L2_FRMSIZE_TYPE_STEPWISE) {
                ALOGD("{ stepwise: min { width = %u, height = %u } .. "
                      "max { width = %u, height = %u } / "
                      "stepsize { width = %u, height = %u } }",
                      fsize.stepwise.min_width, fsize.stepwise.min_height,
                      fsize.stepwise.max_width, fsize.stepwise.max_height,
                      fsize.stepwise.step_width, fsize.stepwise.step_height);
                ALOGD("  will not enumerate frame intervals.");
            } else {
                ALOGE("  fsize.type not supported: %d\n", fsize.type);
                ALOGE("  (Discrete: %d   Continuous: %d  Stepwise: %d)",
                      V4L2_FRMSIZE_TYPE_DISCRETE,
                      V4L2_FRMSIZE_TYPE_CONTINUOUS,
                      V4L2_FRMSIZE_TYPE_STEPWISE);
            }
        }

        // If the device reports no DISCRETE resolutions at all, probe it with
        // VIDIOC_TRY_FMT instead: any size the driver accepts is treated as
        // supported by this camera.
        if (fsizeind == 0) {
            /* ------ gspca doesn't enumerate frame sizes ------ */
            /*       negotiate with VIDIOC_TRY_FMT instead       */
            static const struct {
                int w,h;
            } defMode[] = {
                {800,600},
                {768,576},
                {768,480},
                {720,576},
                {720,480},
                {704,576},
                {704,480},
                {640,480},
                {352,288},
                {320,240}
            };

            unsigned int i;
            for (i = 0 ; i < (sizeof(defMode) / sizeof(defMode[0])); i++) {

                fsizeind++;
                struct v4l2_format fmt;
                fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                fmt.fmt.pix.width = defMode[i].w;
                fmt.fmt.pix.height = defMode[i].h;
                fmt.fmt.pix.pixelformat = pixfmt;
                fmt.fmt.pix.field = V4L2_FIELD_ANY;

                if (ioctl(fd,VIDIOC_TRY_FMT, &fmt) >= 0) {
                    ALOGD("{ ?GSPCA? : width = %u, height = %u }\n", fmt.fmt.pix.width, fmt.fmt.pix.height);

                    // Add the mode descriptor
                    m_AllFmts.add( SurfaceDesc( fmt.fmt.pix.width, fmt.fmt.pix.height, 25 ) );
                }
            }
        }

        return true;
    }

    Notice that Android only acts on DISCRETE resolutions; for CONTINUOUS and STEPWISE it merely prints a log message and does nothing.

  • EnumFrameIntervals

    This function uses the VIDIOC_ENUM_FRAMEINTERVALS ioctl to query the frame rates the device supports for the given image format and resolution.

    hardware/libcamera/V4L2Camera.cpp

     
    bool V4L2Camera::EnumFrameIntervals(int pixfmt, int width, int height)
    {
        ALOGD("V4L2Camera::EnumFrameIntervals: pixfmt: 0x%08x, w:%d, h:%d",pixfmt,width,height);

        struct v4l2_frmivalenum fival;
        int list_fps=0;
        // Prepare the argument.
        memset(&fival, 0, sizeof(fival));
        fival.index = 0;
        fival.pixel_format = pixfmt;
        fival.width = width;
        fival.height = height;

        ALOGD("\tTime interval between frame: ");
        // Iterate through the ioctl to list every supported frame rate.
        while (ioctl(fd,VIDIOC_ENUM_FRAMEINTERVALS, &fival) >= 0)
        {
            fival.index++;
            // Again, only DISCRETE is acted upon.
            if (fival.type == V4L2_FRMIVAL_TYPE_DISCRETE) {
                ALOGD("%u/%u", fival.discrete.numerator, fival.discrete.denominator);
                // Create a new SurfaceDesc and add it to the member variable m_AllFmts.
                m_AllFmts.add( SurfaceDesc( width, height, fival.discrete.denominator ) );
                list_fps++;
            } else if (fival.type == V4L2_FRMIVAL_TYPE_CONTINUOUS) {
                ALOGD("{min { %u/%u } .. max { %u/%u } }",
                      fival.stepwise.min.numerator, fival.stepwise.min.numerator,
                      fival.stepwise.max.denominator, fival.stepwise.max.denominator);
                break;
            } else if (fival.type == V4L2_FRMIVAL_TYPE_STEPWISE) {
                ALOGD("{min { %u/%u } .. max { %u/%u } / "
                      "stepsize { %u/%u } }",
                      fival.stepwise.min.numerator, fival.stepwise.min.denominator,
                      fival.stepwise.max.numerator, fival.stepwise.max.denominator,
                      fival.stepwise.step.numerator, fival.stepwise.step.denominator);
                break;
            }
        }

        // Assume at least 1fps
        if (list_fps == 0) {
            m_AllFmts.add( SurfaceDesc( width, height, 1 ) );
        }

        return true;
    }

  • Init

    Init is the most involved method in the V4L2Camera class.

    hardware/libcamera/V4L2Camera.cpp

     
    int V4L2Camera::Init(int width, int height, int fps)
    {
        ALOGD("V4L2Camera::Init");

        /* Initialize the capture to the specified width and height */
        static const struct {
            int fmt;        /* PixelFormat */
            int bpp;        /* bytes per pixel */
            int isplanar;   /* If format is planar or not */
            int allowscrop; /* If we support cropping with this pixel format */
        } pixFmtsOrder[] = {
            {V4L2_PIX_FMT_YUYV,    2,0,1},
            {V4L2_PIX_FMT_YVYU,    2,0,1},
            {V4L2_PIX_FMT_UYVY,    2,0,1},
            {V4L2_PIX_FMT_YYUV,    2,0,1},
            {V4L2_PIX_FMT_SPCA501, 2,0,0},
            {V4L2_PIX_FMT_SPCA505, 2,0,0},
            {V4L2_PIX_FMT_SPCA508, 2,0,0},
            {V4L2_PIX_FMT_YUV420,  0,1,0},
            {V4L2_PIX_FMT_YVU420,  0,1,0},
            {V4L2_PIX_FMT_NV12,    0,1,0},
            {V4L2_PIX_FMT_NV21,    0,1,0},
            {V4L2_PIX_FMT_NV16,    0,1,0},
            {V4L2_PIX_FMT_NV61,    0,1,0},
            {V4L2_PIX_FMT_Y41P,    0,0,0},
            {V4L2_PIX_FMT_SGBRG8,  0,0,0},
            {V4L2_PIX_FMT_SGRBG8,  0,0,0},
            {V4L2_PIX_FMT_SBGGR8,  0,0,0},
            {V4L2_PIX_FMT_SRGGB8,  0,0,0},
            {V4L2_PIX_FMT_BGR24,   3,0,1},
            {V4L2_PIX_FMT_RGB24,   3,0,1},
            {V4L2_PIX_FMT_MJPEG,   0,1,0},
            {V4L2_PIX_FMT_JPEG,    0,1,0},
            {V4L2_PIX_FMT_GREY,    1,0,1},
            {V4L2_PIX_FMT_Y16,     2,0,1},
        };

        int ret;

        // If no formats, break here
        if (m_AllFmts.isEmpty()) {
            ALOGE("No video formats available");
            return -1;
        }

        // Try to get the closest match ...
        SurfaceDesc closest;
        int closestDArea = -1;
        int closestDFps = -1;
        unsigned int i;
        int area = width * height;
        for (i = 0; i < m_AllFmts.size(); i++) {
            SurfaceDesc sd = m_AllFmts[i];

            // Always choose a bigger or equal surface
            if (sd.getWidth() >= width &&
                sd.getHeight() >= height) {

                int difArea = sd.getArea() - area;
                int difFps = my_abs(sd.getFps() - fps);

                ALOGD("Trying format: (%d x %d), Fps: %d [difArea:%d, difFps:%d, cDifArea:%d, cDifFps:%d]",sd.getWidth(),sd.getHeight(),sd.getFps(), difArea, difFps, closestDArea, closestDFps);

                // Among the supported resolutions, find one whose width and
                // height are both >= the requested ones with the least excess
                // area; if several tie, pick the one whose fps differs least.
                // The winning SurfaceDesc is stored in `closest`.
                if (closestDArea < 0 ||
                    difArea < closestDArea ||
                    (difArea == closestDArea && difFps < closestDFps)) {

                    // Store approximation
                    closestDArea = difArea;
                    closestDFps = difFps;

                    // And the new surface descriptor
                    closest = sd;
                }
            }
        }

        // No available resolution has both width and height >= the requested size.
        if (closestDArea == -1) {
            ALOGE("Size not available: (%d x %d)",width,height);
            return -1;
        }

        // `closest` is now the SurfaceDesc nearest to the input parameters.
        ALOGD("Selected format: (%d x %d), Fps: %d",closest.getWidth(),closest.getHeight(),closest.getFps());

        // If closest does not exactly match the requested size, cropping will be needed.
        // Check if we will have to crop the captured image
        bool crop = width != closest.getWidth() || height != closest.getHeight();

        // Walk through the supported pixel formats.
        // Iterate through pixel formats from best to worst
        ret = -1;
        for (i=0; i < (sizeof(pixFmtsOrder) / sizeof(pixFmtsOrder[0])); i++) {

            // If we will need to crop, make sure to only select formats we can crop...
            // Enter this if either no cropping is needed, or the candidate
            // pixel format supports cropping.
            if (!crop || pixFmtsOrder[i].allowscrop) {

                memset(&videoIn->format,0,sizeof(videoIn->format));
                videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                videoIn->format.fmt.pix.width = closest.getWidth();
                videoIn->format.fmt.pix.height = closest.getHeight();
                videoIn->format.fmt.pix.pixelformat = pixFmtsOrder[i].fmt;

                // Probe the pixel format with VIDIOC_TRY_FMT.
                ret = ioctl(fd, VIDIOC_TRY_FMT, &videoIn->format);
                // Check the call succeeded and the driver really kept closest's dimensions.
                if (ret >= 0 &&
                    videoIn->format.fmt.pix.width == (uint)closest.getWidth() &&
                    videoIn->format.fmt.pix.height == (uint)closest.getHeight()) {
                    break;
                }
            }
        }
        if (ret < 0) {
            ALOGE("Open: VIDIOC_TRY_FMT Failed: %s", strerror(errno));
            return ret;
        }

        // Actually set the pixel format.
        /* Set the format */
        memset(&videoIn->format,0,sizeof(videoIn->format));
        videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->format.fmt.pix.width = closest.getWidth();
        videoIn->format.fmt.pix.height = closest.getHeight();
        videoIn->format.fmt.pix.pixelformat = pixFmtsOrder[i].fmt;
        ret = ioctl(fd, VIDIOC_S_FMT, &videoIn->format);
        if (ret < 0) {
            ALOGE("Open: VIDIOC_S_FMT Failed: %s", strerror(errno));
            return ret;
        }

        // Query the image format actually in effect.
        /* Query for the effective video format used */
        memset(&videoIn->format,0,sizeof(videoIn->format));
        videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ret = ioctl(fd, VIDIOC_G_FMT, &videoIn->format);
        if (ret < 0) {
            ALOGE("Open: VIDIOC_G_FMT Failed: %s", strerror(errno));
            return ret;
        }

        /* Note VIDIOC_S_FMT may change width and height. */

        /* Buggy driver paranoia. */
        // Prepare the values used for cropping.
        unsigned int min = videoIn->format.fmt.pix.width * 2;
        if (videoIn->format.fmt.pix.bytesperline < min)
            videoIn->format.fmt.pix.bytesperline = min;
        min = videoIn->format.fmt.pix.bytesperline * videoIn->format.fmt.pix.height;
        if (videoIn->format.fmt.pix.sizeimage < min)
            videoIn->format.fmt.pix.sizeimage = min;

        /* Store the pixel formats we will use */
        videoIn->outWidth         = width;
        videoIn->outHeight        = height;
        videoIn->outFrameSize     = width * height << 1; // Calculate the expected output framesize in YUYV
        videoIn->capBytesPerPixel = pixFmtsOrder[i].bpp;

        // Start the cropping computation.
        /* Now calculate cropping margins, if needed, rounding to even */
        int startX = ((closest.getWidth() - width) >> 1) & (-2);
        int startY = ((closest.getHeight() - height) >> 1) & (-2);

        /* Avoid crashing if the mode found is smaller than the requested */
        if (startX < 0) {
            videoIn->outWidth += startX;
            startX = 0;
        }
        if (startY < 0) {
            videoIn->outHeight += startY;
            startY = 0;
        }

        /* Calculate the starting offset into each captured frame */
        videoIn->capCropOffset = (startX * videoIn->capBytesPerPixel) +
                                 (videoIn->format.fmt.pix.bytesperline * startY);

        ALOGI("Cropping from origin: %dx%d - size: %dx%d  (offset:%d)",
              startX,startY,
              videoIn->outWidth,videoIn->outHeight,
              videoIn->capCropOffset);

        /* sets video device frame rate */
        memset(&videoIn->params,0,sizeof(videoIn->params));
        videoIn->params.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->params.parm.capture.timeperframe.numerator = 1;
        videoIn->params.parm.capture.timeperframe.denominator = closest.getFps();

        // Set the fps.
        /* Set the framerate. If it fails, it wont be fatal */
        if (ioctl(fd,VIDIOC_S_PARM,&videoIn->params) < 0) {
            ALOGE("VIDIOC_S_PARM error: Unable to set %d fps", closest.getFps());
        }

        /* Gets video device defined frame rate (not real - consider it a maximum value) */
        if (ioctl(fd,VIDIOC_G_PARM,&videoIn->params) < 0) {
            ALOGE("VIDIOC_G_PARM - Unable to get timeperframe");
        }

        ALOGI("Actual format: (%d x %d), Fps: %d, pixfmt: '%c%c%c%c', bytesperline: %d",
              videoIn->format.fmt.pix.width,
              videoIn->format.fmt.pix.height,
              videoIn->params.parm.capture.timeperframe.denominator,
              videoIn->format.fmt.pix.pixelformat & 0xFF, (videoIn->format.fmt.pix.pixelformat >> 8) & 0xFF,
              (videoIn->format.fmt.pix.pixelformat >> 16) & 0xFF, (videoIn->format.fmt.pix.pixelformat >> 24) & 0xFF,
              videoIn->format.fmt.pix.bytesperline);

        /* Configure JPEG quality, if dealing with those formats */
        if (videoIn->format.fmt.pix.pixelformat == V4L2_PIX_FMT_JPEG ||
            videoIn->format.fmt.pix.pixelformat == V4L2_PIX_FMT_MJPEG) {

            // Set the JPEG quality to 100%.
            /* Get the compression format */
            ioctl(fd,VIDIOC_G_JPEGCOMP, &videoIn->jpegcomp);

            /* Set to maximum */
            videoIn->jpegcomp.quality = 100;

            /* Try to set it */
            if(ioctl(fd,VIDIOC_S_JPEGCOMP, &videoIn->jpegcomp) >= 0)
            {
                ALOGE("VIDIOC_S_COMP:");
                if(errno == EINVAL)
                {
                    videoIn->jpegcomp.quality = -1;   //not supported
                    ALOGE("   compression control not supported\n");
                }
            }

            /* gets video stream jpeg compression parameters */
            if(ioctl(fd,VIDIOC_G_JPEGCOMP, &videoIn->jpegcomp) >= 0) {
                ALOGD("VIDIOC_G_COMP:\n");
                ALOGD("    quality:      %i\n", videoIn->jpegcomp.quality);
                ALOGD("    APPn:         %i\n", videoIn->jpegcomp.APPn);
                ALOGD("    APP_len:      %i\n", videoIn->jpegcomp.APP_len);
                ALOGD("    APP_data:     %s\n", videoIn->jpegcomp.APP_data);
                ALOGD("    COM_len:      %i\n", videoIn->jpegcomp.COM_len);
                ALOGD("    COM_data:     %s\n", videoIn->jpegcomp.COM_data);
                ALOGD("    jpeg_markers: 0x%x\n", videoIn->jpegcomp.jpeg_markers);
            } else {
                ALOGE("VIDIOC_G_COMP:");
                if(errno == EINVAL) {
                    videoIn->jpegcomp.quality = -1;   //not supported
                    ALOGE("   compression control not supported\n");
                }
            }
        }

        /* Check if camera can handle NB_BUFFER buffers */
        memset(&videoIn->rb,0,sizeof(videoIn->rb));
        videoIn->rb.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->rb.memory = V4L2_MEMORY_MMAP;
        videoIn->rb.count = NB_BUFFER; // Defined in V4L2Camera.h as 4.

        // Ask the device to allocate the buffers.
        ret = ioctl(fd, VIDIOC_REQBUFS, &videoIn->rb);
        if (ret < 0) {
            ALOGE("Init: VIDIOC_REQBUFS failed: %s", strerror(errno));
            return ret;
        }

        for (int i = 0; i < NB_BUFFER; i++) {

            memset (&videoIn->buf, 0, sizeof (struct v4l2_buffer));
            videoIn->buf.index = i;
            videoIn->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            videoIn->buf.memory = V4L2_MEMORY_MMAP;

            ret = ioctl (fd, VIDIOC_QUERYBUF, &videoIn->buf);
            if (ret < 0) {
                ALOGE("Init: Unable to query buffer (%s)", strerror(errno));
                return ret;
            }

            // mmap the memory the kernel device just allocated into user space.
            videoIn->mem[i] = mmap (0,
                                    videoIn->buf.length,
                                    PROT_READ | PROT_WRITE,
                                    MAP_SHARED,
                                    fd,
                                    videoIn->buf.m.offset);

            if (videoIn->mem[i] == MAP_FAILED) {
                ALOGE("Init: Unable to map buffer (%s)", strerror(errno));
                return -1;
            }

            ret = ioctl(fd, VIDIOC_QBUF, &videoIn->buf);
            if (ret < 0) {
                ALOGE("Init: VIDIOC_QBUF Failed");
                return -1;
            }

            nQueued++;
        }

        // Reserve temporary buffers, if they will be needed
        size_t tmpbuf_size=0;
        switch (videoIn->format.fmt.pix.pixelformat)
        {
        case V4L2_PIX_FMT_JPEG:
        case V4L2_PIX_FMT_MJPEG:
        case V4L2_PIX_FMT_UYVY:
        case V4L2_PIX_FMT_YVYU:
        case V4L2_PIX_FMT_YYUV:
        case V4L2_PIX_FMT_YUV420: // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
        case V4L2_PIX_FMT_YVU420: // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
        case V4L2_PIX_FMT_Y41P:   // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
        case V4L2_PIX_FMT_NV12:
        case V4L2_PIX_FMT_NV21:
        case V4L2_PIX_FMT_NV16:
        case V4L2_PIX_FMT_NV61:
        case V4L2_PIX_FMT_SPCA501:
        case V4L2_PIX_FMT_SPCA505:
        case V4L2_PIX_FMT_SPCA508:
        case V4L2_PIX_FMT_GREY:
        case V4L2_PIX_FMT_Y16:

        case V4L2_PIX_FMT_YUYV:
            //  YUYV doesn't need a temp buffer but we will set it if/when
            //  video processing disable control is checked (bayer processing).
            //  (logitech cameras only)
            break;

        case V4L2_PIX_FMT_SGBRG8: //0
        case V4L2_PIX_FMT_SGRBG8: //1
        case V4L2_PIX_FMT_SBGGR8: //2
        case V4L2_PIX_FMT_SRGGB8: //3
            // Raw 8 bit bayer
            // when grabbing use:
            //    bayer_to_rgb24(bayer_data, RGB24_data, width, height, 0..3)
            //    rgb2yuyv(RGB24_data, pFrameBuffer, width, height)

            // alloc a temp buffer for converting to YUYV
            // rgb buffer for decoding bayer data
            tmpbuf_size = videoIn->format.fmt.pix.width * videoIn->format.fmt.pix.height * 3;
            if (videoIn->tmpBuffer)
                free(videoIn->tmpBuffer);
            videoIn->tmpBuffer = (uint8_t*)calloc(1, tmpbuf_size);
            if (!videoIn->tmpBuffer) {
                ALOGE("couldn't calloc %lu bytes of memory for frame buffer\n",
                      (unsigned long) tmpbuf_size);
                return -ENOMEM;
            }

            break;

        case V4L2_PIX_FMT_RGB24: //rgb or bgr (8-8-8)
        case V4L2_PIX_FMT_BGR24:
            break;

        default:
            ALOGE("Should never arrive (1)- exit fatal !!\n");
            return -1;
        }

        return 0;
    }

    In summary, Init does the following:

    1. From the resolutions and frame rates the device supports, find the one that is at least as large as, and closest to, what the user asked for, and configure the device with it. Because the resolution actually used may be larger than requested, a crop offset is also computed so that the excess can be trimmed off each frame when the image is used later.
    2. If the device delivers JPEG or MJPEG compression, set the picture quality to 100%.
    3. Ask the device to allocate buffers, map them into user space as videoIn->mem, and push them onto the device's incoming queue. At this point the camera is fully prepared to capture images and is only waiting for the streamon command.
  • StartStreaming

    StartStreaming is very simple: apart from issuing the STREAMON ioctl, it only sets videoIn's isStreaming flag:

    hardware/libcamera/V4L2Camera.cpp

     
    int V4L2Camera::StartStreaming ()
    {
        enum v4l2_buf_type type;
        int ret;

        if (!videoIn->isStreaming) {
            type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

            ret = ioctl (fd, VIDIOC_STREAMON, &type);
            if (ret < 0) {
                ALOGE("StartStreaming: Unable to start capture: %s", strerror(errno));
                return ret;
            }

            videoIn->isStreaming = true;
        }

        return 0;
    }

    Once this function has been called, the camera is running.

  • GrabRawFrame

    After StartStreaming, the captured content still has to be fetched from the camera, which is what GrabRawFrame is for.

    hardware/libcamera/V4L2Camera.cpp

     
    void V4L2Camera::GrabRawFrame (void *frameBuffer, int maxSize)
    {
        LOG_FRAME("V4L2Camera::GrabRawFrame: frameBuffer:%p, len:%d",frameBuffer,maxSize);
        int ret;

        /* DQ */
        // Dequeue one frame of data with DQBUF.
        memset(&videoIn->buf,0,sizeof(videoIn->buf));
        videoIn->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->buf.memory = V4L2_MEMORY_MMAP;
        ret = ioctl(fd, VIDIOC_DQBUF, &videoIn->buf);
        if (ret < 0) {
            ALOGE("GrabPreviewFrame: VIDIOC_DQBUF Failed");
            return;
        }

        nDequeued++;

        // Calculate the stride of the output image (YUYV) in bytes
        int strideOut = videoIn->outWidth << 1;

        // And the pointer to the start of the image
        // Init computed the crop offset; adding it here crops the image to the
        // size the caller asked for in Init. The resulting src is the start
        // address of the image.
        uint8_t* src = (uint8_t*)videoIn->mem[videoIn->buf.index] + videoIn->capCropOffset;

        LOG_FRAME("V4L2Camera::GrabRawFrame - Got Raw frame (%dx%d) (buf:%d@0x%p, len:%d)",videoIn->format.fmt.pix.width,videoIn->format.fmt.pix.height,videoIn->buf.index,src,videoIn->buf.bytesused);

        /* Avoid crashing! - Make sure there is enough room in the output buffer! */
        if (maxSize < videoIn->outFrameSize) {

            ALOGE("V4L2Camera::GrabRawFrame: Insufficient space in output buffer: Required: %d, Got %d - DROPPING FRAME",videoIn->outFrameSize,maxSize);

        } else {

            // Dispatch on the pixel format; each branch ends with the image
            // data stored into the output parameter frameBuffer.
            switch (videoIn->format.fmt.pix.pixelformat)
            {
            case V4L2_PIX_FMT_JPEG:
            case V4L2_PIX_FMT_MJPEG:
                if(videoIn->buf.bytesused <= HEADERFRAME1) {
                    // Prevent crash on empty image
                    ALOGE("Ignoring empty buffer ...\n");
                    break;
                }

                if (jpeg_decode((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight) < 0) {
                    ALOGE("jpeg decode errors\n");
                    break;
                }
                break;

            case V4L2_PIX_FMT_UYVY:
                uyvy_to_yuyv((uint8_t*)frameBuffer, strideOut,
                             src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_YVYU:
                yvyu_to_yuyv((uint8_t*)frameBuffer, strideOut,
                             src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_YYUV:
                yyuv_to_yuyv((uint8_t*)frameBuffer, strideOut,
                             src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_YUV420:
                yuv420_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_YVU420:
                yvu420_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_NV12:
                nv12_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_NV21:
                nv21_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_NV16:
                nv16_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_NV61:
                nv61_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_Y41P:
                y41p_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_GREY:
                grey_to_yuyv((uint8_t*)frameBuffer, strideOut,
                             src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_Y16:
                y16_to_yuyv((uint8_t*)frameBuffer, strideOut,
                            src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_SPCA501:
                s501_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_SPCA505:
                s505_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_SPCA508:
                s508_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_YUYV:
                {
                    int h;
                    uint8_t* pdst = (uint8_t*)frameBuffer;
                    uint8_t* psrc = src;
                    int ss = videoIn->outWidth << 1;
                    for (h = 0; h < videoIn->outHeight; h++) {
                        memcpy(pdst,psrc,ss);
                        pdst += strideOut;
                        psrc += videoIn->format.fmt.pix.bytesperline;
                    }
                }
                break;

            case V4L2_PIX_FMT_SGBRG8: //0
                bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 0);
                rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                             (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_SGRBG8: //1
                bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 1);
                rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                             (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_SBGGR8: //2
                bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 2);
                rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                             (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_SRGGB8: //3
                bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 3);
                rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                             (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_RGB24:
                rgb_to_yuyv((uint8_t*) frameBuffer, strideOut,
                            src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                break;

            case V4L2_PIX_FMT_BGR24:
                bgr_to_yuyv((uint8_t*) frameBuffer, strideOut,
                            src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                break;

            default:
                ALOGE("error grabbing: unknown format: %i\n", videoIn->format.fmt.pix.pixelformat);
                break;
            }

            LOG_FRAME("V4L2Camera::GrabRawFrame - Copied frame to destination 0x%p",frameBuffer);
        }

        // Once the buffer has been consumed, enqueue it again for reuse.
        /* And Queue the buffer again */
        ret = ioctl(fd, VIDIOC_QBUF, &videoIn->buf);
        if (ret < 0) {
            ALOGE("GrabPreviewFrame: VIDIOC_QBUF Failed");
            return;
        }

        nQueued++;

        LOG_FRAME("V4L2Camera::GrabRawFrame - Queued buffer");

    }

4.2 Framework

Java SDK layer

The Hardware analysis could go bottom-up: first V4L2Camera, then CameraHardware, then CameraFactory. For the framework there is far too much code to read bottom-up, so let's start from the camera part of the SDK instead. The HAL and the framework are both C++ and implement Android's lower layers, but when developing an app you use the SDK, which is written in Java. Java can call C++ code through JNI; let's see how the SDK does it.

First consider a snippet that starts a camera preview:

 
    Camera cam = Camera.open();           // get a camera instance
    cam.setPreviewDisplay(surfaceHolder); // set the preview window
    cam.startPreview();                   // start previewing

The first line opens the default camera; Camera.open(1) would open a different one. In the SDK these functions are defined in Camera.java; open is:

frameworks/base/core/java/android/hardware/Camera.java

 
    public static Camera open(int cameraId) {
        return new Camera(cameraId);
    }

    public static Camera open() {
        int numberOfCameras = getNumberOfCameras();
        CameraInfo cameraInfo = new CameraInfo();
        for (int i = 0; i < numberOfCameras; i++) {
            getCameraInfo(i, cameraInfo);
            if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
                return new Camera(i);
            }
        }
        return null;
    }

So calling open with no arguments actually opens the first back-facing camera; either way, open returns a Camera object. Here we meet a familiar function, getNumberOfCameras: recall that camera_module_t in the HAL carries, besides the mandatory hw_module_t, two function pointers, get_number_of_cameras and get_camera_info. Presumably this getNumberOfCameras ends up calling get_number_of_cameras, so let's look at it:

 
    /**
     * Returns the number of physical cameras available on this device.
     */
    public native static int getNumberOfCameras();

In Camera.java this is only a declaration: it is a native function, so we go looking for its JNI definition.

frameworks/base/core/jni/android_hardware_Camera.cpp

 
    static jint android_hardware_Camera_getNumberOfCameras(JNIEnv *env, jobject thiz)
    {
        return Camera::getNumberOfCameras();
    }

Next we look for this C++ Camera class, which already lives in the Android framework; getNumberOfCameras is actually defined in its parent class, CameraBase:

frameworks/av/camera/CameraBase.cpp

 
    template <typename TCam, typename TCamTraits>
    int CameraBase<TCam, TCamTraits>::getNumberOfCameras() {
        const sp<ICameraService> cs = getCameraService();

        if (!cs.get()) {
            // as required by the public Java APIs
            return 0;
        }
        return cs->getNumberOfCameras();
    }

As you can see, this simply fetches the CameraService and calls its getNumberOfCameras. Here is getCameraService:

frameworks/av/camera/CameraBase.cpp

 
    template <typename TCam, typename TCamTraits>
    const sp<ICameraService>& CameraBase<TCam, TCamTraits>::getCameraService()
    {
        Mutex::Autolock _l(gLock);
        if (gCameraService.get() == 0) {
            sp<IServiceManager> sm = defaultServiceManager();
            sp<IBinder> binder;
            do {
                binder = sm->getService(String16(kCameraServiceName));
                if (binder != 0) {
                    break;
                }
                ALOGW("CameraService not published, waiting...");
                usleep(kCameraServicePollDelay);
            } while(true);
            if (gDeathNotifier == NULL) {
                gDeathNotifier = new DeathNotifier();
            }
            binder->linkToDeath(gDeathNotifier);
            gCameraService = interface_cast<ICameraService>(binder);
        }
        ALOGE_IF(gCameraService == 0, "no CameraService!?");
        return gCameraService;
    }

gCameraService is a singleton of type sp<ICameraService>: the first call to this function initializes it, and every later call simply returns the variable. During initialization, defaultServiceManager obtains a service manager sm, and sm->getService fetches the CameraService through it. defaultServiceManager lives in frameworks/native/libs/binder/IServiceManager.cpp and belongs to binder IPC, which is beyond the scope of this article; I may cover it in a separate post.

After open come setPreviewDisplay and startPreview, both likewise native and implemented similarly; below we only look at startPreview:

frameworks/base/core/jni/android_hardware_Camera.cpp

 
    static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz)
    {
        ALOGV("startPreview");
        sp<Camera> camera = get_native_camera(env, thiz, NULL);
        if (camera == 0) return;

        if (camera->startPreview() != NO_ERROR) {
            jniThrowRuntimeException(env, "startPreview failed");
            return;
        }
    }

This code first obtains a Camera object and then calls startPreview on it. get_native_camera is implemented as follows:

 
    sp<Camera> get_native_camera(JNIEnv *env, jobject thiz, JNICameraContext** pContext)
    {
        sp<Camera> camera;
        Mutex::Autolock _l(sLock);
        JNICameraContext* context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context));
        if (context != NULL) {
            camera = context->getCamera();
        }
        ALOGV("get_native_camera: context=%p, camera=%p", context, camera.get());
        if (camera == 0) {
            jniThrowRuntimeException(env,
                    "Camera is being used after Camera.release() was called");
        }

        if (pContext != NULL) *pContext = context;
        return camera;
    }

This function reads a pointer to a JNICameraContext object with env->GetLongField and can then obtain the Camera object via getCamera; that JNICameraContext pointer was stored earlier in native_setup:

 
     1: static jint android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
     2:     jobject weak_this, jint cameraId, jint halVersion, jstring clientPackageName)
     3: {
     4:     // Convert jstring to String16
     5:     const char16_t *rawClientName = env->GetStringChars(clientPackageName, NULL);
     6:     jsize rawClientNameLen = env->GetStringLength(clientPackageName);
     7:     String16 clientName(rawClientName, rawClientNameLen);
     8:     env->ReleaseStringChars(clientPackageName, rawClientName);
     9:
    10:     sp<Camera> camera;
    11:     if (halVersion == CAMERA_HAL_API_VERSION_NORMAL_CONNECT) {
    12:         // Default path: hal version is don't care, do normal camera connect.
    13:         camera = Camera::connect(cameraId, clientName,
    14:                 Camera::USE_CALLING_UID);
    15:     } else {
    16:         jint status = Camera::connectLegacy(cameraId, halVersion, clientName,
    17:                 Camera::USE_CALLING_UID, camera);
    18:         if (status != NO_ERROR) {
    19:             return status;
    20:         }
    21:     }
    22:
    23:     if (camera == NULL) {
    24:         return -EACCES;
    25:     }
    26:
    27:     // make sure camera hardware is alive
    28:     if (camera->getStatus() != NO_ERROR) {
    29:         return NO_INIT;
    30:     }
    31:
    32:     jclass clazz = env->GetObjectClass(thiz);
    33:     if (clazz == NULL) {
    34:         // This should never happen
    35:         jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
    36:         return INVALID_OPERATION;
    37:     }
    38:
    39:     // We use a weak reference so the Camera object can be garbage collected.
    40:     // The reference is only used as a proxy for callbacks.
    41:     sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    42:     context->incStrong((void*)android_hardware_Camera_native_setup);
    43:     camera->setListener(context);
    44:
    45:     // save context in opaque field
    46:     env->SetLongField(thiz, fields.context, (jlong)context.get());
    47:     return NO_ERROR;
    48: }

Note line 13: Camera::connect returns a Camera object. This is where we finally cross from the SDK layer into the framework layer.

class Camera

Under frameworks/av/camera/ there is a Camera.cpp defining the Camera class. As the previous subsection showed, the SDK layer talks to this class directly through JNI and thereby enters the framework layer, so this class can be seen as the gateway into the framework.

The Camera class multiply inherits from CameraBase and BnCameraClient, and CameraBase is a template:

frameworks/av/include/camera/CameraBase.h

 
    template <typename TCam>
    struct CameraTraits {
    };

    template <typename TCam, typename TCamTraits = CameraTraits<TCam> >
    class CameraBase : public IBinder::DeathRecipient
    {
    public:
        typedef typename TCamTraits::TCamListener       TCamListener;
        typedef typename TCamTraits::TCamUser           TCamUser;
        typedef typename TCamTraits::TCamCallbacks     TCamCallbacks;
        typedef typename TCamTraits::TCamConnectService TCamConnectService;
    };

Besides templates this also uses template specialization: when Camera actually inherits CameraBase, TCam is the Camera type and TCamTraits is CameraTraits<Camera>, but the specialization used is not the empty CameraTraits in CameraBase.h — it is the explicit specialization defined in Camera.h. Without it, CameraTraits would be empty and there would be no TCamTraits::TCamListener and friends:

frameworks/av/include/camera/Camera.h

 
    template <>
    struct CameraTraits<Camera>
    {
        typedef CameraListener        TCamListener;
        typedef ICamera               TCamUser;
        typedef ICameraClient         TCamCallbacks;
        typedef status_t (ICameraService::*TCamConnectService)(const sp<ICameraClient>&,
                                                               int, const String16&, int,
                                                               /*out*/
                                                               sp<ICamera>&);
        static TCamConnectService    fnConnectService;
    };
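
The pattern is worth a standalone illustration. In this minimal sketch (mine, not AOSP code), the primary template is deliberately empty and an explicit specialization supplies the typedefs the base class expects; the base only compiles for types whose traits have been specialized:

    template <typename T> struct Traits {};   // primary template: intentionally empty

    struct Camera;                            // the class being described

    template <> struct Traits<Camera> {       // explicit specialization fills in the types
        typedef int TCamListener;             // stand-in for CameraListener etc.
    };

    template <typename T, typename Tr = Traits<T> >
    struct Base {
        // Compiles only when Tr actually provides TCamListener,
        // i.e. when Traits<T> has been specialized.
        typedef typename Tr::TCamListener TCamListener;
    };

    Base<Camera>::TCamListener x = 0;         // OK; using Base<int> the same way would not compile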

mediaserver

mediaserver is a standalone process with its own main function; the system launches it as a daemon at boot. It manages the multimedia-related services Android apps need, such as audio, video playback, and the camera, and communicates with apps through Android's binder mechanism.

frameworks/av/media/mediaserver/main_mediaserver.cpp

 
    int main(int argc __unused, char** argv)
    {
        signal(SIGPIPE, SIG_IGN);
        char value[PROPERTY_VALUE_MAX];
        bool doLog = (property_get("ro.test_harness", value, "0") > 0) && (atoi(value) == 1);
        pid_t childPid;
        if (doLog && (childPid = fork()) != 0) {
            // ... omitted
        } else {
            // all other services
            if (doLog) {
                prctl(PR_SET_PDEATHSIG, SIGKILL); // if parent media.log dies before me, kill me also
                setpgid(0, 0);                    // but if I die first, don't kill my parent
            }
            sp<ProcessState> proc(ProcessState::self());
            sp<IServiceManager> sm = defaultServiceManager();
            ALOGI("ServiceManager: %p", sm.get());
            AudioFlinger::instantiate();
            MediaPlayerService::instantiate();
            CameraService::instantiate();
            AudioPolicyService::instantiate();
            SoundTriggerHwService::instantiate();
            registerExtensions();
            ProcessState::self()->startThreadPool();
            IPCThreadState::self()->joinThreadPool();
        }
    }

As you can see, main calls instantiate on each of the major services. The line of interest here is CameraService::instantiate(), which brings up the camera service, so let's look at the CameraService class. Its declaration alone runs to about five hundred lines and defines inner classes such as BasicClient and Client, yet it contains no instantiate function. Since CameraService multiply inherits from four classes — BinderService<CameraService>, BnCameraService, DeathRecipient, and camera_module_callbacks_t — instantiate presumably comes from one of them, and indeed it turns up in BinderService.h:

frameworks/native/include/binder/BinderService.h

 
    template<typename SERVICE>
    class BinderService
    {
    public:
        static status_t publish(bool allowIsolated = false) {
            sp<IServiceManager> sm(defaultServiceManager());
            return sm->addService(
                    String16(SERVICE::getServiceName()),
                    new SERVICE(), allowIsolated);
        }

        static void publishAndJoinThreadPool(bool allowIsolated = false) {
            publish(allowIsolated);
            joinThreadPool();
        }

        static void instantiate() { publish(); }

        static status_t shutdown() { return NO_ERROR; }

    private:
        static void joinThreadPool() {
            sp<ProcessState> ps(ProcessState::self());
            ps->startThreadPool();
            ps->giveThreadPoolName();
            IPCThreadState::self()->joinThreadPool();
        }
    };

instantiate calls publish, which first obtains the process-wide unique IServiceManager instance and registers a new service with it. Given that CameraService inherits BinderService<CameraService>, the registration effectively amounts to:

 
    sm->addService(
            String16(CameraService::getServiceName()),
            new CameraService(), allowIsolated);

So far main has merely registered the CameraService. When does it actually get used? That is what the last two lines of main are for:

 
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();

From here on we are into binder IPC, which is outside the scope of this article; see my other post on the topic.

[Figure: android_camera_framework_uml.png — UML of the framework-side camera classes]

CameraHardwareInterface

CameraService

  • BasicClient
  • Client