Android Camera TakePicture Process Analysis

Continuing from the previous article, this post walks through the concrete implementation flow of camera features such as taking a picture.
The Camera subsystem uses a client/server architecture: the client and the server live in two different processes and communicate through Android's Binder mechanism.
This series analyzes the camera system step by step, from the Android camera application down to the hardware abstraction layer. It starts with the CameraService initialization process, and then studies the camera subsystem end to end by following how the upper-layer app opens the camera -> runs preview -> takes pictures and focuses.

1. CameraService Initialization
frameworks/base/media/mediaserver/Main_MediaServer.cpp
CameraService is initialized inside MediaServer: the camera service is brought up in MediaServer's main() function, as described in the previous article.
The instantiate() method of CameraService creates the CameraService instance and performs the corresponding initialization; it is defined in the parent class BinderService: frameworks/base/include/binder/BinderService.h
The camera service initialization therefore first creates the CameraService instance and then registers it with ServiceManager. Its startup is driven by init.rc: mediaserver is started there, and it in turn brings up CameraService. The relevant entries are:
system/core/rootdir/init.rc
service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm

service media /system/bin/mediaserver
    class main
    user media
    group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc
    ioprio rt 4
During registration and startup, CameraService also performs some initialization of its own, mainly the following:
frameworks/base/services/camera/libcameraservice/CameraService.cpp
CameraService::CameraService()
    :mSoundRef(0), mModule(0)
{
    LOGI("CameraService started (pid=%d)", getpid());
    gCameraService = this;
}

void CameraService::onFirstRef()
{
    BnCameraService::onFirstRef();

    if (hw_get_module(CAMERA_HARDWARE_MODULE_ID,
                (const hw_module_t **)&mModule) < 0) {
        LOGE("Could not load camera HAL module");
        mNumberOfCameras = 0;
    }
    else {
        mNumberOfCameras = mModule->get_number_of_cameras();
        if (mNumberOfCameras > MAX_CAMERAS) {
            LOGE("Number of cameras(%d) > MAX_CAMERAS(%d).",
                    mNumberOfCameras, MAX_CAMERAS);
            mNumberOfCameras = MAX_CAMERAS;
        }
        for (int i = 0; i < mNumberOfCameras; i++) {
            setCameraFree(i);
        }
    }
}
In the initialization code above, hw_get_module() loads the camera HAL module, and the number of supported cameras is then queried from the module and stored in mNumberOfCameras.
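From the application side, this camera count is what the SDK exposes through Camera.getNumberOfCameras() and Camera.getCameraInfo(). A small, hypothetical Java sketch (the class name CameraEnumerator is made up for illustration):

import android.hardware.Camera;

public class CameraEnumerator {
    // Returns the id of the first back-facing camera, or -1 if none exists.
    public static int findFirstBackCamera() {
        int count = Camera.getNumberOfCameras();   // backed by the HAL's get_number_of_cameras()
        Camera.CameraInfo info = new Camera.CameraInfo();
        for (int i = 0; i < count; i++) {
            Camera.getCameraInfo(i, info);         // backed by the HAL's get_camera_info()
            if (info.facing == Camera.CameraInfo.CAMERA_FACING_BACK) {
                return i;
            }
        }
        return -1;
    }
}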

2. How the Application Connects to the Camera Service
When the camera application starts, it first establishes a connection with CameraService. The application-side code is not analyzed here; it was covered in the previous article. The flow chart below summarizes the connection sequence.

[Flow chart: how the application connects to CameraService]

As the flow chart shows, the framework-level Camera class is used at several points while requesting the service. It is defined in frameworks/base/core/java/android/hardware/Camera.java and is exactly the interface between the APP layer and the JNI layer of the Camera subsystem: upward it offers applications the various methods for operating the camera, and downward it calls into JNI to implement its own interface. The Camera class is defined as follows:

public class Camera {
    public static Camera open(int cameraId) {
        return new Camera(cameraId);
    }

    .................

    Camera(int cameraId) {
        Looper looper;
        if ((looper = Looper.myLooper()) != null) {
            mEventHandler = new EventHandler(this, looper);
        } else if ((looper = Looper.getMainLooper()) != null) {
            mEventHandler = new EventHandler(this, looper);
        } else {
            mEventHandler = null;
        }
        native_setup(new WeakReference<Camera>(this), cameraId);
    }
}
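Notice how the constructor binds its EventHandler to the Looper of the calling thread, falling back to the main Looper. As a hypothetical usage sketch (the class name CameraOpener is made up), opening the camera from a background HandlerThread makes the callbacks discussed later arrive on that worker thread instead of the UI thread:

import android.hardware.Camera;
import android.os.Handler;
import android.os.HandlerThread;

public class CameraOpener {
    private Camera mCamera;

    public void openAsync() {
        HandlerThread thread = new HandlerThread("CameraThread");
        thread.start();
        new Handler(thread.getLooper()).post(new Runnable() {
            @Override
            public void run() {
                // Looper.myLooper() is non-null here, so the EventHandler created
                // in the Camera constructor is bound to this worker thread.
                mCamera = Camera.open(0);
            }
        });
    }
}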
Now let's analyze the whole camera takePicture flow step by step, starting from takePicture in the app.
In the app, takePicture is called inside the capture() method: packages/apps/OMAPCamera/src/com/ti/omap4/android/camera/Camera.java (the camera APK)
@Override
public boolean capture() {
    synchronized (mCameraStateLock) {
        // If we are already in the middle of taking a snapshot then ignore.
        if (mCameraState == SNAPSHOT_IN_PROGRESS || mCameraDevice == null) {
            return false;
        }
        mCaptureStartTime = System.currentTimeMillis();
        mPostViewPictureCallbackTime = 0;
        mJpegImageData = null;

        // Set rotation and gps data.
        Util.setRotationParameter(mParameters, mCameraId, mOrientation);
        Location loc = mLocationManager.getCurrentLocation();
        Util.setGpsParameters(mParameters, loc);
        if (canSetParameters()) {
            mCameraDevice.setParameters(mParameters);
        }

        try {
            mCameraDevice.takePicture(mShutterCallback, mRawPictureCallback,
                    mPostViewPictureCallback, new JpegPictureCallback(loc));
        } catch (RuntimeException e) {
            e.printStackTrace();
            return false;
        }
        mFaceDetectionStarted = false;
        setCameraState(SNAPSHOT_IN_PROGRESS);
        return true;
    }
}
The takePicture() called here is the one defined in the framework layer: frameworks/base/core/java/android/hardware/Camera.java

public final void takePicture(ShutterCallback shutter, PictureCallback raw,
        PictureCallback postview, PictureCallback jpeg) {
    mShutterCallback = shutter;
    mRawImageCallback = raw;
    mPostviewCallback = postview;
    mJpegCallback = jpeg;

    // If callback is not set, do not send me callbacks.
    int msgType = 0;
    if (mShutterCallback != null) {
        msgType |= CAMERA_MSG_SHUTTER;
    }
    if (mRawImageCallback != null) {
        msgType |= CAMERA_MSG_RAW_IMAGE;
    }
    if (mPostviewCallback != null) {
        msgType |= CAMERA_MSG_POSTVIEW_FRAME;
    }
    if (mJpegCallback != null) {
        msgType |= CAMERA_MSG_COMPRESSED_IMAGE;
    }

    native_takePicture(msgType);
}
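So msgType is simply a bitmask built from whichever callbacks the app passed in. For example, in a hypothetical app that only cares about the shutter event and the final JPEG (the class name SimpleCapture is made up), native_takePicture() receives CAMERA_MSG_SHUTTER | CAMERA_MSG_COMPRESSED_IMAGE:

import android.hardware.Camera;

public class SimpleCapture {
    public void capture(Camera camera) {
        Camera.ShutterCallback shutter = new Camera.ShutterCallback() {
            @Override
            public void onShutter() {
                // Shutter moment: play a sound or flash the screen.
            }
        };
        Camera.PictureCallback jpeg = new Camera.PictureCallback() {
            @Override
            public void onPictureTaken(byte[] data, Camera cam) {
                // 'data' is the compressed JPEG delivered as CAMERA_MSG_COMPRESSED_IMAGE.
            }
        };
        // raw and postview are null, so their bits are not set in msgType.
        camera.takePicture(shutter, null, null, jpeg);
    }
}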
Here the callback objects are stored and native_takePicture() is invoked through JNI with the resulting msgType: frameworks/base/core/jni/android_hardware_Camera.cpp
static void android_hardware_Camera_takePicture(JNIEnv *env, jobject thiz, int msgType)
{
    LOGV("takePicture");
    JNICameraContext* context;
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    if (camera == 0) return;

    /*
     * When CAMERA_MSG_RAW_IMAGE is requested, if the raw image callback
     * buffer is available, CAMERA_MSG_RAW_IMAGE is enabled to get the
     * notification _and_ the data; otherwise, CAMERA_MSG_RAW_IMAGE_NOTIFY
     * is enabled to receive the callback notification but no data.
     *
     * Note that CAMERA_MSG_RAW_IMAGE_NOTIFY is not exposed to the
     * Java application.
     */
    if (msgType & CAMERA_MSG_RAW_IMAGE) {
        LOGV("Enable raw image callback buffer");
        if (!context->isRawImageCallbackBufferAvailable()) {
            LOGV("Enable raw image notification, since no callback buffer exists");
            msgType &= ~CAMERA_MSG_RAW_IMAGE;
            msgType |= CAMERA_MSG_RAW_IMAGE_NOTIFY;
        }
    }

    if (camera->takePicture(msgType) != NO_ERROR) {
        jniThrowRuntimeException(env, "takePicture failed");
        return;
    }
}
Here takePicture() is called on the native Camera object, i.e. the camera client's takePicture() method: frameworks/base/libs/camera/Camera.cpp
status_t Camera::takePicture(int msgType, const String8& params)
{
    LOGV("takePicture: 0x%x", msgType);
    sp<ICamera> c = mCamera;
    if (c == 0) return NO_INIT;
    return c->takePicture(msgType, params);
}
The client-side takePicture() then calls, over Binder, the server-side takePicture() in CameraService: frameworks/base/services/camera/libcameraservice/CameraService.cpp

// take a picture - image is returned in callback
#ifdef OMAP_ENHANCEMENT_CPCAM
status_t CameraService::Client::takePicture(int msgType, const String8& params) {
#else
status_t CameraService::Client::takePicture(int msgType) {
#endif
    LOG1("takePicture (pid %d): 0x%x", getCallingPid(), msgType);

    Mutex::Autolock lock(mLock);
    status_t result = checkPidAndHardware();
    if (result != NO_ERROR) return result;

    if ((msgType & CAMERA_MSG_RAW_IMAGE) &&
        (msgType & CAMERA_MSG_RAW_IMAGE_NOTIFY)) {
        LOGE("CAMERA_MSG_RAW_IMAGE and CAMERA_MSG_RAW_IMAGE_NOTIFY"
                " cannot be both enabled");
        return BAD_VALUE;
    }

    // We only accept picture related message types
    // and ignore other types of messages for takePicture().
    int picMsgType = msgType
                        & (CAMERA_MSG_SHUTTER |
                           CAMERA_MSG_POSTVIEW_FRAME |
                           CAMERA_MSG_RAW_IMAGE |
#ifdef OMAP_ENHANCEMENT
                           CAMERA_MSG_RAW_BURST |
#endif
                           CAMERA_MSG_RAW_IMAGE_NOTIFY |
                           CAMERA_MSG_COMPRESSED_IMAGE);
#ifdef OMAP_ENHANCEMENT
    picMsgType |= CAMERA_MSG_COMPRESSED_BURST_IMAGE;
#endif

    enableMsgType(picMsgType);

#ifdef OMAP_ENHANCEMENT
    // make sure the other capture messages are disabled
    picMsgType = ~picMsgType &
                 (CAMERA_MSG_SHUTTER |
                  CAMERA_MSG_POSTVIEW_FRAME |
                  CAMERA_MSG_RAW_IMAGE |
                  CAMERA_MSG_RAW_BURST |
                  CAMERA_MSG_RAW_IMAGE_NOTIFY |
                  CAMERA_MSG_COMPRESSED_IMAGE |
                  CAMERA_MSG_COMPRESSED_BURST_IMAGE);
    disableMsgType(picMsgType);
#endif

#ifdef OMAP_ENHANCEMENT_CPCAM
    return mHardware->takePicture(params);
#else
    return mHardware->takePicture();
#endif
}
After enabling the picture-related message types, the server-side takePicture() finally calls the HAL layer's (hardware interface layer's) takePicture(): frameworks/base/services/camera/libcameraservice/CameraHardwareInterface.h
/**
 * Take a picture.
 */
#ifdef OMAP_ENHANCEMENT_CPCAM
    status_t takePicture(const ShotParameters &params)
    {
        LOGV("%s(%s)", __FUNCTION__, mName.string());
        if (mDevice->ops->take_picture)
            return mDevice->ops->take_picture(mDevice,
                                              params.flatten().string());
        return INVALID_OPERATION;
    }
#else
    status_t takePicture()
    {
        LOGV("%s(%s)", __FUNCTION__, mName.string());
        if (mDevice->ops->take_picture)
            // From here the call goes through the V4L2 subsystem down to the
            // kernel driver; that part will be studied in detail later.
            return mDevice->ops->take_picture(mDevice);
        return INVALID_OPERATION;
    }
#endif
The next focus is the data callback path. This is the most important part of the camera flow and, in my view, also the hardest part to understand, so it is worth spending some extra time on. Let's get started.
First we have to go back to the moment the Camera client connects to the service. As the previous article on initialization showed, when the client connects to the service it first calls the client-side connect() method, which obtains the camera service and then calls the server-side connect(). For easier understanding the relevant code is listed again:
The server-side connect() is defined in: frameworks/base/services/camera/libcameraservice/CameraService.cpp
sp<ICamera> CameraService::connect(
        const sp<ICameraClient>& cameraClient, int cameraId) {
    int callingPid = getCallingPid();
    sp<CameraHardwareInterface> hardware = NULL;

    LOG1("CameraService::connect E (pid %d, id %d)", callingPid, cameraId);

    if (!mModule) {
        LOGE("Camera HAL module not loaded");
        return NULL;
    }

    sp<Client> client;
    if (cameraId < 0 || cameraId >= mNumberOfCameras) {
        LOGE("CameraService::connect X (pid %d) rejected (invalid cameraId %d).",
            callingPid, cameraId);
        return NULL;
    }

    char value[PROPERTY_VALUE_MAX];
    property_get("sys.secpolicy.camera.disabled", value, "0");
    if (strcmp(value, "1") == 0) {
        // Camera is disabled by DevicePolicyManager.
        LOGI("Camera is disabled. connect X (pid %d) rejected", callingPid);
        return NULL;
    }

    Mutex::Autolock lock(mServiceLock);
    if (mClient[cameraId] != 0) {
        client = mClient[cameraId].promote();
        if (client != 0) {
            if (cameraClient->asBinder() == client->getCameraClient()->asBinder()) {
                LOG1("CameraService::connect X (pid %d) (the same client)",
                    callingPid);
                return client;
            } else {
                LOGW("CameraService::connect X (pid %d) rejected (existing client).",
                    callingPid);
                return NULL;
            }
        }
        mClient[cameraId].clear();
    }

    if (mBusy[cameraId]) {
        LOGW("CameraService::connect X (pid %d) rejected"
             " (camera %d is still busy).", callingPid, cameraId);
        return NULL;
    }

    struct camera_info info;
    if (mModule->get_camera_info(cameraId, &info) != OK) {
        LOGE("Invalid camera id %d", cameraId);
        return NULL;
    }

    char camera_device_name[10];
    snprintf(camera_device_name, sizeof(camera_device_name), "%d", cameraId);

    hardware = new CameraHardwareInterface(camera_device_name);
    if (hardware->initialize(&mModule->common) != OK) {
        hardware.clear();
        return NULL;
    }

    client = new Client(this, cameraClient, hardware, cameraId, info.facing, callingPid);
    mClient[cameraId] = client;
    LOG1("CameraService::connect X");
    return client;
}
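Each rejection path above (HAL not loaded, invalid cameraId, camera disabled by policy, an existing client, a busy camera) makes connect() return NULL, which the application ultimately sees as a RuntimeException thrown by Camera.open(). A hypothetical sketch of handling this on the app side (the class name SafeCameraOpen is made up):

import android.hardware.Camera;

public class SafeCameraOpen {
    // Returns an opened Camera, or null if CameraService rejected the connection.
    public static Camera tryOpen(int cameraId) {
        try {
            return Camera.open(cameraId);
        } catch (RuntimeException e) {
            // connect() returned NULL, or the camera hardware failed to initialize.
            return null;
        }
    }
}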
The most important part of connect() is near its end: once the connection succeeds, a new Client is created. Client is an inner class of CameraService, so at this point its constructor runs, and this is exactly where our callback functions are installed. Here is the code:

CameraService::Client::Client(const sp<CameraService>& cameraService,
        const sp<ICameraClient>& cameraClient,
        const sp<CameraHardwareInterface>& hardware,
        int cameraId, int cameraFacing, int clientPid) {
    int callingPid = getCallingPid();
    LOG1("Client::Client E (pid %d)", callingPid);

    mCameraService = cameraService;
    mCameraClient = cameraClient;
    mHardware = hardware;
    mCameraId = cameraId;
    mCameraFacing = cameraFacing;
    mClientPid = clientPid;
    mMsgEnabled = 0;
    mSurface = 0;
    mPreviewWindow = 0;
#ifdef OMAP_ENHANCEMENT_CPCAM
    mTapin = 0;
    mTapinClient = 0;
    mTapout = 0;
    mTapoutClient = 0;
#endif
    mHardware->setCallbacks(notifyCallback,
                            dataCallback,
                            dataCallbackTimestamp,
                            (void *)cameraId);

    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
                  CAMERA_MSG_PREVIEW_METADATA);

    // Callback is disabled by default
    mPreviewCallbackFlag = CAMERA_FRAME_CALLBACK_FLAG_NOOP;
    mOrientation = getOrientation(0, mCameraFacing == CAMERA_FACING_FRONT);
    mPlayShutterSound = true;
    cameraService->setCameraBusy(cameraId);
    cameraService->loadSound();
    LOG1("Client::Client X (pid %d)", callingPid);
}
The constructor above registers three callbacks with the camera hardware interface: notifyCallback, dataCallback, and dataCallbackTimestamp. They are used to deliver data from the lower layers for processing.
Let's look at dataCallback first; the other callbacks follow the same pattern. So what does dataCallback actually do?
This callback lives in the camera server layer: frameworks/base/services/camera/libcameraservice/CameraService.cpp
void CameraService::Client::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
    LOG2("dataCallback(%d)", msgType);

    sp<Client> client = getClientFromCookie(user);
    if (client == 0) return;
    if (!client->lockIfMessageWanted(msgType)) return;

    if (dataPtr == 0 && metadata == NULL) {
        LOGE("Null data returned in data callback");
        client->handleGenericNotify(CAMERA_MSG_ERROR, UNKNOWN_ERROR, 0);
        return;
    }

    switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(msgType, dataPtr, metadata);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:
            client->handleCompressedPicture(dataPtr);
            break;
#ifdef OMAP_ENHANCEMENT
        case CAMERA_MSG_COMPRESSED_BURST_IMAGE:
            client->handleCompressedBurstPicture(dataPtr);
            break;
#endif
        default:
            client->handleGenericData(msgType, dataPtr, metadata);
            break;
    }
}
The callback dispatches on the message type. Since the preview path involves a large volume of data and is the easiest to understand, the analysis below follows the preview data callback.
// preview callback - frame buffer update
void CameraService::Client::handlePreviewData(int32_t msgType,
                                              const sp<IMemory>& mem,
                                              camera_frame_metadata_t *metadata) {
    ssize_t offset;
    size_t size;
    sp<IMemoryHeap> heap = mem->getMemory(&offset, &size);

    // local copy of the callback flags
    int flags = mPreviewCallbackFlag;

    // is callback enabled?
    if (!(flags & CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK)) {
        // If the enable bit is off, the copy-out and one-shot bits are ignored
        LOG2("frame callback is disabled");
        mLock.unlock();
        return;
    }

    // hold a strong pointer to the client
    sp<ICameraClient> c = mCameraClient;

    // clear callback flags if no client or one-shot mode
    if (c == 0 || (mPreviewCallbackFlag & CAMERA_FRAME_CALLBACK_FLAG_ONE_SHOT_MASK)) {
        LOG2("Disable preview callback");
        mPreviewCallbackFlag &= ~(CAMERA_FRAME_CALLBACK_FLAG_ONE_SHOT_MASK |
                                  CAMERA_FRAME_CALLBACK_FLAG_COPY_OUT_MASK |
                                  CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK);
        disableMsgType(CAMERA_MSG_PREVIEW_FRAME);
    }

    if (c != 0) {
        // Is the received frame copied out or not?
        if (flags & CAMERA_FRAME_CALLBACK_FLAG_COPY_OUT_MASK) {
            LOG2("frame is copied");
            copyFrameAndPostCopiedFrame(msgType, c, heap, offset, size, metadata);
        } else {
            LOG2("frame is forwarded");
            mLock.unlock();
            c->dataCallback(msgType, mem, metadata);
        }
    } else {
        mLock.unlock();
    }
}
There are two paths here: either the frame is copied out via copyFrameAndPostCopiedFrame(), or the IMemory is forwarded to the client directly. copyFrameAndPostCopiedFrame() copies the preview data from one buffer to another and, as its implementation shows, ultimately also calls dataCallback() to invoke the client-side callback, so let's look directly at copyFrameAndPostCopiedFrame():
void CameraService::Client::copyFrameAndPostCopiedFrame(
        int32_t msgType, const sp<ICameraClient>& client,
        const sp<IMemoryHeap>& heap, size_t offset, size_t size,
        camera_frame_metadata_t *metadata) {
    LOG2("copyFrameAndPostCopiedFrame");
    // It is necessary to copy out of pmem before sending this to
    // the callback. For efficiency, reuse the same MemoryHeapBase
    // provided it's big enough. Don't allocate the memory or
    // perform the copy if there's no callback.
    // hold the preview lock while we grab a reference to the preview buffer
    sp<MemoryHeapBase> previewBuffer;

    if (mPreviewBuffer == 0) {
        mPreviewBuffer = new MemoryHeapBase(size, 0, NULL);
    } else if (size > mPreviewBuffer->virtualSize()) {
        mPreviewBuffer.clear();
        mPreviewBuffer = new MemoryHeapBase(size, 0, NULL);
    }
    if (mPreviewBuffer == 0) {
        LOGE("failed to allocate space for preview buffer");
        mLock.unlock();
        return;
    }
    previewBuffer = mPreviewBuffer;

    memcpy(previewBuffer->base(), (uint8_t *)heap->base() + offset, size);

    sp<MemoryBase> frame = new MemoryBase(previewBuffer, 0, size);
    if (frame == 0) {
        LOGE("failed to allocate space for frame callback");
        mLock.unlock();
        return;
    }

    mLock.unlock();
    client->dataCallback(msgType, frame, metadata);
}
From here, the callback crosses over into the camera client's callback: frameworks/base/libs/camera/Camera.cpp
// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
    }
}
So what exactly is this listener? Remember that during initialization a listener was set in the JNI layer? Let's look at that again: frameworks/base/core/jni/android_hardware_Camera.cpp
// connect to camera service
static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId)
{
    sp<Camera> camera = Camera::connect(cameraId);

    if (camera == NULL) {
        jniThrowRuntimeException(env, "Fail to connect to camera service");
        return;
    }

    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        jniThrowRuntimeException(env, "Camera initialization failed");
        return;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
        return;
    }

    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong(thiz);
    camera->setListener(context);

    // save context in opaque field
    env->SetIntField(thiz, fields.context, (int)context.get());
}
As shown above, JNICameraContext is the listener class, and an instance of it is installed on the native Camera via setListener(). The class is defined in: frameworks/base/core/jni/android_hardware_Camera.cpp
// provides persistent context for calls from native code to Java
class JNICameraContext: public CameraListener
{
public:
    JNICameraContext(JNIEnv* env, jobject weak_this, jclass clazz, const sp<Camera>& camera);
    ~JNICameraContext() { release(); }
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2);
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata);
    virtual void postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr);
    void postMetadata(JNIEnv *env, int32_t msgType, camera_frame_metadata_t *metadata);
    void addCallbackBuffer(JNIEnv *env, jbyteArray cbb, int msgType);
    void setCallbackMode(JNIEnv *env, bool installed, bool manualMode);
    sp<Camera> getCamera() { Mutex::Autolock _l(mLock); return mCamera; }
    bool isRawImageCallbackBufferAvailable() const;
    void release();

private:
    void copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType);
    void clearCallbackBuffers_l(JNIEnv *env, Vector<jbyteArray> *buffers);
    void clearCallbackBuffers_l(JNIEnv *env);
    jbyteArray getCallbackBuffer(JNIEnv *env, Vector<jbyteArray> *buffers, size_t bufferSize);

    jobject mCameraJObjectWeak; // weak reference to java object
    jclass mCameraJClass; // strong reference to java class
    sp<Camera> mCamera; // strong reference to native object
    jclass mFaceClass; // strong reference to Face class
    jclass mRectClass; // strong reference to Rect class
    Mutex mLock;

    /*
     * Global reference application-managed raw image buffer queue.
     *
     * Manual-only mode is supported for raw image callbacks, which is
     * set whenever method addCallbackBuffer() with msgType =
     * CAMERA_MSG_RAW_IMAGE is called; otherwise, null is returned
     * with raw image callbacks.
     */
    Vector<jbyteArray> mRawImageCallbackBuffers;

    /*
     * Application-managed preview buffer queue and the flags
     * associated with the usage of the preview buffer callback.
     */
    Vector<jbyteArray> mCallbackBuffers; // Global reference application managed byte[]
    bool mManualBufferMode; // Whether to use application managed buffers.
    bool mManualCameraCallbackSet; // Whether the callback has been set, used to
                                   // reduce unnecessary calls to set the callback.
};
The member we care about here is postData(), the method invoked above through the listener. Let's see how postData() is implemented:
void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                camera_frame_metadata_t *metadata)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        LOGW("callback on dead camera object");
        return;
    }

    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;

    // return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;

        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            LOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
            }
            break;

        // There is no data.
        case 0:
            break;

        default:
            LOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }

    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
    }
}
Next, let's look at the copyAndPost() method:
void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        ssize_t offset;
        size_t size;
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        LOGV("copyAndPost: off=%ld, size=%d", offset, size);
        uint8_t *heapBase = (uint8_t*)heap->base();

        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);

            if (msgType == CAMERA_MSG_RAW_IMAGE) {
                obj = getCallbackBuffer(env, &mRawImageCallbackBuffers, size);
            } else if (msgType == CAMERA_MSG_PREVIEW_FRAME && mManualBufferMode) {
                obj = getCallbackBuffer(env, &mCallbackBuffers, size);

                if (mCallbackBuffers.isEmpty()) {
                    LOGV("Out of buffers, clearing callback!");
                    mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
                    mManualCameraCallbackSet = false;

                    if (obj == NULL) {
                        return;
                    }
                }
            } else {
                LOGV("Allocating callback buffer");
                obj = env->NewByteArray(size);
            }

            if (obj == NULL) {
                LOGE("Couldn't allocate byte array for JPEG data");
                env->ExceptionClear();
            } else {
                env->SetByteArrayRegion(obj, 0, size, data);
            }
        } else {
            LOGE("image heap is NULL");
        }
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}
The code above first allocates a Java byte array obj and copies the buffered data into it. CallStaticVoidMethod() is how native code calls into Java; what ultimately runs is postEventFromNative() in the framework's Camera.java.
From here the callback enters the camera framework layer:
frameworks/base/core/java/android/hardware/Camera.java
private static void postEventFromNative(Object camera_ref,
                                        int what, int arg1, int arg2, Object obj)
{
    Camera c = (Camera)((WeakReference)camera_ref).get();
    if (c == null)
        return;

    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        c.mEventHandler.sendMessage(m);
    }
}
After sendMessage(), the message is processed by the handler, which is also defined in the framework layer:
private class EventHandler extends Handler
{
    private Camera mCamera;

    public EventHandler(Camera c, Looper looper) {
        super(looper);
        mCamera = c;
    }

    @Override
    public void handleMessage(Message msg) {
        switch(msg.what) {
        case CAMERA_MSG_SHUTTER:
            if (mShutterCallback != null) {
                mShutterCallback.onShutter();
            }
            return;

        case CAMERA_MSG_RAW_IMAGE:
            if (mRawImageCallback != null) {
                mRawImageCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_COMPRESSED_IMAGE:
            if (mJpegCallback != null) {
                mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_PREVIEW_FRAME:
            if (mPreviewCallback != null) {
                PreviewCallback cb = mPreviewCallback;
                if (mOneShot) {
                    // Clear the callback variable before the callback
                    // in case the app calls setPreviewCallback from
                    // the callback function
                    mPreviewCallback = null;
                } else if (!mWithBuffer) {
                    // We're faking the camera preview mode to prevent
                    // the app from being flooded with preview frames.
                    // Set to oneshot mode again.
                    setHasPreviewCallback(true, false);
                }
                cb.onPreviewFrame((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_POSTVIEW_FRAME:
            if (mPostviewCallback != null) {
                mPostviewCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_FOCUS:
            if (mAutoFocusCallback != null) {
                mAutoFocusCallback.onAutoFocus(msg.arg1 == 0 ? false : true, mCamera);
            }
            return;

        case CAMERA_MSG_ZOOM:
            if (mZoomListener != null) {
                mZoomListener.onZoomChange(msg.arg1, msg.arg2 != 0, mCamera);
            }
            return;

        case CAMERA_MSG_PREVIEW_METADATA:
            if (mFaceListener != null) {
                mFaceListener.onFaceDetection((Face[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_ERROR:
            Log.e(TAG, "Error " + msg.arg1);
            if (mErrorCallback != null) {
                mErrorCallback.onError(msg.arg1, mCamera);
            }
            return;

        default:
            Log.e(TAG, "Unknown message type " + msg.what);
            return;
        }
    }
}
As shown above, this handler dispatches all of the callbacks: the shutter callback mShutterCallback.onShutter(), the picture data callback mRawImageCallback.onPictureTaken(), the auto-focus callback, and so on.
By default there is no preview callback unless the app has called setPreviewCallback(). In other words, preview data can be delivered up to the application, but the system does not do so by default. To go a bit deeper:
As the CAMERA_MSG_PREVIEW_FRAME case above shows, the handler checks whether a PreviewCallback, the interface defined in the framework, has been registered through setPreviewCallback(); only then is it invoked. The onPreviewFrame() method of that interface must be implemented by the application developer; there is no default implementation, so apps that need it have to supply their own (this is my own understanding). Here is the definition of the PreviewCallback interface: frameworks/base/core/java/android/hardware/Camera.java

/**
 * Callback interface used to deliver copies of preview frames as
 * they are displayed.
 *
 * @see #setPreviewCallback(Camera.PreviewCallback)
 * @see #setOneShotPreviewCallback(Camera.PreviewCallback)
 * @see #setPreviewCallbackWithBuffer(Camera.PreviewCallback)
 * @see #startPreview()
 */
public interface PreviewCallback
{
    /**
     * Called as preview frames are displayed. This callback is invoked
     * on the event thread {@link #open(int)} was called from.
     *
     * @param data the contents of the preview frame in the format defined
     *  by {@link android.graphics.ImageFormat}, which can be queried
     *  with {@link android.hardware.Camera.Parameters#getPreviewFormat()}.
     *  If {@link android.hardware.Camera.Parameters#setPreviewFormat(int)}
     *  is never called, the default will be the YCbCr_420_SP
     *  (NV21) format.
     * @param camera the Camera service object.
     */
    void onPreviewFrame(byte[] data, Camera camera);
};
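For reference, the ENABLE / ONE_SHOT / COPY_OUT flag combinations handled in handlePreviewData() above correspond to the three ways an application can register this interface, as listed in the @see tags. A hypothetical sketch (the class name PreviewCallbackModes is made up; in a real app you would pick exactly one of the three modes):

import android.graphics.ImageFormat;
import android.hardware.Camera;

public class PreviewCallbackModes {
    public void enable(Camera camera) {
        Camera.PreviewCallback cb = new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // Process one preview frame (NV21 by default).
            }
        };

        // 1. Continuous callbacks: every frame is copied into a new byte[].
        camera.setPreviewCallback(cb);

        // 2. One-shot: a single frame is delivered, then the callback is cleared.
        camera.setOneShotPreviewCallback(cb);

        // 3. Application-managed buffers: frames are delivered only while the
        //    app keeps queuing buffers with addCallbackBuffer().
        Camera.Size s = camera.getParameters().getPreviewSize();
        int bufSize = s.width * s.height
                * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
        camera.addCallbackBuffer(new byte[bufSize]);
        camera.setPreviewCallbackWithBuffer(cb);
    }
}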
Note also that the hand-off of preview data between the capture buffer and the display buffer, which produces the real-time preview display, is done in the HAL layer.
takePicture() is handled much like preview; the only addition is that the image is stored when the callback returns. Let's analyze the takePicture handling:

case CAMERA_MSG_COMPRESSED_IMAGE:
    if (mJpegCallback != null) {
        mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
    }
    return;
mJpegCallback is declared in the framework's Camera.java as

private PictureCallback mJpegCallback;

Having reached this point, we have to look back at how takePicture was called in the first place:

try {
    mCameraDevice.takePicture(mShutterCallback, mRawPictureCallback,
            mPostViewPictureCallback, new JpegPictureCallback(loc));
} catch (RuntimeException e) {
    e.printStackTrace();
    return false;
}
Notice that the JPEG callback passed in here is a JpegPictureCallback, while the framework's mJpegCallback field is declared as PictureCallback; they are not the same type, so this deserves to be spelled out. Here is the definition of the JpegPictureCallback class:

private final class JpegPictureCallback implements PictureCallback {
    Location mLocation;

    public JpegPictureCallback(Location loc) {
        mLocation = loc;
    }

    public void onPictureTaken(
            final byte [] jpegData, final android.hardware.Camera camera) {
        if (mPausing) {
            if (mBurstImages > 0) {
                resetBurst();
                mBurstImages = 0;
                mHandler.sendEmptyMessageDelayed(RELEASE_CAMERA,
                                                 CAMERA_RELEASE_DELAY);
            }
            return;
        }

        FocusManager.TempBracketingStates tempState = mFocusManager.getTempBracketingState();
        mJpegPictureCallbackTime = System.currentTimeMillis();
        // If postview callback has arrived, the captured image is displayed
        // in postview callback. If not, the captured image is displayed in
        // raw picture callback.
        if (mPostViewPictureCallbackTime != 0) {
            mShutterToPictureDisplayedTime =
                    mPostViewPictureCallbackTime - mShutterCallbackTime;
            mPictureDisplayedToJpegCallbackTime =
                    mJpegPictureCallbackTime - mPostViewPictureCallbackTime;
        } else {
            mShutterToPictureDisplayedTime =
                    mRawPictureCallbackTime - mShutterCallbackTime;
            mPictureDisplayedToJpegCallbackTime =
                    mJpegPictureCallbackTime - mRawPictureCallbackTime;
        }
        Log.v(TAG, "mPictureDisplayedToJpegCallbackTime = "
                + mPictureDisplayedToJpegCallbackTime + "ms");

        if (!mIsImageCaptureIntent) {
            enableCameraControls(true);

            if (( tempState != FocusManager.TempBracketingStates.RUNNING ) &&
                  !mCaptureMode.equals(mExposureBracketing) &&
                  !mCaptureMode.equals(mZoomBracketing) &&
                  !mBurstRunning == true) {
            // We want to show the taken picture for a while, so we wait
            // for at least 0.5 second before restarting the preview.
                long delay = 500 - mPictureDisplayedToJpegCallbackTime;
                if (delay < 0) {
                    startPreview(true);
                    startFaceDetection();
                } else {
                    mHandler.sendEmptyMessageDelayed(RESTART_PREVIEW, delay);
                }
            }

        }

        if (!mIsImageCaptureIntent) {
            Size s = mParameters.getPictureSize();
            mImageSaver.addImage(jpegData, mLocation, s.width, s.height);
        } else {
            mJpegImageData = jpegData;
            if (!mQuickCapture) {
                showPostCaptureAlert();
            } else {
                doAttach();
            }
        }

        // Check this in advance of each shot so we don't add to shutter
        // latency. It's true that someone else could write to the SD card in
        // the mean time and fill it, but that could have happened between the
        // shutter press and saving the JPEG too.
        checkStorage();

        if (!mHandler.hasMessages(RESTART_PREVIEW)) {
            long now = System.currentTimeMillis();
            mJpegCallbackFinishTime = now - mJpegPictureCallbackTime;
            Log.v(TAG, "mJpegCallbackFinishTime = "
                    + mJpegCallbackFinishTime + "ms");
            mJpegPictureCallbackTime = 0;
        }

        if (mCaptureMode.equals(mExposureBracketing) ) {
            mBurstImages --;
            if (mBurstImages == 0 ) {
                mHandler.sendEmptyMessageDelayed(RESTART_PREVIEW, 0);
            }
        }

        //reset burst in case of exposure bracketing
        if (mCaptureMode.equals(mExposureBracketing) && mBurstImages == 0) {
            mBurstImages = EXPOSURE_BRACKETING_COUNT;
            mParameters.set(PARM_BURST, mBurstImages);
            mCameraDevice.setParameters(mParameters);
        }

        if (mCaptureMode.equals(mZoomBracketing) ) {
            mBurstImages --;
            if (mBurstImages == 0 ) {
                mHandler.sendEmptyMessageDelayed(RESTART_PREVIEW, 0);
            }
        }

        //reset burst in case of zoom bracketing
        if (mCaptureMode.equals(mZoomBracketing) && mBurstImages == 0) {
            mBurstImages = ZOOM_BRACKETING_COUNT;
            mParameters.set(PARM_BURST, mBurstImages);
            mCameraDevice.setParameters(mParameters);
        }

        if ( tempState == FocusManager.TempBracketingStates.RUNNING ) {
            mBurstImages --;
            if (mBurstImages == 0 ) {
                mHandler.sendEmptyMessageDelayed(RESTART_PREVIEW, 0);
                mTempBracketingEnabled = true;
                stopTemporalBracketing();
            }
        }

        if (mBurstRunning) {
            mBurstImages --;
            if (mBurstImages == 0) {
                resetBurst();
                mBurstRunning = false;
                mHandler.sendEmptyMessageDelayed(RESTART_PREVIEW, 0);
            }
        }
    }
}
The answer is that JpegPictureCallback implements the PictureCallback interface, so a JpegPictureCallback instance can be assigned to a field of the interface type (going the other way would require an explicit downcast); that is basic object-oriented polymorphism. The subclass provides its own implementation of onPictureTaken(), and that is exactly the method invoked from the handler above. Comparing this onPictureTaken() with the preview path, the biggest difference is the part that stores the data: takePicture ultimately saves the captured image, either through mImageSaver.addImage() or by keeping it in mJpegImageData for an image-capture intent.
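Stripped of the bracketing and burst bookkeeping, the essence of the JPEG callback is just "write the bytes somewhere and restart the preview". A hypothetical minimal sketch (the class name SaveJpegCallback and the output file are made up; the OMAP app does the equivalent through its ImageSaver):

import android.hardware.Camera;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class SaveJpegCallback implements Camera.PictureCallback {
    private final File mOutputFile;

    public SaveJpegCallback(File outputFile) {
        mOutputFile = outputFile;
    }

    @Override
    public void onPictureTaken(byte[] jpegData, Camera camera) {
        try (FileOutputStream out = new FileOutputStream(mOutputFile)) {
            out.write(jpegData);       // persist the compressed image
        } catch (IOException e) {
            e.printStackTrace();
        }
        camera.startPreview();         // preview stops after takePicture()
    }
}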
And that is the takePicture flow. The next step is to go further down: what happens between the HAL and the driver, and what the driver itself does.
