Camera Architecture: Initialization
The Android Camera subsystem uses a client/server architecture: the client and server run in separate processes and communicate over Binder. That much is well known. Here we walk through how the camera is initialized, from device boot to entering the camera application.
First, since Camera communicates over Binder, its service must be registered with ServiceManager so that clients can reference it later. Where does this happen? frameworks\base\media\mediaserver\Main_MediaServer.cpp has a main function that registers the media services, and this is exactly where CameraService gets registered:

int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    waitBeforeAdding(String16("media.audio_flinger"));
    AudioFlinger::instantiate();
    waitBeforeAdding(String16("media.player"));
    MediaPlayerService::instantiate();
    waitBeforeAdding(String16("media.camera"));
    CameraService::instantiate();
    waitBeforeAdding(String16("media.audio_policy"));
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}

Yet CameraService itself contains no instantiate() function. Where is it?
Following the call into its parent class, BinderService:
static void instantiate() { publish(); }

static status_t publish() {
    sp<IServiceManager> sm(defaultServiceManager());
    return sm->addService(String16(SERVICE::getServiceName()), new SERVICE());
}
So it is in publish() that CameraService registers its service. The SERVICE used here is explained in the source:
template<typename SERVICE>
SERVICE is a template parameter; since we are registering CameraService, substituting it gives:
return sm->addService(String16(CameraService::getServiceName()), new CameraService());
With that, Camera has registered its service with ServiceManager, ready for clients to use at any time.
The main function of Main_MediaServer is invoked by init.rc at startup, so the Camera service is registered for Binder communication as soon as the device boots:
service media /system/bin/mediaserver
    class main
    user media
    group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc
    ioprio rt 4
With the Binder service registered, let's see how the client connects to the server and opens the camera module.
We start from the camera app source. onCreate() starts a dedicated thread to open the camera:
public void onCreate(Bundle icicle){
......
mCameraOpenThread.start();
}
Now look at mCameraOpenThread:
Thread mCameraOpenThread = new Thread(new Runnable() {
public void run() {
try {
qcameraUtilProfile("open camera");
mCameraDevice = Util.openCamera(Camera.this, mCameraId);
qcameraUtilProfile("camera opended");
} catch(CameraHardwareException e) {
mOpenCameraFail = true;
} catch(CameraDisabledException e) {
mCameraDisabled = true;
}
}
});
Following Util.openCamera, it ends in:
return CameraHolder.instance().open(cameraId);
This brings in CameraHolder, a class that opens the camera through a singleton instance:
public synchronized android.hardware.Camera open(int cameraId)
throws CameraHardwareException {
    mCameraDevice = android.hardware.Camera.open(cameraId);
return mCameraDevice;
}
Here we enter the framework layer, calling the open method of frameworks\base\core\java\android\hardware\Camera.java:
public static Camera open(int cameraId) {
    return new Camera(cameraId);
}
This invokes Camera's constructor. Looking at the constructor:
Camera(int cameraId) {
    native_setup(new WeakReference<Camera>(this), cameraId);
}
We have finally reached JNI. Continuing into the camera JNI file, android_hardware_Camera.cpp:
static void android_hardware_Camera_native_setup(JNIEnv* env, jobject thiz,
    jobject weak_this, jint cameraId)
{
    sp<Camera> camera = Camera::connect(cameraId);
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    camera->setListener(context);
}
In this JNI function we find the client side of the Camera C/S architecture: Camera::connect() sends the connection request to the server. JNICameraContext is a listener class that handles the data and messages delivered by the lower-level camera callbacks. Let's look at the client's connect function:
sp<Camera> Camera::connect(int cameraId)
{
LOGV("connect");
sp<Camera> c = new Camera();
    const sp<ICameraService>& cs = getCameraService();
if (cs != 0) {
        c->mCamera = cs->connect(c, cameraId);
}
if (c->mCamera != 0) {
c->mCamera->asBinder()->linkToDeath(c);
c->mStatus = NO_ERROR;
} else {
c.clear();
}
return c;
}
Look first at the key line: getCameraService() obtains a handle to the Camera service.
const sp<ICameraService>& Camera::getCameraService()
{
if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
break;
LOGW("CameraService not published, waiting...");
usleep(500000); // 0.5 s
} while(true);
        mCameraService = interface_cast<ICameraService>(binder);
}
    LOGE_IF(mCameraService == 0, "no CameraService!?");
return mCameraService;
}
As the code shows, the ICameraService handle is obtained through Binder; by the Binder mechanism, this handle refers to the CameraService instance registered earlier.
c->mCamera = cs->connect(c, cameraId);
Next the server-side connect() runs and returns an ICamera object, which is assigned to Camera's mCamera. What the server's connect() actually returns is an instance of its inner class Client:
sp<ICamera> CameraService::connect() {
    hardware = new CameraHardwareInterface(camera_device_name);
if (hardware->initialize(&mModule->common) != OK){
hardware.clear();
return NULL;
}
    client = new Client(this, cameraClient, hardware, cameraId, info.facing, callingPid);
mClient[cameraId] = client;
LOG1("CameraService::connect X");
return client;
}
It first instantiates the camera HAL interface, hardware, then calls hardware->initialize() to enter the HAL layer and open the camera driver:
status_t initialize(hw_module_t* module)
{
    LOGI("Opening camera %s", mName.string());
    int rc = module->methods->open(module, mName.string(),
            (hw_device_t **)&mDevice);
if (rc != OK) {
LOGE("Could not open camera %s: %d", mName.string(), rc);
return rc;
}
initHalPreviewWindow();
return rc;
}
The call module->methods->open(module, mName.string(), (hw_device_t **)&mDevice) is what opens the underlying camera driver. In hardware->initialize(&mModule->common), mModule is a camera_module_t structure. How is it initialized? CameraService has this function:
void CameraService::onFirstRef()
{
BnCameraService::onFirstRef();
if (hw_get_module(CAMERA_HARDWARE_MODULE_ID,
(const hw_module_t **)&mModule) < 0) {
        LOGE("Could not load camera HAL module");
mNumberOfCameras = 0;
}
}
Anyone familiar with the HAL layer knows that hw_get_module() loads a module's HAL stub. Here it looks up the camera HAL stub by CAMERA_HARDWARE_MODULE_ID and stores it in mModule; from then on the camera module is controlled through mModule. So when is onFirstRef() called? onFirstRef() belongs to the parent class RefBase and runs when a strong pointer (sp) first adds a reference to the object, i.e. the first time the object is wrapped in an sp. For the camera, that happens when the client initiates the connection:
sp<Camera> Camera::connect(int cameraId)
{
LOGV("connect");
sp<Camera> c = new Camera();
    const sp<ICameraService>& cs = getCameraService();
}
At this point a CameraService handle is created and wrapped in an sp; the strong-reference count increments, and CameraService's onFirstRef() is invoked.
When CameraService::connect() returns the Client, the client/server connection is established. Camera initialization is complete, and operations such as preview and taking pictures can follow.
Android 4.0 Camera Architecture: preview and takePicture
Having completed the initialization above, Camera offers the following features:
1. Preview
2. Video recording
3. Still capture and parameter setting
The first thing after opening the camera is the preview. We again start from the camera app. Every app with capture capability implements the SurfaceHolder.Callback interface for preview, providing its surfaceCreated, surfaceChanged, and surfaceDestroyed methods, and declares a SurfaceView as the preview window. From the stock app's source:
SurfaceView preview = (SurfaceView)findViewById(R.id.camera_preview);
SurfaceHolder holder = preview.getHolder();
holder.addCallback(this);
The app must also hand the camera a surface buffer for preview. The stock app sets the camera's preview surface in surfaceChanged(), so that preview data captured by the lower layers is delivered continuously into that surface buffer:
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    mSurfaceHolder = holder;
    // The mCameraDevice will be null if it fails to connect to the camera
    // hardware. In this case we will show a dialog and then finish the
    // activity, so it's OK to ignore it.
    if (mCameraDevice == null) return;
    // Sometimes surfaceChanged is called after onPause or before onResume.
    // Ignore it.
    if (mPausing || isFinishing()) return;
    // Set preview display if the surface is being created. Preview was
    // already started. Also restart the preview if display rotation has
    // changed. Sometimes this happens when the device is held in portrait
    // and camera app is opened. Rotation animation takes some time and
    // display rotation in onCreate may not be what we want.
    if (mCameraState == PREVIEW_STOPPED) {
        startPreview();
        startFaceDetection();
    } else {
        if (Util.getDisplayRotation(this) != mDisplayRotation) {
            setDisplayOrientation();
        }
        if (holder.isCreating()) {
            // Set preview display if the surface is being created and preview
            // was already started. That means preview display was set to null
            // and we need to set it now.
            setPreviewDisplay(holder);
        }
    }
}
With these parameters set, the app calls startPreview() to begin the viewfinder preview. startPreview() also descends layer by layer until it reaches the server side, CameraService. The path is:

Camera.java (app) --> Camera.java (framework) --> android_hardware_Camera.cpp (JNI) --> Camera.cpp (client) --> CameraService.cpp (server) --> CameraHardwareInterface (HAL interface)

On the CameraService side the preview request is handled and enters the HAL layer:
status_t CameraService::Client::startPreview() {
enableMsgType(CAMERA_MSG_PREVIEW_METADATA);
    return startCameraMode(CAMERA_PREVIEW_MODE);
}
It first forwards the preview message type to the HAL layer, then starts preview:
status_t CameraService::Client::startCameraMode(camera_mode mode) {
switch(mode) {
case CAMERA_PREVIEW_MODE:
        if (mSurface == 0 && mPreviewWindow == 0) {
LOG1("mSurface is not set yet.");
// still able to start preview in this case.
}
        return startPreviewMode();
}
}
status_t CameraService::Client::startPreviewMode() {
LOG1("startPreviewMode");
status_t result = NO_ERROR;
// if preview has been enabled, nothing needs to be done
if (mHardware->previewEnabled()) {
return NO_ERROR;
}
if (mPreviewWindow != 0) {
native_window_set_scaling_mode(mPreviewWindow.get(),
NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW);
native_window_set_buffers_transform(mPreviewWindow.get(),
mOrientation);
}
mHardware->setPreviewWindow(mPreviewWindow);
result = mHardware->startPreview();
return result;
}
From here the call descends into the HAL layer, and callbacks deliver data continuously into the SurfaceView buffer. Because preview frames are large, the data is not carried up through every layer; it is copied directly between two buffers: the one where the lower layers capture data, and the SurfaceView buffer used for display. Let's see how the preview callback is handled. As soon as the camera client and server connect, the callbacks are registered in the Client constructor:

CameraService::Client::Client(const sp<CameraService>& cameraService,
        const sp<ICameraClient>& cameraClient,
        const sp<CameraHardwareInterface>& hardware,
        int cameraId, int cameraFacing, int clientPid) {
......
mHardware->setCallbacks(notifyCallback,
dataCallback,
dataCallbackTimestamp,
(void *)cameraId);
}
After client and server connect, a new Client is returned; its constructor registers three callbacks on the camera — notifyCallback, dataCallback, and dataCallbackTimestamp — through which lower-layer data is returned for processing. Here is how dataCallback handles it:
void CameraService::Client::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(msgType, dataPtr, metadata);
break;
.......
}
Continuing into handlePreviewData():
void CameraService::Client::handlePreviewData(int32_t msgType,
const sp<IMemory>& mem,
camera_frame_metadata_t *metadata) {
    sp<ICameraClient> c = mCameraClient;
.......
if (c != 0) {
        // Is the received frame copied out or not?
        if (flags & CAMERA_FRAME_CALLBACK_FLAG_COPY_OUT_MASK) {
LOG2("frame is copied");
            copyFrameAndPostCopiedFrame(msgType, c, heap, offset, size, metadata);
} else {
LOG2("frame is forwarded");
mLock.unlock();
c->dataCallback(msgType, mem, metadata);
}
} else {
mLock.unlock();
}
}
copyFrameAndPostCopiedFrame() is the function that moves preview data between the two buffers:
void CameraService::Client::copyFrameAndPostCopiedFrame(
        int32_t msgType, const sp<ICameraClient>& client,
        const sp<IMemoryHeap>& heap, size_t offset, size_t size,
camera_frame_metadata_t *metadata) {
......
previewBuffer = mPreviewBuffer;
    memcpy(previewBuffer->base(), (uint8_t *)heap->base() + offset, size);
    sp<MemoryBase> frame = new MemoryBase(previewBuffer, 0, size);
if (frame == 0) {
        LOGE("failed to allocate space for frame callback");
mLock.unlock();
return;
}
mLock.unlock();
client->dataCallback(msgType, frame, metadata);
}
The data, now packaged as a frame, is passed on to the client-side callback client->dataCallback(msgType, frame, metadata):
// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
camera_frame_metadata_t *metadata)
{
sp<CameraListener> listener;
{
Mutex::Autolock _l(mLock);
listener = mListener;
}
if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
}
}
The listener was set during initialization, in the JNI layer:
static void android_hardware_Camera_native_setup(JNIEnv* env, jobject thiz,
jobject weak_this, jint cameraId)
{
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
context->incStrong(thiz);
camera->setListener(context);
}
Continuing into listener->postData(msgType, dataPtr, metadata):
void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
camera_frame_metadata_t *metadata)
{
......
switch (dataMsgType) {
case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
break;
default:
LOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
copyAndPost(env, dataPtr, dataMsgType);
break;
}
}
Continuing into copyAndPost(env, dataPtr, dataMsgType):
void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;
    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        .......
        LOGV("Allocating callback buffer");
        obj = env->NewByteArray(size);
        .......
        env->SetByteArrayRegion(obj, 0, size, data);
    } else {
        LOGE("image heap is NULL");
    }
    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}
To explain the key part: a Java byte array obj is allocated and the buffered data is copied into it; CallStaticVoidMethod is native code calling into Java, and the call ultimately lands in postEventFromNative() of the framework's Camera.java:
private static void postEventFromNative(Object camera_ref,
        int what, int arg1, int arg2, Object obj)
{
    Camera c = (Camera)((WeakReference)camera_ref).get();
if (c == null)
return;
if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
c.mEventHandler.sendMessage(m);
}
}
The handler then processes the message:
public void handleMessage(Message msg) {
switch(msg.what) {
        case CAMERA_MSG_SHUTTER:
if (mShutterCallback != null) {
mShutterCallback.onShutter();
}
return;
        case CAMERA_MSG_RAW_IMAGE:
if (mRawImageCallback != null) {
mRawImageCallback.onPictureTaken((byte[])msg.obj, mCamera);
}
return;
        case CAMERA_MSG_COMPRESSED_IMAGE:
if (mJpegCallback != null) {
mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
}
return;
case CAMERA_MSG_PREVIEW_FRAME:
if (mPreviewCallback != null) {
PreviewCallback cb = mPreviewCallback;
if (mOneShot) {
// Clear the callback variable before the callback
// in case the app calls setPreviewCallback from
// the callback function
mPreviewCallback = null;
} else if (!mWithBuffer) {
// We're faking the camera preview mode to prevent
// the app from being flooded with preview frames.
// Set to oneshot mode again.
setHasPreviewCallback(true, false);
}
                cb.onPreviewFrame((byte[])msg.obj, mCamera);
}
return;
}
}
}
As seen above, all the callbacks are handled here: the shutter callback mShutterCallback.onShutter(), the capture-data callback mRawImageCallback.onPictureTaken(), autofocus, and so on. By default there is no preview callback unless the app calls setPreviewCallback: preview data can be delivered to the upper layers, but the system does not do so by default. The handoff of preview data between the capture buffer and the display buffer, and the real-time preview display itself, are done in the HAL layer.
takePicture() follows much the same path as preview, with the addition of storing the image when the callback returns:
public JpegPictureCallback(Location loc) {
    mLocation = loc;
}
public void onPictureTaken(
        final byte [] jpegData, final android.hardware.Camera camera) {
.........................
    if (!mIsImageCaptureIntent) {
        Size s = mParameters.getPictureSize();
        mImageSaver.addImage(jpegData, mLocation, s.width, s.height);
} else {
mJpegImageData = jpegData;
if (!mQuickCapture) {
showPostCaptureAlert();
} else {
doAttach();
}
}
}
}
}
Taking a picture also triggers onShutterButtonFocus and onShutterButtonClick:
1)
public void onShutterButtonFocus(boolean pressed) {
    if (mPaused || collapseCameraControls()
            || (mCameraState == SNAPSHOT_IN_PROGRESS)
            || (mCameraState == PREVIEW_STOPPED)) return;
    // Do not do focus if there is not enough storage.
    if (pressed && !canTakePicture()) return;
if (pressed) {
        mFocusManager.onShutterDown(); // mainly performs the autoFocus operation
} else {
mFocusManager.onShutterUp();
}
}
2)
public void onShutterButtonClick() {
    if (mPaused || collapseCameraControls()
            || (mCameraState == SWITCHING_CAMERA)
            || (mCameraState == PREVIEW_STOPPED)) return;
// Do not take the picture if there is not enough storage.
    if (mStorageSpace <= Storage.LOW_STORAGE_THRESHOLD) {
        Log.i(TAG, "Not enough space or storage not ready. remaining=" + mStorageSpace);
return;
}
    Log.v(TAG, "onShutterButtonClick: mCameraState=" + mCameraState);
    if ((mFocusManager.isFocusingSnapOnFinish() || mCameraState == SNAPSHOT_IN_PROGRESS)
            && !mIsImageCaptureIntent) {
mSnapshotOnIdle = true;
return;
}
mSnapshotOnIdle = false;
mFocusManager.doSnap();
}
This calls FocusManager's doSnap(), which decides whether to capture immediately or wait for focus to finish:
public void doSnap() {
if (!mInitialized) return;
    if (!needAutoFocusCall() || (mState == STATE_SUCCESS || mState == STATE_FAIL)) {
capture();
} else if (mState == STATE_FOCUSING) {
mState = STATE_FOCUSING_SNAP_ON_FINISH;
} else if (mState == STATE_IDLE) {
capture();
}
}
Recording: the video-recording path goes through MediaRecorder. The app creates a MediaRecorder and hands it the camera:
private void initializeRecorder() {
………
    mMediaRecorder = new MediaRecorder();
mMediaRecorder.setCamera(mCameraDevice.getCamera());
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
mMediaRecorder.setProfile(mProfile);
mMediaRecorder.setMaxDuration(mMaxVideoDurationInMs);
}
MediaRecorder's constructor calls into native code:

public MediaRecorder() {
native_setup(new WeakReference<MediaRecorder>(this));
}
static void
android_media_MediaRecorder_native_setup(JNIEnv *env, jobject thiz, jobject weak_this)
{
ALOGV("setup");
    sp<MediaRecorder> mr = new MediaRecorder();
return;
}
MediaRecorder::MediaRecorder() : mSurfaceMediaSource(NULL)
{
ALOGV("constructor");
    const sp<IMediaPlayerService>& service(getMediaPlayerService());
if(service != NULL) {
mMediaRecorder = service->createMediaRecorder(getpid());
}
if(mMediaRecorder != NULL) {
mCurrentState = MEDIA_RECORDER_IDLE;
}
doCleanUp();
}
On the server side, MediaPlayerService creates a MediaRecorderClient, which wraps a StagefrightRecorder:

MediaRecorderClient::MediaRecorderClient(const sp<MediaPlayerService>& service, pid_t pid)
{
ALOGV("Client constructor");
mPid = pid;
mRecorder = new StagefrightRecorder;
mMediaPlayerService = service;
}
setCamera() likewise descends through JNI to the native recorder:

static void android_media_MediaRecorder_setCamera(JNIEnv* env, jobject thiz, jobject camera)
{
    // we should not pass a null camera to get_native_camera() call.
if(camera == NULL) {
        jniThrowNullPointerException(env, "camera object is a NULL pointer");
return;
}
sp<Camera> c = get_native_camera(env, camera, NULL);
sp<MediaRecorder> mr = getMediaRecorder(env, thiz);
    process_media_recorder_call(env, mr->setCamera(c->remote(), c->getRecordingProxy()),
"java/lang/RuntimeException", "setCamera failed.");
}
status_t MediaRecorderClient::setCamera(const sp<ICamera>& camera,
        const sp<ICameraRecordingProxy>& proxy)
{
ALOGV("setCamera");
Mutex::Autolock lock(mLock);
if(mRecorder == NULL) {
ALOGE("recorder is not initialized");
return NO_INIT;
}
return mRecorder->setCamera(camera, proxy);
}
The chain finally ends in StagefrightRecorder::setCamera(), which stores the ICamera and the recording proxy for use when recording starts.