Android Camera APP preview buffer: overall flow

Preface

  • The camera sensor captures a natural image (which lives in an image data buffer); after ISP processing it is shown on the display. This path is the transfer of the buffer from camera to display, one of the key flows in the camera module, and the topic of this post.
  • To understand how the image buffer travels from the camera module to the display module when the Snapdragon camera APP preview feature runs, this post describes the overall flow in detail, including the buffer flow and the role of each component.
  • Four parts are involved: 1. the Snapdragon camera APP, which sends image buffer requests and sets up the SurfaceView; 2. the camera framework and camera service, which process requests and return results; 3. surfaceflinger, which manages the buffer queue; 4. SurfaceView, which creates the display surface and triggers the preview buffers.
  • Figure titles are shown in italics.

Camera preview 总体概括

Fig1 is the Camera2 API call flow provided by Google; the focus here is the camera preview buffer flow marked with the red line, which is introduced below in several parts:
Fig1: camera preview buffer flow

Establish the Snapdragon SurfaceView surface

First, the Snapdragon camera preview feature creates a surface through SurfaceView and uses it to display image buffers. The camera device's createCaptureSession then passes the successfully created surface down. The detailed code flow is analyzed below.
First the SurfaceView is created and initialized, and a state callback is registered on it.

// display the view
mSurfaceView = (AutoFitSurfaceView) mRootView.findViewById(R.id.mdp_preview_content);
mSurfaceHolder = mSurfaceView.getHolder();
mSurfaceHolder.addCallback(callback);
mSurfaceView.addOnLayoutChangeListener(new View.OnLayoutChangeListener() {
    @Override
    public void onLayoutChange(View v, int left, int top, int right,
                               int bottom, int oldLeft, int oldTop, int oldRight,
                               int oldBottom) {
        int width = right - left;
        int height = bottom - top;
        if (mFaceView != null) {
            mFaceView.onSurfaceTextureSizeChanged(width, height);
        }
        if (mStatsNNFocusRenderer != null) {
            mStatsNNFocusRenderer.onSurfaceTextureSizeChanged(width, height);
        }
        if (mT2TFocusRenderer != null) {
            mT2TFocusRenderer.onSurfaceTextureSizeChanged(width, height);
        }
    }
});

When the SurfaceView has been created successfully, surfaceCreated is called back; previewUIReady() is invoked to notify CaptureModule to start the preview:

private SurfaceHolder.Callback callback = new SurfaceHolder.Callback() {
    // SurfaceHolder callbacks
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        Log.v(TAG, "surfaceCreated");
        mSurfaceHolder = holder;
        previewUIReady();
        if(mTrackingFocusRenderer != null && mTrackingFocusRenderer.isVisible()) {
            mTrackingFocusRenderer.setSurfaceDim(mSurfaceView.getLeft(), 
            	mSurfaceView.getTop(), mSurfaceView.getRight(), mSurfaceView.getBottom());
        }
        if(mT2TFocusRenderer != null && mT2TFocusRenderer.isShown()) {
            mT2TFocusRenderer.setSurfaceDim(mSurfaceView.getLeft(), mSurfaceView.getTop(),
                    mSurfaceView.getRight(), mSurfaceView.getBottom());
        }
        if(mStatsNNFocusRenderer != null && mStatsNNFocusRenderer.isShown()) {
            mStatsNNFocusRenderer.setSurfaceDim(mSurfaceView.getLeft(), mSurfaceView.getTop(),
                    mSurfaceView.getRight(), mSurfaceView.getBottom());
        }
    }
};

Check whether the CaptureUI preview surface view has been created and is usable at this moment, then obtain this preview surface.

Surface surface = null;
try {
    waitForPreviewSurfaceReady();
} catch (RuntimeException e) {
    Log.v(TAG,
            "createSession: normal status occur Time out waiting for surface ");
}
surface = getPreviewSurfaceForSession(id);

Start preparing the preview stream: the display surfaces are collected so they can be configured through create session.

List<Surface> surfaces = mFrameProcessor.getInputSurfaces();
for(Surface surs : surfaces) {
    mPreviewRequestBuilder[id].addTarget(surs);
    list.add(surs);
}
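
The surface list collected above is then handed to the camera device when the session is configured. A minimal sketch of that step, built on the public Camera2 API (mCameraDevice, mCaptureSession and mCameraHandler are illustrative names, not the exact Snapdragon code):

try {
    // Configure the capture session with the collected output surfaces.
    mCameraDevice.createCaptureSession(list, new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(CameraCaptureSession session) {
            // The session now owns the configured output streams; preview can start.
            mCaptureSession = session;
        }

        @Override
        public void onConfigureFailed(CameraCaptureSession session) {
            Log.e(TAG, "createCaptureSession failed");
        }
    }, mCameraHandler);
} catch (CameraAccessException e) {
    Log.e(TAG, "failed to configure capture session", e);
}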

Camera Device Create Preview Session

  • Back to Fig1: after the camera device creates the capture session, it keeps sending requests through setRepeatingRequest to show the preview stream (think of the stream as image buffers circulating continuously). While being dispatched, requests are held in queue structures (repeating request list -> pending request queue -> in-progress queue), which guarantees that the camera HAL always has enough request buffers to process (a minimal sketch follows this list).
  • After the camera device hardware finishes filling the address the image buffer points to, the buffer is delivered through the output stream (configured outputs) to the surface created by SurfaceView for display. From this point the whole system forms a produce-consume-return loop.
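
As a rough illustration of how the repeating preview request is issued once the session is configured (a hedged sketch on the public Camera2 API; mPreviewSurface, mCaptureSession and mCameraHandler are assumed names):

try {
    // Build a preview request and add the SurfaceView surface as its output target.
    CaptureRequest.Builder builder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(mPreviewSurface);

    // setRepeatingRequest keeps resubmitting this request, so the camera HAL always has
    // request buffers to work on (repeating request list -> pending -> in-progress queue).
    mCaptureSession.setRepeatingRequest(builder.build(), null /* no per-frame callback */,
            mCameraHandler);
} catch (CameraAccessException e) {
    Log.e(TAG, "failed to start preview", e);
}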

Camera preview detailed flow

Fig2 shows the overall flow of displaying the camera preview image and the relationships among the components; it is divided into three layers in total.

  • The top Java layer contains the Java APP and Java framework, including the Snapdragon camera APP (not drawn), the SurfaceView and the display area.
  • The middle native layer includes the camera service process and the surfaceflinger process.
  • The bottom HAL layer, with HIDL acting only as the interconnect layer, contains the camera provider process, which in turn contains the Camx module.
    Fig2: camera preview overall image display flow

Camera Buffer Flow

Because Fig2 is complex, the camera buffer flow is introduced first to sketch the main line: the camera sensor captures the image, it is processed, and it is transferred to the display for presentation. Details are filled in afterwards.
In terms of how they are used, camera buffers fall into two categories: output stream buffers and input stream buffers. The subject of "output" and "input" is the camera hardware: whatever comes out of the camera HAL is an output stream.

  • An output buffer is an empty buffer: while the request carries it down to the HAL it holds no data; the HAL fills the buffer address it was given with the generated data and finally returns it to the framework through a callback.
  • An input buffer holds real content and is sent to the HAL layer for processing through the reprocess flow. An example of reprocess use is an HDR algorithm: a native-layer algorithm needs raw images to process and merge, and it obtains them through an output stream. After processing, the raw image is sent back to the HAL layer through an input stream so the ISP converts it to YUV and runs the remaining ISP algorithm processing. Working on raw images in the native layer gives higher precision with no loss of detail, but the native layer cannot hand a raw image to the Java layer directly; it needs the ISP hardware to convert it, hence the reprocess path back to the HAL (see the hedged sketch after this list).
  • One more fundamental difference between input and output buffers at the code level: an input stream, acting as a consumer, acquires graphic buffers from the buffer queue (the consumer truly consumes them). An output stream dequeues graphic buffers from the buffer queue; although the code also calls it a consumer (as an alias), it does not really consume them, it hands them to the HAL to be filled with sensor image data, which is in fact producing buffer content (a producer generating content).
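
To make the output/input distinction concrete, the sketch below sets up a reprocessable session with the public Camera2 API (hedged: width, height, the surfaces, the handler and the chosen format are assumptions, the Snapdragon HDR path differs in detail, and supported input formats depend on device capabilities):

try {
    // The input stream carries already-filled buffers back into the HAL for reprocessing.
    InputConfiguration inputConfig =
            new InputConfiguration(width, height, ImageFormat.YUV_420_888);

    // Output streams still carry empty buffers that the HAL fills (preview + reprocess result).
    List<Surface> outputs = Arrays.asList(previewSurface, reprocessOutputSurface);

    mCameraDevice.createReprocessableCaptureSession(inputConfig, outputs,
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(CameraCaptureSession session) {
                    // Previously captured frames are written to session.getInputSurface(),
                    // then resubmitted with mCameraDevice.createReprocessCaptureRequest(result).
                    Surface inputSurface = session.getInputSurface();
                }

                @Override
                public void onConfigureFailed(CameraCaptureSession session) { }
            }, mCameraHandler);
} catch (CameraAccessException e) {
    Log.e(TAG, "reprocess session setup failed", e);
}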

Fig3 shows a producer-consumer model in which the buffer circulates through the queue (for details see: BufferQueue 学习总结); a Java-level view of the same acquire/release cycle follows Fig3.
Fig3: producer-consumer model
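
The SurfaceView preview path does not go through ImageReader, but ImageReader is the easiest place to see the same BufferQueue semantics from Java: its Surface is the producer end handed to the camera, and acquire/close on the consumer side correspond to acquireBuffer/releaseBuffer in Fig3 (an illustrative aside; the size, format and handler below are assumptions):

// Consumer end: up to 4 buffers circulate through the underlying BufferQueue.
ImageReader reader = ImageReader.newInstance(1920, 1080, ImageFormat.YUV_420_888, 4);

// Producer end: this Surface is what a capture request would target.
Surface producerSurface = reader.getSurface();

reader.setOnImageAvailableListener(r -> {
    // acquire = take a filled buffer out of the queue.
    Image image = r.acquireLatestImage();
    if (image != null) {
        // ... consume the image data ...
        // close = release the buffer back to the queue so the producer can dequeue it again.
        image.close();
    }
}, handler);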

Applied to the relationship among camera service, surfaceflinger and SurfaceView, the figure turns into Fig4.
The real producer is the native ANativeWindow object created by SurfaceView; the native window creates the preview stream buffers, and there is a dequeue/queue buffer loop between the surface and camera service. The camera service output stream acts as the bridge in the middle: it reaches surfaceflinger through binder calls, and surfaceflinger manages the preview graphic buffers with the BufferQueue structure.
Fig4: producer-consumer model among camera service, surfaceflinger and SurfaceView

This post only covers the path from camera capture to display, so only the one-way flow in Fig4 from the surface and camera service to surfaceflinger is analyzed.

Camera Service dequeue buffer

The call flow is shown in Fig5. Once Camera3Device's threadLoop starts running it loops forever, continuously issuing camera preview requests. Camera3OutputStream dequeues the buffer directly through ANativeWindow, then passes the buffer handle down so the camera provider fills image data into the allocated buffer address.

status_t Camera3OutputStream::getBufferLockedCommon(ANativeWindowBuffer** anb, int* fenceFd) {
    ATRACE_HFR_CALL();
    status_t res;

    if ((res = getBufferPreconditionCheckLocked()) != OK) {
        return res;
    }

    bool gotBufferFromManager = false;

    if (mUseBufferManager) {
        sp<GraphicBuffer> gb;
        res = mBufferManager->getBufferForStream(getId(), getStreamSetId(), &gb, fenceFd);
        if (res == OK) {
            // Attach this buffer to the bufferQueue: the buffer will be in dequeue state after a
            // successful return.
            *anb = gb.get();
            res = mConsumer->attachBuffer(*anb);
            if (shouldLogError(res, mState)) {
                ALOGE("%s: Stream %d: Can't attach the output buffer to this surface: %s (%d)",
                        __FUNCTION__, mId, strerror(-res), res);
            }
            if (res != OK) {
                checkRetAndSetAbandonedLocked(res);
                return res;
            }
            gotBufferFromManager = true;
            ALOGV("Stream %d: Attached new buffer", getId());
        } else if (res == ALREADY_EXISTS) {
            // Have sufficient free buffers already attached, can just
            // dequeue from buffer queue
            ALOGV("Stream %d: Reusing attached buffer", getId());
            gotBufferFromManager = false;
        } else if (res != OK) {
            ALOGE("%s: Stream %d: Can't get next output buffer from buffer manager: %s (%d)",
                    __FUNCTION__, mId, strerror(-res), res);
            return res;
        }
    }
    if (!gotBufferFromManager) {
        sp<ANativeWindow> currentConsumer = mConsumer;
        mLock.unlock();

        nsecs_t dequeueStart = systemTime(SYSTEM_TIME_MONOTONIC);
        res = currentConsumer->dequeueBuffer(currentConsumer.get(), anb, fenceFd);
        nsecs_t dequeueEnd = systemTime(SYSTEM_TIME_MONOTONIC);
        mDequeueBufferLatency.add(dequeueStart, dequeueEnd);

        mLock.lock();

        if (mUseBufferManager && res == TIMED_OUT) {
            checkRemovedBuffersLocked();

            sp<GraphicBuffer> gb;
            res = mBufferManager->getBufferForStream(
                    getId(), getStreamSetId(), &gb, fenceFd, /*noFreeBuffer*/true);

            if (res == OK) {
                // Attach this buffer to the bufferQueue: the buffer will be in dequeue state after
                // a successful return.
                *anb = gb.get();
                res = mConsumer->attachBuffer(*anb);
                gotBufferFromManager = true;
                ALOGV("Stream %d: Attached new buffer", getId());

                if (res != OK) {
                    if (shouldLogError(res, mState)) {
                        ALOGE("%s: Stream %d: Can't attach the output buffer to this surface:"
                                " %s (%d)", __FUNCTION__, mId, strerror(-res), res);
                    }
                    checkRetAndSetAbandonedLocked(res);
                    return res;
                }
            } else {
                ALOGE("%s: Stream %d: Can't get next output buffer from buffer manager:"
                        " %s (%d)", __FUNCTION__, mId, strerror(-res), res);
                return res;
            }
        } else if (res != OK) {
            if (shouldLogError(res, mState)) {
                ALOGE("%s: Stream %d: Can't dequeue next output buffer: %s (%d)",
                        __FUNCTION__, mId, strerror(-res), res);
            }
            checkRetAndSetAbandonedLocked(res);
            return res;
        }
    }

    if (res == OK) {
        checkRemovedBuffersLocked();
    }

    return res;
}

In the scenario of this post, getBufferLockedCommon runs with mUseBufferManager == FALSE. Output buffers can be allocated in two ways, by Camera3BufferManager or by ANativeWindow, and the choice is made in the Snapdragon APP:

  • The Snapdragon APP calls the OutputConfiguration(@NonNull Surface surface) constructor, so surfaceGroupId keeps its default value SURFACE_GROUP_ID_NONE (-1), mUseBufferManager is FALSE, and the buffer manager is not used to allocate buffers (a sketch of the two constructors follows Fig5).
  • Preview buffers are allocated directly through ANativeWindow, so the if (!gotBufferFromManager) branch runs.
    Fig5: Camera Service dequeue buffer
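
For reference, the two app-side choices look roughly like this (a hedged sketch; previewSurface, stateCallback and mCameraHandler are assumed names, and whether the buffer manager is actually used also depends on HAL support):

// Default path used by the Snapdragon APP: surfaceGroupId keeps SURFACE_GROUP_ID_NONE (-1),
// so mUseBufferManager stays false and preview buffers come from ANativeWindow.
OutputConfiguration previewConfig = new OutputConfiguration(previewSurface);

// Alternative constructor: streams sharing a non-negative surfaceGroupId are candidates for
// Camera3BufferManager allocation.
OutputConfiguration groupedConfig = new OutputConfiguration(/* surfaceGroupId */ 0, previewSurface);

try {
    mCameraDevice.createCaptureSessionByOutputConfigurations(
            Arrays.asList(previewConfig), stateCallback, mCameraHandler);
} catch (CameraAccessException e) {
    Log.e(TAG, "session configuration failed", e);
}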

The graphic buffer is transmitted over binder to the surfaceflinger process

After the camera service obtains the image buffer that the camera provider has filled, it must hand it to the display module for presentation, going through the call sequences in Fig6 and Fig7.
In the cameraserver process this is initiated by the surface module (why the surface is the initiator is explained later). hook_queueBuffer calls into Surface, and through the binder transact/onTransact pair the graphic buffer allocated from ANativeWindow is transferred across processes to the surfaceflinger process. surfaceflinger then puts the buffer into the BufferQueue structure for management, waiting for the surfaceflinger process to acquire and use it.

  • Note: mGraphicBufferProducer is initialized in the Surface constructor and actually points to a BpGraphicBufferProducer object. The queueBuffer invoked by BnGraphicBufferProducer::onTransact is the queueBuffer function of class BufferQueueProducer, a subclass of BnGraphicBufferProducer; at runtime the call always goes through the base-class interface into the instantiated subclass.

Fig6: the graphic buffer is transmitted over binder to the surfaceflinger process

The native surface created by SurfaceView is the real producer, and the BufferQueueLayer that surfaceflinger builds for the layer is the real consumer. BufferQueueLayer calls updateTexImage to acquire a buffer from the BufferQueue, and the bindTextureImageLocked function binds the new buffer to the GL texture (only after the texture is attached is it sent to the GPU for rendering); a Java-level sketch of the same mechanism follows Fig7.
Fig7: BufferQueueLayer consumes the buffer
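
SurfaceFlinger's GLConsumer is not reachable from app code, but SurfaceTexture exposes the same acquire-and-bind-to-GL-texture step to Java, which may help make updateTexImage concrete (a hedged analogy, not the SurfaceFlinger code path; a current GL context is assumed):

// Create a GL texture and wrap it in a SurfaceTexture; the SurfaceTexture owns a BufferQueue.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
Surface producerSurface = new Surface(surfaceTexture);   // producer end, e.g. a preview target

surfaceTexture.setOnFrameAvailableListener(st -> {
    // Counterpart of onFrameAvailable in Fig7: the producer queued a new buffer.
});

// Later, on the thread owning the GL context:
surfaceTexture.updateTexImage();                 // acquire the newest buffer and bind it to the texture
float[] texMatrix = new float[16];
surfaceTexture.getTransformMatrix(texMatrix);    // per-buffer transform to apply when sampling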

Display module displays the image data

The content above covered the preview buffer transfer from the cameraserver process to the surfaceflinger process; finally, this section explains how surfaceflinger gets the allocated graphic buffer onto the display.
The refresh of the preview UI is driven by the VSYNC mechanism; VSYNC, as the heart of the whole machinery, keeps refreshing the preview UI, as shown in Fig 8.

  • First the CPU rendering path: the APP renders with the CPU; after the surface finishes drawing, it requests an ANativeWindow buffer from the BufferQueue through the lock function and then puts it back (queueBuffer) into the BufferQueue through the unlockAndPost function; VSYNC sends the trigger signal for the surfaceflinger process to consume it (a Java-level sketch of this cycle follows this list).
  • Then the more complex GPU rendering path: the VSYNC core sends an event message to BitTube, BitTube sends a socket message over mSendFd to notify the listeners, and surfaceflinger listens for the mSendFd message through mReceiveFd. surfaceflinger, as the consumer, acquires a graphic buffer from the BufferQueue, attaches a texture to the buffer and sends it to the GPU (the onDraw function) for rendering. After rendering finishes, two things happen:
    - First: Surface, as the producer, queues the buffer back into the BufferQueue structure for management;
    - Second: after GPU rendering completes, onFrameAvailable notifies surfaceflinger that a buffer is available; SurfaceFlinger then calls requestNextVsync through its internal MessageQueue to request the next VSYNC for composition. When the next VSYNC arrives, MessageQueue's handleMessage is called back, which actually lands in SurfaceFlinger's onMessageReceived, where the REFRESH message is handled; finally FramebufferSurface, as the consumer, acquires the graphic buffer from the BufferQueue.
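
The lock/unlockAndPost pair of the CPU path maps to ANativeWindow_lock/ANativeWindow_unlockAndPost natively; from Java the same dequeue-draw-queue cycle looks roughly like this (an illustrative sketch using SurfaceHolder, not code taken from the camera flow):

// One CPU-rendered frame: dequeue a buffer, draw into it, queue it back.
Canvas canvas = mSurfaceHolder.lockCanvas();        // dequeueBuffer + map it for CPU drawing
if (canvas != null) {
    try {
        canvas.drawColor(Color.BLACK);              // ... draw the frame with the CPU ...
    } finally {
        mSurfaceHolder.unlockCanvasAndPost(canvas); // queueBuffer: hand it to surfaceflinger
    }
}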

Whether the rendering is done by the CPU or the GPU, the result is finally passed to the HW Composer for layer composition: GPU rendering goes through onFrameTarget to the FrameBufferTarget layer, while CPU rendering goes to an Overlay layer. After HWC finishes, the result is set to the display module for presentation.

  • HWC is the HAL module in Android responsible for window layer composition and display. The HWC is usually implemented by the display device manufacturer (OEM) and provides hardware support for the surfaceflinger service.

Fig8: display module graphic buffer flow

SurfaceView module

The main flow of Fig2 has basically been covered; this part explains how the SurfaceView module connects to the display module and the surfaceflinger process.

SurfaceView is a special kind of View that can draw its UI on a worker thread. It has a surface independent of the application window and is mainly used for complex, time-consuming UI drawing such as video playback and camera preview.

SurfaceView creates the BufferQueueLayer

The SurfaceView module creates the SurfaceControl in its updateSurface function; it is then passed into the WindowManagerService process where the concrete initialization is done.

protected void updateSurface() {
    // ... (excerpt)
    if (creating) {
        viewRoot.createBoundsSurface(mSubLayer);
        mSurfaceSession = new SurfaceSession();
        mDeferredDestroySurfaceControl = mSurfaceControl;

        updateOpaqueFlag();
        final String name = "SurfaceView - " + viewRoot.getTitle().toString();

        mSurfaceControl = new SurfaceControl.Builder(mSurfaceSession)
                .setName(name)
                .setOpaque((mSurfaceFlags & SurfaceControl.OPAQUE) != 0)
                .setBufferSize(mSurfaceWidth, mSurfaceHeight)
                .setFormat(mFormat)
                .setParent(viewRoot.getSurfaceControl())
                .setFlags(mSurfaceFlags)
                .build();
        mBackgroundControl = new SurfaceControl.Builder(mSurfaceSession)
                .setName("Background for -" + name)
                .setOpaque(true)
                .setColorLayer()
                .setParent(mSurfaceControl)
                .build();

    } else if (mSurfaceControl == null) {
        return;
    }
    // ...
}

The SurfaceControl object is created through SurfaceControl's inner Builder class, which also stores the SurfaceControl's name, buffer size, format, flags and so on.

public SurfaceControl build() {
    if (mWidth < 0 || mHeight < 0) {
        throw new IllegalStateException(
                "width and height must be positive or unset");
    }
    if ((mWidth > 0 || mHeight > 0) && (isEffectLayer() || isContainerLayer())) {
        throw new IllegalStateException(
                "Only buffer layers can set a valid buffer size.");
    }
    return new SurfaceControl(
            mSession, mName, mWidth, mHeight, mFormat, mFlags, mParent, mMetadata,
            mLocalOwnerView, mCallsite);
}

The SurfaceControl constructor calls the createSurface function of SurfaceComposerClient::mClient. mClient is the Bp end of the ISurfaceComposerClient binder; the Bn end is the Client object in the SurfaceFlinger process, so the concrete implementation of createSurface lives in surfaceflinger.

// SurfaceComposerClient::createSurfaceChecked
err = mClient->createSurface(name, w, h, format, flags, parentHandle, std::move(metadata),
                             &handle, &gbp, &transformHint);
*outSurface = new SurfaceControl(this, handle, gbp, transformHint);

status_t Client::createSurface(const String8& name, uint32_t w, uint32_t h, PixelFormat format,
                               uint32_t flags, const sp<IBinder>& parentHandle,
                               LayerMetadata metadata, sp<IBinder>* handle,
                               sp<IGraphicBufferProducer>* gbp, uint32_t* outTransformHint) {
    // We rely on createLayer to check permissions.
    return mFlinger->createLayer(name, this, w, h, format, flags, std::move(metadata), handle, gbp,
                                 parentHandle, nullptr, outTransformHint);
}

// SurfaceFlinger::createLayer
result = createBufferQueueLayer(client, std::move(uniqueName), w, h, flags, std::move(metadata),
                                format, handle, gbp, &layer);

Create native surface

After the Layer has been created in the SurfaceFlinger process as above, the Surface is created with new; the handle parameter holds a reference to the created Layer.

static jlong nativeGetFromSurfaceControl(JNIEnv* env, jclass clazz,
        jlong nativeObject,
        jlong surfaceControlNativeObj) {
    Surface* self(reinterpret_cast<Surface *>(nativeObject));
    sp<SurfaceControl> ctrl(reinterpret_cast<SurfaceControl *>(surfaceControlNativeObj));
    sp<Surface> surface(ctrl->getSurface());
    if (surface != NULL) {
        surface->incStrong(&sRefBaseOwner);
    }
    return reinterpret_cast<jlong>(surface.get());
}

The first time the Surface is needed, SurfaceControl's getSurface function is called to create it.

sp<Surface> SurfaceControl::getSurface() const
{
    Mutex::Autolock _l(mLock);
    if (mSurfaceData == nullptr) {
        return generateSurfaceLocked();
    }
    return mSurfaceData;
}
sp<Surface> SurfaceControl::generateSurfaceLocked() const
{
    // This surface is always consumed by SurfaceFlinger, so the
    // producerControlledByApp value doesn't matter; using false.
    mSurfaceData = new Surface(mGraphicBufferProducer, false);
    return mSurfaceData;
}

Once the native-layer SurfaceControl has been created, it can be used to create the native-layer Surface object; finally the native Surface pointer is stored in the Java-layer Surface.

Set preview UI Window

SurfaceView's onAttachedToWindow function first calls the parent View's onAttachedToWindow. Because the SurfaceView surface window sits below the GUI window by default, a transparent region has to be set for it to be visible to the user on the display ("The surface is Z ordered so that it is behind the window holding its SurfaceView; the SurfaceView punches a hole in its window to allow its surface to be displayed."), which is done with the requestTransparentRegion function; a note on adjusting this Z order from the app follows the code below.

  protected void onAttachedToWindow() {
        super.onAttachedToWindow();
        getViewRootImpl().addSurfaceChangedCallback(this);
        mWindowStopped = false;
        mViewVisibility = getVisibility() == VISIBLE;
        updateRequestedVisibility();
        mAttachedToWindow = true;
        mParent.requestTransparentRegion(SurfaceView.this);
        if (!mGlobalListenersAdded) {
            ViewTreeObserver observer = getViewTreeObserver();
            observer.addOnScrollChangedListener(mScrollChangedListener);
            observer.addOnPreDrawListener(mDrawListener);
            mGlobalListenersAdded = true;
        }
    }
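
The hole punching only applies while the surface stays behind its window; if an app needs the SurfaceView surface composited differently, it can change the Z order explicitly (a hedged aside, not something the preview flow itself requires):

// Place the surface above other SurfaceView surfaces in the window, but still behind the window.
mSurfaceView.setZOrderMediaOverlay(true);

// Or place it on top of the window entirely; no transparent-region hole is needed then,
// but regular View content can no longer be drawn over the preview.
mSurfaceView.setZOrderOnTop(true);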

Additional notes

  • The flow in Fig4 is easy to follow logically, but a few questions remain: why is there a dequeue/queue buffer loop between camera service and the surface? Are the acquire/release interfaces of the buffer circulation not used? And where do the buffers inside the Surface come from?
  • The complete buffer flow is shown in Fig 9: surfaceflinger creates the BufferQueueLayer, the BufferQueueLayer creates the BufferQueue structure, and the BufferQueue hands buffers to the Surface over binder. Because the Surface and the camera service are both in the cameraserver process, the buffer can be passed along directly; after the camera service gets the buffer it has the camera HAL fill in the image data through the HIDL interface, and finally returns the buffer to the BufferQueue.
  • The producer-consumer model is just the BufferQueue's own circulation; it lets surfaceflinger manage the buffer objects more cleanly.
  • The whole flow chart applies only to the SurfaceView path; other views may well behave differently.
    Fig 9: complete buffer flow model

Summary

  • This post walks through the whole code flow from capturing an image with the camera to showing it on the display, mainly to understand the end-to-end path and to lay a foundation for a dual-camera display project I want to do later. Talking with my lead afterwards, I learned that doing the camera module well also requires knowing the display module and the gallery module; there really is a lot. Recently I have been working on moving an algorithm up the stack, and I hope to share that in the future.
  • If anything here is wrong, please point it out in a comment; if you found it useful, a like and a follow are appreciated. Thanks!

References

SnapdragonCamera源码分析(三)createSessions & startPreview
深入浅出CameraServer的Buffer管理机制
Android 源码 Camera2 预览流程分析二
AndroidQ 图形系统(11)UI刷新,SurfaceFlinger,Vsync机制总结
AndroidQ 图形系统(1)Surface与SurfaceControl创建分析
