A Brief Analysis of the Hardware Drawing Flow of Views in Android

Introduction

Android phones are often criticized for offering a worse user experience than iOS devices, particularly in terms of smoothness. To improve rendering performance, Android has supported hardware acceleration since Android 3.0 and has enabled it by default since Android 4.0. Android 4.1 introduced VSYNC and triple buffering to mitigate screen tearing and dropped frames, and Android 4.2 added an overdraw detection tool to help analyze and optimize overdraw. To push rendering performance further, Android 5.0 introduced RenderNode and RenderThread to reduce unnecessary redrawing and lighten the main thread's load, and Android 7.0 added the Vulkan hardware rendering backend to improve 3D graphics performance.

In "A Brief Analysis of the Software Drawing Flow of Views in Android" we already analyzed how software drawing is implemented; this article walks through the hardware drawing flow of a View alongside the source code.

The Hardware Drawing Flow

The key to how hardware drawing reduces the main thread's load is ThreadedRenderer: it moves the drawing work onto a separate thread, lightening the main thread and easing the dropped-frame problem.

We will start from the creation of the ThreadedRenderer. If you have studied the Activity launch flow, you know that after Activity#onResume completes, WindowManager#addView is called to associate the DecorView with the Window. A ViewRootImpl object is then created, and ViewRootImpl#setView associates the DecorView with the ViewRootImpl; at this point a ThreadedRenderer instance is created for the subsequent hardware drawing.

1. Creating the ThreadedRenderer

public final class ViewRootImpl implements ViewParent, View.AttachInfo.Callbacks, ThreadedRenderer.DrawCallbacks, AttachedSurfaceControl {
	public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView, int userId) {
		synchronized (this) {
            if (mView == null) {
                mView = view;
				// ...
	
				// mSurfaceHolder == null
                if (mSurfaceHolder == null) {
                    // While this is supposed to enable only, it can effectively disable
                    // the acceleration too.
                    enableHardwareAcceleration(attrs);
                    final boolean useMTRenderer = MT_RENDERER_AVAILABLE && mAttachInfo.mThreadedRenderer != null;
                    if (mUseMTRenderer != useMTRenderer) {
                        // Shouldn't be resizing, as it's done only in window setup,
                        // but end just in case.
                        endDragResizing();
                        mUseMTRenderer = useMTRenderer;
                    }
                }
				// ...		
			}
		}
	}

	private void enableHardwareAcceleration(WindowManager.LayoutParams attrs) {
        mAttachInfo.mHardwareAccelerated = false;
        mAttachInfo.mHardwareAccelerationRequested = false;

        // Don't enable hardware acceleration when the application is in compatibility mode
        if (mTranslator != null) return;

        // Try to enable hardware acceleration if requested
        final boolean hardwareAccelerated = (attrs.flags & WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED) != 0;

        if (hardwareAccelerated) {
            final boolean forceHwAccelerated = (attrs.privateFlags & WindowManager.LayoutParams.PRIVATE_FLAG_FORCE_HARDWARE_ACCELERATED) != 0;
			// Hardware acceleration is enabled
            if (ThreadedRenderer.sRendererEnabled || forceHwAccelerated) {
                if (mAttachInfo.mThreadedRenderer != null) {
                    mAttachInfo.mThreadedRenderer.destroy();
                }

                final Rect insets = attrs.surfaceInsets;
                final boolean hasSurfaceInsets = insets.left != 0 || insets.right != 0 || insets.top != 0 || insets.bottom != 0;
                final boolean translucent = attrs.format != PixelFormat.OPAQUE || hasSurfaceInsets;
                // 1. Create the ThreadedRenderer object and assign it to mAttachInfo.mThreadedRenderer
                mAttachInfo.mThreadedRenderer = ThreadedRenderer.create(mContext, translucent, attrs.getTitle().toString());
                updateColorModeIfNeeded(attrs.getColorMode());
                updateForceDarkMode();
                if (mAttachInfo.mThreadedRenderer != null) {
                	// 2. Update the hardware-acceleration-related fields
                    mAttachInfo.mHardwareAccelerated = mAttachInfo.mHardwareAccelerationRequested = true;
                    if (mHardwareRendererObserver != null) {
                        mAttachInfo.mThreadedRenderer.addObserver(mHardwareRendererObserver);
                    }
                    // 3. Hand mSurfaceControl and mBlastBufferQueue to mThreadedRenderer for the upcoming rendering
                    mAttachInfo.mThreadedRenderer.setSurfaceControl(mSurfaceControl);
                    mAttachInfo.mThreadedRenderer.setBlastBufferQueue(mBlastBufferQueue);
                }

				// ...
            }
        }
    }
	// ...
}

As the source shows, the ThreadedRenderer object is created before any VSYNC signal is even scheduled, and is stored in mAttachInfo; the hardware-acceleration-related fields in mAttachInfo are updated, and finally mSurfaceControl and mBlastBufferQueue are handed to mThreadedRenderer for the upcoming rendering. Next, let's see what the creation of a ThreadedRenderer actually does.

/**
 * ThreadedRenderer moves the rendering work onto a render thread; the UI thread may block on the render thread, but not the other way around.
 * ThreadedRenderer creates a RenderProxy instance; the RenderProxy creates and manages a CanvasContext on the render thread, whose lifetime is fully owned by the RenderProxy.
 */
public final class ThreadedRenderer extends HardwareRenderer {
    /**
     * Creates a ThreadedRenderer instance, using the OpenGL backend by default.
     */
    public static ThreadedRenderer create(Context context, boolean translucent, String name) {
        return new ThreadedRenderer(context, translucent, name);
    }

	ThreadedRenderer(Context context, boolean translucent, String name) {
		// Invokes the HardwareRenderer constructor
        super();
        setName(name);
        setOpaque(!translucent);

        final TypedArray a = context.obtainStyledAttributes(null, R.styleable.Lighting, 0, 0);
        mLightY = a.getDimension(R.styleable.Lighting_lightY, 0);
        mLightZ = a.getDimension(R.styleable.Lighting_lightZ, 0);
        mLightRadius = a.getDimension(R.styleable.Lighting_lightRadius, 0);
        float ambientShadowAlpha = a.getFloat(R.styleable.Lighting_ambientShadowAlpha, 0);
        float spotShadowAlpha = a.getFloat(R.styleable.Lighting_spotShadowAlpha, 0);
        a.recycle();
        setLightSourceAlpha(ambientShadowAlpha, spotShadowAlpha);
    }
}

This is mostly member-variable initialization. Since ThreadedRenderer extends HardwareRenderer, let's look at HardwareRenderer's constructor.

/**
 * Creates a hardware-accelerated renderer that renders RenderNodes onto a Surface.
 * All HardwareRenderer instances share a single render thread, which holds the GPU context and resources needed for GPU-accelerated rendering.
 * Creating the first HardwareRenderer also pays the cost of creating that context; every subsequent instance is cheap to create.
 */
public class HardwareRenderer {
	protected RenderNode mRootNode; // the root render node
	private final long mNativeProxy; // handle to the native render proxy
	
    public HardwareRenderer() {
    	// Initialize the Context
        ProcessInitializer.sInstance.initUsingContext();
        // 1. Create a RootRenderNode in the native layer and wrap the returned handle in a Java-level RenderNode (the root node)
        mRootNode = RenderNode.adopt(nCreateRootRenderNode());
        mRootNode.setClipToBounds(false);
        // 2. Call nCreateProxy to create a render proxy in the native layer and return its handle
        mNativeProxy = nCreateProxy(!mOpaque, mRootNode.mNativeRenderNode);
        if (mNativeProxy == 0) {
            throw new OutOfMemoryError("Unable to create hardware renderer");
        }
        Cleaner.create(this, new DestroyContextRunnable(mNativeProxy));
        // 3. Initialize process-wide state using the native render proxy
        ProcessInitializer.sInstance.init(mNativeProxy);
    }

    private static native long nCreateRootRenderNode();

    public static RenderNode adopt(long nativePtr) {
        return new RenderNode(nativePtr);
    }

	private static native long nCreateProxy(boolean translucent, long rootRenderNode);

	private static class ProcessInitializer {
		
		synchronized void init(long renderProxy) {
            if (mInitialized) return;
            mInitialized = true;
			// Initialize the render thread's scheduling info
            initSched(renderProxy);
            // Request a buffer and pass its fd down to the native layer
            initGraphicsStats();
        }
	}
}

From the source we can see that creating a ThreadedRenderer mainly involves:

  • creating the root render node, i.e. the native-level root render node;
  • creating the render proxy, i.e. the native-level proxy object.
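Both steps follow the same Java-shell / native-handle pattern: the Java object only stores an opaque long returned from JNI. A minimal sketch of that pattern (all names hypothetical, not the framework classes):

```java
// Sketch of the Java-shell / native-handle pattern used by RenderNode and
// HardwareRenderer: the Java object stores an opaque long returned by a
// native constructor and forwards all real work through it. All names here
// are hypothetical.
final class NativeRenderNode {
    // Stand-in for the JNI call that allocates the C++ object and
    // returns its address as a handle.
    private static long nCreate() { return 0xCAFEL; }

    final long nativeHandle;

    private NativeRenderNode(long handle) { this.nativeHandle = handle; }

    // Mirrors RenderNode.adopt(nCreateRootRenderNode()): wrap an
    // already-created native object instead of constructing a new one.
    static NativeRenderNode adopt(long handle) {
        return new NativeRenderNode(handle);
    }

    public static void main(String[] args) {
        NativeRenderNode root = NativeRenderNode.adopt(nCreate());
        System.out.println(Long.toHexString(root.nativeHandle)); // prints "cafe"
    }
}
```

The Java side stays a thin wrapper; all state and behavior live behind the handle, which is why the framework can swap pipeline implementations in native code without touching the Java API.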

1.1 Creating the Native Root Render Node

public class HardwareRenderer {
	protected RenderNode mRootNode; // the root render node
	// ...
	
	public HardwareRenderer() {
    	// Initialize the Context
        ProcessInitializer.sInstance.initUsingContext();
        // 1. Create a RenderNode in the native layer and wrap the returned handle in a Java-level RenderNode (the root node)
        mRootNode = RenderNode.adopt(nCreateRootRenderNode());
        mRootNode.setClipToBounds(false);
        // 2. Call nCreateProxy to create a render proxy in the native layer and return its handle
        // ...
        // 3. Initialize process-wide state using the native render proxy
        // ...
    }
    
	private static native long nCreateRootRenderNode();
}

public final class RenderNode {
    // handle to the native-level RenderNode
    public final long mNativeRenderNode;

    private RenderNode(long nativePtr) {
        mNativeRenderNode = nativePtr;
        NoImagePreloadHolder.sRegistry.registerNativeAllocation(this, mNativeRenderNode);
        mAnimationHost = null;
    }
	// ...
}

As we can see, the Java-level RenderNode is really just a shell: the actual implementation lives in the native layer, and nCreateRootRenderNode crosses into native code to create the RootRenderNode.

// frameworks/base/libs/hwui/jni/android_graphics_HardwareRenderer.cpp
static jlong android_view_ThreadedRenderer_createRootRenderNode(JNIEnv* env, jobject clazz) {
    RootRenderNode* node = new RootRenderNode(std::make_unique<JvmErrorReporter>(env));
    node->incStrong(0);
    node->setName("RootRenderNode");
    return reinterpret_cast<jlong>(node);
}

// frameworks/base/libs/hwui/RootRenderNode.h
class RootRenderNode : public RenderNode {
public:
    explicit RootRenderNode(std::unique_ptr<ErrorHandler> errorHandler)
            : RenderNode(), mErrorHandler(std::move(errorHandler)) {}
	// ...
}

// frameworks/base/libs/hwui/RenderNode.cpp
RenderNode::RenderNode()
        : mUniqueId(generateId())
        , mDirtyPropertyFields(0)
        , mNeedsDisplayListSync(false)
        , mDisplayList(nullptr)
        , mStagingDisplayList(nullptr)
        , mAnimatorManager(*this)
        , mParentCount(0) {}

This ultimately creates a native RootRenderNode object and returns its handle to the Java-level RenderNode, which stores it in its mNativeRenderNode field. The native RenderNode constructor also reveals two key member variables, mDisplayList and mStagingDisplayList; they are involved in the later DisplayList synchronization and will come up again. Next, let's look at how the render proxy is created.
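The role of those two fields can be modeled with a small double-buffering sketch (field names borrowed from the native RenderNode; the logic is an illustrative simplification, not the hwui implementation): the UI thread records into a staging list, and the render thread later promotes it during the sync stage.

```java
import java.util.List;

// Simplified model of the native RenderNode's mStagingDisplayList /
// mDisplayList pair. Display list "ops" are just strings here.
final class DisplayListNode {
    private List<String> stagingDisplayList; // written by the UI thread
    private List<String> displayList;        // read by the render thread
    private boolean needsSync;

    // UI thread: this is what finishing a recording amounts to.
    void setStagingDisplayList(List<String> ops) {
        stagingDisplayList = ops;
        needsSync = true;
    }

    // Render thread: runs while the UI thread is blocked in the sync stage,
    // so this sketch needs no extra locking.
    void syncDisplayList() {
        if (needsSync) {
            displayList = stagingDisplayList;
            stagingDisplayList = null;
            needsSync = false;
        }
    }

    List<String> activeList() { return displayList; }
}
```

Until the sync happens, the render thread keeps drawing from the old displayList while the UI thread freely rebuilds the staging copy.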

1.2 Creating the Render Proxy

The render proxy is responsible for handling rendering requests from the Java layer. Let's look at how it is created.

public class HardwareRenderer {
	protected RenderNode mRootNode; // the root render node
	private final long mNativeProxy; // handle to the native render proxy
	
    public HardwareRenderer() {
    	// Initialize the Context
        ProcessInitializer.sInstance.initUsingContext();
        // 1. Create a RenderNode in the native layer and wrap the returned handle in a Java-level RenderNode (the root node)
        // ...
        // 2. Call nCreateProxy to create a render proxy in the native layer and return its handle
        mNativeProxy = nCreateProxy(!mOpaque, mRootNode.mNativeRenderNode);
        // ...
        // 3. Initialize process-wide state using the native render proxy
        ProcessInitializer.sInstance.init(mNativeProxy);
    }

	private static native long nCreateProxy(boolean translucent, long rootRenderNode);
}

Creating the native render proxy uses the native RenderNode created earlier.

// frameworks/base/libs/hwui/jni/android_graphics_HardwareRenderer.cpp
static jlong android_view_ThreadedRenderer_createProxy(JNIEnv* env, jobject clazz, jboolean translucent, jlong rootRenderNodePtr) {
	// Recover the RootRenderNode created earlier
    RootRenderNode* rootRenderNode = reinterpret_cast<RootRenderNode*>(rootRenderNodePtr);
    // Create a ContextFactoryImpl that holds the rootRenderNode
    ContextFactoryImpl factory(rootRenderNode);
    // Create the RenderProxy object
    RenderProxy* proxy = new RenderProxy(translucent, rootRenderNode, &factory);
    return (jlong) proxy;
}

// frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
RenderProxy::RenderProxy(bool translucent, RenderNode* rootRenderNode, IContextFactory* contextFactory) : mRenderThread(RenderThread::getInstance()), mContext(nullptr) {
#ifdef __ANDROID__
    pid_t uiThreadId = pthread_gettid_np(pthread_self());
#else
    pid_t uiThreadId = 0;
#endif
    pid_t renderThreadId = getRenderThreadTid();
    mContext = mRenderThread.queue().runSync([=, this]() -> CanvasContext* {
        CanvasContext* context = CanvasContext::create(mRenderThread, translucent, rootRenderNode, contextFactory, uiThreadId, renderThreadId);
        if (context != nullptr) {
            mRenderThread.queue().post([=] { context->startHintSession(); });
        }
        return context;
    });
    mDrawFrameTask.setContext(&mRenderThread, mContext, rootRenderNode);
}

As the source shows, constructing a RenderProxy fetches the render thread via RenderThread::getInstance(), so by default there is only one RenderThread per app process. It then submits a task to the RenderThread to create a CanvasContext object, and finally calls DrawFrameTask#setContext to hand the RenderThread, the CanvasContext, and the RootRenderNode to the DrawFrameTask.
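RenderThread::getInstance() plus queue().runSync(...) amounts to a process-wide single worker thread with both blocking and fire-and-forget task submission. A rough Java model of that behavior (hypothetical names, not the hwui classes):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Rough model of RenderThread plus its task queue: one process-wide worker
// thread, a fire-and-forget post() and a blocking runSync(), mirroring
// mRenderThread.queue().post/runSync. All names here are hypothetical.
final class RenderThreadModel {
    private static final ExecutorService QUEUE =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "RenderThread");
                t.setDaemon(true); // don't keep the JVM alive in this sketch
                return t;
            });

    // Fire-and-forget, like queue().post(...)
    static void post(Runnable task) { QUEUE.execute(task); }

    // Blocking submission, like queue().runSync(...)
    static <T> T runSync(Callable<T> task) {
        try {
            return QUEUE.submit(task).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // The "CanvasContext" is created on the render thread while the
        // caller (the UI thread) waits, as in RenderProxy's constructor.
        String context = runSync(() -> "CanvasContext@" + Thread.currentThread().getName());
        System.out.println(context); // prints "CanvasContext@RenderThread"
    }
}
```

This is why RenderProxy's constructor can safely capture the returned CanvasContext: runSync blocks the UI thread until the render thread has produced it.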

The creation of the CanvasContext is the key step of the whole render-proxy setup, so let's look at it next.

// frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
CanvasContext* CanvasContext::create(RenderThread& thread, bool translucent, RenderNode* rootRenderNode, IContextFactory* contextFactory, pid_t uiThreadId, pid_t renderThreadId) {
    auto renderType = Properties::getRenderPipelineType();

    switch (renderType) {
        case RenderPipelineType::SkiaGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                     std::make_unique<skiapipeline::SkiaOpenGLPipeline>(thread),
                                     uiThreadId, renderThreadId);
        case RenderPipelineType::SkiaVulkan:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                     std::make_unique<skiapipeline::SkiaVulkanPipeline>(thread),
                                     uiThreadId, renderThreadId);
#ifndef __ANDROID__
        case RenderPipelineType::SkiaCpu:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                     std::make_unique<skiapipeline::SkiaCpuPipeline>(thread),
                                     uiThreadId, renderThreadId);
#endif
        default:
            LOG_ALWAYS_FATAL("canvas context type %d not supported", (int32_t)renderType);
            break;
    }
    return nullptr;
}

As we can see, the concrete CanvasContext is created according to the render pipeline type configured for the system; each type maps to a different pipeline implementation, covering the OpenGL and Vulkan hardware backends as well as a Skia CPU (software) pipeline.

2. Binding the Surface

So far we have seen that ViewRootImpl#setView creates the ThreadedRenderer and finishes the rendering-related preparation, but it never touches the Surface used for drawing. From the earlier analysis of the software drawing flow we know the Surface is bound once it becomes valid, so let's look at when the Surface gets attached in the hardware rendering flow.

public final class ViewRootImpl implements ViewParent, View.AttachInfo.Callbacks, ThreadedRenderer.DrawCallbacks, AttachedSurfaceControl {
    private void performTraversals() {
    	// ...
        if (mFirst || windowShouldResize || viewVisibilityChanged || params != null || mForceNextWindowRelayout) {
			// ...
			try {
				// ...
				relayoutResult = relayoutWindow(params, viewVisibility, insetsPending);
				// ...
				if (surfaceCreated) {
					if (mAttachInfo.mThreadedRenderer != null) {
                        try {
                        	// After relayoutWindow the Surface is valid, so it can be handed to the ThreadedRenderer
                            hwInitialized = mAttachInfo.mThreadedRenderer.initialize(mSurface);
                            if (hwInitialized && (host.mPrivateFlags & View.PFLAG_REQUEST_TRANSPARENT_REGIONS) == 0) {
                                // Don't pre-allocate if transparent regions are requested as they may not be needed
                                mAttachInfo.mThreadedRenderer.allocateBuffers();
                            }
                        } catch (OutOfResourcesException e) {
                            handleOutOfResourcesException(e);
                            return;
                        }
                    }
				}
			} 
		}
	}
}

As we can see, once the Surface is created, ThreadedRenderer#initialize is called; presumably this is where the Surface gets associated with the hardware renderer.

public final class ThreadedRenderer extends HardwareRenderer {
    /**
     * Initializes the threaded renderer for the specified surface.
     * @param surface The surface to render.
     * @return True if the initialization was successful, false otherwise.
     */
    boolean initialize(Surface surface) throws OutOfResourcesException {
        boolean status = !mInitialized;
        mInitialized = true;
        updateEnabledState(surface);
        setSurface(surface);
        return status;
    }

    @Override
    public void setSurface(Surface surface) {
        if (surface != null && surface.isValid()) {
            super.setSurface(surface);
        } else {
            super.setSurface(null);
        }
    }
}

public class HardwareRenderer {
    /**
     * @param discardBuffer determines whether the surface will attempt to preserve its contents
     *                      between frames.  If set to true the renderer will attempt to preserve
     *                      the contents of the buffer between frames if the implementation allows
     *                      it.  If set to false no attempt will be made to preserve the buffer's
     *                      contents between frames.
     */
    public void setSurface(@Nullable Surface surface, boolean discardBuffer) {
        if (surface != null && !surface.isValid()) {
            throw new IllegalArgumentException("Surface is invalid. surface.isValid() == false.");
        }
        // discardBuffer is false here
        nSetSurface(mNativeProxy, surface, discardBuffer);
    }

    private static native void nSetSurface(long nativeProxy, Surface window, boolean discardBuffer);

}

This again ends up in the native layer; let's follow nSetSurface.

// frameworks/base/libs/hwui/jni/android_graphics_HardwareRenderer.cpp
static void android_view_ThreadedRenderer_setSurface(JNIEnv* env, jobject clazz, jlong proxyPtr, jobject jsurface, jboolean discardBuffer) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    ANativeWindow* window = nullptr;
    if (jsurface) {
        window = fromSurface(env, jsurface);
    }
    bool enableTimeout = true;
    // ...
    // Hand the Surface to the proxy
    proxy->setSurface(window, enableTimeout);
    if (window) {
        ANativeWindow_release(window);
    }
}

// frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
void RenderProxy::setSurface(ANativeWindow* window, bool enableTimeout) {
    if (window) { ANativeWindow_acquire(window); }
    mRenderThread.queue().post([this, win = window, enableTimeout]() mutable {
        mContext->setSurface(win, enableTimeout);
        if (win) { ANativeWindow_release(win); }
    });
}

// frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::setSurface(ANativeWindow* window, bool enableTimeout) {
    ATRACE_CALL();

    startHintSession();
    if (window) {
        mNativeSurface = std::make_unique<ReliableSurface>(window);
        mNativeSurface->init();
        if (enableTimeout) {
            // TODO: Fix error handling & re-shorten timeout
            ANativeWindow_setDequeueTimeout(window, 4000_ms);
        }
    } else {
        mNativeSurface = nullptr;
    }
    setupPipelineSurface();
}

void CanvasContext::setupPipelineSurface() {
 	// Hand the surface to the render pipeline
    bool hasSurface = mRenderPipeline->setSurface(mNativeSurface ? mNativeSurface->getNativeWindow() : nullptr, mSwapBehavior);

    if (mNativeSurface && !mNativeSurface->didSetExtraBuffers()) {
        setBufferCount(mNativeSurface->getNativeWindow());
    }

    mFrameNumber = 0;

    if (mNativeSurface != nullptr && hasSurface) {
        mHaveNewSurface = true;
        mSwapHistory.clear();
        // Enable frame stats after the surface has been bound to the appropriate graphics API.
        // Order is important when new and old surfaces are the same, because old surface has
        // its frame stats disabled automatically.
        native_window_enable_frame_timestamps(mNativeSurface->getNativeWindow(), true);
        native_window_set_scaling_mode(mNativeSurface->getNativeWindow(), NATIVE_WINDOW_SCALING_MODE_FREEZE);
    } else {
        mRenderThread.removeFrameCallback(this);
        mGenerationID++;
    }
}

This confirms that CanvasContext::setupPipelineSurface hands the Surface to the render pipeline; from then on the pipeline can place its rendered output into the Surface, which carries the data through the rest of the display path.

3. Dispatching the View Draw

With all the preparation done, we can enter the actual View drawing flow. Hardware drawing is driven by ThreadedRenderer#draw; let's look at its implementation.

public final class ViewRootImpl implements ViewParent, View.AttachInfo.Callbacks, ThreadedRenderer.DrawCallbacks, AttachedSurfaceControl {
	// ...
	private boolean performDraw() {
		final boolean fullRedrawNeeded = mFullRedrawNeeded || mSyncBufferCallback != null;
        // ...
        boolean usingAsyncReport = isHardwareEnabled() && mSyncBufferCallback != null;
        // ...
        try {
            boolean canUseAsync = draw(fullRedrawNeeded, usingAsyncReport && mSyncBuffer);
            // ...
        } finally {
            // ...
        }
        // ...
    }

    private boolean draw(boolean fullRedrawNeeded, boolean forceDraw) {
        Surface surface = mSurface;
        // Return immediately if the surface is not valid; after relayoutWindow it has been updated and is valid
        if (!surface.isValid()) {
            return false;
        }

		// ...
        final Rect dirty = mDirty;
        if (fullRedrawNeeded) {
            dirty.set(0, 0, (int) (mWidth * appScale + 0.5f), (int) (mHeight * appScale + 0.5f));
        }

		// ...
        if (!dirty.isEmpty() || mIsAnimating || accessibilityFocusDirty) {
            if (isHardwareEnabled()) {
            	// ...
            	// With hardware drawing enabled, draw via the ThreadedRenderer
            	mAttachInfo.mThreadedRenderer.draw(mView, mAttachInfo, this);
            } else {
            	// Otherwise fall back to software drawing
                // ...
            }
        }
		// If an animation is running, schedule the next VSYNC to run another traversal
        if (animating) {
            mFullRedrawNeeded = true;
            scheduleTraversals();
        }
        return useAsyncReport;
    }
}

public final class ThreadedRenderer extends HardwareRenderer {
	// ...
    void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks) {
        attachInfo.mViewRootImpl.mViewFrameInfo.markDrawStart();
		// 1. Update the root node's DisplayList
        updateRootDisplayList(view, callbacks);

        // Register animating render nodes that started before the ThreadedRenderer was created; such animations typically start before the first draw.
        if (attachInfo.mPendingAnimatingRenderNodes != null) {
            final int count = attachInfo.mPendingAnimatingRenderNodes.size();
            for (int i = 0; i < count; i++) {
                registerAnimatingRenderNode(attachInfo.mPendingAnimatingRenderNodes.get(i));
            }
            attachInfo.mPendingAnimatingRenderNodes.clear();
            attachInfo.mPendingAnimatingRenderNodes = null;
        }

        final FrameInfo frameInfo = attachInfo.mViewRootImpl.getUpdatedFrameInfo();
		// 2. Sync the RenderNode tree's DisplayList data to the render thread and request the next frame
        int syncResult = syncAndDrawFrame(frameInfo);
        if ((syncResult & SYNC_LOST_SURFACE_REWARD_IF_FOUND) != 0) {
            // The surface was lost, so request another layout; WindowManager will provide a new surface on the next layout.
            attachInfo.mViewRootImpl.mForceNextWindowRelayout = true;
            attachInfo.mViewRootImpl.requestLayout();
        }
        if ((syncResult & SYNC_REDRAW_REQUESTED) != 0) {
            attachInfo.mViewRootImpl.invalidate();
        }
    }
	
    // Sync the RenderNode data to the render thread and request the next frame
    @SyncAndDrawResult
    public int syncAndDrawFrame(@NonNull FrameInfo frameInfo) {
        return nSyncAndDrawFrame(mNativeProxy, frameInfo.frameInfo, frameInfo.frameInfo.length);
    }
}

From the source, ThreadedRenderer#draw does two main things:

  • update the root node's DisplayList;
  • sync the DisplayList held by the RenderNode tree to the render thread and request the next frame.
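These two steps can be modeled as a record-then-sync handshake (an illustrative sketch with hypothetical names; the real syncAndDrawFrame crosses into native code): step 1 records the display list on the calling ("UI") thread, step 2 syncs it over to the render thread and blocks only for the sync portion.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the two-step frame: record on the UI thread, then sync to the
// render thread and request a draw. Names and values are hypothetical.
final class FrameDriver {
    static final int SYNC_OK = 0; // stand-in for HardwareRenderer.SYNC_OK

    private final ExecutorService renderThread =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "RenderThread");
                t.setDaemon(true);
                return t;
            });
    private volatile String syncedDisplayList;

    // Step 2: synchronize the recorded data over to the render thread.
    int syncAndDrawFrame(String recorded) {
        try {
            return renderThread.submit(() -> {
                syncedDisplayList = recorded; // the "sync" stage
                // ...the actual drawing would continue here, off the UI thread
                return SYNC_OK;
            }).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    String drawFrame() {
        String recorded = "rootDisplayList";     // step 1: record on the UI thread
        int result = syncAndDrawFrame(recorded); // step 2: sync + request draw
        return syncedDisplayList + ":" + result;
    }
}
```

The UI thread only waits for the sync stage; the expensive GPU work continues on the render thread after the sync result comes back, which is precisely how the load leaves the main thread.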

3.1 Updating the Root Node's DisplayList

As the source shows, ThreadedRenderer#updateRootDisplayList does not just update the root node's DisplayList: it first walks the View tree updating each View's DisplayList, and only then updates the root node's DisplayList.

public final class ThreadedRenderer extends HardwareRenderer {
	// ...
    private void updateRootDisplayList(View view, DrawCallbacks callbacks) {
        // 1. Starting from the DecorView, walk the View tree and update each View's DisplayList
        updateViewTreeDisplayList(view);
		// ...
		// 2. If the root render node needs an update or has no DisplayList, handle it here
		// If it still needs an update after step 1, step 1 did not touch the root render node
        if (mRootNodeNeedsUpdate || !mRootNode.hasDisplayList()) {
            RecordingCanvas canvas = mRootNode.beginRecording(mSurfaceWidth, mSurfaceHeight);
            try {
                final int saveCount = canvas.save();
                canvas.translate(mInsetLeft, mInsetTop);
                callbacks.onPreDraw(canvas);

                canvas.enableZ();
                // 3. Record the root View's DisplayList into the canvas
                canvas.drawRenderNode(view.updateDisplayListIfDirty());
                canvas.disableZ();

                callbacks.onPostDraw(canvas);
                canvas.restoreToCount(saveCount);
                mRootNodeNeedsUpdate = false;
            } finally {
                mRootNode.endRecording();
            }
        }
        Trace.traceEnd(Trace.TRACE_TAG_VIEW);
    }
	
	// Update the DisplayLists across the View tree
    private void updateViewTreeDisplayList(View view) {
        view.mPrivateFlags |= View.PFLAG_DRAWN;
        // Update mRecreateDisplayList: if the view has had invalidate() called, mark it as needing a rebuilt DisplayList
        view.mRecreateDisplayList = (view.mPrivateFlags & View.PFLAG_INVALIDATED) == View.PFLAG_INVALIDATED;
        view.mPrivateFlags &= ~View.PFLAG_INVALIDATED;
        // Call View#updateDisplayListIfDirty, which dispatches the update down the tree
        view.updateDisplayListIfDirty();
        view.mRecreateDisplayList = false;
    }
    // ...
}

Every View creates its own RenderNode when the View instance is constructed. After ThreadedRenderer#updateViewTreeDisplayList calls DecorView#updateDisplayListIfDirty, the method first checks whether the View's own DisplayList needs rebuilding: if not, it simply calls dispatchGetDisplayList to forward the update to all child Views; otherwise it performs the necessary drawing for the View itself and dispatches the draw to all child Views.
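Before diving into the source, the branching just described can be sketched as a recursive tree walk (a heavily simplified model with hypothetical names): a node with a valid recording skips re-recording itself and only forwards the update to its children, while an invalidated node re-records.

```java
import java.util.ArrayList;
import java.util.List;

// Heavily simplified sketch of the updateDisplayListIfDirty dispatch.
// A node with a valid recording only forwards the update to its children
// (the dispatchGetDisplayList branch); an invalidated node re-records and,
// in the real code, draw()/dispatchDraw() pulls the children in.
class ViewNode {
    final String name;
    final List<ViewNode> children = new ArrayList<>();
    boolean invalidated;         // analogue of PFLAG_INVALIDATED
    boolean hasDisplayList;
    final List<String> recorded; // trace of which nodes re-recorded

    ViewNode(String name, List<String> trace) {
        this.name = name;
        this.recorded = trace;
    }

    void updateDisplayListIfDirty() {
        if (hasDisplayList && !invalidated) {
            // No work for this node; just dispatch to the children.
            for (ViewNode c : children) c.updateDisplayListIfDirty();
            return;
        }
        // Re-record this node (beginRecording .. endRecording analogue).
        recorded.add(name);
        hasDisplayList = true;
        invalidated = false;
        for (ViewNode c : children) c.updateDisplayListIfDirty();
    }
}
```

The payoff of this scheme is that an invalidate() on one child only re-records that child's subtree; the rest of the tree reuses its existing display lists.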

public class View implements Drawable.Callback, KeyEvent.Callback, AccessibilityEventSource {

    // Get this view's RenderNode instance and update its DisplayList if needed
    public RenderNode updateDisplayListIfDirty() {
        final RenderNode renderNode = mRenderNode;
        // Only a view that has been attached and has hardware acceleration enabled can have a DisplayList
        if (!canHaveDisplayList()) {
            return renderNode;
        }
		// 1. Further handling is needed if the drawing cache is invalid, there is no DisplayList, or mRecreateDisplayList is set
        if ((mPrivateFlags & PFLAG_DRAWING_CACHE_VALID) == 0 || !renderNode.hasDisplayList() || (mRecreateDisplayList)) {
            // 1.1 This View's DisplayList does not need rebuilding; just tell the children to restore or recreate theirs.
            // This case corresponds to (mPrivateFlags & PFLAG_DRAWING_CACHE_VALID) == 0; with hardware acceleration on, hasDisplayList and mRecreateDisplayList decide whether a rebuild is needed
            if (renderNode.hasDisplayList() && !mRecreateDisplayList) {
                mPrivateFlags |= PFLAG_DRAWN | PFLAG_DRAWING_CACHE_VALID;
                mPrivateFlags &= ~PFLAG_DIRTY_MASK;
                // Tell the children to rebuild their DisplayLists
                dispatchGetDisplayList();
                return renderNode; // no work needed
            }

            // 1.2 This View's DisplayList must be rebuilt. Set mRecreateDisplayList to true so that drawChild can copy the children's DisplayLists into this View's DisplayList.
            mRecreateDisplayList = true;

            int width = mRight - mLeft;
            int height = mBottom - mTop;
            int layerType = getLayerType();
            renderNode.clearStretch();
			// 1.2.1 Start recording this View's drawing commands
            final RecordingCanvas canvas = renderNode.beginRecording(width, height);

            try {
                if (layerType == LAYER_TYPE_SOFTWARE) { // software drawing
                    // ...
                } else { // hardware drawing
                    // ...
                    // If this View is a layout container with no background, skip drawing itself.
                    if ((mPrivateFlags & PFLAG_SKIP_DRAW) == PFLAG_SKIP_DRAW) {
						// Dispatch the draw to the children
                        dispatchDraw(canvas);
                        // ...
                    } else {
                    	// Draw this View itself into the canvas
                        draw(canvas);
                    }
                }
            } finally {
            	// Stop recording this View's drawing commands
                renderNode.endRecording();
                setDisplayListProperties(renderNode);
            }
        } else {
            mPrivateFlags |= PFLAG_DRAWN | PFLAG_DRAWING_CACHE_VALID;
            mPrivateFlags &= ~PFLAG_DIRTY_MASK;
        }
        return renderNode;
    }

    /**
     * The View class leaves this empty because only ViewGroup subclasses need to dispatch to children.
     * ViewGroup iterates over its children and calls their updateDisplayListIfDirty methods, starting a new round of dispatching.
     * @hide
     */
    protected void dispatchGetDisplayList() {}

    /**
     * The View class leaves this empty because only ViewGroup subclasses need to dispatch to children.
     * ViewGroup iterates over its children and calls their draw methods, starting a new round of dispatching.
     */
    protected void dispatchDraw(Canvas canvas) {

    }
	
	// ...
}
// android.graphics.RenderNode
public final class RenderNode {
    /**
     * Ends the recording for this display list. Calling this method marks
     * the display list valid and {@link #hasDisplayList()} will return true.
     *
     * @see #beginRecording(int, int)
     * @see #hasDisplayList()
     */
    public void endRecording() {
        if (mCurrentRecordingCanvas == null) {
            throw new IllegalStateException("No recording in progress, forgot to call #beginRecording()?");
        }
        RecordingCanvas canvas = mCurrentRecordingCanvas;
        mCurrentRecordingCanvas = null;
        canvas.finishRecording(this);
        canvas.recycle();
    }
}

// android.graphics.RecordingCanvas
public final class RecordingCanvas extends BaseRecordingCanvas {
    void finishRecording(RenderNode node) {
        nFinishRecording(mNativeCanvasWrapper, node.mNativeRenderNode);
    }
    
    @CriticalNative
    private static native void nFinishRecording(long renderer, long renderNode);
}    
// frameworks/base/libs/hwui/jni/android_graphics_DisplayListCanvas.cpp
static void android_view_DisplayListCanvas_finishRecording(CRITICAL_JNI_PARAMS_COMMA jlong canvasPtr, jlong renderNodePtr) {
    Canvas* canvas = reinterpret_cast<Canvas*>(canvasPtr);
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    canvas->finishRecording(renderNode);
}

// frameworks/base/libs/hwui/pipeline/skia/SkiaRecordingCanvas.cpp
void SkiaRecordingCanvas::finishRecording(uirenderer::RenderNode* destination) {
    destination->setStagingDisplayList(uirenderer::DisplayList(finishRecording()));
}

// frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::setStagingDisplayList(DisplayList&& newData) {
    mValid = newData.isValid();
    mNeedsDisplayListSync = true;
    mStagingDisplayList = std::move(newData);
}

So updateDisplayListIfDirty mainly handles dispatching the drawing work, plus the DecorView's own drawing. When drawing finishes, RecordingCanvas#finishRecording is called, which crosses JNI into the native SkiaRecordingCanvas::finishRecording and stores the recorded DisplayList into the RenderNode's mStagingDisplayList field. (Software drawing uses SkiaCanvas; hardware drawing uses RecordingCanvas.)

A View draws itself through its draw method, which is the same method software drawing uses for self-drawing, so we won't repeat that analysis here.

public class View implements Drawable.Callback, KeyEvent.Callback, AccessibilityEventSource {
    /**
     * Manually render this view (and all of its children) to the given Canvas.
     * The view must have already done a full layout before this function is
     * called.  When implementing a view, implement
     * {@link #onDraw(android.graphics.Canvas)} instead of overriding this method.
     * If you do need to override this method, call the superclass version.
     *
     * @param canvas The Canvas to which the View is rendered.
     */
    @CallSuper
    public void draw(Canvas canvas) {
        final int privateFlags = mPrivateFlags;
        mPrivateFlags = (privateFlags & ~PFLAG_DIRTY_MASK) | PFLAG_DRAWN;

        /*
         * Draw traversal performs several drawing steps which must be executed
         * in the appropriate order:
         *
         *      1. Draw the background
         *      2. If necessary, save the canvas' layers to prepare for fading
         *      3. Draw view's content
         *      4. Draw children
         *      5. If necessary, draw the fading edges and restore layers
         *      6. Draw decorations (scrollbars for instance)
         *      7. If necessary, draw the default focus highlight
         */

        // Step 1, draw the background, if needed
        int saveCount;
        drawBackground(canvas);

        // skip step 2 & 5 if possible (common case)
        final int viewFlags = mViewFlags;
        boolean horizontalEdges = (viewFlags & FADING_EDGE_HORIZONTAL) != 0;
        boolean verticalEdges = (viewFlags & FADING_EDGE_VERTICAL) != 0;
        if (!verticalEdges && !horizontalEdges) {
            // Step 3, draw the content
            onDraw(canvas);

            // Step 4, draw the children
            dispatchDraw(canvas);

            drawAutofilledHighlight(canvas);

            // Overlay is part of the content and draws beneath Foreground
            if (mOverlay != null && !mOverlay.isEmpty()) {
                mOverlay.getOverlayView().dispatchDraw(canvas);
            }

            // Step 6, draw decorations (foreground, scrollbars)
            onDrawForeground(canvas);

            // Step 7, draw the default focus highlight
            drawDefaultFocusHighlight(canvas);

            if (isShowingLayoutBounds()) {
                debugDrawFocus(canvas);
            }

            // we're done...
            return;
        }
		// ...
    }

}
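The step ordering documented in the comment inside draw() can be sketched as a plain-Java template method. This is a toy model, not Android code; the method names mirror the real callbacks, but the bodies just log the order so the fixed sequence is visible:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the template-method ordering View#draw enforces: background
// first, then the view's own content, then children, then decorations.
// Subclasses normally override only onDraw() and leave draw() alone.
public class DrawOrder {
    final List<String> steps = new ArrayList<>();

    public final void draw() {
        drawBackground();      // step 1: the background
        onDraw();              // step 3: the view's own content
        dispatchDraw();        // step 4: the children
        onDrawForeground();    // step 6: decorations (scrollbars, foreground)
    }

    void drawBackground()   { steps.add("background"); }
    void onDraw()           { steps.add("content"); }
    void dispatchDraw()     { steps.add("children"); }
    void onDrawForeground() { steps.add("foreground"); }
}
```

Making draw() final-like in spirit (a fixed skeleton with overridable steps) is why the javadoc above tells you to override onDraw() rather than draw() itself.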

To summarize, the overall flow is shown in the figure below:
[Figure: overall draw flow]

3.2 Syncing Data to the Render Thread

public final class ThreadedRenderer extends HardwareRenderer {
	// ...
    void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks) {
        attachInfo.mViewRootImpl.mViewFrameInfo.markDrawStart();
        // 1. Update the root node's DisplayList
        // ...

        final FrameInfo frameInfo = attachInfo.mViewRootImpl.getUpdatedFrameInfo();
        // 2. Sync the RenderNode tree to the render thread and request the next frame
        int syncResult = syncAndDrawFrame(frameInfo);
        if ((syncResult & SYNC_LOST_SURFACE_REWARD_IF_FOUND) != 0) {
            // The surface was lost, so request another layout; the WindowManager will provide a new surface on the next layout pass.
            attachInfo.mViewRootImpl.mForceNextWindowRelayout = true;
            attachInfo.mViewRootImpl.requestLayout();
        }
        if ((syncResult & SYNC_REDRAW_REQUESTED) != 0) {
            attachInfo.mViewRootImpl.invalidate();
        }
    }
	
    // Sync the RenderNode tree to the render thread and request the next frame to be drawn
    @SyncAndDrawResult
    public int syncAndDrawFrame(@NonNull FrameInfo frameInfo) {
        return nSyncAndDrawFrame(mNativeProxy, frameInfo.frameInfo, frameInfo.frameInfo.length);
    }
}

// android.view.ViewRootImpl
public final class ViewRootImpl implements ViewParent, View.AttachInfo.Callbacks, ThreadedRenderer.DrawCallbacks, AttachedSurfaceControl {
    /**
     * Update the Choreographer's FrameInfo object with the timing information for the current
     * ViewRootImpl instance. Erase the data in the current ViewFrameInfo to prepare for the next
     * frame.
     * @return the updated FrameInfo object
     */
    protected @NonNull FrameInfo getUpdatedFrameInfo() {
        // Since Choreographer is a thread-local singleton while we can have multiple
        // ViewRootImpl's, populate the frame information from the current viewRootImpl before
        // starting the draw
        FrameInfo frameInfo = mChoreographer.mFrameInfo;
        mViewFrameInfo.populateFrameInfo(frameInfo);
        mViewFrameInfo.reset();
        mInputEventAssigner.notifyFrameProcessed();
        return frameInfo;
    }
	// ...
}

Once the DisplayList data has been collected, getUpdatedFrameInfo first copies the current ViewRootImpl's frame timing data into the frameInfo object held by the Choreographer; nSyncAndDrawFrame then crosses into the native layer, where the render proxy carries out the actual data synchronization.

// frameworks/base/libs/hwui/jni/android_graphics_HardwareRenderer.cpp
static int android_view_ThreadedRenderer_syncAndDrawFrame(JNIEnv* env, jobject clazz, jlong proxyPtr, jlongArray frameInfo, jint frameInfoSize) {
    // Recover the RenderProxy object from the native pointer
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    env->GetLongArrayRegion(frameInfo, 0, frameInfoSize, proxy->frameInfo());
    // Sync and draw the frame data via the RenderProxy
    return proxy->syncAndDrawFrame();
}

// frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
int RenderProxy::syncAndDrawFrame() {
    return mDrawFrameTask.drawFrame();
}

// frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
int DrawFrameTask::drawFrame() {
    mSyncResult = SyncResult::OK;
    mSyncQueued = systemTime(SYSTEM_TIME_MONOTONIC);
    postAndWait();
    return mSyncResult;
}

void DrawFrameTask::postAndWait() {
    ATRACE_CALL();
    AutoMutex _lock(mLock);
    mRenderThread->queue().post([this]() { run(); });
    // Block until the data sync phase completes
    mSignal.wait(mLock);
}

void DrawFrameTask::run() {
    const int64_t vsyncId = mFrameInfo[static_cast<int>(FrameInfoIndex::FrameTimelineVsyncId)];
    mContext->setSyncDelayDuration(systemTime(SYSTEM_TIME_MONOTONIC) - mSyncQueued);
    mContext->setTargetSdrHdrRatio(mRenderSdrHdrRatio);

    auto hardwareBufferParams = mHardwareBufferParams;
    mContext->setHardwareBufferRenderParams(hardwareBufferParams);
    IRenderPipeline* pipeline = mContext->getRenderPipeline();
    bool canUnblockUiThread;
    bool canDrawThisFrame;
    bool solelyTextureViewUpdates;
    {
        TreeInfo info(TreeInfo::MODE_FULL, *mContext);
        info.forceDrawFrame = mForceDrawFrame;
        mForceDrawFrame = false;
        // Sync the frame state
        canUnblockUiThread = syncFrameState(info);
        canDrawThisFrame = !info.out.skippedFrameReason.has_value();
        solelyTextureViewUpdates = info.out.solelyTextureViewUpdates;

        if (mFrameCommitCallback) {
            mContext->addFrameCommitListener(std::move(mFrameCommitCallback));
            mFrameCommitCallback = nullptr;
        }
    }

    // Grab a copy of everything we need
    CanvasContext* context = mContext;
    std::function<std::function<void(bool)>(int32_t, int64_t)> frameCallback = std::move(mFrameCallback);
    std::function<void()> frameCompleteCallback = std::move(mFrameCompleteCallback);
    mFrameCallback = nullptr;
    mFrameCompleteCallback = nullptr;

    // From this point on anything in "this" is *UNSAFE TO ACCESS*
    if (canUnblockUiThread) {
        unblockUiThread();
    }

    // Even if we aren't drawing this vsync pulse the next frame number will still be accurate
    // ...

    if (CC_LIKELY(canDrawThisFrame)) {
        context->draw(solelyTextureViewUpdates);
    } else {
#ifdef __ANDROID__
        // Do a flush in case syncFrameState performed any texture uploads. Since we skipped
        // the draw() call, those uploads (or deletes) will end up sitting in the queue.
        // Do them now
        if (GrDirectContext* grContext = mRenderThread->getGrContext()) {
            grContext->flushAndSubmit();
        }
#endif
        // wait on fences so tasks don't overlap next frame
        context->waitOnFences();
    }
	// ...
    if (!canUnblockUiThread) {
        unblockUiThread();
    }

    if (pipeline->hasHardwareBuffer()) {
        auto fence = pipeline->flush();
        hardwareBufferParams.invokeRenderCallback(std::move(fence), 0);
    }
}

void DrawFrameTask::unblockUiThread() {
    AutoMutex _lock(mLock);
    mSignal.signal();
}
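The postAndWait()/unblockUiThread() handshake above can be modeled in a few lines of plain Java. This is a simplified sketch, not the real hwui code; all names are illustrative. The point it shows: the UI thread posts the frame task to a single render thread and blocks only until the sync phase signals back, while drawing continues asynchronously off the UI thread.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified model of DrawFrameTask: the UI thread posts a frame task to the
// render thread and waits only for the sync phase, mirroring postAndWait()
// and unblockUiThread().
public class DrawFrameTaskModel {
    public static final int SYNC_OK = 0;

    private final ExecutorService renderThread = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "RenderThread");
        t.setDaemon(true);
        return t;
    });
    private volatile int syncResult;

    // Mirrors drawFrame()/postAndWait(): post run() to the render thread,
    // then block the UI thread until the sync phase signals completion.
    public int drawFrame() {
        CountDownLatch uiUnblocked = new CountDownLatch(1);
        renderThread.execute(() -> {
            syncResult = SYNC_OK;     // syncFrameState(): copy the staging data
            uiUnblocked.countDown();  // unblockUiThread(): release the UI thread
            // context->draw() would continue here, still off the UI thread
        });
        try {
            uiUnblocked.await();      // the UI thread waits only for the sync phase
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return syncResult;
    }
}
```

The shorter the sync phase, the sooner the UI thread is free to handle input and the next traversal, which is exactly the load reduction the article attributes to ThreadedRenderer.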

bool DrawFrameTask::syncFrameState(TreeInfo& info) {
    ATRACE_CALL();
    int64_t vsync = mFrameInfo[static_cast<int>(FrameInfoIndex::Vsync)];
    int64_t intendedVsync = mFrameInfo[static_cast<int>(FrameInfoIndex::IntendedVsync)];
    int64_t vsyncId = mFrameInfo[static_cast<int>(FrameInfoIndex::FrameTimelineVsyncId)];
    int64_t frameDeadline = mFrameInfo[static_cast<int>(FrameInfoIndex::FrameDeadline)];
    int64_t frameInterval = mFrameInfo[static_cast<int>(FrameInfoIndex::FrameInterval)];
    mRenderThread->timeLord().vsyncReceived(vsync, intendedVsync, vsyncId, frameDeadline, frameInterval);
    bool canDraw = mContext->makeCurrent();
    mContext->unpinImages();

#ifdef __ANDROID__
    for (size_t i = 0; i < mLayers.size(); i++) {
        if (mLayers[i]) {
            mLayers[i]->apply();
        }
    }
#endif

    mLayers.clear();
    mContext->setContentDrawBounds(mContentDrawBounds);
    // Prepare the render node tree
    mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);

    // This is after the prepareTree so that any pending operations
    // (RenderNode tree state, prefetched layers, etc...) will be flushed.
    bool hasTarget = mContext->hasOutputTarget();
    if (CC_UNLIKELY(!hasTarget || !canDraw)) {
        if (!hasTarget) {
            mSyncResult |= SyncResult::LostSurfaceRewardIfFound;
            info.out.skippedFrameReason = SkippedFrameReason::NoOutputTarget;
        } else {
            // If we have a surface but can't draw we must be stopped
            mSyncResult |= SyncResult::ContextIsStopped;
            info.out.skippedFrameReason = SkippedFrameReason::ContextIsStopped;
        }
    }

    if (info.out.hasAnimations) {
        if (info.out.requiresUiRedraw) {
            mSyncResult |= SyncResult::UIRedrawRequired;
        }
    }
    if (info.out.skippedFrameReason) {
        mSyncResult |= SyncResult::FrameDropped;
    }
    // If prepareTextures is false, we ran out of texture cache space
    return info.prepareTextures;
}
// frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued,
                                RenderNode* target) {
    mRenderThread.removeFrameCallback(this);

    // If the previous frame was dropped we don't need to hold onto it, so
    // just keep using the previous frame's structure instead
    // ...

    mCurrentFrameInfo->importUiThreadInfo(uiFrameInfo);
    mCurrentFrameInfo->set(FrameInfoIndex::SyncQueued) = syncQueued;
    mCurrentFrameInfo->markSyncStart();

    info.damageAccumulator = &mDamageAccumulator;
    info.layerUpdateQueue = &mLayerUpdateQueue;
    info.damageGenerationId = mDamageId++;
    info.out.skippedFrameReason = std::nullopt;

    mAnimationContext->startFrame(info.mode);
    // Walk the render nodes and sync each one's data into the TreeInfo. This eventually reaches RenderNode::syncDisplayList, which assigns the mStagingDisplayList produced during drawing to mDisplayList, completing the data sync.
    for (const sp<RenderNode>& node : mRenderNodes) {
        info.mode = (node.get() == target ? TreeInfo::MODE_FULL : TreeInfo::MODE_RT_ONLY);
        node->prepareTree(info);
        GL_CHECKPOINT(MODERATE);
    }
    // ...
    mIsDirty = true;

    if (CC_UNLIKELY(!hasOutputTarget())) {
        info.out.skippedFrameReason = SkippedFrameReason::NoOutputTarget;
        mCurrentFrameInfo->setSkippedFrameReason(*info.out.skippedFrameReason);
        return;
    }
	// ...

    bool postedFrameCallback = false;
    if (info.out.hasAnimations || info.out.skippedFrameReason) {
        if (CC_UNLIKELY(!Properties::enableRTAnimations)) {
            info.out.requiresUiRedraw = true;
        }
        if (!info.out.requiresUiRedraw) {
            // If animationsNeedsRedraw is set don't bother posting for an RT anim
            // as we will just end up fighting the UI thread.
            // Post a frame callback for the next frame
            mRenderThread.postFrameCallback(this);
            postedFrameCallback = true;
        }
    }
	// ...
    if (!postedFrameCallback && info.out.animatedImageDelay != TreeInfo::Out::kNoAnimatedImageDelay) {
        // Subtract the time of one frame so it can be displayed on time.
        const nsecs_t kFrameTime = mRenderThread.timeLord().frameIntervalNanos();
        if (info.out.animatedImageDelay <= kFrameTime) {
            mRenderThread.postFrameCallback(this);
        } else {
            const auto delay = info.out.animatedImageDelay - kFrameTime;
            int genId = mGenerationID;
            mRenderThread.queue().postDelayed(delay, [this, genId]() {
                if (mGenerationID == genId) {
                    mRenderThread.postFrameCallback(this);
                }
            });
        }
    }
}

// Called by choreographer to do an RT-driven animation
void CanvasContext::doFrame() {
    if (!mRenderPipeline->isSurfaceReady()) return;
    mIdleDuration = systemTime(SYSTEM_TIME_MONOTONIC) - mRenderThread.timeLord().computeFrameTimeNanos();
    prepareAndDraw(nullptr);
}

void CanvasContext::prepareAndDraw(RenderNode* node) {
    int64_t vsyncId = mRenderThread.timeLord().lastVsyncId();
    nsecs_t vsync = mRenderThread.timeLord().computeFrameTimeNanos();
    int64_t frameDeadline = mRenderThread.timeLord().lastFrameDeadline();
    int64_t frameInterval = mRenderThread.timeLord().frameIntervalNanos();
    int64_t frameInfo[UI_THREAD_FRAME_INFO_SIZE];
    UiFrameInfoBuilder(frameInfo).addFlag(FrameInfoFlags::RTAnimation).setVsync(vsync, vsync, vsyncId, frameDeadline, frameInterval);

    TreeInfo info(TreeInfo::MODE_RT_ONLY, *this);
    prepareTree(info, frameInfo, systemTime(SYSTEM_TIME_MONOTONIC), node);
    if (!info.out.skippedFrameReason) {
        draw(info.out.solelyTextureViewUpdates);
    } else {
        // wait on fences so tasks don't overlap next frame
        waitOnFences();
    }
}

void CanvasContext::draw(bool solelyTextureViewUpdates) {
	// ...
    IRenderPipeline::DrawResult drawResult;
    {
        // The render pipeline processes the draw commands
        drawResult = mRenderPipeline->draw(frame, windowDirty, dirty, mLightGeometry, &mLayerUpdateQueue, mContentDrawBounds, mOpaque, mLightInfo, mRenderNodes, &(profiler()), mBufferParams, profilerLock());
    }
	// ...
	bool requireSwap = false;
    bool didDraw = false;

    int error = OK;
    // The render pipeline swaps the processed data into the buffer
    bool didSwap = mRenderPipeline->swapBuffers(frame, drawResult, windowDirty, mCurrentFrameInfo, &requireSwap);
    // ...
}

As RenderNode::syncDisplayList shows, syncing the data simply means assigning the mStagingDisplayList produced in the earlier drawing phase to mDisplayList; rendering then proceeds from mDisplayList.

// frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::syncDisplayList(TreeObserver& observer, TreeInfo* info) {
    // Make sure we inc first so that we don't fluctuate between 0 and 1,
    // which would thrash the layer cache
    if (mStagingDisplayList) {
        mStagingDisplayList.updateChildren([](RenderNode* child) { child->incParentRefCount(); });
    }
    deleteDisplayList(observer, info);
    mDisplayList = std::move(mStagingDisplayList);
    if (mDisplayList) {
        WebViewSyncData syncData{.applyForceDark = shouldEnableForceDark(info)};
        mDisplayList.syncContents(syncData);
        handleForceDark(info);
    }
}

As the source analysis above shows, once drawing finishes the draw commands have been recorded into mStagingDisplayList. During the sync step, the main thread blocks until the transfer from mStagingDisplayList to mDisplayList completes, after which a render task is submitted to the RenderThread; the render pipeline then processes the draw commands and notifies the SurfaceFlinger process to composite the resulting data.
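The staging/active double buffer behind this can be modeled in a few lines of plain Java. This is a sketch, not the real RenderNode; the names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the double-buffered display list: the UI thread records
// draw ops into a staging list, and syncDisplayList() moves it into the active
// list that the render thread replays.
public class RenderNodeModel {
    private List<String> stagingDisplayList = new ArrayList<>(); // written by the UI thread
    private List<String> displayList = new ArrayList<>();        // read by the render thread

    // Recording phase on the UI thread (RecordingCanvas in the real code).
    public void record(String drawOp) {
        stagingDisplayList.add(drawOp);
    }

    // Mirrors RenderNode::syncDisplayList(): move staging into active while the
    // UI thread is blocked, so neither side ever sees a half-written list.
    public void syncDisplayList() {
        displayList = stagingDisplayList;
        stagingDisplayList = new ArrayList<>();
    }

    public List<String> getDisplayList() {
        return displayList;
    }
}
```

Because the swap happens while the UI thread is blocked in the sync phase, it needs no locking of its own: the UI thread only ever touches the staging list, and the render thread only ever touches the active one.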

Summary

Overall, hardware drawing differs from software drawing in the following ways:

  1. With hardware drawing, the main thread only records and syncs the draw commands, whereas software drawing executes the draw commands on the main thread and produces the final pixel data there;
  2. Hardware drawing uses RenderNode to track whether a View needs its DisplayList rebuilt, skipping unnecessary drawing work, whereas software drawing redraws every View;
  3. Hardware drawing introduces a render thread to take load off the main thread, whereas software drawing performs all draw-command processing on the main thread.
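Point 2 can be illustrated with a toy dirty-flag node (plain Java, not the real RenderNode implementation): only invalidated nodes rebuild their display lists, while a clean node's cached list is reused as-is.

```java
// Toy illustration of per-node invalidation: a node rebuilds its display list
// only when it has been marked dirty, so unchanged subtrees cost nothing on
// the next frame. Names here are illustrative.
public class DirtyNode {
    private boolean dirty = true; // a new node must build its list once
    private int rebuildCount = 0;

    public void invalidate() {
        dirty = true;
    }

    // Rebuild the display list only when the node was invalidated.
    public void updateDisplayListIfNeeded() {
        if (!dirty) {
            return; // the cached display list is reused, no redraw work
        }
        rebuildCount++; // stands in for re-recording the draw commands
        dirty = false;
    }

    public int getRebuildCount() {
        return rebuildCount;
    }
}
```

Software drawing, by contrast, behaves as if every node were dirty on every frame.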

The diagram below summarizes the main structure of the hardware drawing flow, to help build an overall picture of it.

[Figure: hardware drawing flow overview]
