Contents
1. What factors affect rendering smoothness
2. Source-code analysis of GPU hardware acceleration
3. Using hardware acceleration to make animations smoother
4. Comparing the results with Perfetto
5. References
In the previous article we covered Android's rendering mechanism as a whole; here we continue with rendering-focused performance optimization. The GPU's powerful parallel processing makes it essential for image processing and audio/video encoding and decoding. For mobile performance work it not only speeds up rendering but also offloads the CPU, reduces power consumption, and delivers a smoother, more responsive experience.
Users are very sensitive to how smoothly the UI renders: jank, dropped frames, choppy animations, or even ANRs all degrade the experience. How does GPU hardware acceleration take part in Android's rendering? How can page-switch and dialog animations run more smoothly? Let's start today's exploration.
1. What factors affect rendering smoothness
Rendering smoothness can be evaluated by how long each frame takes to render. If a frame cannot be drawn and rendered within one screen refresh interval, the frame is dropped. The main factors affecting smoothness are:
1) The system's rendering model: to guarantee thread safety and data consistency (and prevent corrupted UI), Android runs a View's measure, layout, and draw (the DisplayList-building part) on the UIThread. When the view hierarchy is deep or complex, measure and layout take too long and frames get dropped. Other frameworks approach this differently: Facebook's Litho introduces asynchronous layout, performing measure and layout ahead of time on a background thread while the CPU is idle, and keeps Android's draw flow (DisplayList construction on the UIThread, rendering on the RenderThread), but its declarative API cannot be previewed in real time. Flutter's rendering model also differs greatly from Android's: measure, layout, and draw do not have to run on the UIThread; it uses the Skia rendering engine (with OpenGL and Vulkan GPU backends) and a unified rendering pipeline, driven by the Flutter Engine on its own threads.
2) Excessive CPU load: with software rendering, drawing also runs on the CPU; deeply nested view hierarchies increase measure/layout time; frequent redraws add further cost.
3) Excessive GPU load: overdraw during GPU rendering; many expensive draw operations (shadows, gradients, etc.).
4) A blocked UIThread: the UIThread handles user input, layout, DisplayList construction, and so on. If it is blocked, the UI stutters or even ANRs, for example when it performs slow work (network requests, heavy SQLite usage, file I/O, synchronous binder calls, nested loops or heavy computation, etc.).
5) Frequent or large texture uploads: after a Bitmap is allocated in Java/native memory, GPU rendering requires a texture upload that copies the pixel data from memory into GPU memory (see the sketch after this list).
6) Memory churn: frequent allocation and collection fragments memory, lowers memory efficiency, and triggers frequent GC, causing jank.
7) Complex animations: elaborate animation effects require heavy computation and drawing; without sufficient optimization they stutter.
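One hedged mitigation for item 5 (my own sketch, not from the original text): Bitmap#prepareToDraw() asks the framework to build the bitmap's draw caches ahead of time; on Android N and above this starts an asynchronous texture upload on the RenderThread, so the first frame that draws the bitmap does not pay the upload cost. R.drawable.large_cover and imageView are placeholders.
//Pre-warm the GPU texture for a large bitmap before its first draw
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.large_cover);
bitmap.prepareToDraw(); //hint: asynchronous upload to GPU memory on the RenderThread (Android N+)
imageView.setImageBitmap(bitmap);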
Coding standards and monitoring are the main ways to keep the factors above in check (a minimal frame-time monitor is sketched below). Mechanically, reducing CPU time (especially on the UIThread) and making full use of the GPU for matrix math and rendering are equally important. Let's first analyze the GPU hardware-accelerated rendering process with the source code and Perfetto.
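A minimal frame-time monitoring sketch (my own illustration; the 16.6 ms threshold assumes a 60 Hz display and should really be derived from the current refresh rate): Choreographer.FrameCallback delivers a timestamp per frame, and the interval between consecutive callbacks shows how close each frame came to missing its deadline.
import android.util.Log;
import android.view.Choreographer;

public class FrameMonitor implements Choreographer.FrameCallback {
    //~one 60 Hz vsync period (assumption); adjust for 90/120 Hz displays
    private static final long JANK_THRESHOLD_NS = 16_600_000L;
    private long lastFrameTimeNanos = 0L;

    //call from a thread with a Looper, typically the main thread
    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (lastFrameTimeNanos != 0L) {
            long intervalNs = frameTimeNanos - lastFrameTimeNanos;
            if (intervalNs > JANK_THRESHOLD_NS) {
                Log.w("FrameMonitor", "Slow frame: " + (intervalNs / 1_000_000) + " ms");
            }
        }
        lastFrameTimeNanos = frameTimeNanos;
        //re-register to keep observing subsequent frames
        Choreographer.getInstance().postFrameCallback(this);
    }
}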
2. Source-code analysis of GPU hardware acceleration
When a Vsync signal arrives, Choreographer#onVsync runs the input, animation, and traversal callbacks in order. In the traversal callback, ViewRootImpl eventually executes performTraversals --> performDraw --> draw, building the DisplayList and then rendering it.
The rendering flow here has two parts:
1) Building (creating or updating) the DisplayList ops on the UIThread
2) Syncing the DisplayList to the RenderThread and rendering it through the GPU pipeline (OpenGL ES/Vulkan)
//ThreadedRenderer.java
void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks) {
...
//build (create or update) the DisplayList
updateRootDisplayList(view, callbacks);
//sync the DisplayList and render it
int syncResult = syncAndDrawFrame(choreographer.mFrameInfo);
...
}
The overall flow is shown in the diagram below. Because the image is large and compression makes the text hard to read, you can get the high-resolution original by following the WeChat official account "音视频开发之旅" and replying "硬件加速高清图".
We can also look at the hardware-accelerated rendering flow from the Perfetto point of view.
To better understand the relationship between View, RenderNode, DisplayList and the corresponding View tree and RenderNode tree, the diagram below is quoted as a visual aid.
RootView is the root of the view tree, and RootRenderNode is the root render node of the view hierarchy, responsible for coordinating and managing the rendering of its child nodes.
Each RenderNode is associated with a SkiaDisplayList; the SkiaDisplayList contains the DisplayListData that stores the draw ops and mChildNodes, which stores extra information about the child nodes.
(Diagram reference: https://blog.csdn.net/ukynho/article/details/130763187)
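To make the View / RenderNode / DisplayList relationship concrete, here is a small sketch using the public android.graphics.RenderNode API (assuming API 29+); it is my own illustration, not framework code from the walkthrough below. A custom view records its drawing into a RenderNode's RecordingCanvas and then replays that display list onto a hardware-accelerated Canvas.
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.RecordingCanvas;
import android.graphics.RenderNode;
import android.view.View;

public class RenderNodeExampleView extends View {
    //a RenderNode owned by this view, in addition to the one View itself holds internally
    private final RenderNode contentNode = new RenderNode("content");

    public RenderNodeExampleView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (!canvas.isHardwareAccelerated()) {
            return; //drawRenderNode requires a hardware-accelerated canvas
        }
        contentNode.setPosition(0, 0, getWidth(), getHeight());
        //beginRecording returns a RecordingCanvas; the ops recorded on it become the display list
        RecordingCanvas recordingCanvas = contentNode.beginRecording();
        try {
            recordingCanvas.drawColor(Color.BLUE);
        } finally {
            contentNode.endRecording();
        }
        //replay the recorded display list into the parent's display list
        canvas.drawRenderNode(contentNode);
    }
}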
Below we analyze the two stages, building the DisplayList and rendering it, from the source code.
2.1 Building the DisplayList
Each View holds a RenderNode, which records the View's rendering data and whether it has changed.
//View.java
final RenderNode mRenderNode;
//actually wraps a pointer to the native-side RenderNode
mRenderNode = RenderNode.create(getClass().getName(), new ViewAnimationHostBridge(this));
The Java-level RenderNode is just a wrapper around the native RenderNode:
// RenderNode.java
private RenderNode(String name, AnimationHost animationHost) {
mNativeRenderNode = nCreate(name);
...
}
private static native long nCreate(String name);
The JNI layer finally constructs the native RenderNode object. RenderNode has several important members: mProperties, mStagingProperties, mDisplayList, and mStagingDisplayList.
//android_graphics_RenderNode.cpp
static jlong android_view_RenderNode_create(JNIEnv* env, jobject, jstring name) {
RenderNode* renderNode = new RenderNode();
renderNode->incStrong(0);
if (name != NULL) {
const char* textArray = env->GetStringUTFChars(name, NULL);
renderNode->setName(textArray);
env->ReleaseStringUTFChars(name, textArray);
}
return reinterpret_cast<jlong>(renderNode);
}
//RenderNode.h
RenderProperties mProperties;
//mStagingProperties temporarily holds properties that have been modified but not yet committed
RenderProperties mStagingProperties;
//on commit, mStagingDisplayList is assigned to mDisplayList
DisplayList mDisplayList;
//mStagingDisplayList temporarily holds draw commands that have been modified but not yet committed
DisplayList mStagingDisplayList;
The individual draw commands are recorded as Op structs in RecordingCanvas:
//RecordingCanvas.cpp
...
struct Scale final : Op {
static const auto kType = Type::Scale;
Scale(SkScalar sx, SkScalar sy) : sx(sx), sy(sy) {}
SkScalar sx, sy;
void draw(SkCanvas* c, const SkMatrix&) const { c->scale(sx, sy); }
};
struct Translate final : Op {
static const auto kType = Type::Translate;
Translate(SkScalar dx, SkScalar dy) : dx(dx), dy(dy) {}
SkScalar dx, dy;
void draw(SkCanvas* c, const SkMatrix&) const { c->translate(dx, dy); }
};
struct ClipPath final : Op {
static const auto kType = Type::ClipPath;
ClipPath(const SkPath& path, SkClipOp op, bool aa) : path(path), op(op), aa(aa) {}
SkPath path;
SkClipOp op;
bool aa;
void draw(SkCanvas* c, const SkMatrix&) const { c->clipPath(path, op, aa); }
};
...
The property types a RenderNode supports include translation, rotation, scale, alpha, and so on:
//RenderNode.h
enum DirtyPropertyMask {
GENERIC = 1 << 1,
TRANSLATION_X = 1 << 2,
TRANSLATION_Y = 1 << 3,
TRANSLATION_Z = 1 << 4,
SCALE_X = 1 << 5,
SCALE_Y = 1 << 6,
ROTATION = 1 << 7,
ROTATION_X = 1 << 8,
ROTATION_Y = 1 << 9,
X = 1 << 10,
Y = 1 << 11,
Z = 1 << 12,
ALPHA = 1 << 13,
DISPLAY_LIST = 1 << 14,
};
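As a hedged aside (my own example using the public android.graphics.RenderNode API on API 29+, not the framework code above), the property mask maps to property setters that only mark the node dirty; the recorded display list is never re-recorded by these calls.
//Property-only updates on an already-recorded RenderNode (sketch; `renderNode` is an assumption)
static void applyTransform(android.graphics.RenderNode renderNode) {
    renderNode.setTranslationX(50f); //corresponds to TRANSLATION_X
    renderNode.setScaleX(1.2f);      //corresponds to SCALE_X
    renderNode.setRotationZ(45f);    //corresponds to ROTATION (around the Z axis)
    renderNode.setAlpha(0.5f);       //corresponds to ALPHA
    //on the next frame the previously recorded ops are redrawn with the new transform and alpha
}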
Having covered construction and the basic properties, let's look at how the DisplayList is built through ThreadedRenderer's updateRootDisplayList:
1) First the child views' DisplayLists are updated by traversing the tree (the View's RenderNode.beginRecording returns a RecordingCanvas, the draw operations are recorded on it, and finally endRecording stashes the modified display list into the RenderNode's StagingDisplayList, ready for the rendering step)
2) The same op recording is then done for the root node
private void updateRootDisplayList(View view, DrawCallbacks callbacks) {
updateViewTreeDisplayList(view);
if (mRootNodeNeedsUpdate || !mRootNode.hasDisplayList()) {
RecordingCanvas canvas = mRootNode.beginRecording(mSurfaceWidth, mSurfaceHeight);
try {
final int saveCount = canvas.save();
canvas.translate(mInsetLeft, mInsetTop);
callbacks.onPreDraw(canvas);
canvas.enableZ();
canvas.drawRenderNode(view.updateDisplayListIfDirty());
canvas.disableZ();
callbacks.onPostDraw(canvas);
canvas.restoreToCount(saveCount);
mRootNodeNeedsUpdate = false;
} finally {
mRootNode.endRecording();
}
}
}
The overall DisplayList-building flow is summarized in the diagram below.
2.2 Rendering the DisplayList
Back in ThreadedRenderer#draw, the second stage calls syncAndDrawFrame, which syncs the DisplayList built on the UIThread over to the RenderThread and renders it on the GPU through the OpenGL ES/Vulkan pipeline.
//ThreadedRenderer.java
void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks) {
...
//build (create or update) the DisplayList
updateRootDisplayList(view, callbacks);
//render the DisplayList
int syncResult = syncAndDrawFrame(choreographer.mFrameInfo);
...
}
@SyncAndDrawResult
public int syncAndDrawFrame(@NonNull FrameInfo frameInfo) {
return nSyncAndDrawFrame(mNativeProxy, frameInfo.frameInfo, frameInfo.frameInfo.length);
}
In the JNI layer the call goes through RenderProxy to proxy->syncAndDrawFrame. The RenderProxy is created when the ThreadedRenderer is constructed.
//android_graphics_HardwareRenderer.cpp
static int android_view_ThreadedRenderer_syncAndDrawFrame(JNIEnv* env, jobject clazz,
jlong proxyPtr, jlongArray frameInfo,
jint frameInfoSize) {
//forward to proxy->syncAndDrawFrame
RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
env->GetLongArrayRegion(frameInfo, 0, frameInfoSize, proxy->frameInfo());
return proxy->syncAndDrawFrame();
}
//ThreadedRenderer.java
public HardwareRenderer() {
...
//the native layer creates the root RenderNode and returns its pointer, which is adopted as the Java mRootNode object
mRootNode = RenderNode.adopt(nCreateRootRenderNode());
mNativeProxy = nCreateProxy(!mOpaque, mRootNode.mNativeRenderNode);
...
}
RenderProxy has three important members: mRenderThread, mContext (a CanvasContext), and mDrawFrameTask. mRenderThread is the render thread, created as a singleton so there is only one per process; CanvasContext is the drawing context through which the actual rendering is done via OpenGL ES/Vulkan; DrawFrameTask is the task used to execute the rendering work.
//RenderProxy.cpp
RenderProxy::RenderProxy(bool translucent, RenderNode* rootRenderNode,
IContextFactory* contextFactory)
: mRenderThread(RenderThread::getInstance()), mContext(nullptr) {
mContext = mRenderThread.queue().runSync([&]() -> CanvasContext* {
return CanvasContext::create(mRenderThread, translucent, rootRenderNode, contextFactory);
});
mDrawFrameTask.setContext(&mRenderThread, mContext, rootRenderNode,
pthread_gettid_np(pthread_self()), getRenderThreadTid());
}
//RenderThread.cpp
RenderThread& RenderThread::getInstance() {
[[clang::no_destroy]] static sp<RenderThread> sInstance = []() {
sp<RenderThread> thread = sp<RenderThread>::make();
thread->start("RenderThread");
return thread;
}();
gHasRenderThreadInstance = true;
return *sInstance;
}
RenderThread::RenderThread()
: ThreadBase()
, mVsyncSource(nullptr)
, mVsyncRequested(false)
, mFrameCallbackTaskPending(false)
, mRenderState(nullptr)
, mEglManager(nullptr)
, mFunctorManager(WebViewFunctorManager::instance())
, mGlobalProfileData(mJankDataMutex) {
Properties::load();
}
//CanvasContext.cpp
CanvasContext* CanvasContext::create(RenderThread& thread, bool translucent,
RenderNode* rootRenderNode, IContextFactory* contextFactory) {
auto renderType = Properties::getRenderPipelineType();
switch (renderType) {
case RenderPipelineType::SkiaGL:
return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
std::make_unique<skiapipeline::SkiaOpenGLPipeline>(thread));
case RenderPipelineType::SkiaVulkan:
return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
std::make_unique<skiapipeline::SkiaVulkanPipeline>(thread));
default:
LOG_ALWAYS_FATAL("canvas context type %d not supported", (int32_t)renderType);
break;
}
return nullptr;
}
Back to RenderProxy::syncAndDrawFrame, which calls mDrawFrameTask.drawFrame:
int RenderProxy::syncAndDrawFrame() {
return mDrawFrameTask.drawFrame();
}
DrawFrameTask then does postAndWait, adding itself to the RenderThread's queue. At this point the UIThread's work is done (it blocks and waits), and the sync and rendering are handed over to the RenderThread.
//DrawFrameTask.cpp
int DrawFrameTask::drawFrame() {
mSyncResult = SyncResult::OK;
postAndWait();
return mSyncResult;
}
void DrawFrameTask::postAndWait() {
ATRACE_CALL();
AutoMutex _lock(mLock);
//post a task to be executed on the RenderThread's message queue
mRenderThread->queue().post([this]() { run(); });
//the UI thread waits here for now
mSignal.wait(mLock);
}
Execution now continues on the RenderThread, which syncs and renders the DisplayList.
DrawFrameTask::run calls into CanvasContext.
IRenderPipeline is an interface; its implementations include SkiaCpuPipeline, SkiaGpuPipeline, SkiaVulkanPipeline, and SkiaOpenGLPipeline.
//DrawFrameTask.cpp
CanvasContext* mContext;
void DrawFrameTask::run() {
...
// get the render pipeline
IRenderPipeline* pipeline = mContext->getRenderPipeline();
{
TreeInfo info(TreeInfo::MODE_FULL, *mContext);
info.forceDrawFrame = mForceDrawFrame;
mForceDrawFrame = false;
//sync the DisplayListOp command tree built on the UIThread over to the RenderThread
canUnblockUiThread = syncFrameState(info);
...
}
...
//if the UIThread does not need to stay blocked, wake it up (the wait in DrawFrameTask::postAndWait)
if (canUnblockUiThread) {
unblockUiThread();
}
if (CC_LIKELY(canDrawThisFrame)) {
//call CanvasContext::draw to do the actual rendering
context->draw(solelyTextureViewUpdates);
}
if (pipeline->hasHardwareBuffer()) {
auto fence = pipeline->flush();
hardwareBufferParams.invokeRenderCallback(std::move(fence), 0);
}
...
}
DrawFrameTask::syncFrameState syncs the DisplayList ops to the RenderThread:
//DrawFrameTask.cpp
bool DrawFrameTask::syncFrameState(TreeInfo& info) {
...
//make sure the EGL context and EGLSurface are set up correctly for the upcoming OpenGL ES rendering
bool canDraw = mContext->makeCurrent();
mContext->unpinImages();
...
mContext->setContentDrawBounds(mContentDrawBounds);
//call CanvasContext::prepareTree to sync the draw-command tree
mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);
...
return info.prepareTextures;
}
//CanvasContext.cpp
//CanvasContext::prepareTree recursively calls prepareTree on each child view's RenderNode
void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued,
RenderNode* target) {
...
//before syncing the DisplayList, call mAnimationContext->startFrame to prepare the animations for display
mAnimationContext->startFrame(info.mode);
//iterate over the RenderNodes calling prepareTree, syncing the DisplayList to the RenderThread
for (const sp<RenderNode>& node : mRenderNodes) {
node->prepareTree(info);
}
//after syncing the DisplayList, call mAnimationContext->runRemainingAnimations to advance animations that have not yet finished
mAnimationContext->runRemainingAnimations(info);
...
}
RenderNode::prepareTree calls RenderNode::pushStagingDisplayListChanges to assign and sync the DisplayList. With that, the sync of the DisplayList ops is complete.
void RenderNode::pushStagingDisplayListChanges(TreeObserver& observer, TreeInfo& info) {
if (mNeedsDisplayListSync) {
mNeedsDisplayListSync = false;
damageSelf(info);
syncDisplayList(observer, &info);
damageSelf(info);
}
}
void RenderNode::syncDisplayList(TreeObserver& observer, TreeInfo* info) {
...
mDisplayList = std::move(mStagingDisplayList);
...
}
With the DisplayList and properties synced, we return to DrawFrameTask::run(), which calls CanvasContext::draw:
//CanvasContext.cpp
void CanvasContext::draw(bool solelyTextureViewUpdates) {
...
//invoke the render pipeline (OpenGL or Vulkan) to render according to the prepared draw commands
IRenderPipeline::DrawResult drawResult;
drawResult = mRenderPipeline->draw(
frame, windowDirty, dirty, mLightGeometry, &mLayerUpdateQueue, mContentDrawBounds,
mOpaque, mLightInfo, mRenderNodes, &(profiler()), mBufferParams, profilerLock());
//swap the rendered back buffer with the front buffer and hand it to SurfaceFlinger for composition on screen
bool didSwap = mRenderPipeline->swapBuffers(frame, drawResult, windowDirty, mCurrentFrameInfo,
&requireSwap);
}
The overall DisplayList-op sync and rendering flow is summarized in the diagram below.
3. Using RenderThread to improve animation smoothness
Broadly, animations can be divided into basic view animations and more complex motion effects.
Basic view animations cover common interactions such as button clicks, page transitions, and showing/hiding views. They include translation, scale, rotation, and alpha changes, used alone or combined. Motion effects are richer, more expressive animations involving timeline control, vector animation, and so on, usually driven by a rendering library (e.g. Lottie, libpag).
Here we analyze basic view animations and how hardware acceleration makes them smoother.
ViewPropertyAnimator already provides a GPU-side "transform" optimization for common properties (alpha, translation, scale, rotation, etc.). Because a property animation does not change what the view actually draws (background, text, bitmaps, ...), only the matrix/property information in the DisplayList needs to be updated on the CPU side, which is very lightweight, instead of regenerating the whole DisplayList; this avoids repeatedly re-running measure/layout/draw on the CPU, and the GPU composites the frame with the updated parameters. However, these DisplayList updates still run on the UIThread. So can the animation's update and rendering both run on the RenderThread? That would offload the CPU even further, especially for page transitions: while the transition animation plays, the new page is being inflated and its data bound, and if the animation's DisplayList construction/update still runs on the UIThread, the transition easily stutters. With the RenderThread working in parallel, animation smoothness improves noticeably: both the animation update and the rendering are handled by the RenderThread, so the UI thread is never blocked by them.
Using ViewPropertyAnimator:
View view = findViewById(R.id.button);
ViewPropertyAnimator animator = view.animate().scaleX(1).translationX(1).alpha(1);
animator.start();
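A related, documented helper worth mentioning here (my addition, not from the original walkthrough): ViewPropertyAnimator#withLayer() promotes the view to a hardware layer for the duration of the animation, so while the properties change the GPU only re-composites a cached texture.
//Hedged sketch: animate behind a temporary hardware layer
view.animate()
        .translationX(200f)
        .alpha(0.5f)
        .withLayer()        //LAYER_TYPE_HARDWARE while the animation runs, restored afterwards
        .setDuration(300)
        .start();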
ViewPropertyAnimator#start calls the internal method startAnimation:
public void start() {
mView.removeCallbacks(mAnimationStarter);
startAnimation();
}
The implementation of ViewPropertyAnimator#startAnimation changed around Android API 29: before API 29, startAnimation first went through ViewPropertyAnimatorRT; from API 29 that path was removed. The internals of ViewPropertyAnimatorRT are still very instructive, so we analyze it first. The corresponding source:
https://android.googlesource.com/platform/frameworks/base/+/57caeb5/core/java/android/view/ViewPropertyAnimator.java
https://android.googlesource.com/platform/frameworks/base/+/57caeb5/core/java/android/view/ViewPropertyAnimatorRT.java
3.1 ViewPropertyAnimatorRT
public class ViewPropertyAnimator {
......
/**
* A RenderThread-driven backend that may intercept startAnimation
*/
private ViewPropertyAnimatorRT mRTBackend;
......
private void startAnimation() {
//if mRTBackend is non-null and mRTBackend.startAnimation returns true,
//the animation is driven by ViewPropertyAnimatorRT on the RenderThread
if (mRTBackend != null && mRTBackend.startAnimation(this)) {
return;
}
......
ValueAnimator animator = ValueAnimator.ofFloat(1.0f);
......
animator.start();
}
......
}
After passing the canHandleAnimator filter (which checks whether listeners are set, whether hardware acceleration is supported, etc.), ViewPropertyAnimatorRT#startAnimation calls ViewPropertyAnimatorRT#doStartAnimation, implemented as follows:
private void doStartAnimation(ViewPropertyAnimator parent) {
//number of pending animations
int size = parent.mPendingAnimations.size();
long startDelay = parent.getStartDelay();
long duration = parent.getDuration();
TimeInterpolator interpolator = parent.getInterpolator();
...
//iterate over all pending animations
for (int i = 0; i < size; i++) {
NameValuesHolder holder = parent.mPendingAnimations.get(i);
//RenderNodeAnimator is a key class;
//mapViewPropertyToRenderProperty maps a View property to a RenderNodeAnimator property (e.g. ViewPropertyAnimator#TRANSLATION_X is 0x0001 while RenderNodeAnimator#TRANSLATION_X is 0)
int property = RenderNodeAnimator.mapViewPropertyToRenderProperty(holder.mNameConstant);
//compute the final value from the start value and the delta
final float finalValue = holder.mFromValue + holder.mDeltaValue;
//initialize the RenderNodeAnimator with the render property, the animation's final value, start delay, duration, and interpolator
RenderNodeAnimator animator = new RenderNodeAnimator(property, finalValue);
animator.setStartDelay(startDelay);
animator.setDuration(duration);
animator.setInterpolator(interpolator);
//another key call: attach the RenderNodeAnimator to the target View
animator.setTarget(mView);
animator.start();
mAnimators[property] = animator;
}
parent.mPendingAnimations.clear();
}
Let's look at the RenderNodeAnimator constructor and the setTarget implementation:
//frameworks/base/graphics/java/android/graphics/animation/RenderNodeAnimator.java
public RenderNodeAnimator(int property, float finalValue) {
mRenderProperty = property;
mFinalValue = finalValue;
mUiThreadHandlesDelay = true;
init(nCreateAnimator(property, finalValue));
}
private void init(long ptr) {
mNativePtr = new VirtualRefBasePtr(ptr);
}
private static native long nCreateAnimator(int property, float finalValue);
The Java-level RenderNodeAnimator is a wrapper around the native RenderPropertyAnimator:
//android_graphics_animation_RenderNodeAnimator.cpp
static jlong createAnimator(JNIEnv* env, jobject clazz,
jint propertyRaw, jfloat finalValue) {
RenderPropertyAnimator::RenderProperty property = toRenderProperty(propertyRaw);
BaseRenderNodeAnimator* animator = new RenderPropertyAnimator(property, finalValue);
animator->setListener(&sLifecycleChecker);
return reinterpret_cast<jlong>( animator );
}
Continuing with RenderNodeAnimator#setTarget: the native RenderPropertyAnimator is ultimately added to an AnimatorManager. The AnimatorManager checks whether the animation has finished; if not, the RenderThread listens for onVsync callbacks and automatically computes and renders the animation's next frame.
//frameworks/base/core/java/android/view/RenderNodeAnimator.java
public void setTarget(View view) {
mViewTarget = view;
setTarget(mViewTarget.mRenderNode);
}
private void setTarget(RenderNode node) {
...
nSetListener(mNativePtr.get(), this);
mTarget = node;
mTarget.addAnimator(this);
}
//frameworks/base/graphics/java/android/graphics/RenderNode.java
/** @hide */
public void addAnimator(RenderNodeAnimator animator) {
...
nAddAnimator(mNativeRenderNode, animator.getNativeAnimator());
mAnimationHost.registerAnimatingRenderNode(this, animator);
}
//android_graphics_RenderNode.cpp
static void android_view_RenderNode_addAnimator(JNIEnv* env, jobject clazz, jlong renderNodePtr,
jlong animatorPtr) {
RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
RenderPropertyAnimator* animator = reinterpret_cast<RenderPropertyAnimator*>(animatorPtr);
renderNode->addAnimator(animator);
}
//frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::addAnimator(const sp<BaseRenderNodeAnimator>& animator) {
//add the animator to the AnimatorManager of the RenderNode associated with the target View
mAnimatorManager.addAnimator(animator);
}
Back in the Java-level RenderNode#addAnimator, let's look at mAnimationHost.registerAnimatingRenderNode(this, animator).
It calls ViewRootImpl#registerAnimatingRenderNode, which registers the RenderNode described by the animator with the RenderThread, so the RenderThread knows which RenderNodes have new animations to display.
//ViewRootImpl.java
public void registerAnimatingRenderNode(RenderNode animator) {
...
mAttachInfo.mThreadedRenderer.registerAnimatingRenderNode(animator);
...
}
//ThreadedRenderer.java
/** @hide */
public void registerAnimatingRenderNode(RenderNode animator) {
nRegisterAnimatingRenderNode(mRootNode.mNativeRenderNode, animator.mNativeRenderNode);
}
//frameworks/base/core/jni/android_graphics_HardwareRenderer.cpp
static void android_view_ThreadedRenderer_registerAnimatingRenderNode(JNIEnv* env, jobject clazz,
jlong rootNodePtr, jlong animatingNodePtr) {
RootRenderNode* rootRenderNode = reinterpret_cast<RootRenderNode*>(rootNodePtr);
RenderNode* animatingNode = reinterpret_cast<RenderNode*>(animatingNodePtr);
//register the RenderNode described by animatingNode with the RootRenderNode
rootRenderNode->attachAnimatingNode(animatingNode);
}
The RenderNodes saved in the mPendingAnimatingRenderNodes vector are processed when the next frame is rendered.
std::vector<sp<RenderNode>> mPendingAnimatingRenderNodes;
void RootRenderNode::attachAnimatingNode(RenderNode* animatingNode) {
mPendingAnimatingRenderNodes.push_back(animatingNode);
}
When the RenderThread renders the next frame, it calls CanvasContext::prepareTree to sync the DisplayList to the RenderThread. Internally it first calls mAnimationContext->startFrame to prepare the animations for display, then syncs the DisplayList, and then calls mAnimationContext->runRemainingAnimations to advance the animations that have not yet finished.
//CanvasContext.cpp
void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued,
RenderNode* target) {
...
//before syncing the DisplayList, call mAnimationContext->startFrame to prepare the animations for display
mAnimationContext->startFrame(info.mode);
//iterate over the RenderNodes calling prepareTree, syncing the DisplayList to the RenderThread
for (const sp<RenderNode>& node : mRenderNodes) {
node->prepareTree(info);
}
//after syncing the DisplayList, call mAnimationContext->runRemainingAnimations to advance unfinished animations (i.e. compute the next frame's parameters and apply them to the target RenderNode)
mAnimationContext->runRemainingAnimations(info);
...
}
startFrame and runRemainingAnimations are implemented in AnimationContext as follows:
//frameworks/base/libs/hwui/AnimationContext.cpp
void AnimationContext::startFrame(TreeInfo::TraversalMode mode) {
AnimationHandle* head = mNextFrameAnimations.mNextHandle;
if (head) {
mNextFrameAnimations.mNextHandle = nullptr;
//move the next frame's AnimationHandle list onto mCurrentFrameAnimations
mCurrentFrameAnimations.mNextHandle = head;
head->mPreviousHandle = &mCurrentFrameAnimations;
}
mFrameTimeMs = ns2ms(mClock.latestVsync());
}
void AnimationContext::runRemainingAnimations(TreeInfo& info) {
while (mCurrentFrameAnimations.mNextHandle) {
AnimationHandle* current = mCurrentFrameAnimations.mNextHandle;
//get the AnimatorManager from the current frame's animations, then run pushStaging and animateNoDamage to compute the next frame's animation parameters
AnimatorManager& animators = current->mRenderNode->animators();
animators.pushStaging();
animators.animateNoDamage(info);
}
}
That completes the walkthrough. So how do we use ViewPropertyAnimatorRT to hand animations over to the RenderThread?
Create a ViewPropertyAnimatorRT object and assign it to ViewPropertyAnimator's mRTBackend field. ViewPropertyAnimatorRT is package-private, so both steps can be done via reflection:
//use reflection to create a ViewPropertyAnimatorRT for the given View
private static Object createViewPropertyAnimatorRT(View view) {
try {
Class<?> animRtClazz = Class.forName("android.view.ViewPropertyAnimatorRT");
Constructor<?> animRtConstructor = animRtClazz.getDeclaredConstructor(View.class);
animRtConstructor.setAccessible(true);
Object animRt = animRtConstructor.newInstance(view);
return animRt;
} catch (Exception e) {
Log.d(TAG, "创建ViewPropertyAnimatorRT出错,错误信息:" + e.toString());
return null;
}
}
//then assign the created ViewPropertyAnimatorRT to the ViewPropertyAnimator's mRTBackend field
private static void setViewPropertyAnimatorRT(ViewPropertyAnimator animator, Object rt) {
try {
Class<?> animClazz = Class.forName("android.view.ViewPropertyAnimator");
Field animRtField = animClazz.getDeclaredField("mRTBackend");
animRtField.setAccessible(true);
animRtField.set(animator, rt);
} catch (Exception e) {
Log.d(TAG, "设置ViewPropertyAnimatorRT出错,错误信息:" + e.toString());
}
}
Usage looks like this:
ViewPropertyAnimator animator = view.animate().scaleX(2).setDuration(1000);
Object animatorRT = createViewPropertyAnimatorRT(view);
setViewPropertyAnimatorRT(animator, animatorRT);
animator.start();
3.2 CanvasProperty+RenderNodeAnimator
understanding-the-renderthread (https://medium.com/@workingkills/understanding-the-renderthread-4dc17bcaf979) offers another approach that does not depend on ViewPropertyAnimatorRT: drive hardware-accelerated animation directly with RenderNodeAnimator and CanvasProperty.
Basic usage:
private void initialiseAnimation(Canvas canvas) {
float width = getWidth();
float height = getHeight();
float initialRadius = 0f;
float targetRadius = Math.min(width, height) / 2f;
radiusProperty = RenderThread.createCanvasProperty(canvas, initialRadius, true);
if (radiusAnimator != null) {
radiusAnimator.cancel();
}
//the Animator returned here is a RenderNodeAnimator when hardware acceleration is in use
radiusAnimator = RenderThread.createFloatAnimator(this, canvas, radiusProperty, targetRadius);
radiusAnimator.setInterpolator(new LinearInterpolator());
radiusAnimator.setDuration(animationDurationMillis);
radiusAnimator.start();
}
A custom RenderThread.java class uses delegation to support both software-driven and hardware-driven (RenderThread) animation:
It provides createCanvasProperty-style methods that wrap primitive values in CanvasProperty objects
It provides createFloatAnimator-style methods that create the Animator objects
public final class RenderThread {
...
public static void init(boolean skipAndroidVersionCheck) {
RenderThreadDelegate delegate = DELEGATE;
if (delegate == null || !delegate.isSupported()) {
//RenderThreadMethods uses reflection to obtain the methods of the DisplayListCanvas, CanvasProperty, and RenderNodeAnimator classes
RenderThreadMethods methods = RenderThreadMethods.create(skipAndroidVersionCheck);
if (methods != null) {
DELEGATE = new RenderThreadDelegateHw(methods);
} else {
DELEGATE = new RenderThreadDelegate();
}
}
}
@NonNull
public static CanvasProperty<Float> createCanvasProperty(@NonNull Canvas canvas, float initialValue, boolean useRenderThread) {
return DELEGATE.createCanvasProperty(canvas, initialValue, useRenderThread);
}
public static void setAnimatorTarget(@RenderNodeAnimator @NonNull Animator animator, @DisplayListCanvas @NonNull Canvas target) {
DELEGATE.setTarget(animator, target);
}
public static Animator createFloatAnimator(
@NonNull View view,
@NonNull Canvas canvas,
@NonNull CanvasProperty<Float> property,
float targetValue) {
return DELEGATE.createFloatAnimator(view, canvas, property, targetValue);
}
...
}
RenderThreadMethods uses reflection to obtain the classes, methods, and fields of DisplayListCanvas, CanvasProperty, and RenderNodeAnimator:
final class RenderThreadMethods {
static RenderThreadMethods create(boolean skipAndroidVersionCheck) {
...
try {
ClassLoader classLoader = RenderThreadMethods.class.getClassLoader();
Class<?> displayListCanvas = loadDisplayListCanvasClassOrEquivalent(sdk, classLoader);
Class<?> canvasProperty = loadCanvasPropertyClass(classLoader);
Class<Animator> renderNodeAnimatorClass = loadRenderNodeAnimatorClass(classLoader);
Method displayListCanvas_drawRoundRect = getDisplayListCanvasDrawRoundRectMethod(displayListCanvas, canvasProperty);
Method canvasProperty_createFloat = getCanvasPropertyCreateFloatMethod(canvasProperty);
Constructor<Animator> renderNodeAnimator_float = getRenderNodeAnimatorFloatConstructor(canvasProperty, renderNodeAnimatorClass);
Method renderNodeAnimator_setTarget = getRenderNodeAnimatorSetTargetMethod(renderNodeAnimatorClass);
...
return new RenderThreadMethods(
displayListCanvas,
displayListCanvas_drawCircle,
displayListCanvas_drawRoundRect,
canvasProperty_createFloat,
canvasProperty_createPaint,
renderNodeAnimator_float,
renderNodeAnimator_paint,
renderNodeAnimator_setTarget,
renderNodeAnimator_paintField_strokeWidth,
renderNodeAnimator_paintField_alpha);
} catch (Exception e) {
logW("Error while getting render thread methods.", e);
return null;
}
}
public void setTarget(@RenderNodeAnimator @NonNull Animator animator, @DisplayListCanvas @NonNull Canvas target) {
//noinspection TryWithIdenticalCatches
try {
renderNodeAnimator_setTarget.invoke(animator, target);
} catch (IllegalAccessException e) {
throw new RuntimeException(e);
} catch (InvocationTargetException e) {
throw new RuntimeException(e);
}
}
}
Now let's look at the hardware (GPU) implementation of animation computation and rendering (RenderThreadDelegateHw.java).
It mainly does three things:
1. Construct a HardwareCanvasProperty
2. Create a RenderNodeAnimator through the reflective methods in RenderThreadMethods
3. Bind the View (ultimately its RenderNode) to the RenderNodeAnimator
final class RenderThreadDelegateHw extends RenderThreadDelegate {
private final RenderThreadMethods renderThread;
RenderThreadDelegateHw(@NonNull RenderThreadMethods renderThread) {
this.renderThread = renderThread;
}
public CanvasProperty<Float> createCanvasProperty(@NonNull Canvas canvas, float initialValue, boolean useRenderThread) {
CanvasProperty<Float> hw = null;
if (useRenderThread && isDisplayListCanvas(canvas)) {
hw = createHardwareCanvasProperty(initialValue);
}
if (hw != null) {
return hw;
} else {
return createSoftwareCanvasProperty(initialValue);
}
}
protected HardwareCanvasProperty<Float> createHardwareCanvasProperty(float initialValue) {
return new HardwareCanvasProperty<>(renderThread.createCanvasProperty(initialValue));
}
...
protected Animator createHardwareFloatAnimator(
@DisplayListCanvas @Nullable Canvas canvas, @NonNull HardwareCanvasProperty<Float> property, float targetValue) {
Animator animator = renderThread.createFloatRenderNodeAnimator(property.getProperty(), targetValue);
if (canvas != null) {
setTarget(animator, canvas);
}
return animator;
}
@Override
public void setTarget(@RenderNodeAnimator @NonNull Animator animator, @DisplayListCanvas @NonNull Canvas target) {
renderThread.setTarget(animator, target);
}
...
}
setTarget eventually reaches RenderNodeAnimator#setTarget, which binds the RenderNode to the RenderNodeAnimator and registers the listener,
finally landing in the native AnimatorManager (AnimatorManager.cpp):
public final void setTarget(RecordingCanvas canvas) {
setTarget(canvas.mNode);
}
protected void setTarget(RenderNode node) {
...
nSetListener(mNativePtr.get(), this);
mTarget = node;
mTarget.addAnimator(this);
}
//frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::addAnimator(const sp<BaseRenderNodeAnimator>& animator) {
mAnimatorManager.addAnimator(animator);
}
We can then borrow the implementation of ViewPropertyAnimatorRT#doStartAnimation to hardware-accelerate ViewPropertyAnimator animations, on API levels both before and after 29:
private void doStartAnimation(ViewPropertyAnimator parent) {
//here, obtain mPendingAnimations via reflection instead
int size = parent.mPendingAnimations.size();
long startDelay = parent.getStartDelay();
long duration = parent.getDuration();
TimeInterpolator interpolator = parent.getInterpolator();
...
//iterate over all pending animations
for (int i = 0; i < size; i++) {
//again, obtain mPendingAnimations via reflection
NameValuesHolder holder = parent.mPendingAnimations.get(i);
//RenderNodeAnimator is a key class;
//call mapViewPropertyToRenderProperty via reflection here as well
int property = RenderNodeAnimator.mapViewPropertyToRenderProperty(holder.mNameConstant);
//compute the final value from the start value and the delta
final float finalValue = holder.mFromValue + holder.mDeltaValue;
//construct the RenderNodeAnimator via reflection and call the corresponding setters
RenderNodeAnimator animator = new RenderNodeAnimator(property, finalValue);
animator.setStartDelay(startDelay);
animator.setDuration(duration);
animator.setInterpolator(interpolator);
animator.setTarget(mView);
animator.start();
mAnimators[property] = animator;
}
parent.mPendingAnimations.clear();
}
4. Comparing the results with Perfetto
Using Perfetto, we compare animating with RenderThread acceleration against animating without it.
The result is shown below. With RenderThread-accelerated animation, both the animation's computation and its rendering run on the RenderThread, completely freeing the UIThread.
Without RenderThread acceleration, the UIThread still has to update the DisplayList and then sync the DisplayList ops to the RenderThread for rendering. Compared with the former, this is far more likely to cause jank when the UI thread is busy, and it also hurts performance metrics such as the instant-open rate.
5. References
1. Android N中UI硬件渲染(hwui)的HWUI_NEW_OPS(基于Android 7.1)
https://blog.csdn.net/jinzhuojun/article/details/54234354
2. 【硬件加速】3、DisplayList渲染过程分析【Android 13】
https://blog.csdn.net/ukynho/article/details/130763295
3. Android 重学系列 View的绘制流程(七) 硬件渲染(下)
https://www.jianshu.com/p/4854d9fcc55e
4. Android 系统渲染那些事
https://juejin.cn/post/7423708879224045594
5. 努比亚团队 - Android性能优化(一)——卡顿优化
https://www.sukaidev.top/2022/06/28/f4c089aa
6. 安卓性能优化---绘制优化篇
https://juejin.cn/post/7050404740760354829
7. 老罗 - Android应用程序UI硬件加速渲染环境初始化过程分析
https://blog.csdn.net/luoshengyang/article/details/45769759
8. 西瓜视频RenderThread引起的闪退问题攻坚历程
https://blog.csdn.net/ByteDanceTech/article/details/134985982
9. 美团 - 基本功 | Litho的使用及原理剖析 - 美团技术团队
10. RenderThread:异步渲染动画
https://mp.weixin.qq.com/s?__biz=MzUyMDAxMjQ3Ng==&mid=2247489230&idx=1&sn=adc193e35903ab90a4c966059933a35a
11. 张绍文 - 如何优化UI渲染
https://time.geekbang.org/column/article/81049
12. Understanding the RenderThread
https://medium.com/@workingkills/understanding-the-renderthread-4dc17bcaf979
Thanks for reading.
Welcome to follow the WeChat official account "音视频开发之旅" and learn and grow together.
Comments and discussion are welcome.