A Brief Look at How Choreographer Works in Android

0. Introduction

Rendering an Android app's UI and getting it onto the screen takes several roles working together: the app process measures, lays out and draws the views, the SurfaceFlinger process composites the rendered data, and the display scans out the frame data on each refresh. If, however, the app process produces view data faster than the SurfaceFlinger process can composite it, screen tearing can occur, meaning the content shown on screen comes from more than one frame.

To coordinate the pace of view-data production in the app process with view-data consumption in the SurfaceFlinger process, Android introduces Choreographer: it requests VSYNC signals and schedules the app process's layout and drawing work into the VSYNC period, reducing the screen tearing caused by the app's rendering running out of sync with the display refresh.

This article analyzes how the Choreographer mechanism works from two angles: how the Choreographer instance is created, and how Choreographer requests, receives and handles VSYNC signals.

Note: the source code in this article is based on Android 13.

1. The Creation Flow of Choreographer

Let's first look at when Choreographer is created. Each time an Android app starts an Activity, the SystemServer process makes a Binder call from ActivityTaskSupervisor#realStartActivityLocked into the app process, where IApplicationThread#scheduleTransaction is invoked to create the Activity and drive its lifecycle methods. After Activity#onResume has executed, ViewManager#addView is called to create the ViewRootImpl instance, and the Choreographer instance is created inside the ViewRootImpl constructor and held as a member field of the ViewRootImpl object.

Let's walk through the Activity launch process against the source code.

First, after the SystemServer process receives the app process's Binder request to start an Activity, ActivityTaskSupervisor#realStartActivityLocked packages a LaunchActivityItem and a ResumeActivityItem into a ClientTransaction and schedules it through ClientLifecycleManager#scheduleTransaction; the transaction finally crosses into the app process via the app's IApplicationThread (a Binder proxy), landing in ApplicationThread#scheduleTransaction.

	// com.android.server.wm.ActivityTaskSupervisor#realStartActivityLocked
	boolean realStartActivityLocked(ActivityRecord r, WindowProcessController proc, boolean andResume, boolean checkConfig) throws RemoteException {       
			...
			// Create activity launch transaction.
           final ClientTransaction clientTransaction = ClientTransaction.obtain(proc.getThread(), r.token);
           ...
           // add callback to execute the lifecycle of activity.
           clientTransaction.addCallback(LaunchActivityItem.obtain(new Intent(r.intent), System.identityHashCode(r), r.info, mergedConfiguration.getGlobalConfiguration(), mergedConfiguration.getOverrideConfiguration(), r.compat, r.getFilteredReferrer(r.launchedFromPackage), task.voiceInteractor, proc.getReportedProcState(), r.getSavedState(), r.getPersistentSavedState(), results, newIntents, r.takeOptions(), isTransitionForward, proc.createProfilerInfoIfNeeded(), r.assistToken, activityClientController, r.shareableActivityToken, r.getLaunchedFromBubble(), fragmentToken));

           // Set desired final state.
           final ActivityLifecycleItem lifecycleItem;
           if (andResume) {
               lifecycleItem = ResumeActivityItem.obtain(isTransitionForward);
           } else {
               lifecycleItem = PauseActivityItem.obtain();
           }
           clientTransaction.setLifecycleStateRequest(lifecycleItem);

           // Schedule transaction.
		   mService.getLifecycleManager().scheduleTransaction(clientTransaction);
           ...
	}

	// android.app.ClientTransactionHandler#scheduleTransaction
    void scheduleTransaction(ClientTransaction transaction) throws RemoteException {
        final IApplicationThread client = transaction.getClient();
        transaction.schedule();
        if (!(client instanceof Binder)) {
            // If client is not an instance of Binder - it's a remote call and at this point it is
            // safe to recycle the object. All objects used for local calls will be recycled after
            // the transaction is executed on client in ActivityThread.
            transaction.recycle();
        }
    }

	// android.app.servertransaction.ClientTransaction#schedule
	/**
     * Schedules the transaction for execution once it has been initialized. It will be sent to
     * the client and handled in the following order:
     * 1. Call preExecute(ClientTransactionHandler)
     * 2. Schedule the message that corresponds to this transaction
     * 3. Call TransactionExecutor#execute(ClientTransaction)
     */
    public void schedule() throws RemoteException {
        mClient.scheduleTransaction(this); // mClient is the IApplicationThread Binder handle that the app process handed over during Binder communication.
    }

After the Binder call we are now in the app process. ApplicationThread#scheduleTransaction simply delegates to the outer-class method ActivityThread#scheduleTransaction, which uses the Handler mH to post an EXECUTE_TRANSACTION message into the main thread's MessageQueue. The app process then handles the EXECUTE_TRANSACTION message on the main thread, creating the Activity instance and invoking its lifecycle methods.

/**
 * Manages the execution of main-thread work in the application process, scheduling and executing tasks as requested by the SystemServer process.
 */
public final class ActivityThread extends ClientTransactionHandler
        implements ActivityThreadInternal {
	@UnsupportedAppUsage
    final ApplicationThread mAppThread = new ApplicationThread();
    @UnsupportedAppUsage
    final Looper mLooper = Looper.myLooper();
    @UnsupportedAppUsage
    final H mH = new H();
    // An executor that performs multi-step transactions.
    private final TransactionExecutor mTransactionExecutor = new TransactionExecutor(this);
    ...
	private class ApplicationThread extends IApplicationThread.Stub {
		...
		@Override
        public void scheduleTransaction(ClientTransaction transaction) throws RemoteException {
            ActivityThread.this.scheduleTransaction(transaction);
        }
        ...
	}

	// from the class ClientTransactionHandler that is the base class of ActivityThread
	/** Prepare and schedule transaction for execution. */
    void scheduleTransaction(ClientTransaction transaction) {
        transaction.preExecute(this);
        sendMessage(ActivityThread.H.EXECUTE_TRANSACTION, transaction);
    }

	private void sendMessage(int what, Object obj, int arg1, int arg2, boolean async) {
		...
        // mH is the Handler bound to the main thread
        mH.sendMessage(msg);
    }

	class H extends Handler {
		public void handleMessage(Message msg) {
			...
			switch (msg.what) {
				...
				case EXECUTE_TRANSACTION:
                    final ClientTransaction transaction = (ClientTransaction) msg.obj;
                    mTransactionExecutor.execute(transaction);
                    ...
                    break;
                ...
            }
			...
		}
	}

Next, the main thread calls TransactionExecutor#execute to handle the message, and executeCallbacks runs the LaunchActivityItem.

/**
 * Class that manages transaction execution in the correct order.
 */
public class TransactionExecutor {
	/**
     * Resolve transaction.
     * First all callbacks will be executed in the order they appear in the list. If a callback
     * requires a certain pre- or post-execution state, the client will be transitioned accordingly.
     * Then the client will cycle to the final lifecycle state if provided. Otherwise, it will
     * either remain in the initial state, or last state needed by a callback.
     */
    public void execute(ClientTransaction transaction) {
        ...
		// This is where the lifecycle work is actually executed
        executeCallbacks(transaction);
        ...
    }

	/** Cycle through all states requested by callbacks and execute them at proper times. */
    @VisibleForTesting
    public void executeCallbacks(ClientTransaction transaction) {
    	// Retrieve the callback items (such as LaunchActivityItem) that the SystemServer process added earlier
        final List<ClientTransactionItem> callbacks = transaction.getCallbacks();
        if (callbacks == null || callbacks.isEmpty()) {
            // No callbacks to execute, return early.
            return;
        }
        
        final IBinder token = transaction.getActivityToken();
        ActivityClientRecord r = mTransactionHandler.getActivityClient(token);

        // In case when post-execution state of the last callback matches the final state requested
        // for the activity in this transaction, we won't do the last transition here and do it when
        // moving to final state instead (because it may contain additional parameters from server).
        final ActivityLifecycleItem finalStateRequest = transaction.getLifecycleStateRequest();
        final int finalState = finalStateRequest != null ? finalStateRequest.getTargetState() : UNDEFINED;
        // Index of the last callback that requests some post-execution state.
        final int lastCallbackRequestingState = lastCallbackRequestingState(transaction);

        final int size = callbacks.size();
        for (int i = 0; i < size; ++i) {
            final ClientTransactionItem item = callbacks.get(i);
            final int postExecutionState = item.getPostExecutionState();
            final int closestPreExecutionState = mHelper.getClosestPreExecutionState(r, item.getPostExecutionState());
            if (closestPreExecutionState != UNDEFINED) {
                cycleToPath(r, closestPreExecutionState, transaction);
            }
			// Execute the callback item, e.g. LaunchActivityItem#execute
            item.execute(mTransactionHandler, token, mPendingActions);
            item.postExecute(mTransactionHandler, token, mPendingActions);
            if (r == null) {
                // Launch activity request will create an activity record.
                r = mTransactionHandler.getActivityClient(token);
            }

            if (postExecutionState != UNDEFINED && r != null) {
                // Skip the very last transition and perform it by explicit state request instead.
                final boolean shouldExcludeLastTransition = i == lastCallbackRequestingState && finalState == postExecutionState;
                cycleToPath(r, postExecutionState, shouldExcludeLastTransition, transaction);
            }
        }
    }
	...
}

Finally, ActivityThread#handleResumeActivity is called; after Activity#onResume has executed, it goes on to call ViewManager#addView, handing the DecorView to WindowManagerImpl to be tracked.

	/**
	 * Request to move an activity to resumed state.
	 * @hide
	 */
	public class ResumeActivityItem extends ActivityLifecycleItem {
	    private static final String TAG = "ResumeActivityItem";
	    ...
	    @Override
	    public void execute(ClientTransactionHandler client, ActivityClientRecord r, PendingTransactionActions pendingActions) {
	        Trace.traceBegin(TRACE_TAG_ACTIVITY_MANAGER, "activityResume");
	        // client is android.app.ActivityThread
	        client.handleResumeActivity(r, true /* finalStateRequest */, mIsForward, "RESUME_ACTIVITY");
	        Trace.traceEnd(TRACE_TAG_ACTIVITY_MANAGER);
	    }
	    ...
	}

	// android.app.ActivityThread#handleResumeActivity
	@Override
    public void handleResumeActivity(ActivityClientRecord r, boolean finalStateRequest, boolean isForward, String reason) {
        // If we are getting ready to gc after going to the background, well
        // we are back active so skip it.
        unscheduleGcIdler();
        mSomeActivitiesChanged = true;
        // Drive the Activity through its resume callbacks (onResume)
        if (!performResumeActivity(r, finalStateRequest, reason)) {
            return;
        }
        ...
        if (r.window == null && !a.mFinished && willBeVisible) {
            r.window = r.activity.getWindow();
            View decor = r.window.getDecorView();
            // Keep the DecorView invisible to the user until drawing has completed
            decor.setVisibility(View.INVISIBLE);
            ViewManager wm = a.getWindowManager();
            WindowManager.LayoutParams l = r.window.getAttributes();
            ...
            if (a.mVisibleFromClient) {
                if (!a.mWindowAdded) {
                    a.mWindowAdded = true;
                    // Associate the DecorView with the WindowManagerImpl
                    wm.addView(decor, l);
                } else {
                	...
                }
            }

            // If the window has already been added, but during resume
            // we started another activity, then don't yet make the
            // window visible.
        } else if (!willBeVisible) {
            if (localLOGV) Slog.v(TAG, "Launch " + r + " mStartedActivity set");
            r.hideForNow = true;
        }

        ...
        Looper.myQueue().addIdleHandler(new Idler());
    }

ViewManager#addView (implemented by WindowManagerImpl, which delegates to WindowManagerGlobal) creates the ViewRootImpl instance and records both the ViewRootImpl and the DecorView; it then calls ViewRootImpl#setView to associate the DecorView with the ViewRootImpl, and from then on the ViewRootImpl acts as the bridge for interacting with the DecorView.

/**
 * Provides low-level communication with the system window manager for
 * operations that are not associated with any particular context.
 *
 * This class is only used internally to implement global functions where
 * the caller already knows the display and relevant compatibility information
 * for the operation.  For most purposes, you should use {@link WindowManager} instead
 * since it is bound to a context.
 *
 * @see WindowManagerImpl
 * @hide
 */
public final class WindowManagerGlobal {
	@UnsupportedAppUsage
    private final ArrayList<View> mViews = new ArrayList<View>();
    @UnsupportedAppUsage
    private final ArrayList<ViewRootImpl> mRoots = new ArrayList<ViewRootImpl>();
    @UnsupportedAppUsage
    private final ArrayList<WindowManager.LayoutParams> mParams = new ArrayList<WindowManager.LayoutParams>();

    public void addView(View view, ViewGroup.LayoutParams params,
            Display display, Window parentWindow, int userId) {
        ...
        final WindowManager.LayoutParams wparams = (WindowManager.LayoutParams) params;
        ViewRootImpl root;
        View panelParentView = null;
        synchronized (mLock) {
            ...
            IWindowSession windowlessSession = null;
            ...
			// Create the ViewRootImpl instance
            if (windowlessSession == null) {
                root = new ViewRootImpl(view.getContext(), display);
            } else {
                root = new ViewRootImpl(view.getContext(), display, windowlessSession);
            }
            view.setLayoutParams(wparams);
			// Keep track of the DecorView and ViewRootImpl instances
            mViews.add(view);
            mRoots.add(root);
            mParams.add(wparams);
            try {
            	// Call setView to associate the DecorView with the ViewRootImpl; afterwards the ViewRootImpl acts as the bridge for interacting with the DecorView indirectly
                root.setView(view, wparams, panelParentView, userId);
            } catch (RuntimeException e) {
                // BadTokenException or InvalidDisplayException, clean up.
                if (index >= 0) {
                    removeViewLocked(index, true);
                }
                throw e;
            }
        }
    }
    ...
}

Now look at the ViewRootImpl constructor: it creates the Choreographer instance and holds it as a member field of the ViewRootImpl. With this Choreographer, the ViewRootImpl can communicate in both directions between the app process and the SurfaceFlinger process (requesting VSYNC and receiving it back).

public final class ViewRootImpl implements ViewParent, View.AttachInfo.Callbacks, ThreadedRenderer.DrawCallbacks, AttachedSurfaceControl {
    private static final String TAG = "ViewRootImpl";
	public ViewRootImpl(@UiContext Context context, Display display, IWindowSession session, boolean useSfChoreographer) {
        ...
        // Create the Choreographer instance
        mChoreographer = useSfChoreographer ? Choreographer.getSfInstance() : Choreographer.getInstance();
         ...
    }
    ...
}

/**
 * Coordinates the timing of animations, input and drawing.
 * The choreographer receives timing pulses, such as VSYNC signals from the display subsystem,
 * and uses them to schedule the rendering work of the next frame.
 * Applications typically interact with the choreographer indirectly through higher-level
 * abstractions in the animation framework or the view hierarchy, for example:
 * use android.animation.ValueAnimator#start to start a periodic animation;
 * use View#postOnAnimation to post a task to run at the start of the next frame;
 * use View#postOnAnimationDelayed to post a task to run at the start of the next frame after a delay;
 * use View#postInvalidateOnAnimation to invalidate the view on the next frame.
 * 
 * There are, however, cases where the choreographer is used directly, for example:
 * rendering with GL on a different thread, or running a task at the start of the next frame
 * without going through animations or views - in those cases call Choreographer#postFrameCallback.
 */
public final class Choreographer {
	private static final String TAG = "Choreographer";
    // Thread local storage for the choreographer.
    private static final ThreadLocal<Choreographer> sThreadInstance = new ThreadLocal<Choreographer>() {
        @Override
        protected Choreographer initialValue() {
            Looper looper = Looper.myLooper();
            if (looper == null) {
                throw new IllegalStateException("The current thread must have a looper!");
            }
            Choreographer choreographer = new Choreographer(looper, VSYNC_SOURCE_APP);
            if (looper == Looper.getMainLooper()) {
                mMainInstance = choreographer;
            }
            return choreographer;
        }
    };
    
	private Choreographer(Looper looper, int vsyncSource) {
        mLooper = looper;
        mHandler = new FrameHandler(looper);
        mDisplayEventReceiver = USE_VSYNC ? new FrameDisplayEventReceiver(looper, vsyncSource) : null;
        mLastFrameTimeNanos = Long.MIN_VALUE;
        mFrameIntervalNanos = (long)(1000000000 / getRefreshRate());
        mCallbackQueues = new CallbackQueue[CALLBACK_LAST + 1];
        for (int i = 0; i <= CALLBACK_LAST; i++) {
            mCallbackQueues[i] = new CallbackQueue();
        }
        // b/68769804: For low FPS experiments.
        setFPSDivisor(SystemProperties.getInt(ThreadedRenderer.DEBUG_FPS_DIVISOR, 1));
    }

    public static Choreographer getInstance() {
        return sThreadInstance.get();
    }
}

From the analysis above we know that the Choreographer is created after Activity#onResume has executed. Android is designed this way because an Activity is the container the system provides for hosting UI; only once the container exists is there any need to create a Choreographer that schedules VSYNC signals and drives frame-by-frame rendering and refreshing.

So is a new Choreographer instance created every time an Activity is started?
No. As the construction code shows, the Choreographer is obtained through a ThreadLocal, which makes it a per-thread singleton, so the main thread only ever creates a single Choreographer instance.

Then can any thread create a Choreographer instance?
No. Only a thread that has a Looper can create one, because Choreographer relies on the Looper to switch threads.

Why that thread switching is needed is analyzed further below.
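
As a quick illustration of the Looper requirement, here is a minimal sketch (not framework code; the class and method names are made up for the example): calling Choreographer.getInstance() on a plain thread throws, while a HandlerThread, which prepares its own Looper, gets its own per-thread instance.

import android.os.Handler;
import android.os.HandlerThread;
import android.view.Choreographer;

public final class ChoreographerThreadDemo {
    public static void demo() {
        // A plain thread has no Looper, so getInstance() fails.
        new Thread(() -> {
            try {
                Choreographer.getInstance();
            } catch (IllegalStateException e) {
                // "The current thread must have a looper!"
            }
        }).start();

        // A HandlerThread prepares its own Looper, so it gets its own Choreographer,
        // distinct from the main thread's instance.
        HandlerThread worker = new HandlerThread("worker-with-looper");
        worker.start();
        new Handler(worker.getLooper()).post(() -> {
            Choreographer workerChoreographer = Choreographer.getInstance();
        });
    }
}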

2. How VSYNC Signals Are Scheduled and Dispatched

Let's now use the source code to see how Choreographer schedules (requests) a VSYNC signal, how it receives the signal after requesting it, and what it does once the signal arrives.

First, look at which parts of the Choreographer constructor relate to scheduling and dispatching VSYNC signals:

  1. A FrameHandler is created on the Looper (the main thread's Looper for the main instance) and is used for thread switching: requesting VSYNC scheduling from that thread and handling the received VSYNC signal on it.
  2. A FrameDisplayEventReceiver is created to request and receive VSYNC signals.
  3. An array of CallbackQueue is created to hold the different types of callbacks posted by upper-layer code.

    private Choreographer(Looper looper, int vsyncSource) {
        mLooper = looper;
        // Responsible for thread switching
        mHandler = new FrameHandler(looper);
        // Responsible for requesting and receiving VSYNC signals
        mDisplayEventReceiver = USE_VSYNC ? new FrameDisplayEventReceiver(looper, vsyncSource) : null;
        mLastFrameTimeNanos = Long.MIN_VALUE;
        mFrameIntervalNanos = (long)(1000000000 / getRefreshRate());
		// Create the array that stores the callbacks posted by upper layers, one queue per callback type
        mCallbackQueues = new CallbackQueue[CALLBACK_LAST + 1];
        for (int i = 0; i <= CALLBACK_LAST; i++) {
            mCallbackQueues[i] = new CallbackQueue();
        }
        // b/68769804: For low FPS experiments.
        setFPSDivisor(SystemProperties.getInt(ThreadedRenderer.DEBUG_FPS_DIVISOR, 1));
    }
private final class CallbackQueue {
	private CallbackRecord mHead;
	...
}

// A linked list of callback records
private static final class CallbackRecord {
	public CallbackRecord next;
	public long dueTime;
	/** Runnable or FrameCallback or VsyncCallback object. */
	public Object action;
	/** Denotes the action type. */
	public Object token;
	...
}

Next, let's look at how the FrameDisplayEventReceiver is created. FrameDisplayEventReceiver extends DisplayEventReceiver, and its constructor simply calls the DisplayEventReceiver constructor, so let's see what that constructor does.

private final class FrameDisplayEventReceiver extends DisplayEventReceiver implements Runnable {
	private boolean mHavePendingVsync;
	private long mTimestampNanos;
	private int mFrame;
	private VsyncEventData mLastVsyncEventData = new VsyncEventData();

	// Simply delegates to the DisplayEventReceiver constructor
	public FrameDisplayEventReceiver(Looper looper, int vsyncSource) {
		super(looper, vsyncSource, 0);
	}

    @Override
    public void onVsync(long timestampNanos, long physicalDisplayId, int frame, VsyncEventData vsyncEventData) {
        try {
            long now = System.nanoTime();
            if (timestampNanos > now) {
                timestampNanos = now;
            }

            if (mHavePendingVsync) {
                Log.w(TAG, "Already have a pending vsync event.  There should only be " + "one at a time.");
            } else {
                mHavePendingVsync = true;
            }

            mTimestampNanos = timestampNanos;
            mFrame = frame;
            mLastVsyncEventData = vsyncEventData;
			// Post an asynchronous message to the main thread
            Message msg = Message.obtain(mHandler, this);
            msg.setAsynchronous(true);
            mHandler.sendMessageAtTime(msg, timestampNanos / TimeUtils.NANOS_PER_MS);
        } finally {
            Trace.traceEnd(Trace.TRACE_TAG_VIEW);
        }
    }

	// Runs on the main thread
    @Override
    public void run() {
        mHavePendingVsync = false;
        doFrame(mTimestampNanos, mFrame, mLastVsyncEventData);
    }
}

In the DisplayEventReceiver constructor, the main thread's MessageQueue is taken from the Looper and passed to nativeInit, together with a weak reference to the receiver itself.

/**
 * Provides a low-level mechanism for applications to receive display events, such as vertical sync.
 * @hide
 */
public abstract class DisplayEventReceiver {
    public DisplayEventReceiver(Looper looper, int vsyncSource, int eventRegistration) {
        if (looper == null) {
            throw new IllegalArgumentException("looper must not be null");
        }

        mMessageQueue = looper.getQueue();
        mReceiverPtr = nativeInit(new WeakReference<DisplayEventReceiver>(this), mMessageQueue, vsyncSource, eventRegistration);
    }

	private static native long nativeInit(WeakReference<DisplayEventReceiver> receiver, MessageQueue messageQueue, int vsyncSource, int eventRegistration);
}

The implementation of nativeInit lives in android_view_DisplayEventReceiver.cpp; the key steps are creating a NativeDisplayEventReceiver and calling its initialize method.

	// frameworks/base/core/jni/android_view_DisplayEventReceiver.cpp
	static jlong nativeInit(JNIEnv* env, jclass clazz, jobject receiverWeak, jobject vsyncEventDataWeak, jobject messageQueueObj, jint vsyncSource, jint eventRegistration, jlong layerHandle) {
		// Get the native-layer MessageQueue object
    	sp<MessageQueue> messageQueue = android_os_MessageQueue_getMessageQueue(env, messageQueueObj);
	    if (messageQueue == NULL) {
	        jniThrowRuntimeException(env, "MessageQueue is not initialized.");
	        return 0;
	    }
		// Create the native-layer DisplayEventReceiver, i.e. NativeDisplayEventReceiver
    	sp<NativeDisplayEventReceiver> receiver = new NativeDisplayEventReceiver(env, receiverWeak, vsyncEventDataWeak, messageQueue, vsyncSource, eventRegistration, layerHandle);
    	// Call initialize to set it up
    	status_t status = receiver->initialize();
	    if (status) {
	        String8 message;
	        message.appendFormat("Failed to initialize display event receiver.  status=%d", status);
	        jniThrowRuntimeException(env, message.c_str());
	        return 0;
	    }

	    receiver->incStrong(gDisplayEventReceiverClassInfo.clazz); // retain a reference for the object
	    return reinterpret_cast<jlong>(receiver.get());
	}
	
	// Its base class is DisplayEventDispatcher
	class NativeDisplayEventReceiver : public DisplayEventDispatcher {
	public:
	    NativeDisplayEventReceiver(JNIEnv* env, jobject receiverWeak, jobject vsyncEventDataWeak, const sp<MessageQueue>& messageQueue, jint vsyncSource, jint eventRegistration, jlong layerHandle);
	
	    void dispose();
	
	protected:
	    virtual ~NativeDisplayEventReceiver();
	
	private:
	    jobject mReceiverWeakGlobal;
	    jobject mVsyncEventDataWeakGlobal;
	    sp<MessageQueue> mMessageQueue;
	
	    void dispatchVsync(nsecs_t timestamp, PhysicalDisplayId displayId, uint32_t count, VsyncEventData vsyncEventData) override;
	    void dispatchHotplug(nsecs_t timestamp, PhysicalDisplayId displayId, bool connected) override;
	    void dispatchHotplugConnectionError(nsecs_t timestamp, int errorCode) override;
	    void dispatchModeChanged(nsecs_t timestamp, PhysicalDisplayId displayId, int32_t modeId,
	                             nsecs_t renderPeriod) override;
	    void dispatchFrameRateOverrides(nsecs_t timestamp, PhysicalDisplayId displayId,
	                                    std::vector<FrameRateOverride> overrides) override;
	    void dispatchNullEvent(nsecs_t timestamp, PhysicalDisplayId displayId) override {}
	    void dispatchHdcpLevelsChanged(PhysicalDisplayId displayId, int connectedLevel,
	                                   int maxLevel) override;
	};


	NativeDisplayEventReceiver::NativeDisplayEventReceiver(JNIEnv* env, jobject receiverWeak,
	                                                       jobject vsyncEventDataWeak,
	                                                       const sp<MessageQueue>& messageQueue,
	                                                       jint vsyncSource,
	                                                       jint eventRegistration,
	                                                       jlong layerHandle) 
	                                                       // Base-class constructor
	                                                       : DisplayEventDispatcher(
	                                                       messageQueue->getLooper(),
	                                                       static_cast<gui::ISurfaceComposer::VsyncSource>(vsyncSource),
	                                                       static_cast<gui::ISurfaceComposer::EventRegistration>(eventRegistration), 
	                                                       layerHandle != 0 ? sp<IBinder>::fromExisting(reinterpret_cast<IBinder*>(layerHandle)) : nullptr
	                                                       ),
	                                                       // The Java-layer receiver
	                                                       mReceiverWeakGlobal(env->NewGlobalRef(receiverWeak)),
	                                                       mVsyncEventDataWeakGlobal(env->NewGlobalRef(vsyncEventDataWeak)),
	                                                       mMessageQueue(messageQueue) {
	    ALOGV("receiver %p ~ Initializing display event receiver.", this);
	}

	// frameworks/native/libs/gui/DisplayEventDispatcher.cpp
	status_t DisplayEventDispatcher::initialize() {
	    ...
	    // Register the socket's read fd with the Looper; the wake-up event type is EVENT_INPUT.
	    // Passing `this` means DisplayEventDispatcher::handleEvent is used as the callback.
	    int rc = mLooper->addFd(mReceiver.getFd(), 0, Looper::EVENT_INPUT, this, NULL);
	    ...
	    return OK;
	}

First, the construction of the NativeDisplayEventReceiver object. Its base class is DisplayEventDispatcher; from the source we can tell that DisplayEventDispatcher is responsible for dispatching VSYNC, hotplug and other display events, and that it internally creates a (native) DisplayEventReceiver object used to receive the signals sent by the SurfaceFlinger process.

	// frameworks/native/libs/gui/DisplayEventDispatcher.cpp
	DisplayEventDispatcher::DisplayEventDispatcher(const sp<Looper>& looper,
	                                               gui::ISurfaceComposer::VsyncSource vsyncSource,
	                                               EventRegistrationFlags eventRegistration,
	                                               const sp<IBinder>& layerHandle)
	      : mLooper(looper),
	        mReceiver(vsyncSource, eventRegistration, layerHandle), // DisplayEventReceiver
	        mWaitingForVsync(false),
	        mLastVsyncCount(0),
	        mLastScheduleVsyncTime(0) {
	    ALOGV("dispatcher %p ~ Initializing display event dispatcher.", this);
	}

	// frameworks/native/libs/gui/DisplayEventReceiver.cpp
	DisplayEventReceiver::DisplayEventReceiver(gui::ISurfaceComposer::VsyncSource vsyncSource, EventRegistrationFlags eventRegistration, const sp<IBinder>& layerHandle) {
		// Get the SurfaceFlinger proxy object
	    sp<gui::ISurfaceComposer> sf(ComposerServiceAIDL::getComposerService());
	    if (sf != nullptr) {
	        mEventConnection = nullptr;
	        // Create a connection to the vsyncSource of the EventThread (app) in the SurfaceFlinger process
	        binder::Status status = sf->createDisplayEventConnection(vsyncSource, static_cast<gui::ISurfaceComposer::EventRegistration>(eventRegistration.get()), layerHandle, &mEventConnection);
	        if (status.isOk() && mEventConnection != nullptr) {
	        	// On success, construct a BitTube and copy over the read end of mEventConnection above, so it can be used to listen for VSYNC signals
	            mDataChannel = std::make_unique<gui::BitTube>();
	            // Take over the read fd of the socket created in the SurfaceFlinger process
	            status = mEventConnection->stealReceiveChannel(mDataChannel.get());
	            if (!status.isOk()) {
	                ALOGE("stealReceiveChannel failed: %s", status.toString8().c_str());
	                mInitError = std::make_optional<status_t>(status.transactionError());
	                mDataChannel.reset();
	                mEventConnection.clear();
	            }
	        } else {
	            ALOGE("DisplayEventConnection creation failed: status=%s", status.toString8().c_str());
	        }
	    }
	}

	// frameworks/native/services/surfaceflinger/Scheduler/EventThread.cpp
	sp<EventThreadConnection> EventThread::createEventConnection(EventRegistrationFlags eventRegistration) const {
	    auto connection = sp<EventThreadConnection>::make(const_cast<EventThread*>(this), IPCThreadState::self()->getCallingUid(), eventRegistration);
	    if (FlagManager::getInstance().misc1()) {
	        const int policy = SCHED_FIFO;
	        connection->setMinSchedulerPolicy(policy, sched_get_priority_min(policy));
	    }
	    return connection;
	}

	// Creates the BitTube, which internally creates a socket pair
	EventThreadConnection::EventThreadConnection(EventThread* eventThread, uid_t callingUid, EventRegistrationFlags eventRegistration)
      : mOwnerUid(callingUid),
        mEventRegistration(eventRegistration),
        mEventThread(eventThread),
        mChannel(gui::BitTube::DefaultSize) {} 

As the source shows, the BitTube is what receives the signals sent over by the SurfaceFlinger process, and the BitTube source makes clear that it uses a socket pair internally to deliver those signals across processes.

	// frameworks/native/libs/gui/BitTube.cpp
	static const size_t DEFAULT_SOCKET_BUFFER_SIZE = 4 * 1024;

	BitTube::BitTube(size_t bufsize) {
	    init(bufsize, bufsize);
	}

	void BitTube::init(size_t rcvbuf, size_t sndbuf) {
	    int sockets[2];
	    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sockets) == 0) {
	        size_t size = DEFAULT_SOCKET_BUFFER_SIZE;
	        setsockopt(sockets[0], SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
	        setsockopt(sockets[1], SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
	        // since we don't use the "return channel", we keep it small...
	        setsockopt(sockets[0], SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
	        setsockopt(sockets[1], SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
	        fcntl(sockets[0], F_SETFL, O_NONBLOCK);
	        fcntl(sockets[1], F_SETFL, O_NONBLOCK);
	        mReceiveFd.reset(sockets[0]);
	        mSendFd.reset(sockets[1]);
	    } else {
	        mReceiveFd.reset();
	        ALOGE("BitTube: pipe creation failed (%s)", strerror(errno));
	    }
	}

	base::unique_fd BitTube::moveReceiveFd() {
	    return std::move(mReceiveFd);
	}

To summarize, the Choreographer creation flow is shown below:
[Figure: Choreographer creation flow]

2.1 Requesting the VSYNC Signal

For a display with a 60 Hz refresh rate, a VSYNC signal is generated roughly every 16.67 ms, but an app process does not receive every hardware VSYNC by default; instead it registers for and receives VSYNC signals only when upper-layer code actually needs them. The benefit of this design is that the app process's rendering pipeline is triggered on demand, avoiding the power cost of unnecessary draw passes.
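
Before going through invalidate/requestLayout, here is a minimal sketch of that on-demand behaviour using the public API (not framework code; the class and method names are made up for the example): a frame callback is delivered for a single frame, and Choreographer stops requesting VSYNC once no callbacks remain pending.

import android.util.Log;
import android.view.Choreographer;

public final class OneShotFrameLogger {
    // Must be called from a thread that has a Looper, e.g. the main thread.
    public static void logNextFrame() {
        Choreographer.getInstance().postFrameCallback(frameTimeNanos -> {
            // Runs once, at the start of the frame that follows the next VSYNC.
            Log.d("OneShotFrameLogger", "frame started at " + frameTimeNanos + " ns");
            // Not re-posting the callback here means no further VSYNC will be
            // requested on this thread until someone posts work again.
        });
    }
}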

Upper-layer code usually kicks off a draw through invalidate or requestLayout; that request is eventually turned into a CallbackRecord and placed into the CallbackQueue of the corresponding type.

public final class Choreographer {
	...
    private void postCallbackDelayedInternal(int callbackType, Object action, Object token, long delayMillis) {
        synchronized (mLock) {
            final long now = SystemClock.uptimeMillis();
            // Compute the timestamp at which the callback is due to run
            final long dueTime = now + delayMillis;
            mCallbackQueues[callbackType].addCallbackLocked(dueTime, action, token);
            if (dueTime <= now) { // If no delay is requested, ask for VSYNC scheduling right away
                scheduleFrameLocked(now);
            } else { // Otherwise use a delayed message to request VSYNC scheduling later
                Message msg = mHandler.obtainMessage(MSG_DO_SCHEDULE_CALLBACK, action);
                msg.arg1 = callbackType;
                msg.setAsynchronous(true);
                mHandler.sendMessageAtTime(msg, dueTime);
            }
        }
    }

	private void scheduleFrameLocked(long now) {
        if (!mFrameScheduled) {
            mFrameScheduled = true;
            if (USE_VSYNC) {
                // If we are already on the Choreographer's Looper thread, request the VSYNC directly; otherwise post an asynchronous message to that thread to do it
                if (isRunningOnLooperThreadLocked()) {
                    scheduleVsyncLocked();
                } else {
                    Message msg = mHandler.obtainMessage(MSG_DO_SCHEDULE_VSYNC);
                    msg.setAsynchronous(true);
                    mHandler.sendMessageAtFrontOfQueue(msg);
                }
            } else {
                final long nextFrameTime = Math.max(mLastFrameTimeNanos / TimeUtils.NANOS_PER_MS + sFrameDelay, now);

                Message msg = mHandler.obtainMessage(MSG_DO_FRAME);
                msg.setAsynchronous(true);
                mHandler.sendMessageAtTime(msg, nextFrameTime);
            }
        }
    }
    
	@UnsupportedAppUsage(maxTargetSdk = Build.VERSION_CODES.R, trackingBug = 170729553)
    private void scheduleVsyncLocked() {
        try {
            Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Choreographer#scheduleVsyncLocked");
            // Request the VSYNC signal via FrameDisplayEventReceiver#scheduleVsync
            mDisplayEventReceiver.scheduleVsync();
        } finally {
            Trace.traceEnd(Trace.TRACE_TAG_VIEW);
        }
    }
    ...
}
	// android.view.DisplayEventReceiver#scheduleVsync
    @UnsupportedAppUsage
    public void scheduleVsync() {
        if (mReceiverPtr == 0) {
            Log.w(TAG, "Attempted to schedule a vertical sync pulse but the display event "
                    + "receiver has already been disposed.");
        } else {
            nativeScheduleVsync(mReceiverPtr);
        }
    }

	// frameworks/base/core/jni/android_view_DisplayEventReceiver.cpp
	static void nativeScheduleVsync(JNIEnv* env, jclass clazz, jlong receiverPtr) {
	    sp<NativeDisplayEventReceiver> receiver =
	            reinterpret_cast<NativeDisplayEventReceiver*>(receiverPtr);
	    status_t status = receiver->scheduleVsync();
	    if (status) {
	        String8 message;
	        message.appendFormat("Failed to schedule next vertical sync pulse.  status=%d", status);
	        jniThrowRuntimeException(env, message.c_str());
	    }
	}

	// frameworks/native/libs/gui/DisplayEventDispatcher.cpp
	status_t DisplayEventDispatcher::scheduleVsync() {
	    if (!mWaitingForVsync) {
	        ALOGV("dispatcher %p ~ Scheduling vsync.", this);
	
	        // Drain all pending events.
	        nsecs_t vsyncTimestamp;
	        PhysicalDisplayId vsyncDisplayId;
	        uint32_t vsyncCount;
	        VsyncEventData vsyncEventData;
	        if (processPendingEvents(&vsyncTimestamp, &vsyncDisplayId, &vsyncCount, &vsyncEventData)) {
	            ALOGE("dispatcher %p ~ last event processed while scheduling was for %" PRId64 "", this,
	                  ns2ms(static_cast<nsecs_t>(vsyncTimestamp)));
	        }
			// Request the next VSYNC signal
	        status_t status = mReceiver.requestNextVsync();
	        if (status) {
	            ALOGW("Failed to request next vsync, status=%d", status);
	            return status;
	        }
	
	        mWaitingForVsync = true;
	        mLastScheduleVsyncTime = systemTime(SYSTEM_TIME_MONOTONIC);
	    }
	    return OK;
	}

	// frameworks/native/libs/gui/DisplayEventReceiver.cpp
	status_t DisplayEventReceiver::requestNextVsync() {
	    if (mEventConnection != nullptr) {
	        mEventConnection->requestNextVsync();
	        return NO_ERROR;
	    }
	    return mInitError.has_value() ? mInitError.value() : NO_INIT;
	}

In the end a native method is called to talk to the SurfaceFlinger process and ask it to dispatch a VSYNC signal to the app process. Recall that while creating the Choreographer we created a DisplayEventReceiver, went through JNI into the native layer, created a NativeDisplayEventReceiver there, and returned its pointer to the Java layer, where it was stored in the Java DisplayEventReceiver object.

When the Java layer later calls down through JNI, it passes that stored NativeDisplayEventReceiver pointer back to the native layer, which can then locate the previously created NativeDisplayEventReceiver object, call its scheduleVsync method, and finally issue the cross-process request through mEventConnection.

2.2 Dispatching the VSYNC Signal

After the SurfaceFlinger process notifies the app process over the socket that a VSYNC signal has arrived, DisplayEventDispatcher#handleEvent in the app process is invoked, which eventually calls up through JNI into the Java-layer DisplayEventReceiver#dispatchVsync method.

	// frameworks/native/libs/gui/DisplayEventDispatcher.cpp
	int DisplayEventDispatcher::handleEvent(int, int events, void*) {
	    if (events & (Looper::EVENT_ERROR | Looper::EVENT_HANGUP)) {
	        return 0; // remove the callback
	    }
	
	    if (!(events & Looper::EVENT_INPUT)) {
	        return 1; // keep the callback
	    }
	
	    // Drain all pending events, keep the last vsync.
	    nsecs_t vsyncTimestamp;
	    PhysicalDisplayId vsyncDisplayId;
	    uint32_t vsyncCount;
	    VsyncEventData vsyncEventData;
	    if (processPendingEvents(&vsyncTimestamp, &vsyncDisplayId, &vsyncCount, &vsyncEventData)) {
	        mWaitingForVsync = false;
	        mLastVsyncCount = vsyncCount;
	        dispatchVsync(vsyncTimestamp, vsyncDisplayId, vsyncCount, vsyncEventData);
	    }
	
	    if (mWaitingForVsync) {
	        const nsecs_t currentTime = systemTime(SYSTEM_TIME_MONOTONIC);
	        const nsecs_t vsyncScheduleDelay = currentTime - mLastScheduleVsyncTime;
	        if (vsyncScheduleDelay > WAITING_FOR_VSYNC_TIMEOUT) {
	            mWaitingForVsync = false;
	            dispatchVsync(currentTime, vsyncDisplayId /* displayId is not used */,
	                          ++mLastVsyncCount, vsyncEventData /* empty data */);
	        }
	    }
	
	    return 1; // keep the callback
	}

	// frameworks/base/core/jni/android_view_DisplayEventReceiver.cpp
	void NativeDisplayEventReceiver::dispatchVsync(nsecs_t timestamp, PhysicalDisplayId displayId, uint32_t count, VsyncEventData vsyncEventData) {
	    JNIEnv* env = AndroidRuntime::getJNIEnv();
	    ScopedLocalRef<jobject> receiverObj(env, GetReferent(env, mReceiverWeakGlobal));
	    ScopedLocalRef<jobject> vsyncEventDataObj(env, GetReferent(env, mVsyncEventDataWeakGlobal));
	    if (receiverObj.get() && vsyncEventDataObj.get()) {
	        ...
	        ScopedLocalRef<jobjectArray> frameTimelinesObj(env, reinterpret_cast<jobjectArray>(env->GetObjectField(vsyncEventDataObj.get(), gDisplayEventReceiverClassInfo.vsyncEventDataClassInfo.frameTimelines)));
	        for (size_t i = 0; i < vsyncEventData.frameTimelinesLength; i++) {
	            VsyncEventData::FrameTimeline& frameTimeline = vsyncEventData.frameTimelines[i];
	            ScopedLocalRef<jobject> frameTimelineObj(env, env->GetObjectArrayElement(frameTimelinesObj.get(), i));
	            ...
	        }
			// Finally call up into the Java-layer dispatchVsync
	        env->CallVoidMethod(receiverObj.get(), gDisplayEventReceiverClassInfo.dispatchVsync, timestamp, displayId.value, count);
	        ALOGV("receiver %p ~ Returned from vsync handler.", this);
	    }
	
	    mMessageQueue->raiseAndClearException(env, "dispatchVsync");
	}

DisplayEventReceiver#dispatchVsync in the Java layer then calls onVsync, which FrameDisplayEventReceiver overrides to receive and handle the VSYNC signal.

	// android.view.DisplayEventReceiver
	// Called from native code.
    @SuppressWarnings("unused")
    private void dispatchVsync(long timestampNanos, long physicalDisplayId, int frame, VsyncEventData vsyncEventData) {
        onVsync(timestampNanos, physicalDisplayId, frame, vsyncEventData);
    }

	// android.view.Choreographer.FrameDisplayEventReceiver
	@Override
    public void onVsync(long timestampNanos, long physicalDisplayId, int frame, VsyncEventData vsyncEventData) {
    	try {
            long now = System.nanoTime();
            if (timestampNanos > now) {
                timestampNanos = now;
            }

            if (mHavePendingVsync) {
                Log.w(TAG, "Already have a pending vsync event.  There should only be "
                        + "one at a time.");
            } else {
                mHavePendingVsync = true;
            }

            mTimestampNanos = timestampNanos;
            mFrame = frame;
            mLastVsyncEventData = vsyncEventData;
            Message msg = Message.obtain(mHandler, this);
            msg.setAsynchronous(true); // Asynchronous message: it is not blocked by the sync barrier posted earlier (by ViewRootImpl#scheduleTraversals), so it gets handled sooner
            mHandler.sendMessageAtTime(msg, timestampNanos / TimeUtils.NANOS_PER_MS);
        } finally {
            Trace.traceEnd(Trace.TRACE_TAG_VIEW);
        }
	}

        @Override
        public void run() {
            mHavePendingVsync = false;
            doFrame(mTimestampNanos, mFrame, mLastVsyncEventData);
        }

Eventually Choreographer#doFrame is called, and this is where preparation of the next frame's data begins.

2.3 Handling the VSYNC Signal

Once the app process receives the VSYNC signal it calls doFrame to start preparing the data for the new frame. doFrame also computes how late the frame is, i.e. how long after the VSYNC arrived the main thread actually got around to handling it. (To be precise, this delay runs from the moment the app process, having observed the VSYNC, posts the asynchronous message to the main thread, to the moment the main thread actually processes that message. The delay is usually caused by other messages ahead of the sync barrier in the main thread's queue that have not finished yet, which keep the asynchronous message waiting.) If the wait is too long, the frame's data cannot be prepared within one frame interval, and the result no longer looks smooth to the user.
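
The same delay can also be observed from application code: a frame callback receives the intended frame start time, so comparing it with the current time when the callback actually runs approximates the jitterNanos computed in doFrame below. A minimal sketch (not framework code; the class name and the hard-coded 60 Hz frame interval are assumptions for the example):

import android.util.Log;
import android.view.Choreographer;

public final class FrameDelayMonitor implements Choreographer.FrameCallback {
    private static final String TAG = "FrameDelayMonitor";
    private static final long FRAME_INTERVAL_NANOS = 16_666_667L; // assumes a 60 Hz display

    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        long delayNanos = System.nanoTime() - frameTimeNanos; // how late this callback ran
        if (delayNanos >= FRAME_INTERVAL_NANOS) {
            long skipped = delayNanos / FRAME_INTERVAL_NANOS;
            Log.w(TAG, "Main thread is " + delayNanos / 1_000_000 + " ms behind VSYNC, ~"
                    + skipped + " frame(s) skipped");
        }
        Choreographer.getInstance().postFrameCallback(this); // keep observing subsequent frames
    }
}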

	// android.view.Choreographer
    void doFrame(long frameTimeNanos, int frame, DisplayEventReceiver.VsyncEventData vsyncEventData) {
        final long startNanos;
        final long frameIntervalNanos = vsyncEventData.frameInterval;
        try {
            FrameData frameData = new FrameData(frameTimeNanos, vsyncEventData);
            synchronized (mLock) {
                if (!mFrameScheduled) {
                    traceMessage("Frame not scheduled");
                    return; // no work to do
                }

                long intendedFrameTimeNanos = frameTimeNanos;
                startNanos = System.nanoTime();
                // frameTimeNanos is the timestamp passed in from SurfaceFlinger; it may have been clamped to the time the app process received the VSYNC signal
                // jitterNanos includes the time spent waiting for the Handler message to be processed, i.e. the time the main thread spent on other messages before this asynchronous message ran; if it is too long, frames get skipped (jank)
                final long jitterNanos = startNanos - frameTimeNanos;
                if (jitterNanos >= frameIntervalNanos) {
                    long lastFrameOffset = 0;
                    if (frameIntervalNanos == 0) {
                        Log.i(TAG, "Vsync data empty due to timeout");
                    } else {
                        lastFrameOffset = jitterNanos % frameIntervalNanos;
                        final long skippedFrames = jitterNanos / frameIntervalNanos;
                        if (skippedFrames >= SKIPPED_FRAME_WARNING_LIMIT) {
                            Log.i(TAG, "Skipped " + skippedFrames + " frames!  "
                                    + "The application may be doing too much work on its main "
                                    + "thread.");
                        }
                        if (DEBUG_JANK) {
                            Log.d(TAG, "Missed vsync by " + (jitterNanos * 0.000001f) + " ms "
                                    + "which is more than the frame interval of "
                                    + (frameIntervalNanos * 0.000001f) + " ms!  "
                                    + "Skipping " + skippedFrames + " frames and setting frame "
                                    + "time to " + (lastFrameOffset * 0.000001f)
                                    + " ms in the past.");
                        }
                    }
                    frameTimeNanos = startNanos - lastFrameOffset;
                    frameData.updateFrameData(frameTimeNanos);
                }

                if (frameTimeNanos < mLastFrameTimeNanos) {
                    if (DEBUG_JANK) {
                        Log.d(TAG, "Frame time appears to be going backwards.  May be due to a "
                                + "previously skipped frame.  Waiting for next vsync.");
                    }
                    traceMessage("Frame time goes backward");
                    scheduleVsyncLocked();
                    return;
                }

                if (mFPSDivisor > 1) {
                    long timeSinceVsync = frameTimeNanos - mLastFrameTimeNanos;
                    if (timeSinceVsync < (frameIntervalNanos * mFPSDivisor) && timeSinceVsync > 0) {
                        traceMessage("Frame skipped due to FPSDivisor");
                        scheduleVsyncLocked();
                        return;
                    }
                }

                mFrameInfo.setVsync(intendedFrameTimeNanos, frameTimeNanos,
                        vsyncEventData.preferredFrameTimeline().vsyncId,
                        vsyncEventData.preferredFrameTimeline().deadline, startNanos,
                        vsyncEventData.frameInterval);
                mFrameScheduled = false;
                mLastFrameTimeNanos = frameTimeNanos;
                mLastFrameIntervalNanos = frameIntervalNanos;
                mLastVsyncEventData = vsyncEventData;
            }

			// Now run the various types of callbacks, in order
            AnimationUtils.lockAnimationClock(frameTimeNanos / TimeUtils.NANOS_PER_MS);

            mFrameInfo.markInputHandlingStart();
            doCallbacks(Choreographer.CALLBACK_INPUT, frameData, frameIntervalNanos);

            mFrameInfo.markAnimationsStart();
            doCallbacks(Choreographer.CALLBACK_ANIMATION, frameData, frameIntervalNanos);
            doCallbacks(Choreographer.CALLBACK_INSETS_ANIMATION, frameData,
                    frameIntervalNanos);

            mFrameInfo.markPerformTraversalsStart();
            doCallbacks(Choreographer.CALLBACK_TRAVERSAL, frameData, frameIntervalNanos);

            doCallbacks(Choreographer.CALLBACK_COMMIT, frameData, frameIntervalNanos);
        } finally {
            AnimationUtils.unlockAnimationClock();
            Trace.traceEnd(Trace.TRACE_TAG_VIEW);
        }
    }

Next, the CallbackRecords previously posted by upper-layer code are executed, grouped by type:

  • CALLBACK_INPUT: input events such as screen touches; executed first;
  • CALLBACK_ANIMATION: animations, e.g. property animations;
  • CALLBACK_INSETS_ANIMATION: window-inset animations;
  • CALLBACK_TRAVERSAL: view measurement, layout and drawing;
  • CALLBACK_COMMIT: commit work that runs after drawing is done;

Once all the CallbackRecords have run in this order, the app process's drawing work for the frame is complete; the resulting data is stored in a GraphicBuffer and queued to the BufferQueue, and the sf-type VSYNC then drives SurfaceFlinger to composite the data and send it to the display.
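
The per-phase timestamps recorded above through mFrameInfo (input, animation, traversal) roughly correspond to the durations reported by the public FrameMetrics API, which offers a convenient way to observe how long each of these phases took per frame. A minimal sketch (not framework code; the class name is made up, and it assumes an Activity on API 24+ whose window is already attached):

import android.app.Activity;
import android.os.Handler;
import android.os.HandlerThread;
import android.util.Log;
import android.view.FrameMetrics;
import android.view.Window;

public final class FramePhaseLogger {
    private static final String TAG = "FramePhaseLogger";

    public static void install(Activity activity) {
        HandlerThread thread = new HandlerThread("frame-metrics");
        thread.start();
        activity.getWindow().addOnFrameMetricsAvailableListener(
                (Window window, FrameMetrics metrics, int dropCount) -> {
                    FrameMetrics copy = new FrameMetrics(metrics); // the listener's instance is reused
                    long inputNs = copy.getMetric(FrameMetrics.INPUT_HANDLING_DURATION);
                    long animNs = copy.getMetric(FrameMetrics.ANIMATION_DURATION);
                    long layoutNs = copy.getMetric(FrameMetrics.LAYOUT_MEASURE_DURATION);
                    long drawNs = copy.getMetric(FrameMetrics.DRAW_DURATION);
                    long totalNs = copy.getMetric(FrameMetrics.TOTAL_DURATION);
                    Log.d(TAG, "input=" + inputNs + "ns anim=" + animNs + "ns layout="
                            + layoutNs + "ns draw=" + drawNs + "ns total=" + totalNs + "ns");
                },
                new Handler(thread.getLooper()));
    }
}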

To summarize, the VSYNC dispatch and handling flow is shown below:
[Figure: VSYNC dispatch and handling flow]

3. Summary

The Choreographer mechanism is the coordination layer between the app process and the SurfaceFlinger process: it takes the UI refresh requests from the app's upper layers, requests VSYNC signals in a unified way, and lines the UI rendering work up on the VSYNC timeline. At the same time it acts as the relay that dispatches VSYNC signals and handles the upper layers' refresh requests, preparing each frame's data in step with the VSYNC period, after which the SurfaceFlinger process composites the result and puts it on screen.
