Android Framework Internals -- Analyzing the service_manager Service Registration Flow

1 The AMS Registration Flow

Whether it is a system service or a custom service, it ends up registered with servicemanager (sm). Let's take AMS as an example and see how it gets registered with sm. From the first section we know that after the init process starts it creates the zygote process, zygote forks the system_server process, and system_server hosts system services such as AMS, PMS, and WMS. So let's first look at how AMS is started.

1.1 Starting AMS

public static void main(String[] args) {
    new SystemServer().run();
}

SystemServer is started when the zygote process invokes its main method; let's look at the run method:

private void run() {
    TimingsTraceAndSlog t = new TimingsTraceAndSlog();
    //......
    try {
        // start some of the system's bootstrap services
        startBootstrapServices(t);
        startCoreServices(t);
        startOtherServices(t);
    } catch (Throwable ex) {
        Slog.e("System", "******************************************");
        Slog.e("System", "************ Failure starting system services", ex);
        throw ex;
    } finally {
        t.traceEnd(); // StartServices
    }
    //......
    // Loop forever.
    Looper.loop();
    throw new RuntimeException("Main thread loop unexpectedly exited");
}

In run, startBootstrapServices is called to start some of the system's bootstrap services:

private void startBootstrapServices(@NonNull TimingsTraceAndSlog t) {

    // TODO: Might need to move after migration to WM.
    ActivityTaskManagerService atm = mSystemServiceManager.startService(
            ActivityTaskManagerService.Lifecycle.class).getService();
    mActivityManagerService = ActivityManagerService.Lifecycle.startService(
            mSystemServiceManager, atm);
    mActivityManagerService.setSystemServiceManager(mSystemServiceManager);
    mActivityManagerService.setInstaller(installer);
    mWindowManagerGlobalLock = atm.getGlobalLock();
    t.traceEnd();

    // Set up the Application instance for the system process and get started.
    t.traceBegin("SetSystemProcess");
    mActivityManagerService.setSystemProcess();
    t.traceEnd();
}

Readers who followed the earlier plugin-framework articles will recognize ActivityTaskManagerService (ATMS). Note that ATMS is not AMS itself: since Android 10, ATMS handles activity and task management, while ActivityManagerService (AMS) is started right after it via ActivityManagerService.Lifecycle.startService. Once AMS is running, setSystemProcess is called:

public void setSystemProcess() {
    try {
        ServiceManager.addService(Context.ACTIVITY_SERVICE, this, /* allowIsolated= */ true,
                DUMP_FLAG_PRIORITY_CRITICAL | DUMP_FLAG_PRIORITY_NORMAL | DUMP_FLAG_PROTO);
        ServiceManager.addService(ProcessStats.SERVICE_NAME, mProcessStats);
        ServiceManager.addService("meminfo", new MemBinder(this), /* allowIsolated= */ false,
                DUMP_FLAG_PRIORITY_HIGH);
        ServiceManager.addService("gfxinfo", new GraphicsBinder(this));
        ServiceManager.addService("dbinfo", new DbBinder(this));
        mAppProfiler.setCpuInfoService();
        ServiceManager.addService("permission", new PermissionController(this));
        ServiceManager.addService("processinfo", new ProcessInfoService(this));
        ServiceManager.addService("cacheinfo", new CacheBinder(this));

        ApplicationInfo info = mContext.getPackageManager().getApplicationInfo(
                "android", STOCK_PM_FLAGS | MATCH_SYSTEM_ONLY);
        mSystemThread.installSystemApplicationInfo(info, getClass().getClassLoader());

        synchronized (this) {
            ProcessRecord app = mProcessList.newProcessRecordLocked(info, info.processName,
                    false,
                    0,
                    new HostingRecord("system"));
            app.setPersistent(true);
            app.setPid(MY_PID);
            app.mState.setMaxAdj(ProcessList.SYSTEM_ADJ);
            app.makeActive(mSystemThread.getApplicationThread(), mProcessStats);
            addPidLocked(app);
            updateLruProcessLocked(app, false, null);
            updateOomAdjLocked(OomAdjuster.OOM_ADJ_REASON_NONE);
        }
    } catch (PackageManager.NameNotFoundException e) {
        throw new RuntimeException(
                "Unable to find android system package", e);
    }

    // Start watching app ops after we and the package manager are up and running.
    mAppOpsService.startWatchingMode(AppOpsManager.OP_RUN_IN_BACKGROUND, null,
            new IAppOpsCallback.Stub() {
                @Override public void opChanged(int op, int uid, String packageName) {
                    if (op == AppOpsManager.OP_RUN_IN_BACKGROUND && packageName != null) {
                        if (getAppOpsManager().checkOpNoThrow(op, uid, packageName)
                                != AppOpsManager.MODE_ALLOWED) {
                            runInBackgroundDisabled(uid);
                        }
                    }
                }
            });

    final int[] cameraOp = {AppOpsManager.OP_CAMERA};
    mAppOpsService.startWatchingActive(cameraOp, new IAppOpsActiveCallback.Stub() {
        @Override
        public void opActiveChanged(int op, int uid, String packageName, String attributionTag,
                boolean active, @AttributionFlags int attributionFlags,
                int attributionChainId) {
            cameraActiveChanged(uid, active);
        }
    });
}

In setSystemProcess, a series of ServiceManager.addService calls registers AMS (under the name "activity", i.e. Context.ACTIVITY_SERVICE) together with several related binders. Let's look at addService:

public static void addService(String name, IBinder service, boolean allowIsolated,
        int dumpPriority) {
    try {
        getIServiceManager().addService(name, service, allowIsolated, dumpPriority);
    } catch (RemoteException e) {
        Log.e(TAG, "error in addService", e);
    }
}

Inside addService, getIServiceManager is called first to obtain the Java-layer handle to service_manager.
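
For completeness, the lookup side of the registry goes through the same class: a client later asks service_manager for the registered binder via ServiceManager.getService and turns it back into a typed interface with asInterface. A minimal sketch, assuming access to the hidden framework APIs (ServiceManager, IActivityManager); the class name AmsLookupSketch is made up, and this is illustrative rather than something an ordinary app can compile:

import android.app.IActivityManager;
import android.content.Context;
import android.os.IBinder;
import android.os.ServiceManager;

public final class AmsLookupSketch {
    // Sketch only: roughly what ActivityManager.getService() does internally.
    static IActivityManager lookupAms() {
        // Ask service_manager for the IBinder registered under "activity"
        IBinder b = ServiceManager.getService(Context.ACTIVITY_SERVICE);
        // Wrap the BinderProxy in a typed proxy -- the same asInterface pattern as AIDL
        return IActivityManager.Stub.asInterface(b);
    }
}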

1.2 Obtaining the service_manager Object in the Java Layer

Like its native counterpart, getIServiceManager hands back a cached, process-wide instance. From here on we need to read the source alongside the native layer, because the two sides mirror each other closely.

private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative
            .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
    return sServiceManager;
}

If sServiceManager is null, asInterface of ServiceManagerNative (the Java-side counterpart of the native service_manager interface) is called.

1.2.1 BinderInternal.getContextObject()

Let's start with the argument, BinderInternal.getContextObject(). Doesn't it look like getContextObject in the native ProcessState, which ultimately returns a BpBinder object? Here is the Java-layer source:

public static final native IBinder getContextObject();

It turns out to be a native method; its JNI counterpart is android_os_BinderInternal_getContextObject:

//http://androidxref.com/9.0.0_r3/xref/frameworks/base/core/jni/android_util_Binder.cpp
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL); // returns a BpBinder object
    return javaObjectForIBinder(env, b);
}

It first calls ProcessState::getContextObject, which returns a BpBinder, and then passes that as an argument into javaObjectForIBinder:

jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    //......

    BinderProxyNativeData* nativeData = gNativeDataCache;
    if (nativeData == nullptr) {
        nativeData = new BinderProxyNativeData();
    }
    // gNativeDataCache is now logically empty.
    jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
            gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
    if (env->ExceptionCheck()) {
        // In the exception case, getInstance still took ownership of nativeData.
        gNativeDataCache = nullptr;
        return NULL;
    }
    BinderProxyNativeData* actualNativeData = getBPNativeData(env, object);
    if (actualNativeData == nativeData) {
        // New BinderProxy; we still have exclusive access.
        nativeData->mOrgue = new DeathRecipientList;
        nativeData->mObject = val;
        gNativeDataCache = nullptr;
        ++gNumProxies;
        if (gNumProxies >= gProxiesWarned + PROXY_WARN_INTERVAL) {
            ALOGW("Unexpectedly many live BinderProxies: %d\n", gNumProxies);
            gProxiesWarned = gNumProxies;
        }
    } else {
        // nativeData wasn't used. Reuse it the next time.
        gNativeDataCache = nativeData;
    }

    return object;
}

So javaObjectForIBinder essentially creates (or reuses) a Java BinderProxy object and binds it to the native BpBinder.
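
The gBinderProxyOffsets.mGetInstance call above lands in BinderProxy.getInstance on the Java side. As a rough idea of what that factory does -- the class, field, and map below (BinderProxySketch, sProxyCache) are a simplified, hypothetical reconstruction, not the actual AOSP code -- it caches proxies keyed by the native binder pointer, so one BpBinder maps to a single BinderProxy:

// Conceptual sketch only; names such as BinderProxySketch and sProxyCache are illustrative.
import java.util.HashMap;
import java.util.Map;

final class BinderProxySketch {
    // Holds the BinderProxyNativeData* set up in javaObjectForIBinder
    private final long mNativeData;

    private BinderProxySketch(long nativeData) {
        mNativeData = nativeData;
    }

    // Keyed by the native BpBinder pointer passed in as 'iBinder'
    private static final Map<Long, BinderProxySketch> sProxyCache = new HashMap<>();

    // Invoked from native code through gBinderProxyOffsets.mGetInstance
    static synchronized BinderProxySketch getInstance(long nativeData, long iBinder) {
        BinderProxySketch existing = sProxyCache.get(iBinder);
        if (existing != null) {
            return existing; // reuse an existing proxy for this BpBinder
        }
        BinderProxySketch proxy = new BinderProxySketch(nativeData);
        sProxyCache.put(iBinder, proxy);
        return proxy;
    }
}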

1.2.2 ServiceManagerNative.asInterface

//--------ServiceManagerNative asInterface------------//
static public IServiceManager asInterface(IBinder obj)
{
    if (obj == null) {
        return null;
    }
    // this returns null here
    IServiceManager in =
        (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }

    return new ServiceManagerProxy(obj);
}

asInterface here follows the same pattern as the AIDL code we looked at before. Since system_server and service_manager are different processes, obtaining service_manager necessarily involves cross-process communication, so queryLocalInterface returns null and asInterface returns a ServiceManagerProxy wrapping the BinderProxy.
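
As a reminder of that AIDL pattern, this is roughly the shape of a generated Stub.asInterface for a hypothetical IFooService (a hand-written sketch, not code from this flow; IFooService, DESCRIPTOR, and Stub.Proxy are placeholders): in-process callers get the local Stub back directly, while remote callers get a Proxy around the BinderProxy.

// Hand-written sketch of the AIDL-generated pattern for a hypothetical IFooService.
public static IFooService asInterface(android.os.IBinder obj) {
    if (obj == null) {
        return null;
    }
    // Same process: queryLocalInterface finds the local Stub, so no IPC is needed.
    android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
    if (iin != null && iin instanceof IFooService) {
        return (IFooService) iin;
    }
    // Different process (our case here): wrap the BinderProxy in a typed proxy.
    return new IFooService.Stub.Proxy(obj);
}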

So in the Java layer, getIServiceManager gives us a ServiceManagerProxy, and it is its addService method that gets called.

1.3 Registering the AMS Service

Now that the Java layer holds the service_manager proxy, registering a service means calling ServiceManagerProxy.addService. Here is the source:

1.3.1 BinderProxy.transact

public ServiceManagerProxy(IBinder remote) {
    mRemote = remote;
}

public void addService(String name, IBinder service, boolean allowIsolated, int dumpPriority)
        throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    // pack the AMS Binder object into the data parcel
    data.writeStrongBinder(service);
    data.writeInt(allowIsolated ? 1 : 0);
    data.writeInt(dumpPriority);
    // key call
    mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
    reply.recycle();
    data.recycle();
}
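
For reference, the ADD_SERVICE_TRANSACTION written above is just an ordinary Binder transaction code. In AOSP it is declared alongside the other IServiceManager codes, roughly as below; the interface name ServiceManagerTransactionCodes is made up here, and the values are assumptions based on AOSP that may vary between versions:

// Sketch of the IServiceManager transaction codes; values assumed from AOSP.
interface ServiceManagerTransactionCodes {
    int GET_SERVICE_TRANSACTION   = android.os.IBinder.FIRST_CALL_TRANSACTION;     // 1
    int CHECK_SERVICE_TRANSACTION = android.os.IBinder.FIRST_CALL_TRANSACTION + 1; // 2
    int ADD_SERVICE_TRANSACTION   = android.os.IBinder.FIRST_CALL_TRANSACTION + 2; // 3
    int LIST_SERVICES_TRANSACTION = android.os.IBinder.FIRST_CALL_TRANSACTION + 3; // 4
}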

So the addService we end up in is ServiceManagerProxy.addService, and its core is the mRemote.transact call carrying the ADD_SERVICE_TRANSACTION code. mRemote is the BinderProxy, so let's look at its transact source:

//-------BinderProxy transact----------//
public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
   //......
    try {
        return transactNative(code, data, reply, flags);
    } finally {

    }
}

The core here is the call to transactNative, a native method; let's see how the JNI layer implements it:

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
    //......

    // fetch the BpBinder object
    IBinder* target = getBPNativeData(env, obj)->mObject.get();
    if (target == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
        return JNI_FALSE;
    }
    // call BpBinder::transact
    status_t err = target->transact(code, *data, reply, flags);
    //if (reply) printf("Transact from Java code to %p received: ", target); reply->print();

    if (kEnableBinderSample) {
        if (time_binder_calls) {
            conditionally_log_binder_call(start_millis, target, code);
        }
    }

    if (err == NO_ERROR) {
        return JNI_TRUE;
    } else if (err == UNKNOWN_TRANSACTION) {
        return JNI_FALSE;
    }

    signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
    return JNI_FALSE;
}

Because the BpBinder was stored in mObject when the BinderProxy was created, the JNI layer first retrieves that BpBinder and then calls its transact method:

//--------BpBinder transact-------//
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

BpBinder::transact in turn delegates to IPCThreadState::transact, which calls writeTransactionData to package the data being transferred.

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err;

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
        (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
    // key call
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    //......
    if (reply) {
        err = waitForResponse(reply);
    } else {
        Parcel fakeReply;
        err = waitForResponse(&fakeReply);
    }

    return err;
}

Note that the cmd passed in here is BC_TRANSACTION; writeTransactionData then writes this command, followed by the binder_transaction_data, into mOut:

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}

1.3.2 talkWithDriver

After the command is written, waitForResponse is called to wait for the reply; the first thing it does on each loop iteration is call talkWithDriver:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

As the name talkWithDriver suggests, this is where we actually talk to the binder driver:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(__ANDROID__)
        //①
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
                        << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}

①: Here ioctl is called with BINDER_WRITE_READ, asking the driver to perform the read/write. Since write_size > 0 there is data to write, so we can go straight into the binder driver and look at the code that handles the BC_TRANSACTION command:

case BC_TRANSACTION:
case BC_REPLY: {
   struct binder_transaction_data tr;

   if (copy_from_user(&tr, ptr, sizeof(tr)))
      return -EFAULT;
   ptr += sizeof(tr);
   binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
   break;
}

For BC_TRANSACTION the driver calls binder_transaction. That function is extremely long, so it is not pasted in full here; only the key parts follow.

Because cmd == BC_TRANSACTION here, the fourth argument of binder_transaction (reply) is false. Digging into the source, the first thing obtained is binder_context_mgr_node. As mentioned back in section 2, sm is looked up so frequently that the driver keeps a global binder_node for it; that node is effectively sm's Binder object, and it is assigned to target_node:

target_node = binder_context_mgr_node;
if (target_node == NULL) {
   return_error = BR_DEAD_REPLY;
   goto err_no_context_mgr_node;
}

The data is then copied into kernel space (the kernel buffer sm set up when it registered with the binder driver); this is where binder's single real copy happens:

if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
         tr->data.ptr.buffer, tr->data_size)) {
   binder_user_error("%d:%d got transaction with invalid data ptr\n",
         proc->pid, thread->pid);
   return_error = BR_FAILED_REPLY;
   goto err_copy_data_failed;
}
if (copy_from_user(offp, (const void __user *)(uintptr_t)
         tr->data.ptr.offsets, tr->offsets_size)) {
   binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
         proc->pid, thread->pid);
   return_error = BR_FAILED_REPLY;
   goto err_copy_data_failed;
}
t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list);
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;

Finally, wake_up_interruptible wakes sm up to process the BINDER_WORK_TRANSACTION work item, while the client side is suspended waiting for the reply.

You can think of it this way: when a client invokes a remote method through transact, the native layer talks to the binder driver with commands. Both client and server send BC_xxx commands to the driver, and the driver hands BR_xxx commands back to them; if the server has a result to return, it sends BC_REPLY to the driver, and the driver then delivers it to the client.

As these commands are dispatched, the server or the client becomes active or is suspended accordingly, and all of this communication with the binder driver happens inside the waitForResponse loop.

2 Summary of the Binder Mechanism

Part of the flow has already been diagrammed for the Java layer. For the Binder mechanism as a whole: the binder driver sits in the middle and performs the data reads and writes, and thanks to mmap that transfer is efficient. service_manager acts as the housekeeper that manages every service: all services register with it, and when a client wants to talk to a server it first obtains the service from service_manager, after which the binder driver dispatches the transactions and carries the data.

Author: Vector7
Source (Chinese): https://juejin.cn/post/7151035282777702436
