Android Binder Explained

This article discusses the Android Binder mechanism in detail. Binder is implemented in the form of a driver; the driver sources live in the files listed in Part 1 below.

Android relies on Binder for most of its IPC. There are many inter-process communication mechanisms; on Linux, for example, pipes, message queues, signals, shared memory, and sockets can all carry communication between processes.

Binder communication is based on Service and Client. A daemon, ServiceManager, manages the system's services; it listens for requests from other programs and responds when one arrives. Every service must register with ServiceManager, and a client that wants a service requests it from ServiceManager.

A Binder operation resembles thread migration: Binder's user space maintains an available thread pool for each process, used to handle incoming IPC and to execute local messages. Communication between two processes looks as though one process enters the other, executes code there, and returns with the result. Android talks to the driver through Linux's ioctl mechanism, so let us first look briefly at ioctl.

What is ioctl?

ioctl is the function through which a device driver manages a device's I/O channel. Managing the I/O channel means controlling certain characteristics of the device, such as a serial port's baud rate or a motor's speed. It is called as: int ioctl(int fd, int cmd, ...); where fd is the file descriptor returned by open() when the user program opened the device, and cmd is the user program's control command for the device. As for the trailing ellipsis:

it stands for supplementary arguments, usually at most one, whose presence and meaning depend on cmd. ioctl is one of the entries in the driver's file operations structure: if your driver provides ioctl support, user programs can call the ioctl function to control the device's I/O channel.

Why ioctl is necessary

Device I/O control could be implemented without ioctl, but it would be needlessly complicated. For example, the driver's write handler could check whether a specially agreed-upon byte sequence appears in the data stream and, if so, treat what follows as a control command (a trick often used in socket programming). Doing it that way, however, blurs the division of responsibilities and muddles the program's structure.

It would leave the programmer dizzy as well. So we use ioctl for control. Remember: all the user program does is tell the driver what it wants through a command code; how those commands are interpreted and carried out is entirely the driver's business.

How does the Binder driver implement this? Inside the driver's ioctl function body there is effectively a switch/case structure: each case corresponds to one command code and performs the matching operation. How each operation is implemented is up to the individual programmer, because every device is specific. The key is how the command codes are organized, since within ioctl the command code is the only link between user-program commands and driver support. Organizing command codes takes some care.

Commands and devices must correspond one to one, so that a correct command is never sent to the wrong device, a wrong command to the right device, or a wrong command to the wrong device. Any of these mistakes leads to unpredictable behavior, and once the programmer notices the strange results, debugging and hunting for the error becomes very difficult.

Part 1: The Components of Binder
1.1 The driver
    The driver sources live in the following files:



kernel/include/linux/binder.h 
kernel/drivers/android/binder.c 


    The binder driver is a miscdevice, so its major number is 10; its minor number is assigned dynamically (MISC_DYNAMIC_MINOR). Its device node is:
/dev/binder
    The driver also publishes information about itself in the proc filesystem under /proc/binder, which contains:
proc directory: entries for each process using Binder
state file: served by binder_read_proc_state
stats file: served by binder_read_proc_stats
transactions file: served by binder_read_proc_transactions
transaction_log file: served by binder_read_proc_transaction_log, with argument binder_transaction_log (of type struct binder_transaction_log)
failed_transaction_log file: served by binder_read_proc_transaction_log, with argument binder_transaction_log_failed (of type struct binder_transaction_log)
    After the binder file is opened, its private data (private_data) has the type:
struct binder_proc
    This structure mainly records the current task, the process ID, memory-mapping information, Binder statistics, and thread information.
    From user space the driver is controlled mainly through mmap, poll, and ioctl. The main ioctl IDs are:



#define BINDER_WRITE_READ        _IOWR('b', 1, struct binder_write_read) 
#define BINDER_SET_IDLE_TIMEOUT  _IOW('b', 3, int64_t) 
#define BINDER_SET_MAX_THREADS   _IOW('b', 5, size_t) 
#define BINDER_SET_IDLE_PRIORITY _IOW('b', 6, int) 
#define BINDER_SET_CONTEXT_MGR   _IOW('b', 7, int) 
#define BINDER_THREAD_EXIT       _IOW('b', 8, int) 
#define BINDER_VERSION           _IOWR('b', 9, struct binder_version) 


    The BR_XXX macros form the BinderDriverReturnProtocol, the Binder driver's return protocol.
    The BC_XXX macros form the BinderDriverCommandProtocol, the Binder driver's command protocol.
    binder_thread is another important data structure used in the Binder driver. It is defined as follows:



struct binder_thread {
    struct binder_proc *proc;
    struct rb_node rb_node;
    int pid;
    int looper;
    struct binder_transaction *transaction_stack;
    struct list_head todo;
    uint32_t return_error;
    uint32_t return_error2;
    wait_queue_head_t wait;
    struct binder_stats stats;
};


    An individual binder_thread is reached through its rb_node member, which links it into the process's thread tree.
    BINDER_WRITE_READ is the most important ioctl. It uses the binder_write_read structure to define the data to be read and written:



struct binder_write_read { 
     signed long write_size; 
     signed long write_consumed; 
     unsigned long write_buffer; 
     signed long read_size; 
     signed long read_consumed; 
     unsigned long read_buffer; 
}; 
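    As a hedged illustration of how user space drives this ioctl (a minimal sketch: fd is assumed to be an already-opened /dev/binder descriptor, and the single BC_ENTER_LOOPER command word is just an example payload; real callers pack whole command streams into the write buffer):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

/* One BINDER_WRITE_READ round trip: the driver consumes commands from
 * write_buffer and deposits return commands into read_buffer. */
static int binder_round_trip(int fd)
{
    uint32_t cmd = BC_ENTER_LOOPER;     /* example command word */
    uint32_t readbuf[32];
    struct binder_write_read bwr;

    memset(&bwr, 0, sizeof(bwr));
    bwr.write_buffer = (unsigned long) &cmd;     /* commands to the driver */
    bwr.write_size   = sizeof(cmd);
    bwr.read_buffer  = (unsigned long) readbuf;  /* room for driver replies */
    bwr.read_size    = sizeof(readbuf);

    if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0)
        return -1;

    /* write_consumed / read_consumed report how much was processed */
    return (int) bwr.read_consumed;
}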


1.2 The servicemanager
        servicemanager is a daemon; it communicates with /dev/binder and thereby manages the various services in the system.
        Path of the executable:
        /system/bin/servicemanager
Paths of the files in the open-source tree:



frameworks/base/cmds/servicemanager/binder.h 
frameworks/base/cmds/servicemanager/binder.c 
frameworks/base/cmds/servicemanager/service_manager.c 


       The program's execution flow:
open(): open the binder driver
mmap(): map a 128*1024-byte region of memory
ioctl(BINDER_SET_CONTEXT_MGR): register this process as the context manager
       enter the main loop, binder_loop()
             ioctl(BINDER_WRITE_READ) to read
                        binder_parse() then loops over and processes the binder data
         binder_parse() handles the returned commands:
        For BR_TRANSACTION it calls svcmgr_handler() to add services, look services up, and so on; the services are kept in a linked list (svclist). The binder_-prefixed helpers it calls in turn issue the various ioctl commands.
        For BR_REPLY it fills in data of type binder_io.
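        The main loop itself is compact. The following is a lightly simplified rendering of binder_loop() from frameworks/base/cmds/servicemanager/binder.c (logging and error details trimmed), showing the BINDER_WRITE_READ / binder_parse() cycle described above:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    /* tell the driver this thread is entering the loop */
    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        if (ioctl(bs->fd, BINDER_WRITE_READ, &bwr) < 0)
            break;                       /* driver error: leave the loop */

        /* hand the returned commands (BR_TRANSACTION etc.) to the parser */
        binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
    }
}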
1.3 The Binder library
    The Binder-related files are part of Android's utils library, which builds into libutils.so, a common library in the Android system.
    The main file paths are:



frameworks/base/include/utils/* 
frameworks/base/libs/utils/* 


  
    The main classes are:
RefBase.h:
    reference counting; defines class RefBase.
Parcel.h:
    the container for data carried over IPC; defines class Parcel.
IBinder.h:
    the abstract interface for Binder objects; defines class IBinder.
Binder.h:
    the basic functionality of Binder objects; defines classes Binder and BpRefBase.
BpBinder.h:
    the functionality of BpBinder; defines class BpBinder.
IInterface.h:
    generic glue for interfaces carried over Binder;
    defines class IInterface and the class templates BnInterface and BpInterface.
ProcessState.h:
    represents the state of the process; defines class ProcessState.
IPCThreadState.h:
    represents the state of an IPC thread; defines class IPCThreadState.
[Figure: the relationships among these classes]
    BnInterface and BpInterface, defined in IInterface.h, are two important templates used by all kinds of programs.
The BnInterface template is defined as follows:



template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
    virtual sp<IInterface>      queryLocalInterface(const String16& _descriptor);
    virtual String16            getInterfaceDescriptor() const;
protected:
    virtual IBinder*            onAsBinder();
};

     The BpInterface template is defined as follows:

template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
                                BpInterface(const sp<IBinder>& remote);
protected:
    virtual IBinder*            onAsBinder();
};


         When these two templates are used, what they provide is in effect double inheritance: the user defines an interface, INTERFACE, and then combines it with the BnInterface and BpInterface templates to build the corresponding BnXXX and BpXXX classes.
         The DECLARE_META_INTERFACE and IMPLEMENT_META_INTERFACE macros help implement the BpXXX class:



#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const String16 descriptor;                                   \
    static sp<I##INTERFACE> asInterface(const sp<IBinder>& obj);        \
    virtual String16 getInterfaceDescriptor() const;                    \

#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const String16 I##INTERFACE::descriptor(NAME);                      \
    String16 I##INTERFACE::getInterfaceDescriptor() const {             \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    sp<I##INTERFACE> I##INTERFACE::asInterface(const sp<IBinder>& obj)  \
    {                                                                   \
        sp<I##INTERFACE> intr;                                          \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }


 
   When you define your own class, you need only apply the DECLARE_META_INTERFACE and IMPLEMENT_META_INTERFACE macros together with the class name, and the asInterface() and getInterfaceDescriptor() functions of BpInterface are implemented for you.
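   As a minimal sketch of the pattern (IABC, hello(), and the descriptor string are invented for illustration; Part 3 walks through a real interface):

// In the header: the macro declares descriptor and asInterface().
class IABC : public IInterface
{
public:
    DECLARE_META_INTERFACE(ABC);
    virtual void hello() = 0;          // hypothetical interface method
};

// In the .cpp: one macro generates descriptor and asInterface(), which
// queries for a local implementation and otherwise wraps the remote
// IBinder in a new BpABC.
IMPLEMENT_META_INTERFACE(ABC, "android.test.IABC");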

Part 2: How Binder Operates
  2.1 Binder's working mechanism
      Service Manager is a daemon that manages the services the various processes provide. When two processes need to communicate, each calls into the libutils.so library; the mechanism that actually carries the communication is a block of memory shared through kernel space.

 2.2 Binder from the application's point of view
  From an application's point of view, Binder involves three parties:
  Native side: e.g. BnABC, a class that must be inherited and implemented.
  Proxy side: e.g. BpABC, a class that is implemented within the interface framework but never appears in the interface itself.
  Client: the client obtains an interface, ABC; when it makes a call, what is actually invoked is BpABC.

What the native (Bn) side does:
    implement BnABC::onTransact()
    register the service: IServiceManager::addService
What the proxy (Bp) side does:
    implement the functional methods, each of which calls BpABC::remote()->transact()
What the client does:
    obtain the ABC interface and call it (in reality the call goes to BpABC, then over IPC to BnABC, and finally to the concrete implementation)
       In the implementation, BnABC and BpABC both doubly inherit the interface ABC. Generally speaking, BpABC is an implementation class that need not appear in the interface; it is responsible only for communication and does not perform the real work. BnABC, on the other hand, is still an interface class: a genuinely working class must inherit and implement it, and that class is what performs the concrete functionality.
       In the client, an ABC interface is obtained from IServiceManager. Calling that interface actually calls BpABC; BpABC communicates with BnABC through the Binder IPC mechanism, and BnABC's implementation class carries out the work behind it.
  In effect, the concrete server implementation and the client are two different processes. If we ignore the inter-process communication underneath, then from the caller's point of view the client appears to call a function in another process directly, provided, of course, that the function is defined in interface ABC.
  2.3 The role of IServiceManager
    IServiceManager involves two files, IServiceManager.h and IServiceManager.cpp, and those two files essentially are IServiceManager. IServiceManager is the first service the system brings up. It is well worth noting that IServiceManager's native side is not implemented here: it is actually served by the servicemanager daemon, and user programs obtain all other services by going through BpServiceManager.
      IServiceManager.h defines an interface for obtaining the default IServiceManager:
        sp<IServiceManager> defaultServiceManager();
     The IServiceManager obtained this way is effectively a process-wide global IServiceManager.
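     For a consumer, the usual pattern is the following hedged sketch (IABC and the service name "abc" are placeholders):

// Get the global service manager, look a service up by name, and turn
// the returned IBinder into a typed interface.
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("abc"));
sp<IABC> abc = interface_cast<IABC>(binder);   // a BpABC under the hood
abc->hello();                                  // travels via transact()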

Part 3: Concrete Implementations of Binder in Programs
  3.1 A concrete implementation built on an interface
    PermissionController is a permission-control interface defined in libutils. It comprises two files, IPermissionController.h and IPermissionController.cpp, and the same structure recurs in the implementation of every class of this kind.
     The header IPermissionController.h mainly defines the IPermissionController interface and the class BnPermissionController:



class IPermissionController : public IInterface
{
public:
    DECLARE_META_INTERFACE(PermissionController);
    virtual bool   checkPermission(const String16& permission, int32_t pid, int32_t uid) = 0;
    enum {
        CHECK_PERMISSION_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION
    };
};

class BnPermissionController : public BnInterface<IPermissionController>
{
public:
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
};


    IPermissionController is an interface class with a single pure virtual function, checkPermission().
BnPermissionController inherits the template class BnInterface instantiated with IPermissionController; in effect, therefore, BnPermissionController doubly inherits BBinder and IPermissionController.
    The implementation file IPermissionController.cpp first implements a BpPermissionController:



class BpPermissionController : public BpInterface<IPermissionController>
{
public:
    BpPermissionController(const sp<IBinder>& impl)
        : BpInterface<IPermissionController>(impl)
    {
    }
    virtual bool checkPermission(const String16& permission, int32_t pid, int32_t uid)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IPermissionController::getInterfaceDescriptor());
        data.writeString16(permission);
        data.writeInt32(pid);
        data.writeInt32(uid);
        remote()->transact(CHECK_PERMISSION_TRANSACTION, data, &reply);
        // a non-zero first int32 is an exception code: treat it as failure
        if (reply.readInt32() != 0) return 0;
        return reply.readInt32() != 0;
    }
};


IMPLEMENT_META_INTERFACE(PermissionController, "android.os.IPermissionController");
BpPermissionController inherits BpInterface; it is a fully implemented class, and it never appears in the interface. A class like this is written to a fixed pattern: in the implementation of checkPermission(), a Parcel serves as the container for the transmitted data, the transact() function carries the data across, and its parameters include the enum value CHECK_PERMISSION_TRANSACTION.
IMPLEMENT_META_INTERFACE assists in generating the supporting code.
    The onTransact() function implemented in BnPermissionController looks like this:



status_t BnPermissionController::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case CHECK_PERMISSION_TRANSACTION: {
            CHECK_INTERFACE(IPermissionController, data, reply);
            String16 permission = data.readString16();
            int32_t pid = data.readInt32();
            int32_t uid = data.readInt32();
            bool res = checkPermission(permission, pid, uid);
            // write a zero exception code, then the boolean result
            reply->writeInt32(0);
            reply->writeInt32(res ? 1 : 0);
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}


     Within onTransact(), the enum value determines how the data is to be used. Note that although BnPermissionController inherits the class IPermissionController, the pure virtual function checkPermission() is still not implemented. BnPermissionController therefore cannot be instantiated; it too is really an interface, and an implementation class must inherit it. That class is what implements the concrete functionality.
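     A hedged sketch of what such an implementation class could look like (the class name and the policy are invented; the real permission checker lives elsewhere in the framework):

// The concrete service: inherit BnPermissionController and supply the
// missing pure virtual. Instances of this class do the real work.
class PermissionControllerService : public BnPermissionController
{
public:
    virtual bool checkPermission(const String16& permission,
                                 int32_t pid, int32_t uid)
    {
        // Illustrative policy only: allow root, deny everyone else.
        return uid == 0;
    }
};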
  3.2 The implementation of BnABC
    Once started, a native service runs as a daemon. The concrete native service is an implementation class that inherits BnABC, and the service's name is usually just ABC.
    Such a class usually contains an instantiate() function, generally implemented as follows:
void ABC::instantiate() {
    defaultServiceManager()->addService(
            String16("XXX.ABC"), new ABC ());
}
    In this way, through the call to defaultServiceManager(), a service named "XXX.ABC" is added.
    Inside this defaultServiceManager() function the following are called:
ProcessState::self()->getContextObject(NULL));
    IPCThreadState* ipc = IPCThreadState::self();
   IPCThreadState::talkWithDriver()
While the ProcessState object is being constructed, open_driver() is called to open the driver; the actual exchange with the driver then takes place during the execution of talkWithDriver().
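    For reference, a condensed sketch of open_driver() (simplified from ProcessState.cpp; the driver-version check and error handling are trimmed):

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        // tell the driver how many binder threads this process may pool
        size_t maxThreads = 15;
        ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
    }
    return fd;
}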
  3.3 How a BpABC call is carried out
    A BpABC call transmits its data mainly through mRemote()->transact(). mRemote() is a member of BpRefBase, and it is an IBinder. The calling sequence is as follows:

   



mRemote()->transact() 
    Process::self() 
    IPCThreadState::self()->transact() 
    writeTransactionData() 
    waitForResponse() 
    talkWithDriver() 
    ioctl(fd, BINDER_WRITE_READ, &bwr) 


    The transfer operation is carried out in the IPCThreadState::executeCommand() function.

o The IBinder interface

The IBinder interface is the abstraction of an object that can cross processes. An ordinary object can be accessed only within its own process; for an object to be accessible from other processes, it must implement the IBinder interface. An IBinder may point to a local object or to a remote one, and the caller does not need to care which it is.

transact is one of the more important functions in the IBinder interface. Its prototype is as follows:

virtual status_t transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0) = 0;

The basic IPC model in Android is a client/server (C/S) architecture:

client → request relayed through the kernel module → server

If the IBinder points to a client-side proxy, transact merely sends the request to the server; the transact of the server-side IBinder performs the actual service.

o The client side

BpBinder is the proxy, in the current process, for a remote object; it implements the IBinder interface. Its transact function is implemented as follows:


status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}


Parameter description:

code is the ID number of the request.
data holds the request's arguments.
reply receives the returned result.
flags carries extra flags, such as FLAG_ONEWAY; it is usually 0.
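As a hedged usage sketch (IABC, MY_TRANSACTION, and the integer argument are placeholders), a proxy-side method typically packs its arguments into a Parcel and issues transact like this:

// Inside a hypothetical BpABC method:
Parcel data, reply;
data.writeInterfaceToken(IABC::getInterfaceDescriptor());
data.writeInt32(42);                                   // an argument
status_t status = remote()->transact(MY_TRANSACTION, data, &reply);
if (status == NO_ERROR) {
    int32_t result = reply.readInt32();                // unpack the reply
}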

transact simply calls the transact of IPCThreadState::self(). In IPCThreadState::transact:


status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                LOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}


Here transact sends the request to the server via the kernel module; after the server has handled it, the result travels back along the same path to the caller. This also shows that a request is a synchronous operation: it waits until the result comes back.

With a thin wrapper over BpBinder we obtain the same interface as the service object, so the caller need not care whether the object it calls is remote or local. Take ServiceManager as an example:
(frameworks/base/libs/utils/IServiceManager.cpp)


class BpServiceManager : public BpInterface<IServiceManager>
{
public:
    BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }
...
    virtual status_t addService(const String16& name, const sp<IBinder>& service)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        data.writeStrongBinder(service);
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readInt32() : err;
    }
...
};


BpServiceManager implements both the IServiceManager and IBinder interfaces, so callers can treat a BpServiceManager object as either an IServiceManager object or an IBinder object. When a caller uses it as an IServiceManager, every request is just a wrapper around BpBinder::transact; this wrapping is what frees the caller from caring whether the IServiceManager object is local or remote.

Clients create the BpServiceManager object through the defaultServiceManager function:
(frameworks/base/libs/utils/IServiceManager.cpp)


sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }

    return gDefaultServiceManager;
}


First, ProcessState::self()->getContextObject(NULL) creates a Binder object; then interface_cast together with IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager") wraps that Binder object into an IServiceManager object. In effect this creates a BpServiceManager object.
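interface_cast itself is a one-line template defined in IInterface.h; it simply forwards to the asInterface() that IMPLEMENT_META_INTERFACE generated:

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}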

ProcessState::self()->getContextObject calls ProcessState::getStrongProxyForHandle to create the proxy object:


sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}


If the handle is 0, the object defaults to the context_manager; the context_manager is in fact ServiceManager.
o The server side
The server must also implement the IBinder interface. The BBinder class provides a partial default implementation of IBinder; its transact is implemented as follows:


status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}


The PING_TRANSACTION request checks whether the object still exists; here the return value of pingBinder is simply written back to the caller. All other requests are handed to onTransact. onTransact is a protected virtual function declared in BBinder, which subclasses are expected to implement. For example, the implementation in CameraService looks like this:


status_t CameraService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // permission checks...
    switch (code) {
        case BnCameraService::CONNECT:
            IPCThreadState* ipc = IPCThreadState::self();
            const int pid = ipc->getCallingPid();
            const int self_pid = getpid();
            if (pid != self_pid) {
                // we're called from a different process, do the real check
                if (!checkCallingPermission(
                        String16("android.permission.CAMERA")))
                {
                    const int uid = ipc->getCallingUid();
                    LOGE("Permission Denial: "
                            "can't use the camera pid=%d, uid=%d", pid, uid);
                    return PERMISSION_DENIED;
                }
            }
            break;
    }

    status_t err = BnCameraService::onTransact(code, data, reply, flags);

    LOGD("+++ onTransact err %d code %d", err, code);

    if (err == UNKNOWN_TRANSACTION || err == PERMISSION_DENIED) {
        // the 'service' command interrogates this binder for its name, and then supplies it
        // even for the debugging commands.  that means we need to check for it here, using
        // ISurfaceComposer (since we delegated the INTERFACE_TRANSACTION handling to
        // BnSurfaceComposer before falling through to this code).

        LOGD("+++ onTransact code %d", code);

        CHECK_INTERFACE(ICameraService, data, reply);

        switch(code) {
        case 1000:
        {
            if (gWeakHeap != 0) {
                sp<IMemoryHeap> h = gWeakHeap.promote();
                IMemoryHeap *p = gWeakHeap.unsafe_get();
                LOGD("CHECKING WEAK REFERENCE %p (%p)", h.get(), p);
                if (h != 0)
                    h->printRefs();
                bool attempt_to_delete = data.readInt32() == 1;
                if (attempt_to_delete) {
                    // NOT SAFE!
                    LOGD("DELETING WEAK REFERENCE %p (%p)", h.get(), p);
                    if (p) delete p;
                }
                return NO_ERROR;
            }
        }
        break;
        default:
            break;
        }
    }
    return err;
}


As this shows, the server-side onTransact is a request dispatcher: it handles each request according to its request code (code).

o The message loop

On the server side (any process can act as a server), a thread listens for requests from clients and processes them in a loop.

To handle requests on the main thread, simply call the following function:


IPCThreadState::self()->joinThreadPool(mIsMain);

To handle requests on a thread other than the main thread, do the following:


sp<ProcessState> proc = ProcessState::self();
if (proc->supportsProcesses()) {
    LOGV("App process: starting thread pool.\n");
    proc->startThreadPool();
}

How startThreadPool works:


void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        int32_t s = android_atomic_add(1, &mThreadPoolSeq);
        char buf[32];
        sprintf(buf, "Binder Thread #%d", s);
        LOGV("Spawning new pooled thread, name=%s\n", buf);
        sp<Thread> t = new PoolThread(isMain);
        t->run(buf);
    }
}


Here a PoolThread object is created, which in effect creates a new thread. Every thread class must implement the threadLoop virtual function; PoolThread's threadLoop is implemented as follows:


virtual bool threadLoop()
{
    IPCThreadState::self()->joinThreadPool(mIsMain);
    return false;
}


In short, the code above creates a thread and then calls IPCThreadState::self()->joinThreadPool inside it.

 

Now let's look at the implementation of joinThreadPool:


do
{
...
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail();
            if (IN < sizeof(int32_t)) continue;
            cmd = mIn.readInt32();
            IF_LOG_COMMANDS() {
                alog << "Processing top-level Command: "
                    << getReturnString(cmd) << endl;
            }
            result = executeCommand(cmd);
        }
...
} while(...);


This function loops, repeatedly performing the following actions:

talkWithDriver reads requests and writes back results via ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr).
executeCommand carries out the corresponding request.

In the IPCThreadState::executeCommand(int32_t cmd) function:

Object-lifecycle requests such as BR_ACQUIRE/BR_RELEASE are handled directly.
For a BR_TRANSACTION request, it calls the transact function of the requested object.

The actual object is invoked as follows:


if (tr.target.ptr) {
    sp<BBinder> b((BBinder*)tr.cookie);
    const status_t error = b->transact(tr.code, buffer, &reply, 0);
    if (error < NO_ERROR) reply.setError(error);

} else {
    const status_t error = the_context_object->transact(tr.code, buffer, &reply, 0);
    if (error < NO_ERROR) reply.setError(error);
}


If tr.target.ptr is non-null, tr.cookie is cast to a Binder object and its transact function is called. If there is no target object, the transact function of the_context_object is called. Curiously, nothing ever initializes the_context_object; it is a null pointer. The reason this works is that context_mgr requests are delivered to ServiceManager in its own process, so execution never actually reaches the else branch.

o The kernel module

Android uses a kernel module, binder, to relay messages between processes. The module's source is in binder.c; it is a character driver that exchanges data with user-space processes mainly through binder_ioctl. Its BINDER_WRITE_READ command reads and writes data, and each data packet carries a cmd field that distinguishes the requests:

binder_thread_write sends requests or returns results.
binder_thread_read reads results.

binder_thread_write calls binder_transaction to relay requests and return results. binder_transaction is implemented as follows.

Handling a request (a condensed sketch follows this list):

It finds the process that owns the target object from the object's handle; if the handle is 0, the object is taken to be the context_mgr and the request is sent to the context_mgr's process.
It places all the binder objects carried in the request into an RB tree.
It queues the request on the target process, where it waits for the target process to read it.
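A condensed sketch of that resolution logic inside binder_transaction() (heavily simplified from binder.c; locking, reply handling, and error paths are omitted):

if (tr->target.handle) {
    struct binder_ref *ref;
    /* look the handle up in the sending process's reference table */
    ref = binder_get_ref(proc, tr->target.handle);
    target_node = ref->node;
} else {
    /* handle 0: route to the context manager (ServiceManager) */
    target_node = binder_context_mgr_node;
}
target_proc = target_node->proc;
/* ... copy the transaction, translate the embedded binder objects ... */
/* queue the work on the target and wake it up */
list_add_tail(&t->work.entry, target_list);
wake_up_interruptible(target_wait);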

How does a process become the context_mgr? The kernel module provides the BINDER_SET_CONTEXT_MGR call:


static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ...
    case BINDER_SET_CONTEXT_MGR:
        if (binder_context_mgr_node != NULL) {
            printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n");
            ret = -EBUSY;
            goto err;
        }
        if (binder_context_mgr_uid != -1) {
            if (binder_context_mgr_uid != current->euid) {
                printk(KERN_ERR "binder: BINDER_SET_"
                       "CONTEXT_MGR bad uid %d != %d\n",
                       current->euid,
                       binder_context_mgr_uid);
                ret = -EPERM;
                goto err;
            }
        } else
            binder_context_mgr_uid = current->euid;
        binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
        if (binder_context_mgr_node == NULL) {
            ret = -ENOMEM;
            goto err;
        }
        binder_context_mgr_node->local_weak_refs++;
        binder_context_mgr_node->local_strong_refs++;
        binder_context_mgr_node->has_strong_ref = 1;
        binder_context_mgr_node->has_weak_ref = 1;
        break;


ServiceManager (frameworks/base/cmds/servicemanager) becomes the context_mgr process as follows:


int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}

int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    if (binder_become_context_manager(bs)) {
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}


o How a service object's handle is obtained


A service provider obtains the ServiceManager object through defaultServiceManager, then calls addService to register with the service manager.
A service consumer obtains the ServiceManager object through defaultServiceManager, then calls getService to look up the target service object's handle by service name.

o How the kernel maps a service object's handle to the service's process

Handle 0 denotes the service manager, and getService can look up the handles of system services. The handle merely stands for the service object, so how does the kernel module find the process hosting the service from the handle?


For ServiceManager: ServiceManager calls binder_become_context_manager to make itself the context_mgr, and all requests with handle 0 are forwarded to ServiceManager.
For system services and application listeners: on their first request to the kernel module (for example an addService call), the kernel module records the correspondence between the service object and its process in an RB tree.


off_end = (void *)offp + tr->offsets_size; 
for (; offp < off_end; offp++) { 
    struct flat_binder_object *fp; 
    if (*offp > t->buffer->data_size - sizeof(*fp)) { 
        binder_user_error("binder: %d:%d got transaction with " 
            "invalid offset, %d\n", 
            proc->pid, thread->pid, *offp); 
        return_error = BR_FAILED_REPLY; 
        goto err_bad_offset; 
    } 
    fp = (struct flat_binder_object *)(t->buffer->data + *offp); 
    switch (fp->type) { 
    case BINDER_TYPE_BINDER: 
    case BINDER_TYPE_WEAK_BINDER: { 
        struct binder_ref *ref; 
        struct binder_node *node = binder_get_node(proc, fp->binder); 
        if (node == NULL) { 
            node = binder_new_node(proc, fp->binder, fp->cookie); 
            if (node == NULL) { 
                return_error = BR_FAILED_REPLY; 
                goto err_binder_new_node_failed; 
            } 
            node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK; 
            node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS); 
        } 
        if (fp->cookie != node->cookie) { 
            binder_user_error("binder: %d:%d sending u%p " 
                "node %d, cookie mismatch %p != %p\n", 
                proc->pid, thread->pid, 
                fp->binder, node->debug_id, 
                fp->cookie, node->cookie); 
            goto err_binder_get_ref_for_node_failed; 
        } 
        ref = binder_get_ref_for_node(target_proc, node); 
        if (ref == NULL) { 
            return_error = BR_FAILED_REPLY; 
            goto err_binder_get_ref_for_node_failed; 
        } 
        if (fp->type == BINDER_TYPE_BINDER) 
            fp->type = BINDER_TYPE_HANDLE; 
        else 
            fp->type = BINDER_TYPE_WEAK_HANDLE; 
        fp->handle = ref->desc; 
        binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo); 
        if (binder_debug_mask & BINDER_DEBUG_TRANSACTION) 
            printk(KERN_INFO "        node %d u%p -> ref %d desc %d\n", 
                   node->debug_id, node->ptr, ref->debug_id, ref->desc); 
    } break; 



When a service is requested, the kernel first finds the owning process via the handle, then places the request on that service process's queue.

o Calling Java from C

So far we have analyzed the C/C++ side. For Java code, calling C from Java is just a JNI call. But requests are read from the kernel in C code (executeCommand), so how does C code call services that are implemented in Java?

The JavaBBinder set up in android_os_Binder_init wraps the Java-side Binder object.

JavaBBinder::onTransact calls the Java-side execTransact function:


        jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact, 
            code, (int32_t)&data, (int32_t)reply, flags); 
        jthrowable excep = env->ExceptionOccurred(); 
        if (excep) { 
            report_exception(env, excep, 
                "*** Uncaught remote exception!  " 
                "(Exceptions are not yet supported across processes.)"); 
            res = JNI_FALSE;  
  
            /* clean up JNI local ref -- we don't return to Java code */ 
            env->DeleteLocalRef(excep); 
        } 


o Broadcast messages

Binder itself does not provide broadcast messaging, but broadcasts can be implemented on top of the ActivityManagerService service.
(frameworks/base/core/java/android/app/ActivityManagerNative.java)

To receive broadcasts, implement the BroadcastReceiver interface and register it with ActivityManagerProxy::registerReceiver.

To send a broadcast, call ActivityManagerProxy::broadcastIntent. (Applications do not call it directly; they call Context's wrapper around it.)
下面再看joinThreadPool的实现:


do 

... 
        result = talkWithDriver(); 
        if (result >= NO_ERROR) { 
            size_t IN = mIn.dataAvail(); 
            if (IN < sizeof(int32_t)) continue; 
            cmd = mIn.readInt32(); 
            IF_LOG_COMMANDS() { 
                alog << "Processing top-level Command: " 
                    << getReturnString(cmd) << endl; 
            } 
            result = executeCommand(cmd); 
        } 
... 
while(...); 

do

{

...

        result = talkWithDriver();

        if (result >= NO_ERROR) {

            size_t IN = mIn.dataAvail();

            if (IN < sizeof(int32_t)) continue;

            cmd = mIn.readInt32();

            IF_LOG_COMMANDS() {

                alog << "Processing top-level Command: "

                    << getReturnString(cmd) << endl;

            }

            result = executeCommand(cmd);

        }

...

while(...);

这个函数在循环中重复执行下列动作:


talkWithDriver 通过ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr)读取请求和写回结果。
executeCommand 执行相应的请求

在IPCThreadState::executeCommand(int32_t cmd)函数中:


对于控制对象生命周期的请求,像BR_ACQUIRE/BR_RELEASE直接做了处理。
对于BR_TRANSACTION请求,它调用被请求对象的transact函数。

按下列方式调用实际的对象:


if (tr.target.ptr) { 
    sp<BBinder> b((BBinder*)tr.cookie); 
    const status_t error = b->transact(tr.code, buffer, &reply, 0); 
    if (error < NO_ERROR) reply.setError(error);  
  
} else { 
    const status_t error = the_context_object->transact(tr.code, buffer, &reply, 0); 
    if (error < NO_ERROR) reply.setError(error); 

if (tr.target.ptr) {

    sp<BBinder> b((BBinder*)tr.cookie);

    const status_t error = b->transact(tr.code, buffer, &reply, 0);

    if (error < NO_ERROR) reply.setError(error);

} else {

    const status_t error = the_context_object->transact(tr.code, buffer, &reply, 0);

    if (error < NO_ERROR) reply.setError(error);

}

如果tr.target.ptr不为空,就把tr.cookie转换成一个Binder对象,并调用它的transact函数。如果没有目标对象, 就调用 the_context_object对象的transact函数。奇怪的是,根本没有谁对the_context_object进行初始 化,the_context_object是空指针。原因是context_mgr的请求发给了ServiceManager,所以根本不会走到else 语句里来。

o 内核模块

android使用了一个内核模块binder来中转各个进程之间的消息。模块源代码放在binder.c里,它是一个字符驱动程序,主要通过 binder_ioctl与用户空间的进程交换数据。其中BINDER_WRITE_READ用来读写数据,数据包中有一个cmd域用于区分不同的请求:


binder_thread_write用于发送请求或返回结果。
binder_thread_read用于读取结果。

从binder_thread_write中调用binder_transaction中转请求和返回结果,binder_transaction的实现如下:

对请求的处理:


通过对象的handle找到对象所在的进程,如果handle为空就认为对象是context_mgr,把请求发给context_mgr所在的进程。
把请求中所有的binder对象全部放到一个RB树中。
把请求放到目标进程的队列中,等待目标进程读取。

如何成为context_mgr呢?内核模块提供了BINDER_SET_CONTEXT_MGR调用:


static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) 

    ... 
    case BINDER_SET_CONTEXT_MGR: 
        if (binder_context_mgr_node != NULL) { 
            printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n"); 
            ret = -EBUSY; 
            goto err; 
        } 
        if (binder_context_mgr_uid != -1) { 
            if (binder_context_mgr_uid != current->euid) { 
                printk(KERN_ERR "binder: BINDER_SET_" 
                       "CONTEXT_MGR bad uid %d != %d\n", 
                       current->euid, 
                       binder_context_mgr_uid); 
                ret = -EPERM; 
                goto err; 
            } 
        } else 
            binder_context_mgr_uid = current->euid; 
        binder_context_mgr_node = binder_new_node(proc, NULL, NULL); 
        if (binder_context_mgr_node == NULL) { 
            ret = -ENOMEM; 
            goto err; 
        } 
        binder_context_mgr_node->local_weak_refs++; 
        binder_context_mgr_node->local_strong_refs++; 
        binder_context_mgr_node->has_strong_ref = 1; 
        binder_context_mgr_node->has_weak_ref = 1; 
        break; 

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)

{

...

case BINDER_SET_CONTEXT_MGR:

  if (binder_context_mgr_node != NULL) {

   printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n");

   ret = -EBUSY;

   goto err;

  }

  if (binder_context_mgr_uid != -1) {

   if (binder_context_mgr_uid != current->euid) {

    printk(KERN_ERR "binder: BINDER_SET_"

           "CONTEXT_MGR bad uid %d != %d\n",

           current->euid,

           binder_context_mgr_uid);

    ret = -EPERM;

    goto err;

   }

  } else

   binder_context_mgr_uid = current->euid;

  binder_context_mgr_node = binder_new_node(proc, NULL, NULL);

  if (binder_context_mgr_node == NULL) {

   ret = -ENOMEM;

   goto err;

  }

  binder_context_mgr_node->local_weak_refs++;

  binder_context_mgr_node->local_strong_refs++;

  binder_context_mgr_node->has_strong_ref = 1;

  binder_context_mgr_node->has_weak_ref = 1;

  break;

ServiceManager(frameworks/base/cmds/servicemanager)通过下列方式成为了context_mgr进程:


int binder_become_context_manager(struct binder_state *bs) 

    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0); 

  
int main(int argc, char **argv) 

    struct binder_state *bs; 
    void *svcmgr = BINDER_SERVICE_MANAGER; 
  
    bs = binder_open(128*1024); 
  
    if (binder_become_context_manager(bs)) { 
        LOGE("cannot become context manager (%s)\n", strerror(errno)); 
        return -1; 
    } 
  
    svcmgr_handle = svcmgr; 
    binder_loop(bs, svcmgr_handler); 
    return 0; 

int binder_become_context_manager(struct binder_state *bs)

{

    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);

}

int main(int argc, char **argv)

{

    struct binder_state *bs;

    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    if (binder_become_context_manager(bs)) {

        LOGE("cannot become context manager (%s)\n", strerror(errno));

        return -1;

    }

    svcmgr_handle = svcmgr;

    binder_loop(bs, svcmgr_handler);

    return 0;

}

o 如何得到服务对象的handle


服务提供者通过defaultServiceManager得到ServiceManager对象,然后调用addService向服务管理器注册。
服务使用者通过defaultServiceManager得到ServiceManager对象,然后调用getService通过服务名称查找到服务对象的handle。

o 如何通过服务对象的handle找到服务所在的进程

0表示服务管理器的handle,getService可以查找到系统服务的handle。这个handle只是代表了服务对象,内核模块是如何通过handle找到服务所在的进程的呢?


对于ServiceManager: ServiceManager调用了binder_become_context_manager使用自己成为context_mgr,所有handle为0的请求都会被转发给ServiceManager。
对于系统服务和应用程序的Listener,在第一次请求内核模块时(比如调用addService),内核模块在一个RB树中建立了服务对象和进程的对应关系。


off_end = (void *)offp + tr->offsets_size; 
for (; offp < off_end; offp++) { 
    struct flat_binder_object *fp; 
    if (*offp > t->buffer->data_size - sizeof(*fp)) { 
        binder_user_error("binder: %d:%d got transaction with " 
            "invalid offset, %d\n", 
            proc->pid, thread->pid, *offp); 
        return_error = BR_FAILED_REPLY; 
        goto err_bad_offset; 
    } 
    fp = (struct flat_binder_object *)(t->buffer->data + *offp); 
    switch (fp->type) { 
    case BINDER_TYPE_BINDER: 
    case BINDER_TYPE_WEAK_BINDER: { 
        struct binder_ref *ref; 
        struct binder_node *node = binder_get_node(proc, fp->binder); 
        if (node == NULL) { 
            node = binder_new_node(proc, fp->binder, fp->cookie); 
            if (node == NULL) { 
                return_error = BR_FAILED_REPLY; 
                goto err_binder_new_node_failed; 
            } 
            node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK; 
            node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS); 
        } 
        if (fp->cookie != node->cookie) { 
            binder_user_error("binder: %d:%d sending u%p " 
                "node %d, cookie mismatch %p != %p\n", 
                proc->pid, thread->pid, 
                fp->binder, node->debug_id, 
                fp->cookie, node->cookie); 
            goto err_binder_get_ref_for_node_failed; 
        } 
        ref = binder_get_ref_for_node(target_proc, node); 
        if (ref == NULL) { 
            return_error = BR_FAILED_REPLY; 
            goto err_binder_get_ref_for_node_failed; 
        } 
        if (fp->type == BINDER_TYPE_BINDER) 
            fp->type = BINDER_TYPE_HANDLE; 
        else 
            fp->type = BINDER_TYPE_WEAK_HANDLE; 
        fp->handle = ref->desc; 
        binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo); 
        if (binder_debug_mask & BINDER_DEBUG_TRANSACTION) 
            printk(KERN_INFO "        node %d u%p -> ref %d desc %d\n", 
                   node->debug_id, node->ptr, ref->debug_id, ref->desc); 
    } break; 

When a service is requested, the kernel first finds the target process through the handle, then puts the request on the service process's todo queue.
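
Inside binder_transaction this lookup is roughly the following (a simplified sketch of the kernel code, error handling omitted; binder_get_ref searches the caller's RB tree of references):

if (tr->target.handle) {
    struct binder_ref *ref;
    ref = binder_get_ref(proc, tr->target.handle); // handle -> binder_ref in the caller's RB tree
    target_node = ref->node;                       // binder_ref -> the service's binder_node
} else {
    target_node = binder_context_mgr_node;         // handle 0 always means the service manager
}
target_proc = target_node->proc;                   // binder_node -> the process that owns the service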

o Calling Java from C

So far we have analyzed the C/C++ side. Java code reaches C functions simply through JNI. But reading requests from the kernel happens in C code (executeCommand), so how do we call services that are implemented in Java from C?

The JavaBBinder created in android_os_Binder_init wraps the Java-level Binder object.

JavaBBinder::onTransact then calls the execTransact method on the Java side:


        jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact, 
            code, (int32_t)&data, (int32_t)reply, flags); 
        jthrowable excep = env->ExceptionOccurred(); 
        if (excep) { 
            report_exception(env, excep, 
                "*** Uncaught remote exception!  " 
                "(Exceptions are not yet supported across processes.)"); 
            res = JNI_FALSE;  
  
            /* clean up JNI local ref -- we don't return to Java code */ 
            env->DeleteLocalRef(excep); 
        } 

o Broadcast messages

Binder itself does not provide broadcast messaging, but broadcasts can be implemented on top of the ActivityManagerService.
(frameworks/base/core/java/android/app/ActivityManagerNative.java)

To receive broadcast messages, implement the BroadcastReceiver interface and register it through ActivityManagerProxy::registerReceiver.

To trigger a broadcast, call ActivityManagerProxy::broadcastIntent. (Applications do not call it directly; they go through the Context wrapper around it.)

An introduction to Binder communication:
    Linux offers several IPC mechanisms: socket, named pipe, message queue, signal, and shared memory. Java likewise provides IPC facilities such as sockets and named pipes, so Android applications could of course use the usual Java IPC mechanisms. Looking through the Android source, however, communication between applications on the same device hardly ever uses those mechanisms; Binder communication is used instead. Google chose it for its efficiency. Binder communication is implemented by the Linux binder driver, and a Binder transaction resembles a thread migration: an IPC between two processes looks as if one process entered the other, executed some code there, and came back with the result. Binder's user space maintains a pool of available threads for every process, used to serve incoming IPC and to execute the process's local messages. Binder communication is synchronous, not asynchronous.
    Binder communication in Android is based on Service and Client. Every process that needs IBinder communication must create an IBinder interface. One process in the system manages all the system services, and Android does not allow users to add unauthorized system services (now that the source is open, we can of course modify some code to add low-level system services). For user programs, we likewise create a server, or Service, for inter-process communication. ActivityManagerService manages the creation, connection, and disconnection of all Java-level services, and every Activity is also started and loaded through this service; ActivityManagerService itself is loaded among the system services.
    Before the Android virtual machine starts, the system starts the service manager process. The service manager opens the binder driver and tells the binder kernel driver that this process will act as the System Service Manager; it then enters a loop, waiting for data from other processes. After a user creates a system service, it obtains a remote ServiceManager interface through defaultServiceManager, over which it calls addService to add the system service to the Service Manager process. A client can then obtain the IBinder object of the Service it wants to connect to via getService. This IBinder is a reference in the binder kernel to the Service's BBinder, so no two identical IBinder objects for the same service ever exist in the binder kernel; each client process likewise needs to open the binder driver. Once a user program holds this object, it can access the service object's methods through the binder kernel. Client and Service live in different processes, yet this mechanism achieves something like inter-thread migration: once the client has the IBinder interface returned for the Service, calling the Service's methods feels just like calling its own functions.
[Figure: how a client establishes a connection with a Service]

Let us start from the ServiceManager registration process and work out, step by step, how the flow above is implemented.

Source analysis of the ServiceManager registration process:
Service Manager Process (Service_manager.c):
    service_manager manages the Services provided by other processes. This program must be running before the Android runtime starts, otherwise ActivityManagerService in the Android Java VM cannot register itself.
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024); // open the /dev/binder driver

    if (binder_become_context_manager(bs)) { // register as the service manager in the binder kernel
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }
    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
It first opens the binder driver, then uses binder_become_context_manager to tell the Binder kernel driver, via ioctl, that this is the service manager process, and finally calls binder_loop to wait for data from other processes. BINDER_SERVICE_MANAGER is the handle of the service manager process; it is defined as:
#define BINDER_SERVICE_MANAGER ((void*) 0)
If the handle a client uses when obtaining a Service does not match this, the Service Manager will not accept the client's request. How the client sets this handle is described below.
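
binder_open itself is little more than open plus mmap; a simplified sketch based on servicemanager's binder.c (error handling omitted):

struct binder_state *binder_open(unsigned mapsize)
{
    struct binder_state *bs = malloc(sizeof(*bs));

    bs->fd = open("/dev/binder", O_RDWR);  // open the binder driver
    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0); // read-only buffer the driver fills
    return bs;
}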

Registration of the CameraService (Main_mediaservice.c):
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();              // Audio service
    MediaPlayerService::instantiate();        // MediaPlayer service
    CameraService::instantiate();             // Camera service
    ProcessState::self()->startThreadPool();  // start the process's thread pool
    IPCThreadState::self()->joinThreadPool(); // join the current thread to the thread pool
}

CameraService.cpp
void CameraService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.camera"), new CameraService());
}
This creates the CameraService object and registers it with the ServiceManager process.
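
On the proxy side, addService simply marshals the name and the Binder object into a Parcel and calls transact; roughly (a simplified sketch of BpServiceManager::addService in IServiceManager.cpp):

virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);        // the service name, e.g. "media.camera"
    data.writeStrongBinder(service); // flattened as BINDER_TYPE_BINDER; see the kernel code above
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readInt32() : err;
}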


How a client obtains the remote IServiceManager IBinder interface:
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
  
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    return gDefaultServiceManager;
}
The first time any process calls defaultServiceManager, gDefaultServiceManager is NULL, so the process obtains a ProcessState instance via ProcessState::self. Constructing ProcessState opens the Binder driver.
ProcessState.cpp
sp<ProcessState> ProcessState::self()
{
    if (gProcess != NULL) return gProcess;
  
    AutoMutex _l(gProcessMutex);
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}

ProcessState::ProcessState()
: mDriverFD(open_driver()) // open the /dev/binder driver
...........................
{
}

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    if (supportsProcesses()) {
        return getStrongProxyForHandle(0);
    } else {
        return getContextObject(String16("default"), caller);
    }
}
Android supports the Binder driver, so the program calls getStrongProxyForHandle. Here handle is 0, which matches BINDER_SERVICE_MANAGER in Service_manager exactly.
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder; // b is NULL on the first call
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
On the first call b is NULL, so a BpBinder object is created for it:
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    LOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}

void IPCThreadState::incWeakHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
}
getContextObject thus returns a BpBinder object, which is then passed to interface_cast:
interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
Expanding this template ultimately yields:
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)   
    {                                                                  
        sp<IServiceManager> intr;                                         
        if (obj != NULL) {                                             
            intr = static_cast<IServiceManager*>(                         
                obj->queryLocalInterface(                              
                        IServiceManager::descriptor).get());              
            if (intr == NULL) {                                        
                intr = new BpServiceManager(obj);                         
            }                                                          
        }                                                              
        return intr;
}
This returns a BpServiceManager object; obj here is the BpBinder object we created earlier.
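
The asInterface body above is not written by hand; it is generated by the DECLARE_META_INTERFACE / IMPLEMENT_META_INTERFACE macro pair in IInterface.h, used roughly like this:

// IServiceManager.h
class IServiceManager : public IInterface
{
public:
    DECLARE_META_INTERFACE(ServiceManager); // declares asInterface() and the interface descriptor
    ...
};

// IServiceManager.cpp
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
// expands to, among other things, the asInterface() body quoted above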

How a client obtains a Service's remote IBinder interface
Taking CameraService as the example (camera.cpp):
const sp<ICameraService>& Camera::getCameraService()
{
    Mutex::Autolock _l(mLock);
    if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
                break;
            LOGW("CameraService not published, waiting...");
            usleep(500000); // 0.5 s
        } while(true);
        if (mDeathNotifier == NULL) {
            mDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(mDeathNotifier);
        mCameraService = interface_cast<ICameraService>(binder);
    }
    LOGE_IF(mCameraService==0, "no CameraService!?");
    return mCameraService;
}
From the preceding analysis we know that sm is a BpServiceManager object:
    virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 5; n++){
            sp<IBinder> svc = checkService(name);
            if (svc != NULL) return svc;
            LOGI("Waiting for sevice %s...\n", String8(name).string());
            sleep(1);
        }
        return NULL;
    }
    virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
}
remote() here is the BpBinder object we obtained earlier, so checkService ends up calling BpBinder::transact:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
mHandle is 0. BpBinder calls down into IPCThreadState::transact, which sends the data to the Service Manager process associated with mHandle.
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
   ............................................................
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
  
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
  
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
       ..............................
  
    return err;
}

writeTransactionData builds the data to be sent:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle; // this handle is passed through to service_manager
    tr.code = code;
    tr.flags = binderFlags;
    ...............
}
waitForResponse then calls talkWithDriver to read from and write to the Binder kernel. When the Binder kernel receives the data, a thread from service_manager's thread pool wakes up; service_manager looks up the requested service (CameraService here) and calls binder_send_reply to write the result back into the Binder kernel.
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
     
..............................................   
}
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
   ............................................
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
...................................................
}
The BINDER_WRITE_READ ioctl above is how user space reads from and writes to the binder kernel.
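
Stripped of its bookkeeping, talkWithDriver fills in a binder_write_read (the structure shown in part 1.1) and issues that ioctl; a minimal sketch:

binder_write_read bwr;
bwr.write_buffer   = (unsigned long)mOut.data(); // commands for the driver, e.g. BC_TRANSACTION
bwr.write_size     = mOut.dataSize();
bwr.write_consumed = 0;
bwr.read_buffer    = (unsigned long)mIn.data();  // the driver fills this with return commands, e.g. BR_REPLY
bwr.read_size      = mIn.dataCapacity();
bwr.read_consumed  = 0;

if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) {
    // bwr.write_consumed / bwr.read_consumed report how much the driver processed
}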

How Client A talks to the Binder kernel:
(kernel/drivers/android/binder.c)
static int binder_open(struct inode *nodp, struct file *filp)
{
struct binder_proc *proc;

if (binder_debug_mask & BINDER_DEBUG_OPEN_CLOSE)
   printk(KERN_INFO "binder_open: %d:%d\n", current->group_leader->pid, current->pid);

proc = kzalloc(sizeof(*proc), GFP_KERNEL);
if (proc == NULL)
   return -ENOMEM;
get_task_struct(current);
proc->tsk = current;         // save the task struct of the process that opened /dev/binder
INIT_LIST_HEAD(&proc->todo);
init_waitqueue_head(&proc->wait);
proc->default_priority = task_nice(current);
mutex_lock(&binder_lock);
binder_stats.obj_created[BINDER_STAT_PROC]++;
hlist_add_head(&proc->proc_node, &binder_procs);
proc->pid = current->group_leader->pid;
INIT_LIST_HEAD(&proc->delivered_death);
filp->private_data = proc;
mutex_unlock(&binder_lock);

if (binder_proc_dir_entry_proc) {
   char strbuf[11];
   snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
   create_proc_read_entry(strbuf, S_IRUGO, binder_proc_dir_entry_proc, binder_read_proc_proc, proc); // create a proc entry for the current process
}
return 0;
}
From this we can see that the information of every process that opens /dev/binder is kept inside the binder kernel, so when a process later calls ioctl to talk to the kernel binder, the binder kernel can look up the calling process's information. BINDER_WRITE_READ is the crucial command a process uses when calling ioctl to communicate with the Binder kernel; as seen above, the command that talkWithDriver sends from IPCThreadState::transact is exactly BINDER_WRITE_READ.
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;

/*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/
    // the calling process is suspended here; the caller blocks until the service returns
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret)
   return ret;

mutex_lock(&binder_lock);
thread = binder_get_thread(proc); // get the caller's thread-pool data structure from its process info
if (thread == NULL) {
   ret = -ENOMEM;
   goto err;
}

switch (cmd) {
case BINDER_WRITE_READ: { // the ioctl cmd set by talkWithDriver in IPCThreadState
   struct binder_write_read bwr;
   if (size != sizeof(struct binder_write_read)) {
    ret = -EINVAL;
    goto err;
   }
   if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
    ret = -EFAULT;
    goto err;
   }
   if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
    printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
          proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer);
   if (bwr.write_size > 0) {
    ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
    if (ret < 0) {
     bwr.read_consumed = 0;
     if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
      ret = -EFAULT;
     goto err;
    }
   }
   if (bwr.read_size > 0) { // data to be read back into the caller process
    ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
    if (!list_empty(&proc->todo))
     wake_up_interruptible(&proc->wait); // wake up the suspended caller process
    if (ret < 0) {
     if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
      ret = -EFAULT;
     goto err;
    }
   }
    .........................................
}

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread, void __user *buffer, int size, signed long *consumed)
{
uint32_t cmd;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;

while (ptr < end && thread->return_error == BR_OK) {
   if (get_user(cmd, (uint32_t __user *)ptr)) // copy cmd from user space into kernel space
    return -EFAULT;
   ptr += sizeof(uint32_t);
   if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
    binder_stats.bc[_IOC_NR(cmd)]++;
    proc->stats.bc[_IOC_NR(cmd)]++;
    thread->stats.bc[_IOC_NR(cmd)]++;
   }
   switch (cmd) {
   case BC_INCREFS:
.........................................
        case BC_TRANSACTION: // the cmd IPCThreadState set via writeTransactionData
   case BC_REPLY: {
    struct binder_transaction_data tr;

    if (copy_from_user(&tr, ptr, sizeof(tr)))
     return -EFAULT;
    ptr += sizeof(tr);
    binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
    break;
   }
........................................

static void
binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
     ..............................................
    if (reply) { // cmd != BC_REPLY, so this branch is not taken here
        ......................................
    } else {
        if (tr->target.handle) { // not satisfied for service_manager (handle == 0)
            .......................................
        } else { // here we pick up the process info that service_manager registered with the binder kernel
            target_node = binder_context_mgr_node; // BINDER_SET_CONTEXT_MGR registered the
            if (target_node == NULL) {             // service manager
                return_error = BR_DEAD_REPLY;
                goto err_no_context_mgr_node;
            }
        }
   e->to_node = target_node->debug_id;
   target_proc = target_node->proc; // the target process: service_manager's binder_proc structure
   if (target_proc == NULL) {
    return_error = BR_DEAD_REPLY;
    goto err_dead_binder;
   }
   ....................
}
if (target_thread) {
   e->to_thread = target_thread->pid;
   target_list = &target_thread->todo;
   target_wait = &target_thread->wait; // the suspended service manager thread
} else {
   target_list = &target_proc->todo;
   target_wait = &target_proc->wait;
}
............................................
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
    ..........................
    ref = binder_get_ref_for_node(target_proc, node); // create a reference in the binder kernel
    ..........................                        // to the service that was found
   } break;

............................................
if (target_wait)
      wake_up_interruptible(target_wait);   // wake the suspended thread to handle the caller process's request
............................................// for how the command is handled, see svcmgr_handler
}
   At this point getService has reached the service manager process; when the request arrives, the service manager is woken up if it was suspended. Now let us look at the binder_loop function in service manager.
Service_manager.c
void binder_loop(struct binder_state *bs, binder_handler func)
{
    .................................
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // the process suspends here if there is no request to handle
        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }
res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func); // func here is
       ...................................                   // svcmgr_handler
    }
}
When a request arrives, it is parsed here, and the callback registered earlier is invoked to look up the service the caller asked for:
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uint32_t *ptr, uint32_t size, binder_handler func)
{
        ....................................
      switch(cmd) {
           ......
          case BR_TRANSACTION: {
            struct binder_txn *txn = (void *) ptr;
            if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
                LOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
                res = func(bs, txn, &msg, &reply);    // look up the service the caller requested
       binder_send_reply(bs, &reply, txn->data, res); // return the service that was found to the caller
            }
            ptr += sizeof(*txn) / sizeof(uint32_t);
            break;
         ........
        }

}
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       void *buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        void *buffer;
        uint32_t cmd_reply;
        struct binder_txn txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
data.cmd_reply = BC_REPLY; // replay binder_thread_write above with cmd == BC_REPLY to see how
data.txn.target = 0;       // service manager returns the service it found to the caller
   ..........................
    binder_write(bs, &data, sizeof(data)); //调用ioctl与binder kernel通信
}
Once we leave here, the caller is woken up, and the client process holds a reference, inside the binder kernel, to the IBinder object of the requested service; it is a remote BBinder object.
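
On the caller's side, reply.readStrongBinder() turns the returned handle back into a proxy object; a simplified sketch of unflatten_binder in Parcel.cpp:

status_t unflatten_binder(const sp<ProcessState>& proc,
                          const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);
    switch (flat->type) {
    case BINDER_TYPE_BINDER:  // a local object living in this very process
        *out = reinterpret_cast<IBinder*>(flat->cookie);
        return NO_ERROR;
    case BINDER_TYPE_HANDLE:  // a remote object: build a BpBinder around the handle
        *out = proc->getStrongProxyForHandle(flat->handle);
        return NO_ERROR;
    }
    return BAD_TYPE;
}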

Client-to-Service communication once the connection is established:
virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient)
    {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        data.writeStrongBinder(cameraClient->asBinder());
        remote()->transact(BnCameraService::CONNECT, data, &reply);
        return interface_cast<ICamera>(reply.readStrongBinder());
    }
As analyzed above, remote() here is the CameraService proxy object we obtained, and the caller process now crosses into CameraService. Every Android process creates a thread pool that handles requests from other processes; while there is no data the threads are suspended, and at this point the binder kernel wakes one of them up:
IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
   
    status_t result;
    do {
        int32_t cmd;
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail();   // data passed from the binder kernel to the service
            if (IN < sizeof(int32_t)) continue;
            cmd = mIn.readInt32();
            IF_LOG_COMMANDS() {
                alog << "Processing top-level Command: "
                    << getReturnString(cmd) << endl;
            }
          result = executeCommand(cmd); // the service executes the command requested by the binder kernel
        }
       
        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);
      .......................
}

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
   
switch (cmd) {
.........................
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            LOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;
           
            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(size_t), freeBuffer, this);
           
            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
           
            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
           
            //LOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);
           
            Parcel reply;
            .........................
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie); // the service's Binder object, i.e. CameraService
                const status_t error = b->transact(tr.code, buffer, &reply, 0); // this invokes
                if (error < NO_ERROR) reply.setError(error);                    // CameraService's onTransact
               
            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, 0);
                if (error < NO_ERROR) reply.setError(error);
            }
           
            //LOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);
           
            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0); // return the data to the caller process through the binder kernel
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }
           
            mCallingPid = origPid;
            mCallingUid = origUid;
           
            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
        ..................................   
   }
        break;
}
    ..................................
    if (result != NO_ERROR) {
        mLastError = result;
    }
    return result;
}
This calls the transact function of the CameraService BBinder object:
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
   .....................
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }
    ...................
    return err;
}

That in turn calls CameraService's onTransact function; CameraService derives from BBinder (via BnCameraService).
status_t BnCameraService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case CONNECT: {
            CHECK_INTERFACE(ICameraService, data, reply);
            sp<ICameraClient> cameraClient = interface_cast<ICameraClient>(data.readStrongBinder());
            sp<ICamera> camera = connect(cameraClient); // the real handler
            reply->writeStrongBinder(camera->asBinder());
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
This completes one full round of communication from client to service.


Designing a multi-client Service
A Service can be connected to by different clients. By multi-client we mean that the Service creates a distinct IClient interface for each client. If you have done AIDL programming, you know a Service exposes one IService interface to its clients; through defaultServiceManager->getService we get a BpBinder interface for the service, and calling transact through it lets us talk to the service. That already gives a simple service/client pair, but it has a drawback: the single IService is open to all clients. To distinguish clients, every client could pass the Service some identifying attribute when connecting; that works, but it is clumsy. A Camera, for instance, may have more than one sensor, each with different capabilities, and this approach becomes awkward there. Instead we can borrow the multi-client design used in QT: the Service creates a separate IClient interface for each client, and the IService interface is used only for establishing the connection between Service and client. For Camera, with multiple sensors we can then open a different device in the Service for each client.
import android.os.IBinder;
import android.os.RemoteException;
public class TestServerServer extends android.app.testServer.ITestServer.Stub
{
int mClientCount = 0;
testServerClient mClient[] = new testServerClient[16]; // demo: fixed capacity; the original never allocated this
@Override
public android.app.testServer.ITestClient.Stub connect(ITestClient client) throws RemoteException
{
   // TODO Auto-generated method stub
testServerClient tClient = new testServerClient(this, client); // create a distinct
   mClient[mClientCount] = tClient;                            // IClient for each client
   mClientCount ++;
   System.out.printf("*** Server connect client is %s", client.asBinder());
   return tClient;
}

@Override
public void receivedData(int count) throws RemoteException
{
   // TODO Auto-generated method stub
 
}
public static class testServerClient extends android.app.testServer.ITestClient.Stub
{
   public android.app.testServer.ITestClient mClient;
   public TestServerServer mServer;
   public testServerClient(TestServerServer tServer, android.app.testServer.ITestClient tClient)
   {
    mServer = tServer;
    mClient = tClient;
   }
   public IBinder asBinder()
   {
    // TODO Auto-generated method stub
    return this;
   }
}
}
This is merely a demo Service; to add it as a system service you would also have to modify the Android code to get around the permission check!

Summary:
    Suppose a Client A process needs to establish IPC with a Service B process. From the preceding analysis the flow is as follows:
1: Service B opens the Binder driver and registers its process information with the kernel.
2: Service B adds its Service information to the service_manager process via addService; the kernel creates a binder_ref for the Service.
3: Service B's thread pool suspends, waiting for client requests.
4: Client A calls open_driver to open the Binder driver and registers its own process information with the kernel.
5: Client A calls defaultServiceManager->getService to obtain Service B's IBinder object held in the kernel.
6: Client A communicates with the Binder kernel through transact, and the Binder kernel suspends Client A.
7: The Binder kernel resumes a thread from Service B's thread pool, which handles the client's request in joinThreadPool.
8: The Binder kernel suspends Service B and writes Service B's returned data to Client A.
9: The Binder kernel resumes Client A.
The Binder kernel driver plays the role of a middleman between Client A and Service B. For every IBinder object passed through transact, the Binder kernel creates a unique associated Binder object, which is how different clients are distinguished.

 
