Android Inter-Process Communication (IPC) Mechanism: Binder

A Brief Introduction to the Android Inter-Process Communication (IPC) Mechanism Binder, and a Study Plan

        In the Android system, every application is made up of a number of Activities and Services. Services generally run in their own independent processes, while Activities may run in the same process or in different ones. So how do Activities and Services that do not live in the same process communicate with each other? That is exactly what this article introduces: the Binder inter-process communication mechanism.

        As we know, the Android system is based on the Linux kernel, and the Linux kernel inherits and remains compatible with the rich set of Unix inter-process communication (IPC) mechanisms. There are the traditional pipe (Pipe), signal (Signal) and trace (Trace), which can only be used between a parent process and its children, or between sibling processes; the named pipe (Named Pipe) was added later, so that IPC was no longer limited to parent-child or sibling processes; to better support transaction processing in commercial applications, AT&T's Unix System V added three mechanisms collectively known as "System V IPC": message queues (Message), shared memory (Shared Memory) and semaphores (Semaphore); later still, BSD Unix substantially extended the System V IPC mechanisms with an IPC mechanism called the socket (Socket). For a more detailed treatment of these IPC mechanisms, see the book 《Linux内核源代码情景分析》 recommended in the article Android学习启动篇.

        The Android system, however, adopted none of the mechanisms above; instead it uses the Binder mechanism. Was that out of consideration for the weaker hardware and smaller memory of mobile devices? We do not know. Binder is not a brand-new IPC mechanism invented for Android either: it is based on OpenBinder. OpenBinder was originally developed by Be Inc. and later adopted by Palm Inc. Dianne Hackborn, the author of OpenBinder, now works at Google on the Android platform.

        As mentioned repeatedly above, Binder is an inter-process communication mechanism. It is a distributed component architecture similar to COM and CORBA; put plainly, it provides remote procedure call (RPC) functionality. Taken literally, "binder" means glue, so what does it glue together? The Binder mechanism in Android consists of a set of components: Client, Server, Service Manager and the Binder driver. Client, Server and Service Manager run in user space, while the Binder driver runs in kernel space. Binder is the glue that binds these four components together. The core component is the Binder driver; Service Manager provides auxiliary management facilities; and Client and Server carry out their Client-Server communication on top of the infrastructure provided by the Binder driver and Service Manager. Service Manager and the Binder driver are already implemented in the Android platform; developers only need to implement their own Client and Server components following the conventions. That is easier said than done: for beginners, the Binder mechanism is the hardest part of the Android system to understand, yet from the perspective of both system development and application development it is the most important part of the system, so it is well worth studying in depth. And the best way to understand how Binder works is to read the Binder-related source code; as Linus Torvalds, the father of Linux, famously put it: RTFSC, Read The Fucking Source Code.

        Although reading the Binder source code is the best way to learn the Binder mechanism, we should not go into battle unprepared: the Binder source code is rather dry and hard to follow, and some background theory helps a great deal. There is no shortage of material about Binder on the net, so rather than write it all out again, I strongly recommend the following two articles:

        Android深入浅出之Binder机制

        Android Binder设计与实现 – 设计篇

        Android深入浅出之Binder机制 starts from concrete scenarios and gives an in-depth account of the relationships among the three user-space components, Client, Server and Service Manager, while Android Binder设计与实现 describes in detail the data structures and design of the Binder driver in kernel space. Many thanks to the two authors for such good Binder learning material. To summarize, the relationships among the four components of the Android Binder mechanism, Client, Server, Service Manager and the Binder driver, are shown in the figure below:

        [Figure: Client, Server and Service Manager in user space communicating through the Binder driver in kernel space]

        1. Client, Server and Service Manager are implemented in user space, while the Binder driver is implemented in kernel space.

        2. The Binder driver and Service Manager are already implemented in the Android platform; developers only need to implement their own Client and Server in user space.

        3. The Binder driver exposes the device file /dev/binder for interaction with user space; Client, Server and Service Manager communicate with the Binder driver through the open and ioctl file operations.

        4. Inter-process communication between Client and Server is mediated by the Binder driver.

        5. Service Manager is a daemon that manages Servers and provides Clients with the ability to look up Server interfaces.

        At this point we have an intuitive feel for the Binder mechanism, but it is still hard to trace a complete IPC transaction from top to bottom. The plan, therefore, is to analyze the Binder source code through the following four scenarios, to understand the mechanism further:

        1. How does Service Manager become a daemon? That is, how does Service Manager tell the Binder driver that it is the context manager of the Binder mechanism?

        2. How do Server and Client obtain the Service Manager interface? That is, how is the defaultServiceManager interface implemented?

        3. How does a Server bring up its own services? How does Service Manager serve a Server during its startup? That is, how is the IServiceManager::addService interface implemented?

        4. How does Service Manager serve Clients? That is, how is the IServiceManager::getService interface implemented?

        The next four articles will analyze the Binder source code along these four scenarios, each covering the relevant code from user space down to kernel space. Why is there no scenario for how Client and Server communicate with each other? Because Service Manager, while acting as the daemon, also plays the role of a Server. Therefore, once we understand the third and fourth scenarios, we also understand how Client and Server in the Binder mechanism communicate with each other through the Binder driver.

        To make the principles and implementation of the Android Binder IPC mechanism easier to describe, the next four articles present the implementation in C/C++. Application development on Android, however, is done in Java, so the final article will describe in detail the Java interface to the Binder mechanism in the application framework layer:

        5. Source code analysis of the Java interface, in the application framework layer, to the Android Binder IPC mechanism.

On How Service Manager Becomes the Daemon of the Android Inter-Process Communication (IPC) Mechanism Binder


        The previous article, Android进程间通信(IPC)机制Binder简要介绍和学习计划, briefly introduced the overall architecture of the Android Binder IPC mechanism, which consists of four components: Client, Server, Service Manager and the Binder driver. This article focuses on the Service Manager component, the daemon of the whole Binder mechanism, which manages the Servers created by developers and provides Clients with the ability to look up remote Server interfaces.

        Since Service Manager manages Servers and lets Clients query for remote Server interfaces, it necessarily has to communicate with both Servers and Clients. As we know, Service Manager, Client and Server each run in their own independent process, so communication among them is itself inter-process communication, and it too goes through the Binder mechanism. Therefore, while acting as the daemon of the Binder mechanism, Service Manager also acts as a Server, albeit a special kind of Server; we will see below what makes it special.

        There is quite a lot of source code related to Service Manager, and we will not analyze every line of it. Instead we will follow one main thread, how Service Manager becomes the daemon of the whole Binder mechanism, and work step by step through the relevant code, from user space down to kernel space. Before reading on, readers are encouraged to read the two references mentioned in the previous article, Android深入浅出之Binder机制 and Android Binder设计与实现, to become familiar with the concepts and data structures involved; that will help in understanding the code analyzed below.

        The user-space source code of Service Manager lives in the frameworks/base/cmds/servicemanager directory and consists mainly of three files: binder.h, binder.c and service_manager.c. The entry point of Service Manager is the main function in service_manager.c:

int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    if (binder_become_context_manager(bs)) {
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
        The main function does three things: first, it opens the Binder device file; second, it tells the Binder driver that it is the Binder context manager, i.e. the daemon we mentioned earlier; third, it enters an infinite loop, acting as a Server and waiting for Client requests. Before diving into these three steps, let us look at the definitions of the struct binder_state structure and the BINDER_SERVICE_MANAGER macro used here.

        struct binder_state is defined in frameworks/base/cmds/servicemanager/binder.c:

struct binder_state
{
    int fd;
    void *mapped;
    unsigned mapsize;
};
        fd is a file descriptor, namely the descriptor of the opened /dev/binder device file; mapped is the start address at which the device file /dev/binder is mapped into the process address space; and mapsize is the size of that memory-mapped area.

        The BINDER_SERVICE_MANAGER macro is defined in frameworks/base/cmds/servicemanager/binder.h:

/* the one magic object */
#define BINDER_SERVICE_MANAGER ((void*) 0)
        It says that Service Manager's handle is 0. The Binder communication mechanism uses handles to stand for remote interfaces, a concept much like the handles used in Windows programming. As noted above, while acting as the daemon, Service Manager also acts as a Server; when it is used as a remote interface, its handle value is 0, and that is what makes it special. Every other Server's remote interface has a handle greater than 0, assigned automatically by the Binder driver.
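
        To get a feel for what handle 0 means in practice: the small test client bctest.c in the same directory addresses Service Manager simply by using this value as the target of its calls. A minimal sketch along those lines (illustrative and slightly simplified; svcmgr_lookup is the helper in bctest.c, and the service name is just an example):

struct binder_state *bs = binder_open(128*1024);
void *svcmgr = BINDER_SERVICE_MANAGER;  /* handle 0 */

/* Any transaction sent to handle 0 is routed by the Binder driver
 * to the context manager, i.e. Service Manager. */
void *handle = svcmgr_lookup(bs, svcmgr, "media.player");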

        The function first opens the Binder device file by calling binder_open, which lives in frameworks/base/cmds/servicemanager/binder.c:

struct binder_state *binder_open(unsigned mapsize)
{
    struct binder_state *bs;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return 0;
    }

    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

        /* TODO: check version */

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return 0;
}
        The device file /dev/binder is opened with the open file-operation function. This device file is created when the Binder driver module initializes, so let us first look at how it is created. Go to the kernel/common/drivers/staging/android directory and open binder.c; there you can see the module initialization entry binder_init:

static struct file_operations binder_fops = {
	.owner = THIS_MODULE,
	.poll = binder_poll,
	.unlocked_ioctl = binder_ioctl,
	.mmap = binder_mmap,
	.open = binder_open,
	.flush = binder_flush,
	.release = binder_release,
};

static struct miscdevice binder_miscdev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name = "binder",
	.fops = &binder_fops
};

static int __init binder_init(void)
{
	int ret;

	binder_proc_dir_entry_root = proc_mkdir("binder", NULL);
	if (binder_proc_dir_entry_root)
		binder_proc_dir_entry_proc = proc_mkdir("proc", binder_proc_dir_entry_root);
	ret = misc_register(&binder_miscdev);
	if (binder_proc_dir_entry_root) {
		create_proc_read_entry("state", S_IRUGO, binder_proc_dir_entry_root, binder_read_proc_state, NULL);
		create_proc_read_entry("stats", S_IRUGO, binder_proc_dir_entry_root, binder_read_proc_stats, NULL);
		create_proc_read_entry("transactions", S_IRUGO, binder_proc_dir_entry_root, binder_read_proc_transactions, NULL);
		create_proc_read_entry("transaction_log", S_IRUGO, binder_proc_dir_entry_root, binder_read_proc_transaction_log, &binder_transaction_log);
		create_proc_read_entry("failed_transaction_log", S_IRUGO, binder_proc_dir_entry_root, binder_read_proc_transaction_log, &binder_transaction_log_failed);
	}
	return ret;
}

device_initcall(binder_init);

        The device file is created inside the misc_register function. Registration of misc devices was covered in the article Android日志系统驱动程序Logger源代码分析; interested readers may refer to it. The rest of the logic mainly creates various Binder-related files under /proc for user access. From the device file operations binder_fops we can see that the statement in the earlier binder_open function:

bs->fd = open("/dev/binder", O_RDWR);

        takes us into the binder_open function of the Binder driver:

static int binder_open(struct inode *nodp, struct file *filp)
{
	struct binder_proc *proc;

	if (binder_debug_mask & BINDER_DEBUG_OPEN_CLOSE)
		printk(KERN_INFO "binder_open: %d:%d\n", current->group_leader->pid, current->pid);

	proc = kzalloc(sizeof(*proc), GFP_KERNEL);
	if (proc == NULL)
		return -ENOMEM;
	get_task_struct(current);
	proc->tsk = current;
	INIT_LIST_HEAD(&proc->todo);
	init_waitqueue_head(&proc->wait);
	proc->default_priority = task_nice(current);
	mutex_lock(&binder_lock);
	binder_stats.obj_created[BINDER_STAT_PROC]++;
	hlist_add_head(&proc->proc_node, &binder_procs);
	proc->pid = current->group_leader->pid;
	INIT_LIST_HEAD(&proc->delivered_death);
	filp->private_data = proc;
	mutex_unlock(&binder_lock);

	if (binder_proc_dir_entry_proc) {
		char strbuf[11];
		snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
		remove_proc_entry(strbuf, binder_proc_dir_entry_proc);
		create_proc_read_entry(strbuf, S_IRUGO, binder_proc_dir_entry_proc, binder_read_proc_proc, proc);
	}

	return 0;
}
         The main job of this function is to create a struct binder_proc data structure to hold the context of the process that opened the device file /dev/binder, and to store that context in the private_data member of the open-file structure struct file, so that subsequent file operations can retrieve the process context through struct file. The process context is also stored in a global hash list, binder_procs, for the driver's internal use. binder_procs is defined at the top of the file:

static HLIST_HEAD(binder_procs);
        The struct binder_proc structure is likewise defined in kernel/common/drivers/staging/android/binder.c:

struct binder_proc {
	struct hlist_node proc_node;
	struct rb_root threads;
	struct rb_root nodes;
	struct rb_root refs_by_desc;
	struct rb_root refs_by_node;
	int pid;
	struct vm_area_struct *vma;
	struct task_struct *tsk;
	struct files_struct *files;
	struct hlist_node deferred_work_node;
	int deferred_work;
	void *buffer;
	ptrdiff_t user_buffer_offset;

	struct list_head buffers;
	struct rb_root free_buffers;
	struct rb_root allocated_buffers;
	size_t free_async_space;

	struct page **pages;
	size_t buffer_size;
	uint32_t buffer_free;
	struct list_head todo;
	wait_queue_head_t wait;
	struct binder_stats stats;
	struct list_head delivered_death;
	int max_threads;
	int requested_threads;
	int requested_threads_started;
	int ready_threads;
	long default_priority;
};
        This structure has many members; here we only briefly explain four of them, threads, nodes, refs_by_desc and refs_by_node, and will describe the others as we encounter them. These four members are all red-black tree roots; in other words, a binder_proc hangs four red-black trees off itself. The threads tree holds the threads within the binder_proc process that handle user requests, the maximum number of which is determined by max_threads; the nodes tree holds the Binder entities within the binder_proc process; and the refs_by_desc and refs_by_node trees hold the Binder references within the binder_proc process, i.e. its references to Binder entities in other processes. The latter two organize the same references in two ways, one keyed by handle and the other keyed by the address of the referenced entity node; they describe the same things, and two trees are kept purely to make internal lookups convenient.
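
        As a hint of how these trees are used, here is a lookup of a Binder reference by handle in the refs_by_desc tree, modelled on the driver's own binder_get_ref helper (shown slightly simplified; binder_ref is the driver's structure for a Binder reference, with a desc member holding the handle and an rb_node_desc member linking it into this tree):

static struct binder_ref *binder_get_ref(struct binder_proc *proc, uint32_t desc)
{
	struct rb_node *n = proc->refs_by_desc.rb_node;
	struct binder_ref *ref;

	while (n) {
		/* standard red-black tree walk, keyed by the handle value */
		ref = rb_entry(n, struct binder_ref, rb_node_desc);
		if (desc < ref->desc)
			n = n->rb_left;
		else if (desc > ref->desc)
			n = n->rb_right;
		else
			return ref;
	}
	return NULL;
}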

         With that, opening the device file /dev/binder is done. The next step is the mmap memory-mapping operation on the opened device file:

bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);

         which corresponds to the Binder driver's binder_mmap function:

static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
	int ret;
	struct vm_struct *area;
	struct binder_proc *proc = filp->private_data;
	const char *failure_string;
	struct binder_buffer *buffer;

	if ((vma->vm_end - vma->vm_start) > SZ_4M)
		vma->vm_end = vma->vm_start + SZ_4M;

	if (binder_debug_mask & BINDER_DEBUG_OPEN_CLOSE)
		printk(KERN_INFO
			"binder_mmap: %d %lx-%lx (%ld K) vma %lx pagep %lx\n",
			proc->pid, vma->vm_start, vma->vm_end,
			(vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags,
			(unsigned long)pgprot_val(vma->vm_page_prot));

	if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {
		ret = -EPERM;
		failure_string = "bad vm_flags";
		goto err_bad_arg;
	}
	vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;

	if (proc->buffer) {
		ret = -EBUSY;
		failure_string = "already mapped";
		goto err_already_mapped;
	}

	area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
	if (area == NULL) {
		ret = -ENOMEM;
		failure_string = "get_vm_area";
		goto err_get_vm_area_failed;
	}
	proc->buffer = area->addr;
	proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;

#ifdef CONFIG_CPU_CACHE_VIPT
	if (cache_is_vipt_aliasing()) {
		while (CACHE_COLOUR((vma->vm_start ^ (uint32_t)proc->buffer))) {
			printk(KERN_INFO "binder_mmap: %d %lx-%lx maps %p bad alignment\n", proc->pid, vma->vm_start, vma->vm_end, proc->buffer);
			vma->vm_start += PAGE_SIZE;
		}
	}
#endif
	proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL);
	if (proc->pages == NULL) {
		ret = -ENOMEM;
		failure_string = "alloc page array";
		goto err_alloc_pages_failed;
	}
	proc->buffer_size = vma->vm_end - vma->vm_start;

	vma->vm_ops = &binder_vm_ops;
	vma->vm_private_data = proc;

	if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {
		ret = -ENOMEM;
		failure_string = "alloc small buf";
		goto err_alloc_small_buf_failed;
	}
	buffer = proc->buffer;
	INIT_LIST_HEAD(&proc->buffers);
	list_add(&buffer->entry, &proc->buffers);
	buffer->free = 1;
	binder_insert_free_buffer(proc, buffer);
	proc->free_async_space = proc->buffer_size / 2;
	barrier();
	proc->files = get_files_struct(current);
	proc->vma = vma;

	/*printk(KERN_INFO "binder_mmap: %d %lx-%lx maps %p\n", proc->pid, vma->vm_start, vma->vm_end, proc->buffer);*/
	return 0;

err_alloc_small_buf_failed:
	kfree(proc->pages);
	proc->pages = NULL;
err_alloc_pages_failed:
	vfree(proc->buffer);
	proc->buffer = NULL;
err_get_vm_area_failed:
err_already_mapped:
err_bad_arg:
	printk(KERN_ERR "binder_mmap: %d %lx-%lx %s failed %d\n", proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
	return ret;
}

         The function first obtains, through filp->private_data, the struct binder_proc structure created when the device file /dev/binder was opened. The memory-mapping information comes in via the vma parameter. Note that vma has type struct vm_area_struct, which describes a contiguous region of virtual address space; among the local variable declarations there is a similar structure, struct vm_struct, which also describes a contiguous region of virtual address space. What is the difference between the two? In Linux, the virtual addresses described by struct vm_area_struct are for use by processes, while those described by struct vm_struct are for use by the kernel; in both cases the underlying physical pages may be non-contiguous. A struct vm_area_struct covers the 0~3G address range, while a struct vm_struct covers the (3G + 896M + 8M) ~ 4G range. Why is the struct vm_struct range not simply 3G~4G? Because the 3G ~ (3G + 896M) range is used to map contiguous physical pages, with a simple relationship between virtual and physical addresses (it corresponds directly to the 0~896M physical range), and (3G + 896M) ~ (3G + 896M + 8M) is a safety guard region (for example, any pointer into this 8M range is invalid); struct vm_struct therefore uses the (3G + 896M + 8M) ~ 4G range to map non-contiguous physical pages. For background on Linux memory management, see chapter 8 of 《Understanding the Linux Kernel》, mentioned in the article Android学习启动篇.

        Why map the same physical page into both the process's virtual address space and the kernel's virtual address space at the same time? This is the very essence of the Binder IPC mechanism: with one physical page mapped into the process on one side and into the kernel on the other, one memory copy between process and kernel can be eliminated, improving IPC efficiency. For example, suppose a Client wants to pass a block of data to a Server. The usual approach is for the Client to copy the data from its own address space into kernel space, and then for the kernel to copy it from kernel space into the Server's address space, after which the Server can access the data; that takes two memory copies. With the approach above, the Client's data is copied into kernel space just once, and the Server then shares that data with the kernel; the whole process needs only one memory copy, which improves efficiency.
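
        The single copy happens when the driver moves transaction data from the sender directly into a buffer allocated from the receiver's mapped area. A heavily abridged sketch of the idea, modelled on the driver's binder_transaction function (t is the transaction being built, tr the request from user space):

	/* The buffer is allocated from the *target* process's mmap'ed area,
	 * so one copy_from_user from the sender makes the data directly
	 * visible in the target's user-space mapping -- no second copy. */
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
			tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size))
		return -EFAULT;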

        With the idea behind binder_mmap explained, its logic is easy to follow. First, though, a few members of struct binder_proc deserve explanation. The buffer member is a void* pointer giving the start address, in kernel space, of the physical memory to be mapped; buffer_size is a size_t giving the size of the memory to map; pages is an array of struct page* pointers, struct page being the data structure that describes a physical page; and user_buffer_offset is a ptrdiff_t holding the difference between the kernel's virtual address and the process's virtual address, i.e. if a physical page's virtual address in kernel space is addr, then its virtual address in the process's space is addr + user_buffer_offset.
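
        The address arithmetic implied by user_buffer_offset comes down to two lines; this is just the invariant stated as code, not a fragment of the driver:

	/* The same physical page is visible at two virtual addresses: */
	void *kern_addr = proc->buffer;                          /* kernel mapping  */
	void *user_addr = kern_addr + proc->user_buffer_offset;  /* process mapping */
	/* because user_buffer_offset == vma->vm_start - (uintptr_t)proc->buffer */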

        Next, a word about how the Binder driver manages this mapped address space, that is, the range buffer ~ (buffer + buffer_size). The range is managed in segments, each described by a struct binder_buffer:

struct binder_buffer {
	struct list_head entry; /* free and allocated entries by addesss */
	struct rb_node rb_node; /* free entry by size or allocated entry */
				/* by address */
	unsigned free : 1;
	unsigned allow_user_free : 1;
	unsigned async_transaction : 1;
	unsigned debug_id : 29;

	struct binder_transaction *transaction;

	struct binder_node *target_node;
	size_t data_size;
	size_t offsets_size;
	uint8_t data[0];
};
        Each binder_buffer is linked through its entry member, in order from low address to high address, into the list headed by buffers in struct binder_proc. In addition, each binder_buffer is either in use or free, distinguished by the free member: free binder_buffers are linked through their rb_node member into the red-black tree rooted at free_buffers in struct binder_proc, while in-use binder_buffers are linked through rb_node into the tree rooted at allocated_buffers. This arrangement, of course, makes it easy to query and maintain this address space, as we will see in other parts of the code when we come to them.
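
        To see why keeping the free buffers in a size-keyed red-black tree pays off, here is a simplified best-fit search over proc->free_buffers, modelled loosely on the allocation path in the driver's binder_alloc_buf (binder_buffer_size is the driver's helper that computes a buffer's effective size):

	struct rb_node *n = proc->free_buffers.rb_node;
	struct binder_buffer *buffer, *best_fit = NULL;

	while (n) {	/* best fit: the smallest free buffer that is >= size */
		buffer = rb_entry(n, struct binder_buffer, rb_node);
		if (size < binder_buffer_size(proc, buffer)) {
			best_fit = buffer;	/* big enough; try to find smaller */
			n = n->rb_left;
		} else if (size > binder_buffer_size(proc, buffer)) {
			n = n->rb_right;	/* too small; look for a bigger one */
		} else {
			best_fit = buffer;	/* exact fit */
			break;
		}
	}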

        Now we can finally return to the binder_mmap function. It first performs some sanity checks on the parameters; for example, the size of the memory to map must not exceed SZ_4M, i.e. 4M. Looking back at the main function in service_manager.c, the value passed in is 128*1024 bytes, i.e. 128K, which passes the check. After the sanity checks, get_vm_area is called to obtain a free vm_struct region, and the buffer, user_buffer_offset, pages and buffer_size members of the proc structure are initialized; then binder_update_page_range is called to allocate one free physical page for the virtual address range proc->buffer ~ proc->buffer + PAGE_SIZE, and that range is described by one binder_buffer, which is inserted into the proc->buffers list and the proc->free_buffers red-black tree respectively; finally, the free_async_space, files and vma members of the proc structure are initialized.

        Let us now step into the binder_update_page_range function to see how the Binder driver maps a physical page into kernel space and process space at the same time:

static int binder_update_page_range(struct binder_proc *proc, int allocate,
	void *start, void *end, struct vm_area_struct *vma)
{
	void *page_addr;
	unsigned long user_page_addr;
	struct vm_struct tmp_area;
	struct page **page;
	struct mm_struct *mm;

	if (binder_debug_mask & BINDER_DEBUG_BUFFER_ALLOC)
		printk(KERN_INFO "binder: %d: %s pages %p-%p\n",
		       proc->pid, allocate ? "allocate" : "free", start, end);

	if (end <= start)
		return 0;

	if (vma)
		mm = NULL;
	else
		mm = get_task_mm(proc->tsk);

	if (mm) {
		down_write(&mm->mmap_sem);
		vma = proc->vma;
	}

	if (allocate == 0)
		goto free_range;

	if (vma == NULL) {
		printk(KERN_ERR "binder: %d: binder_alloc_buf failed to "
		       "map pages in userspace, no vma\n", proc->pid);
		goto err_no_vma;
	}

	for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
		int ret;
		struct page **page_array_ptr;
		page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];

		BUG_ON(*page);
		*page = alloc_page(GFP_KERNEL | __GFP_ZERO);
		if (*page == NULL) {
			printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
			       "for page at %p\n", proc->pid, page_addr);
			goto err_alloc_page_failed;
		}
		tmp_area.addr = page_addr;
		tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */;
		page_array_ptr = page;
		ret = map_vm_area(&tmp_area, PAGE_KERNEL, &page_array_ptr);
		if (ret) {
			printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
			       "to map page at %p in kernel\n",
			       proc->pid, page_addr);
			goto err_map_kernel_failed;
		}
		user_page_addr =
			(uintptr_t)page_addr + proc->user_buffer_offset;
		ret = vm_insert_page(vma, user_page_addr, page[0]);
		if (ret) {
			printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
			       "to map page at %lx in userspace\n",
			       proc->pid, user_page_addr);
			goto err_vm_insert_page_failed;
		}
		/* vm_insert_page does not seem to increment the refcount */
	}
	if (mm) {
		up_write(&mm->mmap_sem);
		mmput(mm);
	}
	return 0;

free_range:
	for (page_addr = end - PAGE_SIZE; page_addr >= start;
	     page_addr -= PAGE_SIZE) {
		page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];
		if (vma)
			zap_page_range(vma, (uintptr_t)page_addr +
				proc->user_buffer_offset, PAGE_SIZE, NULL);
err_vm_insert_page_failed:
		unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
err_map_kernel_failed:
		__free_page(*page);
		*page = NULL;
err_alloc_page_failed:
		;
	}
err_no_vma:
	if (mm) {
		up_write(&mm->mmap_sem);
		mmput(mm);
	}
	return -ENOMEM;
}
        This function can both allocate and free physical pages, distinguished by the allocate parameter; here we only care about allocation. The virtual address range to back with physical pages is (start ~ end). Skipping the checks at the beginning of the function, we go straight to the for loop in the middle:

	for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
		int ret;
		struct page **page_array_ptr;
		page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];

		BUG_ON(*page);
		*page = alloc_page(GFP_KERNEL | __GFP_ZERO);
		if (*page == NULL) {
			printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
			       "for page at %p\n", proc->pid, page_addr);
			goto err_alloc_page_failed;
		}
		tmp_area.addr = page_addr;
		tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */;
		page_array_ptr = page;
		ret = map_vm_area(&tmp_area, PAGE_KERNEL, &page_array_ptr);
		if (ret) {
			printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
			       "to map page at %p in kernel\n",
			       proc->pid, page_addr);
			goto err_map_kernel_failed;
		}
		user_page_addr =
			(uintptr_t)page_addr + proc->user_buffer_offset;
		ret = vm_insert_page(vma, user_page_addr, page[0]);
		if (ret) {
			printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
			       "to map page at %lx in userspace\n",
			       proc->pid, user_page_addr);
			goto err_vm_insert_page_failed;
		}
		/* vm_insert_page does not seem to increment the refcount */
	}
        It first calls alloc_page to allocate a physical page, which returns a struct page page descriptor; the struct vm_struct tmp_area is then initialized from it, and map_vm_area maps the physical page into the kernel address range described by tmp_area; next, page_addr + proc->user_buffer_offset yields the corresponding process virtual address, and vm_insert_page inserts the same physical page into the process address space, with the vma parameter standing for the process address space to insert into.
       That completes the description of the binder_open function in frameworks/base/cmds/servicemanager/binder.c. Returning to the main function in frameworks/base/cmds/servicemanager/service_manager.c, the next step is to call binder_become_context_manager to tell the Binder driver that this process is the context manager of the Binder mechanism, i.e. the daemon. binder_become_context_manager lives in frameworks/base/cmds/servicemanager/binder.c:

int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
       Here the ioctl file-operation function is used to tell the Binder driver that this process is the daemon; the command code is BINDER_SET_CONTEXT_MGR, with no argument. BINDER_SET_CONTEXT_MGR is defined as:

#define	BINDER_SET_CONTEXT_MGR		_IOW('b', 7, int)
       This takes us into the Binder driver's binder_ioctl function; we only look at the BINDER_SET_CONTEXT_MGR command:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	/*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret)
		return ret;

	mutex_lock(&binder_lock);
	thread = binder_get_thread(proc);
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
        ......
	case BINDER_SET_CONTEXT_MGR:
		if (binder_context_mgr_node != NULL) {
			printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n");
			ret = -EBUSY;
			goto err;
		}
		if (binder_context_mgr_uid != -1) {
			if (binder_context_mgr_uid != current->cred->euid) {
				printk(KERN_ERR "binder: BINDER_SET_"
					"CONTEXT_MGR bad uid %d != %d\n",
					current->cred->euid,
					binder_context_mgr_uid);
				ret = -EPERM;
				goto err;
			}
		} else
			binder_context_mgr_uid = current->cred->euid;
		binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
		if (binder_context_mgr_node == NULL) {
			ret = -ENOMEM;
			goto err;
		}
		binder_context_mgr_node->local_weak_refs++;
		binder_context_mgr_node->local_strong_refs++;
		binder_context_mgr_node->has_strong_ref = 1;
		binder_context_mgr_node->has_weak_ref = 1;
		break;
        ......
	default:
		ret = -EINVAL;
		goto err;
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	mutex_unlock(&binder_lock);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
	return ret;
}
        Before continuing with this function, two more data structures need explaining. One is struct binder_thread, which, as its name suggests, describes a thread, here the thread executing the binder_become_context_manager call.

struct binder_thread {
	struct binder_proc *proc;
	struct rb_node rb_node;
	int pid;
	int looper;
	struct binder_transaction *transaction_stack;
	struct list_head todo;
	uint32_t return_error; /* Write failed, return error code in read buf */
	uint32_t return_error2; /* Write failed, return error code in read */
		/* buffer. Used when sending a reply to a dead process that */
		/* we are also waiting on */
	wait_queue_head_t wait;
	struct binder_stats stats;
};
       proc is the process the thread belongs to. struct binder_proc has a threads member of type rb_root, the root of a red-black tree organizing all threads belonging to the process; the rb_node member of struct binder_thread is the node by which a thread is linked into that tree. The looper member indicates the thread's state and can take the following values:

enum {
	BINDER_LOOPER_STATE_REGISTERED  = 0x01,
	BINDER_LOOPER_STATE_ENTERED     = 0x02,
	BINDER_LOOPER_STATE_EXITED      = 0x04,
	BINDER_LOOPER_STATE_INVALID     = 0x08,
	BINDER_LOOPER_STATE_WAITING     = 0x10,
	BINDER_LOOPER_STATE_NEED_RETURN = 0x20
};
        As for the remaining members: transaction_stack is the transaction the thread is currently processing, todo is the list of data sent to this thread, return_error and return_error2 hold result codes for operations, wait is used to block the thread while it waits for an event, and stats keeps some statistics. We will analyze these members as we encounter them.
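
        As one example of how these members cooperate: when a thread later asks the driver for work, the driver decides whether the thread may sleep on the process-wide queue roughly like this (condensed from the driver's binder_thread_read):

	/* A thread with no transaction in progress and an empty private
	 * todo list is free to handle process-wide work. */
	wait_for_proc_work = thread->transaction_stack == NULL &&
				list_empty(&thread->todo);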

        The other data structure is struct binder_node, which describes a Binder entity:

struct binder_node {
	int debug_id;
	struct binder_work work;
	union {
		struct rb_node rb_node;
		struct hlist_node dead_node;
	};
	struct binder_proc *proc;
	struct hlist_head refs;
	int internal_strong_refs;
	int local_weak_refs;
	int local_strong_refs;
	void __user *ptr;
	void __user *cookie;
	unsigned has_strong_ref : 1;
	unsigned pending_strong_ref : 1;
	unsigned has_weak_ref : 1;
	unsigned pending_weak_ref : 1;
	unsigned has_async_transaction : 1;
	unsigned accept_fds : 1;
	int min_priority : 8;
	struct list_head async_todo;
};
        rb_node and dead_node form a union. If the Binder entity is still in normal use, rb_node links it into the red-black tree headed by proc->nodes, the tree that organizes all Binder entities belonging to the process; if the process owning the Binder entity has been destroyed while the entity is still referenced by other processes, the entity is instead placed into a hash list through dead_node. The proc member is the process this Binder entity belongs to. The refs member chains together all Binder references that refer to this entity. internal_strong_refs, local_weak_refs and local_strong_refs are the entity's reference counts. The ptr and cookie members hold, respectively, the entity's address in user space and its associated extra data. The remaining members we will analyze when we encounter them.

        Back in the binder_ioctl function: it first obtains the proc variable through filp->private_data, the same as in binder_mmap, and then obtains the thread information through binder_get_thread. Let us look at that function:

static struct binder_thread *binder_get_thread(struct binder_proc *proc)
{
	struct binder_thread *thread = NULL;
	struct rb_node *parent = NULL;
	struct rb_node **p = &proc->threads.rb_node;

	while (*p) {
		parent = *p;
		thread = rb_entry(parent, struct binder_thread, rb_node);

		if (current->pid < thread->pid)
			p = &(*p)->rb_left;
		else if (current->pid > thread->pid)
			p = &(*p)->rb_right;
		else
			break;
	}
	if (*p == NULL) {
		thread = kzalloc(sizeof(*thread), GFP_KERNEL);
		if (thread == NULL)
			return NULL;
		binder_stats.obj_created[BINDER_STAT_THREAD]++;
		thread->proc = proc;
		thread->pid = current->pid;
		init_waitqueue_head(&thread->wait);
		INIT_LIST_HEAD(&thread->todo);
		rb_link_node(&thread->rb_node, parent, p);
		rb_insert_color(&thread->rb_node, &proc->threads);
		thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;
		thread->return_error = BR_OK;
		thread->return_error2 = BR_OK;
	}
	return thread;
}
        Here, the pid of the current thread (current) is used as the key to search the red-black tree at proc->threads, to see whether binder_thread information has already been created for the current thread. In this scenario the current thread enters here for the first time, so it will certainly not be found, i.e. *p == NULL holds; a thread context structure binder_thread is therefore created for the current thread, its members initialized, and it is inserted into the tree at proc->threads, so that next time it can be found via proc. Note that at this point thread->looper = BINDER_LOOPER_STATE_NEED_RETURN.

        Back in binder_ioctl, further down there are two global variables, binder_context_mgr_node and binder_context_mgr_uid, defined as follows:

static struct binder_node *binder_context_mgr_node;
static uid_t binder_context_mgr_uid = -1;
        binder_context_mgr_node stands for the Service Manager entity, and binder_context_mgr_uid for the uid of the Service Manager daemon. In this scenario, the current thread is entering here for the first time, so binder_context_mgr_node is NULL and binder_context_mgr_uid is -1; binder_context_mgr_uid is therefore initialized to current->cred->euid, making the current process the daemon of the Binder mechanism, and a Binder entity is created for Service Manager through binder_new_node:

static struct binder_node *
binder_new_node(struct binder_proc *proc, void __user *ptr, void __user *cookie)
{
	struct rb_node **p = &proc->nodes.rb_node;
	struct rb_node *parent = NULL;
	struct binder_node *node;

	while (*p) {
		parent = *p;
		node = rb_entry(parent, struct binder_node, rb_node);

		if (ptr < node->ptr)
			p = &(*p)->rb_left;
		else if (ptr > node->ptr)
			p = &(*p)->rb_right;
		else
			return NULL;
	}

	node = kzalloc(sizeof(*node), GFP_KERNEL);
	if (node == NULL)
		return NULL;
	binder_stats.obj_created[BINDER_STAT_NODE]++;
	rb_link_node(&node->rb_node, parent, p);
	rb_insert_color(&node->rb_node, &proc->nodes);
	node->debug_id = ++binder_last_id;
	node->proc = proc;
	node->ptr = ptr;
	node->cookie = cookie;
	node->work.type = BINDER_WORK_NODE;
	INIT_LIST_HEAD(&node->work.entry);
	INIT_LIST_HEAD(&node->async_todo);
	if (binder_debug_mask & BINDER_DEBUG_INTERNAL_REFS)
		printk(KERN_INFO "binder: %d:%d node %d u%p c%p created\n",
		       proc->pid, current->pid, node->debug_id,
		       node->ptr, node->cookie);
	return node;
}
        Note that the ptr and cookie passed in here are both NULL. The function first checks whether a node keyed by ptr already exists in the proc->nodes red-black tree and, if so, returns NULL. In this scenario the current thread enters here for the first time, so no such node exists; a binder_node with ptr NULL is therefore created, its other members initialized, and it is inserted into the proc->nodes tree.

        After binder_new_node returns to binder_ioctl, the pointer to the newly created binder_node is stored in binder_context_mgr_node, and then the reference counts of binder_context_mgr_node are initialized.

        With that, the BINDER_SET_CONTEXT_MGR command is complete. Before binder_ioctl returns, it executes the following statement:

if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;

       Recall that when binder_get_thread ran earlier, thread->looper was BINDER_LOOPER_STATE_NEED_RETURN; after this statement executes, thread->looper = 0.

       Back in the main function in frameworks/base/cmds/servicemanager/service_manager.c, the next step is to call binder_loop to enter a loop and wait for Client requests. binder_loop is defined in frameworks/base/cmds/servicemanager/binder.c:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;
    
    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            LOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
       It first uses the binder_write function to issue the BC_ENTER_LOOPER command, telling the Binder driver that Service Manager is about to enter its loop.
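
       For reference, binder_write in the same file is a thin wrapper that issues a write-only BINDER_WRITE_READ ioctl, filling in only the write half of the struct binder_write_read introduced below; it looks roughly like this:

int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;               /* commands to push to the driver */
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;                  /* this call performs no read */
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0)
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    return res;
}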

       At this point we need to introduce the BINDER_WRITE_READ command code of the ioctl file operation on the device file /dev/binder. First its definition:

#define BINDER_WRITE_READ   		_IOWR('b', 1, struct binder_write_read)
       This I/O command code takes one argument, of type struct binder_write_read:

struct binder_write_read {
	signed long	write_size;	/* bytes to write */
	signed long	write_consumed;	/* bytes consumed by driver */
	unsigned long	write_buffer;
	signed long	read_size;	/* bytes to read */
	signed long	read_consumed;	/* bytes consumed by driver */
	unsigned long	read_buffer;
};
      Incidentally, most interaction between user-space programs and the Binder driver goes through the BINDER_WRITE_READ command. The data that write_buffer and read_buffer point to further specifies the actual operations to perform: the buffers hold command codes, and for transaction commands the payload that follows the code is a struct binder_transaction_data:

struct binder_transaction_data {
	/* The first two are only used for bcTRANSACTION and brTRANSACTION,
	 * identifying the target and contents of the transaction.
	 */
	union {
		size_t	handle;	/* target descriptor of command transaction */
		void	*ptr;	/* target descriptor of return transaction */
	} target;
	void		*cookie;	/* target object cookie */
	unsigned int	code;		/* transaction command */

	/* General information about the transaction. */
	unsigned int	flags;
	pid_t		sender_pid;
	uid_t		sender_euid;
	size_t		data_size;	/* number of bytes of data */
	size_t		offsets_size;	/* number of bytes of offsets */

	/* If this transaction is inline, the data immediately
	 * follows here; otherwise, it ends with a pointer to
	 * the data buffer.
	 */
	union {
		struct {
			/* transaction data */
			const void	*buffer;
			/* offsets from buffer to flat_binder_object structs */
			const void	*offsets;
		} ptr;
		uint8_t	buf[8];
	} data;
};
       有一个联合体target,当这个BINDER_WRITE_READ命令的目标对象是本地Binder实体时,就使用ptr来表示这个对象在本进程中的地址,否则就使用handle来表示这个Binder实体的引用。只有当目标对象是Binder实体时,cookie成员变量才有意义,表示一些附加数据,由Binder实体来解释这些附加数据。code表示要对目标对象请求的命令代码,请求命令代码有很多,这里就不一一列举了,具体可以参考kernel/common/drivers/staging/android/binder.h文件中定义的两个枚举类型BinderDriverReturnProtocol和BinderDriverCommandProtocol。顺便指出,在本文的场景中,BC_ENTER_LOOPER命令协议码是直接写入write_buffer缓冲区的,它不需要传输数据,因此后面并不跟随struct binder_transaction_data结构体。

       flags成员变量表示事务标志:

enum transaction_flags {
	TF_ONE_WAY	= 0x01,	/* this is a one-way call: async, no return */
	TF_ROOT_OBJECT	= 0x04,	/* contents are the component's root object */
	TF_STATUS_CODE	= 0x08,	/* contents are a 32-bit status code */
	TF_ACCEPT_FDS	= 0x10,	/* allow replies with file descriptors */
};
      每一个标志位所表示的意义看注释就行了,遇到时再具体分析。
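
      例如,最常用的TF_ONE_WAY表示单向的异步调用,在C++层发起调用时传入IBinder::FLAG_ONEWAY(取值与TF_ONE_WAY一致,均为0x01)即可,大致写法如下(示意,BpXxx和MY_CODE均为假设的名字,并非真实存在):

// 在某个Binder代理类BpXxx的成员函数中发起单向调用
status_t BpXxx::asyncCall(const Parcel& data)
{
    // reply传NULL,并带上FLAG_ONEWAY,表示不等待对方回复
    return remote()->transact(MY_CODE, data, NULL, IBinder::FLAG_ONEWAY);
}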

      sender_pid和sender_euid表示发送者进程的pid和euid。

      data_size表示data.buffer缓冲区的大小,offsets_size表示data.offsets缓冲区的大小。这里需要解释一下data成员变量,命令真正要传输的数据就保存在data.buffer缓冲区中,前面的几个成员变量都是用来描述数据特征的。data.buffer所表示的缓冲区数据分为两类:一类是普通数据,Binder驱动程序不关心;另一类是Binder实体或者Binder引用,这需要Binder驱动程序介入处理。为什么呢?想想,如果一个进程A传递了一个Binder实体或Binder引用给进程B,那么,Binder驱动程序就需要介入维护这个Binder实体或者引用的引用计数,防止B进程还在使用这个Binder实体时,A却销毁这个实体,这样的话,B进程就会crash了。所以在传输数据时,如果数据中含有Binder实体或者Binder引用,就需要告诉Binder驱动程序它们的具体位置,以便Binder驱动程序能够去维护它们。data.offsets的作用就在这里了,它指定在data.buffer缓冲区中,所有Binder实体或者引用的偏移位置。每一个Binder实体或者引用,都通过struct flat_binder_object来表示:

/*
 * This is the flattened representation of a Binder object for transfer
 * between processes.  The 'offsets' supplied as part of a binder transaction
 * contains offsets into the data where these structures occur.  The Binder
 * driver takes care of re-writing the structure type and data as it moves
 * between processes.
 */
struct flat_binder_object {
	/* 8 bytes for large_flat_header. */
	unsigned long		type;
	unsigned long		flags;

	/* 8 bytes of data. */
	union {
		void		*binder;	/* local object */
		signed long	handle;		/* remote object */
	};

	/* extra data associated with local object */
	void			*cookie;
};
       type表示Binder对象的类型,它取值如下所示:

enum {
	BINDER_TYPE_BINDER	= B_PACK_CHARS('s', 'b', '*', B_TYPE_LARGE),
	BINDER_TYPE_WEAK_BINDER	= B_PACK_CHARS('w', 'b', '*', B_TYPE_LARGE),
	BINDER_TYPE_HANDLE	= B_PACK_CHARS('s', 'h', '*', B_TYPE_LARGE),
	BINDER_TYPE_WEAK_HANDLE	= B_PACK_CHARS('w', 'h', '*', B_TYPE_LARGE),
	BINDER_TYPE_FD		= B_PACK_CHARS('f', 'd', '*', B_TYPE_LARGE),
};
       flags表示Binder对象的标志,该域只对第一次传递Binder实体时有效,因为此刻驱动需要在内核中创建相应的实体节点,有些参数需要从该域取出。

       type和flags的具体意义可以参考Android Binder设计与实现一文。

       最后,binder表示这是一个Binder实体,handle表示这是一个Binder引用,当这是一个Binder实体时,cookie才有意义,表示附加数据,由进程自己解释。
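
       为了更直观地理解data.buffer和data.offsets是如何配合的,下面给出一个简要的打包示意(纯属说明用途,txn_buffer、txn_offsets等变量名均为假设,实际的打包工作由用户空间的Parcel等设施完成):

#include <string.h>
/* 假设已经包含了定义flat_binder_object和binder_transaction_data的binder.h */

void pack_one_binder_object(struct binder_transaction_data *tr,
                            void *local_object, void *local_cookie)
{
    static unsigned char txn_buffer[128];  /* data.buffer所指向的缓冲区(假设) */
    static size_t txn_offsets[1];          /* data.offsets所指向的缓冲区(假设) */
    size_t pos = 0;

    /* 构造一个flat_binder_object,表示一个本地Binder实体 */
    struct flat_binder_object obj;
    obj.type   = BINDER_TYPE_BINDER;       /* 本地Binder实体 */
    obj.flags  = 0;
    obj.binder = local_object;             /* 实体在本进程中的地址 */
    obj.cookie = local_cookie;             /* 附加数据,由进程自己解释 */

    txn_offsets[0] = pos;                  /* 记下该对象在data.buffer中的偏移 */
    memcpy(txn_buffer + pos, &obj, sizeof(obj));
    pos += sizeof(obj);

    /* 填写binder_transaction_data中描述数据特征的成员 */
    tr->data_size        = pos;                  /* data.buffer的有效字节数 */
    tr->offsets_size     = sizeof(txn_offsets);  /* data.offsets的字节数 */
    tr->data.ptr.buffer  = txn_buffer;
    tr->data.ptr.offsets = txn_offsets;
}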

       数据结构分析完了,回到binder_loop函数中,首先是执行BC_ENTER_LOOPER命令:

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));
        进入到binder_write函数中:

int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    struct binder_write_read bwr;
    int res;
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
        注意这里的binder_write_read变量bwr:write_size为sizeof(unsigned),即4个字节,表示write_buffer缓冲区中只有一个BC_ENTER_LOOPER命令协议号;read_size为0,即read_buffer为空。接着又是调用ioctl函数进入到Binder驱动程序的binder_ioctl函数,这里我们也只是关注BC_ENTER_LOOPER相关的逻辑:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	/*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret)
		return ret;

	mutex_lock(&binder_lock);
	thread = binder_get_thread(proc);
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
			proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer);
		if (bwr.write_size > 0) {
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
			proc->pid, thread->pid, bwr.write_consumed, bwr.write_size, bwr.read_consumed, bwr.read_size);
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
							}
	......
	default:
		ret = -EINVAL;
		goto err;
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	mutex_unlock(&binder_lock);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
	return ret;
}
       函数前面的代码就不解释了,同前面调用binder_become_context_manager时进入binder_ioctl的过程是一样的,只不过这里调用binder_get_thread函数获取binder_thread时,可以直接从proc->threads中找到,而不需要再创建一个新的。
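
       binder_get_thread的实现大致如下(示意,以binder.c源代码为准):它以当前线程的pid为键,在proc->threads这棵红黑树中查找对应的binder_thread,只有找不到时才创建一个新的并插入树中:

static struct binder_thread *binder_get_thread(struct binder_proc *proc)
{
	struct binder_thread *thread = NULL;
	struct rb_node *parent = NULL;
	struct rb_node **p = &proc->threads.rb_node;

	/* 以current->pid为键在红黑树中查找 */
	while (*p) {
		parent = *p;
		thread = rb_entry(parent, struct binder_thread, rb_node);
		if (current->pid < thread->pid)
			p = &(*p)->rb_left;
		else if (current->pid > thread->pid)
			p = &(*p)->rb_right;
		else
			break;
	}
	if (*p == NULL) {
		/* 第一次进来时才会走到这里:创建并初始化binder_thread */
		thread = kzalloc(sizeof(*thread), GFP_KERNEL);
		if (thread == NULL)
			return NULL;
		thread->proc = proc;
		thread->pid = current->pid;
		init_waitqueue_head(&thread->wait);
		INIT_LIST_HEAD(&thread->todo);
		rb_link_node(&thread->rb_node, parent, p);
		rb_insert_color(&thread->rb_node, &proc->threads);
		thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;
		thread->return_error = BR_OK;
		thread->return_error2 = BR_OK;
	}
	return thread;
}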

       首先是通过copy_from_user(&bwr, ubuf, sizeof(bwr))语句把用户传递进来的参数转换成struct binder_write_read结构体,并保存在本地变量bwr中,这里可以看出bwr.write_size等于4,于是进入binder_thread_write函数,这里我们只关注BC_ENTER_LOOPER相关的代码:

int
binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
					void __user *buffer, int size, signed long *consumed)
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
	    ......
		case BC_ENTER_LOOPER:
			if (binder_debug_mask & BINDER_DEBUG_THREADS)
				printk(KERN_INFO "binder: %d:%d BC_ENTER_LOOPER\n",
				proc->pid, thread->pid);
			if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
				thread->looper |= BINDER_LOOPER_STATE_INVALID;
				binder_user_error("binder: %d:%d ERROR:"
					" BC_ENTER_LOOPER called after "
					"BC_REGISTER_LOOPER\n",
					proc->pid, thread->pid);
			}
			thread->looper |= BINDER_LOOPER_STATE_ENTERED;
			break;
        ......
		default:
			printk(KERN_ERR "binder: %d:%d unknown command %d\n", proc->pid, thread->pid, cmd);
			return -EINVAL;
		}
		*consumed = ptr - buffer;
	}
	return 0;
}
       回忆前面执行binder_become_context_manager而进入binder_ioctl时,调用binder_get_thread函数创建的binder_thread的looper值为0,所以这里执行完BC_ENTER_LOOPER命令后,thread->looper值就变为BINDER_LOOPER_STATE_ENTERED了,表明当前线程已经进入循环状态。
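
       这里涉及的线程循环状态是一组位标志,定义在Binder驱动程序中,大致如下(示意,具体取值以binder.c源代码为准):

enum {
	BINDER_LOOPER_STATE_REGISTERED  = 0x01, /* 由BC_REGISTER_LOOPER注册的线程 */
	BINDER_LOOPER_STATE_ENTERED     = 0x02, /* 由BC_ENTER_LOOPER进入循环的线程 */
	BINDER_LOOPER_STATE_EXITED      = 0x04, /* 线程已退出循环 */
	BINDER_LOOPER_STATE_INVALID     = 0x08, /* 线程状态非法 */
	BINDER_LOOPER_STATE_WAITING     = 0x10, /* 线程正在等待请求 */
	BINDER_LOOPER_STATE_NEED_RETURN = 0x20  /* 线程需要马上返回用户空间 */
};

       正因为是位标志,代码中才会使用|=、&= ~这样的位运算来设置和清除状态。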

       回到binder_ioctl函数,由于bwr.read_size == 0,binder_thread_read函数就不会被执行了,这样,binder_ioctl的任务就完成了。

       回到binder_loop函数,进入for循环:

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            LOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
        又是执行一个ioctl命令,注意,这里的bwr参数各个成员的值:

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;
    readbuf[0] = BC_ENTER_LOOPER;
    bwr.read_size = sizeof(readbuf);
    bwr.read_consumed = 0;
    bwr.read_buffer = (unsigned) readbuf;
        再次进入到binder_ioctl函数:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	/*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret)
		return ret;

	mutex_lock(&binder_lock);
	thread = binder_get_thread(proc);
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
			proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer);
		if (bwr.write_size > 0) {
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
			proc->pid, thread->pid, bwr.write_consumed, bwr.write_size, bwr.read_consumed, bwr.read_size);
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
							}
	......
	default:
		ret = -EINVAL;
		goto err;
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	mutex_unlock(&binder_lock);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
	return ret;
}
         这次,bwr.write_size等于0,于是不会执行binder_thread_write函数;bwr.read_size等于sizeof(readbuf),即128个字节(readbuf是一个含有32个unsigned元素的数组),于是进入到binder_thread_read函数:

static int
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
				   void  __user *buffer, int size, signed long *consumed, int non_block)
{
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);

	if (thread->return_error != BR_OK && ptr < end) {
		if (thread->return_error2 != BR_OK) {
			if (put_user(thread->return_error2, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);
			if (ptr == end)
				goto done;
			thread->return_error2 = BR_OK;
		}
		if (put_user(thread->return_error, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		thread->return_error = BR_OK;
		goto done;
	}


	thread->looper |= BINDER_LOOPER_STATE_WAITING;
	if (wait_for_proc_work)
		proc->ready_threads++;
	mutex_unlock(&binder_lock);
	if (wait_for_proc_work) {
		if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
			BINDER_LOOPER_STATE_ENTERED))) {
				binder_user_error("binder: %d:%d ERROR: Thread waiting "
					"for process work before calling BC_REGISTER_"
					"LOOPER or BC_ENTER_LOOPER (state %x)\n",
					proc->pid, thread->pid, thread->looper);
				wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
		}
		binder_set_nice(proc->default_priority);
		if (non_block) {
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_interruptible_exclusive(proc->wait, binder_has_proc_work(proc, thread));
	} else {
		if (non_block) {
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));
	}
        .......
}
         传入的参数*consumed == 0,于是先写入一个BR_NOOP值到参数ptr指向的缓冲区中去,即用户传进来的bwr.read_buffer缓冲区。这时候,thread->transaction_stack == NULL,并且thread->todo列表也是空的,这表示当前线程没有事务需要处理,于是wait_for_proc_work为true,表示要去查看proc是否有未处理的事务。当前thread->return_error == BR_OK,这是前面创建binder_thread时初始化设置的。于是继续往下执行,设置thread的状态为BINDER_LOOPER_STATE_WAITING,表示线程处于等待状态。调用binder_set_nice函数设置当前线程的优先级别为proc->default_priority,这是因为thread要去处理属于proc的事务,因此要将此thread的优先级别设置得和proc一样。在这个场景中,proc也没有事务需要处理,即binder_has_proc_work(proc, thread)为false。如果文件打开模式为非阻塞模式,即non_block为true,那么函数就直接返回-EAGAIN,要求用户重新执行ioctl;否则的话,当前线程就通过wait_event_interruptible_exclusive函数进入休眠状态,等待请求到来时再被唤醒。
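
        顺便看一下这里用来判断是否有事务需要处理的两个辅助函数,它们的实现大致如下(示意,以binder.c源代码为准):

static int binder_has_proc_work(struct binder_proc *proc,
				struct binder_thread *thread)
{
	/* 进程的todo队列非空,或者当前线程被要求马上返回用户空间 */
	return !list_empty(&proc->todo) ||
		(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);
}

static int binder_has_thread_work(struct binder_thread *thread)
{
	/* 线程自己的todo队列非空,或者有错误需要上报,或者被要求返回 */
	return !list_empty(&thread->todo) || thread->return_error != BR_OK ||
		(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);
}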

        至此,我们就从源代码一步一步地分析完Service Manager是如何成为Android进程间通信(IPC)机制Binder守护进程的了。总结一下,Service Manager成为Android进程间通信(IPC)机制Binder守护进程的过程是这样的:

        1. 打开/dev/binder文件:open("/dev/binder", O_RDWR);

        2. 建立128K内存映射:mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);

        3. 通知Binder驱动程序它是守护进程:binder_become_context_manager(bs);

        4. 进入循环等待请求的到来:binder_loop(bs, svcmgr_handler);

        在这个过程中,在Binder驱动程序中建立了一个struct binder_proc结构、一个struct binder_thread结构和一个struct binder_node结构,这样,Service Manager就在Android系统的进程间通信机制Binder中担负起守护进程的职责了。
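
        把这四个步骤串起来,其实就是service_manager.c中main函数的主体,大致如下(示意,细节以前文分析的源代码为准):

int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;    /* 即(void *)0,句柄为0 */

    bs = binder_open(128*1024);               /* 步骤1、2:打开设备文件并建立内存映射 */

    if (binder_become_context_manager(bs)) {  /* 步骤3:通知驱动它是守护进程 */
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);          /* 步骤4:进入循环等待请求 */
    return 0;
}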

浅谈Android系统进程间通信(IPC)机制Binder中的Server和Client获得Service Manager接口之路

        在前面一篇文章 浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路中,介绍了Service Manager是如何成为Binder机制的守护进程的。既然作为守护进程,Service Manager的职责当然就是为Server和Client服务了。那么,Server和Client如何获得Service Manager接口,进而享受它提供的服务呢?本文将简要分析Server和Client获得Service Manager的过程。

        在阅读本文之前,希望读者先阅读Android进程间通信(IPC)机制Binder简要介绍和学习计划一文提到的参考资料Android深入浅出之Binder机制,这样可以加深对本文的理解。

        我们知道,Service Manager在Binder机制中既充当守护进程的角色,同时它也充当着Server角色,然而它又与一般的Server不一样。对于普通的Server来说,Client如果想要获得Server的远程接口,那么必须通过Service Manager远程接口提供的getService接口来获得,这本身就是一个使用Binder机制来进行进程间通信的过程。而对于Service Manager这个Server来说,Client如果想要获得Service Manager远程接口,却不必通过进程间通信机制来获得,因为Service Manager远程接口是一个特殊的Binder引用,它的引用句柄一定是0。

        获取Service Manager远程接口的函数是defaultServiceManager,这个函数声明在frameworks/base/include/binder/IServiceManager.h文件中:

sp<IServiceManager> defaultServiceManager();

       实现在frameworks/base/libs/binder/IServiceManager.cpp文件中:

sp<IServiceManager> defaultServiceManager()
{

    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }

    return gDefaultServiceManager;
}
        gDefaultServiceManagerLock和gDefaultServiceManager是全局变量,定义在frameworks/base/libs/binder/Static.cpp文件中:

Mutex gDefaultServiceManagerLock;
sp<IServiceManager> gDefaultServiceManager;
        从这个函数可以看出,gDefaultServiceManager是单例模式,调用defaultServiceManager函数时,如果gDefaultServiceManager已经创建,则直接返回,否则通过interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL))来创建一个,并保存在gDefaultServiceManager全局变量中。

       在继续介绍interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL))的实现之前,先来看一个类图,这能够帮助我们了解Service Manager远程接口的创建过程。


        阅读过参考资料Android深入浅出之Binder机制一文的读者,应该会比较容易理解这个图。这个图表明,BpServiceManager类继承了BpInterface<IServiceManager>类,BpInterface是一个模板类,它定义在frameworks/base/include/binder/IInterface.h文件中:

template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
	BpInterface(const sp<IBinder>& remote);

protected:
	virtual IBinder* onAsBinder();
};
        IServiceManager类继承了IInterface类,而IInterface类和BpRefBase类又分别继承了RefBase类。在BpRefBase类中,有一个成员变量mRemote,它的类型是IBinder*,实现类为BpBinder,它表示一个Binder引用,引用句柄值保存在BpBinder类的mHandle成员变量中。BpBinder类通过IPCThreadState类来和Binder驱动程序交互,而IPCThreadState又通过它的成员变量mProcess来打开/dev/binder设备文件,mProcess成员变量的类型为ProcessState。ProcessState类打开设备/dev/binder之后,将打开的文件描述符保存在mDriverFD成员变量中,以供后续使用。

        理解了这些概念之后,就可以继续分析创建Service Manager远程接口的过程了,最终目的是要创建一个BpServiceManager实例,并且返回它的IServiceManager接口。创建Service Manager远程接口主要是下面语句:

            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        看起来简短,却暗藏玄机,具体可阅读 Android深入浅出之Binder机制这篇参考资料,这里作简要描述。

        首先是调用ProcessState::self函数,self函数是ProcessState的静态成员函数,它的作用是返回一个全局唯一的ProcessState实例变量,就是单例模式了,这个变量名为gProcess。如果gProcess尚未创建,就会执行创建操作,在ProcessState的构造函数中,会通过open文件操作函数打开设备文件/dev/binder,并且返回来的设备文件描述符保存在成员变量mDriverFD中。

        接着调用gProcess->getContextObject函数来获得一个句柄值为0的Binder引用,即BpBinder了,于是创建Service Manager远程接口的语句可以简化为:

            gDefaultServiceManager = interface_cast<IServiceManager>(new BpBinder(0));
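
        getContextObject在这里的实现大致相当于下面的样子(示意,省略了不支持Binder驱动时的分支,以实际源代码为准):

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    /* 句柄值0固定代表Service Manager */
    return getStrongProxyForHandle(0);
}

        getStrongProxyForHandle函数会为句柄0创建(或复用)一个BpBinder。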
        再来看函数interface_cast<IServiceManager>的实现,它是一个模板函数,定义在framework/base/include/binder/IInterface.h文件中:

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
        这里的INTERFACE是IServiceManager,于是调用了IServiceManager::asInterface函数。IServiceManager::asInterface是通过DECLARE_META_INTERFACE(ServiceManager)宏在IServiceManager类中声明的,它位于framework/base/include/binder/IServiceManager.h文件中:

DECLARE_META_INTERFACE(ServiceManager);

        展开即为:

#define DECLARE_META_INTERFACE(ServiceManager)                              \
	static const android::String16 descriptor;                          \
	static android::sp<IServiceManager> asInterface(                    \
	const android::sp<android::IBinder>& obj);                          \
	virtual const android::String16& getInterfaceDescriptor() const;    \
	IServiceManager();                                                  \
	virtual ~IServiceManager();                                         

       IServiceManager::asInterface的实现是通过IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager")宏定义的,它位于framework/base/libs/binder/IServiceManager.cpp文件中:

IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
       展开即为:

#define IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager")                 \
	const android::String16 IServiceManager::descriptor("android.os.IServiceManager");     \
	const android::String16&							       \
	IServiceManager::getInterfaceDescriptor() const {                                      \
	return IServiceManager::descriptor;                                                    \
	}                                                                                      \
	android::sp<IServiceManager> IServiceManager::asInterface(                             \
	const android::sp<android::IBinder>& obj)                                              \
	{                                                                                      \
	android::sp<IServiceManager> intr;                                                     \
	if (obj != NULL) {                                                                     \
	intr = static_cast<IServiceManager*>(                                                  \
	obj->queryLocalInterface(                                                              \
	IServiceManager::descriptor).get());                                                   \
	if (intr == NULL) {                                                                    \
	intr = new BpServiceManager(obj);                                                      \
	}                                                                                      \
	}                                                                                      \
	return intr;                                                                           \
	}                                                                                      \
	IServiceManager::IServiceManager() { }                                                 \
	IServiceManager::~IServiceManager() { }      
         估计写这段代码的员工是从Microsoft跳槽到Google的。这里我们关注IServiceManager::asInterface的实现:

android::sp<IServiceManager> IServiceManager::asInterface(const android::sp<android::IBinder>& obj)                                              
{                                                                                     
	android::sp<IServiceManager> intr;                                                    
	
	if (obj != NULL) {                                                                     
		intr = static_cast<IServiceManager*>(                                                  
                    obj->queryLocalInterface(IServiceManager::descriptor).get());
		
		if (intr == NULL) {                
			intr = new BpServiceManager(obj);                                        
		}                                          
	}
	return intr;                                  
}   
         这里传进来的参数obj就是刚才创建的new BpBinder(0)了,BpBinder类中的成员函数queryLocalInterface继承自基类IBinder,IBinder::queryLocalInterface函数位于framework/base/libs/binder/Binder.cpp文件中:

sp<IInterface>  IBinder::queryLocalInterface(const String16& descriptor)
{
    return NULL;
}
         由此可见,在IServiceManager::asInterface函数中,最终会调用下面语句:

intr = new BpServiceManager(obj); 
         即为:

intr = new BpServiceManager(new BpBinder(0)); 
        回到defaultServiceManager函数中,最终结果为:

gDefaultServiceManager = new BpServiceManager(new BpBinder(0));
        这样,Service Manager远程接口就创建完成了,它本质上是一个BpServiceManager,包含了一个句柄值为0的Binder引用。

        在Android系统的Binder机制中,Server和Client拿到这个Service Manager远程接口之后怎么用呢?

        对Server来说,就是调用IServiceManager::addService这个接口来和Binder驱动程序交互了,即调用BpServiceManager::addService。而BpServiceManager::addService又会通过其基类BpRefBase的成员函数remote获得原先创建的BpBinder实例,接着调用BpBinder::transact成员函数。在BpBinder::transact函数中,又会调用IPCThreadState::transact成员函数,这里就是最终与Binder驱动程序交互的地方了。回忆一下前面的类图,IPCThreadState有一个ProcessState类型的成员变量mProcess,而mProcess有一个成员变量mDriverFD,它是设备文件/dev/binder的打开文件描述符,因此,IPCThreadState就相当于间接拥有了设备文件/dev/binder的打开文件描述符,于是,便可以与Binder驱动程序交互了。
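
        BpServiceManager::addService的实现大致如下(示意,位于IServiceManager.cpp中,以实际源代码为准),从中可以清楚地看到上面所说的remote()->transact调用:

virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);           // 服务名称,例如"media.player"
    data.writeStrongBinder(service);    // 服务对应的Binder实体
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}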

       对Client来说,就是调用IServiceManager::getService这个接口来和Binder驱动程序交互了。具体过程与上述Server使用Service Manager的方法是一样的,这里就不再赘述了。
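
       以media.player服务为例,Client端获得服务远程接口的典型写法大致如下(示意):

sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.player"));
sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);

       拿到的binder是一个BpBinder,经过interface_cast之后,就得到了可以直接调用的BpMediaPlayerService接口。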

      IServiceManager::addService和IServiceManager::getService这两个函数的具体实现,在下面两篇文章中,会深入到Binder驱动程序这一层,进行详细的源代码分析,以便更好地理解Binder进程间通信机制,敬请关注。

Android系统进程间通信(IPC)机制Binder中的Server启动过程源代码分析


        在前面一篇文章浅谈Android系统进程间通信(IPC)机制Binder中的Server和Client获得Service Manager接口之路中,介绍了在Android系统中Binder进程间通信机制中的Server角色是如何获得Service Manager远程接口的,即defaultServiceManager函数的实现。Server获得了Service Manager远程接口之后,就要把自己的Service添加到Service Manager中去,然后把自己启动起来,等待Client的请求。本文将通过分析源代码了解Server的启动过程是怎么样的。

        本文通过一个具体的例子来说明Binder机制中Server的启动过程。我们知道,在Android系统中,提供了多媒体播放的功能,这个功能是以服务的形式来提供的。这里,我们就通过分析MediaPlayerService的实现来了解Media Server的启动过程。

        首先,看一下MediaPlayerService的类图,以便我们理解下面要描述的内容。


        我们将要介绍的主角MediaPlayerService继承于BnMediaPlayerService类,熟悉Binder机制的同学应该知道BnMediaPlayerService是一个Binder Native类,用来处理Client请求的。BnMediaPlayerService继承于BnInterface<IMediaPlayerService>类,BnInterface是一个模板类,它定义在frameworks/base/include/binder/IInterface.h文件中:

template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
    virtual sp<IInterface>      queryLocalInterface(const String16& _descriptor);
    virtual const String16&     getInterfaceDescriptor() const;

protected:
    virtual IBinder*            onAsBinder();
};
       这里可以看出,BnMediaPlayerService实际是继承了IMediaPlayerService和BBinder类。IMediaPlayerService和BBinder类又分别继承了IInterface和IBinder类,IInterface和IBinder类又同时继承了RefBase类。

       实际上,BnMediaPlayerService并不是直接接收到Client处发送过来的请求,而是使用了IPCThreadState接收Client处发送过来的请求,而IPCThreadState又借助了ProcessState类来与Binder驱动程序交互。有关IPCThreadState和ProcessState的关系,可以参考上一篇文章浅谈Android系统进程间通信(IPC)机制Binder中的Server和Client获得Service Manager接口之路,接下来也会有相应的描述。IPCThreadState接收到了Client处的请求后,就会调用BBinder类的transact函数,并传入相关参数,BBinder类的transact函数最终调用BnMediaPlayerService类的onTransact函数,于是,就开始真正地处理Client的请求了。
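
       BBinder::transact把请求转发给onTransact的过程大致如下(示意,位于frameworks/base/libs/binder/Binder.cpp中,以实际源代码为准):

status_t BBinder::transact(uint32_t code, const Parcel& data,
                           Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);  // 交给子类处理
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }
    return err;
}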

      了解了MediaPlayerService的类结构之后,就可以进入本文的主题了。

      首先,看看MediaPlayerService是如何启动的。启动MediaPlayerService的代码位于frameworks/base/media/mediaserver/main_mediaserver.cpp文件中:

int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
       这里我们不关注AudioFlinger和CameraService相关的代码。

       先看下面这句代码:

   sp<ProcessState> proc(ProcessState::self());
       这句代码的作用是通过ProcessState::self()调用创建一个ProcessState实例。ProcessState::self()是ProcessState类的一个静态成员函数,定义在frameworks/base/libs/binder/ProcessState.cpp文件中:

sp<ProcessState> ProcessState::self()
{
    if (gProcess != NULL) return gProcess;
    
    AutoMutex _l(gProcessMutex);
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}
       这里可以看出,这个函数作用是返回一个全局唯一的ProcessState实例gProcess。全局唯一实例变量gProcess定义在frameworks/base/libs/binder/Static.cpp文件中:

Mutex gProcessMutex;
sp<ProcessState> gProcess;
       再来看ProcessState的构造函数:

ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // availabla).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            LOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }
    if (mDriverFD < 0) {
        // Need to run without the driver, starting our own thread pool.
    }
}
        这个函数有两个关键地方,一是通过open_driver函数打开Binder设备文件/dev/binder,并将打开设备文件描述符保存在成员变量mDriverFD中;二是通过mmap来把设备文件/dev/binder映射到内存中。

        先看open_driver函数的实现,这个函数同样位于frameworks/base/libs/binder/ProcessState.cpp文件中:

static int open_driver()
{
    if (gSingleProcess) {
        return -1;
    }

    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers;
#if defined(HAVE_ANDROID_OS)
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
#else
        status_t result = -1;
        errno = EPERM;
#endif
        if (result == -1) {
            LOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            LOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
#if defined(HAVE_ANDROID_OS)
        size_t maxThreads = 15;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            LOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
#endif
        
    } else {
        LOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}
        这个函数的作用主要是通过open文件操作函数来打开/dev/binder设备文件,然后再调用ioctl文件控制函数来分别执行BINDER_VERSION和BINDER_SET_MAX_THREADS两个命令来和Binder驱动程序进行交互,前者用于获得当前Binder驱动程序的版本号,后者用于通知Binder驱动程序,MediaPlayerService最多可同时启动15个线程来处理Client端的请求。

        open在Binder驱动程序中的具体实现,请参考前面一篇文章浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路,这里不再重复描述。打开/dev/binder设备文件后,Binder驱动程序就为MediaPlayerService进程创建了一个struct binder_proc结构体实例来维护MediaPlayerService进程上下文相关信息。

        我们来看一下ioctl文件操作函数执行BINDER_VERSION命令的过程:

status_t result = ioctl(fd, BINDER_VERSION, &vers);
        这个函数调用最终进入到Binder驱动程序的binder_ioctl函数中,我们只关注BINDER_VERSION相关的部分逻辑:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	/*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret)
		return ret;

	mutex_lock(&binder_lock);
	thread = binder_get_thread(proc);
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	......
	case BINDER_VERSION:
		if (size != sizeof(struct binder_version)) {
			ret = -EINVAL;
			goto err;
		}
		if (put_user(BINDER_CURRENT_PROTOCOL_VERSION, &((struct binder_version *)ubuf)->protocol_version)) {
			ret = -EINVAL;
			goto err;
		}
		break;
	......
	}
	ret = 0;
err:
        ......
	return ret;
}

        很简单,只是将BINDER_CURRENT_PROTOCOL_VERSION写入到传入的参数arg指向的用户缓冲区中去就返回了。BINDER_CURRENT_PROTOCOL_VERSION是一个宏,定义在kernel/common/drivers/staging/android/binder.h文件中:

/* This is the current protocol version. */
#define BINDER_CURRENT_PROTOCOL_VERSION 7
       这里为什么要把ubuf转换成struct binder_version指针之后,再通过其protocol_version成员变量来写入呢?转了一圈,最终内容还是写入到ubuf中。我们看一下struct binder_version的定义就会明白,同样是在kernel/common/drivers/staging/android/binder.h文件中:

/* Use with BINDER_VERSION, driver fills in fields. */
struct binder_version {
	/* driver protocol version -- increment with incompatible change */
	signed long	protocol_version;
};
        从注释中可以看出来,这里是考虑到兼容性,因为以后很有可能不是用signed long来表示版本号。

        这里有一个重要的地方要注意的是,由于这里是打开设备文件/dev/binder之后,第一次进入到binder_ioctl函数,因此,这里调用binder_get_thread的时候,就会为当前线程创建一个struct binder_thread结构体变量来维护线程上下文信息,具体可以参考浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路一文。

        接着我们再来看一下ioctl文件操作函数执行BINDER_SET_MAX_THREADS命令的过程:

result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);

        这个函数调用最终进入到Binder驱动程序的binder_ioctl函数中,我们只关注BINDER_SET_MAX_THREADS相关的部分逻辑:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	/*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret)
		return ret;

	mutex_lock(&binder_lock);
	thread = binder_get_thread(proc);
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	......
	case BINDER_SET_MAX_THREADS:
		if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
			ret = -EINVAL;
			goto err;
		}
		break;
	......
	}
	ret = 0;
err:
	......
	return ret;
}
        这里的实现也非常简单,只是把用户传进来的参数保存在proc->max_threads中就完成了。注意,这里再次调用binder_get_thread函数时,就可以在proc->threads红黑树中找到当前线程对应的struct binder_thread结构了,因为前面执行BINDER_VERSION命令时已经创建好并保存在这棵红黑树中。
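        这里不妨给出用户空间发起这个命令的示意代码,它模仿的是ProcessState打开设备文件时的做法。注意,maxThreads取值15是笔者根据该时期源码补充的默认值,属示意性质:

// 示意:告诉Binder驱动程序本进程最多可用于处理Binder请求的线程数
size_t maxThreads = 15;
if (ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads) == -1) {
    // 设置失败一般只打印日志,不影响后续流程
}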

        回到ProcessState的构造函数中,这里还通过mmap函数来把设备文件/dev/binder映射到内存中,这个函数在浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路一文也已经有详细介绍,这里不再重复描述。宏BINDER_VM_SIZE就定义在ProcessState.cpp文件中:

#define BINDER_VM_SIZE ((1*1024*1024) - (4096 *2))
        mmap函数调用完成之后,Binder驱动程序就为当前进程预留了BINDER_VM_SIZE大小的内存空间了。
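        下面是一段模仿ProcessState构造函数中映射过程的示意代码,参数与源码一致,仅供理解:

// 示意:把/dev/binder映射到本进程地址空间,供Binder驱动传输数据使用
// 注意映射只要求PROT_READ,写入是由驱动在内核中完成的
void* vmStart = mmap(0, BINDER_VM_SIZE, PROT_READ,
                     MAP_PRIVATE | MAP_NORESERVE, fd, 0);
if (vmStart == MAP_FAILED) {
    // 映射失败,Binder机制将无法工作
}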

        这样,ProcessState全局唯一变量gProcess就创建完毕了,回到frameworks/base/media/mediaserver/main_mediaserver.cpp文件中的main函数,下一步是调用defaultServiceManager函数来获得Service Manager的远程接口,这个已经在上一篇文章浅谈Android系统进程间通信(IPC)机制Binder中的Server和Client获得Service Manager接口之路有详细描述,读者可以回过头去参考一下。

        再接下来,就进入到MediaPlayerService::instantiate函数把MediaPlayerService添加到Service Manger中去了。这个函数定义在frameworks/base/media/libmediaplayerservice/MediaPlayerService.cpp文件中:

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
        我们重点看一下IServiceManger::addService的过程,这有助于我们加深对Binder机制的理解。

        在上一篇文章浅谈Android系统进程间通信(IPC)机制Binder中的Server和Client获得Service Manager接口之路中说到,defaultServiceManager返回的实际是一个BpServiceManger类实例,因此,我们看一下BpServiceManger::addService的实现,这个函数实现在frameworks/base/libs/binder/IServiceManager.cpp文件中:

class BpServiceManager : public BpInterface<IServiceManager>
{
public:
	BpServiceManager(const sp<IBinder>& impl)
		: BpInterface<IServiceManager>(impl)
	{
	}

	......

	virtual status_t addService(const String16& name, const sp<IBinder>& service)
	{
		Parcel data, reply;
		data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
		data.writeString16(name);
		data.writeStrongBinder(service);
		status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
		return err == NO_ERROR ? reply.readExceptionCode() : err;
	}

	......

};

         这里的Parcel类是用来序列化进程间通信数据的。
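         为了说明Parcel的用法,先给出一个最简单的示意例子:写入和读出必须按同样的顺序进行,这也是理解下面addService数据打包过程的基础:

// 示意:Parcel按顺序序列化数据,读取时按同样的顺序取出
Parcel p;
p.writeInt32(123);
p.writeString16(String16("media.player"));
p.setDataPosition(0);            // 把读指针移回开头
int32_t num = p.readInt32();     // 得到123
String16 str = p.readString16(); // 得到"media.player"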

         先来看这一句的调用:

data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
         IServiceManager::getInterfaceDescriptor()返回来的是一个字符串,即"android.os.IServiceManager",具体可以参考IServiceManger的实现。我们看一下Parcel::writeInterfaceToken的实现,位于frameworks/base/libs/binder/Parcel.cpp文件中:

// Write RPC headers.  (previously just the interface token)
status_t Parcel::writeInterfaceToken(const String16& interface)
{
    writeInt32(IPCThreadState::self()->getStrictModePolicy() |
               STRICT_MODE_PENALTY_GATHER);
    // currently the interface identification token is just its name as a string
    return writeString16(interface);
}
         它的作用是写入一个整数和一个字符串到Parcel中去。

         再来看下面的调用:

data.writeString16(name);
        这里又是写入一个字符串到Parcel中去,这里的name即是上面传进来的“media.player”字符串。

        往下看:

data.writeStrongBinder(service);
        这里写入一个Binder对象到Parcel中去。我们重点看一下这个函数的实现,因为它涉及到进程间传输Binder实体的问题,比较复杂,需要重点关注,同时,这也是理解Binder机制的一个重点所在。注意,这里的service参数是一个MediaPlayerService对象。

status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}
        看到flatten_binder函数,是不是似曾相识的感觉?我们在前面一篇文章 浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路中,曾经提到在Binder驱动程序中,使用struct flat_binder_object来表示传输中的一个binder对象,它的定义如下所示:

/*
 * This is the flattened representation of a Binder object for transfer
 * between processes.  The 'offsets' supplied as part of a binder transaction
 * contains offsets into the data where these structures occur.  The Binder
 * driver takes care of re-writing the structure type and data as it moves
 * between processes.
 */
struct flat_binder_object {
	/* 8 bytes for large_flat_header. */
	unsigned long		type;
	unsigned long		flags;

	/* 8 bytes of data. */
	union {
		void		*binder;	/* local object */
		signed long	handle;		/* remote object */
	};

	/* extra data associated with local object */
	void			*cookie;
};
        简单来说,type用于区分Binder实体(BINDER_TYPE_BINDER)和Binder引用(BINDER_TYPE_HANDLE),binder和handle分别在这两种情形下使用,cookie则存放本地Binder实体的附加数据。各个成员变量的详细含义请参考资料Android Binder设计与实现。

        我们进入到flatten_binder函数看看:

status_t flatten_binder(const sp<ProcessState>& proc,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;
    
    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                LOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.handle = handle;
            obj.cookie = NULL;
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = local->getWeakRefs();
            obj.cookie = local;
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = NULL;
        obj.cookie = NULL;
    }
    
    return finish_flatten_binder(binder, obj, out);
}
        首先是初始化flat_binder_object的flags域:

obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
        0x7f表示处理本Binder实体请求数据包的线程的最低优先级,FLAT_BINDER_FLAG_ACCEPTS_FDS表示这个Binder实体可以接受文件描述符,Binder实体在收到文件描述符时,就会在本进程中打开这个文件。
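        这两个标志位同样定义在kernel/common/drivers/staging/android/binder.h文件中:

enum {
	FLAT_BINDER_FLAG_PRIORITY_MASK = 0xff,
	FLAT_BINDER_FLAG_ACCEPTS_FDS = 0x100,
};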

       传进来的binder即为MediaPlayerService::instantiate函数中new出来的MediaPlayerService实例,因此,不为空。又由于MediaPlayerService继承自BBinder类,它是一个本地Binder实体,因此binder->localBinder返回一个BBinder指针,而且肯定不为空,于是执行下面语句:

obj.type = BINDER_TYPE_BINDER;
obj.binder = local->getWeakRefs();
obj.cookie = local;
        这里设置了flat_binder_object的其他成员变量,注意,指向这个Binder实体地址的指针local保存在flat_binder_object的成员变量cookie中。

        函数调用finish_flatten_binder来将这个flat_binder_obj写入到Parcel中去:

inline static status_t finish_flatten_binder(
    const sp<IBinder>& binder, const flat_binder_object& flat, Parcel* out)
{
    return out->writeObject(flat, false);
}
       Parcel::writeObject的实现如下:

status_t Parcel::writeObject(const flat_binder_object& val, bool nullMetaData)
{
    const bool enoughData = (mDataPos+sizeof(val)) <= mDataCapacity;
    const bool enoughObjects = mObjectsSize < mObjectsCapacity;
    if (enoughData && enoughObjects) {
restart_write:
        *reinterpret_cast<flat_binder_object*>(mData+mDataPos) = val;
        
        // Need to write meta-data?
        if (nullMetaData || val.binder != NULL) {
            mObjects[mObjectsSize] = mDataPos;
            acquire_object(ProcessState::self(), val, this);
            mObjectsSize++;
        }
        
        // remember if it's a file descriptor
        if (val.type == BINDER_TYPE_FD) {
            mHasFds = mFdsKnown = true;
        }

        return finishWrite(sizeof(flat_binder_object));
    }

    if (!enoughData) {
        const status_t err = growData(sizeof(val));
        if (err != NO_ERROR) return err;
    }
    if (!enoughObjects) {
        size_t newSize = ((mObjectsSize+2)*3)/2;
        size_t* objects = (size_t*)realloc(mObjects, newSize*sizeof(size_t));
        if (objects == NULL) return NO_MEMORY;
        mObjects = objects;
        mObjectsCapacity = newSize;
    }
    
    goto restart_write;
}
        这里除了把flat_binder_object写到Parcel里面之外,还要记录这个flat_binder_object在Parcel里面的偏移位置:

mObjects[mObjectsSize] = mDataPos;
       这是因为,如果进程间传输的数据中带有Binder对象,Binder驱动程序就需要对它们作进一步的处理,以维护各个Binder实体的一致性。下面我们将会看到Binder驱动程序是怎么处理这些Binder对象的。

       再回到BpServiceManager::addService函数中,调用下面语句:

status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
       回到 浅谈Android系统进程间通信(IPC)机制Binder中的Server和Client获得Service Manager接口之路一文中的类图中去看一下,这里的remote成员函数来自于BpRefBase类,它返回一个BpBinder指针。因此,我们继续进入到BpBinder::transact函数中去看看:

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
       这里又调用了IPCThreadState::transact来执行实际的操作。注意,这里的mHandle为0,code为ADD_SERVICE_TRANSACTION。ADD_SERVICE_TRANSACTION是上面以参数形式传进来的,那mHandle为什么是0呢?因为这里表示的是Service Manager远程接口,它的句柄值一定是0,具体请参考 浅谈Android系统进程间通信(IPC)机制Binder中的Server和Client获得Service Manager接口之路一文。
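       为了加深印象,这里补充一段示意代码,说明这个句柄为0的BpBinder是从哪里来的:

// 示意:getContextObject(NULL)内部返回getStrongProxyForHandle(0),
// 得到的正是句柄值为0的BpBinder,defaultServiceManager在它之上构造BpServiceManager
sp<IBinder> b = ProcessState::self()->getContextObject(NULL);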
       再进入到IPCThreadState::transact函数,看看做了些什么事情:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }
    
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    
    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            LOGI(">>>>>> CALLING transaction 4");
        } else {
            LOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            LOGI("<<<<<< RETURNING transaction 4");
        } else {
            LOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif
        
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    
    return err;
}
        IPCThreadState::transact函数的参数flags是一个默认值为0的参数,上面没有传相应的实参进来,因此,这里就为0。
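        顺便一提,如果调用者希望发起异步事务,可以在调用transact时显式传入IBinder::FLAG_ONEWAY标志,它与驱动中的TF_ONE_WAY对应,此时调用方不等待回复。下面是一个示意,其中SOME_TRANSACTION_CODE是笔者虚构的事务码:

// 示意:发起one way事务,reply传NULL,调用立即返回
remote()->transact(SOME_TRANSACTION_CODE, data, NULL, IBinder::FLAG_ONEWAY);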

        函数首先调用writeTransactionData函数准备好一个struct binder_transaction_data结构体变量,这个是等一下要传输给Binder驱动程序的。struct binder_transaction_data的定义我们在浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路一文中有详细描述,读者不妨回过去读一下。这里为了方便描述,将struct binder_transaction_data的定义再次列出来:

struct binder_transaction_data {
	/* The first two are only used for bcTRANSACTION and brTRANSACTION,
	 * identifying the target and contents of the transaction.
	 */
	union {
		size_t	handle;	/* target descriptor of command transaction */
		void	*ptr;	/* target descriptor of return transaction */
	} target;
	void		*cookie;	/* target object cookie */
	unsigned int	code;		/* transaction command */

	/* General information about the transaction. */
	unsigned int	flags;
	pid_t		sender_pid;
	uid_t		sender_euid;
	size_t		data_size;	/* number of bytes of data */
	size_t		offsets_size;	/* number of bytes of offsets */

	/* If this transaction is inline, the data immediately
	 * follows here; otherwise, it ends with a pointer to
	 * the data buffer.
	 */
	union {
		struct {
			/* transaction data */
			const void	*buffer;
			/* offsets from buffer to flat_binder_object structs */
			const void	*offsets;
		} ptr;
		uint8_t	buf[8];
	} data;
};
         writeTransactionData函数的实现如下:

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }
    
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    
    return NO_ERROR;
}

        注意,这里的cmd为BC_TRANSACTION。 这个函数很简单,在这个场景下,就是执行下面语句来初始化本地变量tr:

tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
tr.data.ptr.offsets = data.ipcObjects();
       回忆一下上面的内容,写入到tr.data.ptr.buffer的内容相当于下面的内容:

writeInt32(IPCThreadState::self()->getStrictModePolicy() |
               STRICT_MODE_PENALTY_GATHER);
writeString16("android.os.IServiceManager");
writeString16("media.player");
writeStrongBinder(new MediaPlayerService());
       其中包含了一个Binder实体MediaPlayerService,因此tr.offsets_size就等于1*sizeof(size_t),tr.data.ptr.offsets则指向记录这个flat_binder_object在tr.data.ptr.buffer中偏移量的数组。最后,将tr的内容保存在IPCThreadState的成员变量mOut中。
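       为了直观起见,下面用注释画出此时tr.data.ptr.buffer和tr.data.ptr.offsets的布局示意,各字段的具体偏移取决于字符串长度和对齐方式:

/* 示意:本次BC_TRANSACTION的数据布局
 *
 *   tr.data.ptr.buffer ──► [ strict mode policy (int32)              ]
 *                          [ "android.os.IServiceManager" (String16) ]
 *                          [ "media.player" (String16)               ]
 *            offsets[0] ─► [ flat_binder_object (MediaPlayerService) ]
 *
 *   tr.offsets_size = 1 * sizeof(size_t)
 *   tr.data.ptr.offsets ──► offsets数组,只有一个元素
 */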
       回到IPCThreadState::transact函数中,接下去看,(flags & TF_ONE_WAY) == 0为true,并且reply不为空,所以最终进入到waitForResponse(reply)这条路径来。我们看一下waitForResponse函数的实现:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        
        cmd = mIn.readInt32();
        
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        
        case BR_ACQUIRE_RESULT:
            {
                LOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
        
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    
    return err;
}
        这个函数虽然很长,但是主要调用了talkWithDriver函数来与Binder驱动程序进行交互:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    LOG_ASSERT(mProcess->mDriverFD >= 0, "Binder driver is not opened");
    
    binder_write_read bwr;
    
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    
    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
    }
    
    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }
    
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);
    
    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
			<< "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }
    
    return err;
}
        这里doReceive和needRead均为1,有兴趣的读者可以自己分析一下。因此,这里告诉Binder驱动程序,先执行write操作,再执行read操作,下面我们将会看到。
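        这里用到的struct binder_write_read同样定义在kernel/common/drivers/staging/android/binder.h文件中:

struct binder_write_read {
	signed long	write_size;	/* bytes to write */
	signed long	write_consumed;	/* bytes consumed by driver */
	unsigned long	write_buffer;
	signed long	read_size;	/* bytes to read */
	signed long	read_consumed;	/* bytes consumed by driver */
	unsigned long	read_buffer;
};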

        最后,通过ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr)进行到Binder驱动程序的binder_ioctl函数,我们只关注cmd为BINDER_WRITE_READ的逻辑:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	/*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret)
		return ret;

	mutex_lock(&binder_lock);
	thread = binder_get_thread(proc);
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
			proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer);
		if (bwr.write_size > 0) {
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
			proc->pid, thread->pid, bwr.write_consumed, bwr.write_size, bwr.read_consumed, bwr.read_size);
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	......
	}
	ret = 0;
err:
	......
	return ret;
}
         函数首先是将用户传进来的参数拷贝到本地变量struct binder_write_read bwr中去。这里bwr.write_size > 0为true,因此,进入到binder_thread_write函数中,我们只关注BC_TRANSACTION部分的逻辑:

binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
					void __user *buffer, int size, signed long *consumed)
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
	        .....
		case BC_TRANSACTION:
		case BC_REPLY: {
			struct binder_transaction_data tr;

			if (copy_from_user(&tr, ptr, sizeof(tr)))
				return -EFAULT;
			ptr += sizeof(tr);
			binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
			break;
		}
		......
		}
		*consumed = ptr - buffer;
	}
	return 0;
}
         首先将用户传进来的transact参数拷贝到本地变量struct binder_transaction_data tr中去,接着调用binder_transaction函数进一步处理,这里我们忽略掉无关代码:

static void
binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
	struct binder_transaction *t;
	struct binder_work *tcomplete;
	size_t *offp, *off_end;
	struct binder_proc *target_proc;
	struct binder_thread *target_thread = NULL;
	struct binder_node *target_node = NULL;
	struct list_head *target_list;
	wait_queue_head_t *target_wait;
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;
	uint32_t return_error;

        ......

	if (reply) {
         ......
	} else {
		if (tr->target.handle) {
            ......
		} else {
			target_node = binder_context_mgr_node;
			if (target_node == NULL) {
				return_error = BR_DEAD_REPLY;
				goto err_no_context_mgr_node;
			}
		}
		......
		target_proc = target_node->proc;
		if (target_proc == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}
		......
	}
	if (target_thread) {
		......
	} else {
		target_list = &target_proc->todo;
		target_wait = &target_proc->wait;
	}
	
	......

	/* TODO: reuse incoming transaction for reply */
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
	......

	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}
	
	......

	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;
	else
		t->from = NULL;
	t->sender_euid = proc->tsk->cred->euid;
	t->to_proc = target_proc;
	t->to_thread = target_thread;
	t->code = tr->code;
	t->flags = tr->flags;
	t->priority = task_nice(current);
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);

	offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
		......
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
		......
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	......

	off_end = (void *)offp + tr->offsets_size;
	for (; offp < off_end; offp++) {
		struct flat_binder_object *fp;
		......
		fp = (struct flat_binder_object *)(t->buffer->data + *offp);
		switch (fp->type) {
		case BINDER_TYPE_BINDER:
		case BINDER_TYPE_WEAK_BINDER: {
			struct binder_ref *ref;
			struct binder_node *node = binder_get_node(proc, fp->binder);
			if (node == NULL) {
				node = binder_new_node(proc, fp->binder, fp->cookie);
				if (node == NULL) {
					return_error = BR_FAILED_REPLY;
					goto err_binder_new_node_failed;
				}
				node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
				node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
			}
			if (fp->cookie != node->cookie) {
				......
				goto err_binder_get_ref_for_node_failed;
			}
			ref = binder_get_ref_for_node(target_proc, node);
			if (ref == NULL) {
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_for_node_failed;
			}
			if (fp->type == BINDER_TYPE_BINDER)
				fp->type = BINDER_TYPE_HANDLE;
			else
				fp->type = BINDER_TYPE_WEAK_HANDLE;
			fp->handle = ref->desc;
			binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo);
			......
							  
		} break;
		......
		}
	}

	if (reply) {
		......
	} else if (!(t->flags & TF_ONE_WAY)) {
		BUG_ON(t->buffer->async_transaction != 0);
		t->need_reply = 1;
		t->from_parent = thread->transaction_stack;
		thread->transaction_stack = t;
	} else {
		......
	}
	t->work.type = BINDER_WORK_TRANSACTION;
	list_add_tail(&t->work.entry, target_list);
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	list_add_tail(&tcomplete->entry, &thread->todo);
	if (target_wait)
		wake_up_interruptible(target_wait);
	return;
    ......
}
        Note that the reply parameter passed in here is 0, and tr->target.handle is also 0. Therefore, target_proc, target_thread, target_node, target_list and target_wait take the following values:

target_node = binder_context_mgr_node;
target_proc = target_node->proc;
target_list = &target_proc->todo;
target_wait = &target_proc->wait; 
        Next, a pending transaction t and a to-be-completed work item tcomplete are allocated and initialized:

	/* TODO: reuse incoming transaction for reply */
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
	......

	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}
	
	......

	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;
	else
		t->from = NULL;
	t->sender_euid = proc->tsk->cred->euid;
	t->to_proc = target_proc;
	t->to_thread = target_thread;
	t->code = tr->code;
	t->flags = tr->flags;
	t->priority = task_nice(current);
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);

	offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
		......
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
		......
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
        Note that the transaction t is to be handed over to target_proc for processing, which in this scenario is Service Manager. Therefore, the following statement:

t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
        allocates a block of memory in Service Manager's process space to hold the parameters passed in from user space:

	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
		......
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
		......
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
        Since target_node is about to be used, its reference count is incremented:

if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);
        The for loop that follows processes the Binder objects embedded in the transmitted data. In our scenario there is one Binder entity of type BINDER_TYPE_BINDER, namely MediaPlayerService:

    switch (fp->type) {
    case BINDER_TYPE_BINDER:
    case BINDER_TYPE_WEAK_BINDER: {
	struct binder_ref *ref;
	struct binder_node *node = binder_get_node(proc, fp->binder);
	if (node == NULL) {
		node = binder_new_node(proc, fp->binder, fp->cookie);
		if (node == NULL) {
			return_error = BR_FAILED_REPLY;
			goto err_binder_new_node_failed;
		}
		node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
		node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
	}
	if (fp->cookie != node->cookie) {
		......
		goto err_binder_get_ref_for_node_failed;
	}
	ref = binder_get_ref_for_node(target_proc, node);
	if (ref == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_get_ref_for_node_failed;
	}
	if (fp->type == BINDER_TYPE_BINDER)
		fp->type = BINDER_TYPE_HANDLE;
	else
		fp->type = BINDER_TYPE_WEAK_HANDLE;
	fp->handle = ref->desc;
	binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo);
	......
							  
	} break;
        Since this is the first time MediaPlayerService is transferred through the Binder driver, the binder_get_node lookup for this Binder entity returns NULL, so binder_new_node creates a new node in proc; subsequent transfers can then use it directly.

        Now, because this Binder entity MediaPlayerService is being handed to target_proc, i.e. Service Manager, to manage — in other words, Service Manager is going to reference MediaPlayerService — binder_get_ref_for_node creates a reference to MediaPlayerService for it, and binder_inc_ref increments that reference's count so that it cannot be destroyed while still in use. Note that by this point the type of the flat_binder_object in t->buffer has been changed to BINDER_TYPE_HANDLE and its handle to ref->desc, different from what was originally sent: this flat_binder_object is ultimately destined for Service Manager, and Service Manager can only refer to this Binder entity through a handle value.
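        To make this change concrete, the following sketch (illustrative only, not driver code) shows the shape of the flat_binder_object before and after binder_transaction rewrites it:

/* Before binder_transaction (as written by MediaPlayerService's process):
 *     type   = BINDER_TYPE_BINDER
 *     binder = address of the service's weak reference, valid only in the sender
 *     cookie = address of the service object itself, valid only in the sender
 *
 * After binder_transaction (as delivered to Service Manager):
 *     type   = BINDER_TYPE_HANDLE
 *     handle = ref->desc, a reference number meaningful only inside target_proc
 */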

        Finally, the pending transaction is appended to the target_list queue:

list_add_tail(&t->work.entry, target_list);
        and the to-be-completed work item is appended to the current thread's todo list:

list_add_tail(&tcomplete->entry, &thread->todo);
        Now the target process has work to do, so it is woken up:

if (target_wait)
	wake_up_interruptible(target_wait);
       This is what wakes up the Service Manager process. Recall from the earlier article 浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路 that at this moment Service Manager is asleep inside binder_thread_read, having called wait_event_interruptible_exclusive.

       For now we set aside what happens after Service Manager wakes up, continue with MediaPlayerService's startup process, and come back to it later.

       Back in binder_ioctl, bwr.read_size > 0 is true, so we enter binder_thread_read:

static int
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
				   void  __user *buffer, int size, signed long *consumed, int non_block)
{
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);
	
	.......

	if (wait_for_proc_work) {
		.......
	} else {
		if (non_block) {
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));
	}
	
	......

	while (1) {
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;

		if (!list_empty(&thread->todo))
			w = list_first_entry(&thread->todo, struct binder_work, entry);
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		else {
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
				goto retry;
			break;
		}

		if (end - ptr < sizeof(tr) + 4)
			break;

		switch (w->type) {
		......
		case BINDER_WORK_TRANSACTION_COMPLETE: {
			cmd = BR_TRANSACTION_COMPLETE;
			if (put_user(cmd, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);

			binder_stat_br(proc, thread, cmd);
			if (binder_debug_mask & BINDER_DEBUG_TRANSACTION_COMPLETE)
				printk(KERN_INFO "binder: %d:%d BR_TRANSACTION_COMPLETE\n",
				proc->pid, thread->pid);

			list_del(&w->entry);
			kfree(w);
			binder_stats.obj_deleted[BINDER_STAT_TRANSACTION_COMPLETE]++;
											   } break;
		......
		}

		if (!t)
			continue;

		......
	}

done:
	......
	return 0;
}

        Here, thread->transaction_stack and thread->todo are both non-empty, so wait_for_proc_work is false. Because thread->todo is non-empty, binder_has_thread_work returns true, so although the thread calls wait_event_interruptible, it does not actually sleep, and execution continues.

        Since thread->todo is non-empty, the following statements execute:

if (!list_empty(&thread->todo))
     w = list_first_entry(&thread->todo, struct binder_work, entry);
        w->type is BINDER_WORK_TRANSACTION_COMPLETE, as set in the binder_transaction function above, so this branch executes:

    switch (w->type) {
    ......
    case BINDER_WORK_TRANSACTION_COMPLETE: {
	cmd = BR_TRANSACTION_COMPLETE;
	if (put_user(cmd, (uint32_t __user *)ptr))
		return -EFAULT;
	ptr += sizeof(uint32_t);

        ......
	list_del(&w->entry);
	kfree(w);
			
	} break;
	......
    }
        This removes w from thread->todo. Since t is NULL here, the while loop runs again; as there is nothing left to do this time, the function finally returns to binder_ioctl. Note that, in total, two integers have been written into the user-supplied buffer: BR_NOOP and BR_TRANSACTION_COMPLETE.
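        So after this first read, the buffer that MediaPlayerService handed in contains exactly two protocol words; the layout below follows directly from the code above:

/* readbuf as seen back in user space (bwr.read_consumed == 2 * sizeof(uint32_t)):
 *     [0] BR_NOOP                  -- written unconditionally when *consumed == 0
 *     [1] BR_TRANSACTION_COMPLETE  -- from the tcomplete work item queued earlier
 */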

        Before returning to user space, binder_ioctl copies the consumption counters back to user space:

if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
	ret = -EFAULT;
	goto err;
}
        Finally we return to IPCThreadState::talkWithDriver, where the following statements execute:

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        ......
        return NO_ERROR;
    }

        First, mOut's data is cleared:

    mOut.setDataSize(0);
        Then the size of the content that has just been read is recorded:

    mIn.setDataSize(bwr.read_consumed);
    mIn.setDataPosition(0);
        Control then returns to IPCThreadState::waitForResponse, which first reads one integer from mIn — this is BR_NOOP, a no-op that does nothing — and then calls IPCThreadState::talkWithDriver again.
        At this point, after the following statement executes:

const bool needRead = mIn.dataPosition() >= mIn.dataSize();
        needRead is false, because one integer, BR_TRANSACTION_COMPLETE, remains unread in mIn.

       Then, after the following statement executes:

const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
        outAvail equals 0. Consequently bwr.write_size and bwr.read_size are both 0, so IPCThreadState::talkWithDriver does nothing and returns straight back to IPCThreadState::waitForResponse, which reads the next integer from mIn — this time BR_TRANSACTION_COMPLETE:

switch (cmd) {
case BR_TRANSACTION_COMPLETE:
       if (!reply && !acquireResult) goto finish;
       break;
......
}
        reply is not NULL, so the loop in IPCThreadState::waitForResponse is not finished; execution continues and enters IPCThreadState::talkWithDriver once more.

        This time needRead is true while outAvail is still 0, so bwr.read_size is non-zero and bwr.write_size is 0. Then, via:

ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr)
        we enter binder_ioctl in the Binder driver. Since bwr.write_size is 0 and bwr.read_size is not, this time execution goes straight into binder_thread_read. Now thread->transaction_stack is non-NULL but thread->todo is empty, so via:

wait_event_interruptible(thread->wait, binder_has_thread_work(thread));
        the thread goes to sleep, waiting for Service Manager to wake it up.
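        The three talkWithDriver round trips we just walked through are all driven by a few lines at the top of IPCThreadState::talkWithDriver. Here is a condensed sketch of that logic (simplified from the real function, with error handling omitted):

binder_write_read bwr;
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
// Only write when the caller wants no data back, or nothing is left unread in mIn.
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

bwr.write_size = outAvail;
bwr.write_buffer = (long unsigned int)mOut.data();
if (doReceive && needRead) {
    bwr.read_size = mIn.dataCapacity();       // ask the driver to refill mIn
    bwr.read_buffer = (long unsigned int)mIn.data();
} else {
    bwr.read_size = 0;                        // unread commands stay in mIn
    bwr.read_buffer = 0;
}
if ((bwr.write_size == 0) && (bwr.read_size == 0))
    return NO_ERROR;                          // nothing to do, skip the ioctl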

        Now we can return to the Service Manager wake-up path, picking up where the earlier article 浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路 left off. At that point, Service Manager is asleep in binder_thread_read, having called wait_event_interruptible_exclusive. Woken up by the MediaPlayerService startup described above, it resumes executing binder_thread_read:

static int
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
				   void  __user *buffer, int size, signed long *consumed, int non_block)
{
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);

	......

	if (wait_for_proc_work) {
		......
		if (non_block) {
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_interruptible_exclusive(proc->wait, binder_has_proc_work(proc, thread));
	} else {
		......
	}
	
	......

	while (1) {
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;

		if (!list_empty(&thread->todo))
			w = list_first_entry(&thread->todo, struct binder_work, entry);
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		else {
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
				goto retry;
			break;
		}

		if (end - ptr < sizeof(tr) + 4)
			break;

		switch (w->type) {
		case BINDER_WORK_TRANSACTION: {
			t = container_of(w, struct binder_transaction, work);
									  } break;
		......
		}

		if (!t)
			continue;

		BUG_ON(t->buffer == NULL);
		if (t->buffer->target_node) {
			struct binder_node *target_node = t->buffer->target_node;
			tr.target.ptr = target_node->ptr;
			tr.cookie =  target_node->cookie;
			......
			cmd = BR_TRANSACTION;
		} else {
			......
		}
		tr.code = t->code;
		tr.flags = t->flags;
		tr.sender_euid = t->sender_euid;

		if (t->from) {
			struct task_struct *sender = t->from->proc->tsk;
			tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns);
		} else {
			tr.sender_pid = 0;
		}

		tr.data_size = t->buffer->data_size;
		tr.offsets_size = t->buffer->offsets_size;
		tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
		tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));

		if (put_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (copy_to_user(ptr, &tr, sizeof(tr)))
			return -EFAULT;
		ptr += sizeof(tr);

		......

		list_del(&t->work.entry);
		t->buffer->allow_user_free = 1;
		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
			t->to_parent = thread->transaction_stack;
			t->to_thread = thread;
			thread->transaction_stack = t;
		} else {
			t->buffer->transaction = NULL;
			kfree(t);
			binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++;
		}
		break;
	}

done:

    ......
	return 0;
}

        After Service Manager is woken up, it enters the while loop and begins processing transactions. Here wait_for_proc_work equals 1 and proc->todo is non-empty, so the first work item is taken from the proc->todo list:

w = list_first_entry(&proc->todo, struct binder_work, entry);
        From the description above we know this work item's type is BINDER_WORK_TRANSACTION, so the transaction is recovered with:

t = container_of(w, struct binder_transaction, work);
       Next, the data in transaction t is copied into the local variable struct binder_transaction_data tr:

if (t->buffer->target_node) {
	struct binder_node *target_node = t->buffer->target_node;
	tr.target.ptr = target_node->ptr;
	tr.cookie =  target_node->cookie;
	......
	cmd = BR_TRANSACTION;
} else {
	......
}
tr.code = t->code;
tr.flags = t->flags;
tr.sender_euid = t->sender_euid;

if (t->from) {
	struct task_struct *sender = t->from->proc->tsk;
	tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns);
} else {
	tr.sender_pid = 0;
}

tr.data_size = t->buffer->data_size;
tr.offsets_size = t->buffer->offsets_size;
tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));
        Here lies a crucially important point — the very essence of the Binder IPC mechanism:

tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));
        The address that t->buffer->data points to is in kernel space, but the data now has to be returned to Service Manager's user space, and user space cannot access kernel-space data, so some handling is needed here. What kind of handling? In object-oriented languages, object copying comes in deep and shallow flavors: a deep copy allocates a new block of memory and moves the original object's contents into it, whereas a shallow copy allocates no new space and merely creates a reference pointing at the original object. The Binder mechanism uses something akin to a shallow copy: a virtual address is set up in user space such that this user-space virtual address and the kernel-space virtual address t->buffer->data map to the same physical address. How can a user-space and a kernel-space virtual address map to the same physical address? See the previous article 浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路, which describes this in detail. Here it suffices to add the offset proc->user_buffer_offset to t->buffer->data to obtain the user-space virtual address corresponding to t->buffer->data. After adjusting tr.data.ptr.buffer, do not forget to adjust tr.data.ptr.offsets accordingly.
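        In other words (illustrative arithmetic only, not driver code; the invariant itself is established by binder_mmap):

/* The same physical pages are mapped twice, a fixed distance apart: */
void *kernel_va = t->buffer->data;                            /* kernel mapping */
void *user_va = (char *)kernel_va + proc->user_buffer_offset; /* user mapping   */
/* kernel_va and user_va refer to the same physical memory, so handing
 * user_va to Service Manager delivers the data without a second copy. */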

        Next, the contents of tr are copied into the user-supplied buffer, whose address the pointer ptr refers to:

if (put_user(cmd, (uint32_t __user *)ptr))
	return -EFAULT;
ptr += sizeof(uint32_t);
if (copy_to_user(ptr, &tr, sizeof(tr)))
	return -EFAULT;
ptr += sizeof(tr);
         As can be seen, only a shallow copy is performed here: what tr.data.ptr.buffer and tr.data.ptr.offsets point to is never copied again.

         Finally, since this transaction has now been handled, it must be removed from the todo list:

list_del(&t->work.entry);
t->buffer->allow_user_free = 1;
if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
	t->to_parent = thread->transaction_stack;
	t->to_thread = thread;
	thread->transaction_stack = t;
} else {
	t->buffer->transaction = NULL;
	kfree(t);
	binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++;
}
         Note that cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY) is true here, meaning that although the driver is done with this transaction, it still has to wait for Service Manager to finish and acknowledge it — that is, a reply is expected — so the current transaction t is pushed onto the head of the thread->transaction_stack stack:

t->to_parent = thread->transaction_stack;
t->to_thread = thread;
thread->transaction_stack = t;
         Had cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY) been false, no reply would be expected and the transaction t would simply be freed.
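         The effect of the push above is that transaction_stack forms a stack threaded through the transactions themselves, illustrated below:

/*   thread->transaction_stack --> t --> t->to_parent --> older transaction --> NULL
 *
 * When Service Manager later sends BC_REPLY, the driver pops t off this stack
 * and follows t->from to find the MediaPlayerService thread awaiting the reply.
 */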

         The while loop then exits via a break, and control returns to binder_ioctl:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	......

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		......
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		......
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
	    }
	......
	default:
		ret = -EINVAL;
		goto err;
	}
	ret = 0;
err:
	......
	return ret;
}
         After binder_thread_read returns, binder_ioctl checks whether proc->todo still has transactions pending; if so, it wakes up the threads sleeping on the proc->wait queue to handle them. Finally, the contents of the local variable struct binder_write_read bwr are copied back into the user-supplied buffer, and the call returns.

        This returns us to the binder_loop function in frameworks/base/cmds/servicemanager/binder.c:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;
    
    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            LOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
       The returned data is all in readbuf, and binder_parse is then called to parse it:

int binder_parse(struct binder_state *bs, struct binder_io *bio,
				 uint32_t *ptr, uint32_t size, binder_handler func)
{
	int r = 1;
	uint32_t *end = ptr + (size / 4);

	while (ptr < end) {
		uint32_t cmd = *ptr++;
        ......
		case BR_TRANSACTION: {
			struct binder_txn *txn = (void *) ptr;
			if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
				LOGE("parse: txn too small!\n");
				return -1;
			}
			binder_dump_txn(txn);
			if (func) {
				unsigned rdata[256/4];
				struct binder_io msg;
				struct binder_io reply;
				int res;

				bio_init(&reply, rdata, sizeof(rdata), 4);
				bio_init_from_txn(&msg, txn);
				res = func(bs, txn, &msg, &reply);
				binder_send_reply(bs, &reply, txn->data, res);
			}
			ptr += sizeof(*txn) / sizeof(uint32_t);
			break;
							 }
		......
		default:
			LOGE("parse: OOPS %d\n", cmd);
			return -1;
		}
	}

	return r;
}
        First, the data read from the Binder driver is interpreted as a struct binder_txn, saved in the local variable txn. struct binder_txn is defined in frameworks/base/cmds/servicemanager/binder.h:

struct binder_txn
{
    void *target;
    void *cookie;
    uint32_t code;
    uint32_t flags;

    uint32_t sender_pid;
    uint32_t sender_euid;

    uint32_t data_size;
    uint32_t offs_size;
    void *data;
    void *offs;
};
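       This is simply servicemanager's private mirror of the kernel's struct binder_transaction_data that binder_thread_read filled in above; the fields correspond one to one:

/*   binder_txn.target      <->  binder_transaction_data.target.ptr
 *   binder_txn.cookie      <->  .cookie
 *   binder_txn.code        <->  .code
 *   binder_txn.flags       <->  .flags
 *   binder_txn.sender_pid  <->  .sender_pid   (sender_euid likewise)
 *   binder_txn.data_size   <->  .data_size
 *   binder_txn.offs_size   <->  .offsets_size
 *   binder_txn.data        <->  .data.ptr.buffer
 *   binder_txn.offs        <->  .data.ptr.offsets
 */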
       The function also uses another data structure, struct binder_io, likewise defined in frameworks/base/cmds/servicemanager/binder.h:

struct binder_io
{
    char *data;            /* pointer to read/write from */
    uint32_t *offs;        /* array of offsets */
    uint32_t data_avail;   /* bytes available in data buffer */
    uint32_t offs_avail;   /* entries available in offsets array */

    char *data0;           /* start of data buffer */
    uint32_t *offs0;       /* start of offsets buffer */
    uint32_t flags;
    uint32_t unused;
};
       Reading on, the function calls bio_init to initialize the reply variable:

void bio_init(struct binder_io *bio, void *data,
              uint32_t maxdata, uint32_t maxoffs)
{
    uint32_t n = maxoffs * sizeof(uint32_t);

    if (n > maxdata) {
        bio->flags = BIO_F_OVERFLOW;
        bio->data_avail = 0;
        bio->offs_avail = 0;
        return;
    }

    bio->data = bio->data0 = data + n;
    bio->offs = bio->offs0 = data;
    bio->data_avail = maxdata - n;
    bio->offs_avail = maxoffs;
    bio->flags = 0;
}
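       For the call above, bio_init(&reply, rdata, sizeof(rdata), 4) carves the 256-byte rdata buffer into an offsets area followed by a data area:

/*   n = 4 * sizeof(uint32_t) = 16 bytes are reserved for offsets, so:
 *
 *   rdata:  [ offs[0] .. offs[3] | data area, 256 - 16 = 240 bytes ]
 *            ^                     ^
 *            reply.offs0           reply.data0
 *
 *   reply.offs_avail = 4, reply.data_avail = 240, reply.flags = 0
 */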
       It then calls bio_init_from_txn to initialize the msg variable:

void bio_init_from_txn(struct binder_io *bio, struct binder_txn *txn)
{
    bio->data = bio->data0 = txn->data;
    bio->offs = bio->offs0 = txn->offs;
    bio->data_avail = txn->data_size;
    bio->offs_avail = txn->offs_size / 4;
    bio->flags = BIO_F_SHARED;
}
      Finally, the real processing is done by the function pointer func passed in as a parameter, which here is the svcmgr_handler function defined in frameworks/base/cmds/servicemanager/service_manager.c:

int svcmgr_handler(struct binder_state *bs,
				   struct binder_txn *txn,
				   struct binder_io *msg,
				   struct binder_io *reply)
{
	struct svcinfo *si;
	uint16_t *s;
	unsigned len;
	void *ptr;
	uint32_t strict_policy;

	if (txn->target != svcmgr_handle)
		return -1;

	// Equivalent to Parcel::enforceInterface(), reading the RPC
	// header with the strict mode policy mask and the interface name.
	// Note that we ignore the strict_policy and don't propagate it
	// further (since we do no outbound RPCs anyway).
	strict_policy = bio_get_uint32(msg);
	s = bio_get_string16(msg, &len);
	if ((len != (sizeof(svcmgr_id) / 2)) ||
		memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
			fprintf(stderr,"invalid id %s\n", str8(s));
			return -1;
	}

	switch(txn->code) {
	......
	case SVC_MGR_ADD_SERVICE:
		s = bio_get_string16(msg, &len);
		ptr = bio_get_ref(msg);
		if (do_add_service(bs, s, len, ptr, txn->sender_euid))
			return -1;
		break;
	......
	}

	bio_put_uint32(reply, 0);
	return 0;
}
         Recall that in BpServiceManager::addService, the parameters handed to the Binder driver were written as:

writeInt32(IPCThreadState::self()->getStrictModePolicy() | STRICT_MODE_PENALTY_GATHER);
writeString16("android.os.IServiceManager");
writeString16("media.player");
writeStrongBinder(new MediaPlayerService());
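         The data area of the transaction that reaches svcmgr_handler is therefore laid out in that same order, with the offsets array pointing at the Binder object:

/*   int32     strict mode policy word
 *   string16  "android.os.IServiceManager"  -- the interface token
 *   string16  "media.player"                -- the service name
 *   flat_binder_object                      -- MediaPlayerService itself; its
 *                                              position is recorded in offs[0]
 */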
         The following statements:

strict_policy = bio_get_uint32(msg);
s = bio_get_string16(msg, &len);
s = bio_get_string16(msg, &len);
ptr = bio_get_ref(msg);
         read them back out one by one, in the same order. Here we only need to look at the implementation of bio_get_ref. First, the definition of the struct binder_object data structure:

struct binder_object
{
    uint32_t type;
    uint32_t flags;
    void *pointer;
    void *cookie;
};
        This structure corresponds exactly to struct flat_binder_object.

        Now the bio_get_ref implementation:

void *bio_get_ref(struct binder_io *bio)
{
    struct binder_object *obj;

    obj = _bio_get_obj(bio);
    if (!obj)
        return 0;

    if (obj->type == BINDER_TYPE_HANDLE)
        return obj->pointer;

    return 0;
}
       We will not step through _bio_get_obj in full; its job is to fetch from the binder_io the first binder_object that has not yet been consumed (a sketch follows below). In this scenario, that is the flat_binder_object we sent at the very beginning to represent MediaPlayerService. The original flat_binder_object had type BINDER_TYPE_BINDER, with binder pointing to the address of a weak reference to MediaPlayerService. As noted earlier, inside the Binder driver this flat_binder_object's type was changed to BINDER_TYPE_HANDLE and its handle to a handle value. That handle value is exactly what obj->pointer now holds.
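       For reference, _bio_get_obj is only a few lines. Paraphrased from the version in frameworks/base/cmds/servicemanager/binder.c, it checks that the current read position is recorded in the offsets array and, if so, consumes one binder_object:

static struct binder_object *_bio_get_obj(struct binder_io *bio)
{
    unsigned n;
    unsigned off = bio->data - bio->data0;   /* current read offset */

    /* An object may only be read where the offsets array says one starts. */
    for (n = 0; n < bio->offs_avail; n++) {
        if (bio->offs[n] == off)
            return bio_get(bio, sizeof(struct binder_object));
    }

    bio->data_avail = 0;
    bio->flags |= BIO_F_OVERFLOW;
    return 0;
}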

        Back in svcmgr_handler, do_add_service is called for further processing:

int do_add_service(struct binder_state *bs,
                   uint16_t *s, unsigned len,
                   void *ptr, unsigned uid)
{
    struct svcinfo *si;
//    LOGI("add_service('%s',%p) uid=%d\n", str8(s), ptr, uid);

    if (!ptr || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(uid, s)) {
        LOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n",
             str8(s), ptr, uid);
        return -1;
    }

    si = find_svc(s, len);
    if (si) {
        if (si->ptr) {
            LOGE("add_service('%s',%p) uid=%d - ALREADY REGISTERED\n",
                 str8(s), ptr, uid);
            return -1;
        }
        si->ptr = ptr;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            LOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n",
                 str8(s), ptr, uid);
            return -1;
        }
        si->ptr = ptr;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = svcinfo_death;
        si->death.ptr = si;
        si->next = svclist;
        svclist = si;
    }

    binder_acquire(bs, ptr);
    binder_link_to_death(bs, ptr, &si->death);
    return 0;
}
        This function's implementation is straightforward: it records the reference to the MediaPlayerService Binder entity — chiefly its name and handle value — in a struct svcinfo, and inserts it at the head of the svclist linked list. Thus, when a Client later asks Service Manager for a service interface, Service Manager can return the corresponding handle value given just the service name.
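        For completeness, struct svcinfo (defined in service_manager.c) is essentially a singly-linked list node keyed by the UTF-16 service name:

struct svcinfo
{
    struct svcinfo *next;
    void *ptr;                  /* the handle value obtained via bio_get_ref */
    struct binder_death death;  /* cookie used for death notifications       */
    unsigned len;
    uint16_t name[0];           /* the service name, e.g. "media.player"     */
};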

        Once this function completes, control returns to svcmgr_handler, which at the end writes an error code of 0 into the reply variable to indicate that everything went well:

bio_put_uint32(reply, 0);

       After svcmgr_handler finishes, control returns to binder_parse, which executes:

binder_send_reply(bs, &reply, txn->data, res);
       Let's look at the implementation of binder_send_reply; as the name suggests, it tells the Binder driver that the task the driver handed over has been completed.

void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       void *buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        void *buffer;
        uint32_t cmd_reply;
        struct binder_txn txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;
    data.txn.target = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offs_size = 0;
        data.txn.data = &status;
        data.txn.offs = 0;
    } else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offs_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data = reply->data0;
        data.txn.offs = reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));
}
       From this we can see that binder_send_reply asks the Binder driver to execute two commands, BC_FREE_BUFFER and BC_REPLY. The former frees the space previously allocated in binder_transaction, at address buffer_to_free — an address the Binder driver itself converted from its own kernel-space address into a user-space address before handing it to Service Manager, so when the driver gets it back it knows how to free that space. The latter tells MediaPlayerService that its addService operation has completed, with error code 0, stored in data.txn.data.
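       For the addService reply built above, the success branch therefore describes a 4-byte payload with no Binder objects in it:

/*   data.cmd_free      = BC_FREE_BUFFER
 *   data.buffer        = buffer_to_free  (the txn->data received with BR_TRANSACTION)
 *   data.cmd_reply     = BC_REPLY
 *   data.txn.data      = reply->data0    -- the single uint32 0 written by
 *                                           bio_put_uint32(reply, 0)
 *   data.txn.data_size = reply->data - reply->data0 = 4
 *   data.txn.offs_size = 0
 */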

       Next, the binder_write function:

int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    struct binder_write_read bwr;
    int res;
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
       As can be seen, there is only a write operation and no read, i.e. read_size is 0.

       This is again an ioctl BINDER_WRITE_READ operation. It goes straight into the driver's binder_ioctl function, which executes the BINDER_WRITE_READ command; we will not repeat those details here.

       Finally, execution proceeds from binder_ioctl into binder_thread_write; let's first look at the first command, BC_FREE_BUFFER:

int
binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
					void __user *buffer, int size, signed long *consumed)
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		......
		case BC_FREE_BUFFER: {
			void __user *data_ptr;
			struct binder_buffer *buffer;

			if (get_user(data_ptr, (void * __user *)ptr))
				return -EFAULT;
			ptr += sizeof(void *);

			buffer = binder_buffer_lookup(proc, data_ptr);
			if (buffer == NULL) {
				binder_user_error("binder: %d:%d "
					"BC_FREE_BUFFER u%p no match\n",
					proc->pid, thread->pid, data_ptr);
				break;
			}
			if (!buffer->allow_user_free) {
				binder_user_error("binder: %d:%d "
					"BC_FREE_BUFFER u%p matched "
					"unreturned buffer\n",
					proc->pid, thread->pid, data_ptr);
				break;
			}
			if (binder_debug_mask & BINDER_DEBUG_FREE_BUFFER)
				printk(KERN_INFO "binder: %d:%d BC_FREE_BUFFER u%p found buffer %d for %s transaction\n",
				proc->pid, thread->pid, data_ptr, buffer->debug_id,
				buffer->transaction ? "active" : "finished");

			if (buffer->transaction) {
				buffer->transaction->buffer = NULL;
				buffer->transaction = NULL;
			}
			if (buffer->async_transaction && buffer->target_node) {
				BUG_ON(!buffer->target_node->has_async_transaction);
				if (list_empty(&buffer->target_node->async_todo))
					buffer->target_node->has_async_transaction = 0;
				else
					list_move_tail(buffer->target_node->async_todo.next, &thread->todo);
			}
			binder_transaction_buffer_release(proc, buffer, NULL);
			binder_free_buf(proc, buffer);
			break;
							 }

		......
		*consumed = ptr - buffer;
	}
	return 0;
}
       First, look at this statement:
get_user(data_ptr, (void * __user *)ptr)
       This obtains the user-space address of the buffer to be freed. The following statement then finds the struct binder_buffer corresponding to that address:

buffer = binder_buffer_lookup(proc, data_ptr);
       Because this space was allocated earlier in binder_transaction, the lookup is guaranteed to find it here.
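       For reference, binder_buffer_lookup first converts the user-space address back into the corresponding kernel-space struct binder_buffer address by subtracting proc->user_buffer_offset (the constant difference between the two mappings), then searches the red-black tree of buffers allocated to this process. A simplified sketch of its logic:

	static struct binder_buffer *
	binder_buffer_lookup(struct binder_proc *proc, void __user *user_ptr)
	{
		struct rb_node *n = proc->allocated_buffers.rb_node;
		struct binder_buffer *buffer;
		/* undo the kernel-to-user address translation done at delivery time */
		struct binder_buffer *kern_ptr = user_ptr - proc->user_buffer_offset
			- offsetof(struct binder_buffer, data);

		while (n) {
			buffer = rb_entry(n, struct binder_buffer, rb_node);
			if (kern_ptr < buffer)
				n = n->rb_left;
			else if (kern_ptr > buffer)
				n = n->rb_right;
			else
				return buffer;
		}
		return NULL;
	}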

       Finally, the memory can be released:

binder_transaction_buffer_release(proc, buffer, NULL);
binder_free_buf(proc, buffer);
       Now let's look at the other command, BC_REPLY:

int
binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
					void __user *buffer, int size, signed long *consumed)
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		......
		case BC_TRANSACTION:
		case BC_REPLY: {
			struct binder_transaction_data tr;

			if (copy_from_user(&tr, ptr, sizeof(tr)))
				return -EFAULT;
			ptr += sizeof(tr);
			binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
			break;
					   }

		......
		*consumed = ptr - buffer;
	}
	return 0;
}
       This takes us into the binder_transaction function once again:

static void
binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
	struct binder_transaction *t;
	struct binder_work *tcomplete;
	size_t *offp, *off_end;
	struct binder_proc *target_proc;
	struct binder_thread *target_thread = NULL;
	struct binder_node *target_node = NULL;
	struct list_head *target_list;
	wait_queue_head_t *target_wait;
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;
	uint32_t return_error;

	......

	if (reply) {
		in_reply_to = thread->transaction_stack;
		if (in_reply_to == NULL) {
			......
			return_error = BR_FAILED_REPLY;
			goto err_empty_call_stack;
		}
		binder_set_nice(in_reply_to->saved_priority);
		if (in_reply_to->to_thread != thread) {
			.......
			goto err_bad_call_stack;
		}
		thread->transaction_stack = in_reply_to->to_parent;
		target_thread = in_reply_to->from;
		if (target_thread == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}
		if (target_thread->transaction_stack != in_reply_to) {
			......
			return_error = BR_FAILED_REPLY;
			in_reply_to = NULL;
			target_thread = NULL;
			goto err_dead_binder;
		}
		target_proc = target_thread->proc;
	} else {
		......
	}
	if (target_thread) {
		e->to_thread = target_thread->pid;
		target_list = &target_thread->todo;
		target_wait = &target_thread->wait;
	} else {
		......
	}


	/* TODO: reuse incoming transaction for reply */
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
	

	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}

	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;
	else
		t->from = NULL;
	t->sender_euid = proc->tsk->cred->euid;
	t->to_proc = target_proc;
	t->to_thread = target_thread;
	t->code = tr->code;
	t->flags = tr->flags;
	t->priority = task_nice(current);
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);

	offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
		binder_user_error("binder: %d:%d got transaction with invalid "
			"data ptr\n", proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
		binder_user_error("binder: %d:%d got transaction with invalid "
			"offsets ptr\n", proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	
    ......

	if (reply) {
		BUG_ON(t->buffer->async_transaction != 0);
		binder_pop_transaction(target_thread, in_reply_to);
	} else if (!(t->flags & TF_ONE_WAY)) {
		......
	} else {
		......
	}
	t->work.type = BINDER_WORK_TRANSACTION;
	list_add_tail(&t->work.entry, target_list);
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	list_add_tail(&tcomplete->entry, &thread->todo);
	if (target_wait)
		wake_up_interruptible(target_wait);
	return;
    ......
}
       Note that reply is 1 here; we skip over the unrelated code.

       Recall that earlier, when Service Manager was woken up in the binder_thread_read function by MediaPlayerService's process, it left the transaction it had just handled on thread->transaction_stack at the end:

if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
	t->to_parent = thread->transaction_stack;
	t->to_thread = thread;
	thread->transaction_stack = t;
} 
       So here, the first step is to take that binder_transaction back and store it in the local variable in_reply_to:

in_reply_to = thread->transaction_stack;
       Through in_reply_to we can then obtain the thread and process that originally issued this transaction request:

target_thread = in_reply_to->from;
target_proc = target_thread->proc;
        Then target_list and target_wait are obtained:

target_list = &target_thread->todo;
target_wait = &target_thread->wait;
       The following section of code:

	/* TODO: reuse incoming transaction for reply */
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
	

	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}

	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;
	else
		t->from = NULL;
	t->sender_euid = proc->tsk->cred->euid;
	t->to_proc = target_proc;
	t->to_thread = target_thread;
	t->code = tr->code;
	t->flags = tr->flags;
	t->priority = task_nice(current);
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);

	offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
		binder_user_error("binder: %d:%d got transaction with invalid "
			"data ptr\n", proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
		binder_user_error("binder: %d:%d got transaction with invalid "
			"offsets ptr\n", proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
          We have already analyzed this code above, so we will not repeat it. One thing to note, however, is that target_node is NULL here, so t->buffer->target_node is also NULL.

          The function normally contains a for loop that processes the Binder objects embedded in the data; since there are no Binder objects here, it is skipped. We then reach this statement:

binder_pop_transaction(target_thread, in_reply_to);
          Let's see what it does:

static void
binder_pop_transaction(
	struct binder_thread *target_thread, struct binder_transaction *t)
{
	if (target_thread) {
		BUG_ON(target_thread->transaction_stack != t);
		BUG_ON(target_thread->transaction_stack->from != target_thread);
		target_thread->transaction_stack =
			target_thread->transaction_stack->from_parent;
		t->from = NULL;
	}
	t->need_reply = 0;
	if (t->buffer)
		t->buffer->transaction = NULL;
	kfree(t);
	binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++;
}
        At this point, the in_reply_to transaction is no longer needed, so it is deleted.

        Back in the binder_transaction function:

t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list);
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);
         As before, t and tcomplete are placed on target_list and thread->todo respectively. Here, target_list is the todo queue of the MediaPlayerService Server main thread that originally called IServiceManager::addService, while thread->todo belongs to the Service Manager thread that is replying to the IServiceManager::addService request.
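         Schematically, after these two statements the work queues look like this:

	/*
	 *  MediaPlayerService main thread todo:  t          (BINDER_WORK_TRANSACTION)
	 *  Service Manager reply thread todo:    tcomplete  (BINDER_WORK_TRANSACTION_COMPLETE)
	 */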

        Finally, the thread waiting on the target_wait queue is woken up. This is the MediaPlayerService Server main thread that originally called IServiceManager::addService; it went to sleep in binder_thread_read on thread->wait, which is exactly this target_wait:

if (target_wait)
	wake_up_interruptible(target_wait);
        With that, Service Manager has finished replying to the IServiceManager::addService request, and it returns to the binder_loop function in frameworks/base/cmds/servicemanager/binder.c to wait for the next Client request. In fact, when Service Manager gets back to binder_loop and executes the ioctl again, it re-enters binder_thread_read and finds that thread->todo is not empty, because we just called:

list_add_tail(&tcomplete->entry, &thread->todo);
          which put a work item tcomplete on thread->todo. Since this tcomplete has type BINDER_WORK_TRANSACTION_COMPLETE, the Binder driver performs the following:

switch (w->type) {
case BINDER_WORK_TRANSACTION_COMPLETE: {
	cmd = BR_TRANSACTION_COMPLETE;
	if (put_user(cmd, (uint32_t __user *)ptr))
		return -EFAULT;
	ptr += sizeof(uint32_t);

	list_del(&w->entry);
	kfree(w);
	
	} break;
	......
}
        Only after binder_loop has finished this ioctl call will the next ioctl call take it back into the Binder driver and put it to sleep, waiting for the next Client request.
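        Schematically, the read buffer that binder_loop gets back from this pass through binder_thread_read contains just two return commands:

	BR_NOOP | BR_TRANSACTION_COMPLETE

        binder_parse treats both of these as no-ops, so the loop goes straight back into the blocking BINDER_WRITE_READ ioctl.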

        As mentioned above, the MediaPlayerService Server main thread that called IServiceManager::addService has now been woken up, so it resumes executing the binder_thread_read function:

static int
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
				   void  __user *buffer, int size, signed long *consumed, int non_block)
{
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);

	......

	if (wait_for_proc_work) {
		......
	} else {
		if (non_block) {
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));
	}
	
	......

	while (1) {
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;

		if (!list_empty(&thread->todo))
			w = list_first_entry(&thread->todo, struct binder_work, entry);
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		else {
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
				goto retry;
			break;
		}

		......

		switch (w->type) {
		case BINDER_WORK_TRANSACTION: {
			t = container_of(w, struct binder_transaction, work);
									  } break;
		......
		}

		if (!t)
			continue;

		BUG_ON(t->buffer == NULL);
		if (t->buffer->target_node) {
			......
		} else {
			tr.target.ptr = NULL;
			tr.cookie = NULL;
			cmd = BR_REPLY;
		}
		tr.code = t->code;
		tr.flags = t->flags;
		tr.sender_euid = t->sender_euid;

		if (t->from) {
			......
		} else {
			tr.sender_pid = 0;
		}

		tr.data_size = t->buffer->data_size;
		tr.offsets_size = t->buffer->offsets_size;
		tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
		tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));

		if (put_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (copy_to_user(ptr, &tr, sizeof(tr)))
			return -EFAULT;
		ptr += sizeof(tr);

		......

		list_del(&t->work.entry);
		t->buffer->allow_user_free = 1;
		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
			......
		} else {
			t->buffer->transaction = NULL;
			kfree(t);
			binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++;
		}
		break;
	}

done:
	......
	return 0;
}
         In the while loop, w is taken from thread->todo; w->type is BINDER_WORK_TRANSACTION, so t is obtained. From the above we know that Service Manager returned a 0, written into t->buffer->data. Now proc->user_buffer_offset is added to t->buffer->data to obtain the user-space address, which is stored in tr.data.ptr.buffer, so that user space can access this return code. Since cmd is not BR_TRANSACTION, t can be deleted at this point, as it is no longer needed.
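         This translation relies on the central trick of the Binder driver's buffer management: the same physical pages are mapped both into the kernel and into the target process's address space, and the two mappings differ by a constant recorded when the process mmap'ed /dev/binder. Roughly, as a sketch of the relevant lines in this driver version:

	/* recorded once in binder_mmap(): */
	proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;

	/* so handing a kernel buffer to user space is a single addition: */
	tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;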

         After this function completes, execution returns to the binder_ioctl function, which executes the following statement to return the data to user space:

if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
    ret = -EFAULT;
    goto err;
}
         Execution then returns to the user-space IPCThreadState::talkWithDriver function and finally to IPCThreadState::waitForResponse, eventually reaching the following statements:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
	int32_t cmd;
	int32_t err;

	while (1) {
		if ((err=talkWithDriver()) < NO_ERROR) break;
		
		......

		cmd = mIn.readInt32();

		......

		switch (cmd) {
		......
		case BR_REPLY:
			{
				binder_transaction_data tr;
				err = mIn.read(&tr, sizeof(tr));
				LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
				if (err != NO_ERROR) goto finish;

				if (reply) {
					if ((tr.flags & TF_STATUS_CODE) == 0) {
						reply->ipcSetDataReference(
							reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
							tr.data_size,
							reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
							tr.offsets_size/sizeof(size_t),
							freeBuffer, this);
					} else {
						......
					}
				} else {
					......
				}
			}
			goto finish;

		......
		}
	}

finish:
	......
	return err;
}

        Note that tr.flags equals 0 here; it was set that way in the binder_send_reply function above. The result is finally saved into reply:

reply->ipcSetDataReference(
       reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
       tr.data_size,
       reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
       tr.offsets_size/sizeof(size_t),
       freeBuffer, this);
       We will not go through this function in detail here; interested readers can study it on their own.
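       Conceptually, though, all ipcSetDataReference does is point the Parcel directly at the kernel-delivered buffer instead of copying it, and remember freeBuffer as the release function, so that a BC_FREE_BUFFER command is sent back to the driver once the Parcel is done with the data. A rough sketch, simplified from the actual implementation:

    void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
        const size_t* objects, size_t objectsCount,
        release_func relFunc, void* relCookie)
    {
        freeDataNoInit();                      // drop any storage the Parcel owned
        mData = const_cast<uint8_t*>(data);    // adopt the kernel buffer in place
        mDataSize = mDataCapacity = dataSize;
        mObjects = const_cast<size_t*>(objects);
        mObjectsSize = objectsCount;
        mOwner = relFunc;                      // freeBuffer: issues BC_FREE_BUFFER later
        mOwnerCookie = relCookie;
    }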

       Returning level by level from here, we finally arrive back in the MediaPlayerService::instantiate function.

       At this point, IServiceManager::addService has finally finished executing. The process is very complex, but understanding it thoroughly goes a long way towards understanding the design and implementation of the Binder mechanism. Here is a summary of the interaction between MediaPlayerService, Service Manager and the Binder driver during the IServiceManager::addService call:


        Back in the main function of frameworks/base/media/mediaserver/main_mediaserver.cpp, two more functions remain to be executed:

    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
        First, look at the implementation of ProcessState::startThreadPool:

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
       This calls spawnPooledThread:

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        int32_t s = android_atomic_add(1, &mThreadPoolSeq);
        char buf[32];
        sprintf(buf, "Binder Thread #%d", s);
        LOGV("Spawning new pooled thread, name=%s\n", buf);
        sp<Thread> t = new PoolThread(isMain);
        t->run(buf);
    }
}
       This essentially creates a thread. PoolThread inherits from the Thread class, defined in frameworks/base/libs/utils/Threads.cpp, whose run function eventually calls the subclass's threadLoop function, which here is PoolThread::threadLoop:

    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }
       Just like the main function in frameworks/base/media/mediaserver/main_mediaserver.cpp, this ends up calling the IPCThreadState::joinThreadPool function; the only difference is that one passes true while the other uses the default value false. Let's look at the implementation of this function:

void IPCThreadState::joinThreadPool(bool isMain)
{
	LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

	mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

	......

	status_t result;
	do {
		int32_t cmd;

		.......

		// now get the next command to be processed, waiting if necessary
		result = talkWithDriver();
		if (result >= NO_ERROR) {
			size_t IN = mIn.dataAvail();
			if (IN < sizeof(int32_t)) continue;
			cmd = mIn.readInt32();
			......

			result = executeCommand(cmd);
		}

		......
	} while (result != -ECONNREFUSED && result != -EBADF);

	.......

	mOut.writeInt32(BC_EXIT_LOOPER);
	talkWithDriver(false);
}
        After that, the function sits in an infinite loop, interacting with the Binder driver through talkWithDriver. In effect, it calls talkWithDriver to wait for Client requests and then calls executeCommand to handle them; inside executeCommand, BBinder::transact is eventually invoked to actually process the Client's request:

status_t IPCThreadState::executeCommand(int32_t cmd)
{
	BBinder* obj;
	RefBase::weakref_type* refs;
	status_t result = NO_ERROR;

	switch (cmd) {
	......

	case BR_TRANSACTION:
		{
			binder_transaction_data tr;
			result = mIn.read(&tr, sizeof(tr));
			
			......

			Parcel reply;
			
			......

			if (tr.target.ptr) {
				sp<BBinder> b((BBinder*)tr.cookie);
				const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
				if (error < NO_ERROR) reply.setError(error);

			} else {
				const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
				if (error < NO_ERROR) reply.setError(error);
			}

			......
		}
		break;

	.......
	}

	if (result != NO_ERROR) {
		mLastError = result;
	}

	return result;
}
        Next, look at the implementation of BBinder::transact:

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}
       This ends up calling the onTransact function to do the processing. In this scenario, BnMediaPlayerService inherits from BBinder and overrides onTransact, so what actually gets called is BnMediaPlayerService::onTransact, defined in frameworks/base/media/libmedia/IMediaPlayerService.cpp:

status_t BnMediaPlayerService::onTransact(
	uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
	switch(code) {
		case CREATE_URL: {
			......
						 } break;
		case CREATE_FD: {
			......
						} break;
		case DECODE_URL: {
			......
						 } break;
		case DECODE_FD: {
			......
						} break;
		case CREATE_MEDIA_RECORDER: {
			......
									} break;
		case CREATE_METADATA_RETRIEVER: {
			......
										} break;
		case GET_OMX: {
			......
					  } break;
		default:
			return BBinder::onTransact(code, data, reply, flags);
	}
}
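       All of the elided cases follow the same unmarshal-call-marshal pattern. Purely as an illustration, not necessarily the exact code, a case such as GET_OMX would look roughly like this:

		case GET_OMX: {
			CHECK_INTERFACE(IMediaPlayerService, data, reply); // verify the RPC header
			sp<IOMX> omx = getOMX();                   // dispatch to the concrete service
			reply->writeStrongBinder(omx->asBinder()); // marshal the result Binder object
			return NO_ERROR;
		} break;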

       At this point, using MediaPlayerService as the example, we have walked through the complete startup process of a Server in Android's Binder IPC mechanism. Once the Server is up, it waits in an infinite loop for Client requests. In the next article we will look at how a Client obtains a Server's remote interface through the Service Manager remote interface, and then invokes that remote interface to use the services the Server provides. Stay tuned.


Source Code Analysis of How a Client Obtains a Server's Remote Interface in Android's Binder IPC Mechanism


        In the previous article, we analyzed how a Server in Android's Binder IPC mechanism uses Service Manager's addService interface during startup to add itself to the Service Manager daemon for management. In this article, we will go deep into the Binder driver source code to analyze how a Client obtains a Server's remote interface through Service Manager's getService interface. Only after obtaining the Server's remote interface can the Client go on to invoke the services the Server provides.

        Here we again use the media player built into Android as the example to show how a Client obtains the remote interface of the MediaPlayerService Server through IServiceManager::getService. We assume the reader has read the previous three articles, on how Service Manager becomes the Binder daemon, on how Server and Client obtain the Service Manager interface, and on the Server startup process; that is, we assume Service Manager and MediaPlayerService have already started, and Service Manager is now waiting for Client requests.

        The Client in our example is MediaPlayer, declared and implemented in frameworks/base/include/media/mediaplayer.h and frameworks/base/media/libmedia/mediaplayer.cpp. MediaPlayer inherits from the IMediaDeathNotifier class, declared and implemented in frameworks/base/include/media/IMediaDeathNotifier.h and frameworks/base/media/libmedia/IMediaDeathNotifier.cpp, which contains a static member function getMediaPlayerService that obtains the MediaPlayerService remote interface through IServiceManager::getService.

        Before going into IMediaDeathNotifier::getMediaPlayerService, let's first understand what this function aims to do. Readers of the earlier article on how Server and Client obtain the Service Manager interface will recall that when we obtained the Service Manager remote interface, what we ended up with was the IServiceManager interface of a BpServiceManager object. Similarly, obtaining the MediaPlayerService remote interface really means obtaining the IMediaPlayerService interface of a BpMediaPlayerService object. Let's first look at the class diagram of BpMediaPlayerService:


        From this class diagram we can see that BpMediaPlayerService inherits from BpInterface<IMediaPlayerService>, which means it inherits from both the IMediaPlayerService class and the BpRefBase class, each of which in turn inherits from RefBase. BpRefBase has a member variable mRemote of type IBinder, which is in fact a BpBinder object. The BpBinder class uses the IPCThreadState class to interact with the Binder driver, and IPCThreadState has a member variable mProcess of type ProcessState; IPCThreadState relies on ProcessState to open the Binder device file /dev/binder, so it can talk to the Binder driver.

       The BpMediaPlayerService constructor takes a parameter impl of type const sp<IBinder>&, which, as described above, is actually a BpBinder object. So to create a BpMediaPlayerService object, we first need a BpBinder object. Looking at the BpBinder constructor, it takes a parameter handle of type int32_t; this parameter is the requesting process's reference (handle) to the MediaPlayerService Binder entity. Therefore, the problem of obtaining the MediaPlayerService remote interface essentially reduces to obtaining a handle to MediaPlayerService from Service Manager.
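       Schematically, the chain the rest of this article builds up is the following, where handle stands for whatever value Service Manager hands back and interface_cast is the template helper from IInterface.h:

    int32_t handle = ...;                        // looked up from Service Manager
    sp<IBinder> binder = new BpBinder(handle);   // proxy for the remote Binder entity
    sp<IMediaPlayerService> service =
        interface_cast<IMediaPlayerService>(binder); // wraps it in a BpMediaPlayerService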

       Now let's look at the implementation of IMediaDeathNotifier::getMediaPlayerService:

// establish binder interface to MediaPlayerService
/*static*/const sp<IMediaPlayerService>&
IMediaDeathNotifier::getMediaPlayerService()
{
    LOGV("getMediaPlayerService");
    Mutex::Autolock _l(sServiceLock);
    if (sMediaPlayerService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.player"));
            if (binder != 0) {
                break;
             }
             LOGW("Media player service not published, waiting...");
             usleep(500000); // 0.5 s
        } while(true);

        if (sDeathNotifier == NULL) {
        sDeathNotifier = new DeathNotifier();
    }
    binder->linkToDeath(sDeathNotifier);
    sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
    }
    LOGE_IF(sMediaPlayerService == 0, "no media player service!?");
    return sMediaPlayerService;
}
        The function first obtains the Service Manager remote interface through the defaultServiceManager function, which is really the IServiceManager interface of a BpServiceManager object; see the earlier article on how Server and Client obtain the Service Manager interface for details. In short, the statement:

sp<IServiceManager> sm = defaultServiceManager();
        is equivalent to:

sp<IServiceManager> sm = new BpServiceManager(new BpBinder(0)); 
        where 0 means that the handle value of the Service Manager remote interface is 0.

        The while loop that follows keeps trying to obtain the Service named "media.player", i.e. MediaPlayerService, through sm->getService. Why a loop? Because MediaPlayerService may not have started yet at this point; if the binder interface comes back NULL, the code sleeps for 0.5 seconds and tries again. This is the standard way of obtaining a Service interface.
        Let's look at the implementation of BpServiceManager::getService:

class BpServiceManager : public BpInterface<IServiceManager>
{
    ......

	virtual sp<IBinder> getService(const String16& name) const
	{
		unsigned n;
		for (n = 0; n < 5; n++){
			sp<IBinder> svc = checkService(name);
			if (svc != NULL) return svc;
			LOGI("Waiting for service %s...\n", String8(name).string());
			sleep(1);
		}
		return NULL;
	}

	virtual sp<IBinder> checkService( const String16& name) const
	{
		Parcel data, reply;
		data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
		data.writeString16(name);
		remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
		return reply.readStrongBinder();
	}

	......
};
         BpServiceManager::getService does its work through BpServiceManager::checkService.

         In BpServiceManager::checkService, Parcel::writeInterfaceToken is first used to write an RPC header into data; as introduced in the previous article on the Server startup process, this writes an integer and the string "android.os.IServiceManager" into data. Before Service Manager handles the CHECK_SERVICE_TRANSACTION request, it validates this RPC header. Then the string name is written into data, in this case "media.player". Recall from that article that a MediaPlayerService was registered with Service Manager under the name "media.player".

        remote() here returns a BpBinder (see the earlier article on how Server and Client obtain the Service Manager interface), so we proceed into the BpBinder::transact function:

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

        Here mHandle = 0, code = CHECK_SERVICE_TRANSACTION, and flags = 0.

        We then enter the IPCThreadState::transact function:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }
    
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    
    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            LOGI(">>>>>> CALLING transaction 4");
        } else {
            LOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            LOGI("<<<<<< RETURNING transaction 4");
        } else {
            LOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif
        
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    
    return err;
}
         First, writeTransactionData is called to write the data to be transferred into the IPCThreadState member variable mOut:

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }
    
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    
    return NO_ERROR;
}
        The binder_transaction_data structure was introduced in the previous article, Android系统进程间通信(IPC)机制Binder中的Server启动过程源代码分析, and is not repeated here; it describes the parameters being transferred. What matters for this call is the content of tr: handle = 0, code = CHECK_SERVICE_TRANSACTION, cmd = BC_TRANSACTION, and the data payload consists of:

writeInt32(IPCThreadState::self()->getStrictModePolicy() | STRICT_MODE_PENALTY_GATHER);
writeString16("android.os.IServiceManager");
writeString16("media.player");
       These values were written in the BpServiceManager::checkService function. The first two form the RPC header, which the Service Manager validates when it receives the request, as mentioned earlier. IPCThreadState::self()->getStrictModePolicy() returns 0 by default, and STRICT_MODE_PENALTY_GATHER is defined as:

// Note: must be kept in sync with android/os/StrictMode.java's PENALTY_GATHER
#define STRICT_MODE_PENALTY_GATHER 0x100
       The meaning of this flag does not affect the analysis of the source code below, so we will not dwell on it (with the default policy of 0, the strict-mode word written here is simply 0x100); interested readers can look into it. Note that the payload carries no Binder objects, which is why tr.offsets_size = 0. The data to be transferred is finally written into the IPCThreadState member variable mOut, consisting of the two pieces cmd and tr, laid out as sketched below.
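       As a minimal sketch (not the real Parcel code; fake_binder_transaction_data is a simplified stand-in for the kernel structure), the two writes at the end of writeTransactionData produce the following layout in mOut:

#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified stand-in for the kernel's binder_transaction_data.
struct fake_binder_transaction_data {
    uint32_t handle;        // 0 here: the Service Manager
    uint32_t code;          // CHECK_SERVICE_TRANSACTION in this scenario
    uint32_t flags;         // TF_ACCEPT_FDS
    size_t   data_size;     // size of the serialized Parcel data
    size_t   offsets_size;  // 0 here: no Binder objects in the payload
};

// Mirrors mOut.writeInt32(cmd) followed by mOut.write(&tr, sizeof(tr)):
// a 32-bit command word immediately followed by the payload descriptor.
static void append_command(std::vector<uint8_t>& out, uint32_t cmd,
                           const fake_binder_transaction_data& tr) {
    const uint8_t* p = reinterpret_cast<const uint8_t*>(&cmd);
    out.insert(out.end(), p, p + sizeof(cmd));
    p = reinterpret_cast<const uint8_t*>(&tr);
    out.insert(out.end(), p, p + sizeof(tr));
}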

       Back in IPCThreadState::transact, since (flags & TF_ONE_WAY) == 0 is true, meaning this is a synchronous request, and reply != NULL, execution ends up calling:

err = waitForResponse(reply);
       Stepping into the waitForResponse function:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        
        cmd = mIn.readInt32();
        
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        
        case BR_ACQUIRE_RESULT:
            {
                LOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
        
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    
    return err;
}
        This function interacts with the Binder driver through IPCThreadState::talkWithDriver:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
	LOG_ASSERT(mProcess->mDriverFD >= 0, "Binder driver is not opened");

	binder_write_read bwr;

	// Is the read buffer empty?
	const bool needRead = mIn.dataPosition() >= mIn.dataSize();

	// We don't want to write anything if we are still reading
	// from data left in the input buffer and the caller
	// has requested to read the next data.
	const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

	bwr.write_size = outAvail;
	bwr.write_buffer = (long unsigned int)mOut.data();

	// This is what we'll read.
	if (doReceive && needRead) {
		bwr.read_size = mIn.dataCapacity();
		bwr.read_buffer = (long unsigned int)mIn.data();
	} else {
		bwr.read_size = 0;
	}

	......

	// Return immediately if there is nothing to do.
	if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

	bwr.write_consumed = 0;
	bwr.read_consumed = 0;
	status_t err;
	do {
		......
#if defined(HAVE_ANDROID_OS)
		if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
			err = NO_ERROR;
		else
			err = -errno;
#else
		err = INVALID_OPERATION;
#endif
		......
	} while (err == -EINTR);

	......

	if (err >= NO_ERROR) {
		if (bwr.write_consumed > 0) {
			if (bwr.write_consumed < (ssize_t)mOut.dataSize())
				mOut.remove(0, bwr.write_consumed);
			else
				mOut.setDataSize(0);
		}
		if (bwr.read_consumed > 0) {
			mIn.setDataSize(bwr.read_consumed);
			mIn.setDataPosition(0);
		}

		......

		return NO_ERROR;
	}

	return err;
}
        Here needRead is true, so bwr.read_size is greater than 0; outAvail is greater than 0 as well, so bwr.write_size is also greater than 0. The function finally calls:

ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr)
        and thereby enters the binder_ioctl function of the Binder driver. Note that mProcess->mDriverFD here is the file descriptor of the device file /dev/binder, opened earlier when we called defaultServiceManager to obtain the Service Manager remote interface; mProcess is a member variable of IPCThreadState. A minimal user-space sketch of this round trip is given below.
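        As a minimal sketch of this exchange from the user-space side (assuming an already-open /dev/binder descriptor; the structure and ioctl code match the kernel definitions quoted below, and only the EINTR retry of the real talkWithDriver is kept):

#include <cerrno>
#include <cstddef>
#include <sys/ioctl.h>

// Matches the kernel definitions quoted later in this article.
struct binder_write_read {
    signed long   write_size;
    signed long   write_consumed;
    unsigned long write_buffer;
    signed long   read_size;
    signed long   read_consumed;
    unsigned long read_buffer;
};
#define BINDER_WRITE_READ _IOWR('b', 1, struct binder_write_read)

// One write/read exchange with the driver, retried on EINTR just as
// talkWithDriver does.
static int binder_exchange(int fd, void* out, size_t out_size,
                           void* in, size_t in_size) {
    binder_write_read bwr;
    bwr.write_size = static_cast<signed long>(out_size);
    bwr.write_consumed = 0;
    bwr.write_buffer = reinterpret_cast<unsigned long>(out);
    bwr.read_size = static_cast<signed long>(in_size);
    bwr.read_consumed = 0;
    bwr.read_buffer = reinterpret_cast<unsigned long>(in);
    int res;
    do {
        res = ioctl(fd, BINDER_WRITE_READ, &bwr);
    } while (res < 0 && errno == EINTR);
    return res;
}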

        In the driver's binder_ioctl function, we are only concerned with the logic for the BINDER_WRITE_READ command:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	/*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret)
		return ret;

	mutex_lock(&binder_lock);
	thread = binder_get_thread(proc);
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
			proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer);
		if (bwr.write_size > 0) {
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
			proc->pid, thread->pid, bwr.write_consumed, bwr.write_size, bwr.read_consumed, bwr.read_size);
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
							}
	......
	default:
		ret = -EINVAL;
		goto err;
	}
	ret = 0;
err:
	......
	return ret;
}
        The value of filp->private_data was set up when the ProcessState constructor opened the device file /dev/binder (this happened while the defaultServiceManager function was creating the ProcessState object); it represents the context of the process that opened /dev/binder, and it is fetched here into the local variable proc.

        The local variable thread represents the context of the current thread, obtained through binder_get_thread. When the ProcessState constructor executed earlier, it also entered this function through an ioctl call; that was the first trip through binder_ioctl, so at that point the proc variable held no context for the current thread yet, and binder_get_thread created one and stored it in the red-black tree proc->threads. This time, binder_get_thread can simply find it in proc and return it. A user-space analogue of this get-or-create pattern is sketched below.
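        The following is an illustrative sketch only, with std::map standing in for the kernel's red-black tree and thread_ctx for struct binder_thread:

#include <map>

struct thread_ctx { int looper_state = 0; };  // stand-in for binder_thread

// The first call for a given tid inserts a fresh context (the first trip
// through binder_ioctl); every later call finds and returns the same one.
static thread_ctx& get_thread(std::map<int, thread_ctx>& threads, int tid) {
    return threads[tid];  // operator[] creates the entry if it is missing
}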

        Now for the BINDER_WRITE_READ logic. First, look at the definition of BINDER_WRITE_READ:

#define BINDER_WRITE_READ   		_IOWR('b', 1, struct binder_write_read)
        As can be seen here, the argument type of the BINDER_WRITE_READ command is struct binder_write_read:

struct binder_write_read {
	signed long	write_size;	/* bytes to write */
	signed long	write_consumed;	/* bytes consumed by driver */
	unsigned long	write_buffer;
	signed long	read_size;	/* bytes to read */
	signed long	read_consumed;	/* bytes consumed by driver */
	unsigned long	read_buffer;
};
        The meaning of this structure is explained in 浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路. Here copy_from_user first copies the caller's argument into the local variable bwr.
        From the call path above, we know that bwr.write_size is greater than 0, so execution enters the binder_thread_write function, where we only care about the BC_TRANSACTION logic:

int
binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
					void __user *buffer, int size, signed long *consumed)
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		......
		case BC_TRANSACTION:
		case BC_REPLY: {
			struct binder_transaction_data tr;

			if (copy_from_user(&tr, ptr, sizeof(tr)))
				return -EFAULT;
			ptr += sizeof(tr);
			binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
			break;
					   }
		......
		default:
			printk(KERN_ERR "binder: %d:%d unknown command %d\n", proc->pid, thread->pid, cmd);
			return -EINVAL;
		}
		*consumed = ptr - buffer;
	}
	return 0;
}
        Here the user-supplied data is copied once more, into the local variable tr of type struct binder_transaction_data; this is exactly what IPCThreadState::writeTransactionData wrote earlier. Note the shape of the loop: the write buffer is a stream of records, each one a 32-bit BC_ command word followed by a command-specific payload, as the sketch below illustrates.
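        A user-space analogue of that walk (a sketch only; the FAKE_ constants stand in for the real BC_ values from the binder header):

#include <cstddef>
#include <cstdint>
#include <cstring>

enum { FAKE_BC_TRANSACTION = 0, FAKE_BC_REPLY = 1 };

struct fake_txn { uint32_t code; };  // payload stand-in

// Walks a buffer of (32-bit command, payload) records, the same shape
// that binder_thread_write consumes.
static int parse_commands(const uint8_t* buf, size_t size) {
    const uint8_t* ptr = buf;
    const uint8_t* end = buf + size;
    while (ptr + sizeof(uint32_t) <= end) {
        uint32_t cmd;
        memcpy(&cmd, ptr, sizeof(cmd));    // the get_user(cmd, ptr) step
        ptr += sizeof(cmd);
        switch (cmd) {
        case FAKE_BC_TRANSACTION:
        case FAKE_BC_REPLY: {
            fake_txn tr;
            if (ptr + sizeof(tr) > end) return -1;
            memcpy(&tr, ptr, sizeof(tr));  // the copy_from_user(&tr, ...) step
            ptr += sizeof(tr);
            break;
        }
        default:
            return -1;                     // unknown command
        }
    }
    return 0;
}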

        Execution then enters the binder_transaction function; code not relevant here is omitted:

static void
binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
	struct binder_transaction *t;
	struct binder_work *tcomplete;
	size_t *offp, *off_end;
	struct binder_proc *target_proc;
	struct binder_thread *target_thread = NULL;
	struct binder_node *target_node = NULL;
	struct list_head *target_list;
	wait_queue_head_t *target_wait;
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;
	uint32_t return_error;

	.......

	if (reply) {
		......
	} else {
		if (tr->target.handle) {
			......
		} else {
			target_node = binder_context_mgr_node;
			if (target_node == NULL) {
				return_error = BR_DEAD_REPLY;
				goto err_no_context_mgr_node;
			}
		}
		......
		target_proc = target_node->proc;
		if (target_proc == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}
		if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
			......
		}
	}
	if (target_thread) {
		......
	} else {
		target_list = &target_proc->todo;
		target_wait = &target_proc->wait;
	}
	......

	/* TODO: reuse incoming transaction for reply */
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
	binder_stats.obj_created[BINDER_STAT_TRANSACTION]++;

	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}
	binder_stats.obj_created[BINDER_STAT_TRANSACTION_COMPLETE]++;

	t->debug_id = ++binder_last_id;
	
	......


	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;
	else
		t->from = NULL;
	t->sender_euid = proc->tsk->cred->euid;
	t->to_proc = target_proc;
	t->to_thread = target_thread;
	t->code = tr->code;
	t->flags = tr->flags;
	t->priority = task_nice(current);
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);

	offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
		......
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}

	......

	if (reply) {
		......
	} else if (!(t->flags & TF_ONE_WAY)) {
		BUG_ON(t->buffer->async_transaction != 0);
		t->need_reply = 1;
		t->from_parent = thread->transaction_stack;
		thread->transaction_stack = t;
	} else {
		......
	}

	t->work.type = BINDER_WORK_TRANSACTION;
	list_add_tail(&t->work.entry, target_list);
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	list_add_tail(&tcomplete->entry, &thread->todo);
	if (target_wait)
		wake_up_interruptible(target_wait);
	return;

    ......
}
        Note that the parameter reply = 0 here, indicating that this is a BC_TRANSACTION command.
        As mentioned before, the handle value passed to the driver is 0, i.e. tr->target.handle = 0 here, meaning that the target Binder object of the request is the Service Manager, so we get:

target_node = binder_context_mgr_node;
target_proc = target_node->proc;
target_list = &target_proc->todo;
target_wait = &target_proc->wait;

        Here, binder_context_mgr_node was created back when the Service Manager told the Binder driver that it is the context manager of the Binder mechanism.

        Next, a pending work item tcomplete of type struct binder_work is created; it will shortly be placed on the current thread's todo queue, indicating that this thread has an outstanding transaction. Right after it, a pending transaction t of type struct binder_transaction is created; it will be placed on the Service Manager's todo queue, indicating that the Service Manager has a transaction to process. At the same time, the transaction t is also pushed onto the current thread's transaction_stack:

t->from_parent = thread->transaction_stack;
thread->transaction_stack = t;
        This records that the current thread still has a transaction in flight.

        Reading further down, tcomplete and t are placed on the todo queues of the current thread and of the Service Manager process respectively:

t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list);
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);
        Finally, now that the Service Manager has work to do, it must be woken up:

wake_up_interruptible(target_wait);
        As noted before, the Service Manager is waiting for Client requests at this moment; that is, it has entered the binder_thread_read function of the Binder driver and is sleeping on target->wait. See 浅谈Service Manager成为Android进程间通信(IPC)机制Binder守护进程之路 for the details.
        We set the Service Manager's wake-up aside for now and keep following the current thread.
        After binder_transaction completes, execution returns all the way up to binder_ioctl. Back from the binder_thread_write call, binder_ioctl finds that bwr.read_size is greater than 0, and so enters binder_thread_read:

static int
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
				   void  __user *buffer, int size, signed long *consumed, int non_block)
{
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);

	......
	
	if (wait_for_proc_work) {
		......
	} else {
		if (non_block) {
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));
	}

	......

	while (1) {
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;

		if (!list_empty(&thread->todo))
			w = list_first_entry(&thread->todo, struct binder_work, entry);
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		else {
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
				goto retry;
			break;
		}

		if (end - ptr < sizeof(tr) + 4)
			break;

		switch (w->type) {
		......
		case BINDER_WORK_TRANSACTION_COMPLETE: {
			cmd = BR_TRANSACTION_COMPLETE;
			if (put_user(cmd, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);

			binder_stat_br(proc, thread, cmd);
			if (binder_debug_mask & BINDER_DEBUG_TRANSACTION_COMPLETE)
				printk(KERN_INFO "binder: %d:%d BR_TRANSACTION_COMPLETE\n",
				proc->pid, thread->pid);

			list_del(&w->entry);
			kfree(w);
			binder_stats.obj_deleted[BINDER_STAT_TRANSACTION_COMPLETE]++;
											   } break;
		......
		}

		if (!t)
			continue;

		......
	}

done:
	......
	return 0;
}
       The function first writes a BR_NOOP opcode into the user-supplied buffer.

      Recall the binder_transaction function above: at this point thread->transaction_stack != NULL and thread->todo is not empty either, so the thread does not go to sleep.

      Inside the while loop, the pending item w is first taken from the thread->todo queue; its type is BINDER_WORK_TRANSACTION_COMPLETE, which was also set in binder_transaction. Handling it is simple: a BR_TRANSACTION_COMPLETE opcode is written back into the user-supplied buffer. At this point the user buffer contains two opcodes, BR_NOOP and BR_TRANSACTION_COMPLETE.

      Once binder_thread_read finishes, it returns to binder_ioctl, which writes the result of the operation back to user space:

if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
	ret = -EFAULT;
	goto err;
}
       And with that, we are back in IPCThreadState::talkWithDriver.

       After IPCThreadState::talkWithDriver returns from the statement:

ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr)
       it first removes from mOut the data already consumed by the Binder driver:

if (bwr.write_consumed > 0) {
     if (bwr.write_consumed < (ssize_t)mOut.dataSize())
          mOut.remove(0, bwr.write_consumed);
     else
          mOut.setDataSize(0);
}
       and then sets up the data read back from the Binder driver:

if (bwr.read_consumed > 0) {
     mIn.setDataSize(bwr.read_consumed);
     mIn.setDataPosition(0);
}
       Control then returns to IPCThreadState::waitForResponse, whose remaining job is simply to process what was just read from the driver. From the analysis above, the data read back consists of two integers, BR_NOOP and BR_TRANSACTION_COMPLETE. BR_NOOP is handled exactly as its name suggests: nothing is done. The handling of BR_TRANSACTION_COMPLETE splits into two cases: if the request is asynchronous, the whole BC_TRANSACTION operation is now complete; if the request is synchronous, i.e. a reply is expected (reply is not NULL), then the thread must go back into the Binder driver through IPCThreadState::talkWithDriver and wait there for the result of the BC_TRANSACTION operation. A sketch of this opcode consumption follows.
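       A minimal sketch of that consumption (the FAKE_ values stand in for the real BR_ codes; this is not the actual waitForResponse code):

#include <cstdint>
#include <vector>

enum { FAKE_BR_NOOP = 0, FAKE_BR_TRANSACTION_COMPLETE = 1 };

// Returns true when the caller still has to go back into the driver for the
// reply, i.e. the synchronous case this article is tracing.
static bool consume_return_codes(const std::vector<uint32_t>& in, bool sync) {
    for (uint32_t cmd : in) {
        switch (cmd) {
        case FAKE_BR_NOOP:
            break;                    // nothing to do, as the name says
        case FAKE_BR_TRANSACTION_COMPLETE:
            if (!sync) return false;  // one-way: BC_TRANSACTION is finished
            break;                    // synchronous: keep waiting for BR_REPLY
        }
    }
    return sync;
}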

      Ours is the latter case, so IPCThreadState::talkWithDriver is called once more and we re-enter the binder_ioctl function of the Binder driver. This time, however, bwr.write_size equals 0 while bwr.read_size is greater than 0, so binder_thread_read is entered again. Now thread->transaction_stack is still not NULL, but the thread->todo queue has become empty, since its contents were processed just now; the statement:

ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));
      therefore puts the thread to sleep, waiting for the Service Manager to wake it up.

      Now we can finally come back to what happens after the Service Manager is woken up. As said before, the Service Manager is at this moment sleeping inside binder_thread_read:

static int
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
				   void  __user *buffer, int size, signed long *consumed, int non_block)
{
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);

	......

	if (wait_for_proc_work) {
		......
		if (non_block) {
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_interruptible_exclusive(proc->wait, binder_has_proc_work(proc, thread));
	} else {
		......
	}
	
	......

	while (1) {
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;

		if (!list_empty(&thread->todo))
			w = list_first_entry(&thread->todo, struct binder_work, entry);
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		else {
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
				goto retry;
			break;
		}

		if (end - ptr < sizeof(tr) + 4)
			break;

		switch (w->type) {
		case BINDER_WORK_TRANSACTION: {
			t = container_of(w, struct binder_transaction, work);
									  } break;
		......
		}

		if (!t)
			continue;

		BUG_ON(t->buffer == NULL);
		if (t->buffer->target_node) {
			struct binder_node *target_node = t->buffer->target_node;
			tr.target.ptr = target_node->ptr;
			tr.cookie =  target_node->cookie;
			t->saved_priority = task_nice(current);
			if (t->priority < target_node->min_priority &&
				!(t->flags & TF_ONE_WAY))
				binder_set_nice(t->priority);
			else if (!(t->flags & TF_ONE_WAY) ||
				t->saved_priority > target_node->min_priority)
				binder_set_nice(target_node->min_priority);
			cmd = BR_TRANSACTION;
		} else {
			......
		}
		tr.code = t->code;
		tr.flags = t->flags;
		tr.sender_euid = t->sender_euid;

		if (t->from) {
			struct task_struct *sender = t->from->proc->tsk;
			tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns);
		} else {
			......
		}

		tr.data_size = t->buffer->data_size;
		tr.offsets_size = t->buffer->offsets_size;
		tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
		tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));

		if (put_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (copy_to_user(ptr, &tr, sizeof(tr)))
			return -EFAULT;
		ptr += sizeof(tr);

		......

		list_del(&t->work.entry);
		t->buffer->allow_user_free = 1;
		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
			t->to_parent = thread->transaction_stack;
			t->to_thread = thread;
			thread->transaction_stack = t;
		} else {
			......
		}
		break;
	}

done:

	*consumed = ptr - buffer;
	......
	return 0;
}
        This is exactly where it wakes up, at the statement:

ret = wait_event_interruptible_exclusive(proc->wait, binder_has_proc_work(proc, thread));
        Once awake, the Service Manager continues execution and enters the while loop. It first takes the pending item w from proc->todo. The type of this item w is BINDER_WORK_TRANSACTION, set during the binder_transaction call above, so the pending transaction t is recovered from w:

t = container_of(w, struct binder_transaction, work);
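        container_of recovers a pointer to the enclosing structure from a pointer to one of its members. A user-space equivalent, with simplified stand-in types:

#include <cstddef>

struct fake_binder_work { int type; };
struct fake_binder_transaction {
    int debug_id;
    fake_binder_work work;  // embedded member, as in the kernel struct
};

// The same pointer arithmetic that container_of performs.
static fake_binder_transaction* transaction_from_work(fake_binder_work* w) {
    return reinterpret_cast<fake_binder_transaction*>(
        reinterpret_cast<char*>(w) -
        offsetof(fake_binder_transaction, work));
}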
        The code that follows copies cmd and the contents of t->buffer out to the user-supplied buffer, which in this case is the buffer the Service Manager passed in from user space:

if (put_user(cmd, (uint32_t __user *)ptr))
	return -EFAULT;
ptr += sizeof(uint32_t);
if (copy_to_user(ptr, &tr, sizeof(tr)))
	return -EFAULT;
ptr += sizeof(tr);
        Note that the description of t->buffer is first copied into the local variable tr and only then copied out to the user-space buffer. For how the contents of t->buffer were filled in, refer to Android系统进程间通信(IPC)机制Binder中的Server启动过程源代码分析; the key point is that the Binder driver and the Service Manager daemon share the same physical memory, so what is copied back is merely the user-space virtual address of that memory (see the sketch after the snippet):

tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));
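        Schematically, and with simplified types rather than the real kernel code, the "copy" amounts to nothing more than this address translation:

#include <cstddef>
#include <cstdint>

// kernel_data:        where the payload lives in the kernel mapping
// user_buffer_offset: the fixed difference, recorded at mmap time, between
//                     the target process's user-space mapping and the kernel
//                     mapping of the same pages
static uintptr_t to_user_address(uintptr_t kernel_data,
                                 ptrdiff_t user_buffer_offset) {
    return kernel_data + user_buffer_offset;  // same physical page, new view
}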
       As far as this driver operation is concerned, the work item has now been handled, so it is removed from the todo queue:

list_del(&t->work.entry);
       The transaction itself, however, cannot be deleted yet, because it still has to be processed further once the Service Manager is done with it; it is therefore kept on the thread->transaction_stack list:

t->to_parent = thread->transaction_stack;
t->to_thread = thread;
thread->transaction_stack = t;
       One more thing to note: the cmd written above is BR_TRANSACTION, which tells the Service Manager daemon what it has to do; we will see the corresponding handling shortly.

       With that, binder_thread_read is done. Back in binder_ioctl, the result of the operation is again written back to the user-space buffer:

if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
    ret = -EFAULT;
    goto err;
}
       Finally, execution returns to the binder_loop function in the file frameworks/base/cmds/servicemanager/binder.c:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;
    
    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            LOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
        This is where it has just returned from the statement:

res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        It then enters the binder_parse function to process the data read out of the Binder driver:

int binder_parse(struct binder_state *bs, struct binder_io *bio,
				 uint32_t *ptr, uint32_t size, binder_handler func)
{
	int r = 1;
	uint32_t *end = ptr + (size / 4);

	while (ptr < end) {
		uint32_t cmd = *ptr++;
		switch(cmd) {
		......
		case BR_TRANSACTION: {
			struct binder_txn *txn = (void *) ptr;
			......
			if (func) {
				unsigned rdata[256/4];
				struct binder_io msg;
				struct binder_io reply;
				int res;

				bio_init(&reply, rdata, sizeof(rdata), 4);
				bio_init_from_txn(&msg, txn);
				res = func(bs, txn, &msg, &reply);
				binder_send_reply(bs, &reply, txn->data, res);
			}
			ptr += sizeof(*txn) / sizeof(uint32_t);
			break;
							 }
		......
		default:
			LOGE("parse: OOPS %d\n", cmd);
			return -1;
		}
	}

	return r;
}
         As stated above, the cmd that the Binder driver wrote into the user-space buffer is BR_TRANSACTION, so here we only care about the logic for BR_TRANSACTION.

         The two data structures used here, struct binder_txn and struct binder_io, are described in the previous article, Android系统进程间通信(IPC)机制Binder中的Server启动过程源代码分析; for convenience, binder_txn is reproduced below.
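         The definition below is quoted for convenience from frameworks/base/cmds/servicemanager/binder.h of the same source tree (with an include added for self-containment); field for field it mirrors the kernel's binder_transaction_data:

#include <stdint.h>   // added here for self-containment

struct binder_txn
{
    void *target;
    void *cookie;
    uint32_t code;
    uint32_t flags;

    uint32_t sender_pid;
    uint32_t sender_euid;

    uint32_t data_size;
    uint32_t offs_size;
    void *data;
    void *offs;
};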

         Reading on, the function calls bio_init to initialize the reply variable:

void bio_init(struct binder_io *bio, void *data,
              uint32_t maxdata, uint32_t maxoffs)
{
    uint32_t n = maxoffs * sizeof(uint32_t);

    if (n > maxdata) {
        bio->flags = BIO_F_OVERFLOW;
        bio->data_avail = 0;
        bio->offs_avail = 0;
        return;
    }

    bio->data = bio->data0 = data + n;
    bio->offs = bio->offs0 = data;
    bio->data_avail = maxdata - n;
    bio->offs_avail = maxoffs;
    bio->flags = 0;
}
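        As a worked example of the call above, bio_init(&reply, rdata, sizeof(rdata), 4) splits the 256-byte rdata buffer into a 16-byte offsets area followed by a 240-byte data area:

#include <cassert>
#include <cstdint>

int main() {
    unsigned rdata[256 / 4];
    const uint32_t maxdata = sizeof(rdata);         // 256 bytes in total
    const uint32_t maxoffs = 4;
    const uint32_t n = maxoffs * sizeof(uint32_t);  // offsets area: 16 bytes
    assert(n <= maxdata);                           // so no BIO_F_OVERFLOW
    // offs0 = rdata, data0 = (char *)rdata + n:
    assert(maxdata - n == 240);                     // 240 bytes left for data
    (void)rdata;
    return 0;
}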
        It then calls bio_init_from_txn to initialize the msg variable:

void bio_init_from_txn(struct binder_io *bio, struct binder_txn *txn)
{
    bio->data = bio->data0 = txn->data;
    bio->offs = bio->offs0 = txn->offs;
    bio->data_avail = txn->data_size;
    bio->offs_avail = txn->offs_size / 4;
    bio->flags = BIO_F_SHARED;
}
       Finally, the real processing is done by the function pointer func passed in as a parameter, which here is the svcmgr_handler function defined in frameworks/base/cmds/servicemanager/service_manager.c. Note from the two initializations above that reply is backed by the local rdata array, while msg is marked BIO_F_SHARED because it points straight into the buffer shared with the kernel:

int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;

//    LOGI("target=%p code=%d pid=%d uid=%d\n",
//         txn->target, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target != svcmgr_handle)
        return -1;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s));
        return -1;
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = do_find_service(bs, s, len);
        if (!ptr)
            break;
        bio_put_ref(reply, ptr);
        return 0;

    ......

    default:
        LOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
        这里, Service Manager要处理的code是SVC_MGR_CHECK_SERVICE,这是在前面的BpServiceManager::checkService函数里面设置的。

        回忆一下,在BpServiceManager::checkService时,传给Binder驱动程序的参数为:

writeInt32(IPCThreadState::self()->getStrictModePolicy() | STRICT_MODE_PENALTY_GATHER);  
writeString16("android.os.IServiceManager");  
writeString16("media.player");  
       这里的语句:

strict_policy = bio_get_uint32(msg);  
s = bio_get_string16(msg, &len);  
s = bio_get_string16(msg, &len); 
       这几条语句会验证一下传进来的第二个参数,即"android.os.IServiceManager"是否正确,这是在验证RPC头,代码中的注释已经说得很清楚了。

        最后,就是调用do_find_service函数查找是否存在名称为"media.player"的服务了。回忆一下前面一篇文章Android系统进程间通信(IPC)机制Binder中的Server启动过程源代码分析,MediaPlayerService已经把一个名称为"media.player"的服务注册到Service Manager中,所以这里一定能找到。我们看看do_find_service这个函数:

void *do_find_service(struct binder_state *bs, uint16_t *s, unsigned len)
{
    struct svcinfo *si;
    si = find_svc(s, len);

//    LOGI("check_service('%s') ptr = %p\n", str8(s), si ? si->ptr : 0);
    if (si && si->ptr) {
        return si->ptr;
    } else {
        return 0;
    }
}
       这里又调用了find_svc函数:

struct svcinfo *find_svc(uint16_t *s16, unsigned len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return 0;
}
       就是在svclist列表中查找对应名称的svcinfo了。
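        这里的svclist是一个由svcinfo结构体串成的单向链表,每个节点记录了一个已注册服务的名称和它对应的句柄值。作为参考,svcinfo的定义大致如下(摘自frameworks/base/cmds/servicemanager/service_manager.c,不同版本的字段可能略有出入):

struct svcinfo
{
    struct svcinfo *next;       /* 链表中的下一个服务 */
    void *ptr;                  /* 该服务对应的Binder引用的句柄值 */
    struct binder_death death;  /* 用于接收服务的死亡通知 */
    unsigned len;               /* 服务名称的长度(UTF-16字符个数) */
    uint16_t name[0];           /* 服务名称(UTF-16编码) */
};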

       然后返回到do_find_service函数中。回忆一下前面一篇文章Android系统进程间通信(IPC)机制Binder中的Server启动过程源代码分析,这里的si->ptr就是指MediaPlayerService这个Binder实体在Service Manager进程中的句柄值了。

       回到svcmgr_handler函数中,调用bio_put_ref函数将这个Binder引用写回到reply参数。我们看看bio_put_ref的实现:

void bio_put_ref(struct binder_io *bio, void *ptr)
{
    struct binder_object *obj;

    if (ptr)
        obj = bio_alloc_obj(bio);
    else
        obj = bio_alloc(bio, sizeof(*obj));

    if (!obj)
        return;

    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_HANDLE;
    obj->pointer = ptr;
    obj->cookie = 0;
}
        这里很简单,就是把一个类型为BINDER_TYPE_HANDLE的binder_object写入到reply缓冲区中去。这里的binder_object就相当于是flat_binder_object了,具体可以参考 Android系统进程间通信(IPC)机制Binder中的Server启动过程源代码分析一文。
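        作为参考,binder_object的定义大致如下(摘自frameworks/base/cmds/servicemanager/binder.h,它和内核中的flat_binder_object在内存布局上是一一对应的,不同版本的定义可能略有出入):

struct binder_object
{
    uint32_t type;   /* 这里被设置为BINDER_TYPE_HANDLE */
    uint32_t flags;  /* 这里被设置为0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS */
    void *pointer;   /* 对于BINDER_TYPE_HANDLE类型,存放的就是句柄值 */
    void *cookie;
};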

        再回到svcmgr_handler函数中,最后,还写入一个0值到reply缓冲区中,表示操作结果码:

bio_put_uint32(reply, 0);
        最后返回到binder_parse函数中,调用binder_send_reply函数将操作结果反馈给Binder驱动程序:

void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       void *buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        void *buffer;
        uint32_t cmd_reply;
        struct binder_txn txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;
    data.txn.target = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offs_size = 0;
        data.txn.data = &status;
        data.txn.offs = 0;
    } else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offs_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data = reply->data0;
        data.txn.offs = reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));
}
        注意,这里的status参数为0。从这里可以看出,binder_send_reply告诉Binder驱动程序执行BC_FREE_BUFFER和BC_REPLY两个命令:前者用来释放之前在binder_transaction中分配的空间,地址为buffer_to_free,这个地址是Binder驱动程序把自己在内核空间使用的地址转换成用户空间地址后再传给Service Manager的,所以Binder驱动程序拿到这个地址后,知道怎样释放这个空间;后者用来告诉Binder驱动程序,SVC_MGR_CHECK_SERVICE操作已经完成了,要查询的服务的句柄值和操作结果码0都保存在data.txn所描述的缓冲区中。
        再来看binder_write函数:

int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    struct binder_write_read bwr;
    int res;
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
        这里可以看出,只有写操作,没有读操作,即read_size为0。
        这里又是一个ioctl的BINDER_WRITE_READ操作。进入到驱动程序的binder_ioctl函数后,执行的就是BINDER_WRITE_READ命令对应的逻辑。
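        为方便对照,下面给出binder_ioctl中处理BINDER_WRITE_READ命令的核心逻辑(节选,有删减):

case BINDER_WRITE_READ: {
	struct binder_write_read bwr;
	......
	if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
		ret = -EFAULT;
		goto err;
	}
	......
	if (bwr.write_size > 0) {
		ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
		......
	}
	if (bwr.read_size > 0) {
		ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
		......
	}
	......
	if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
		ret = -EFAULT;
		goto err;
	}
	break;
}
        可以看到,由于这里的read_size为0,接下来只会执行binder_thread_write,而不会执行binder_thread_read。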
        最后,从binder_ioctl执行到binder_thread_write函数,首先是执行BC_FREE_BUFFER命令,这个命令的执行在前面一篇文章 Android系统进程间通信(IPC)机制Binder中的Server启动过程源代码分析已经介绍过了,这里就不再赘述了。

        我们重点关注BC_REPLY命令的执行:

int  
binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,  
                    void __user *buffer, int size, signed long *consumed)  
{  
    uint32_t cmd;  
    void __user *ptr = buffer + *consumed;  
    void __user *end = buffer + size;  
  
    while (ptr < end && thread->return_error == BR_OK) {  
        if (get_user(cmd, (uint32_t __user *)ptr))  
            return -EFAULT;  
        ptr += sizeof(uint32_t);  
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {  
            binder_stats.bc[_IOC_NR(cmd)]++;  
            proc->stats.bc[_IOC_NR(cmd)]++;  
            thread->stats.bc[_IOC_NR(cmd)]++;  
        }  
        switch (cmd) {  
        ......  
        case BC_TRANSACTION:  
        case BC_REPLY: {  
            struct binder_transaction_data tr;  
  
            if (copy_from_user(&tr, ptr, sizeof(tr)))  
                return -EFAULT;  
            ptr += sizeof(tr);  
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);  
            break;  
                       }  
  
        ......
        }
        *consumed = ptr - buffer;
    }  
    return 0;  
} 
        又再次进入到binder_transaction函数:

static void
binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
	struct binder_transaction *t;
	struct binder_work *tcomplete;
	size_t *offp, *off_end;
	struct binder_proc *target_proc;
	struct binder_thread *target_thread = NULL;
	struct binder_node *target_node = NULL;
	struct list_head *target_list;
	wait_queue_head_t *target_wait;
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;
	uint32_t return_error;

	......

	if (reply) {
		in_reply_to = thread->transaction_stack;
		if (in_reply_to == NULL) {
			......
			return_error = BR_FAILED_REPLY;
			goto err_empty_call_stack;
		}
		......
		thread->transaction_stack = in_reply_to->to_parent;
		target_thread = in_reply_to->from;
		......
		target_proc = target_thread->proc;
	} else {
		......
	}
	if (target_thread) {
		e->to_thread = target_thread->pid;
		target_list = &target_thread->todo;
		target_wait = &target_thread->wait;
	} else {
		......
	}
	

	/* TODO: reuse incoming transaction for reply */
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
	binder_stats.obj_created[BINDER_STAT_TRANSACTION]++;

	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}
	......

	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;
	else
		t->from = NULL;
	t->sender_euid = proc->tsk->cred->euid;
	t->to_proc = target_proc;
	t->to_thread = target_thread;
	t->code = tr->code;
	t->flags = tr->flags;
	t->priority = task_nice(current);
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);

	offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
		binder_user_error("binder: %d:%d got transaction with invalid "
			"data ptr\n", proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
		binder_user_error("binder: %d:%d got transaction with invalid "
			"offsets ptr\n", proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	......

	off_end = (void *)offp + tr->offsets_size;
	for (; offp < off_end; offp++) {
		struct flat_binder_object *fp;
		......
		fp = (struct flat_binder_object *)(t->buffer->data + *offp);
		switch (fp->type) {
		......
		case BINDER_TYPE_HANDLE:
		case BINDER_TYPE_WEAK_HANDLE: {
			struct binder_ref *ref = binder_get_ref(proc, fp->handle);
			if (ref == NULL) {
				......
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_failed;
			}
			if (ref->node->proc == target_proc) {
				......
			} else {
				struct binder_ref *new_ref;
				new_ref = binder_get_ref_for_node(target_proc, ref->node);
				if (new_ref == NULL) {
					return_error = BR_FAILED_REPLY;
					goto err_binder_get_ref_for_node_failed;
				}
				fp->handle = new_ref->desc;
				binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
				......
			}
		} break;

		......
		}
	}

	if (reply) {
		BUG_ON(t->buffer->async_transaction != 0);
		binder_pop_transaction(target_thread, in_reply_to);
	} else if (!(t->flags & TF_ONE_WAY)) {
		......
	} else {
		......
	}

	t->work.type = BINDER_WORK_TRANSACTION;
	list_add_tail(&t->work.entry, target_list);
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	list_add_tail(&tcomplete->entry, &thread->todo);
	if (target_wait)
		wake_up_interruptible(target_wait);
	return;

    ......
}
        这次进入binder_transaction函数的情形和上面介绍的binder_transaction函数的情形基本一致,只是这里的proc、thread和target_proc、target_thread调换了角色,这里的proc和thread指的是Service Manager进程,而target_proc和target_thread指的是刚才请求SVC_MGR_CHECK_SERVICE的进程。

        那么,这次是如何找到target_proc和target_thread呢?首先,我们注意到,这里的reply等于1,其次,上面我们提到,Binder驱动程序在唤醒Service Manager,告诉它有一个事务t要处理时,事务t虽然从Service Manager的todo队列中删除了,但是仍然保留在transaction_stack中。因此,这里可以从thread->transaction_stack找回这个等待回复的事务t,然后通过它找回target_proc和target_thread:

in_reply_to = thread->transaction_stack;
target_thread = in_reply_to->from;
target_list = &target_thread->todo;
target_wait = &target_thread->wait;
       再接着往下看,由于Service Manager返回来了一个Binder引用,所以这里要处理一下,也就是中间的for循环了。这是一个BINDER_TYPE_HANDLE类型的Binder引用,是前面设置的。先把t->buffer->data中相应偏移处的内容转换为一个struct flat_binder_object对象fp,这里的fp->handle值就是这个Service在Service Manager进程里面的引用值了。接着通过调用binder_get_ref函数得到一个struct binder_ref类型的Binder引用对象ref:

struct binder_ref *ref = binder_get_ref(proc, fp->handle);
       这里一定能找到,因为前面MediaPlayerService执行IServiceManager::addService把自己注册到Service Manager的时候,Binder驱动程序就已经在Service Manager进程中创建了这个Binder引用,并且把这个Binder引用的句柄值返回给了Service Manager用户空间。

       这里面的ref->node->proc不等于target_proc,因为这个Binder实体是属于创建MediaPlayerService的进程的,而不是请求这个服务的远程接口的进程的,因此,这里调用binder_get_ref_for_node函数为这个Binder实体在target_proc创建一个引用:

struct binder_ref *new_ref;
new_ref = binder_get_ref_for_node(target_proc, ref->node);
       然后增加引用计数:

binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
      这样,返回数据中的Binder对象就处理完成了。注意,这里会把fp->handle的值改为在target_proc中的引用值:

fp->handle = new_ref->desc;
     这里就相当于是把t->buffer->data里面的Binder对象的句柄值改写了。因为这是在另外一个不同的进程里面的Binder引用,所以句柄值当然要用新的了。这个值最终是要拷贝回target_proc进程的用户空间去的。
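      binder_get_ref_for_node在为target_proc创建新的Binder引用时,会为它分配一个在target_proc内唯一的句柄值desc,其核心逻辑如下(节选,有删减):

static struct binder_ref *
binder_get_ref_for_node(struct binder_proc *proc, struct binder_node *node)
{
	struct rb_node *n;
	struct binder_ref *ref, *new_ref;
	......
	new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
	......
	new_ref->proc = proc;
	new_ref->node = node;
	......
	/* 句柄值0保留给Service Manager对应的Binder实体,其余的从1开始,
	 * 沿着按desc排序的红黑树找出当前未被使用的最小句柄值 */
	new_ref->desc = (node == binder_context_mgr_node) ? 0 : 1;
	for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
		ref = rb_entry(n, struct binder_ref, rb_node_desc);
		if (ref->desc > new_ref->desc)
			break;
		new_ref->desc = ref->desc + 1;
	}
	......
	return new_ref;
}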

      再往下看:

if (reply) {
     BUG_ON(t->buffer->async_transaction != 0);
     binder_pop_transaction(target_thread, in_reply_to);
} else if (!(t->flags & TF_ONE_WAY)) {
     ......
} else {
     ......
}
       这里reply等于1,执行binder_pop_transaction函数把当前事务in_reply_to从target_thread->transaction_stack队列中删掉,这是上次调用binder_transaction函数的时候设置的,现在不需要了,所以把它删掉。

       再往后的逻辑就跟前面执行binder_transaction函数时候一样了,这里不再介绍。最后的结果就是唤醒请求SVC_MGR_CHECK_SERVICE操作的线程:

if (target_wait)
     wake_up_interruptible(target_wait);
       这样,Service Manager回复SVC_MGR_CHECK_SERVICE请求的工作就算完成了,它重新回到frameworks/base/cmds/servicemanager/binder.c文件中的binder_loop函数,等待下一个Client请求的到来。事实上,Service Manager回到binder_loop函数再次执行ioctl函数的时候,又会再次进入到binder_thread_read函数。这时候会发现thread->todo不为空,这是因为刚才我们调用了:
list_add_tail(&tcomplete->entry, &thread->todo);
       把一个工作项tcomplete放在了thread->todo中,这个tcomplete的type为BINDER_WORK_TRANSACTION_COMPLETE,因此,Binder驱动程序会执行下面的操作:

switch (w->type) {  
case BINDER_WORK_TRANSACTION_COMPLETE: {  
    cmd = BR_TRANSACTION_COMPLETE;  
    if (put_user(cmd, (uint32_t __user *)ptr))  
        return -EFAULT;  
    ptr += sizeof(uint32_t);  
  
    list_del(&w->entry);  
    kfree(w);  
      
    } break;  
    ......  
}  
        binder_loop函数执行完这个ioctl调用后,在下一次调用ioctl进入Binder驱动程序时,才会进入休眠状态,等待下一次Client请求的到来。
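        作为参照,binder_loop的主体结构如下(节选自frameworks/base/cmds/servicemanager/binder.c,有删减),可以看到它就是在一个无限循环中不断地通过ioctl读取请求,再交给binder_parse处理:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        ......
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        ......
    }
}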
      上面讲到调用请求SVC_MGR_CHECK_SERVICE操作的线程被唤醒了,于是,重新执行binder_thread_read函数:
static int  
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,  
                   void  __user *buffer, int size, signed long *consumed, int non_block)  
{  
    void __user *ptr = buffer + *consumed;  
    void __user *end = buffer + size;  
  
    int ret = 0;  
    int wait_for_proc_work;  
  
    if (*consumed == 0) {  
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))  
            return -EFAULT;  
        ptr += sizeof(uint32_t);  
    }  
  
retry:  
    wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);  
  
    ......  
  
    if (wait_for_proc_work) {  
        ......  
    } else {  
        if (non_block) {  
            if (!binder_has_thread_work(thread))  
                ret = -EAGAIN;  
        } else  
            ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));  
    }  
      
    ......  
  
    while (1) {  
        uint32_t cmd;  
        struct binder_transaction_data tr;  
        struct binder_work *w;  
        struct binder_transaction *t = NULL;  
  
        if (!list_empty(&thread->todo))  
            w = list_first_entry(&thread->todo, struct binder_work, entry);  
        else if (!list_empty(&proc->todo) && wait_for_proc_work)  
            w = list_first_entry(&proc->todo, struct binder_work, entry);  
        else {  
            if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */  
                goto retry;  
            break;  
        }  
  
        ......  
  
        switch (w->type) {  
        case BINDER_WORK_TRANSACTION: {  
            t = container_of(w, struct binder_transaction, work);  
                                      } break;  
        ......  
        }  
  
        if (!t)  
            continue;  
  
        BUG_ON(t->buffer == NULL);  
        if (t->buffer->target_node) {  
            ......  
        } else {  
            tr.target.ptr = NULL;  
            tr.cookie = NULL;  
            cmd = BR_REPLY;  
        }  
        tr.code = t->code;  
        tr.flags = t->flags;  
        tr.sender_euid = t->sender_euid;  
  
        if (t->from) {  
            ......  
        } else {  
            tr.sender_pid = 0;  
        }  
  
        tr.data_size = t->buffer->data_size;  
        tr.offsets_size = t->buffer->offsets_size;  
        tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;  
        tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));  
  
        if (put_user(cmd, (uint32_t __user *)ptr))  
            return -EFAULT;  
        ptr += sizeof(uint32_t);  
        if (copy_to_user(ptr, &tr, sizeof(tr)))  
            return -EFAULT;  
        ptr += sizeof(tr);  
  
        ......  
  
        list_del(&t->work.entry);  
        t->buffer->allow_user_free = 1;  
        if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {  
            ......  
        } else {  
            t->buffer->transaction = NULL;  
            kfree(t);  
            binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++;  
        }  
        break;  
    }  
  
done:  
    ......  
    return 0;  
}  
        就是从下面这个调用:

ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));
       被唤醒过来了。在while循环中,从thread->todo得到w,w->type为BINDER_WORK_TRANSACTION,于是得到t。从上面可以知道,Service Manager返回了一个Binder引用和一个结果码0,它们写在t->buffer->data里面。现在把t->buffer->data加上proc->user_buffer_offset,得到对应的用户空间地址,保存在tr.data.ptr.buffer里面,这样用户空间就可以访问这些数据了。由于cmd不等于BR_TRANSACTION,这时就可以把t删除掉了,因为以后都不需要再用了。
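       这里之所以能用一个简单的加法就从内核地址得到对应的用户空间地址,是因为Binder驱动程序在binder_mmap中把同一块物理内存同时映射到了内核空间和目标进程的用户空间,两个映射起始地址之间的固定差值就记录在proc->user_buffer_offset中,大致如下(示意):

/* binder_mmap中记录下两个映射起始地址的固定差值: */
proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;

/* 于是,对缓冲区内的任意一个内核地址kaddr,对应的用户空间地址就是:
 *     uaddr = kaddr + proc->user_buffer_offset;
 */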
       执行完这个函数后,就返回到binder_ioctl函数,执行下面语句,把数据返回给用户空间:

if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {  
    ret = -EFAULT;  
    goto err;  
}  
       接着返回到用户空间IPCThreadState::talkWithDriver函数,最后返回到IPCThreadState::waitForResponse函数,最终执行到下面语句:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)  
{  
    int32_t cmd;  
    int32_t err;  
  
    while (1) {  
        if ((err=talkWithDriver()) < NO_ERROR) break;  
          
        ......  
  
        cmd = mIn.readInt32();  
  
        ......  
  
        switch (cmd) {  
        ......  
        case BR_REPLY:  
            {  
                binder_transaction_data tr;  
                err = mIn.read(&tr, sizeof(tr));  
                LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");  
                if (err != NO_ERROR) goto finish;  
  
                if (reply) {  
                    if ((tr.flags & TF_STATUS_CODE) == 0) {  
                        reply->ipcSetDataReference(  
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),  
                            tr.data_size,  
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),  
                            tr.offsets_size/sizeof(size_t),  
                            freeBuffer, this);  
                    } else {  
                        ......
                    }  
                } else {  
                    ...... 
                }  
            }  
            goto finish;  
  
        ......  
        }  
    }  
  
finish:  
    ......  
    return err;  
}  
       注意,这里的tr.flags等于0,这个是在上面的binder_send_reply函数里设置的。接着就把结果保存在reply了:

reply->ipcSetDataReference(  
       reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),  
       tr.data_size,  
       reinterpret_cast<const size_t*>(tr.data.ptr.offsets),  
       tr.offsets_size/sizeof(size_t),  
       freeBuffer, this);  
       我们简单看一下Parcel::ipcSetDataReference函数的实现:

void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
    const size_t* objects, size_t objectsCount, release_func relFunc, void* relCookie)
{
    freeDataNoInit();
    mError = NO_ERROR;
    mData = const_cast<uint8_t*>(data);
    mDataSize = mDataCapacity = dataSize;
    //LOGI("setDataReference Setting data size of %p to %lu (pid=%d)\n", this, mDataSize, getpid());
    mDataPos = 0;
    LOGV("setDataReference Setting data pos of %p to %d\n", this, mDataPos);
    mObjects = const_cast<size_t*>(objects);
    mObjectsSize = mObjectsCapacity = objectsCount;
    mNextObjectHint = 0;
    mOwner = relFunc;
    mOwnerCookie = relCookie;
    scanForFds();
}
        上面提到,返回来的数据中有一个Binder引用,因此,这里的mObjectsSize等于1,这个Binder引用对应的位置记录在mObjects成员变量中。

        从这里层层返回,最后回到BpServiceManager::checkService函数中:

virtual sp<IBinder> BpServiceManager::checkService( const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}
        这里就是从:

remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        返回来了。我们接着看一下reply.readStrongBinder函数的实现:

sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}
        这里调用了unflatten_binder函数来构造一个Binder对象:

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);
    
    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = static_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }        
    }
    return BAD_TYPE;
}
        这里的flat->type是BINDER_TYPE_HANDLE,因此调用ProcessState::getStrongProxyForHandle函数:

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle); 
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
       这里我们可以看到,ProcessState会把使用过的Binder远程接口(BpBinder)缓存起来,这样下次从Service Manager那里请求得到相同的句柄(Handle)时就可以直接返回这个Binder远程接口了,不用再创建一个出来。这里是第一次使用,因此,e->binder为空,于是创建了一个BpBinder对象:

b = new BpBinder(handle); 
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
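       也就是说,在同一个进程内,对同一个句柄值先后调用getStrongProxyForHandle,得到的会是同一个BpBinder对象。下面是一段示意代码(假设handle就是前面查询得到的句柄值):

sp<IBinder> b1 = ProcessState::self()->getStrongProxyForHandle(handle);
sp<IBinder> b2 = ProcessState::self()->getStrongProxyForHandle(handle);
// 此时b1.get() == b2.get():第二次调用通过lookupHandleLocked命中了
// 已有的handle_entry,直接复用其中的BpBinder,不会再new一个新的出来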
       最后,函数返回到IMediaDeathNotifier::getMediaPlayerService这里,从这个语句返回:

binder = sm->getService(String16("media.player"));
        这里,就相当于是:

binder = new BpBinder(handle);
        最后,函数调用:

sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
        到了这里,我们可以参考一下前面一篇文章 浅谈Android系统进程间通信(IPC)机制Binder中的Server和Client获得Service Manager,就会知道,这里的interface_cast实际上最终调用了IMediaPlayerService::asInterface函数:

android::sp<IMediaPlayerService> IMediaPlayerService::asInterface(const android::sp<android::IBinder>& obj)
{
	android::sp<IMediaPlayerService> intr;
	if (obj != NULL) {             
		intr = static_cast<IMediaPlayerService*>( 
			obj->queryLocalInterface(IMediaPlayerService::descriptor).get());
		if (intr == NULL) {
			intr = new BpMediaPlayerService(obj);
		}
	}
	return intr; 
}
        这里的obj就是BpBinder,而BpBinder::queryLocalInterface返回NULL,因此就创建了一个BpMediaPlayerService对象:

intr = new BpMediaPlayerService(new BpBinder(handle));
        因此,我们最终就得到了一个BpMediaPlayerService对象,达到我们最初的目标。

        有了这个BpMediaPlayerService这个远程接口之后,MediaPlayer就可以调用MediaPlayerService的服务了。

        至此,Android系统进程间通信(IPC)机制Binder中的Client如何通过Service Manager的getService函数获得Server远程接口的过程就分析完了,Binder机制的学习就暂告一段落了。

        不过,细心的读者可能会发现,我们这里介绍的Binder机制都是基于C/C++语言实现的,但是我们在编写应用程序都是基于Java语言的,那么,我们如何使用Java语言来使用系统的Binder机制来进行进程间通信呢?这就是下一篇文章要介绍的内容了,敬请关注。

Android系统进程间通信Binder机制在应用程序框架层的Java接口源代码分析


       在前面几篇文章中,我们详细介绍了Android系统进程间通信机制Binder的原理,并且深入分析了系统提供的Binder运行库和驱动程序的源代码。细心的读者会发现,这几篇文章分析的Binder接口都是基于C/C++语言来实现的,但是我们在编写应用程序都是基于Java语言的,那么,我们如何使用Java语言来使用系统的Binder机制来进行进程间通信呢?这就是本文要介绍的Android系统应用程序框架层的用Java语言来实现的Binder接口了。

       熟悉Android系统的读者,应该能想到应用程序框架中的基于Java语言的Binder接口是通过JNI来调用基于C/C++语言的Binder运行库来为Java应用程序提供进程间通信服务的了。JNI在Android系统中用得相当普遍,SDK中的Java接口API很多只是简单地通过JNI来调用底层的C/C++运行库从而为应用程序服务的。

       这里,我们仍然是通过具体的例子来说明Binder机制在应用程序框架层中的Java接口,主要就是Service Manager、Server和Client这三个角色的实现了。通常,在应用程序中,我们都是把Server实现为Service的形式,并且通过IServiceManager.addService接口来把这个Service添加到Service Manager,Client也是通过IServiceManager.getService接口来获得Service接口,接着就可以使用这个Service提供的功能了,这个与运行时库的Binder接口是一致的。

       前面我们学习Android硬件抽象层时,曾经在应用程序框架层中提供了一个硬件访问服务HelloService,这个Service运行在一个独立的进程中充当Server的角色,使用这个Service的Client运行在另一个进程中,它们之间就是通过Binder机制来通信的了。这里,我们就使用HelloService这个例子来分析Android系统进程间通信Binder机制在应用程序框架层的Java接口源代码。所以希望读者在阅读下面的内容之前,先了解一下前面在Ubuntu上为Android系统的Application Frameworks层增加硬件访问服务这篇文章。

       这篇文章通过五个情景来学习Android系统进程间通信Binder机制在应用程序框架层的Java接口:1. 获取Service Manager的Java远程接口的过程;2. HelloService接口的定义;3. HelloService的启动过程;4. Client获取HelloService的Java远程接口的过程;5.  Client通过HelloService的Java远程接口来使用HelloService提供的服务的过程。

       一.  获取Service Manager的Java远程接口

       我们要获取的Service Manager的Java远程接口是一个ServiceManagerProxy对象的IServiceManager接口。我们现在就来看看ServiceManagerProxy类是长什么样子的:

        (原文此处为类图:ServiceManagerProxy及相关类,其中包括BinderProxy和IServiceManager)
         这里可以看出,ServiceManagerProxy类实现了IServiceManager接口,IServiceManager提供了getService和addService两个成员函数来管理系统中的Service。从ServiceManagerProxy类的构造函数可以看出,它需要一个BinderProxy对象的IBinder接口来作为参数。因此,要获取Service Manager的Java远程接口ServiceManagerProxy,首先要有一个BinderProxy对象。下面将会看到这个BinderProxy对象是如何获得的。

         再来看一下是通过什么路径来获取Service Manager的Java远程接口ServiceManagerProxy的。这个主角就是ServiceManager了,我们也先看一下ServiceManager是长什么样子的:

        (原文此处为类图:ServiceManager及相关类,其中包括ServiceManagerNative)
        ServiceManager类有一个静态成员函数getIServiceManager,它的作用就是用来获取Service Manager的Java远程接口了,而这个函数又是通过ServiceManagerNative来获取Service Manager的Java远程接口的。

        接下来,我们就看一下ServiceManager.getIServiceManager这个函数的实现,这个函数定义在frameworks/base/core/java/android/os/ServiceManager.java文件中:

public final class ServiceManager {
	......
	private static IServiceManager sServiceManager;
	......
	private static IServiceManager getIServiceManager() {
		if (sServiceManager != null) {
			return sServiceManager;
		}

		// Find the service manager
		sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
		return sServiceManager;
	}
	......
}

        如果其静态成员变量sServiceManager尚未创建,那么就调用ServiceManagerNative.asInterface函数来创建。在调用ServiceManagerNative.asInterface函数之前,首先要通过BinderInternal.getContextObject函数来获得一个BinderProxy对象。

        我们来看一下BinderInternal.getContextObject的实现,这个函数定义在frameworks/base/core/java/com/android/internal/os/BinderInternal.java文件中:

public class BinderInternal {
	......
	/**
	* Return the global "context object" of the system.  This is usually
	* an implementation of IServiceManager, which you can use to find
	* other services.
	*/
	public static final native IBinder getContextObject();
	
	......
}

        这里可以看出,BinderInternal.getContextObject是一个JNI方法,它实现在frameworks/base/core/jni/android_util_Binder.cpp文件中:

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}

       这里看到我们熟悉的ProcessState::self()->getContextObject函数,具体可以参考 浅谈Android系统进程间通信(IPC)机制Binder中的Server和Client获得Service Manager接口之路一文。ProcessState::self()->getContextObject函数返回一个BpBinder对象,它的句柄值是0,即下面语句:

sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
       相当于是:

sp<IBinder> b = new BpBinder(0);
       接着调用javaObjectForIBinder把这个BpBinder对象转换成一个BinderProxy对象:

jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;

    if (val->checkSubclass(&gBinderOffsets)) {
        // One of our own!
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        //printf("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }

    // For the rest of the function we will hold this lock, to serialize
    // looking/creation of Java proxies for native Binder proxies.
    AutoMutex _l(mProxyLock);

    // Someone else's...  do we know about it?
    jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
    if (object != NULL) {
        jobject res = env->CallObjectMethod(object, gWeakReferenceOffsets.mGet);
        if (res != NULL) {
            LOGV("objectForBinder %p: found existing %p!\n", val.get(), res);
            return res;
        }
        LOGV("Proxy object %p of IBinder %p no longer in working set!!!", object, val.get());
        android_atomic_dec(&gNumProxyRefs);
        val->detachObject(&gBinderProxyOffsets);
        env->DeleteGlobalRef(object);
    }

    object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
    if (object != NULL) {
        LOGV("objectForBinder %p: created new %p!\n", val.get(), object);
        // The proxy holds a reference to the native object.
        env->SetIntField(object, gBinderProxyOffsets.mObject, (int)val.get());
        val->incStrong(object);

        // The native object needs to hold a weak reference back to the
        // proxy, so we can retrieve the same proxy if it is still active.
        jobject refObject = env->NewGlobalRef(
                env->GetObjectField(object, gBinderProxyOffsets.mSelf));
        val->attachObject(&gBinderProxyOffsets, refObject,
                jnienv_to_javavm(env), proxy_cleanup);

        // Note that a new object reference has been created.
        android_atomic_inc(&gNumProxyRefs);
        incRefsCreated(env);
    }

    return object;
}

        Before walking through this function, let us first look at the definitions of two variables, gBinderOffsets and gBinderProxyOffsets.

        First, the definition of gBinderOffsets:

static struct bindernative_offsets_t
{
    // Class state.
    jclass mClass;
    jmethodID mExecTransact;

    // Object state.
    jfieldID mObject;

} gBinderOffsets;

        In short, the gBinderOffsets variable records information about the Binder class shown in the second class diagram above. It is initialized in int_register_android_os_Binder, the function that registers the JNI methods of the Binder class:

const char* const kBinderPathName = "android/os/Binder";

static int int_register_android_os_Binder(JNIEnv* env)
{
    jclass clazz;

    clazz = env->FindClass(kBinderPathName);
    LOG_FATAL_IF(clazz == NULL, "Unable to find class android.os.Binder");

    gBinderOffsets.mClass = (jclass) env->NewGlobalRef(clazz);
    gBinderOffsets.mExecTransact
        = env->GetMethodID(clazz, "execTransact", "(IIII)Z");
    assert(gBinderOffsets.mExecTransact);

    gBinderOffsets.mObject
        = env->GetFieldID(clazz, "mObject", "I");
    assert(gBinderOffsets.mObject);

    return AndroidRuntime::registerNativeMethods(
        env, kBinderPathName,
        gBinderMethods, NELEM(gBinderMethods));
}

        Next, the definition of gBinderProxyOffsets:

static struct binderproxy_offsets_t
{
    // Class state.
    jclass mClass;
    jmethodID mConstructor;
    jmethodID mSendDeathNotice;

    // Object state.
    jfieldID mObject;
    jfieldID mSelf;

} gBinderProxyOffsets;

        In short, the gBinderProxyOffsets variable records information about the BinderProxy class shown in the first diagram above. It is initialized in int_register_android_os_BinderProxy, the function that registers the JNI methods of the BinderProxy class:

const char* const kBinderProxyPathName = "android/os/BinderProxy";

static int int_register_android_os_BinderProxy(JNIEnv* env)
{
    jclass clazz;

    clazz = env->FindClass("java/lang/ref/WeakReference");
    LOG_FATAL_IF(clazz == NULL, "Unable to find class java.lang.ref.WeakReference");
    gWeakReferenceOffsets.mClass = (jclass) env->NewGlobalRef(clazz);
    gWeakReferenceOffsets.mGet
        = env->GetMethodID(clazz, "get", "()Ljava/lang/Object;");
    assert(gWeakReferenceOffsets.mGet);

    clazz = env->FindClass("java/lang/Error");
    LOG_FATAL_IF(clazz == NULL, "Unable to find class java.lang.Error");
    gErrorOffsets.mClass = (jclass) env->NewGlobalRef(clazz);
    
    clazz = env->FindClass(kBinderProxyPathName);
    LOG_FATAL_IF(clazz == NULL, "Unable to find class android.os.BinderProxy");

    gBinderProxyOffsets.mClass = (jclass) env->NewGlobalRef(clazz);
    gBinderProxyOffsets.mConstructor
        = env->GetMethodID(clazz, "<init>", "()V");
    assert(gBinderProxyOffsets.mConstructor);
    gBinderProxyOffsets.mSendDeathNotice
        = env->GetStaticMethodID(clazz, "sendDeathNotice", "(Landroid/os/IBinder$DeathRecipient;)V");
    assert(gBinderProxyOffsets.mSendDeathNotice);

    gBinderProxyOffsets.mObject
        = env->GetFieldID(clazz, "mObject", "I");
    assert(gBinderProxyOffsets.mObject);
    gBinderProxyOffsets.mSelf
        = env->GetFieldID(clazz, "mSelf", "Ljava/lang/ref/WeakReference;");
    assert(gBinderProxyOffsets.mSelf);

    return AndroidRuntime::registerNativeMethods(
        env, kBinderProxyPathName,
        gBinderProxyMethods, NELEM(gBinderProxyMethods));
}

        Returning to the javaObjectForIBinder function above, consider the following code:

    if (val->checkSubclass(&gBinderOffsets)) {
        // One of our own!
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        //printf("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }

        As mentioned earlier, the parameter passed in here is a pointer to a BpBinder, and BpBinder::checkSubclass is inherited from the parent class IBinder::checkSubclass, which does nothing and simply returns false.

        The function therefore continues:

jobject object = (jobject)val->findObject(&gBinderProxyOffsets);

        Since this BpBinder object has just been created for the first time, nothing has been attached to it yet, so the object returned here is NULL.

        The function continues further:

object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);

        This creates a BinderProxy object. Once created, the BpBinder object must be associated with this BinderProxy object:

env->SetIntField(object, gBinderProxyOffsets.mObject, (int)val.get());

        The association is made through the BinderProxy.mObject member variable, which records the address of this BpBinder object.

        Next, a reference to the proxy is also attached to the BpBinder itself, so that the next time it is needed, the BpBinder::findObject call seen in the earlier step can retrieve it:

val->attachObject(&gBinderProxyOffsets, refObject,
                jnienv_to_javavm(env), proxy_cleanup);

        Finally, this BinderProxy is returned to the android_os_BinderInternal_getContextObject function, and eventually back to the original ServiceManager.getIServiceManager function. We have thus obtained a BinderProxy object.
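
        Incidentally, the find/attach logic above is simply a find-or-create cache: one Java proxy per native BpBinder, held through a weak reference so that it can be collected once unused. The same pattern, restated as a small self-contained Java sketch (the names are illustrative only, not framework API; the comments map each step back to javaObjectForIBinder):

import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

class BinderProxyCache {
    // A tiny stand-in for android.os.BinderProxy, keyed by the native BpBinder address.
    static class Proxy {
        final long nativeBpBinder;
        Proxy(long nativeBpBinder) { this.nativeBpBinder = nativeBpBinder; }
    }

    private final Map<Long, WeakReference<Proxy>> cache = new HashMap<>();

    synchronized Proxy proxyFor(long nativeBpBinder) {          // AutoMutex _l(mProxyLock)
        WeakReference<Proxy> ref = cache.get(nativeBpBinder);
        Proxy proxy = (ref != null) ? ref.get() : null;         // findObject + WeakReference.get
        if (proxy == null) {
            proxy = new Proxy(nativeBpBinder);                  // NewObject + SetIntField
            cache.put(nativeBpBinder, new WeakReference<>(proxy)); // attachObject
        }
        return proxy;                                           // same IBinder, same Java proxy
    }
}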

        Back in ServiceManager.getIServiceManager, we return from the following statement:

sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());

        which is equivalent to:

sServiceManager = ServiceManagerNative.asInterface(new BinderProxy());

        Next comes the call to ServiceManagerNative.asInterface, which is defined in frameworks/base/core/java/android/os/ServiceManagerNative.java:

public abstract class ServiceManagerNative ......
{
	......
	static public IServiceManager asInterface(IBinder obj)
	{
		if (obj == null) {
			return null;
		}
		IServiceManager in =
			(IServiceManager)obj.queryLocalInterface(descriptor);
		if (in != null) {
			return in;
		}

		return new ServiceManagerProxy(obj);
	}
	......
}

        Here the parameter obj is a BinderProxy object, whose queryLocalInterface function returns null. Therefore a ServiceManagerProxy object is ultimately created with this BinderProxy object as its argument.

        Returning to ServiceManager.getIServiceManager, we come back from the statement:

sServiceManager = ServiceManagerNative.asInterface(new BinderProxy());

        which is equivalent to:

sServiceManager = new ServiceManagerProxy(new BinderProxy());

        With that, our goal is finally accomplished.

        To summarize: at the Java layer we now hold a Service Manager remote interface, a ServiceManagerProxy, and at the JNI layer this ServiceManagerProxy object is associated, via gBinderProxyOffsets, with a BpBinder object whose handle value is 0.

        This completes the process of obtaining the Java remote interface to the Service Manager.
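
        The layering just derived can be condensed into a short sketch (the comments describe the relationships established above; the one executable line is the same call analyzed in this section):

// Java layer:   ServiceManagerProxy.mRemote  -->  BinderProxy
// JNI layer:    BinderProxy.mObject          -->  address of a BpBinder whose handle is 0
// Native layer: BpBinder::transact()         -->  the Binder driver via /dev/binder
IServiceManager sm =
        ServiceManagerNative.asInterface(BinderInternal.getContextObject());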

       2. Defining the HelloService interface

       Earlier, while studying the Android hardware abstraction layer (HAL) in the article 在Ubuntu上为Android系统的Application Frameworks层增加硬件访问服务, we wrote a hardware service, HelloService, whose service interface is defined in frameworks/base/core/java/android/os/IHelloService.aidl:

package android.os;

interface IHelloService
{
	void setVal(int val);
	int getVal();
}

        This service interface is very simple: it has only two functions, one to write and one to read a hardware register.

        Note that this is an .aidl file; compiling it generates an IHelloService.java. Let us look at the contents of that generated file to see what secret lets it support inter-process communication so conveniently.

/*
 * This file is auto-generated.  DO NOT MODIFY.
 * Original file: frameworks/base/core/java/android/os/IHelloService.aidl
 */
package android.os;
public interface IHelloService extends android.os.IInterface
{
	/** Local-side IPC implementation stub class. */
	public static abstract class Stub extends android.os.Binder implements android.os.IHelloService
	{
		private static final java.lang.String DESCRIPTOR = "android.os.IHelloService";
		/** Construct the stub at attach it to the interface. */
		public Stub()
		{
			this.attachInterface(this, DESCRIPTOR);
		}

		/**
		* Cast an IBinder object into an android.os.IHelloService interface,
		* generating a proxy if needed.
		*/
		public static android.os.IHelloService asInterface(android.os.IBinder obj)
		{
			if ((obj==null)) {
				return null;
			}
			android.os.IInterface iin = (android.os.IInterface)obj.queryLocalInterface(DESCRIPTOR);
			if (((iin!=null)&&(iin instanceof android.os.IHelloService))) {
				return ((android.os.IHelloService)iin);
			}
			return new android.os.IHelloService.Stub.Proxy(obj);
		}

		public android.os.IBinder asBinder()
		{
			return this;
		}

		@Override 
		public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException
		{
			switch (code)
			{
				case INTERFACE_TRANSACTION:
				{
					reply.writeString(DESCRIPTOR);
					return true;
				}
				case TRANSACTION_setVal:
				{
					data.enforceInterface(DESCRIPTOR);
					int _arg0;
					_arg0 = data.readInt();
					this.setVal(_arg0);
					reply.writeNoException();
					return true;
				}
				case TRANSACTION_getVal:
				{
					data.enforceInterface(DESCRIPTOR);
					int _result = this.getVal();
					reply.writeNoException();
					reply.writeInt(_result);
					return true;
				}
			}
			return super.onTransact(code, data, reply, flags);
		}

		private static class Proxy implements android.os.IHelloService
		{
			private android.os.IBinder mRemote;

			Proxy(android.os.IBinder remote)
			{
				mRemote = remote;
			}

			public android.os.IBinder asBinder()
			{
				return mRemote;
			}

			public java.lang.String getInterfaceDescriptor()
			{
				return DESCRIPTOR;
			}

			public void setVal(int val) throws android.os.RemoteException
			{
				android.os.Parcel _data = android.os.Parcel.obtain();
				android.os.Parcel _reply = android.os.Parcel.obtain();
				try {
					_data.writeInterfaceToken(DESCRIPTOR);
					_data.writeInt(val);
					mRemote.transact(Stub.TRANSACTION_setVal, _data, _reply, 0);
					_reply.readException();
				}
				finally {
					_reply.recycle();
					_data.recycle();
				}
			}

			public int getVal() throws android.os.RemoteException
			{
				android.os.Parcel _data = android.os.Parcel.obtain();
				android.os.Parcel _reply = android.os.Parcel.obtain();
				int _result;
				try {
					_data.writeInterfaceToken(DESCRIPTOR);
					mRemote.transact(Stub.TRANSACTION_getVal, _data, _reply, 0);
					_reply.readException();
					_result = _reply.readInt();
				}
				finally {
					_reply.recycle();
					_data.recycle();
				}
				return _result;
			}
		}

		static final int TRANSACTION_setVal = (android.os.IBinder.FIRST_CALL_TRANSACTION + 0);
		static final int TRANSACTION_getVal = (android.os.IBinder.FIRST_CALL_TRANSACTION + 1);
	}

	public void setVal(int val) throws android.os.RemoteException;
	public int getVal() throws android.os.RemoteException;
}

        Here we can see what IHelloService.aidl really compiles into: Stub and Proxy classes generated from the definition of the IHelloService interface. This is the familiar Binder pattern: the Server that implements HelloService must inherit from the IHelloService.Stub class, while the remote interface to HelloService is the IHelloService interface obtained through an IHelloService.Stub.Proxy object. In what follows, we will see how IHelloService.Stub and IHelloService.Stub.Proxy are created and used.

        3. How HelloService is started

        Before discussing how HelloService is started, let us first see how the Server that implements the HelloService interface is defined.

        Recall that in the article 在Ubuntu上为Android系统的Application Frameworks层增加硬件访问服务, we added a HelloService.java file under the frameworks/base/services/java/com/android/server directory:

package com.android.server;

import android.content.Context;
import android.os.IHelloService;
import android.util.Slog;

public class HelloService extends IHelloService.Stub {
	private static final String TAG = "HelloService";

	HelloService() {
		init_native();
	}

	public void setVal(int val) {
		setVal_native(val);
	}	

	public int getVal() {
		return getVal_native();
	}
	
	private static native boolean init_native();
    	private static native void setVal_native(int val);
	private static native int getVal_native();
}

        Here we can see that HelloService inherits from the IHelloService.Stub class and implements the getVal and setVal functions through native method calls. We will not concern ourselves with the implementation of those native methods; interested readers can refer to 在Ubuntu上为Android系统的Application Frameworks层增加硬件访问服务.

        With the HelloService Server class in place, the next step is to start it. The SystemServer class is defined in frameworks/base/services/java/com/android/server/SystemServer.java. A SystemServer object is created when the system boots, and upon creation it starts a thread that creates the HelloService and adds it to the Service Manager.

       Let us look at the relevant code:

class ServerThread extends Thread {
	......

	@Override
	public void run() {

		......

		Looper.prepare();

		......

		try {
			Slog.i(TAG, "Hello Service");
			ServiceManager.addService("hello", new HelloService());
		} catch (Throwable e) {
			Slog.e(TAG, "Failure starting Hello Service", e);
		}

		......

		Looper.loop();

		......
	}
}

......

public class SystemServer
{
	......

	/**
	* This method is called from Zygote to initialize the system. This will cause the native
	* services (SurfaceFlinger, AudioFlinger, etc..) to be started. After that it will call back
	* up into init2() to start the Android services.
	*/
	native public static void init1(String[] args);

	......

	public static final void init2() {
		Slog.i(TAG, "Entered the Android system server!");
		Thread thr = new ServerThread();
		thr.setName("android.server.ServerThread");
		thr.start();
	}
	......
}

        Here we can see that the ServerThread.run function executes the following code to add the HelloService to the Service Manager; this is the part we focus on:

try {
	Slog.i(TAG, "Hello Service");
	ServiceManager.addService("hello", new HelloService());
} catch (Throwable e) {
	Slog.e(TAG, "Failure starting Hello Service", e);
}

         It calls ServiceManager.addService to add a HelloService instance to the Service Manager.

         Let us first look at how the HelloService is created:

new HelloService();

         This statement invokes the constructor of the HelloService class. Since HelloService inherits from IHelloService.Stub, and IHelloService.Stub in turn inherits from the Binder class, the Binder class constructor is eventually called:

public class Binder implements IBinder {
	......
	
	private int mObject;
	
	......


	public Binder() {
		init();
		......
	}


	private native final void init();


	......
}

        This constructor calls a JNI method, init, to initialize the Binder object. The JNI method is defined in frameworks/base/core/jni/android_util_Binder.cpp:

static void android_os_Binder_init(JNIEnv* env, jobject clazz)
{
    JavaBBinderHolder* jbh = new JavaBBinderHolder(env, clazz);
    if (jbh == NULL) {
        jniThrowException(env, "java/lang/OutOfMemoryError", NULL);
        return;
    }
    LOGV("Java Binder %p: acquiring first ref on holder %p", clazz, jbh);
    jbh->incStrong(clazz);
    env->SetIntField(clazz, gBinderOffsets.mObject, (int)jbh);
}

        It really does only one thing: it creates a JavaBBinderHolder object, jbh, and saves the address of this object in the mObject member variable of the Binder class shown above. We will make use of this later.

        Back in the ServerThread.run function, let us now look at the implementation of ServiceManager.addService:

public final class ServiceManager {
	......

	private static IServiceManager sServiceManager;

	......

	public static void addService(String name, IBinder service) {
		try {
			getIServiceManager().addService(name, service);
		} catch (RemoteException e) {
			Log.e(TAG, "error in addService", e);
		}
	}

	......

}

         We analyzed the getIServiceManager function earlier: it returns the IServiceManager interface of a ServiceManagerProxy object. So let us step into ServiceManagerProxy.addService:

class ServiceManagerProxy implements IServiceManager {
	public ServiceManagerProxy(IBinder remote) {
		mRemote = remote;
	}

	......

	public void addService(String name, IBinder service)
		throws RemoteException {
			Parcel data = Parcel.obtain();
			Parcel reply = Parcel.obtain();
			data.writeInterfaceToken(IServiceManager.descriptor);
			data.writeString(name);
			data.writeStrongBinder(service);
			mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
			reply.recycle();
			data.recycle();
	}

	......

	private IBinder mRemote;
}

       The Parcel class here is implemented in Java; it serves the same purpose as the C++ Parcel class described in the earlier articles on the Binder mechanism, namely carrying data between two processes.
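
       As a quick illustration of that role, here is a minimal, self-contained sketch of the typical obtain/write/rewind/read/recycle lifecycle of android.os.Parcel (the interface token is the IHelloService descriptor used in this article; this snippet only exercises the API within one process):

Parcel data = Parcel.obtain();                              // take a Parcel from the pool
try {
    data.writeInterfaceToken("android.os.IHelloService");   // the RPC header a proxy writes
    data.writeInt(42);                                      // marshal an argument
    data.setDataPosition(0);                                // rewind before reading back
    data.enforceInterface("android.os.IHelloService");      // the check a stub performs
    int val = data.readInt();                               // unmarshal in the same order
} finally {
    data.recycle();                                         // always return it to the pool
}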

       Here we focus on how the service parameter is written into the data Parcel object:

data.writeStrongBinder(service);

       Let us look at the implementation of Parcel.writeStrongBinder:

public final class Parcel {
	......

	/**
	* Write an object into the parcel at the current dataPosition(),
	* growing dataCapacity() if needed.
	*/
	public final native void writeStrongBinder(IBinder val);

	......
}

        The writeStrongBinder function is again a JNI method, defined in frameworks/base/core/jni/android_util_Binder.cpp:

static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jobject clazz, jobject object)
{
    Parcel* parcel = parcelForJavaObject(env, clazz);
    if (parcel != NULL) {
        const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
        if (err != NO_ERROR) {
            jniThrowException(env, "java/lang/OutOfMemoryError", NULL);
        }
    }
}

       The clazz parameter here is a Parcel object implemented in Java; parcelForJavaObject converts it into a Parcel object implemented in C++. We will not examine that function here; interested readers can study it, as it too is implemented in frameworks/base/core/jni/android_util_Binder.cpp.

       The object parameter is a Binder object implemented in Java. When the C++ Parcel::writeStrongBinder is called to write this object into the parcel, the Java Binder object is first converted, through the ibinderForJavaObject function, into a C++ IBinder by way of its JavaBBinderHolder:

sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
    if (obj == NULL) return NULL;

    if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
        JavaBBinderHolder* jbh = (JavaBBinderHolder*)
            env->GetIntField(obj, gBinderOffsets.mObject);
        return jbh != NULL ? jbh->get(env) : NULL;
    }

    if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
        return (IBinder*)
            env->GetIntField(obj, gBinderProxyOffsets.mObject);
    }

    LOGW("ibinderForJavaObject: %p is not a Binder object", obj);
    return NULL;
}

         We know that the obj parameter here is an instance of the Binder class, so execution enters the first if branch.

         Earlier, when the HelloService object was created and the constructor of its parent class Binder ran, a JavaBBinderHolder object was created at the JNI layer and its address was saved in the Binder class's mObject member variable. Here, therefore, the mObject member of obj is cast back to a JavaBBinderHolder object.

         At this point the function's work is not yet done; one last, crucial step remains:

return jbh != NULL ? jbh->get(env) : NULL;

        That step is the jbh->get call.

        The JavaBBinderHolder class has a member variable mBinder of type JavaBBinder, and the JavaBBinder class inherits from BBinder. When studying the C++ implementation of the Binder mechanism, in the article Android系统进程间通信(IPC)机制Binder中的Server启动过程源代码分析, we saw that the IPCThreadState class is responsible for interacting with the Binder driver: after lightly processing a request read from the driver, it hands the request to BBinder's onTransact function for further handling.

        What we are doing here is adding the JavaBBinder Binder entity inside the JavaBBinderHolder to the Service Manager, so that when a Client later requests HelloService's services, the Binder driver can wake up this Server thread and call the onTransact function of this JavaBBinder entity for further processing. We will come back to that function later.

       First, let us look at the implementation of JavaBBinderHolder::get:

class JavaBBinderHolder : public RefBase
{
	......

	JavaBBinderHolder(JNIEnv* env, jobject object)
		: mObject(object)
	{
		......
	}

	......

	sp<JavaBBinder> get(JNIEnv* env)
	{
		AutoMutex _l(mLock);
		sp<JavaBBinder> b = mBinder.promote();
		if (b == NULL) {
			b = new JavaBBinder(env, mObject);
			mBinder = b;
			......
		}

		return b;
	}

	......

	jobject         mObject;
	wp<JavaBBinder> mBinder;
};

       This is the first call to get, so a JavaBBinder object is created and stored in the mBinder member variable. Note that mObject here is the HelloService object created above, a Java object; this HelloService object will ultimately also be stored in the JavaBBinder object's mObject member variable.

       Back in the android_os_Parcel_writeStrongBinder function, the statement:

const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));

       is effectively:

const status_t err = parcel->writeStrongBinder(((JavaBBinderHolder*)(obj.mObject))->get(env));

       The net effect, therefore, is that a Binder entity of type JavaBBinder is written into the parcel. This is consistent with the C++ implementation of the Binder mechanism introduced in the earlier articles.

       Returning once more to ServiceManagerProxy.addService, the function finally performs the inter-process communication through its member variable mRemote. As noted when we discussed obtaining the Service Manager remote interface, mRemote is in fact a BinderProxy object, so let us look at the implementation of BinderProxy.transact:

final class BinderProxy implements IBinder {
	......

	public native boolean transact(int code, Parcel data, Parcel reply,
								int flags) throws RemoteException;

	......
}

       The transact member function is again a JNI method, defined in frameworks/base/core/jni/android_util_Binder.cpp:

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
						jint code, jobject dataObj,
						jobject replyObj, jint flags)
{
	......

	Parcel* data = parcelForJavaObject(env, dataObj);
	if (data == NULL) {
		return JNI_FALSE;
	}
	Parcel* reply = parcelForJavaObject(env, replyObj);
	if (reply == NULL && replyObj != NULL) {
		return JNI_FALSE;
	}

	IBinder* target = (IBinder*)
		env->GetIntField(obj, gBinderProxyOffsets.mObject);
	if (target == NULL) {
		jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
		return JNI_FALSE;
	}

	......

	status_t err = target->transact(code, *data, reply, flags);

	......

	if (err == NO_ERROR) {
		return JNI_TRUE;
	} else if (err == UNKNOWN_TRANSACTION) {
		return JNI_FALSE;
	}

	signalExceptionForError(env, obj, err);
	return JNI_FALSE;
}

        The dataObj and replyObj parameters passed in here are Parcel objects implemented in Java. Since we are now at the JNI layer, they must be converted into the C++ Parcel implementation, which is done through the parcelForJavaObject function mentioned earlier.

        When analyzing how the Service Manager remote interface is obtained, we said that a BpBinder object with handle value 0 was created at the JNI layer, and that its address was saved in the BinderProxy's mObject field (located via gBinderProxyOffsets.mObject). The following statement therefore retrieves the IBinder interface of that BpBinder object:

IBinder* target = (IBinder*)
        env->GetIntField(obj, gBinderProxyOffsets.mObject);

        Once we have this IBinder interface, everything proceeds just as in the C/C++ implementation of the Binder mechanism covered in the previous articles.

        Finally, BpBinder::transact enters the Binder driver, and the driver wakes up the Service Manager to respond to this ADD_SERVICE_TRANSACTION request:

status_t err = target->transact(code, *data, reply, flags);

       For the details, see Android系统进程间通信(IPC)机制Binder中的Server启动过程源代码分析. Note that data here contains a Binder entity of type JavaBBinder, which represents the HelloService we created above. When the Service Manager receives this ADD_SERVICE_TRANSACTION request, it takes this Binder entity under its own internal management.

       With that, the startup of the Server implementing HelloService is complete.

       4. How a Client obtains the Java remote interface to HelloService

        Earlier, while studying the Android hardware abstraction layer (HAL) in the article 在Ubuntu上为Android系统内置Java应用程序测试Application Frameworks层的硬件服务, we created an application that plays the Client role: it uses the Service Manager's Java remote interface to obtain the remote interface to HelloService, and then calls the services HelloService provides.

        Let us see how it obtains HelloService's remote interface through the Service Manager's Java remote interface. In the onCreate function of the Hello activity, the remote interface to HelloService is obtained via IServiceManager.getService:

public class Hello extends Activity implements OnClickListener {  
	...... 

	private IHelloService helloService = null;  

	......

	@Override  
	public void onCreate(Bundle savedInstanceState) {  

		helloService = IHelloService.Stub.asInterface(  
							ServiceManager.getService("hello"));
	}

	......
}

        Let us first look at the implementation of ServiceManager.getService. As noted before, this actually calls the ServiceManagerProxy.getService function:

class ServiceManagerProxy implements IServiceManager {
	public ServiceManagerProxy(IBinder remote) {
		mRemote = remote;
	}

	......

	public IBinder getService(String name) throws RemoteException {
		Parcel data = Parcel.obtain();
		Parcel reply = Parcel.obtain();
		data.writeInterfaceToken(IServiceManager.descriptor);
		data.writeString(name);
		mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
		IBinder binder = reply.readStrongBinder();
		reply.recycle();
		data.recycle();
		return binder;
	}

	......

	private IBinder mRemote;
}

         The actual work is ultimately done through mRemote.transact. As we saw earlier, mRemote is in fact a BinderProxy object, and its transact member function is a JNI method implemented by the android_os_BinderProxy_transact function in frameworks/base/core/jni/android_util_Binder.cpp.

        We have already seen this function above, so we will not list it again. However, when it returns from:

status_t err = target->transact(code, *data, reply, flags);

       the reply variable now contains a reference to HelloService. Note that this reply is the very reply parameter we passed in from ServiceManagerProxy.getService; it is a Parcel object.

       Back in ServiceManagerProxy.getService, we return from the statement:

mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);

       Next, the reference to HelloService is read out with the following statement:

IBinder binder = reply.readStrongBinder();

       Let us look at the implementation of Parcel.readStrongBinder:

public final class Parcel {
	......

	/**
	* Read an object from the parcel at the current dataPosition().
	*/
	public final native IBinder readStrongBinder();

	......
}

        It too is a JNI method, implemented in frameworks/base/core/jni/android_util_Binder.cpp:

static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jobject clazz)
{
    Parcel* parcel = parcelForJavaObject(env, clazz);
    if (parcel != NULL) {
        return javaObjectForIBinder(env, parcel->readStrongBinder());
    }
    return NULL;
}

       Here the Java Parcel object clazz is first converted into the C++ Parcel object parcel; then a Binder reference is obtained through the parcel->readStrongBinder function.

       We analyzed this function earlier, in the article Android系统进程间通信(IPC)机制Binder中的Client获得Server远程接口过程源代码分析: it ultimately returns a BpBinder object. The statement:

return javaObjectForIBinder(env, parcel->readStrongBinder());

       is therefore equivalent to:

return javaObjectForIBinder(env, new BpBinder(handle));

       Here handle is the handle, within the Client process, of the HelloService Binder entity. It is assigned by the Binder driver, and the upper layers need not care about its exact value. As for javaObjectForIBinder, we described it in detail when discussing how the Service Manager's Java remote interface is obtained, so we will not repeat that here; its job is to create a BinderProxy object and save the address of the newly obtained BpBinder object in that BinderProxy's mObject member variable.

       Finally, control returns to Hello.onCreate, from the statement:

helloService = IHelloService.Stub.asInterface(  
					ServiceManager.getService("hello"));

      which is equivalent to:

helloService = IHelloService.Stub.asInterface(new BinderProxy()));

      Recall from the earlier discussion of the IHelloService interface definition that IHelloService.Stub.asInterface is defined as follows:

public interface IHelloService extends android.os.IInterface
{
	/** Local-side IPC implementation stub class. */
	public static abstract class Stub extends android.os.Binder implements android.os.IHelloService
	{
		......

		public static android.os.IHelloService asInterface(android.os.IBinder obj)
		{
			if ((obj==null)) {
				return null;
			}
			android.os.IInterface iin = (android.os.IInterface)obj.queryLocalInterface(DESCRIPTOR);
			if (((iin!=null)&&(iin instanceof android.os.IHelloService))) {
				return ((android.os.IHelloService)iin);
			}
			return new android.os.IHelloService.Stub.Proxy(obj);
		}

		......
	}
}

        Here obj is a BinderProxy object, whose queryLocalInterface returns null, so the following statement is executed to obtain HelloService's remote interface:

return new android.os.IHelloService.Stub.Proxy(obj);

        which is equivalent to:

return new android.os.IHelloService.Stub.Proxy(new BinderProxy());

        We have thus obtained the remote interface to HelloService; in essence it is an IHelloService.Stub.Proxy object that implements the IHelloService interface.
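
        In everyday client code it is worth being defensive with this handle: the lookup can fail and every proxy call can throw. A minimal usage sketch along those lines (assuming the "hello" service has been registered as in the previous section):

IHelloService helloService =
        IHelloService.Stub.asInterface(ServiceManager.getService("hello"));
if (helloService != null) {                  // getService returns null if not registered
    try {
        helloService.setVal(5);              // the proxy marshals the argument and transacts
        int val = helloService.getVal();     // round-trips through the Binder driver
    } catch (RemoteException e) {
        // the remote process died or the transaction failed
    }
}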

        5. How a Client uses HelloService's services through its Java remote interface

        Once the Hello activity described above has obtained HelloService's remote interface, it can use its services.

        We will take IHelloService.getVal as a detailed example. The Hello.onClick function calls IHelloService.getVal:

public class Hello extends Activity implements OnClickListener {
	......

	@Override
	public void onClick(View v) {
		if(v.equals(readButton)) {
			int val = helloService.getVal();  
			......
		}
		else if(v.equals(writeButton)) {
			......
		}
		else if(v.equals(clearButton)) {
			......
		}
	}

	......
}

        From the preceding analysis we know that the helloService interface here is actually an IHelloService.Stub.Proxy object, so we step into the getVal function of the IHelloService.Stub.Proxy class:

public interface IHelloService extends android.os.IInterface
{
	/** Local-side IPC implementation stub class. */
	public static abstract class Stub extends android.os.Binder implements android.os.IHelloService
	{
		
		......

		private static class Proxy implements android.os.IHelloService
		{
			private android.os.IBinder mRemote;

			......

			public int getVal() throws android.os.RemoteException
			{
				android.os.Parcel _data = android.os.Parcel.obtain();
				android.os.Parcel _reply = android.os.Parcel.obtain();
				int _result;
				try {
					_data.writeInterfaceToken(DESCRIPTOR);
					mRemote.transact(Stub.TRANSACTION_getVal, _data, _reply, 0);
					_reply.readException();
					_result = _reply.readInt();
				}
				finally {
					_reply.recycle();
					_data.recycle();
				}
				return _result;
			}
		}

		......
		static final int TRANSACTION_getVal = (android.os.IBinder.FIRST_CALL_TRANSACTION + 1);
	}

	......
}

        Here we can see that the request for HelloService to perform the TRANSACTION_getVal operation is actually made through mRemote.transact. This mRemote is a BinderProxy object, the one created earlier when we obtained HelloService's Java remote interface.

        BinderProxy.transact is a JNI method that we have already covered, so we will not repeat it. It eventually reaches the Binder driver, and the driver wakes up the HelloService Server. As mentioned when discussing HelloService's startup, once this Server thread is woken up, it calls the onTransact function of the JavaBBinder class:

class JavaBBinder : public BBinder
{
	JavaBBinder(JNIEnv* env, jobject object)
		: mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object))
	{
		......
	}

	......

	virtual status_t onTransact(
		uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0)
	{
		JNIEnv* env = javavm_to_jnienv(mVM);

		......

		jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact,
			code, (int32_t)&data, (int32_t)reply, flags);

		......

		return res != JNI_FALSE ? NO_ERROR : UNKNOWN_TRANSACTION;
	}

	......

        JavaVM* const   mVM;
	jobject const   mObject;
};

         As introduced when discussing HelloService's startup, the mObject member variable inside the JavaBBinder class is an instance of the HelloService class. Therefore, the statement:

jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact,
			code, (int32_t)&data, (int32_t)reply, flags);

         invokes HelloService.execTransact, which is inherited from the Binder class's execTransact function:

public class Binder implements IBinder {
	......

	// Entry point from android_util_Binder.cpp's onTransact
	private boolean execTransact(int code, int dataObj, int replyObj, int flags) {
		Parcel data = Parcel.obtain(dataObj);
		Parcel reply = Parcel.obtain(replyObj);
		// theoretically, we should call transact, which will call onTransact,
		// but all that does is rewind it, and we just got these from an IPC,
		// so we'll just call it directly.
		boolean res;
		try {
			res = onTransact(code, data, reply, flags);
		} catch (RemoteException e) {
			reply.writeException(e);
			res = true;
		} catch (RuntimeException e) {
			reply.writeException(e);
			res = true;
		} catch (OutOfMemoryError e) {
			RuntimeException re = new RuntimeException("Out of memory", e);
			reply.writeException(re);
			res = true;
		}
		reply.recycle();
		data.recycle();
		return res;
	}
}

         This in turn calls onTransact for further processing. Since HelloService inherits from IHelloService.Stub, and IHelloService.Stub implements onTransact while HelloService does not override it, the call ultimately lands in IHelloService.Stub.onTransact:

public interface IHelloService extends android.os.IInterface
{
	/** Local-side IPC implementation stub class. */
	public static abstract class Stub extends android.os.Binder implements android.os.IHelloService
	{
		......

		@Override 
		public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException
		{
			switch (code)
			{
			......
			case TRANSACTION_getVal:
				{
					data.enforceInterface(DESCRIPTOR);
					int _result = this.getVal();
					reply.writeNoException();
					reply.writeInt(_result);
					return true;
				}
			}
			return super.onTransact(code, data, reply, flags);
		}

		......

	}
}

         which finally calls HelloService.getVal:

public class HelloService extends IHelloService.Stub {
	......

	public int getVal() {
		return getVal_native();
	}
	
	......
	private static native int getVal_native();
}

       Finally, after returning layer by layer, control comes back to IHelloService.Stub.Proxy.getVal, from the statement:

mRemote.transact(Stub.TRANSACTION_getVal, _data, _reply, 0);

       and the result is read out:

_result = _reply.readInt();

       The result is then returned to the Hello.onClick function.

       This concludes the walkthrough of how a Client uses HelloService's services through its Java remote interface.
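
       As a closing recap, the whole round trip analyzed in this article compresses to a few lines (a sketch of the call chain, exception handling omitted; the dispatch path is shown in the comments):

// Server side, in the system server process:
ServiceManager.addService("hello", new HelloService());
//   -> Parcel.writeStrongBinder -> a JavaBBinder entity registered with the Service Manager

// Client side, in the application process:
IHelloService svc = IHelloService.Stub.asInterface(ServiceManager.getService("hello"));
//   -> Parcel.readStrongBinder -> a BinderProxy wrapping a BpBinder handle

int val = svc.getVal();
//   -> Stub.Proxy.getVal -> BinderProxy.transact -> BpBinder::transact -> Binder driver
//   -> JavaBBinder::onTransact -> Binder.execTransact -> Stub.onTransact -> HelloService.getVal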

       At this point, the source code analysis of the Java interface to Android's inter-process communication mechanism Binder at the application frameworks layer is complete, and our study of the whole Binder mechanism comes to an end.

       To revisit the Android inter-process communication mechanism Binder from the beginning, return to the article Android进程间通信(IPC)机制Binder简要介绍和学习计划.

 

Source: http://blog.csdn.net/luoshengyang/article/details/6642463
