android Binder详解(3)


三,binder场景分析

写完SampleService,我们已经有一些基本概念了,下面我们通过一些场景的分析来理解一下整个binder的实现架构。
在这部分首先了解下/dev/binder的驱动,分析ServiceManager的实现,然后我们分几个具体的场景来分析:

  • addService()分析。
  • getService()分析。
  • client调用service接口分析。
  • TF_ONE_WAY的调用分析。
  • DeathRecipient 回调的调用分析。

在这部分,大多数时候,我们都会逐行分析代码,为了方便会把注释直接加在对应的代码上,如下面的例子:

//记录了BINDER_STAT_PROC的object建立了多少,这个是为了debug问题而记录的
    binder_stats_created(BINDER_STAT_PROC);
//binder_procs是一个全局变量,hlist_add_head是将proc->proc_node加入binder_procs的list中,实际将当前的binder_proc加入了list。
    hlist_add_head(&proc->proc_node, &binder_procs);
PS:在这部分内容中,binder特指kernel中的/dev/binder。


3.1 /dev/binder

binder device是android实现的一个虚拟设备,它的驱动实现在common/drivers/staging/android/binder.c中。对linux驱动编写不了解问题也不大,我们只要知道上层的操作下来,具体对应到binder中的哪个函数即可,其他不了解的api,google一下就清楚了(本人也只是了解些皮毛而已)。

static const struct file_operations binder_fops = {
    .owner = THIS_MODULE,
    .poll = binder_poll,
    .unlocked_ioctl = binder_ioctl,
    .mmap = binder_mmap,
    .open = binder_open,
    .flush = binder_flush,
    .release = binder_release,
};

static struct miscdevice binder_miscdev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name = "binder",
    .fops = &binder_fops
};
我们只要知道,binder的操作接口在binder_fops中定义了,后面我们遇到操作,直接找对应的函数即可。
比如open("/dev/binder",O_RDWR);就是open操作,对应的是binder_open这个函数,那我们直接看binder_open()函数即可。


3.2 ServiceManager main函数分析

servicemanager运行在一个单独的进程中,它的main函数实现在frameworks/native/cmds/servicemanager/service_manager.c:

int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
     binder_loop(bs, svcmgr_handler); 
    return 0;
}
main函数很短,主要是调用了三个函数,逐个看下。


3.2.1.1 binder_open(128*1024)

binder_open()实现在frameworks/native/cmds/servicemanager/binder.c中:

struct binder_state *binder_open(unsigned mapsize)
{
    struct binder_state *bs;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return 0;
    }

     bs->fd = open("/dev/binder", O_RDWR); 
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    bs->mapsize = mapsize; 
     bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0); 
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

        /* TODO: check version */

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return 0;
}

其中,fd是open /dev/binder得到的文件描述符,mapsize是传入的参数,也就是128*1024,而mapped是从/dev/binder中mmap出来的内存的起始地址。
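这几个字段对应的binder_state结构体很简单,定义大致如下(节选自servicemanager的binder.c,字段以实际版本的源码为准):

struct binder_state
{
    int fd;           //open /dev/binder得到的文件描述符
    void *mapped;     //mmap出来的内存起始地址
    unsigned mapsize; //mmap的大小,这里就是128*1024
};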
那么我们看看/dev/binder在open和mmap的时候做了什么。

binder_open() in binder

/dev/binder open对应的是binder.c中binder_open函数:

static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;

    binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n",
             current->group_leader->pid, current->pid);

     //申请binder_proc的内存。 
    proc = kzalloc(sizeof(*proc), GFP_KERNEL);
    if (proc == NULL)
        return -ENOMEM;
    //current是指向当前调用open()进程的task_struct指针,get_task_struct实际是atomic_inc(&(current)->usage),并不是给current赋值。 
    get_task_struct(current);
    proc->tsk = current; 
     //todo是struct list_head,是kernel中链表的head,INIT_LIST_HEAD是初始化链表的动作。 
    INIT_LIST_HEAD(&proc->todo);
     //wait是wait_queue_head_t,是linux的等待队列,init_waitqueue_head是初始化等待队列。 
    init_waitqueue_head(&proc->wait);
     //记录当前进程的nice值也就是优先级。 
    proc->default_priority = task_nice(current);

    binder_lock(__func__);

     //记录了BINDER_STAT_PROC的object建立了多少,这个是为了debug问题而记录的。 
    binder_stats_created(BINDER_STAT_PROC);
     //binder_procs是一个全局变量,hlist_add_head是将proc->proc_node加入binder_procs的list中,实际将当前的binder_proc加入了list 
    hlist_add_head(&proc->proc_node, &binder_procs);
    proc->pid = current->group_leader->pid;
     //建立delivered_death的list。 
    INIT_LIST_HEAD(&proc->delivered_death);
    filp->private_data = proc;

    binder_unlock(__func__);

    if (binder_debugfs_dir_entry_proc) {
        char strbuf[11];
        snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
        proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO,
            binder_debugfs_dir_entry_proc, proc, &binder_proc_fops);
    }

    return 0;
}
这个函数主要是初始化了binder_proc这个结构体,具体的行为在代码中已经注释了。
binder_open()中主要建立了一个binder_proc的数据结构。对于user层来说,每个进程都会对应一个binder_proc对象。为什么这么说?记得前面我们看到的ProcessState对象,/dev/binder是在构造ProcessState的时候打开的,而ProcessState在进程内又是唯一的,所以一个进程里面只会open一次/dev/binder,也只会建立一个binder_proc对象。binder_proc都组织在binder_procs这个list中管理。
另外在这里还有todo,delivered_death这两个list,和wait这个waitqueue,他们都在binder_open()中做了初始化,他们的具体作用我们在后面分析中会看到。
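为了后面分析方便,这里把binder_proc中和本节相关的几个字段节选出来示意一下(只是节选,完整定义以kernel源码为准):

struct binder_proc {
    struct hlist_node proc_node;      //挂在全局binder_procs链表上
    struct rb_root threads;           //本进程所有的binder_thread,红黑树管理
    struct rb_root nodes;             //本进程发布出去的binder_node
    struct task_struct *tsk;          //进程的task_struct
    void *buffer;                     //mmap对应的kernel虚拟地址
    ptrdiff_t user_buffer_offset;     //user地址和kernel地址之间的偏移
    struct list_head buffers;         //所有binder_buffer的list
    struct rb_root free_buffers;      //空闲binder_buffer的红黑树
    struct list_head todo;            //进程级待处理的binder_work
    wait_queue_head_t wait;           //进程级等待队列
    struct list_head delivered_death; //已投递的死亡通知
    int pid;
};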

binder_mmap()  in binder

mmap函数主要是调用到binder_mmap()这个函数:

static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
    int ret;
    struct vm_struct *area;
    struct binder_proc *proc = filp->private_data;
    const char *failure_string;
    struct binder_buffer *buffer;

    if (proc->tsk != current)
        return -EINVAL;

    if ((vma->vm_end - vma->vm_start) > SZ_4M)
        vma->vm_end = vma->vm_start + SZ_4M;

    binder_debug(BINDER_DEBUG_OPEN_CLOSE,
             "binder_mmap: %d %lx-%lx (%ld K) vma %lx pagep %lx\n",
             proc->pid, vma->vm_start, vma->vm_end,
             (vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags,
             (unsigned long)pgprot_val(vma->vm_page_prot));

    if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {
        ret = -EPERM;
        failure_string = "bad vm_flags";
        goto err_bad_arg;
    }
    vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;

    mutex_lock(&binder_mmap_lock);
    if (proc->buffer) {
        ret = -EBUSY;
        failure_string = "already mapped";
        goto err_already_mapped;
    }
     //从kernel vm区域,获取一块同样大小的空间,这块地址是可以在kernel访问的。 
    area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
    if (area == NULL) {
        ret = -ENOMEM;
        failure_string = "get_vm_area";
        goto err_get_vm_area_failed;
    }
     //记录mmap内存的vm地址。 
    proc->buffer = area->addr;
     //两个内存的Offset,这个是为了方便后续的做user空间和kernel空间的地址转换。 
    proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;
    mutex_unlock(&binder_mmap_lock);

#ifdef CONFIG_CPU_CACHE_VIPT
    if (cache_is_vipt_aliasing()) {
        while (CACHE_COLOUR((vma->vm_start ^ (uint32_t)proc->buffer))) {
            pr_info("binder_mmap: %d %lx-%lx maps %p bad alignment\n", proc->pid, vma->vm_start, vma->vm_end, proc->buffer);
            vma->vm_start += PAGE_SIZE;
        }
    }
#endif
     //按照mmap的size,分配保存page指针的内存。 
    proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL);
    if (proc->pages == NULL) {
        ret = -ENOMEM;
        failure_string = "alloc page array";
        goto err_alloc_pages_failed;
    }
    proc->buffer_size = vma->vm_end - vma->vm_start;

     //绑定vm_ops。 
    vma->vm_ops = &binder_vm_ops;
    vma->vm_private_data = proc;

     //申请一个page的物理内存(因为目前实际只使用了少量的内存),并把物理内存和user空间地址,vm地址映射好。 
    if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {
        ret = -ENOMEM;
        failure_string = "alloc small buf";
        goto err_alloc_small_buf_failed;
    }
     //把proc->buffer强制转换成binder_buffer。 
    buffer = proc->buffer;
     //初始化binder_proc.buffers这个list 
    INIT_LIST_HEAD(&proc->buffers);
     //把新申请的内存加入到binder_proc.buffers的list中去。 
    list_add(&buffer->entry, &proc->buffers);
    buffer->free = 1;
     //将新申请的内存放到binder_proc.free_buffers的红黑树中去。红黑树也是linux中常用的一个数据结构。 
    binder_insert_free_buffer(proc, buffer);
    proc->free_async_space = proc->buffer_size / 2;
    barrier();
     //获取当前进程的files_struct结构指针,后续操作中会需要使用。 
    proc->files = get_files_struct(current);
    proc->vma = vma;
    proc->vma_vm_mm = vma->vm_mm;

    /*pr_info("binder_mmap: %d %lx-%lx maps %p\n",
         proc->pid, vma->vm_start, vma->vm_end, proc->buffer);*/
    return 0;

err_alloc_small_buf_failed:
    kfree(proc->pages);
    proc->pages = NULL;
err_alloc_pages_failed:
    mutex_lock(&binder_mmap_lock);
    vfree(proc->buffer);
    proc->buffer = NULL;
err_get_vm_area_failed:
err_already_mapped:
    mutex_unlock(&binder_mmap_lock);
err_bad_arg:
    pr_err("binder_mmap: %d %lx-%lx %s failed %d\n",
           proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
    return ret;
}
binder_mmap中有个重要的调用是binder_update_page_range(),这个函数也一起看下:

static int binder_update_page_range(struct binder_proc *proc, int allocate,
                    void *start, void *end,
                    struct vm_area_struct *vma)
{
    void *page_addr;
    unsigned long user_page_addr;
    struct vm_struct tmp_area;
    struct page **page;
    struct mm_struct *mm;

    binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
             "%d: %s pages %p-%p\n", proc->pid,
             allocate ? "allocate" : "free", start, end);

    if (end <= start)
        return 0;

    trace_binder_update_page_range(proc, allocate, start, end);

    if (vma)
        mm = NULL;
    else
        mm = get_task_mm(proc->tsk);

    if (mm) {
        down_write(&mm->mmap_sem);
        vma = proc->vma;
        if (vma && mm != proc->vma_vm_mm) {
            pr_err("%d: vma mm and task mm mismatch\n",
                proc->pid);
            vma = NULL;
        }
    }

    if (allocate == 0)
        goto free_range;

    if (vma == NULL) {
        pr_err("%d: binder_alloc_buf failed to map pages in userspace, no vma\n",
            proc->pid);
        goto err_no_vma;
    }

    for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
        int ret;
        struct page **page_array_ptr;
        page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];

        BUG_ON(*page);
         //申请物理page 
        *page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
        if (*page == NULL) {
            pr_err("%d: binder_alloc_buf failed for page at %p\n",
                proc->pid, page_addr);
            goto err_alloc_page_failed;
        }
        tmp_area.addr = page_addr;
        tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */;
        page_array_ptr = page;
         //将page和vm做映射,就是物理内存和vm的address的映射。 
        ret = map_vm_area(&tmp_area, PAGE_KERNEL, &page_array_ptr);
        if (ret) {
            pr_err("%d: binder_alloc_buf failed to map page at %p in kernel\n",
                   proc->pid, page_addr);
            goto err_map_kernel_failed;
        }
         //把vm地址换算到user空间地址。 
        user_page_addr =
            (uintptr_t)page_addr + proc->user_buffer_offset;
         //将page和user空间地址做映射 
        ret = vm_insert_page(vma, user_page_addr, page[0]);
        if (ret) {
            pr_err("%d: binder_alloc_buf failed to map page at %lx in userspace\n",
                   proc->pid, user_page_addr);
            goto err_vm_insert_page_failed;
        }
        /* vm_insert_page does not seem to increment the refcount */
    }
    if (mm) {
        up_write(&mm->mmap_sem);
        mmput(mm);
    }
    return 0;

free_range:
    for (page_addr = end - PAGE_SIZE; page_addr >= start;
         page_addr -= PAGE_SIZE) {
        page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];
        if (vma)
            zap_page_range(vma, (uintptr_t)page_addr +
                proc->user_buffer_offset, PAGE_SIZE, NULL);
err_vm_insert_page_failed:
        unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
err_map_kernel_failed:
        __free_page(*page);
        *page = NULL;
err_alloc_page_failed:
        ;
    }
err_no_vma:
    if (mm) {
        up_write(&mm->mmap_sem);
        mmput(mm);
    }
    return -ENOMEM;
}
这边提一下,binder_mmap中传下来的 struct vm_area_struct *vma是kernel帮我们构造好的结构体,我们记得他里面的地址是用户空间的地址就可以了。
代码逐行看下来,binder_mmap的功能还是很明确的:为kernel和user空间的地址做了映射(为了方便后续user层和kernel层共享内存),并且直接在mmap的地址上建立了一个binder_buffer的对象,这个对象被加入到buffers的list中去了。
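其中user_buffer_offset这个偏移值得单独提一下:有了它,同一块物理内存对应的kernel地址和user地址之间就可以直接换算。下面用一段独立的小程序示意这个换算关系(地址都是假设的值,仅作说明用,不是驱动源码):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uintptr_t vm_start      = 0xb6200000; //假设:mmap返回的user空间起始地址(vma->vm_start)
    uintptr_t kernel_buffer = 0xf0800000; //假设:get_vm_area拿到的kernel虚拟地址(proc->buffer)
    ptrdiff_t user_buffer_offset = vm_start - kernel_buffer;

    //已知一个kernel地址,加上偏移就是对应的user地址;反过来减去偏移即可
    uintptr_t kernel_addr = kernel_buffer + 0x100;
    uintptr_t user_addr   = kernel_addr + user_buffer_offset;

    printf("user addr = 0x%lx\n", (unsigned long)user_addr);
    return 0;
}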

到这里,ServiceManager的binder_open()里面的细节已经搞清楚了:为当前的进程构造了一个binder_proc的结构体,并且做了一个内存的映射。


3.2.1.2 binder_become_context_manager()
binder_become_context_manager(struct binder_state *bs)这个函数只是调用了ioctl,主要的动作还是要看binder driver。

int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
ioctl在binder driver部分对应的函数是static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) ,cmd是BINDER_SET_CONTEXT_MGR,arg为0,看看这个cmd相关的代码:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
     ...... 
    //检测user error的,对于正常流程无影响。 
    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        goto err_unlocked;
    
    binder_lock(__func__);
     //获得binder_thread对象,如果本来没有,会创建binder_thread。 
    thread = binder_get_thread(proc);
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
     ...... 
    
    case BINDER_SET_CONTEXT_MGR:
        if (binder_context_mgr_node != NULL) {
            pr_err("BINDER_SET_CONTEXT_MGR already set\n");
            ret = -EBUSY;
            goto err;
        }
         //security_binder_set_context_mgr是做权限检查的,检查当前进程是否有权限。 
        ret = security_binder_set_context_mgr(proc->tsk);
        if (ret < 0)
            goto err;

         //检测比较binder_context_mgr_uid,或者把当前的uid赋值给binder_context_mgr_uid。 
        if (uid_valid(binder_context_mgr_uid)) {
            if (!uid_eq(binder_context_mgr_uid, current->cred->euid)) {
                pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
                       from_kuid(&init_user_ns, current->cred->euid),
                       from_kuid(&init_user_ns, binder_context_mgr_uid));
                ret = -EPERM;
                goto err;
            }
        } else
            binder_context_mgr_uid = current->cred->euid;
         //生成binder_context_mgr_node,注意下binder_node是有binder_proc.nodes来管理的。 
        binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
        if (binder_context_mgr_node == NULL) {
            ret = -ENOMEM;
            goto err;
        }
        binder_context_mgr_node->local_weak_refs++;
        binder_context_mgr_node->local_strong_refs++;
        binder_context_mgr_node->has_strong_ref = 1;
        binder_context_mgr_node->has_weak_ref = 1;
        break;
        
    ......
    }
    ret = 0;
err:
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    binder_unlock(__func__);
    wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret && ret != -ERESTARTSYS)
        pr_info("%d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
    trace_binder_ioctl_done(ret);
    return ret;
}
在这边对于kernel中的进程概念,我们要理解:kernel中只有进程的概念,thread、process是user层面的概念,记得常说的一句话:thread是一个轻量级的process。因此在kernel中,同一个进程里面的不同thread调用下来,我们获取到的current是不一样的,pid也是不一样的。
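这一点可以用下面这个独立的小程序直观地感受一下:同一个进程里的两个thread,getpid()相同,而gettid()(也就是kernel中看到的current->pid)不同(示意代码,和binder本身无关):

#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *worker(void *arg)
{
    (void)arg;
    //子线程:pid和主线程相同,tid不同
    printf("worker: pid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t t;
    //主线程:tid等于pid
    printf("main:   pid=%d tid=%ld\n", getpid(), (long)syscall(SYS_gettid));
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    return 0;
}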
a) binder_thread

我们在binder_ioctl中看到的binder_thread,就是和user层的thread呼应,user的每个thread在kernel会有一个对应的binder_thread结构,而binder_proc是和user层的process对应,binder_proc在binder_open()中建立,每个process也只会open一次。
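binder_thread的主要字段节选如下(只是示意,完整定义以kernel源码为准):

struct binder_thread {
    struct binder_proc *proc;  //所属的binder_proc
    struct rb_node rb_node;    //挂在binder_proc.threads红黑树上
    int pid;                   //对应user层thread的tid,也就是kernel中的current->pid
    int looper;                //BINDER_LOOPER_STATE_*状态位
    struct binder_transaction *transaction_stack; //正在进行的transaction栈
    struct list_head todo;     //thread级待处理的binder_work
    wait_queue_head_t wait;    //thread级等待队列
};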

b)binder_context_mgr_uid & binder_context_mgr_node
这两个变量都是全局的变量,结合servicemanager的调用,不难理解,这两个变量是记录了servicemanager的节点信息。
binder_context_mgr_uid在binder中只在BINDER_SET_CONTEXT_MGR这个case中使用,主要还是为了保证安全性,并没有其他特定的用途。
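binder_context_mgr_node本身就是一个binder_node,这里把binder_node的主要字段也节选出来(只是示意,完整定义以kernel源码为准):

struct binder_node {
    struct binder_work work;     //可以挂到todo list上的work
    struct binder_proc *proc;    //service所在进程的binder_proc
    struct hlist_head refs;      //所有指向这个node的binder_ref
    int internal_strong_refs;
    int local_weak_refs;
    int local_strong_refs;
    void __user *ptr;            //user层BBinder的弱引用指针
    void __user *cookie;         //user层BBinder的指针
    unsigned has_strong_ref:1;
    unsigned has_weak_ref:1;
    unsigned accept_fds:1;
    struct list_head async_todo; //异步transaction的排队list
};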

到这里,binder_become_context_manager()的功能也基本清楚了:在binder device中记录了ServiceManager的一些信息,建立了ServiceManager对应的binder_node。


3.2.1.3 binder_loop()
binder_loop()里面是一个循环,分成两部分看,先看进循环之前做了什么,再看循环中做了什么。

binder_loop()循环前

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;
    
    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

     ...... 
}
这边主要是赋值了readbuf,然后去调用了binder_write()函数。readbuf[0]的赋值是BC_ENTER_LOOPER,BC_ENTER_LOOPER是在kernel的binder.h中定义的:
enum binder_driver_command_protocol {
    BC_TRANSACTION = _IOW('c', 0, struct binder_transaction_data),
    BC_REPLY = _IOW('c', 1, struct binder_transaction_data),
    /*
     * binder_transaction_data: the sent command.
     */

    BC_ACQUIRE_RESULT = _IOW('c', 2, int),
    /*
     * not currently supported
     * int:  0 if the last BR_ATTEMPT_ACQUIRE was not successful.
     * Else you have acquired a primary reference on the object.
     */

    BC_FREE_BUFFER = _IOW('c', 3, int),
    /*
     * void *: ptr to transaction data received on a read
     */

    BC_INCREFS = _IOW('c', 4, int),
    BC_ACQUIRE = _IOW('c', 5, int),
    BC_RELEASE = _IOW('c', 6, int),
    BC_DECREFS = _IOW('c', 7, int),
    /*
     * int:    descriptor
     */

    BC_INCREFS_DONE = _IOW('c', 8, struct binder_ptr_cookie),
    BC_ACQUIRE_DONE = _IOW('c', 9, struct binder_ptr_cookie),
    /*
     * void *: ptr to binder
     * void *: cookie for binder
     */

    BC_ATTEMPT_ACQUIRE = _IOW('c', 10, struct binder_pri_desc),
    /*
     * not currently supported
     * int: priority
     * int: descriptor
     */

    BC_REGISTER_LOOPER = _IO('c', 11),
    /*
     * No parameters.
     * Register a spawned looper thread with the device.
     */

    BC_ENTER_LOOPER = _IO('c', 12),
    BC_EXIT_LOOPER = _IO('c', 13),
    /*
     * No parameters.
     * These two commands are sent as an application-level thread
     * enters and exits the binder loop, respectively.  They are
     * used so the binder can have an accurate count of the number
     * of looping threads it has available.
     */

    BC_REQUEST_DEATH_NOTIFICATION = _IOW('c', 14, struct binder_ptr_cookie),
    /*
     * void *: ptr to binder
     * void *: cookie
     */

    BC_CLEAR_DEATH_NOTIFICATION = _IOW('c', 15, struct binder_ptr_cookie),
    /*
     * void *: ptr to binder
     * void *: cookie
     */

    BC_DEAD_BINDER_DONE = _IOW('c', 16, void *),
    /*
     * void *: cookie
     */
};
这里我们只需要知道binder_driver_command_protocol中定义的是一系列的command,这些command会对应一个unsigned int,其中具体赋值用到的_IO、_IOW这些宏,我们不需要去了解。
再看下binder_write函数
int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    struct binder_write_read bwr;
    int res;
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
这边出现了binder_write_read结构体,这个结构体也是在kernel的binder.h头文件中定义的:
struct binder_write_read {
    signed long    write_size;    /* bytes to write */
    signed long    write_consumed;    /* bytes consumed by driver */
    unsigned long    write_buffer;
    signed long    read_size;    /* bytes to read */
    signed long    read_consumed;    /* bytes consumed by driver */
    unsigned long    read_buffer;
};
binder_write_read结构体里面有read和write两部分信息,而binder_write()函数中只填了write这部分。
这边要注意write_size表明write_buffer数据的长度,write_consumed表明write数据已经被driver处理了多少。类似的,read_size是read_buffer的长度,read_consumed是表明driver已经写了多少长度的数据进来。

这里我们看到最终是调用了ioctl,cmd是BINDER_WRITE_READ,参数是包含了BC_ENTER_LOOPER的binder_write_read结构。
看看driver这边是怎么处理的:

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
     //从cmd中读取arg的数据的长度。 
    unsigned int size = _IOC_SIZE(cmd);
     //将long参数,转换为指针。 
    void __user *ubuf = (void __user *)arg;
    ......
     //获得binder_thread对象,如果本来没有,会创建binder_thread。 
    thread = binder_get_thread(proc);
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
         //校验参数是否为binder_write_read参数。 
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        //把用户空间的数据copy到内核空间来。 
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        binder_debug(BINDER_DEBUG_READ_WRITE,
                 "%d:%d write %ld at %08lx, read %ld at %08lx\n",
                 proc->pid, thread->pid, bwr.write_size,
                 bwr.write_buffer, bwr.read_size, bwr.read_buffer);

         //write_size>0,表明数据要传给binder driver。 
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
            trace_binder_write_done(ret);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
         //read_size>0,表明要从driver中读数据。 
        if (bwr.read_size > 0) {
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
            trace_binder_read_done(ret);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        binder_debug(BINDER_DEBUG_READ_WRITE,
                 "%d:%d wrote %ld of %ld, read return %ld of %ld\n",
                 proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
                 bwr.read_consumed, bwr.read_size);
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    
        ...... 
    }
     ...... 
}
对于binder_write,write_size>0,会执行binder_thread_write()函数:
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
            void __user *buffer, int size, signed long *consumed)
{
    uint32_t cmd;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
         //读取用户层的command
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        trace_binder_command(cmd);
        //command 记录,for debug。
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            binder_stats.bc[_IOC_NR(cmd)]++;
            proc->stats.bc[_IOC_NR(cmd)]++;
            thread->stats.bc[_IOC_NR(cmd)]++;
        }
        switch (cmd) {
        
        ......
        case BC_ENTER_LOOPER:
            binder_debug(BINDER_DEBUG_THREADS,
                     "%d:%d BC_ENTER_LOOPER\n",
                     proc->pid, thread->pid);
            if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
                binder_user_error("%d:%d ERROR: BC_ENTER_LOOPER called after BC_REGISTER_LOOPER\n",
                    proc->pid, thread->pid);
            }
            //将binder_thread的looper,设上了BINDER_LOOPER_STATE_ENTERED的bit。
            thread->looper |= BINDER_LOOPER_STATE_ENTERED;
            break;
        case BC_EXIT_LOOPER:
            binder_debug(BINDER_DEBUG_THREADS,
                     "%d:%d BC_EXIT_LOOPER\n",
                     proc->pid, thread->pid);
            //设上BINDER_LOOPER_STATE_EXITED的bit位。
            thread->looper |= BINDER_LOOPER_STATE_EXITED;
            break;
            ......
        }
        //更新consumed值。
        *consumed = ptr - buffer;
    }
    return 0;
}
BC_ENTER_LOOPER这个command的处理只是更新当前thread对应的binder_thread的looper的对应bit,表明已经进入了循环处理。在这里,我们也看到了和BC_ENTER_LOOPER相对应的BC_EXIT_LOOPER,这个command的处理也只是把looper上对应的BINDER_LOOPER_STATE_EXITED bit置上。
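looper上的这些状态位在binder.c中定义,大致如下(以实际kernel源码为准):

enum {
    BINDER_LOOPER_STATE_REGISTERED  = 0x01, //binder要求spawn的looper thread,对应BC_REGISTER_LOOPER
    BINDER_LOOPER_STATE_ENTERED     = 0x02, //应用自己进入loop,对应BC_ENTER_LOOPER
    BINDER_LOOPER_STATE_EXITED      = 0x04, //对应BC_EXIT_LOOPER
    BINDER_LOOPER_STATE_INVALID     = 0x08, //状态出错,比如重复注册
    BINDER_LOOPER_STATE_WAITING     = 0x10, //正在等待work
    BINDER_LOOPER_STATE_NEED_RETURN = 0x20  //需要先返回到user层
};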

至此,进入循环前的这段代码基本清楚了:ServiceManager通过BC_ENTER_LOOPER命令,在binder中把binder_thread.looper设置上了BINDER_LOOPER_STATE_ENTERED标志,这个flag告诉binder,user层已经进入了交互的循环。

binder_loop()循环

for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
循环中前半段是从binder中去read数据,而后半段是解析read出来的数据。
read数据的细节暂时不分析,因为read数据不同于write,是需要有client发起操作,才会有数据传递过来,所以我们要结合addService/getService的调用来分析这段代码。
而binder_parse()函数的逻辑很清晰:从read buffer中读取cmd,针对cmd做不同的处理。因为现在还没有具体的command,我们这边也不去看这个函数的细节了,在后续的场景中再去分析。

3.2.1.4 ServiceManager main函数小结
ServiceManager的main函数看完后,我们会发现这个和一般的service很类似,binder_open()中的动作和ProcessState的构造函数中的动作一致,而binder_loop()和IPCThreadState::joinThreadPool()的逻辑也是类似的。
和一般service的主要差别是binder_become_context_manager()的调用,在binder中建立servicemanager的binder_node。


3.3 addService()分析

addService这个场景是指SampleService启动时候注册service的整个流程,这里面涉及到SampleService、binder、ServiceManager这三者的交互。

3.3.1 SampleService端分析

SampleService addService的操作分两步,先是获取ServiceManager的操作接口,然后调用addService接口。

// publish SampleService
    sp<IServiceManager> sm(defaultServiceManager());
    sm->addService(String16("SampleService"), samplesrv, false);

3.3.1.1 ServiceManager接口的获取
ServiceManager接口获取,是调用的defaultServiceManager()函数,函数实现在frameworks/native/libs/binder/IServiceManager.cpp:

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }
    
    return gDefaultServiceManager;
}
ProcessState::self()->getContextObject(NULL)调用了getStrongProxyForHandle(0),我们来仔细看下这个函数:
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    //以handle作为index,在Vector<handle_entry>mHandleToObject中查找handle_entry,如果没有,会建立一个空的handle_entry并返回。
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        //初始化新建的entry
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
        //handle为0是ServiceManager,这边是对于ServiceManager的特殊处理。
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                //检测ServiceManager是否已经启动。
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            //构造BpBinder对象,这个对象会被返回。注意BpBinder是继承IBinder的。
            b = new BpBinder(handle); 
            e->binder = b;
            //记录weakRefs
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            //调用force_set,从而强制调用BpBinder::onFirstRef()
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
这个函数中有几处关键的代码,我们逐个分析:

3.3.1.1.1  IPCThreadState::self()->transact(0, IBinder::PING_TRANSACTION, data, NULL, 0);

SampleService端Transact()分析
transact函数如下:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    ......
    
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    
    if ((flags & TF_ONE_WAY) == 0) {
       ......
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
           ......
    } else {
        err = waitForResponse(NULL, NULL);
    }
    
    return err;
}
transact()中的动作就只有writeTransactionData()和waitForResponse()。

先看下writeTransactionData()

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    //pid和uid会在binder中设置。
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }
    
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    
    return NO_ERROR;
}
IPCThreadState::writeTransactionData()中是用IPCThreadState::transact()的参数构造了一个binder_transaction_data对象,并把这个对象写入Parcel mOut中去。在编写SampleService的时候,我们已经知道mOut保存的是要写入binder的数据,也就是说writeTransactionData()这边是为写入binder准备了一组数据。
注意:我们传入的IBinder::PING_TRANSACTION是赋值给了binder_transaction_data.code,不是作为command,真正的command是BC_TRANSACTION了。
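这里构造的binder_transaction_data同样是在kernel的binder.h中定义的,节选如下(字段以实际版本的源码为准):

struct binder_transaction_data {
    union {
        size_t  handle;          //BC_TRANSACTION时的目标handle,比如这里的0
        void    *ptr;            //BR_TRANSACTION时的目标BBinder弱引用指针
    } target;
    void            *cookie;     //目标BBinder指针
    unsigned int    code;        //transact的code,比如这里的PING_TRANSACTION

    unsigned int    flags;       //TF_ONE_WAY等flag
    pid_t           sender_pid;  //由binder填入
    uid_t           sender_euid; //由binder填入
    size_t          data_size;   //data的字节数
    size_t          offsets_size; //objects offset数组的字节数

    union {
        struct {
            const void  *buffer;  //指向Parcel的data
            const void  *offsets; //指向Parcel的objects offset数组
        } ptr;
        uint8_t buf[8];
    } data;
};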


我们再看看IPCThreadState::waitForResponse(),这个函数中的重点是调用了IPCThreadState::talkWithDriver()来和binder交互、读取返回的数据,然后进行处理。IPCThreadState::talkWithDriver()我们在前面分析IPCThreadState::joinThreadPool()的时候大概看过,不过这里我们还是要再仔细分析下,看看它和binder的交互细节:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    
    binder_write_read bwr;
    
    // Is the read buffer empty?
    //dataPosition() >= dataSize()表明mIn中的数据已经消耗完毕,或者没有数据(两者都为0)
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    //判断是否需要写数据。
    //判断条件一是doReceive,表明client是不是想要read数据,为false的时候,表明他不会去读数据,但是隐含表明他可能是想要写数据的,否则就不会调用talkWithDriver的函数了,为true的时候,表明希望去read数据,但是不表明不想写数据。
    //判断条件二是needRead,needRead为false,表明buffer中还有数据,这个时候不能去写数据,因为写数据可能要回写数据,这样会导致之前的数据被冲掉,而为true的时候,表明read已经完成,可以去write了。
    //结合起来就是doReceive为false的时候,或者needRead为true的时候,可以去write。 
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    
    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();

    // This is what we'll read.
     //填入read信息。 
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }
    
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        //和binder交互。 
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
                        << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
             //移除binder读取过的数据。 
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else //数据都已经被binder读取。 
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            //binder有返回数据。
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }
    
    return err;
}
talkWithDriver()的核心动作就是ioctl。对于IPCThreadState::self()->transact(0, IBinder::PING_TRANSACTION, data, NULL, 0)这行代码来说,最终就是以BINDER_WRITE_READ为command调用了ioctl,参数bwr的write_buffer指向前面writeTransactionData()中构造的binder_transaction_data,这份数据的command是BC_TRANSACTION。

binder对于BINDER_WRITE_READ的处理,前面已经分析过,我们直接进入binder的binder_thread_write()中处理BC_TRANSACTION的细节:
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;
             //copy上层传入的binder_transaction_data数据。 
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
获取数据之后,调用了binder_transaction(),这个函数比较庞大:
static void binder_transaction(struct binder_proc *proc,
			       struct binder_thread *thread,
			       struct binder_transaction_data *tr, int reply)
{
	struct binder_transaction *t;
	struct binder_work *tcomplete;
	size_t *offp, *off_end;
	struct binder_proc *target_proc;
	struct binder_thread *target_thread = NULL;
	struct binder_node *target_node = NULL;
	struct list_head *target_list;
	wait_queue_head_t *target_wait;
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;
	uint32_t return_error;

        //binder transaction的log构造,为了debug需要。 
	e = binder_transaction_log_add(&binder_transaction_log);
	e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
	e->from_proc = proc->pid;
	e->from_thread = thread->pid;
	e->target_handle = tr->target.handle;
	e->data_size = tr->data_size;
	e->offsets_size = tr->offsets_size;

	if (reply) { //command是BC_REPLY 
		in_reply_to = thread->transaction_stack;
		if (in_reply_to == NULL) {
			binder_user_error("%d:%d got reply transaction with no transaction stack\n",
					  proc->pid, thread->pid);
			return_error = BR_FAILED_REPLY;
			goto err_empty_call_stack;
		}
		binder_set_nice(in_reply_to->saved_priority);
		if (in_reply_to->to_thread != thread) {
			binder_user_error("%d:%d got reply transaction with bad transaction stack, transaction %d has target %d:%d\n",
				proc->pid, thread->pid, in_reply_to->debug_id,
				in_reply_to->to_proc ?
				in_reply_to->to_proc->pid : 0,
				in_reply_to->to_thread ?
				in_reply_to->to_thread->pid : 0);
			return_error = BR_FAILED_REPLY;
			in_reply_to = NULL;
			goto err_bad_call_stack;
		}
		//reset transaaction_stack
		thread->transaction_stack = in_reply_to->to_parent;
		target_thread = in_reply_to->from;
		if (target_thread == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}
		if (target_thread->transaction_stack != in_reply_to) {
			binder_user_error("%d:%d got reply transaction with bad target transaction stack %d, expected %d\n",
				proc->pid, thread->pid,
				target_thread->transaction_stack ?
				target_thread->transaction_stack->debug_id : 0,
				in_reply_to->debug_id);
			return_error = BR_FAILED_REPLY;
			in_reply_to = NULL;
			target_thread = NULL;
			goto err_dead_binder;
		}
		target_proc = target_thread->proc;
	} else { //command为BC_TRANSACTION. 
		if (tr->target.handle) { //handle不为0的情况,这个是一般service的处理。 
			struct binder_ref *ref;
			ref = binder_get_ref(proc, tr->target.handle);
			if (ref == NULL) {
				binder_user_error("%d:%d got transaction to invalid handle\n",
					proc->pid, thread->pid);
				return_error = BR_FAILED_REPLY;
				goto err_invalid_target_handle;
			}
			target_node = ref->node;
		} else { //handle为0,即ServiceManager的case,直接获取binder_context_mgr_node。 
			target_node = binder_context_mgr_node;
			if (target_node == NULL) {
				return_error = BR_DEAD_REPLY;
				goto err_no_context_mgr_node;
			}
		}
		e->to_node = target_node->debug_id;
               //target_proc是数据要transact的目标进程 
		target_proc = target_node->proc;
		if (target_proc == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}
                //security检测。 
		if (security_binder_transaction(proc->tsk, target_proc->tsk) < 0) {
			return_error = BR_FAILED_REPLY;
			goto err_invalid_target_handle;
		}
                //在同步模式下的时候,from才会有设置。
		//因为同步模式下,stack中之前的thread会等待reply,如果此时调用对应的process中的service的时候使用了其他的thread,会导致多个thread在一次调用中被阻塞,
		//这样会导致其他client调用service时间消耗变多。
		//异步模式下不需要这个考量,发送完command之后,就不再等待reply了。 
		if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
			struct binder_transaction *tmp;
			tmp = thread->transaction_stack;
			if (tmp->to_thread != thread) {
				binder_user_error("%d:%d got new transaction with bad transaction stack, transaction %d has target %d:%d\n",
					proc->pid, thread->pid, tmp->debug_id,
					tmp->to_proc ? tmp->to_proc->pid : 0,
					tmp->to_thread ?
					tmp->to_thread->pid : 0);
				return_error = BR_FAILED_REPLY;
				goto err_bad_call_stack;
			}
			while (tmp) {
				if (tmp->from && tmp->from->proc == target_proc)
					target_thread = tmp->from;
				tmp = tmp->from_parent;
			}
		}
	}
        //target_thread在transaction的时候,可能不存在,reply时候一定存在。 
	if (target_thread) {
		e->to_thread = target_thread->pid;
		target_list = &target_thread->todo;
		target_wait = &target_thread->wait;
	} else {
		target_list = &target_proc->todo;
		target_wait = &target_proc->wait;
	}
	e->to_proc = target_proc->pid;

	/* TODO: reuse incoming transaction for reply */
        //申请binder_transaction对象。 
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
        //for debug。 
	binder_stats_created(BINDER_STAT_TRANSACTION);

        //申请binder_work对象。 
	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}
        //for debug。 
	binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

	t->debug_id = ++binder_last_id;
	e->debug_id = t->debug_id;

	if (reply)
		binder_debug(BINDER_DEBUG_TRANSACTION,
			     "%d:%d BC_REPLY %d -> %d:%d, data %p-%p size %zd-%zd\n",
			     proc->pid, thread->pid, t->debug_id,
			     target_proc->pid, target_thread->pid,
			     tr->data.ptr.buffer, tr->data.ptr.offsets,
			     tr->data_size, tr->offsets_size);
	else
		binder_debug(BINDER_DEBUG_TRANSACTION,
			     "%d:%d BC_TRANSACTION %d -> %d - node %d, data %p-%p size %zd-%zd\n",
			     proc->pid, thread->pid, t->debug_id,
			     target_proc->pid, target_node->debug_id,
			     tr->data.ptr.buffer, tr->data.ptr.offsets,
			     tr->data_size, tr->offsets_size);

	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread; //同步transaction的时候,记录from。 
	else
		t->from = NULL; //异步transaction或者reply时候,不需要记录from了。 
        //把上层传下来的binder_transaction_data中保存信息到binder_transaction中。
	//从当前的binder_proc获取uid。 
	t->sender_euid = proc->tsk->cred->euid;
	t->to_proc = target_proc;
        //transaction的时候target_thread可能为空,reply时一定不为空。 
	t->to_thread = target_thread;
        //上层transact的code,如IBinder::PING_TRANSACTION,reply的时候可能是0。
	t->code = tr->code;
	t->flags = tr->flags;
	t->priority = task_nice(current);
        //for debug. 
	trace_binder_transaction(reply, t, target_node);
        //申请binder_buffer,注意是在目标进程中申请的,这样目标进程才能直接访问到。 
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
       //初始化binder_buffer。 
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
        //transaction的时候target_node存在,reply时候没有对它赋值,所以是NULL。 
        //在binder_thread_read()中根据target_node为null,来判断是否为reply。 
	t->buffer->target_node = target_node;
	trace_binder_transaction_alloc_buf(t->buffer);
        //increase target_node的strong ref。在BC_FREE_BUFFER的处理中,会调用binder_transaction_buffer_release()去decrease node的ref。
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);

        //计算保存objects offset数组的起始地址。objects offset放在data后面。 
	offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

        //copy parcel中的data数组。 
	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
		binder_user_error("%d:%d got transaction with invalid data ptr\n",
				proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
       //copy parcel中的objects offset数组。 
	if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
		binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
				proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (!IS_ALIGNED(tr->offsets_size, sizeof(size_t))) {
		binder_user_error("%d:%d got transaction with invalid offsets size, %zd\n",
				proc->pid, thread->pid, tr->offsets_size);
		return_error = BR_FAILED_REPLY;
		goto err_bad_offset;
	}
	off_end = (void *)offp + tr->offsets_size;
       //遍历objects offset数组。 
	for (; offp < off_end; offp++) {
		struct flat_binder_object *fp;
                //*offp是一个object的在data中的偏移。
		//检查数据是不是合法。 
		if (*offp > t->buffer->data_size - sizeof(*fp) ||//offset是否超过了合法范围,offset是指向一个flat_binder_object,所以是比较data_size - sizeof(*fp)
		    t->buffer->data_size < sizeof(*fp) ||  //data数据比flat_binder_object size还小,有错误
		    !IS_ALIGNED(*offp, sizeof(void *))) {
			binder_user_error("%d:%d got transaction with invalid offset, %zd\n",
					proc->pid, thread->pid, *offp);
			return_error = BR_FAILED_REPLY;
			goto err_bad_offset;
		}
		 //object只有flat_binder_object。 
		fp = (struct flat_binder_object *)(t->buffer->data + *offp);
               //对不同类型的object进行处理。各种类型,结合Parcel.cpp中flatten_binder()函数去理解。 
		switch (fp->type) {
		//local binder object的处理,means BBinder。
		case BINDER_TYPE_BINDER:
		case BINDER_TYPE_WEAK_BINDER: {
			struct binder_ref *ref;
			//查找/建立对应的binder_node。
			//除去ServiceManager的binder_node,其他所有的binder_node都在这边建立,在调用ServiceManager::addService的时候调用到。
			struct binder_node *node = binder_get_node(proc, fp->binder);
			if (node == NULL) {
				node = binder_new_node(proc, fp->binder, fp->cookie);
				if (node == NULL) {
					return_error = BR_FAILED_REPLY;
					goto err_binder_new_node_failed;
				}
				node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
				node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
			}
			 //检查cookie也就是service的指针是否一致。 
			if (fp->cookie != node->cookie) {
				binder_user_error("%d:%d sending u%p node %d, cookie mismatch %p != %p\n",
					proc->pid, thread->pid,
					fp->binder, node->debug_id,
					fp->cookie, node->cookie);
				goto err_binder_get_ref_for_node_failed;
			}
			 //security权限检测。 
			if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_for_node_failed;
			}
			 //在target_proc中查找/创建 binder_ref,其中binder_ref.desc在这边被确定。 
			ref = binder_get_ref_for_node(target_proc, node);
			if (ref == NULL) {
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_for_node_failed;
			}
			 //将local binder转换成remote binder信息,之后会被target proc读取使用。 
			if (fp->type == BINDER_TYPE_BINDER)
				fp->type = BINDER_TYPE_HANDLE;
			else
				fp->type = BINDER_TYPE_WEAK_HANDLE;
			 //desc在binder_get_ref_for_node()中被确定,会被作为handle。 
			fp->handle = ref->desc;
		         //添加BINDER_WORK_NODE work到thread的todo list,increase strong binder。 
			binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
				       &thread->todo);

			trace_binder_transaction_node_to_ref(t, node, ref);
			binder_debug(BINDER_DEBUG_TRANSACTION,
				     "        node %d u%p -> ref %d desc %d\n",
				     node->debug_id, node->ptr, ref->debug_id,
				     ref->desc);
		} break;
		 //remote binder object, means BpBinder。 
		case BINDER_TYPE_HANDLE:
		case BINDER_TYPE_WEAK_HANDLE: {
			struct binder_ref *ref = binder_get_ref(proc, fp->handle);
			if (ref == NULL) {
				binder_user_error("%d:%d got transaction with invalid handle, %ld\n",
						proc->pid,
						thread->pid, fp->handle);
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_failed;
			}
			if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_failed;
			}
			 //如果传给service所在的进程,转换为BINDER_TYPE_BINDER类型的object。 
			if (ref->node->proc == target_proc) {
				if (fp->type == BINDER_TYPE_HANDLE)
					fp->type = BINDER_TYPE_BINDER;
				else
					fp->type = BINDER_TYPE_WEAK_BINDER;
				fp->binder = ref->node->ptr;
				fp->cookie = ref->node->cookie;
				binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
				trace_binder_transaction_ref_to_node(t, ref);
				binder_debug(BINDER_DEBUG_TRANSACTION,
					     "        ref %d desc %d -> node %d u%p\n",
					     ref->debug_id, ref->desc, ref->node->debug_id,
					     ref->node->ptr);
			} else { //传入到其他client进程,为目标进程建立新的binder_ref,并传回这个新的binder_ref的信息。
				struct binder_ref *new_ref;
				new_ref = binder_get_ref_for_node(target_proc, ref->node); //第一次会建立一个新的binder_ref。 
				if (new_ref == NULL) {
					return_error = BR_FAILED_REPLY;
					goto err_binder_get_ref_for_node_failed;
				}
				fp->handle = new_ref->desc; //用目标进程desc替换。 
				binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
				trace_binder_transaction_ref_to_ref(t, ref,
								    new_ref);
				binder_debug(BINDER_DEBUG_TRANSACTION,
					     "        ref %d desc %d -> ref %d desc %d (node %d)\n",
					     ref->debug_id, ref->desc, new_ref->debug_id,
					     new_ref->desc, ref->node->debug_id);
			}
		} break;

		 //文件指针fd。 
		case BINDER_TYPE_FD: {
			int target_fd;
			struct file *file;

			if (reply) {
				if (!(in_reply_to->flags & TF_ACCEPT_FDS)) {
					binder_user_error("%d:%d got reply with fd, %ld, but target does not allow fds\n",
						proc->pid, thread->pid, fp->handle);
					return_error = BR_FAILED_REPLY;
					goto err_fd_not_allowed;
				}
			} else if (!target_node->accept_fds) {
				binder_user_error("%d:%d got transaction with fd, %ld, but target does not allow fds\n",
					proc->pid, thread->pid, fp->handle);
				return_error = BR_FAILED_REPLY;
				goto err_fd_not_allowed;
			}

			file = fget(fp->handle);
			if (file == NULL) {
				binder_user_error("%d:%d got transaction with invalid fd, %ld\n",
					proc->pid, thread->pid, fp->handle);
				return_error = BR_FAILED_REPLY;
				goto err_fget_failed;
			}
			if (security_binder_transfer_file(proc->tsk, target_proc->tsk, file) < 0) {
				fput(file);
				return_error = BR_FAILED_REPLY;
				goto err_get_unused_fd_failed;
			}
			target_fd = task_get_unused_fd_flags(target_proc, O_CLOEXEC);
			if (target_fd < 0) {
				fput(file);
				return_error = BR_FAILED_REPLY;
				goto err_get_unused_fd_failed;
			}
			task_fd_install(target_proc, target_fd, file);
			trace_binder_transaction_fd(t, fp->handle, target_fd);
			binder_debug(BINDER_DEBUG_TRANSACTION,
				     "        fd %ld -> %d\n", fp->handle, target_fd);
			/* TODO: fput? */
			fp->handle = target_fd;
		} break;

		default:
			binder_user_error("%d:%d got transaction with invalid object type, %lx\n",
				proc->pid, thread->pid, fp->type);
			return_error = BR_FAILED_REPLY;
			goto err_bad_object_type;
		}
	}
	if (reply) {
		BUG_ON(t->buffer->async_transaction != 0);
		 //清除掉binder_tranasaction发起thread的transaction_stack,这里会释放掉in_reply_to这个binder_transaction 
		binder_pop_transaction(target_thread, in_reply_to);
	} else if (!(t->flags & TF_ONE_WAY)) {//同步transaction
		BUG_ON(t->buffer->async_transaction != 0);
		t->need_reply = 1;
		 //transaction_stack指向当前thread的最后一个binder_transaction,通过from_parent进行链接。 
		t->from_parent = thread->transaction_stack;
		thread->transaction_stack = t;
	} else { //异步transaction 
		BUG_ON(target_node == NULL);
		BUG_ON(t->buffer->async_transaction != 1);
		if (target_node->has_async_transaction) {
			target_list = &target_node->async_todo;
			target_wait = NULL;
		} else
			target_node->has_async_transaction = 1;
	}
	 //把新的binder_transaction加入到target的todo list中。 
	t->work.type = BINDER_WORK_TRANSACTION;
	list_add_tail(&t->work.entry, target_list);
	 //在发出transaction的thread的todo list中加入complete的work。 
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	list_add_tail(&tcomplete->entry, &thread->todo);
	if (target_wait)
		wake_up_interruptible(target_wait);
	return;

        ...... 
}
binder_transaction()函数对于BC_TRANSACTION来说,做了下面的动作:

  1. 查找target node,并为同步模式下寻找target thread。
  2. 建立一个新的binder_transaction对象。
  3. 增加target binder_node的ref。
  4. 为新的binder_transaction申请binder_buffer,并且从Parcel中copy数据。
  5. 处理Parcel中的object数据(即flat_binder_object,具体处理在后面遇到时再分析)。
  6. 在当前thread的todo中加入一个BINDER_WORK_TRANSACTION_COMPLETE的work,在target thread/proc的todo中加入BINDER_WORK_TRANSACTION的work。

step 1中,寻找target thread的操作只有在同步操作并且涉及交叉调用的情况下才会执行,从逻辑上看,这样的做法是为了减少交叉的同步调用时thread的消耗。
step 5,因为本次transact中Parcel中没有object数据,所以对于这个step的细节暂且跳过。
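不过可以先把object对应的flat_binder_object的定义贴出来(节选自kernel的binder.h,以实际版本的源码为准),后面分析addService传递IBinder对象的时候会用到:

struct flat_binder_object {
    unsigned long   type;    //BINDER_TYPE_BINDER/BINDER_TYPE_HANDLE/BINDER_TYPE_FD等
    unsigned long   flags;   //FLAT_BINDER_FLAG_*

    union {
        void __user *binder; //local binder(BBinder)时使用
        signed long handle;  //remote binder(BpBinder)时使用
    };

    void __user     *cookie; //local binder时记录BBinder指针
};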
对于我们当前的场景,binder_transaction()中主要是:找到了target node,copy了Parcel中的data,并在target proc(即ServiceManager的binder_proc)的todo中加入了type为BINDER_WORK_TRANSACTION的work。
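顺便把binder_transaction在kernel中的定义也节选一下,方便对照上面的流程(以实际版本的源码为准):

struct binder_transaction {
    struct binder_work work;            //type为BINDER_WORK_TRANSACTION,挂到target的todo上
    struct binder_thread *from;         //同步transaction时记录发起的thread
    struct binder_transaction *from_parent; //发起方thread的transaction栈链接
    struct binder_proc *to_proc;        //目标进程
    struct binder_thread *to_thread;    //目标thread,可能为空
    struct binder_transaction *to_parent;   //目标thread的transaction栈链接
    unsigned need_reply:1;              //同步transaction时置1
    struct binder_buffer *buffer;       //在目标进程中申请的binder_buffer
    unsigned int    code;               //上层transact的code
    unsigned int    flags;
    long    priority;
    long    saved_priority;
    kuid_t  sender_euid;
};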
在binder_thread_write()完成之后,进入binder_thread_read()函数中:

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      void  __user *buffer, int size,
			      signed long *consumed, int non_block)
{
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
                //consumed ==0,表明driver还没有填充过数据,先填充一个BR_NOOP进去,作为开始的标示。 
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
         //检查当前thread没有需要处理的内容,是否要去检查proc中的todo work。
	//transaction_stack表明当前的thread处于一个同步操作的过程中,不能跳出去执行proc的work。
	wait_for_proc_work = thread->transaction_stack == NULL &&
				list_empty(&thread->todo);

	if (thread->return_error != BR_OK && ptr < end) {
		if (thread->return_error2 != BR_OK) {
			if (put_user(thread->return_error2, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);
			binder_stat_br(proc, thread, thread->return_error2);
			if (ptr == end)
				goto done;
			thread->return_error2 = BR_OK;
		}
		if (put_user(thread->return_error, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		binder_stat_br(proc, thread, thread->return_error);
		thread->return_error = BR_OK;
		goto done;
	}


	thread->looper |= BINDER_LOOPER_STATE_WAITING; //set waiting 状态位
	if (wait_for_proc_work)
		proc->ready_threads++; //nothing todo, it means idle

	binder_unlock(__func__);

	trace_binder_wait_for_work(wait_for_proc_work,
				   !!thread->transaction_stack,
				   !list_empty(&thread->todo));
	if (wait_for_proc_work) {
		if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
					BINDER_LOOPER_STATE_ENTERED))) {
			binder_user_error("%d:%d ERROR: Thread waiting for process work before calling BC_REGISTER_LOOPER or BC_ENTER_LOOPER (state %x)\n",
				proc->pid, thread->pid, thread->looper);
			wait_event_interruptible(binder_user_error_wait,
						 binder_stop_on_user_error < 2);
		}
		binder_set_nice(proc->default_priority);
		if (non_block) { //non block模式下,只检测有无proc work。 
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		} else //block模式下,等待proc有work添加进来。 
			ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
	} else {
               //检查是否有thread todo中是否有work。 
		if (non_block) {
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else //等待thread todo 中有work。 
			ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
	}

	binder_lock(__func__);  //注意有锁的。 

	if (wait_for_proc_work)
		proc->ready_threads--;
	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;  //unset waiting 状态位 

	if (ret)
		return ret;

	while (1) {
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;
               //获取work。 
		if (!list_empty(&thread->todo)) //从binder_thread.todo中获取work。 
			w = list_first_entry(&thread->todo, struct binder_work, entry);
		else if (!list_empty(&proc->todo) && wait_for_proc_work) //从binder_proc中获取work。 
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		else {
                        //ptr - buffer == 4, 意味着没有加入有效数据,只有开头加入的BR_NOOP。 
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
				goto retry;
                       //处理完毕,在此退出。 
			break;
		}

		if (end - ptr < sizeof(tr) + 4) //buffer已经无法确保能够保存下一个command了。最大的一个command信息就是command+binder_transaction_data的大小 
			break;


		switch (w->type) {
		case BINDER_WORK_TRANSACTION: {
                       //这个work是binder_transaction的一个成员,从这个work地址计算出binder_transaction的对象的地址。 
			t = container_of(w, struct binder_transaction, work);
		} break;
		case BINDER_WORK_TRANSACTION_COMPLETE: {
			cmd = BR_TRANSACTION_COMPLETE;
			if (put_user(cmd, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);

			binder_stat_br(proc, thread, cmd);
			binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
				     "%d:%d BR_TRANSACTION_COMPLETE\n",
				     proc->pid, thread->pid);

			list_del(&w->entry);
			kfree(w);
			binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
		} break;
		case BINDER_WORK_NODE: {
                        //更新对于Binder_node对应的BBinder的refs。这里主要用node->has_strong_ref,has_weak_ref来标志是否要更新user层,减少更新user层的次数。
			struct binder_node *node = container_of(w, struct binder_node, work);
			uint32_t cmd = BR_NOOP;
			const char *cmd_name;
			int strong = node->internal_strong_refs || node->local_strong_refs;
			int weak = !hlist_empty(&node->refs) || node->local_weak_refs || strong;
			if (weak && !node->has_weak_ref) {
				cmd = BR_INCREFS;
				cmd_name = "BR_INCREFS";
				node->has_weak_ref = 1;
				node->pending_weak_ref = 1; //flag表明等待user层反馈。 
				node->local_weak_refs++; //在BC_INCREFS_DONE中通过BINDER_DEC_NODE来减去。 
			} else if (strong && !node->has_strong_ref) {
				//need increase strong reference.
				cmd = BR_ACQUIRE;
				cmd_name = "BR_ACQUIRE";
				node->has_strong_ref = 1;
				node->pending_strong_ref = 1; //flag表明等待user层反馈。 
				node->local_strong_refs++; //在BC_ACQUIRE_DONE中通过binder_dec_node()来减去。 
			} else if (!strong && node->has_strong_ref) {
				//no reference,need to free.
				cmd = BR_RELEASE;
				cmd_name = "BR_RELEASE";
				node->has_strong_ref = 0;
			} else if (!weak && node->has_weak_ref) {
				cmd = BR_DECREFS;
				cmd_name = "BR_DECREFS";
				node->has_weak_ref = 0;
			}
			if (cmd != BR_NOOP) {
				if (put_user(cmd, (uint32_t __user *)ptr))
					return -EFAULT;
				ptr += sizeof(uint32_t);
				if (put_user(node->ptr, (void * __user *)ptr))
					return -EFAULT;
				ptr += sizeof(void *);
				if (put_user(node->cookie, (void * __user *)ptr))
					return -EFAULT;
				ptr += sizeof(void *);

				binder_stat_br(proc, thread, cmd);
				binder_debug(BINDER_DEBUG_USER_REFS,
					     "%d:%d %s %d u%p c%p\n",
					     proc->pid, thread->pid, cmd_name, node->debug_id, node->ptr, node->cookie);
			} else {
				list_del_init(&w->entry);
				if (!weak && !strong) {
					binder_debug(BINDER_DEBUG_INTERNAL_REFS,
						     "%d:%d node %d u%p c%p deleted\n",
						     proc->pid, thread->pid, node->debug_id,
						     node->ptr, node->cookie);
					rb_erase(&node->rb_node, &proc->nodes);
					kfree(node);
					binder_stats_deleted(BINDER_STAT_NODE);
				} else {
					binder_debug(BINDER_DEBUG_INTERNAL_REFS,
						     "%d:%d node %d u%p c%p state unchanged\n",
						     proc->pid, thread->pid, node->debug_id, node->ptr,
						     node->cookie);
				}
			}
		} break;
		case BINDER_WORK_DEAD_BINDER:
		case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
		case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
			struct binder_ref_death *death;
			uint32_t cmd;
                        //获取binder_ref_death对象。 
			death = container_of(w, struct binder_ref_death, work);
			if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
				cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE;
			else
				cmd = BR_DEAD_BINDER;
                         //写回cmd。 
			if (put_user(cmd, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);
			if (put_user(death->cookie, (void * __user *)ptr))
				return -EFAULT;
			ptr += sizeof(void *);
			binder_stat_br(proc, thread, cmd);
			binder_debug(BINDER_DEBUG_DEATH_NOTIFICATION,
				     "%d:%d %s %p\n",
				      proc->pid, thread->pid,
				      cmd == BR_DEAD_BINDER ?
				      "BR_DEAD_BINDER" :
				      "BR_CLEAR_DEATH_NOTIFICATION_DONE",
				      death->cookie);

			if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
				list_del(&w->entry);
				kfree(death);
				binder_stats_deleted(BINDER_STAT_DEATH);
			} else//加入到delivered_death。
				list_move(&w->entry, &proc->delivered_death);
			if (cmd == BR_DEAD_BINDER)
				goto done; /* DEAD_BINDER notifications can cause transactions */
		} break;
		}

		if (!t)
			continue;

                 //将binder_transaction中的信息,保存到binder_transaction_data中。
		//和binder_transaction()函数中行为相反。 
		BUG_ON(t->buffer == NULL);
		if (t->buffer->target_node) {//transaction cmd时候。
			 //对于binder_context_mgr_node,ptr和cookie都为0。
			//对于一般service来说,binder_node中的ptr是service的weakrefs指针,cookie是service的对象指针。(见Parcel中的flatten_binder()) 
			struct binder_node *target_node = t->buffer->target_node;
			tr.target.ptr = target_node->ptr;
			tr.cookie =  target_node->cookie;
			t->saved_priority = task_nice(current);
                        //同步模式下设置priority 
			if (t->priority < target_node->min_priority &&
			    !(t->flags & TF_ONE_WAY))
				binder_set_nice(t->priority);
			else if (!(t->flags & TF_ONE_WAY) ||
				 t->saved_priority > target_node->min_priority)
				binder_set_nice(target_node->min_priority);
			cmd = BR_TRANSACTION;
		} else { //reply cmd时候,reply时候target_node为null。 
			tr.target.ptr = NULL;
			tr.cookie = NULL;
			cmd = BR_REPLY;
		}
 		//transaction的真实command。 
		tr.code = t->code;
		tr.flags = t->flags;
		tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);

 		//记录sender_pid。
		//异步模式,或者reply情况下,t->from == 0. 
		if (t->from) {
			struct task_struct *sender = t->from->proc->tsk;
			tr.sender_pid = task_tgid_nr_ns(sender,
							task_active_pid_ns(current));
		} else {
			tr.sender_pid = 0;
		}

 		//转换在binder_thread_write中保留的Parcel数据信息。 
		tr.data_size = t->buffer->data_size;
		tr.offsets_size = t->buffer->offsets_size;
 		//地址转换给user空间的地址,传给binder_transaction_data 
		tr.data.ptr.buffer = (void *)t->buffer->data +
					proc->user_buffer_offset;
		tr.data.ptr.offsets = tr.data.ptr.buffer +
					ALIGN(t->buffer->data_size,
					    sizeof(void *));
 		//写入command。 
		if (put_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
 		//把binder_transaction_data copy给user层。 
		if (copy_to_user(ptr, &tr, sizeof(tr)))
			return -EFAULT;
		ptr += sizeof(tr);

		trace_binder_transaction_received(t);
		binder_stat_br(proc, thread, cmd);
		binder_debug(BINDER_DEBUG_TRANSACTION,
			     "%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %p-%p\n",
			     proc->pid, thread->pid,
			     (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
			     "BR_REPLY",
			     t->debug_id, t->from ? t->from->proc->pid : 0,
			     t->from ? t->from->pid : 0, cmd,
			     t->buffer->data_size, t->buffer->offsets_size,
			     tr.data.ptr.buffer, tr.data.ptr.offsets);

		list_del(&t->work.entry);
		t->buffer->allow_user_free = 1;
		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
 			//同步模式下更新stack信息,binder_transaction t会在reply时候在binder_transaction()中pop掉。 
			t->to_parent = thread->transaction_stack;
			t->to_thread = thread;
			thread->transaction_stack = t; //把最近读到的binder_transaction设置为transaction_stack 
		} else { //异步模式,或者reply时,binder_transaction已经不需要了 
 			//(reply时候才需要通过binder_transaction找到reply的target thread),在这里直接释放掉。 
			t->buffer->transaction = NULL;
			kfree(t);
			binder_stats_deleted(BINDER_STAT_TRANSACTION);
		}
		break;
	}

done:

	*consumed = ptr - buffer;
  	//当前进程中没有空闲的thread的时候(空闲thread也就是在监听binder的thread),
	//也没有在处理中的spawn thread,且thread数量小于max_threads,那么要求user层
	//去启动一个新的thread备用。user层会发送BC_REGISTER_LOOPER这个cmd,来告诉binder request的thread已经启动。 
	if (proc->requested_threads + proc->ready_threads == 0 &&
	    proc->requested_threads_started < proc->max_threads &&
	    (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
	     BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
	     /*spawn a new thread if we leave this out */) {
		proc->requested_threads++;
		binder_debug(BINDER_DEBUG_THREADS,
			     "%d:%d BR_SPAWN_LOOPER\n",
			     proc->pid, thread->pid);
		if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
			return -EFAULT;
		binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
	}
	return 0;
}
binder_thread_read() essentially waits for work to appear on the thread's or the process's todo list and then processes it.
In our current scenario, binder_thread_read() picks up the BINDER_WORK_TRANSACTION_COMPLETE work that binder_transaction() queued earlier; handling it simply writes the BR_TRANSACTION_COMPLETE command into the read buffer.

The code at the end of binder_thread_read() deserves attention: if the current thread is one of the threads dedicated to talking to binder, and none of the process's binder threads is idle (idle meaning waiting for work), the driver writes the command BR_SPAWN_LOOPER to ask user space to start another thread for binder traffic, up to the max-threads limit. That limit is exactly what we set in SampleService's main():

//Use the same value surfaceflinger uses; we will come back to this later.
    ProcessState::self()->setThreadPoolMaxThreadCount(4);
BR_SPAWN_LOOPER is handled in IPCThreadState::executeCommand() as follows:
case BR_SPAWN_LOOPER:
        mProcess->spawnPooledThread(false);
        break;
spawnPooledThread(false) starts a PoolThread that talks to binder. The PoolThread started in response to BR_SPAWN_LOOPER still ends up calling IPCThreadState::joinThreadPool(); the only difference from the call we made in SampleService is the isMain parameter, which affects the following line:
mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
So what ultimately gets sent to binder is the command BC_REGISTER_LOOPER, which, like the others, is handled in the driver's binder_thread_write():
case BC_REGISTER_LOOPER:
             //告诉kernel,BR_SPAWN_LOOPER要求启动的thread已经启动,开始进入looper交互。 
            binder_debug(BINDER_DEBUG_THREADS,
                     "%d:%d BC_REGISTER_LOOPER\n",
                     proc->pid, thread->pid);
            if (thread->looper & BINDER_LOOPER_STATE_ENTERED) {
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
                binder_user_error("%d:%d ERROR: BC_REGISTER_LOOPER called after BC_ENTER_LOOPER\n",
                    proc->pid, thread->pid);
            } else if (proc->requested_threads == 0) {
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
                binder_user_error("%d:%d ERROR: BC_REGISTER_LOOPER called without request\n",
                    proc->pid, thread->pid);
            } else {
                 //更新request thread的记录。 
                proc->requested_threads--;
                proc->requested_threads_started++; 
            }
            thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
            break;
Put together with the BR_SPAWN_LOOPER code at the end of binder_thread_read(), the logic here becomes clear:
BR_SPAWN_LOOPER and BC_REGISTER_LOOPER form a matched pair of commands. When the system is busy, BR_SPAWN_LOOPER asks the user layer to start a new thread for binder traffic, and once the user layer has done so it sends BC_REGISTER_LOOPER to tell binder that the request has been fulfilled.
None of this applies to a pure client: a client never calls joinThreadPool(), so the condition is never met and binder will not ask it to spawn new threads.
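To see the handshake from the user-space side, here is a minimal sketch built on the standard libbinder entry points (the main() itself is illustrative; SampleService's real main() was shown earlier). On BR_SPAWN_LOOPER, IPCThreadState calls mProcess->spawnPooledThread(false); the new thread ends up in joinThreadPool(false) and announces itself with BC_REGISTER_LOOPER before entering its command loop.

#include <binder/IPCThreadState.h>
#include <binder/ProcessState.h>

using namespace android;

int main()
{
    sp<ProcessState> proc = ProcessState::self();
    //Upper bound on the threads the driver may ask us to spawn via BR_SPAWN_LOOPER.
    proc->setThreadPoolMaxThreadCount(4);
    //Spawns the first pooled thread with isMain = true, i.e. BC_ENTER_LOOPER.
    proc->startThreadPool();
    //The calling thread also joins the pool with isMain = true (BC_ENTER_LOOPER);
    //threads spawned later on BR_SPAWN_LOOPER run joinThreadPool(false) and
    //therefore send BC_REGISTER_LOOPER instead.
    IPCThreadState::self()->joinThreadPool();
    return 0;
}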

At this point we can return to the user layer. Our read buffer now holds two commands: BR_NOOP and BR_TRANSACTION_COMPLETE. talkWithDriver() sees that data has arrived and returns to IPCThreadState::waitForResponse(); the first command processed is BR_NOOP, which is handled in IPCThreadState::executeCommand():

case BR_NOOP:
        break;
As the name suggests, it is just a no-op. The loop then re-enters talkWithDriver(); because the read buffer has not been fully consumed it will not read, and because nothing new was written the write size is 0, so talkWithDriver() returns immediately and waitForResponse() continues processing commands.
The next command read is BR_TRANSACTION_COMPLETE:
case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
In our current scenario reply is non-NULL, so we do not jump to finish; the loop re-enters talkWithDriver(). This time mIn has been fully consumed and there is nothing new to write, so the ioctl goes back into the driver with an empty write buffer and blocks in binder_thread_read().
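The gating at the top of talkWithDriver() can be modelled roughly as follows (a paraphrase, not the verbatim framework code): nothing is read while mIn still holds unconsumed commands, and the ioctl is skipped entirely when there is neither anything to write nor any need to read.

#include <cstddef>

struct BwrPlan { size_t write_size; size_t read_size; bool do_ioctl; };

//doReceive: caller wants replies; inPos/inSize: consumed vs. filled bytes of mIn;
//outSize: pending bytes in mOut; inCapacity: capacity of the read buffer.
BwrPlan planTalkWithDriver(bool doReceive, size_t inPos, size_t inSize,
                           size_t outSize, size_t inCapacity)
{
    const bool needRead = inPos >= inSize;  //mIn fully consumed?
    BwrPlan p = {};
    p.write_size = (!doReceive || needRead) ? outSize : 0;
    p.read_size  = (doReceive && needRead) ? inCapacity : 0;
    p.do_ioctl   = (p.write_size != 0) || (p.read_size != 0);
    return p;
}
//In the BR_NOOP step above, mIn still held BR_TRANSACTION_COMPLETE and mOut was
//empty, so both sizes were 0 and no ioctl was issued; once BR_TRANSACTION_COMPLETE
//has been consumed, needRead becomes true and the next call goes back into the
//driver and blocks in binder_thread_read() until the reply arrives.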
At this point SampleService is waiting inside binder for ServiceManager to reply to the PING_TRANSACTION we sent.
Let us first look at how ServiceManager handles that command.

ServiceManager handles IBinder::PING_TRANSACTION
We analysed earlier how ServiceManager enters binder_loop() and starts talking to binder; it waits in the driver's binder_thread_read() for new work to arrive.
binder_transaction() has already queued the work on ServiceManager's binder_proc.todo, so the ServiceManager thread waiting in binder_thread_read() is woken up and picks up the work for processing:
        case BINDER_WORK_TRANSACTION: {
             //这个work是binder_transaction的一个成员,从这个work地址计算出binder_transaction的对象的地址。 
            t = container_of(w, struct binder_transaction, work);
        } break;
It first obtains the corresponding binder_transaction and then processes it:
         //将binder_transaction中的信息,保存到binder_transaction_data中。
        //和binder_transaction()函数中行为相反。 
        BUG_ON(t->buffer == NULL);
        if (t->buffer->target_node) {
            struct binder_node *target_node = t->buffer->target_node;
            tr.target.ptr = target_node->ptr;
            tr.cookie =  target_node->cookie;
            t->saved_priority = task_nice(current);
            if (t->priority < target_node->min_priority &&
                !(t->flags & TF_ONE_WAY))
                binder_set_nice(t->priority);
            else if (!(t->flags & TF_ONE_WAY) ||
                 t->saved_priority > target_node->min_priority)
                binder_set_nice(target_node->min_priority);
            cmd = BR_TRANSACTION;
        } else {
            tr.target.ptr = NULL;
            tr.cookie = NULL;
            cmd = BR_REPLY;
        }
        tr.code = t->code;
        tr.flags = t->flags;
        tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);

        if (t->from) {
            struct task_struct *sender = t->from->proc->tsk;
            tr.sender_pid = task_tgid_nr_ns(sender,
                            task_active_pid_ns(current));
        } else {
            tr.sender_pid = 0;
        }

         //转换在binder_thread_write中保留的Parcel数据信息。 
        tr.data_size = t->buffer->data_size;
        tr.offsets_size = t->buffer->offsets_size;
         //地址转换给user空间的地址,传给binder_transaction_data,避免了copy的动作。 
        tr.data.ptr.buffer = (void *)t->buffer->data +
                    proc->user_buffer_offset;
        tr.data.ptr.offsets = tr.data.ptr.buffer +
                    ALIGN(t->buffer->data_size,
                        sizeof(void *));
         //写入command。 
        if (put_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
         //把binder_transaction_data copy给user层。 
        if (copy_to_user(ptr, &tr, sizeof(tr)))
            return -EFAULT;
        ptr += sizeof(tr);

        trace_binder_transaction_received(t);
        binder_stat_br(proc, thread, cmd);
        binder_debug(BINDER_DEBUG_TRANSACTION,
                 "%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %p-%p\n",
                 proc->pid, thread->pid,
                 (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
                 "BR_REPLY",
                 t->debug_id, t->from ? t->from->proc->pid : 0,
                 t->from ? t->from->pid : 0, cmd,
                 t->buffer->data_size, t->buffer->offsets_size,
                 tr.data.ptr.buffer, tr.data.ptr.offsets);

        list_del(&t->work.entry);
        t->buffer->allow_user_free = 1;
        if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
             //同步模式下更新stack信息。 
            t->to_parent = thread->transaction_stack;
            t->to_thread = thread;
            thread->transaction_stack = t;
        } else {
            t->buffer->transaction = NULL;
            kfree(t);
            binder_stats_deleted(BINDER_STAT_TRANSACTION);
        }
        break;
    }
A binder_transaction_data structure is built and filled with the data to hand to servicemanager: besides the usual bookkeeping it references the Parcel data that SampleService passed down (stored in the binder_transaction's buffer). The cmd and the binder_transaction_data are then written into the user read buffer to be returned to servicemanager.
Because servicemanager's max_threads is 0, the tail of the read path will never emit BR_SPAWN_LOOPER for it either.
So when the ioctl finally returns, ServiceManager's read buffer contains BR_NOOP, BR_TRANSACTION and a binder_transaction_data structure. Back in user space they are handled in binder_parse():
case BR_NOOP:
            break;
BR_NOOP is skipped, and BR_TRANSACTION is handled next:
case BR_TRANSACTION: {
           //强制转换kernel传递上来的binder_transaction_data对象,binder_txn和binder_transaction_data的结构是完全一致的。 
            struct binder_txn *txn = (void *) ptr;
            if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;
                //初始化reply的结构。 
                bio_init(&reply, rdata, sizeof(rdata), 4);
                  //用传送来的数据初始化msg。 
                bio_init_from_txn(&msg, txn);
                //调用service_manager注册的svcmgr_handler函数来处理数据。 
                res = func(bs, txn, &msg, &reply);
                 //返回reply数据。 
                binder_send_reply(bs, &reply, txn->data, res);
            }
            ptr += sizeof(*txn) / sizeof(uint32_t);
            break;
        }
binder_txn corresponds exactly to the kernel's binder_transaction_data, which is why ptr can simply be cast to a binder_txn pointer here.
struct binder_txn
{
    void *target;
    void *cookie;
    uint32_t code;
    uint32_t flags;

    uint32_t sender_pid;
    uint32_t sender_euid;

    uint32_t data_size;
    uint32_t offs_size;
    void *data;
    void *offs;
};
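For comparison, the kernel-side binder_transaction_data of this era looks roughly like this (abridged from the pre-uapi binder.h; field names may differ slightly between kernel versions), which is why the member-by-member correspondence above holds:

#include <sys/types.h>
#include <cstdint>

struct binder_transaction_data {
    union {
        size_t handle;  //target descriptor when sending a command
        void *ptr;      //target BBinder weak-ref pointer when receiving
    } target;
    void *cookie;       //target BBinder pointer (service side)
    unsigned int code;  //e.g. PING_TRANSACTION, ADD_SERVICE_TRANSACTION
    unsigned int flags; //e.g. TF_ONE_WAY, TF_STATUS_CODE

    pid_t sender_pid;
    uid_t sender_euid;

    size_t data_size;    //bytes of Parcel data
    size_t offsets_size; //bytes of object offsets
    union {
        struct {
            const void *buffer;  //Parcel data, remapped into this process
            const void *offsets; //offsets of the flat_binder_objects in buffer
        } ptr;
        uint8_t buf[8];
    } data;
};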
binder_io plays a role similar to Parcel: it keeps track of the data buffer and the objects (offsets) array:
//和Parcel保留的数据类似,只是结构简化了。
struct binder_io
{
    //data,offs是读取data0和offs0的指针。
    char *data;            /* pointer to read/write from */
    uint32_t *offs;        /* array of offsets */
    uint32_t data_avail;   /* bytes available in data buffer */
    uint32_t offs_avail;   /* entries available in offsets array */

    //data0是保留的data数据的起始地址。
    char *data0;           /* start of data buffer */
    //保存了objects offset数据的起始地址。
    uint32_t *offs0;       /* start of offsets buffer */
    uint32_t flags;
    uint32_t unused;
};
Now let's see how svcmgr_handler handles the incoming PING_TRANSACTION:
int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;
    int allow_isolated;

//    ALOGI("target=%p code=%d pid=%d uid=%d\n",
//         txn->target, txn->code, txn->sender_pid, txn->sender_euid);
    
    //确认数据是传送给service manager的,target = 0是正确的。
    if (txn->target != svcmgr_handle)
        return -1;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s));
        return -1;
    }

   ......
    return 0;
}

For this transact() from SampleService the Parcel carries no payload, so msg has no valid data or offsets; the interface check therefore fails and -1 is returned. After svcmgr_handler() returns, binder_send_reply() is called to write the result back to binder:

void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       void *buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        void *buffer;
        uint32_t cmd_reply;
        struct binder_txn txn;
    } __attribute__((packed)) data;

    //发送BC_FREE_BUFFER的command。
    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    //发送BC_REPLY的command。
    data.cmd_reply = BC_REPLY;
    data.txn.target = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {//status非0时候,结果异常,直接将返回值传给binder。
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offs_size = 0;
        data.txn.data = &status;
        data.txn.offs = 0;
    } else {//status为0,结果正常,将reply中的数据写回给binder。
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offs_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data = reply->data0;
        data.txn.offs = reply->offs0;
    }
    //将数据写回给binder。
    binder_write(bs, &data, sizeof(data));
}
Note that binder_send_reply() sends two commands: BC_FREE_BUFFER and BC_REPLY.
In our current scenario status is -1. binder_write() then calls ioctl to push the data back to binder:
int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    //binder_write_read在kernel中定义,组织数据写给binder。
    struct binder_write_read bwr;
    int res;
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
Let's go back to binder_thread_write() once more and see how these two commands are handled:
case BC_FREE_BUFFER: {
            void __user *data_ptr;
            struct binder_buffer *buffer;
            //获取user层传过来的指针,这个指针是binder_buffer.data对应的user空间地址。
            if (get_user(data_ptr, (void * __user *)ptr))
                return -EFAULT;
            ptr += sizeof(void *);

            //根据指针查找对应的binder_buffer。
            buffer = binder_buffer_lookup(proc, data_ptr);
            if (buffer == NULL) {
                binder_user_error("%d:%d BC_FREE_BUFFER u%p no match\n",
                    proc->pid, thread->pid, data_ptr);
                break;
            }
            if (!buffer->allow_user_free) {
                binder_user_error("%d:%d BC_FREE_BUFFER u%p matched unreturned buffer\n",
                    proc->pid, thread->pid, data_ptr);
                break;
            }
            binder_debug(BINDER_DEBUG_FREE_BUFFER,
                     "%d:%d BC_FREE_BUFFER u%p found buffer %d for %s transaction\n",
                     proc->pid, thread->pid, data_ptr, buffer->debug_id,
                     buffer->transaction ? "active" : "finished");

            //reset引用的地方。
            if (buffer->transaction) {
                buffer->transaction->buffer = NULL;
                buffer->transaction = NULL;
            }
            //异步操作,并且存在target_node(表明是一个transaction)
            if (buffer->async_transaction && buffer->target_node) {
                BUG_ON(!buffer->target_node->has_async_transaction);
                if (list_empty(&buffer->target_node->async_todo))
                    buffer->target_node->has_async_transaction = 0;
                else
                    list_move_tail(buffer->target_node->async_todo.next, &thread->todo);
            }
            trace_binder_transaction_buffer_release(buffer);
            //处理传递的object的decrease ref。
            binder_transaction_buffer_release(proc, buffer, NULL);
            //释放binder_buffer。
            binder_free_buf(proc, buffer);
            break;
        }
The object this command frees is binder_txn.data, the pointer servicemanager obtained in binder_thread_read() when binder_transaction.buffer->data was translated into its user-space address. So what actually gets freed here is the binder_buffer (and its backing memory) that was allocated while processing SampleService's write in binder_transaction().
binder_transaction_buffer_release() decrements the references of any objects SampleService passed in (the matching increments were done on the write path); our current scenario carries no such objects, so we skip that for now.
The actual freeing of the binder_buffer happens in binder_free_buf():
static void binder_free_buf(struct binder_proc *proc,
                struct binder_buffer *buffer)
{
    size_t size, buffer_size;

    buffer_size = binder_buffer_size(proc, buffer);

    size = ALIGN(buffer->data_size, sizeof(void *)) +
        ALIGN(buffer->offsets_size, sizeof(void *));

    binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
             "%d: binder_free_buf %p size %zd buffer_size %zd\n",
              proc->pid, buffer, size, buffer_size);

    BUG_ON(buffer->free);
    BUG_ON(size > buffer_size);
    BUG_ON(buffer->transaction != NULL);
    BUG_ON((void *)buffer < proc->buffer);
    BUG_ON((void *)buffer > proc->buffer + proc->buffer_size);

    //update free_async_space。
    if (buffer->async_transaction) {
        proc->free_async_space += size + sizeof(struct binder_buffer);

        binder_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
                 "%d: binder_free_buf size %zd async free %zd\n",
                  proc->pid, size, proc->free_async_space);
    }

    //释放物理page。
    binder_update_page_range(proc, 0,
        (void *)PAGE_ALIGN((uintptr_t)buffer->data),
        (void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK),
        NULL);
    //从allocated_buffers中拿掉改结点。
    rb_erase(&buffer->rb_node, &proc->allocated_buffers);
    buffer->free = 1;
    //插入list后,要进行合并操作。合并操作不需要特殊处理,删除掉地址大的那个结点即可。
    //检查next结点是否是free的,如果是,需要进行合并,从list和free tree中删除掉next结点。
    if (!list_is_last(&buffer->entry, &proc->buffers)) {
        struct binder_buffer *next = list_entry(buffer->entry.next,
                        struct binder_buffer, entry);
        if (next->free) {
            //从binder_proc.free_buffers中拿掉next结点。
            rb_erase(&next->rb_node, &proc->free_buffers);
            //从binder_proc.buffers的list中拿掉这个buffer结点。
            binder_delete_free_buffer(proc, next);
        }
    }
    if (proc->buffers.next != &buffer->entry) {
       //检查prev结点,如果结点有效,并且free,那么当前的节点直接删除,和prev结点合并。
        struct binder_buffer *prev = list_entry(buffer->entry.prev,
                        struct binder_buffer, entry);
        if (prev->free) {
            binder_delete_free_buffer(proc, buffer);
            rb_erase(&prev->rb_node, &proc->free_buffers);
            buffer = prev;
        }
    }
    //插入结点到binder_proc.free_buffers中去。
    binder_insert_free_buffer(proc, buffer);
}
Read side by side with binder_alloc_buf(), this is fairly easy to follow.

The second command is BC_REPLY. It shares a case with BC_TRANSACTION and is handled by binder_transaction(); the main differences between a reply and a transaction are how the target thread is found and the need to pop the target thread's transaction_stack:
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    ......
    if (reply) {//command是BC_REPLY
        in_reply_to = thread->transaction_stack;
        if (in_reply_to == NULL) {
            binder_user_error("%d:%d got reply transaction with no transaction stack\n",
                      proc->pid, thread->pid);
            return_error = BR_FAILED_REPLY;
            goto err_empty_call_stack;
        }
        binder_set_nice(in_reply_to->saved_priority);
        if (in_reply_to->to_thread != thread) {
            binder_user_error("%d:%d got reply transaction with bad transaction stack, transaction %d has target %d:%d\n",
                proc->pid, thread->pid, in_reply_to->debug_id,
                in_reply_to->to_proc ?
                in_reply_to->to_proc->pid : 0,
                in_reply_to->to_thread ?
                in_reply_to->to_thread->pid : 0);
            return_error = BR_FAILED_REPLY;
            in_reply_to = NULL;
            goto err_bad_call_stack;
        }
        //reset当前thread的transaaction_stack
        thread->transaction_stack = in_reply_to->to_parent;
        target_thread = in_reply_to->from;
        if (target_thread == NULL) {
            return_error = BR_DEAD_REPLY;
            goto err_dead_binder;
        }
        if (target_thread->transaction_stack != in_reply_to) {
            binder_user_error("%d:%d got reply transaction with bad target transaction stack %d, expected %d\n",
                proc->pid, thread->pid,
                target_thread->transaction_stack ?
                target_thread->transaction_stack->debug_id : 0,
                in_reply_to->debug_id);
            return_error = BR_FAILED_REPLY;
            in_reply_to = NULL;
            target_thread = NULL;
            goto err_dead_binder;
        }
        target_proc = target_thread->proc;
    } else {//command为BC_TRANSACTION.
        ......
    }
     //target_thread在transaction的时候,可能不存在,reply时候一定存在。
    if (target_thread) {
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait;
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    ......

    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;//同步模式下的时候,记录from。
    else
        t->from = NULL;//reply时候,不需要记录from了。
        ......
    if (reply) {
        BUG_ON(t->buffer->async_transaction != 0);
        //清除掉binder_tranasaction发起thread的transaction_stack。
        binder_pop_transaction(target_thread, in_reply_to);
    } else if (!(t->flags & TF_ONE_WAY)) {
        BUG_ON(t->buffer->async_transaction != 0);
        t->need_reply = 1;
        //transaction_stack指向当前thread的最后一个binder_transaction,通过from_parent进行链接。
        t->from_parent = thread->transaction_stack;
        thread->transaction_stack = t;
    } else {
        BUG_ON(target_node == NULL);
        BUG_ON(t->buffer->async_transaction != 1);
        if (target_node->has_async_transaction) {
            target_list = &target_node->async_todo;
            target_wait = NULL;
        } else
            target_node->has_async_transaction = 1;
    }
    //把新的binder_transaction加入到target的todo list中。
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    //在发出transaction的thread的todo list中加入complete的work。
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;

        ......
}
For a reply, binder_transaction() performs the following steps:

  1. Reset the current thread's transaction_stack and obtain the target_thread.
  2. Create a new binder_transaction object.
  3. Allocate a binder_buffer for the new binder_transaction and copy the data out of the Parcel.
  4. Process the objects contained in the Parcel.
  5. Queue a BINDER_WORK_TRANSACTION_COMPLETE work on the current thread's todo, and a BINDER_WORK_TRANSACTION work on the target thread/proc's todo.

In our scenario step 4 does not run, because the data ServiceManager sends down contains nothing but the return value -1. Once the function completes, ServiceManager's own todo list has gained a BINDER_WORK_TRANSACTION_COMPLETE work item.
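The stack handling in steps 1 and 5 can be boiled down to the following toy model (not kernel code): a synchronous transaction is linked onto the sender's transaction_stack via from_parent, linked onto the receiver's stack via to_parent when it reads BR_TRANSACTION, and popped from both sides when the reply comes back.

#include <cassert>
#include <cstddef>

struct ToyThread;

struct ToyTransaction {
    ToyThread *from = nullptr;       //sender thread (synchronous calls only)
    ToyThread *to_thread = nullptr;  //receiver thread
    ToyTransaction *from_parent = nullptr;
    ToyTransaction *to_parent = nullptr;
};

struct ToyThread {
    ToyTransaction *transaction_stack = nullptr;
};

//binder_transaction(), synchronous BC_TRANSACTION: push onto the sender.
void push_on_sender(ToyThread &sender, ToyTransaction &t)
{
    t.from = &sender;
    t.from_parent = sender.transaction_stack;
    sender.transaction_stack = &t;
}

//binder_thread_read(), BR_TRANSACTION delivered: push onto the receiver.
void push_on_receiver(ToyThread &receiver, ToyTransaction &t)
{
    t.to_thread = &receiver;
    t.to_parent = receiver.transaction_stack;
    receiver.transaction_stack = &t;
}

//binder_transaction(), BC_REPLY: pop the receiver, then pop the waiting sender
//(the latter is what binder_pop_transaction() does for the target thread).
ToyThread *pop_on_reply(ToyThread &receiver)
{
    ToyTransaction *in_reply_to = receiver.transaction_stack;
    assert(in_reply_to && in_reply_to->to_thread == &receiver);
    receiver.transaction_stack = in_reply_to->to_parent;
    ToyThread *target = in_reply_to->from;
    target->transaction_stack = in_reply_to->from_parent;
    return target;  //the thread to wake up with BR_REPLY
}

int main()
{
    ToyThread client, smgr;
    ToyTransaction t;
    push_on_sender(client, t);             //SampleService writes BC_TRANSACTION
    push_on_receiver(smgr, t);             //ServiceManager reads BR_TRANSACTION
    ToyThread *woken = pop_on_reply(smgr); //ServiceManager writes BC_REPLY
    assert(woken == &client && client.transaction_stack == nullptr);
    return 0;
}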


After binder_thread_write() finishes, servicemanager's binder_thread_read() picks up that BINDER_WORK_TRANSACTION_COMPLETE work, turns it into the command BR_TRANSACTION_COMPLETE and returns.

For ServiceManager, the BR_SPAWN_LOOPER command at the end of binder_thread_read() will never be written: it never sets max_threads, which therefore stays at its default of 0, so the condition can never be met (ps -t indeed shows servicemanager running a single thread).
Back in user space, ServiceManager handles it:

case BR_TRANSACTION_COMPLETE:
            break;
Once that is processed, it re-enters binder and waits in binder_thread_read() for the next piece of work.


SampleService's transact() receives the reply
By now SampleService, waiting in binder_thread_read(), receives the BINDER_WORK_TRANSACTION work queued by ServiceManager. Handling it is essentially the same as when ServiceManager received the same kind of work; the only difference is the resulting command:

        BUG_ON(t->buffer == NULL);
        if (t->buffer->target_node) {//transaction cmd时候。
          ......
        } else {//reply cmd时候,reply时候target_node为null。
            tr.target.ptr = NULL;
            tr.cookie = NULL;
            cmd = BR_REPLY;
        }
In our current scenario t->buffer->target_node is NULL, because ServiceManager's reply path in binder_transaction() never sets it.

So when control returns from binder to user space, SampleService comes back with the commands BR_NOOP and BR_REPLY. Let's look straight at the BR_REPLY handling:

case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                       ......
                    } else {
                      //ServiceManager返回的flag为TF_STATUS_CODE,buffer中保存的PING_TRANSACTION处理的返回值-1.
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    ......
                }
            }
            goto finish;
Here we obtain the return value err = -1 and call freeBuffer():
void IPCThreadState::freeBuffer(Parcel* parcel, const uint8_t* data, size_t dataSize,
                                const size_t* objects, size_t objectsSize,
                                void* cookie)
{
   //和binder.c中的binder_send_reply()中做的动作是一样的,让binder去释放掉传回的binder_transaction_data.binder_buffer的内存。
    //ALOGI("Freeing parcel %p", &parcel);
    IF_LOG_COMMANDS() {
        alog << "Writing BC_FREE_BUFFER for " << data << endl;
    }
    ALOG_ASSERT(data != NULL, "Called with NULL data");
    if (parcel != NULL) parcel->closeFileDescriptors();
    IPCThreadState* state = self();
    state->mOut.writeInt32(BC_FREE_BUFFER);
    state->mOut.writeInt32((int32_t)data);
}

freeBuffer() has the same meaning as the first command in servicemanager's binder_send_reply(): it asks binder to release the memory behind the binder_transaction_data, i.e. the binder_buffer that was allocated in binder_transaction() when ServiceManager sent its reply.

At last the whole IPCThreadState::self()->transact(0, IBinder::PING_TRANSACTION, data, NULL, 0) call has completed, returning -1, so in
if (status == DEAD_OBJECT)
the error check that follows does not trigger; a value of -1 shows that ServiceManager is alive, and the program continues.


To sum up what a transact() does inside binder:

  1. binder_thread_write() consumes the user-level write buffer, converts the data, and hands it on via binder_thread/binder_proc.todo.
  2. binder_thread_read() writes data back into the read buffer; it is here that binder_thread/binder_proc.todo is checked and the corresponding work is processed.
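Both halves are driven from user space by a single BINDER_WRITE_READ ioctl carrying a binder_write_read descriptor. A self-contained sketch follows (the struct layout matches this era's binder.h and in real code both it and the ioctl number come from the kernel header; the helper function is illustrative):

#include <sys/ioctl.h>
#include <cstddef>

struct binder_write_read {
    signed long   write_size;     //bytes in write_buffer, for binder_thread_write()
    signed long   write_consumed; //bytes actually consumed
    unsigned long write_buffer;
    signed long   read_size;      //room in read_buffer, for binder_thread_read()
    signed long   read_consumed;  //bytes of BR_* data produced
    unsigned long read_buffer;
};

#define BINDER_WRITE_READ _IOWR('b', 1, struct binder_write_read)

//One round trip: hand the driver our pending BC_* commands and collect the BR_*
//commands it produces. Returns the number of bytes written into readBuf, or -1.
long binder_round_trip(int fd, const void *writeBuf, size_t writeLen,
                       void *readBuf, size_t readCap)
{
    struct binder_write_read bwr = {};
    bwr.write_size   = writeLen;
    bwr.write_buffer = (unsigned long)writeBuf;
    bwr.read_size    = readCap;
    bwr.read_buffer  = (unsigned long)readBuf;
    if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0)
        return -1;
    return bwr.read_consumed;
}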


3.3.1.1.2  new BpBinder(handle);

This line simply constructs a new BpBinder object.

BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    //设置object的delete是当weak ref为0的时候发生。默认情况下是OBJECT_LIFETIME_STRONG。
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    //增加handle的weak reference。
    IPCThreadState::self()->incWeakHandle(handle);
}
From the user layer's point of view the handle passed in is just an index; inside binder it is what we used to locate ServiceManager's binder_node. In BpBinder the handle is likewise used mainly when talking to binder, and we can expect it to play a similar role for other services, as the later analysis will show.
The final IPCThreadState::self()->incWeakHandle(handle) increments the handle's weak reference; let's see what it actually does:
void IPCThreadState::incWeakHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    //write BC_INCREFS command,后续transact中会一起被发送给binder。
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
}
Let's go straight to how binder handles BC_INCREFS, found in binder_thread_write():
       case BC_INCREFS://增加weak reference
        case BC_ACQUIRE://增加strong reference。
        case BC_RELEASE://减少strong reference.
        case BC_DECREFS: {//减少weak reference
            uint32_t target;
            struct binder_ref *ref;
            const char *debug_string;

            if (get_user(target, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
            if (target == 0 && binder_context_mgr_node &&
                (cmd == BC_INCREFS || cmd == BC_ACQUIRE)) {
                //查找binder_context_mgr_node对应的binder_ref对象,如果不存在就建立一个。
                ref = binder_get_ref_for_node(proc,
                           binder_context_mgr_node);
                if (ref->desc != target) {
                    binder_user_error("%d:%d tried to acquire reference to desc 0, got %d instead\n",
                        proc->pid, thread->pid,
                        ref->desc);
                }
            } else//查找对应的binder_ref对象,不会主动建立。
                ref = binder_get_ref(proc, target);
            if (ref == NULL) {
                binder_user_error("%d:%d refcount change on invalid ref %d\n",
                    proc->pid, thread->pid, target);
                break;
            }
            switch (cmd) {
            case BC_INCREFS:
                debug_string = "IncRefs";
                binder_inc_ref(ref, 0, NULL);
                break;
            case BC_ACQUIRE:
                debug_string = "Acquire";
                binder_inc_ref(ref, 1, NULL);
                break;
            case BC_RELEASE:
                debug_string = "Release";
                binder_dec_ref(ref, 1);
                break;
            case BC_DECREFS:
            default:
                debug_string = "DecRefs";
                binder_dec_ref(ref, 0);
                break;
            }
            binder_debug(BINDER_DEBUG_USER_REFS,
                     "%d:%d %s ref %d desc %d s %d w %d for node %d\n",
                     proc->pid, thread->pid, debug_string, ref->debug_id,
                     ref->desc, ref->strong, ref->weak, ref->node->debug_id);
            break;
        }
This code mainly looks up the binder_ref node and then operates on it. In our current case we look up the binder_ref for ServiceManager in the local process, creating it if it does not exist:
static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
                          struct binder_node *node)
{
    struct rb_node *n;
    struct rb_node **p = &proc->refs_by_node.rb_node;
    struct rb_node *parent = NULL;
    struct binder_ref *ref, *new_ref;

    //在refs_by_node中查找。
    while (*p) {
        parent = *p;
        ref = rb_entry(parent, struct binder_ref, rb_node_node);

        if (node < ref->node)
            p = &(*p)->rb_left;
        else if (node > ref->node)
            p = &(*p)->rb_right;
        else
            return ref;
    }
    //查找失败,建立binder_ref。
    new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
    if (new_ref == NULL)
        return NULL;
    binder_stats_created(BINDER_STAT_REF);
    new_ref->debug_id = ++binder_last_id;
    new_ref->proc = proc;
    new_ref->node = node;
    rb_link_node(&new_ref->rb_node_node, parent, p);
    rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);

    //ServiceManager的desc是0,其他都从1开始,这样0就保留给ServiceManager。
    //在refs_by_desc tree中遍历,找到可用的desc。
    new_ref->desc = (node == binder_context_mgr_node) ? 0 : 1;
    for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
        ref = rb_entry(n, struct binder_ref, rb_node_desc);
        if (ref->desc > new_ref->desc)
            break;
        new_ref->desc = ref->desc + 1;
    }

    //将结点加入到refs_by_desc tree中。
    p = &proc->refs_by_desc.rb_node;
    while (*p) {
        parent = *p;
        ref = rb_entry(parent, struct binder_ref, rb_node_desc);

        if (new_ref->desc < ref->desc)
            p = &(*p)->rb_left;
        else if (new_ref->desc > ref->desc)
            p = &(*p)->rb_right;
        else
            BUG();
    }
    rb_link_node(&new_ref->rb_node_desc, parent, p);
    rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc);

    //将binder_ref结点Link到binder_node.refs list中去。
    if (node) {
        hlist_add_head(&new_ref->node_entry, &node->refs);

        binder_debug(BINDER_DEBUG_INTERNAL_REFS,
                 "%d new ref %d desc %d for node %d\n",
                  proc->pid, new_ref->debug_id, new_ref->desc,
                  node->debug_id);
    } else {
        binder_debug(BINDER_DEBUG_INTERNAL_REFS,
                 "%d new ref %d desc %d for dead node\n",
                  proc->pid, new_ref->debug_id, new_ref->desc);
    }
    return new_ref;
}
The binder_ref created for ServiceManager also gets desc 0, matching the index used in the user process; later lookups can simply use binder_get_ref().
With the reference in hand, binder_inc_ref() is called:
static int binder_inc_ref(struct binder_ref *ref, int strong,
              struct list_head *target_list)
{
    //本进程中第一次引用binder_ref,对它引用的binder_node,同时increase reference。避免binder_node先释放造成问题。
    int ret;
    if (strong) {
        if (ref->strong == 0) {
            ret = binder_inc_node(ref->node, 1, 1, target_list);
            if (ret)
                return ret;
        }
        ref->strong++;
    } else {
        if (ref->weak == 0) {
            ret = binder_inc_node(ref->node, 0, 1, target_list);
            if (ret)
                return ret;
        }
        ref->weak++;
    }
    return 0;
}
This function makes the relationship between binder_ref and binder_node clear: a binder_ref is precisely a reference taken on a binder_node.

After returning from binder, the following is executed:
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
Note the line result = b: result is an sp<IBinder>, and the assignment triggers BpBinder::onFirstRef():
void BpBinder::onFirstRef()
{
    ALOGV("onFirstRef BpBinder %p handle %d\n", this, mHandle);
    IPCThreadState* ipc = IPCThreadState::self();
    if (ipc) ipc->incStrongHandle(mHandle);
}
void IPCThreadState::incStrongHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incStrongHandle(%d)\n", handle);
    //write BC_ACQUIRE command,后续transact中会一起被发送给binder。
    mOut.writeInt32(BC_ACQUIRE);
    mOut.writeInt32(handle);
}
BC_ACQUIRE follows essentially the same path as BC_INCREFS above; the only difference is the argument passed to binder_inc_ref(), which asks for the strong reference to be increased.

At this point ProcessState::getStrongProxyForHandle() has returned the BpBinder object, and interface_cast<IServiceManager>() constructs a BpServiceManager from it. From writing SampleService we already know that BpXXX classes inherit the corresponding interface, so once we hold a BpServiceManager we hold an interface through which ServiceManager can be operated.
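interface_cast<IServiceManager>() is a thin wrapper around the asInterface() helper generated by IMPLEMENT_META_INTERFACE(); paraphrased (not the verbatim macro expansion from IInterface.h), what it does is roughly:

//Paraphrase of interface_cast / IMPLEMENT_META_INTERFACE, trimmed to the essentials.
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

//Generated by IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager"):
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        //obj here is the BpBinder; queryLocalInterface() returns NULL for a
        //remote handle, so a BpServiceManager proxy is wrapped around it.
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}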

3.3.2 The BpServiceManager::addService() call


3.3.2.1 The flow on the SampleService side
Having obtained the sp<IServiceManager>, i.e. the BpServiceManager, we register SampleService with the following code:

sm->addService(String16("SampleService"), samplesrv, false);
The call lands in BpServiceManager's addService():
virtual status_t addService(const String16& name, const sp<IBinder>& service,
            bool allowIsolated)
    {
        Parcel data, reply;
        //写入消息头部,包括strictmode标志,和interface的description string。
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        //写入注册的service name。
        data.writeString16(name);
       //写入注册的service,重点注意。
        data.writeStrongBinder(service);
        data.writeInt32(allowIsolated ? 1 : 0);
        //向service 传送add service的command。
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }
Among the writes, the interesting one is writeStrongBinder(service); let's see what it does:
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}
status_t flatten_binder(const sp<ProcessState>& proc,
    const sp<IBinder>& binder, Parcel* out)
{
    //flat_binder_object在kernel的biner.h中定义。
    flat_binder_object obj;
    
    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
       //localBinder()对于service端来说返回非空,对于client端返回NULL。
        IBinder *local = binder->localBinder();
        if (!local) {//remote binder:BpBinder
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;//表明是remote的binder。
            obj.handle = handle;
            obj.cookie = NULL;
        } else {//local binder:BBinder
            obj.type = BINDER_TYPE_BINDER;//表明是本地的binder
            obj.binder = local->getWeakRefs();
            obj.cookie = local;
        }
    } else {//when it happen?used as local binder。
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = NULL;
        obj.cookie = NULL;
    }
    
    //将flat_binder_object写入到buffer中去。
    return finish_flatten_binder(binder, obj, out);
}
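For reference, flat_binder_object is the wire form that finish_flatten_binder() appends to the Parcel (recording its offset in the Parcel's objects array); abridged from this era's kernel binder.h, it looks roughly like this:

#include <cstdint>

struct flat_binder_object {
    uint32_t type;    //BINDER_TYPE_BINDER / _WEAK_BINDER / _HANDLE / _WEAK_HANDLE / _FD
    uint32_t flags;   //low bits: min priority; plus FLAT_BINDER_FLAG_ACCEPTS_FDS
    union {
        void *binder;        //local object: the BBinder's weak-ref pointer
        signed long handle;  //remote object: the descriptor (binder_ref.desc)
    };
    void *cookie;     //local object: the BBinder pointer itself
};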
In the current scenario SampleService is a local binder. Now look at remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply):
remote() lives in BpRefBase and returns the BpBinder object created earlier in ProcessState::getStrongProxyForHandle().
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
Ultimately this again ends up in IPCThreadState::transact(); here mHandle is ServiceManager's handle, with value 0, and flags is left at its default of 0.
We have already analysed IPCThreadState::transact(), so we won't repeat it; just remember the key pieces of the binder_transaction_data assembled here:

  1. The command is BC_TRANSACTION.
  2. The code is ADD_SERVICE_TRANSACTION.
  3. The handle passed in is 0.
  4. flags is 0, i.e. a synchronous call.
  5. The Parcel contains one binder object of type BINDER_TYPE_BINDER.

The driver handles this data largely as it did for PING_TRANSACTION; what deserves attention is the handling of the object inside the Parcel, something PING_TRANSACTION did not have:

      //local binder object的处理,means BBinder。
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct binder_ref *ref;
            //查找/建立对应的binder_node。
            //除去ServiceManager的binder_node,其他所有的binder_node都在这边建立,在调用ServiceManager::addService的时候调用到。
            struct binder_node *node = binder_get_node(proc, fp->binder);
            if (node == NULL) {
                node = binder_new_node(proc, fp->binder, fp->cookie);
                if (node == NULL) {
                    return_error = BR_FAILED_REPLY;
                    goto err_binder_new_node_failed;
                }
                node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
                node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
            }
            //检查cookie也就是service的指针是否一致。
            if (fp->cookie != node->cookie) {
                binder_user_error("%d:%d sending u%p node %d, cookie mismatch %p != %p\n",
                    proc->pid, thread->pid,
                    fp->binder, node->debug_id,
                    fp->cookie, node->cookie);
                goto err_binder_get_ref_for_node_failed;
            }
            //security权限检测。
            if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            //在target_proc中查找/创建 binder_ref,其中binder_ref.desc在这边被确定。
            ref = binder_get_ref_for_node(target_proc, node);
            if (ref == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            //将local binder转换成remote binder信息,之后会被target proc读取使用。
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
            //desc在binder_get_ref_for_node()中被确定,会被作为handle。
            fp->handle = ref->desc;
            //添加BINDER_WORK_NODE work到thread的todo list。
            binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
                       &thread->todo);

            trace_binder_transaction_node_to_ref(t, node, ref);
            binder_debug(BINDER_DEBUG_TRANSACTION,
                     "        node %d u%p -> ref %d desc %d\n",
                     node->debug_id, node->ptr, ref->debug_id,
                     ref->desc);
        } break;
From this code the difference between binder_node and binder_ref is already clear:
binder_node corresponds to a BBinder in user space; it represents the service inside binder.
binder_ref corresponds to a BpBinder in user space; it represents the client inside binder, and is a reference to a binder_node.

A binder_node carries two groups of references:
local_strong_refs and local_weak_refs are manipulated in binder_inc_node()/binder_dec_node() and directly reflect references taken on the binder_node itself;
internal_strong_refs (together with membership in node->refs on the weak side, as the BINDER_WORK_NODE code above shows) reflects the references held through binder_refs, i.e. clients referencing the service.

For our current scenario, the main actions taken here are:

  1. The corresponding binder_node is created.
  2. A matching binder_ref is created in the target proc.
  3. The flat_binder_object passed down is rewritten as a remote binder.
  4. binder_inc_ref() queues a BINDER_WORK_NODE work for the binder_node on the current thread's todo; the reference increased here is the strong one.

BINDER_WORK_NODE is processed in binder_thread_read():

       case BINDER_WORK_NODE: {
             //更新对于Binder_node对应的BBinder的refs。这里主要用node->has_strong_ref,has_weak_ref来标志是否要更新user层,减少更新user层的次数。
            struct binder_node *node = container_of(w, struct binder_node, work);
            uint32_t cmd = BR_NOOP;
            const char *cmd_name;
            int strong = node->internal_strong_refs || node->local_strong_refs;
            int weak = !hlist_empty(&node->refs) || node->local_weak_refs || strong;
            if (weak && !node->has_weak_ref) {
                cmd = BR_INCREFS;
                cmd_name = "BR_INCREFS";
                node->has_weak_ref = 1;
                node->pending_weak_ref = 1;//flag表明等待user层反馈。
                node->local_weak_refs++;//在BC_INCREFS_DONE中通过BINDER_DEC_NODE来减去。
            } else if (strong && !node->has_strong_ref) {
                //need increase strong reference.
                cmd = BR_ACQUIRE;
                cmd_name = "BR_ACQUIRE";
                node->has_strong_ref = 1;
                node->pending_strong_ref = 1;//flag表明等待user层反馈。
                node->local_strong_refs++;//在BC_ACQUIRE_DONE中通过binder_dec_node()来减去。
            } else if (!strong && node->has_strong_ref) {
                //no reference,need to free.
                cmd = BR_RELEASE;
                cmd_name = "BR_RELEASE";
                node->has_strong_ref = 0;
            } else if (!weak && node->has_weak_ref) {
                cmd = BR_DECREFS;
                cmd_name = "BR_DECREFS";
                node->has_weak_ref = 0;
            }
            if (cmd != BR_NOOP) {
                if (put_user(cmd, (uint32_t __user *)ptr))
                    return -EFAULT;
                ptr += sizeof(uint32_t);
                if (put_user(node->ptr, (void * __user *)ptr))
                    return -EFAULT;
                ptr += sizeof(void *);
                if (put_user(node->cookie, (void * __user *)ptr))
                    return -EFAULT;
                ptr += sizeof(void *);

                binder_stat_br(proc, thread, cmd);
                binder_debug(BINDER_DEBUG_USER_REFS,
                         "%d:%d %s %d u%p c%p\n",
                         proc->pid, thread->pid, cmd_name, node->debug_id, node->ptr, node->cookie);
            } else {
                list_del_init(&w->entry);
                if (!weak && !strong) {
                    binder_debug(BINDER_DEBUG_INTERNAL_REFS,
                             "%d:%d node %d u%p c%p deleted\n",
                             proc->pid, thread->pid, node->debug_id,
                             node->ptr, node->cookie);
                    rb_erase(&node->rb_node, &proc->nodes);
                    kfree(node);
                    binder_stats_deleted(BINDER_STAT_NODE);
                } else {
                    binder_debug(BINDER_DEBUG_INTERNAL_REFS,
                             "%d:%d node %d u%p c%p state unchanged\n",
                             proc->pid, thread->pid, node->debug_id, node->ptr,
                             node->cookie);
                }
            }
        } break;
The logic here relies on binder_node.has_strong_ref/has_weak_ref to decide whether the user-space BBinder needs to be touched at all, which avoids hammering the user layer and saves time.
In the current scenario binder_transaction() called binder_inc_ref(), bumping internal_strong_refs, and the binder_node has only just been created, so the second if branch is taken and BR_ACQUIRE is returned; it is handled in IPCThreadState::executeCommand():
case BR_ACQUIRE:
        refs = (RefBase::weakref_type*)mIn.readInt32();
        obj = (BBinder*)mIn.readInt32();
        ALOG_ASSERT(refs->refBase() == obj,
                   "BR_ACQUIRE: object %p does not match cookie %p (expected %p)",
                   refs, obj, refs->refBase());
     //increase strong refs. 
        obj->incStrong(mProcess.get());
        IF_LOG_REMOTEREFS() {
            LOG_REMOTEREFS("BR_ACQUIRE from driver on %p", obj);
            obj->printRefs();
        }
     //向binder写入BC_ACQUIRE_DONE。 
        mOut.writeInt32(BC_ACQUIRE_DONE);
        mOut.writeInt32((int32_t)refs);
        mOut.writeInt32((int32_t)obj);
        break;
After that it writes a BC_ACQUIRE_DONE command back to binder, which handles it as follows:
case BC_INCREFS_DONE:
        case BC_ACQUIRE_DONE: {
            void __user *node_ptr;
            void *cookie;
            struct binder_node *node;

            if (get_user(node_ptr, (void * __user *)ptr))
                return -EFAULT;
            ptr += sizeof(void *);
            if (get_user(cookie, (void * __user *)ptr))
                return -EFAULT;
            ptr += sizeof(void *);
            node = binder_get_node(proc, node_ptr);
            if (node == NULL) {
                binder_user_error("%d:%d %s u%p no match\n",
                    proc->pid, thread->pid,
                    cmd == BC_INCREFS_DONE ?
                    "BC_INCREFS_DONE" :
                    "BC_ACQUIRE_DONE",
                    node_ptr);
                break;
            }
            if (cookie != node->cookie) {
                binder_user_error("%d:%d %s u%p node %d cookie mismatch %p != %p\n",
                    proc->pid, thread->pid,
                    cmd == BC_INCREFS_DONE ?
                    "BC_INCREFS_DONE" : "BC_ACQUIRE_DONE",
                    node_ptr, node->debug_id,
                    cookie, node->cookie);
                break;
            }
            if (cmd == BC_ACQUIRE_DONE) {
                if (node->pending_strong_ref == 0) {
                    binder_user_error("%d:%d BC_ACQUIRE_DONE node %d has no pending acquire request\n",
                        proc->pid, thread->pid,
                        node->debug_id);
                    break;
                }
                node->pending_strong_ref = 0;//reset flag set in BINDER_WORK_NODE.
            } else {
                if (node->pending_weak_ref == 0) {
                    binder_user_error("%d:%d BC_INCREFS_DONE node %d has no pending increfs request\n",
                        proc->pid, thread->pid,
                        node->debug_id);
                    break;
                }
                node->pending_weak_ref = 0;//reset flag set in BINDER_WORK_NODE.
            }
             //执行 local_strong_refs  or local_weak_refs descrease, 在binder_thread_read()中处理BINDER_WORK_NODE中做了 increase。 
            binder_dec_node(node, cmd == BC_ACQUIRE_DONE, 0);
            binder_debug(BINDER_DEBUG_USER_REFS,
                     "%d:%d %s node %d ls %d lw %d\n",
                     proc->pid, thread->pid,
                     cmd == BC_INCREFS_DONE ? "BC_INCREFS_DONE" : "BC_ACQUIRE_DONE",
                     node->debug_id, node->local_strong_refs, node->local_weak_refs);
            break;
        }
Once this is done, the thread enters binder_thread_read() and waits for work.

3.3.2.2 ServiceManager's response
Let's see how ServiceManager handles the incoming ADD_SERVICE message. Skipping the reading of the message, we go straight to the processing: again binder_parse() calls svcmgr_handler(). This time the message header is complete, carrying the strict-mode flag and the interface string, so the message gets real processing:

case SVC_MGR_ADD_SERVICE:
         //获取add的service的name. 
        s = bio_get_string16(msg, &len);
         //获得对应的handle。 
        ptr = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, ptr, txn->sender_euid, allow_isolated))
            return -1;
        break;
A look at bio_get_ref():
void *bio_get_ref(struct binder_io *bio)
{
     //binder_object和flat_binder_object结构一样。 
    struct binder_object *obj;

     //从binder_io中读取binder_object,实际就是从传递来的binder_transaction_data中读取flat_binder_object。 
    obj = _bio_get_obj(bio);
    if (!obj)
        return 0;
    //remote handler,在binder中我们已经把pointer转换成了binder_ref.desc,也就是handle了。 
    if (obj->type == BINDER_TYPE_HANDLE)
        return obj->pointer;

    return 0;
}
Inside binder the flat_binder_object has already been rewritten to type BINDER_TYPE_HANDLE, and its pointer field now holds binder_ref.desc.
Now the final handler:

int do_add_service(struct binder_state *bs,
                   uint16_t *s, unsigned len,
                   void *ptr, unsigned uid, int allow_isolated)
{
    struct svcinfo *si;
    //ALOGI("add_service('%s',%p,%s) uid=%d\n", str8(s), ptr,
    //        allow_isolated ? "allow_isolated" : "!allow_isolated", uid);

    if (!ptr || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(uid, s)) {
        ALOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n",
             str8(s), ptr, uid);
        return -1;
    }

     //查找是否已经有注册了这样的service。 
    si = find_svc(s, len);
    if (si) {
        if (si->ptr) {
            ALOGE("add_service('%s',%p) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s), ptr, uid);
            svcinfo_death(bs, si);
        }
        si->ptr = ptr;
    } else {
    //添加新的service信息。 
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n",
                 str8(s), ptr, uid);
            return -1;
        }
        si->ptr = ptr;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;
        svclist = si;
    }

     //写入BC_ACQUIRE命令。 
    binder_acquire(bs, ptr);
     //写入BC_REQUEST_DEATH_NOTIFICATION命令。 
    binder_link_to_death(bs, ptr, &si->death);
    return 0;
}
In do_add_service(), after the service information has been added to the list, two commands are written to binder: BC_ACQUIRE and BC_REQUEST_DEATH_NOTIFICATION.
Back in svcmgr_handler(), the return value 0 is written:

bio_put_uint32(reply, 0);
Finally, back in binder_parse(), binder_send_reply() is called, putting BC_FREE_BUFFER and BC_REPLY into the write buffer. By this point our write buffer contains the following:
1. BC_ACQUIRE, whose argument is the handle, i.e. binder_ref.desc.
2. BC_REQUEST_DEATH_NOTIFICATION, whose first argument is the handle and whose second is a pointer to the user-level binder_death structure.
3. BC_FREE_BUFFER
4. BC_REPLY

Let's look at how binder handles each of these commands:
BC_ACQUIRE
Handled in binder_thread_write(): the binder_ref is looked up from the desc value passed in, and binder_inc_ref() is called to raise its strong reference.
BC_REQUEST_DEATH_NOTIFICATION
A binder_ref_death object is constructed for the corresponding binder_ref.
BC_FREE_BUFFER
We saw this command when analysing PING_TRANSACTION; it frees the binder_buffer allocated in binder_transaction(). The difference this time lies in how binder_transaction_buffer_release() treats the objects the buffer contains: our buffer now holds one object of type BINDER_TYPE_BINDER, so the following is performed:

binder_dec_ref(ref, fp->type == BINDER_TYPE_HANDLE);
which matches the operation performed earlier when the object was translated in binder_transaction():
binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
                       &thread->todo);
BC_REPLY
This step is the same as in the PING_TRANSACTION flow, except that the returned data carries flags of 0 rather than TF_STATUS_CODE and contains only the return value 0. A binder_transaction is created, a BINDER_WORK_TRANSACTION work is queued on the target thread/proc's todo, and a BINDER_WORK_TRANSACTION_COMPLETE work is queued on the current thread's todo.


Afterwards, binder_thread_read() turns the BINDER_WORK_TRANSACTION_COMPLETE work into the BR_TRANSACTION_COMPLETE command and returns it to user space, where ServiceManager handles it (doing nothing).


3.3.2.3 SampleService processes the reply
SampleService now receives the BINDER_WORK_TRANSACTION work; from here on the handling is the same as for PING_TRANSACTION, coming back with a BR_REPLY command carrying a binder_transaction_data that contains nothing but the return value 0.

case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                           //将binder_transaction_data中的buffer传给reply Parcel,这些buffer不能直接用free去释放,
                       //注意最后两个参数是用来释放这些buffer的,所以Parcel在析构的时候调用了freeBuffer()函数去释放。
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {//只有status code返回,没有数据信息的情况,读取信息后,释放掉buffer。
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;
IPCThreadState::transact() handles BR_REPLY and returns NO_ERROR, so BpServiceManager::addService() ultimately returns NO_ERROR:
return err == NO_ERROR ? reply.readExceptionCode() : err;


With that, the whole BpServiceManager::addService() call flow is complete. The important points here (a condensed registration sketch follows the list) are:

  • how the BBinder object is passed across
  • how the binder_node is created
  • how the binder_ref is created
  • how the handle is obtained
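Tying these together, the service side of the story condenses to something like the following, assuming the SampleService class from the earlier sample code (everything else is the standard libbinder API):

#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>
#include <utils/String16.h>

using namespace android;

int main()
{
    //BpServiceManager wrapped around handle 0, as analysed above.
    sp<IServiceManager> sm = defaultServiceManager();

    //writeStrongBinder() flattens this BBinder; the driver creates a binder_node
    //in this process and a binder_ref (with its handle/desc) in servicemanager.
    sp<IBinder> service = new SampleService();
    sm->addService(String16("SampleService"), service, false);

    //Serve incoming BR_TRANSACTIONs: the main thread joins the pool with
    //BC_ENTER_LOOPER, and more threads are spawned on BR_SPAWN_LOOPER.
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
    return 0;
}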


-------------------------------------------

by sky


