Binder Series 9: Getting a Binder Service

1 Overview

Getting a Binder service, like registering one, is done by sending a request to SMgr. The only difference is the request code, that is, which SMgr function is being requested; inside SMgr this simply maps to a different handler function. Most of the flow and content matches Binder Series 8 (Binder service registration), so keep that article in mind. As before, we walk through the detailed flow using the Media service as the example.

2 Getting the Media Service

Browsing the Android source shows that many places in the system obtain the Media service. We take the most common form of that code as our example:

  //get the SMgr proxy, covered in earlier articles
  sp<IServiceManager> sm = defaultServiceManager();
  //get the service named "media.player", i.e. the Media service
  sp<IBinder> binder = sm->getService(String16("media.player"));

The defaultServiceManager() path was covered in Binder Series 3 (ServiceManager startup and implementation); recall that its final return value is a BpServiceManager.

2.1 BpServiceManager.getService

Next, look at BpServiceManager's getService function:

class BpServiceManager : public BpInterface<IServiceManager>
{
    virtual sp<IBinder> getService(const String16& name) const
    {
        sp<IBinder> svc = checkService(name);//first attempt to fetch the service
        if (svc != NULL) return svc;
        const bool isVendorService =
            strcmp(ProcessState::self()->getDriverName().c_str(), "/dev/vndbinder") == 0;
        const long timeout = uptimeMillis() + 5000;//5-second deadline
        if (!gSystemBootCompleted) {
            char bootCompleted[PROPERTY_VALUE_MAX];
            property_get("sys.boot_completed", bootCompleted, "0");
            gSystemBootCompleted = strcmp(bootCompleted, "1") == 0 ? true : false;
        }
        // retry interval in milliseconds
        const long sleepTime = gSystemBootCompleted ? 1000 : 100;
        int n = 0;
        while (uptimeMillis() < timeout) {
            n++;
            if (isVendorService) {
                CallStack stack(LOG_TAG);
            } else if (n%10 == 0) {
                ALOGI("Waiting for service %s...", String8(name).string());
            }
            usleep(1000*sleepTime);//sleep for sleepTime milliseconds
            sp<IBinder> svc = checkService(name);//retry the lookup
            if (svc != NULL) return svc;
        }
        return NULL;
    }
};

So BpServiceManager fetches the Media service by first probing whether it exists via checkService; if it does, the service is returned immediately. If not, getService sets a 5-second deadline and keeps retrying checkService within that window.
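The retry policy above can be sketched in isolation. This is a minimal illustration, not AOSP code: `poll_for_service`, `probe`, `now_ms`, and `sleep_ms` are hypothetical stand-ins for checkService(), uptimeMillis(), and usleep().

```cpp
#include <cassert>
#include <functional>

// Sketch of getService's retry policy (hypothetical names, not the AOSP API):
// poll `probe` until it returns a non-negative handle or `timeout_ms` elapses.
// `now_ms` stands in for uptimeMillis(), `sleep_ms` for usleep().
int poll_for_service(const std::function<int()>& probe,
                     const std::function<long()>& now_ms,
                     const std::function<void(long)>& sleep_ms,
                     long timeout_ms, long interval_ms) {
    int handle = probe();             // first attempt, like the initial checkService()
    if (handle >= 0) return handle;
    const long deadline = now_ms() + timeout_ms;
    while (now_ms() < deadline) {     // keep retrying until the deadline
        sleep_ms(interval_ms);        // sleepTime: 1000ms after boot, 100ms during it
        handle = probe();
        if (handle >= 0) return handle;
    }
    return -1;                        // getService returns NULL in this case
}
```

The structure matches the loop above: one eager attempt, then sleep-and-retry until the deadline passes.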

2.2 BpServiceManager.checkService

virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);//write the service name into the data parcel
        //call BpBinder's transact to perform the cross-process call; reply
        //will hold the returned data (essentially the BpBinder object)
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }

Here remote() is the BpBinder. The flow from this point on is similar to service registration, so we only list the relevant code without repeating the detailed explanations.
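Before moving on, it may help to see what writeString16 conceptually puts into the parcel. The sketch below is a simplified model with hypothetical helpers, not the real Parcel class; it assumes a 32-bit length prefix, the UTF-16 payload, and padding to a 4-byte boundary (the real Parcel has additional details, such as a terminating NUL).

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Simplified model of the request parcel layout (hypothetical helpers, not
// the real Parcel class): a 32-bit length prefix, the UTF-16 code units,
// then padding to a 4-byte boundary.
void write_string16(std::vector<uint8_t>& buf, const std::u16string& s) {
    uint32_t len = static_cast<uint32_t>(s.size());
    const uint8_t* lp = reinterpret_cast<const uint8_t*>(&len);
    buf.insert(buf.end(), lp, lp + sizeof(len));              // length prefix
    const uint8_t* dp = reinterpret_cast<const uint8_t*>(s.data());
    buf.insert(buf.end(), dp, dp + len * sizeof(char16_t));   // UTF-16 payload
    while (buf.size() % 4) buf.push_back(0);                  // 4-byte alignment
}

std::u16string read_string16(const std::vector<uint8_t>& buf, size_t& pos) {
    uint32_t len;
    std::memcpy(&len, buf.data() + pos, sizeof(len));
    pos += sizeof(len);
    std::u16string s(reinterpret_cast<const char16_t*>(buf.data() + pos), len);
    pos += len * sizeof(char16_t);
    pos = (pos + 3) & ~static_cast<size_t>(3);                // skip padding
    return s;
}
```

The point is only that the parcel is a flat, self-describing byte stream: whatever the writer appends in order, the reader consumes in the same order.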

2.3 BpBinder::transact

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {//delegate to IPCThreadState's transact
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}

2.4 IPCThreadState::transact

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{   ........
    if (err == NO_ERROR) {
        //pack the data into the internal mOut buffer
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    ........
    if ((flags & TF_ONE_WAY) == 0) {//flags defaults to 0, so this is NOT one-way: a synchronous call
        ........
        if (reply) {//reply is non-null here
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        ........
    } else {
        err = waitForResponse(NULL, NULL);
    }
    return err;
}

2.5 IPCThreadState::waitForResponse

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;
    while (1) {
        // talkWithDriver() performs the actual exchange with the Binder driver
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        cmd = (uint32_t)mIn.readInt32();
        ........
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;
        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        ........
        default:
           //note executeCommand(): it handles commands such as BR_TRANSACTION coming back from the driver
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
    ........
    return err;
}

2.6 IPCThreadState::talkWithDriver

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ........
    binder_write_read bwr;//the binder_write_read used to exchange data with the driver
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    //point bwr.write_buffer at mOut, which carries the outgoing data
    bwr.write_buffer = (uintptr_t)mOut.data();
    ........
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
       ........
#if defined(__ANDROID__)
        //exchange data with the Binder driver via ioctl, passing bwr down
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        ........
    } while (err == -EINTR);
    ........
    return err;
}
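The decision talkWithDriver makes about bwr.write_size and bwr.read_size can be distilled into a tiny pure function. `plan_bwr` is a hypothetical name; the logic mirrors the needRead/doReceive conditions above.

```cpp
#include <cassert>
#include <cstddef>

// Sketch of how talkWithDriver sizes the binder_write_read request
// (hypothetical helper, mirroring the needRead/doReceive conditions):
// hand mOut to the driver only if the caller wants to receive or the read
// buffer is fully consumed, and ask to read only when a reply is wanted.
struct BwrSizes { std::size_t write_size; std::size_t read_size; };

BwrSizes plan_bwr(bool doReceive, std::size_t out_pending,
                  std::size_t in_pos, std::size_t in_size,
                  std::size_t read_capacity) {
    const bool needRead = in_pos >= in_size;   // is mIn fully consumed?
    BwrSizes s;
    s.write_size = (!doReceive || needRead) ? out_pending : 0;
    s.read_size  = (doReceive && needRead) ? read_capacity : 0;
    return s;
}
```

For the checkService call, mIn starts out empty, so both sizes are non-zero: the same ioctl writes BC_TRANSACTION and then blocks reading the reply.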

3 Processing in the Binder Driver

3.1 The binder_ioctl function

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	//as covered earlier, filp->private_data yields the binder_proc of the requesting process
	struct binder_proc *proc = filp->private_data;	
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);//size field encoded in the ioctl command
	void __user *ubuf = (void __user *)arg;//user-space argument
	........ 
    //look up the current thread in proc's threads tree; if absent, create a
    //new binder_thread and insert it (this is how the threads tree is built)
	thread = binder_get_thread(proc);
	........ 
	switch (cmd) {//dispatch on the command
	case BINDER_WRITE_READ:
		ret = binder_ioctl_write_read(filp, cmd, arg, thread);
		if (ret)
			goto err;
		break;
    ........
	return ret;
}

3.2 binder_ioctl_write_read

static int binder_ioctl_write_read(struct file *filp,
				unsigned int cmd, unsigned long arg,
				struct binder_thread *thread)
{
	int ret = 0;
	struct binder_proc *proc = filp->private_data;//again, the calling process from filp
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;//user-space argument
	struct binder_write_read bwr;//kernel-space binder_write_read
	........
	//copy the user-space data ubuf into the kernel-space bwr
	if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
		........
	}
	........			
	if (bwr.write_size > 0) {//perform the write via binder_thread_write
		ret = binder_thread_write(proc, thread,
					  bwr.write_buffer,
					  bwr.write_size,
					  &bwr.write_consumed);
		........
	}
	if (bwr.read_size > 0) {//read_size is also > 0 here: after writing, the caller blocks in binder_thread_read until the reply arrives
		ret = binder_thread_read(proc, thread, bwr.read_buffer,
					 bwr.read_size,
					 &bwr.read_consumed,
					 filp->f_flags & O_NONBLOCK);
		........
		if (!binder_worklist_empty_ilocked(&proc->todo))
			binder_wakeup_proc_ilocked(proc);
		........
	}
	........
	if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {//copy the kernel-space bwr back to user space
		........
	}
out:
	return ret;
}

3.3 binder_thread_write

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
	uint32_t cmd;
	struct binder_context *context = proc->context;//the driver's global context
	void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;
	while (ptr < end && thread->return_error.cmd == BR_OK) {
		int ret;
		//copy the cmd from user space into the kernel; here it is BC_TRANSACTION
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		........
		switch (cmd) {		
		........
		//we arrive here with BC_TRANSACTION
		case BC_TRANSACTION:
		case BC_REPLY: {
		//kernel-space binder_transaction_data
			struct binder_transaction_data tr;
			//copy the user-space binder_transaction_data into the kernel-space tr
			if (copy_from_user(&tr, ptr, sizeof(tr)))
				return -EFAULT;
			ptr += sizeof(tr);
			//enter the core function binder_transaction
			binder_transaction(proc, thread, &tr,
					   cmd == BC_REPLY, 0);
			break;
		}	
		........
		}
		*consumed = ptr - buffer;
	}
	return 0;
}

3.4 binder_transaction

static void binder_transaction(struct binder_proc *proc,
			       struct binder_thread *thread,
			       struct binder_transaction_data *tr, int reply,
			       binder_size_t extra_buffers_size)
{
	int ret;
	struct binder_transaction *t;//the transaction to send
	struct binder_work *tcomplete;//tells the sending thread the command was delivered
	........
	struct binder_proc *target_proc = NULL;//target process
	struct binder_thread *target_thread = NULL;//target thread
	struct binder_node *target_node = NULL;//target binder_node
	........
	if (reply) {
	  ........
	} else {//the command here is BC_TRANSACTION, so reply is false
    	//tr->target identifies the destination of the transmission; this query
    	//is aimed at SMgr, so target.handle is 0, standing for the SMgr proxy
		if (tr->target.handle) {//not taken here
			........
		} else {//handle 0 means the target process is SMgr
			........
			//take SMgr's binder node from the global context:
			//binder_context_mgr_node becomes target_node
			target_node = context->binder_context_mgr_node;
			if (target_node)
    			//from target_node find its owning process target_proc,
    			//managing the node's reference counts along the way
				target_node = binder_get_node_refs_for_txn(
						target_node, &target_proc,
						&return_error);
			........
		}
		//we now have the target node target_node and its process target_proc
		........
		//flags is 0; if the thread has a transaction stack, search it for a target thread
		if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
			struct binder_transaction *tmp;
			tmp = thread->transaction_stack;
			........
			while (tmp) {//walk the current thread's transaction stack looking for a reusable thread
				struct binder_thread *from;
				........
				from = tmp->from;
				//if a thread from the target process is waiting in the stack, it can serve as the target thread
				if (from && from->proc == target_proc) {
					atomic_inc(&from->tmp_ref);
					target_thread = from;
					spin_unlock(&tmp->lock);
					break;
				}
				........
				tmp = tmp->from_parent;
			}
		}
		........
	}
	........
	//allocate the transaction t
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	........
    //allocate tcomplete
	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	........
	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;//flags is 0, so record the sending thread in t->from
	else
		t->from = NULL;//one-way needs no reply, so the sender need not be recorded
	//initialize the transaction
	t->sender_euid = task_euid(proc->tsk);
	t->to_proc = target_proc;//target process
	t->to_thread = target_thread;//target thread, if one was found
	t->code = tr->code;//function code
	t->flags = tr->flags;//sync/async flags
	........
    //allocate a kernel buffer for t from the target process
	t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
		tr->offsets_size, extra_buffers_size,
		!reply && (t->flags & TF_ONE_WAY));
	........
	//continue initializing t->buffer
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	........
	//the code below unflattens each Binder object (flat_binder_object) out of
	//the transferred data and maintains the refs and nodes red-black trees
	//off_start is the start of t's offsets array: current position + aligned data size
	off_start = (binder_size_t *)(t->buffer->data +
				      ALIGN(tr->data_size, sizeof(void *)));
	offp = off_start;
   //copy tr's data buffer from user space into
   //transaction t's kernel buffer
	if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
			   tr->data.ptr.buffer, tr->data_size)) {
		........
	}
	//copy tr's offsets array from user space
	//into t's offsets area
    //offp is the start of t's offsets array
	if (copy_from_user(offp, (const void __user *)(uintptr_t)
			   tr->data.ptr.offsets, tr->offsets_size)) {
		........
	}
	........
	//off_end marks the end of the offsets array
	off_end = (void *)off_start + tr->offsets_size;
	........
	//walk the offsets array and translate each flat_binder_object found there;
	//for this checkService request the Parcel carries no binder objects, so the
	//loop body is skipped here. It runs when SMgr's reply (which carries the
	//handle) comes back through binder_transaction
	for (; offp < off_end; offp++) {
		struct binder_object_header *hdr;
		........
		//fetch the binder_object_header at offset *offp in t's buffer
		hdr = (struct binder_object_header *)(t->buffer->data + *offp);
		off_min = *offp + object_size;
		switch (hdr->type) {//during registration the type is BINDER_TYPE_BINDER; SMgr's reply to a query carries BINDER_TYPE_HANDLE
		case BINDER_TYPE_BINDER:
		case BINDER_TYPE_WEAK_BINDER: {
			struct flat_binder_object *fp;
            //recover the flat_binder_object from hdr
			fp = to_flat_binder_object(hdr);
			//look up fp->binder in the source process's nodes tree; create a
			//binder_node if absent and insert it into the nodes tree;
            //create a binder_ref in the target process pointing at that node,
            //insert it into the target's refs_by_desc and refs_by_node trees,
            //and rewrite fp's type to BINDER_TYPE_HANDLE.
			ret = binder_translate_binder(fp, t, thread);
			........
		} break;
		case BINDER_TYPE_HANDLE:
		case BINDER_TYPE_WEAK_HANDLE: {
			........
		} break;
		........
		default:
			........
		}
	}	
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;//set tcomplete's type
	t->work.type = BINDER_WORK_TRANSACTION;//set transaction t's type

	if (reply) {
		........
	} else if (!(t->flags & TF_ONE_WAY)) {//this branch is taken
		........
		//queue tcomplete on the sending thread's todo list
		binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete);
		t->need_reply = 1;//synchronous, so a reply is required
		t->from_parent = thread->transaction_stack;
		//push the outgoing transaction t onto the sending thread's stack
		thread->transaction_stack = t;
		........
		//queue t on the target thread's or process's (sync or async) todo list and wake the target's wait queue
		if (!binder_proc_transaction(t, target_proc, target_thread)) {
			........
		}
	} else {
		........
	}	
	........
}
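The transaction-stack walk that picks target_thread is easy to model on its own. Below is a simplified sketch: no locking or tmp_ref counting, and all type names are stand-ins rather than the kernel structures.

```cpp
#include <cassert>
#include <cstddef>

// Simplified sketch of binder_transaction's transaction-stack walk (no
// locking or tmp_ref counting; all type names are stand-ins): look for a
// thread of the target process that is already waiting on us, so it can be
// reused as the target thread.
struct Proc {};
struct Thread { Proc* proc; };
struct Transaction {
    Thread* from;              // thread that sent this transaction
    Transaction* from_parent;  // next entry down the stack
};

Thread* pick_target_thread(Transaction* stack_top, Proc* target_proc) {
    for (Transaction* t = stack_top; t; t = t->from_parent)
        if (t->from && t->from->proc == target_proc)
            return t->from;    // a target-process thread is waiting on us
    return nullptr;            // none found: the driver wakes an idle thread instead
}
```

Reusing a waiting thread this way is what keeps nested synchronous calls (A calls B, B calls back into A) from deadlocking.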

4 Processing in SMgr

We know the target of this transmission is again SMgr. Recall from the article on SMgr's startup that it has a single thread, looping in binder_loop and continuously reading data so it can serve service-add and service-query requests. Back to SMgr's binder_loop function:

4.1 binder_loop

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;//the binder_write_read exchanged with the driver
    uint32_t readbuf[32];
 
    bwr.write_size = 0;//write_size 0 means read only, no write
    bwr.write_consumed = 0;
    bwr.write_buffer = 0; 
    readbuf[0] = BC_ENTER_LOOPER;
    //send BC_ENTER_LOOPER to the driver: this thread is entering its loop state
    binder_write(bs, readbuf, sizeof(uint32_t));
    for (;;) {//endless loop
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;
        //read data from the Binder driver via ioctl
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
 
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }
        //parse what was read via binder_parse
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

4.2 binder_parse

int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;
 
    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
       ........
        case BR_TRANSACTION: {
    struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);//dump txn
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;
                bio_init(&reply, rdata, sizeof(rdata), 4);//initialize reply
                bio_init_from_txn(&msg, txn);//bind msg to txn's data
                res = func(bs, txn, &msg, &reply);
                if (txn->flags & TF_ONE_WAY) {
                    binder_free_buffer(bs, txn->data.ptr.buffer);
                } else {//flags is 0, so this branch: send reply back to the requesting process
                    binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
                }
            }
            ptr += sizeof(*txn);
            break;
        }
        ........
        }
    }
    return r;
}

4.3 svcmgr_handler

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;     
    ........ 
    switch(txn->code) {//dispatch on the code; here it is SVC_MGR_CHECK_SERVICE
    //equals GET_SERVICE_TRANSACTION
    case SVC_MGR_GET_SERVICE:
    //equals CHECK_SERVICE_TRANSACTION
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);//read the service name
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;
    case SVC_MGR_ADD_SERVICE:
        ........ 
    case SVC_MGR_LIST_SERVICES: {
       ........
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    } 
    bio_put_uint32(reply, 0);
    return 0;
}

Unlike adding a service, this is a service query, with code SVC_MGR_CHECK_SERVICE. The handler first reads the requested service's name from the parameters, then calls do_find_service to look up the handle for that name:

4.4 do_find_service

uint32_t do_find_service(const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
    struct svcinfo *si = find_svc(s, len);//search the svclist for the named service
    if (!si || !si->handle) {
        return 0;
    }
    if (!si->allow_isolated) {//may this service be accessed from isolated processes?
        // If this service doesn't allow access from isolated processes,
        // then check the uid to see if it is isolated.
        uid_t appid = uid % AID_USER;
        if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
            return 0;
        }
    }
    if (!svc_can_find(s, len, spid, uid)) {//permission check
        return 0;
    }
    return si->handle;
}

4.5 find_svc

struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;
    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}
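find_svc is a plain linked-list scan keyed on the pair (length, UTF-16 bytes). A standalone sketch with hypothetical types makes the comparison explicit:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Standalone sketch of find_svc's scan (hypothetical types, same shape as
// the SMgr code): compare the length first, then the raw UTF-16 bytes.
struct svcinfo {
    svcinfo* next;
    uint32_t handle;
    std::size_t len;
    const uint16_t* name;
};

svcinfo* find_svc_sketch(svcinfo* svclist, const uint16_t* s16, std::size_t len) {
    for (svcinfo* si = svclist; si; si = si->next)
        if (len == si->len &&
            !std::memcmp(s16, si->name, len * sizeof(uint16_t)))
            return si;         // name matches: this entry holds the handle
    return nullptr;            // unknown service
}
```

Checking the length before memcmp cheaply rejects most non-matches, exactly as the real find_svc does.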

Finally, bio_put_ref reserves space in reply for a flat_binder_object and stores the handle it found in that object's handle field; the reply is then sent back to the requesting process. Next we look at how the requesting process turns this into a Binder proxy.

5 Handling the Result

Let's look once more at BpServiceManager.checkService:

virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);//write the service name into the data parcel
        //call BpBinder's transact to perform the cross-process call; reply
        //will hold the returned data (essentially the BpBinder object)
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }

The return value comes from reply's readStrongBinder(); let's analyze that function.

5.1 Parcel::readStrongBinder

sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    // Note that a lot of code in Android reads binders by hand with this
    // method, and that code has historically been ok with getting nullptr
    // back (while ignoring error codes).
    readNullableStrongBinder(&val);
    return val;
}
status_t Parcel::readNullableStrongBinder(sp<IBinder>* val) const
{
    return unflatten_binder(ProcessState::self(), *this, val);
}

5.2 Parcel's unflatten_binder

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
   //read the flat_binder_object
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            //for a cross-process query the returned type is handle
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

5.3 ProcessState::getStrongProxyForHandle

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    //look up the handle_entry for this handle
    handle_entry* e = lookupHandleLocked(handle);
    if (e != NULL) {
        IBinder* b = e->binder;
        //if binder is null, create a new BpBinder; otherwise return the cached one
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }
            b = new BpBinder(handle); 
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {            
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
  //look up the handle_entry with this handle in mHandleToObject; if absent,
  //grow the vector with new entries and return the slot (note: the new entries' contents are null)
    const size_t N=mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}
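lookupHandleLocked's grow-on-demand table, combined with the lazy BpBinder creation in getStrongProxyForHandle, can be modeled with an ordinary vector. This is a conceptual sketch only: strings stand in for BpBinder pointers, and the weak-reference bookkeeping is omitted.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Conceptual sketch of lookupHandleLocked + getStrongProxyForHandle
// (strings stand in for BpBinder pointers; no weak-reference handling):
// the table grows on demand with null entries, and a proxy is created
// lazily the first time a handle is looked up.
struct Entry { std::string proxy; };   // empty string plays the role of a NULL binder

class HandleTable {
    std::vector<Entry> table_;         // the mHandleToObject equivalent
public:
    Entry& lookup(std::size_t handle) {
        if (table_.size() <= handle)
            table_.resize(handle + 1); // insert empty entries up to `handle`
        return table_[handle];
    }
    const std::string& proxy_for(std::size_t handle) {
        Entry& e = lookup(handle);
        if (e.proxy.empty())           // b == NULL: create the proxy now
            e.proxy = "BpBinder(" + std::to_string(handle) + ")";
        return e.proxy;                // later lookups reuse the cached proxy
    }
    std::size_t size() const { return table_.size(); }
};
```

The design choice mirrored here is that each process keeps at most one proxy object per handle, so repeated getService calls for the same service yield the same BpBinder.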

5.4 Parcel's finish_unflatten_binder

inline static status_t finish_unflatten_binder(
    BpBinder* /*proxy*/, const flat_binder_object& /*flat*/,
    const Parcel& /*in*/)
{
    return NO_ERROR;//just returns the status code
}

6 Notes

Requesting a service means querying the SMgr process for the named service. When binder_transaction() runs, it distinguishes which process the requested service belongs to:

  • If the requesting process and the service live in different processes, a binder_ref pointing at the service's binder_node is created for the requesting process, and readStrongBinder() finally returns a BpBinder object.
  • If they live in the same process, no new object is created; the reference count is simply incremented and the type is rewritten to BINDER_TYPE_BINDER, so readStrongBinder() finally returns the BBinder subclass itself.
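The two bullets above amount to a small decision rule applied to the object in SMgr's reply. A hedged sketch follows; all names are hypothetical, and the real logic lives in the driver's handle translation:

```cpp
#include <cassert>
#include <cstdint>

// Hedged sketch of the decision applied to the object in SMgr's reply
// (all names hypothetical): if the referenced node belongs to the
// requesting process, hand back the local object as BINDER_TYPE_BINDER;
// otherwise keep it a BINDER_TYPE_HANDLE backed by a ref.
enum ObjType { TYPE_BINDER, TYPE_HANDLE };

struct Node { int owner_pid; uintptr_t cookie; };  // cookie: the local BBinder pointer

ObjType translate_reply_object(const Node& node, int requester_pid,
                               uintptr_t* out_cookie, uint32_t* out_handle,
                               uint32_t next_free_handle) {
    if (node.owner_pid == requester_pid) {
        *out_cookie = node.cookie;  // readStrongBinder() yields the BBinder itself
        return TYPE_BINDER;
    }
    *out_handle = next_free_handle; // a ref/handle is set up in the requester
    return TYPE_HANDLE;             // readStrongBinder() yields a BpBinder
}
```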