Binder Series 8: Registering a Binder Service

1 Overview

In the three previous articles on the Binder transport mechanism, we walked through the whole path from request initiation, through driver routing, to request handling. Now let's reinforce that understanding with a concrete example. This article focuses on how the server side registers a Binder service entity, taking the registration of the Media service as the example. First, the sequence diagram:

[Sequence diagram: registering MediaPlayerService with the ServiceManager]

2 Registering MediaPlayerService

2.1 The MediaPlayerService entry point

MediaPlayerService is registered in the main() function of main_mediaserver.cpp, shown below:

int main(int argc __unused, char **argv __unused)
{
    signal(SIGPIPE, SIG_IGN);
    //covered in Series 7: obtain the ProcessState instance
    sp<ProcessState> proc(ProcessState::self());
    //covered in Series 7: obtain the SMgr proxy, i.e. the BpServiceManager object
    sp<IServiceManager> sm(defaultServiceManager());
    ALOGI("ServiceManager: %p", sm.get());
    InitializeIcuOrDie();
    MediaPlayerService::instantiate();//register the multimedia service
    ResourceManagerService::instantiate();
    registerExtensions();
    ProcessState::self()->startThreadPool();//start the Binder thread pool
    IPCThreadState::self()->joinThreadPool();//add the current thread to the thread pool
}

In main(), ProcessState::self() is called first to obtain the ProcessState object. As we already know, the ProcessState constructor opens the Binder driver and mmaps a region of memory, laying the groundwork for subsequent Binder communication. Next, defaultServiceManager() is called to obtain BpServiceManager, the proxy object for SMgr. Whether a server registers a service or a client looks one up, SMgr is always involved, so this BpServiceManager must be obtained before either operation can proceed. How ProcessState and BpServiceManager are obtained was covered in Series 7 and is not repeated here; our focus is the registration of MediaPlayerService.

2.2 Registering the Media service: instantiate()

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}

As you can see, the service is registered by calling BpServiceManager's addService function. It takes two parameters: the first is the Binder's name as a string, and the second is a MediaPlayerService object. That MediaPlayerService object is itself a Binder entity. Why? Because it inherits from BnMediaPlayerService, which inherits from BnInterface, which in turn inherits from BBinder; in other words, it is a Binder entity.
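
To make that inheritance chain concrete, here is a minimal sketch (the class names are real, but the bodies are schematic and omit almost all members):

class BBinder : public IBinder { /* the local (entity) side of a Binder */ };

template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder { /* binds an interface to BBinder */ };

class BnMediaPlayerService : public BnInterface<IMediaPlayerService> {
    // onTransact() dispatches incoming transactions to the service implementation
};

class MediaPlayerService : public BnMediaPlayerService {
    // the actual service logic; every instance is therefore a BBinder entity
};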

Next, let's see how BpServiceManager's addService function performs the registration.

class BpServiceManager : public BpInterface<IServiceManager>
{
public:
    explicit BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }
    ........
    virtual status_t addService(const String16& name, const sp<IBinder>& service,
            bool allowIsolated)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        data.writeStrongBinder(service);
        data.writeInt32(allowIsolated ? 1 : 0);
        //ultimately sends the data across processes to SMgr via remote()'s transact
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }
   ........
};

Both parameters, name and service, are marshalled into data, a Parcel. The call to focus on is data's writeStrongBinder function, which packs the Binder entity object into the Parcel:

status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

Now look at flatten_binder. As the name says, it "flattens" the Binder entity object. The function is as follows:

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;//how a Binder is represented inside transferred data
    ........
    if (binder != NULL) {//this binder is the MediaPlayerService object
      //localBinder() returns the object itself if it is a BBinder, otherwise NULL
        IBinder *local = binder->localBinder();//MediaPlayerService is a BBinder, so this is non-NULL
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.binder = 0;
            obj.handle = handle;
            obj.cookie = 0;
        } else {//local is non-NULL, so this branch runs: obj.type becomes BINDER_TYPE_BINDER
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }
   //write the flat_binder_object into the Parcel
    return finish_flatten_binder(binder, obj, out);
}
inline static status_t finish_flatten_binder(
    const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
    return out->writeObject(flat, false);
}

That is how a Binder entity object is marshalled into a Parcel: it is "flattened" into a flat_binder_object structure, which is then written into the Parcel.
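
For reference, here is a simplified sketch of flat_binder_object, following the Binder UAPI header (the field layout varies slightly across kernel versions):

struct flat_binder_object {
    struct binder_object_header hdr; // hdr.type: BINDER_TYPE_BINDER, BINDER_TYPE_HANDLE, ...
    __u32 flags;
    union {
        binder_uintptr_t binder;     // entity: pointer to the local object's weak refs
        __u32 handle;                // reference: the driver-assigned handle
    };
    binder_uintptr_t cookie;         // entity: pointer to the local BBinder itself
};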

As addService shows, the data is ultimately handed across processes to SMgr via remote()'s transact function. From Series 7 we know that this remote() is a BpBinder created around handle value 0; it is SMgr's proxy. Next we examine BpBinder's transact function.

2.3 Proxy-side transmission: BpBinder::transact

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {//call IPCThreadState's transact method
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

This ultimately calls IPCThreadState's transact():

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{   ........
    if (err == NO_ERROR) {
      //marshal the data into the internal mOut buffer
     err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    ........
    if ((flags & TF_ONE_WAY) == 0) {//flags defaults to 0, i.e. NOT TF_ONE_WAY, so this branch is taken
        ........
        if (reply) {//reply is non-NULL
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        ........
    } else {
        err = waitForResponse(NULL, NULL);
    }
    return err;
}

writeTransactionData marshals the data into IPCThreadState's internal mOut buffer:

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; 
    tr.target.handle = handle;//the handle is 0 here, because the target is SMgr
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);//write the BC_TRANSACTION command into mOut
    //right after BC_TRANSACTION, write the accompanying binder_transaction_data into mOut
    mOut.write(&tr, sizeof(tr));
    return NO_ERROR;
}
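
After writeTransactionData, mOut contains (a sketch, not byte-exact):

// | uint32_t BC_TRANSACTION | struct binder_transaction_data tr |
//
// Note that tr does not embed the payload itself: tr.data.ptr.buffer and
// tr.data.ptr.offsets still point into the Parcel living in this process.
// The driver copies that payload into kernel space later, in binder_transaction().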

Next comes the waitForResponse() function; an excerpt:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;
    while (1) {
       // talkWithDriver() performs the cross-process interaction internally
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        cmd = (uint32_t)mIn.readInt32();
        ........
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;
        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        ........
        default:
           //note executeCommand(): it handles the BR_TRANSACTION returned by the driver
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
    ........
    return err;
}

It then calls talkWithDriver() to interact with the Binder driver; an excerpt of talkWithDriver():

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ........
    binder_write_read bwr;//a binder_write_read used to exchange data with the Binder driver
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    //point bwr's write_buffer at mOut, which carries the data
    bwr.write_buffer = (uintptr_t)mOut.data();
    ........
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
       ........
#if defined(__ANDROID__)
     //interact with the Binder driver via ioctl, passing the data carrier bwr into the driver
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        ........
    } while (err == -EINTR);
    ........
    return err;
}
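
bwr is the sole argument of the BINDER_WRITE_READ ioctl. For reference, its definition from the Binder UAPI header:

struct binder_write_read {
    binder_size_t    write_size;     // bytes available in write_buffer
    binder_size_t    write_consumed; // bytes the driver consumed
    binder_uintptr_t write_buffer;   // commands from userspace (here: mOut)
    binder_size_t    read_size;      // bytes available in read_buffer
    binder_size_t    read_consumed;  // bytes the driver filled in
    binder_uintptr_t read_buffer;    // commands returned to userspace (here: mIn)
};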

3 Binder Driver Processing

3.1 The binder_ioctl function

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	//as introduced earlier: find the requesting process via filp's private_data field
	struct binder_proc *proc = filp->private_data;	
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);//size of the ioctl argument
	void __user *ubuf = (void __user *)arg;//the user-space argument
	........ 
    //look up the current thread in proc's threads tree; if it is not found,
    //create a new binder_thread and add it to the tree (this is how the threads tree is built)
	thread = binder_get_thread(proc);
	........ 
	switch (cmd) {//decode the command and dispatch accordingly
	case BINDER_WRITE_READ:
		ret = binder_ioctl_write_read(filp, cmd, arg, thread);
		if (ret)
			goto err;
		break;
    ........
	return ret;
}

3.2 binder_ioctl_write_read

static int binder_ioctl_write_read(struct file *filp,
				unsigned int cmd, unsigned long arg,
				struct binder_thread *thread)
{
	int ret = 0;
	struct binder_proc *proc = filp->private_data;//again, locate the current process via filp
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;//the user-space argument
	struct binder_write_read bwr;//a kernel-space binder_write_read
	........
	//copy the userspace data ubuf into the kernel-space bwr
	if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
		........
	}
	........			
	if (bwr.write_size > 0) {//perform the write via binder_thread_write
		ret = binder_thread_write(proc, thread,
					  bwr.write_buffer,
					  bwr.write_size,
					  &bwr.write_consumed);
		........
	}
	//talkWithDriver() was called with doReceive defaulting to true, so read_size
	//is also > 0: having finished the write above, the calling thread enters
	//binder_thread_read here and blocks until the reply arrives
	if (bwr.read_size > 0) {
		ret = binder_thread_read(proc, thread, bwr.read_buffer,
					 bwr.read_size,
					 &bwr.read_consumed,
					 filp->f_flags & O_NONBLOCK);
		........
		if (!binder_worklist_empty_ilocked(&proc->todo))
			binder_wakeup_proc_ilocked(proc);
		........
	}
	........
	if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {//copy the kernel-space bwr back out to the userspace ubuf
		........
	}
out:
	return ret;
}

3.3 binder_thread_write

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
	uint32_t cmd;
	struct binder_context *context = proc->context;//the Binder driver's global context
	void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error.cmd == BR_OK) {
		int ret;
		//copy the cmd from userspace into the kernel; here it is BC_TRANSACTION
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		........
		switch (cmd) {		
		........
		//we arrive at BC_TRANSACTION
		case BC_TRANSACTION:
		case BC_REPLY: {
		//a kernel-space binder_transaction_data
			struct binder_transaction_data tr;
			//copy the userspace binder_transaction_data into the kernel-space tr
			if (copy_from_user(&tr, ptr, sizeof(tr)))
				return -EFAULT;
			ptr += sizeof(tr);
			//call the core function binder_transaction
			binder_transaction(proc, thread, &tr,
					   cmd == BC_REPLY, 0);
			break;
		}	
		........
		}
		*consumed = ptr - buffer;
	}
	return 0;
}

3.4 binder_transaction

static void binder_transaction(struct binder_proc *proc,
			       struct binder_thread *thread,
			       struct binder_transaction_data *tr, int reply,
			       binder_size_t extra_buffers_size)
{
	int ret;
	struct binder_transaction *t;//the transaction to be sent
	struct binder_work *tcomplete;//used to tell the sending thread that the command was sent
	........
	struct binder_proc *target_proc = NULL;//target process
	struct binder_thread *target_thread = NULL;//target thread
	struct binder_node *target_node = NULL;//target binder_node
	........
	if (reply) {
	  ........
	} else {//the command here is BC_TRANSACTION, so reply is false
    	//tr's target field describes the destination; this transfer targets SMgr
    	//to register a service, so target.handle is 0, denoting the SMgr proxy
		if (tr->target.handle) {//this branch is not taken
			........
		} else {//the target handle is 0, so the target process is the SMgr process
			........
			//fetch SMgr's Binder node from the global context:
			//binder_context_mgr_node is assigned to target_node
			target_node = context->binder_context_mgr_node;
			if (target_node)
    			//from target_node, locate its owning target process target_proc
    			//and manage the reference counting on target_node
				target_node = binder_get_node_refs_for_txn(
						target_node, &target_proc,
						&return_error);
			........
		}
		//with the above we have the target node target_node and its owning target process target_proc
		........
		//flags defaults to 0; if a transaction_stack exists, this branch runs to look for a target thread
		if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
			struct binder_transaction *tmp;
			tmp = thread->transaction_stack;
			........
			while (tmp) {//walk the current thread's transaction stack, looking for a reusable thread
				struct binder_thread *from;
				........
				from = tmp->from;
				//check whether a transaction on the stack has a thread from the
				//target process waiting on it; if so, it can serve as the target thread
				if (from && from->proc == target_proc) {
					atomic_inc(&from->tmp_ref);
					target_thread = from;
					spin_unlock(&tmp->lock);
					break;
				}
				........
				tmp = tmp->from_parent;
			}
		}
		........
	}
	........
	//allocate memory for the transaction t
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	........
    //allocate memory for tcomplete
	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	........
	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;//flags is 0, so this branch runs: record the sending thread in the transaction
	else
		t->from = NULL;//a oneway call needs no reply, so the originating thread need not be recorded
	//initialize the transaction
	t->sender_euid = task_euid(proc->tsk);
	t->to_proc = target_proc;//store the target process in to_proc
	t->to_thread = target_thread;//store the target thread (if any) in to_thread
	t->code = tr->code;//the function code
	t->flags = tr->flags;//the sync/async flags
	........
    //allocate a kernel buffer from the target process for the transaction's buffer
	t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
		tr->offsets_size, extra_buffers_size,
		!reply && (t->flags & TF_ONE_WAY));
	........
	//continue initializing the transaction's buffer
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	........
	//the code below parses every Binder (i.e. every flat_binder_object) out of
	//the transferred data and maintains the corresponding red-black trees (refs and nodes)
	//set off_start, the start of t's offsets array: the data start plus the (aligned) data size
	off_start = (binder_size_t *)(t->buffer->data +
				      ALIGN(tr->data_size, sizeof(void *)));
	offp = off_start;
   //copy the data buffer of the userspace binder_transaction_data tr
   //into the kernel buffer of transaction t
	if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
			   tr->data.ptr.buffer, tr->data_size)) {
		........
	}
	//copy the offsets array of the userspace binder_transaction_data tr
	//into the offsets array of transaction t in kernel space
    // offp is the start of t's offsets array
	if (copy_from_user(offp, (const void __user *)(uintptr_t)
			   tr->data.ptr.offsets, tr->offsets_size)) {
		........
	}
	........
	//set off_end, the end of t's offsets array
	off_end = (void *)off_start + tr->offsets_size;
	........
	//walk the offsets range, find each flat_binder_object, create the
	//corresponding binder_node, and add it to the nodes tree
	for (; offp < off_end; offp++) {
		struct binder_object_header *hdr;
		........
		//fetch the binder_object_header at offset *offp in t's data buffer
		hdr = (struct binder_object_header *)(t->buffer->data + *offp);
		off_min = *offp + object_size;
		switch (hdr->type) {//we know the type here is BINDER_TYPE_BINDER
		case BINDER_TYPE_BINDER:
		case BINDER_TYPE_WEAK_BINDER: {
			struct flat_binder_object *fp;
            //recover the flat_binder_object from hdr
			fp = to_flat_binder_object(hdr);
			//look up the Binder node in the source process's nodes tree via fp's
			//binder field; if absent, create a binder_node and add it to nodes,
            //then create a binder_ref in the target process pointing at this node
            //and add it to the target's refs_by_desc and refs_by_node trees,
            //finally rewriting fp's type to BINDER_TYPE_HANDLE.
			ret = binder_translate_binder(fp, t, thread);
			........
		} break;
		case BINDER_TYPE_HANDLE:
		case BINDER_TYPE_WEAK_HANDLE: {
			........
		} break;
		........
		default:
			........
		}
	}	
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;//set tcomplete's type
	t->work.type = BINDER_WORK_TRANSACTION;//set the transaction's type

	if (reply) {
		........
	} else if (!(t->flags & TF_ONE_WAY)) {//this branch is taken
		........
		//queue tcomplete onto the source thread's todo list
		binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete);
		t->need_reply = 1;//synchronous, so a reply is required
		t->from_parent = thread->transaction_stack;
		//push the outgoing transaction t onto the top of the source thread's transaction stack
		thread->transaction_stack = t;
		........
		//queue t onto the target thread's or target process's (sync or async) todo list and wake the target's wait queue
		if (!binder_proc_transaction(t, target_proc, target_thread)) {
			........
		}
	} else {
		........
	}	
	........
}

As established earlier, the job of binder_transaction is to create a binder_transaction record, append it to the todo queue of the target process or target thread, and then wake that target thread or process.
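
The step in the loop above that makes the flattened entity usable by the receiver is binder_translate_binder. An abridged sketch of it (structure as in 4.x kernels; error handling and the weak-binder case elided):

static int binder_translate_binder(struct flat_binder_object *fp,
				   struct binder_transaction *t,
				   struct binder_thread *thread)
{
	struct binder_node *node;
	struct binder_proc *proc = thread->proc;
	struct binder_proc *target_proc = t->to_proc;
	struct binder_ref_data rdata;

	//find (or create) the binder_node for this entity in the sender's nodes tree
	node = binder_get_node(proc, fp->binder);
	if (!node)
		node = binder_new_node(proc, fp);
	........
	//create (or reuse) a binder_ref to this node in the target process;
	//this assigns a handle (descriptor) that is valid inside the target
	binder_inc_ref_for_node(target_proc, node,
			fp->hdr.type == BINDER_TYPE_BINDER,
			&thread->todo, &rdata);
	........
	//rewrite the flat_binder_object in place: the receiver sees a handle, not a pointer
	fp->hdr.type = BINDER_TYPE_HANDLE;
	fp->binder = 0;
	fp->handle = rdata.desc;
	fp->cookie = 0;
	........
	return 0;
}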

4 Processing in SMgr

The target process of this transfer is SMgr. Recall from the earlier article on SMgr's startup that it runs a single thread, looping forever in binder_loop to read incoming data so it can serve service registration and lookup requests. Let's return to SMgr's binder_loop function.

4.1 binder_loop

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;//the binder_write_read used to talk to the driver
    uint32_t readbuf[32];
 
    bwr.write_size = 0;//a write_size of 0 means read-only, no write
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;
 
    //send BC_ENTER_LOOPER to the Binder driver, announcing that this thread is entering its loop
    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));
    for (;;) {//loop forever
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;
     //read data coming from the Binder driver via ioctl
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
 
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }
     // parse the data just read with binder_parse
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

So SMgr reads the data coming back from the driver through the ioctl in binder_loop, and then hands whatever the driver layer delivered to binder_parse for parsing and processing.

4.2 binder_parse

int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;
 
    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
       ........
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);//dump txn for debugging
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;
                bio_init(&reply, rdata, sizeof(rdata), 4);//initialize reply
                bio_init_from_txn(&msg, txn);//attach msg to txn's buffers
                res = func(bs, txn, &msg, &reply);
                if (txn->flags & TF_ONE_WAY) {
                    binder_free_buffer(bs, txn->data.ptr.buffer);
                } else {//flags is 0, so this branch runs: return reply to the requesting process
                    binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
                }
            }
            ptr += sizeof(*txn);
            break;
        }
        ........
        }
    }
    return r;
}

binder_parse() in SMgr declares two local variables of type binder_io: msg and reply. As the name binder_io suggests, they are used to process the data handed over by the Binder driver. To make reading the content behind a binder_io convenient, SMgr provides a family of helper functions prefixed with bio_. Before reading any actual data, bio_init_from_txn() must be called to attach the binder_io variable (msg here) to the buffer described by the binder_transaction_data. The data is then handed to the func parameter, which is the svcmgr_handler function.
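
A sketch of binder_io and bio_init_from_txn, following the servicemanager sources (simplified; details vary by Android version):

struct binder_io {
    char *data;            /* read/write cursor in the data buffer */
    binder_size_t *offs;   /* cursor in the offsets array */
    size_t data_avail;     /* bytes left in the data buffer */
    size_t offs_avail;     /* entries left in the offsets array */

    char *data0;           /* start of the data buffer */
    binder_size_t *offs0;  /* start of the offsets array */
    uint32_t flags;
    uint32_t unused;
};

void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
    //point the binder_io directly at the buffers described by txn
    bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;
    bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;
    bio->data_avail = txn->data_size;
    bio->offs_avail = txn->offsets_size / sizeof(size_t);
    bio->flags = BIO_F_SHARED;//the buffer is owned by the kernel mapping, not malloc'd here
}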

4.3 svcmgr_handler

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;     
    ........ 
    switch(txn->code) {//decode the function code; here it is SVC_MGR_ADD_SERVICE
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        ........ 
    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        //call do_add_service to perform the registration
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;
 
    case SVC_MGR_LIST_SERVICES: {
       ........
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    } 
    bio_put_uint32(reply, 0);
    return 0;
}
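
Note the bio_get_ref(msg) call above: it pulls the flat_binder_object out of the received data and returns its handle field. Thanks to the driver's binder_translate_binder step, the object's type at this point is already BINDER_TYPE_HANDLE, and the handle is a reference valid inside the SMgr process. A sketch of bio_get_ref (per the servicemanager sources; older versions use obj->type instead of obj->hdr.type):

uint32_t bio_get_ref(struct binder_io *bio)
{
    struct flat_binder_object *obj;

    obj = _bio_get_obj(bio);//fetch the object the next offsets entry points at
    if (!obj)
        return 0;

    if (obj->hdr.type == BINDER_TYPE_HANDLE)
        return obj->handle;

    return 0;
}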

4.4 do_add_service

int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;//declare a svcinfo node
    ........ 
    si = find_svc(s, len);//use find_svc to check whether this name is already registered
    if (si) {//abnormal case; under normal circumstances it should not already exist
        if (si->handle) {
            svcinfo_death(bs, si);//it already exists, so release the old handle
        }
        si->handle = handle;//store the new handle
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
     //initialize si
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;
        svclist = si;//link this svcinfo node into the svclist list
    } 
    binder_acquire(bs, handle);
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}

In short, do_add_service creates a svcinfo, stores in it the handle of the Binder reference being added together with its name and related information, and links it into the svclist list so that client processes can look it up later. At this point the service registration itself is done.
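
For reference, a simplified sketch of svcinfo and the svclist it is chained into (per service_manager.c; some fields vary by version):

struct svcinfo
{
    struct svcinfo *next;      //singly linked into svclist
    uint32_t handle;           //the driver-assigned reference to the service
    struct binder_death death; //death-notification bookkeeping
    int allow_isolated;
    size_t len;
    uint16_t name[0];          //the service name, e.g. "media.player"
};

struct svcinfo *svclist = NULL;//head of the registered-services list
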
Next, the reply.

4.5 binder_send_reply

void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       binder_uintptr_t buffer_to_free,
                       int status)
{   //for convenience, define a local struct data that carries two commands
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;
    data.cmd_free = BC_FREE_BUFFER;//first, free the buffer
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;//then send BC_REPLY
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offsets_size = 0;
        data.txn.data.ptr.buffer = (uintptr_t)&status;
        data.txn.data.ptr.offsets = 0;
    } else {//status is 0
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    //finally, hand the data to binder_write, which packages it into a
    //binder_write_read and sends it down via the BINDER_WRITE_READ ioctl
    binder_write(bs, &data, sizeof(data));
}
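
binder_write simply wraps the given bytes in a write-only binder_write_read (read_size = 0) and issues the same BINDER_WRITE_READ ioctl we saw on the client side; a sketch from the servicemanager sources:

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;//only the write half is used
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;//no read: this call does not wait for a reply
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0)
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    return res;
}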