Binder is the core of the Android system and the bedrock of the Android framework; studying it in depth gives a whole new understanding of the entire system.
Before studying the binder driver, a few basic concepts:
1. Android starts the service manager process before the zygote process. Service manager opens the binder driver, and every service started afterwards first registers itself with service manager, which is thus the communication hub of the whole system. Readers familiar with design patterns may see this as the proxy pattern taken to a higher level, or as a lightweight take on a distributed CORBA-style architecture. Recall how on Symbian each client-server pair fended for itself; Android's client-server architecture gives more thought to developer convenience and better integration of system resources.
2. Binder IPC uses AIDL to define the data format for inter-process communication.
Everything starts from the source: open Binder.h and Binder.c in the android2.3.3/kernel/common/drivers/staging/android directory.
Binder.h begins by defining the binder object types:
enum {
    BINDER_TYPE_BINDER = B_PACK_CHARS('s', 'b', '*', B_TYPE_LARGE),
    BINDER_TYPE_WEAK_BINDER = B_PACK_CHARS('w', 'b', '*', B_TYPE_LARGE),
    BINDER_TYPE_HANDLE = B_PACK_CHARS('s', 'h', '*', B_TYPE_LARGE),
    BINDER_TYPE_WEAK_HANDLE = B_PACK_CHARS('w', 'h', '*', B_TYPE_LARGE),
    BINDER_TYPE_FD = B_PACK_CHARS('f', 'd', '*', B_TYPE_LARGE),
};
BINDER_TYPE_BINDER and BINDER_TYPE_WEAK_BINDER describe local objects (strong and weak references); the remaining three are references to remote objects, with BINDER_TYPE_FD carrying a file descriptor.
Next comes the definition of the binder object that is passed between processes, shown below:
/*
 * This is the flattened representation of a Binder object for transfer
 * between processes. The 'offsets' supplied as part of a binder transaction
 * contains offsets into the data where these structures occur. The Binder
 * driver takes care of re-writing the structure type and data as it moves
 * between processes.
 */
struct flat_binder_object {
    /* 8 bytes for large_flat_header. */
    unsigned long type;
    unsigned long flags;

    /* 8 bytes of data. */
    union {
        void *binder;       /* local object */
        signed long handle; /* remote object */
    };

    /* extra data associated with local object */
    void *cookie;
};
type takes one of the enum values defined above. Note that the flags field of flat_binder_object actually uses the FLAT_BINDER_FLAG_* constants; the transaction_flags enum below defines the flags carried with a transaction (the flags field of binder_transaction_data, described later):
enum transaction_flags {
    TF_ONE_WAY = 0x01,     /* this is a one-way call: async, no return */
    TF_ROOT_OBJECT = 0x04, /* contents are the component's root object */
    TF_STATUS_CODE = 0x08, /* contents are a 32-bit status code */
    TF_ACCEPT_FDS = 0x10,  /* allow replies with file descriptors */
};
TF_ONE_WAY marks a one-way call: asynchronous, with no return value. TF_ROOT_OBJECT means the contents are the component's root object, TF_STATUS_CODE means the contents are a 32-bit status code, and TF_ACCEPT_FDS allows the reply to carry file descriptors.
In addition, a local object can store extra data in the cookie field.
Next comes the struct binder_transaction_data; as the field names suggest, it describes the contents of a transaction:
struct binder_transaction_data {
    /* The first two are only used for bcTRANSACTION and brTRANSACTION,
     * identifying the target and contents of the transaction.
     */
    union {
        size_t handle; /* target descriptor of command transaction */
        void *ptr;     /* target descriptor of return transaction */
    } target;
    void *cookie;      /* target object cookie */
    unsigned int code; /* transaction command */

    /* General information about the transaction. */
    unsigned int flags;
    pid_t sender_pid;
    uid_t sender_euid;
    size_t data_size;    /* number of bytes of data */
    size_t offsets_size; /* number of bytes of offsets */

    /* If this transaction is inline, the data immediately
     * follows here; otherwise, it ends with a pointer to
     * the data buffer.
     */
    union {
        struct {
            /* transaction data */
            const void *buffer;
            /* offsets from buffer to flat_binder_object structs */
            const void *offsets;
        } ptr;
        uint8_t buf[8];
    } data;
};
Next, a core data structure: the kernel's classic doubly linked list, followed by binder_work, which is queued with it:
struct list_head {
    struct list_head *next, *prev;
};

struct binder_work {
    struct list_head entry;
    enum {
        BINDER_WORK_TRANSACTION = 1,
        BINDER_WORK_TRANSACTION_COMPLETE,
        BINDER_WORK_NODE,
        BINDER_WORK_DEAD_BINDER,
        BINDER_WORK_DEAD_BINDER_AND_CLEAR,
        BINDER_WORK_CLEAR_DEATH_NOTIFICATION,
    } type;
};
Now turn to the Binder.c file. It starts by defining two mutexes:
static DEFINE_MUTEX(binder_lock);
static DEFINE_MUTEX(binder_deferred_lock);
It then defines three kernel hash-list heads:
static HLIST_HEAD(binder_procs);
static HLIST_HEAD(binder_deferred_list);
static HLIST_HEAD(binder_dead_nodes);
The driver also relies on two debugfs helpers:
a. Create a directory; all the information this module exports lives under it.
    struct dentry *debugfs_create_dir(const char *name, struct dentry *parent);
b. Create a file inside the directory created above.
    struct dentry *debugfs_create_file(const char *name, mode_t mode, struct dentry *parent, void *data,
                                       const struct file_operations *fops);
When a process wants to communicate over binder, it first opens the driver with the open() system call, which invokes binder_open. binder_open saves information about the current process and initializes its todo list and wait queue.
static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;

    binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n",
                 current->group_leader->pid, current->pid);

    proc = kzalloc(sizeof(*proc), GFP_KERNEL);
    if (proc == NULL)
        return -ENOMEM;
    get_task_struct(current);
    proc->tsk = current;
    INIT_LIST_HEAD(&proc->todo);
    init_waitqueue_head(&proc->wait);
    proc->default_priority = task_nice(current);
    mutex_lock(&binder_lock);
    binder_stats_created(BINDER_STAT_PROC);
    hlist_add_head(&proc->proc_node, &binder_procs);
    /* group_leader points to the first thread of the thread group: the
     * first thread points to itself, and every thread created afterwards
     * points to the first thread's task_struct. thread_group is the list
     * of all the process's threads: group_leader holds the list head,
     * later threads link themselves into it, and the whole thread group
     * can be traversed through this list. */
    proc->pid = current->group_leader->pid;
    INIT_LIST_HEAD(&proc->delivered_death);
    filp->private_data = proc;
    mutex_unlock(&binder_lock);

    if (binder_debugfs_dir_entry_proc) {
        char strbuf[11];
        snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
        proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO,
            binder_debugfs_dir_entry_proc, proc, &binder_proc_fops);
    }
    return 0;
}