In the land of Android, every process governs itself. These processes fall roughly into two clans, Server and Application. The Server clan ranks higher; the Application clan sits one rank below. Because of this, the Server clan lives closer to the central government, the OS, so whenever an Application needs the government's support, it usually has to go through the Server clan. But Applications and Servers belong to different clans, so communication is naturally a problem. What's more, the Server clan has more than one Service, so how does an Application designate which Service should relay its request? Both questions bring us to the Binder mechanism.
Each service minds its own business: SurfaceFlinger manages the UI, AudioFlinger manages audio, MediaPlayerService manages media playback, and so on. With every Service handling only its own domain, dispatching requests becomes a problem. So the King of Android appointed a ServiceManager, whose job is to manage these self-governing services; from then on, whenever an Application has a specific need, it simply asks ServiceManager to assign the right one. But ServiceManager, the Services, and the Applications are all separate processes, so communication is still a problem. The wise King of Android knew inter-process communication had to be solved, so he decreed a Binder mechanism that every process in the land of Android must obey. The Binder mechanism is built from a Server side, a Client side, a Binder adapter, and the Binder core. The basic flow is as follows:
Cast of roles:
Client side: the Application
Server side: a particular service
1. Through the Binder adapter, the Client asks ServiceManager to assign a particular service to help.
2. ServiceManager checks its own registry for the Service the Client needs.
3. If found, a Binder proxy is returned to the Client via the Binder adapter; if not, an error is returned.
4. With that Binder proxy in hand, the Client can order the service to do work on its behalf.
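The four steps above can be sketched as a toy registry in C++. Everything here (ToyServiceManager, Proxy) is an invented stand-in for illustration, not AOSP code: the client asks by name and gets back either a proxy or nothing, the "error" case.

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical stand-in for the Binder proxy handed back to the client.
struct Proxy {
    std::string serviceName;
};

// A toy ServiceManager: a name -> proxy lookup table.
class ToyServiceManager {
public:
    void addService(const std::string& name) {
        registry_[name] = std::make_shared<Proxy>(Proxy{name});
    }
    // Steps 2 and 3 of the flow: look up the requested service and either
    // return its proxy or an empty pointer (the error case).
    std::shared_ptr<Proxy> getService(const std::string& name) const {
        auto it = registry_.find(name);
        return it == registry_.end() ? nullptr : it->second;
    }
private:
    std::map<std::string, std::shared_ptr<Proxy>> registry_;
};
```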
So what exactly is this Binder adapter? How is ServiceManager's registry built? What exactly is a Binder proxy? Is all of this cause and effect, or a trick of fate? Everything has been arranged behind the scenes; to see how, let's read the source code.
Let's start the analysis with ServiceManager. At boot, while the Android system initializes, it starts a number of native services, and ServiceManager is one of them.
// init.rc
service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm
What does ServiceManager actually do once it starts? Let's keep reading.
// service_manager.c
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
From the code above, ServiceManager does three things on startup.
1. Open the binder device
// binder.c
struct binder_state *binder_open(unsigned mapsize)
{
    struct binder_state *bs;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return 0;
    }

    // Open the Binder device file /dev/binder and get back a file
    // descriptor.
    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr, "binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    bs->mapsize = mapsize;
    // Use the file descriptor we just obtained, plus the mapsize passed
    // in, to create a memory mapping. The benefit: accessing this region
    // of memory is equivalent to accessing the mapped file.
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr, "binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

    /* TODO: check version */

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return 0;
}
2. Register itself as the context manager
// binder.c
int binder_become_context_manager(struct binder_state *bs)
{
    // Use ioctl to register as the context manager.
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
3. Start the Binder loop to listen for requests.
// binder.c
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    // Start the looper that watches for and receives requests from the
    // Client side.
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        // Use ioctl with BINDER_WRITE_READ to read data from the Binder
        // device file.
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        // Process the data we just read. If there is a request to handle,
        // the data carries a BR_TRANSACTION command; tracing the code
        // shows this ends up calling the function pointer passed in: func.
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
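The read-parse-dispatch shape of binder_loop can be mimicked in a standalone C++ sketch. The in-memory queue below stands in for ioctl(BINDER_WRITE_READ), and every name here (run_loop, the Cmd values) is invented for illustration only:

```cpp
#include <deque>
#include <functional>

// Fake "driver" commands standing in for the real BR_* protocol.
enum Cmd { BR_TRANSACTION = 1, BR_EXIT = 2 };

using Handler = std::function<int(int cmd)>;

// Drain the queue, dispatching each command to the registered handler,
// the way binder_loop hands BR_TRANSACTION work to svcmgr_handler.
int run_loop(std::deque<int>& driver, const Handler& handler) {
    int handled = 0;
    while (!driver.empty()) {
        int cmd = driver.front();
        driver.pop_front();
        if (cmd == BR_EXIT) break;  // leave the loop on an exit command
        handled += handler(cmd);
    }
    return handled;
}
```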
At this point, ServiceManager sits in an infinite loop, waiting to be called upon. While ServiceManager waits like this, how does another process that needs it get word to it through the Binder adapter? As mentioned earlier in the story, ServiceManager's job is to manage every Service in the land of Android, and inside ServiceManager a service list records them all. How did all of this come to be? Before analyzing those twists, let's first introduce the roles in the Binder core.
1. IBinder: the interface class; its members must be implemented by derived classes.
2. BBinder: a subclass of IBinder, used to implement the Service side.
   P.S.: ServiceManager is a special kind of service, so its implementation does not inherit from this class. Strictly speaking it is a standalone program, so its mechanism for receiving and handling requests is designed separately; see the binder_loop analysis above.
3. BpBinder: a subclass of IBinder, handed to the Client side for use.
Next, the roles in the Binder adapter:
1. Parcel: holds data coming from the Binder device.
2. ProcessState: maintains the Binder proxies for all services.
3. IPCThreadState: sends, receives, and processes requests.
The relationships among these roles are anything but simple, and the story begins with mediaserver.
// init.rc
service media /system/bin/mediaserver
    class main
    user media
    group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc
    ioprio rt 4
// main_mediaserver.cpp
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
From the code above, mediaserver does the following on startup:
1. Open the binder device
// main_mediaserver.cpp
sp<ProcessState> proc(ProcessState::self());

// ProcessState.cpp
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    // If gProcess is not NULL, return it directly; otherwise new a
    // ProcessState object. This design follows the singleton pattern:
    // the class only ever has one instance.
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}

ProcessState::ProcessState()
    : mDriverFD(open_driver()) // Open the Binder device file /dev/binder
                               // and get back a file descriptor.
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // available).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to
        // receive transactions.
        // Use the file descriptor we just obtained, plus BINDER_VM_SIZE,
        // to create a memory mapping.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE |
                        MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}
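The self() idiom ProcessState uses above is a lock-protected singleton: the first caller constructs the object, every later caller gets the same instance. A minimal standalone sketch (ToyProcessState is an invented name, and std::mutex stands in for the AOSP Mutex):

```cpp
#include <mutex>

// Minimal singleton in the style of ProcessState::self().
class ToyProcessState {
public:
    static ToyProcessState* self() {
        std::lock_guard<std::mutex> lock(gMutex);
        if (gInstance == nullptr) {
            gInstance = new ToyProcessState();  // constructed exactly once
        }
        return gInstance;
    }
private:
    ToyProcessState() = default;
    static std::mutex gMutex;
    static ToyProcessState* gInstance;
};

std::mutex ToyProcessState::gMutex;
ToyProcessState* ToyProcessState::gInstance = nullptr;
```

Like the real ProcessState, the instance is deliberately never deleted; it lives for the life of the process.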
2. Obtain ServiceManager's Binder proxy.
// main_mediaserver.cpp
sp<IServiceManager> sm = defaultServiceManager();

// IServiceManager.cpp
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }

    return gDefaultServiceManager;
}
In the world of C++, there are only four ways to cast an object: static_cast, dynamic_cast, const_cast, and reinterpret_cast. There is no such thing as interface_cast, so what is going on here? Code hides no secrets, and sure enough, interface_cast's tracks turn up in one place.
// IInterface.h
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const android::String16 descriptor;                          \
    static android::sp<I##INTERFACE> asInterface(                       \
            const android::sp<android::IBinder>& obj);                  \
    virtual const android::String16& getInterfaceDescriptor() const;    \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();                                            \

#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const android::String16 I##INTERFACE::descriptor(NAME);             \
    const android::String16&                                            \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<I##INTERFACE> intr;                                 \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }                                   \
The expansion above reveals how interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL)) evolves, step by step:
interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL));
IServiceManager:: asInterface(ProcessState::self()->getContextObject(NULL));
new BpServiceManager(ProcessState::self()->getContextObject(NULL));
So in the end, the true face of interface_cast<IServiceManager> is just new-ing a BpServiceManager object. And because of this, whenever interface_cast<IXXX> shows up anywhere in the land of Android, you know its true face must be new-ing a BpXXX object.
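The whole interface_cast / asInterface dance can be reproduced in a compressed, compilable form. All classes below (IFoo, BpFoo) are toy stand-ins, and AOSP's sp<> smart pointer is replaced with std::shared_ptr; only the shape of the pattern matches the macro expansion:

```cpp
#include <memory>
#include <string>

struct IBinder {
    virtual ~IBinder() = default;
};

struct IFoo {
    virtual ~IFoo() = default;
    // What DECLARE_META_INTERFACE generates: a static factory on the
    // interface itself.
    static std::shared_ptr<IFoo> asInterface(const std::shared_ptr<IBinder>& obj);
    virtual std::string who() const = 0;
};

// The proxy side: wraps the remote IBinder, like BpServiceManager does.
struct BpFoo : IFoo {
    explicit BpFoo(std::shared_ptr<IBinder> remote) : mRemote(std::move(remote)) {}
    std::string who() const override { return "BpFoo"; }
    std::shared_ptr<IBinder> mRemote;
};

// What IMPLEMENT_META_INTERFACE generates (minus queryLocalInterface):
std::shared_ptr<IFoo> IFoo::asInterface(const std::shared_ptr<IBinder>& obj) {
    if (obj == nullptr) return nullptr;
    return std::make_shared<BpFoo>(obj);  // the "new Bp##INTERFACE(obj)" branch
}

// interface_cast itself is a one-line template forwarding to asInterface.
template <typename INTERFACE>
std::shared_ptr<INTERFACE> interface_cast(const std::shared_ptr<IBinder>& obj) {
    return INTERFACE::asInterface(obj);
}
```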
But the story doesn't end here. What is the argument passed to BpServiceManager's constructor? From the constructor's declaration we only know it is an IBinder, but which object is it exactly? The trail starts at ProcessState::self()->getContextObject(NULL).
// ProcessState.cpp
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    return getStrongProxyForHandle(0);
}

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
Peeling back the layers one by one, we find that ProcessState::self()->getContextObject(NULL) yields a BpBinder object; in other words, the Binder proxy.
3. Register each Service with ServiceManager.
// main_mediaserver.cpp
AudioFlinger::instantiate();       // register the AudioFlinger service
MediaPlayerService::instantiate(); // register the MediaPlayerService service
CameraService::instantiate();      // register the CameraService service
AudioPolicyService::instantiate(); // register the AudioPolicyService service
Let's take registering CameraService as the example for the registration flow. In C++, a function called through the class name must be a static function defined in that class. But a search of the CameraService class turns up no declaration or definition of an instantiate function at all. The case had become a proper whodunit, and just as hope was fading, a suspect appeared: the BinderService class. In C++, member functions are inherited; if a subclass does not override a function of its parent, even a static one, the call may well resolve to the definition in the parent class. And CameraService happens to inherit from BinderService.
// BinderService.h
template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(String16(SERVICE::getServiceName()),
                new SERVICE(), allowIsolated);
    }

    static void publishAndJoinThreadPool(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        sm->addService(String16(SERVICE::getServiceName()), new SERVICE(),
                allowIsolated);
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }

    static void instantiate() { publish(); }

    static status_t shutdown() {
        return NO_ERROR;
    }
};
Here the story grows clearer: CameraService::instantiate() unfolds as follows
CameraService::instantiate()
defaultServiceManager()->addService(String16(CameraService::getServiceName()),
        new CameraService(), false);
BpServiceManager(BpBinder(0))->addService(String16(CameraService::getServiceName()),
        new CameraService(), false);
// IServiceManager.cpp
virtual status_t addService(const String16& name, const sp<IBinder>& service,
        bool allowIsolated)
{
    // Create the data and reply containers, which hold data coming from
    // the Binder device or data to be sent to the Binder device.
    Parcel data, reply;
    // Write the interface token: android.os.IServiceManager.
    // See IMPLEMENT_META_INTERFACE(ServiceManager,
    // "android.os.IServiceManager"); in IServiceManager.cpp.
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    // Write the service name: media.camera. See CameraService.h.
    data.writeString16(name);
    // Write the service object: new CameraService().
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data,
            &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
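The fixed write order above matters because a Parcel is read back in exactly the order it was written. A toy length-prefixed byte buffer makes the point; ToyParcel is invented for illustration, and the real Parcel is far more involved (alignment, binder objects, file descriptors):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// A toy Parcel: a flat byte buffer with a read cursor. Values come back
// out only in the order they went in.
class ToyParcel {
public:
    void writeInt32(int32_t v) {
        const uint8_t* p = reinterpret_cast<const uint8_t*>(&v);
        buf_.insert(buf_.end(), p, p + sizeof(v));
    }
    void writeString(const std::string& s) {
        writeInt32(static_cast<int32_t>(s.size()));  // length prefix
        buf_.insert(buf_.end(), s.begin(), s.end());
    }
    int32_t readInt32() {
        int32_t v;
        std::memcpy(&v, buf_.data() + pos_, sizeof(v));
        pos_ += sizeof(v);
        return v;
    }
    std::string readString() {
        int32_t len = readInt32();
        std::string s(buf_.begin() + pos_, buf_.begin() + pos_ + len);
        pos_ += len;
        return s;
    }
private:
    std::vector<uint8_t> buf_;
    size_t pos_ = 0;
};
```

Mirroring addService, a caller would write the interface token, then the name, then the flag, and the receiving side must read them back in that same order.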
What does remote() in that last line, remote()->transact, refer to? There is a lesson in that. Even at this point, mysteries abound; it truly baffles the mind. Since it is tied to the transact call, it must be related to the Binder interfaces. First, let's see where the BpServiceManager class comes from.
// IServiceManager.cpp
class BpServiceManager : public BpInterface<IServiceManager>

// IInterface.h
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
    BpInterface(const sp<IBinder>& remote);

protected:
    virtual IBinder* onAsBinder();
};
From this template we know BpInterface inherits from both IServiceManager and BpRefBase, and BpServiceManager inherits from BpInterface, so BpServiceManager is a grandchild of IServiceManager. Let's keep going: where does the IServiceManager class come from?
// IServiceManager.h
class IServiceManager : public IInterface

// IInterface.h
class IInterface : public virtual RefBase
{
public:
    IInterface();
    sp<IBinder> asBinder();
    sp<const IBinder> asBinder() const;

protected:
    virtual ~IInterface();
    virtual IBinder* onAsBinder() = 0;
};
To summarize the inheritance chain:
BpServiceManager inherits from BpInterface,
BpInterface inherits from IServiceManager,
IServiceManager inherits from IInterface.
So after all these layers of analysis, BpServiceManager really is related to the IInterface class. In C++, whenever a derived class constructs an object, the constructors run starting from the base classes. Earlier in the story, mediaserver new-ed a BpServiceManager object via the defaultServiceManager function at startup; given the inheritance above, its constructors are called in the following order:
// IInterface.cpp
IInterface::IInterface()
    : RefBase() { // call the RefBase constructor
}

// IInterface.h
// This constructor's implementation is generated by a macro; see
// IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
IServiceManager::IServiceManager() { }
Since BpInterface declares no default constructor, the chain proceeds straight down.
// IServiceManager.cpp
BpServiceManager(const sp<IBinder>& impl)
    : BpInterface<IServiceManager>(impl) // call the parent's custom constructor
{
}

// IInterface.h
// Here INTERFACE is IServiceManager.
template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote) // call the parent's custom constructor
{
}

// Binder.cpp
BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);          // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this); // Held for our entire lifetime.
    }
}
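The base-before-derived construction order that carries the BpBinder down into mRemote can be verified with a standalone sketch. The chain is shortened here (no IInterface branch, no multiple inheritance), and every name is a toy stand-in:

```cpp
#include <string>

// Records the order in which constructors run.
static std::string gOrder;

struct ToyRefBase {
    ToyRefBase() { gOrder += "RefBase,"; }
};

struct ToyBpRefBase : ToyRefBase {
    // Like BpRefBase, the "remote" argument is stored before any derived
    // constructor body runs.
    explicit ToyBpRefBase(int remote) : mRemote(remote) { gOrder += "BpRefBase,"; }
    int mRemote;
};

struct ToyBpServiceManager : ToyBpRefBase {
    explicit ToyBpServiceManager(int remote) : ToyBpRefBase(remote) {
        gOrder += "BpServiceManager";
    }
};
```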
Here we discover that the BpServiceManager constructor chain ends up calling the BpRefBase constructor, where the BpBinder object it carries is assigned to the mRemote member. The story keeps twisting: a further look at the BpRefBase class turns up the tracks of remote().
// Binder.h
class BpRefBase : public virtual RefBase
{
protected:
    BpRefBase(const sp<IBinder>& o);
    virtual ~BpRefBase();
    virtual void onFirstRef();
    virtual void onLastStrongRef(const void* id);
    virtual bool onIncStrongAttempted(uint32_t flags, const void* id);

    inline IBinder* remote() { return mRemote; }
    inline IBinder* remote() const { return mRemote; }

private:
    BpRefBase(const BpRefBase& o);
    BpRefBase& operator=(const BpRefBase& o);

    IBinder* const mRemote;
    RefBase::weakref_type* mRefs;
    volatile int32_t mState;
};
So the remote() in that last line, remote()->transact, is none other than the BpBinder object.
// BpBinder.cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        // A static self() function news up an IPCThreadState object; in
        // other words, IPCThreadState is also a singleton-style design.
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
// IPCThreadState.cpp
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    // do something
    // Write the data to be sent into the mOut container; the data
    // carries the cmd plus a binder_transaction_data.
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data,
            NULL);
    if (reply) {
        // see the analysis below
        err = waitForResponse(reply);
    } else {
        Parcel fakeReply;
        err = waitForResponse(&fakeReply);
    }
    // do something
}
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    // do something
    while (1) {
        // see the analysis below
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        switch (cmd) {
        // do something per case
        default:
            // Execute the command received from the Binder device; this
            // ends up calling the BBinder object's onTransact function.
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    return err;
}
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    // Check that the Binder device has been opened.
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;
    // do something
    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();
    // do something
    // Use ioctl with BINDER_WRITE_READ to write the data into the Binder
    // device.
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
        err = NO_ERROR;
    // do something
}
4. On receiving the request command, ServiceManager immediately handles the addService.
Earlier in the story, ServiceManager entered its Binder loop right after startup, precisely to watch for changes in the Binder device. Once something changes in the Binder device, ServiceManager's binder_parse function is triggered, and that function takes a function-pointer parameter: none other than the svcmgr_handler function that ServiceManager registered when it started the Binder loop.
// service_manager.c
int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    // do something

    switch(txn->code) {
    // look up a service on the service list
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = do_find_service(bs, s, len, txn->sender_euid);
        if (!ptr)
            break;
        bio_put_ref(reply, ptr);
        return 0;

    // add a service to the service list
    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, ptr, txn->sender_euid, allow_isolated))
            return -1;
        break;

    // enumerate the service list
    case SVC_MGR_LIST_SERVICES: {
        unsigned n = bio_get_uint32(msg);

        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
When BpServiceManager called transact to write into the Binder device, it carried the ADD_SERVICE_TRANSACTION command; accordingly, ServiceManager reads the corresponding command, SVC_MGR_ADD_SERVICE, from the Binder device. And what SVC_MGR_ADD_SERVICE does is exactly do_add_service.
int do_add_service(struct binder_state *bs,
                   uint16_t *s, unsigned len,
                   void *ptr, unsigned uid, int allow_isolated)
{
    struct svcinfo *si;

    if (!ptr || (len == 0) || (len > 127))
        return -1;

    // Check whether this service is allowed to register; see the
    // svc_can_register implementation for details.
    if (!svc_can_register(uid, s)) {
        ALOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n",
              str8(s), ptr, uid);
        return -1;
    }

    // Look up the service's item on the service list.
    si = find_svc(s, len);
    if (si) {
        // The service already has an si on the service list, so no new
        // entry is added, but its Binder linkage must be re-established.
        if (si->ptr) {
            svcinfo_death(bs, si);
        }
        si->ptr = ptr;
    } else {
        // Add a new si for the service to the service list.
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n",
                  str8(s), ptr, uid);
            return -1;
        }
        si->ptr = ptr;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;
        svclist = si;
    }

    // Set up the Binder linkage with the Client side.
    binder_acquire(bs, ptr);
    binder_link_to_death(bs, ptr, &si->death);
    return 0;
}
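The bookkeeping in do_add_service boils down to: validate the name, then either re-bind an existing entry or create a new one. A sketch under those assumptions, using a std::map in place of the real hand-rolled svcinfo linked list (ToySvcList and its int "handle" are invented for illustration):

```cpp
#include <map>
#include <string>

class ToySvcList {
public:
    // Returns false on an invalid name, mirroring do_add_service's
    // early-out checks (empty name, or longer than 127).
    bool add(const std::string& name, int handle) {
        if (name.empty() || name.size() > 127) return false;
        entries_[name] = handle;  // new entry, or re-bind an existing one
        return true;
    }
    // Returns the handle, or -1 when the service is unknown.
    int find(const std::string& name) const {
        auto it = entries_.find(name);
        return it == entries_.end() ? -1 : it->second;
    }
private:
    std::map<std::string, int> entries_;
};
```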
5. The Client side starts its own Binder loop to watch for the Server side's replies.
Since the Service side has a Binder loop keeping watch, the Client side won't be outdone: it needs a Binder loop too. After each Service's ADD_SERVICE_TRANSACTION command has been written into the Binder device, mediaserver uses ProcessState and IPCThreadState to start its Binder loop.
// main_mediaserver.cpp
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();

// ProcessState.cpp
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        int32_t s = android_atomic_add(1, &mThreadPoolSeq);
        char buf[16];
        snprintf(buf, sizeof(buf), "Binder_%X", s);
        ALOGV("Spawning new pooled thread, name=%s\n", buf);
        sp<Thread> t = new PoolThread(isMain);
        t->run(buf);
    }
}

class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    // This is a callback: once the Thread object's run function is
    // called, threadLoop is triggered.
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

    const bool mIsMain;
};
Following the trail, this flow spawns another thread to start the thread pool, and what that pooled thread does is exactly what the main thread does at the end: call IPCThreadState::self()->joinThreadPool.
// IPCThreadState.cpp
void IPCThreadState::joinThreadPool(bool isMain)
{
    // Write the BC_ENTER_LOOPER command into the Binder device, marking
    // entry into the Binder loop state.
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    // Set the priority to SP_FOREGROUND to make sure the thread is not
    // interrupted while performing a transaction.
    set_sched_policy(mMyThreadId, SP_FOREGROUND);

    status_t result;
    do {
        int32_t cmd;
        // do something
        // see the analysis below
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail();
            if (IN < sizeof(int32_t)) continue;
            cmd = mIn.readInt32();
            result = executeCommand(cmd);
        }
        // executeCommand resets the priority back to normal, so it has
        // to be raised again before the next pass through the loop.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
        // do something
    } while (result != -ECONNREFUSED && result != -EBADF);

    // Write the BC_EXIT_LOOPER command into the Binder device, marking
    // departure from the Binder loop state.
    mOut.writeInt32(BC_EXIT_LOOPER);
    // Turn off receiving data from the Binder device (on by default).
    talkWithDriver(false);
}
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    // do something

    // This is what we'll read.
    if (doReceive && needRead) {
        // Point the binder_write_read read buffer at the mIn Parcel
        // container, so data can be read from the Binder device into it.
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        // Start accessing the Binder device.
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        // do something
    } while (err == -EINTR);
    // do something
    return err;
}
At this point in the story one can only marvel: joinThreadPool is exactly what runs the Binder loop. Note that this joinThreadPool is executed by the extra thread mediaserver spawned; the next joinThreadPool call is then executed by mediaserver's main thread. Why mediaserver insists on spawning an extra thread to run a binder loop, ending up with two Binder loops keeping watch, remains an unsolved riddle. Or perhaps one serves as the main Binder loop and the other as a worker Binder loop; for the exact difference, you would have to ask Google.
Summary
Looking at Android's Binder mechanism as a whole, the King of Android used a simple Client/Server scheme to coordinate communication among processes, appointing ServiceManager to manage the many Services and giving the Client side a channel for seeking the Server side's support. The analysis above boils down to this:
1. At boot, ServiceManager starts and opens the Binder device.
2. ServiceManager starts its Binder loop and begins watching for commands in the Binder device.
3. mediaserver starts and opens the Binder device.
4. mediaserver obtains the Binder proxy: BpServiceManager.
5. mediaserver uses the Binder proxy to register each of its Services.
6. The Binder proxy writes the ADD_SERVICE_TRANSACTION command into the Binder device.
7. mediaserver starts a Binder loop on its main thread and another on a worker thread to watch the Binder device's state.
8. The moment ServiceManager's Binder loop senses a change in the Binder device, it reads the command.
9. The command ServiceManager reads from its Binder loop is SVC_MGR_ADD_SERVICE.
10. ServiceManager adds a new record to the service list it manages.
11. ServiceManager writes the reply information into the Binder device.
12. The moment mediaserver's Binder loop senses reply data in the Binder device, it reads it.
This whole communication channel was arranged behind the scenes all along. The keys to the communication are talkWithDriver and executeCommand in IPCThreadState; what makes it hard to see through are the many layers of gatekeepers stacked in front of that arrangement, and this web of mysterious relationships is what we call the "Binder architecture".