Binder, Android's inter-process communication mechanism, is the foundation on which every system service provides its functionality. This article analyzes the implementation of the Binder mechanism, starting from mediaserver.
I. Overview
The Binder mechanism serves two purposes:
1. Managing the various services on the device
2. Letting applications use those services through Binder
To this end, during startup the various services are registered with ServiceManager. Afterwards, an application can query for a service and choose to communicate with it.
II. Registering a Service
Take mediaserver as an example. Its entry point is the main function in main_mediaserver.cpp; the Android.mk in the same directory shows it is built as the mediaserver binary. The code is as follows:
int main(int argc, char** argv)
{
sp<ProcessState> proc(ProcessState::self());
sp<IServiceManager> sm = defaultServiceManager();
LOGI("ServiceManager: %p", sm.get());
AudioFlinger::instantiate();
MediaPlayerService::instantiate();
CameraService::instantiate();
AudioPolicyService::instantiate();
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
}
ProcessState::self() returns the unique ProcessState instance. It is per-process; each process has exactly one. It is defined in ProcessState.cpp, and its constructor is:
ProcessState::ProcessState()
: mDriverFD(open_driver()) //open_driver opens the /dev/binder device in read-write mode
, mVMStart(MAP_FAILED)
, mManagesContexts(false)
, mBinderContextCheckFunc(NULL)
, mBinderContextUserData(NULL)
, mThreadPoolStarted(false)
, mThreadPoolSeq(1)
{
if (mDriverFD >= 0) {
// XXX Ideally, there should be a specific define for whether we
// have mmap (or whether we could possibly have the kernel module
// availabla).
#if !defined(HAVE_WIN32_IPC)
// mmap the binder, providing a chunk of virtual address space to receive transactions.
//mmap creates a memory mapping for the fd: reads and writes on the mapping correspond to reads and writes at offsets in the fd. BINDER_VM_SIZE here is 1MB - 8KB, and PROT_READ makes the mapped region read-only, i.e. we can only read transactions from the binder device
mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
if (mVMStart == MAP_FAILED) {
// *sigh*
LOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
close(mDriverFD);
mDriverFD = -1;
}
#else
mDriverFD = -1;
#endif
}
LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}
In other words, every process that calls this method opens the binder device. /dev/binder is the foundation of the Binder mechanism; later we will see that Binder maps this device's memory into user space, which makes the IPC mechanism more efficient.
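For reference, open_driver (also in ProcessState.cpp) does roughly the following; this is a simplified sketch based on the AOSP source, with error logging trimmed:
#include <fcntl.h>        // open
#include <unistd.h>       // close
#include <sys/ioctl.h>    // ioctl
#include <linux/binder.h> // BINDER_* ioctls (header location varies by tree)

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);   // read-write, as noted above
    if (fd >= 0) {
        int vers = 0;
        // Make sure kernel and user space agree on the binder protocol.
        if (ioctl(fd, BINDER_VERSION, &vers) < 0 ||
            vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            close(fd);
            return -1;
        }
        // Limit how many extra threads the driver may ask this process to spawn.
        size_t maxThreads = 15;
        ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
    }
    return fd;
}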
Next, consider sp<IServiceManager> sm = defaultServiceManager(); it is defined in IServiceManager.cpp:
sp<IServiceManager> defaultServiceManager()
{
if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
{
AutoMutex _l(gDefaultServiceManagerLock);
if (gDefaultServiceManager == NULL) {
gDefaultServiceManager = interface_cast<IServiceManager>(
ProcessState::self()->getContextObject(NULL));
}
}
return gDefaultServiceManager;
}
The key is the line gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL)). First look at the argument, ProcessState::self()->getContextObject(NULL): it lives in ProcessState.cpp and actually involves three functions:
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
return getStrongProxyForHandle(0);
}
//mHandleToObject is a vector; this function looks up the entry for the given handle and, if none exists, creates a new entry whose binder is NULL
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
const size_t N=mHandleToObject.size();
if (N <= (size_t)handle) {
handle_entry e;
e.binder = NULL;
e.refs = NULL;
status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
if (err < NO_ERROR) return NULL;
}
return &mHandleToObject.editItemAt(handle);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
// We need to create a new BpBinder if there isn't currently one, OR we
// are unable to acquire a weak reference on this current one. See comment
// in getWeakProxyForHandle() for more info about this.
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
// This little bit of nastyness is to allow us to add a primary
// reference to the remote proxy when this team doesn't have one
// but another team is sending the handle to us.
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
The first time getStrongProxyForHandle(0) is called, it necessarily takes the new BpBinder(handle) path and returns an sp<BpBinder>. BpBinder derives from IBinder (see the class diagram in reference 2), so ProcessState::getContextObject returns an sp<BpBinder>.
Back to gDefaultServiceManager = interface_cast<IServiceManager>( ProcessState::self()->getContextObject(NULL));
interface_cast is defined in frameworks/base/include/binder/IInterface.h:
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
}
This is a template function, so interface_cast<IServiceManager>() returns IServiceManager::asInterface(obj).
IInterface.h also defines the following macros:
//used for the declaration inside the class
#define DECLARE_META_INTERFACE(INTERFACE) \
static const android::String16 descriptor; \
static android::sp<I##INTERFACE> asInterface( \
const android::sp<android::IBinder>& obj); \
virtual const android::String16& getInterfaceDescriptor() const; \
I##INTERFACE(); \
virtual ~I##INTERFACE();
//used for the implementation
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
const android::String16 I##INTERFACE::descriptor(NAME); \
const android::String16& \
I##INTERFACE::getInterfaceDescriptor() const { \
return I##INTERFACE::descriptor; \
} \
android::sp<I##INTERFACE> I##INTERFACE::asInterface( \
const android::sp<android::IBinder>& obj) \
{ \
android::sp<I##INTERFACE> intr; \
if (obj != NULL) { \
intr = static_cast<I##INTERFACE*>( \
obj->queryLocalInterface( \
I##INTERFACE::descriptor).get()); \
if (intr == NULL) { \
intr = new Bp##INTERFACE(obj); \
} \
} \
return intr; \
} \
I##INTERFACE::I##INTERFACE() { } \
I##INTERFACE::~I##INTERFACE() { } \
The DECLARE_META_INTERFACE macro is placed inside the IServiceManager class in IServiceManager.h, and IServiceManager.cpp contains:
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
In other words, IServiceManager::descriptor = "android.os.IServiceManager", and two functions are added: getInterfaceDescriptor and asInterface.
Focus on asInterface. It calls queryLocalInterface on the obj parameter, an IBinder instance. From the analysis above, obj here is actually a BpBinder, which does not override queryLocalInterface; it inherits the base-class implementation, and IBinder::queryLocalInterface simply returns NULL. Following the remaining flow, asInterface therefore returns new BpServiceManager(obj). We will meet asInterface again later and see why it is written this way.
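To see this concretely, here is IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager") expanded by hand; this is a mechanical expansion of the macro above, shown only for readability:
const android::String16 IServiceManager::descriptor("android.os.IServiceManager");

const android::String16& IServiceManager::getInterfaceDescriptor() const {
    return IServiceManager::descriptor;
}

android::sp<IServiceManager> IServiceManager::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        // A local (same-process) binder returns the real object here;
        // a BpBinder returns NULL, so we wrap the handle in a proxy instead.
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}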
From the class diagram in reference 2 we can see that BpServiceManager derives from BpRefBase, and mRemote is a member of BpRefBase; the obj parameter is stored into mRemote here, and we will use this variable later.
class BpServiceManager is defined in IServiceManager.cpp and derives from BpInterface<IServiceManager>; BpInterface is defined in IInterface.h:
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
BpInterface(const sp<IBinder>& remote);
protected:
virtual IBinder* onAsBinder();
};
From the definition of BpInterface, BpInterface<IServiceManager> inherits from IServiceManager, so the sp<BpServiceManager> object that defaultServiceManager returns is compatible with sp<IServiceManager>.
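For orientation, the BpServiceManager declaration in IServiceManager.cpp looks roughly like this (a trimmed sketch; the method bodies are elided except for the constructor):
class BpServiceManager : public BpInterface<IServiceManager>
{
public:
    // 'impl' is the BpBinder for handle 0; BpRefBase stores it in mRemote.
    BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }

    virtual sp<IBinder> getService(const String16& name) const;
    virtual sp<IBinder> checkService(const String16& name) const;
    virtual status_t addService(const String16& name, const sp<IBinder>& service);
    virtual Vector<String16> listServices();
};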
Back in main, the next few calls all register services; we take MediaPlayerService as the example:
MediaPlayerService::instantiate();
This function is implemented in MediaPlayerService.cpp:
void MediaPlayerService::instantiate() {
defaultServiceManager()->addService(
String16("media.player"), new MediaPlayerService());
}
We know defaultServiceManager() returns an sp<BpServiceManager>; its addService is implemented in IServiceManager.cpp as follows:
virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
Parcel data, reply;
//write "android.os.IServiceManager"
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
//write service name
data.writeString16(name);
//write the service; we will see later exactly what gets written here
data.writeStrongBinder(service);
LOGI("Remote_transact add service");
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}
The key is the remote()->transact(...) call. First, remote(), i.e. BpServiceManager::remote(): as analyzed earlier, it returns the BpBinder, so BpBinder::transact will be called. That function is defined in BpBinder.cpp:
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// Once a binder has died, it will never come back to life.
if (mAlive) {
//mHandle is the 0 passed in when the BpBinder was constructed
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
IPCThreadState::self() returns the calling thread's unique IPCThreadState instance. It is per-thread, implemented via TLS (thread-local storage), so every thread has its own instance.
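A simplified sketch of that TLS pattern, based on IPCThreadState.cpp (the real code also handles process shutdown, and the constructor registers itself via pthread_setspecific):
static pthread_key_t gTLS;
static pthread_once_t gTLSOnce = PTHREAD_ONCE_INIT;

static void threadDestructor(void* st) { delete static_cast<IPCThreadState*>(st); }
static void makeKey() { pthread_key_create(&gTLS, threadDestructor); }

IPCThreadState* IPCThreadState::self()
{
    pthread_once(&gTLSOnce, makeKey);
    IPCThreadState* st = static_cast<IPCThreadState*>(pthread_getspecific(gTLS));
    if (st == NULL) {
        st = new IPCThreadState;        // in AOSP the constructor itself calls
        pthread_setspecific(gTLS, st);  // pthread_setspecific(gTLS, this)
    }
    return st;
}
With that in place, here is IPCThreadState::transact: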
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
LOGI("ipc_client pid=%d,handle=%d,code=%d,flags=%d\n",getpid(),handle,code,flags);
status_t err = data.errorCheck();
flags |= TF_ACCEPT_FDS;
IF_LOG_TRANSACTIONS() {
TextOutput::Bundle _b(alog);
alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
<< handle << " / code " << TypeCode(code) << ": "
<< indent << data << dedent << endl;
}
if (err == NO_ERROR) {
LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
(flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
LOGI("ipc_client pid=%d send cmd BC_TRANSACTION",getpid());
//packs the arguments into the mOut variable, to be sent later
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
if (err != NO_ERROR) {
if (reply) reply->setError(err);
return (mLastError = err);
}
if ((flags & TF_ONE_WAY) == 0) {
#if 0
if (code == 4) { // relayout
LOGI(">>>>>> CALLING transaction 4");
} else {
LOGI(">>>>>> CALLING transaction %d", code);
}
#endif
if (reply) {
//data is sent and received here
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
#if 0
if (code == 4) { // relayout
LOGI("<<<<<< RETURNING transaction 4");
} else {
LOGI("<<<<<< RETURNING transaction %d", code);
}
#endif
IF_LOG_TRANSACTIONS() {
TextOutput::Bundle _b(alog);
alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
<< handle << ": ";
if (reply) alog << indent << *reply << dedent << endl;
else alog << "(none requested)" << endl;
}
} else {
err = waitForResponse(NULL, NULL);
}
return err;
}
Two calls in IPCThreadState::transact deserve closer attention:
1. err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
This writes the data into mOut, a Parcel object (a sketch of exactly what it queues follows the waitForResponse listing below).
2. err = waitForResponse(reply);
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
int32_t cmd;
int32_t err;
while (1) {
//talkWithDriver is the function that actually sends and receives data
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = mIn.readInt32();
IF_LOG_COMMANDS() {
alog << "Processing waitForResponse Command: "
<< getReturnString(cmd) << endl;
}
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
if (!reply && !acquireResult) goto finish;
break;
case BR_REPLY:
...
goto finish;
...
default:
//execute the received command
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
if (err != NO_ERROR) {
if (acquireResult) *acquireResult = err;
if (reply) reply->setError(err);
mLastError = err;
}
return err;
}
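As promised, here is a sketch of what writeTransactionData queues into mOut, following the AOSP source with error paths trimmed:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
    tr.target.handle = handle;          // 0 == servicemanager
    tr.code = code;                     // e.g. ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;                  // filled in by the driver
    tr.sender_euid = 0;
    tr.data_size = data.ipcDataSize();  // payload: the flattened Parcel
    tr.data.ptr.buffer = data.ipcData();
    tr.offsets_size = data.ipcObjectsCount() * sizeof(size_t);
    tr.data.ptr.offsets = data.ipcObjects();

    mOut.writeInt32(cmd);               // BC_TRANSACTION
    mOut.write(&tr, sizeof(tr));        // followed by the transaction header
    return NO_ERROR;
}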
talkWithDriver is defined as follows:
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
LOG_ASSERT(mProcess->mDriverFD >= 0, "Binder driver is not opened");
...
do {
IF_LOG_COMMANDS() {
alog << "About to read/write, write size = " << mOut.dataSize() << endl;
}
#if defined(HAVE_ANDROID_OS)
//ioctl asks the kernel to send and receive data; note the fd here is mProcess->mDriverFD, which belongs to the process and is shared by its threads
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
#else
err = INVALID_OPERATION;
#endif
IF_LOG_COMMANDS() {
alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
}
} while (err == -EINTR);
...
return err;
}
talkWithDriver is straightforward: it uses the ioctl system call to send and receive in a single call. BC_XXX presumably denotes commands (sent to the driver), and BR_XXX responses (returned by the driver).
What follows is a series of command exchanges. Below is one exchange taken from the log; handle=0 means the target is servicemanager, and code=3 means addService:
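The heart of talkWithDriver, elided as '...' above, is filling in a binder_write_read descriptor that hands the driver both buffers at once; a rough sketch:
binder_write_read bwr;
// Only ask to read if we have fully consumed the previous input.
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

bwr.write_size   = outAvail;                          // outgoing BC_* commands
bwr.write_buffer = (long unsigned int)mOut.data();
if (doReceive && needRead) {
    bwr.read_size   = mIn.dataCapacity();             // room for incoming BR_*
    bwr.read_buffer = (long unsigned int)mIn.data();
} else {
    bwr.read_size   = 0;
    bwr.read_buffer = 0;
}
bwr.write_consumed = 0;
bwr.read_consumed  = 0;
// ...then the ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) loop above.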
I/IPCThreadState( 75): ipc_client pid=75,handle=0,code=3,flags=0
I/IPCThreadState( 75): ipc_client pid=75 send cmd BC_TRANSACTION
I/Binder ( 27): svc_server pid=27 recv BR_NOOP
I/Binder ( 27): svc_server pid=27 recv BR_TRANSACTION
I/ServiceManager( 27): svc_server pid=27 ipc_client pid=75 target=0x0 code=3 uid=1000
I/ServiceManager( 27): svc_server add_service('batteryinfo',0xa) uid=1000
I/Binder ( 27): svc_server pid=27 send cmd BC_ACQUIRE
I/Binder ( 27): svc_server pid=27 send cmd BC_REQUEST_DEATH_NOTIFICATION
I/Binder ( 27): svc_server pid=27 send cmd BC_FREE_BUFFER
I/Binder ( 27): svc_server pid=27 send cmd BC_REPLY,status=0
I/Binder ( 27): svc_server pid=27 recv BR_NOOP
I/Binder ( 27): svc_server pid=27 recv BR_TRANSACTION_COMPLETE
I/IPCThreadState( 75): ipc_client pid=75 recv cmd=BR_NOOP
I/IPCThreadState( 75): ipc_client pid=75 recv cmd=BR_INCREFS
I/IPCThreadState( 75): ipc_client pid=75 recv cmd=BR_ACQUIRE
I/IPCThreadState( 75): ipc_client pid=75 recv cmd=BR_TRANSACTION_COMPLETE
I/IPCThreadState( 75): ipc_client pid=75 recv cmd=BR_NOOP
I/IPCThreadState( 75): ipc_client pid=75 recv cmd=BR_REPLY
Note in particular that throughout this process, waitForResponse sleeps whenever it has no received response to handle, and only exits its loop and returns once we receive the REPLY indicating that addService succeeded. addService is therefore a synchronous function.
This completes the analysis of the MediaPlayerService::instantiate() call. The question now is: who are we actually talking to?
III. servicemanager
service_manager is the process we have been talking to. It is built from service_manager.c, and its main function is as follows:
int main(int argc, char **argv)
{
struct binder_state *bs;
void *svcmgr = BINDER_SERVICE_MANAGER;
bs = binder_open(128*1024);
//this ioctl system call tells the kernel that servicemanager's handle is 0, i.e. the handle used by BpBinder
if ( binder_become_context_manager(bs)) {
LOGE("cannot become context manager (%s)\n", strerror(errno));
return -1;
}
svcmgr_handle = svcmgr;
binder_loop(bs, svcmgr_handler);
return 0;
}
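binder_become_context_manager itself (in binder.c) is a single ioctl:
int binder_become_context_manager(struct binder_state *bs)
{
    // Register this process as the context manager, i.e. the object that
    // handle 0 refers to everywhere else in the system.
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}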
binder_open is defined as follows:
struct binder_state *binder_open(unsigned mapsize)
{
struct binder_state *bs;
bs = malloc(sizeof(*bs));
if (!bs) {
errno = ENOMEM;
return 0;
}
//open the binder device
bs->fd = open("/dev/binder", O_RDWR);
if (bs->fd < 0) {
fprintf(stderr,"binder: cannot open device (%s)\n",
strerror(errno));
goto fail_open;
}
bs->mapsize = mapsize;
//map the binder fd into this process's address space
bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
if (bs->mapped == MAP_FAILED) {
fprintf(stderr,"binder: cannot map device (%s)\n",
strerror(errno));
goto fail_map;
}
/* TODO: check version */
return bs;
fail_map:
close(bs->fd);
fail_open:
free(bs);
return 0;
}
Now look at binder_loop:
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
unsigned readbuf[32];
bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;
readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(unsigned));
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (unsigned) readbuf;
//again, ioctl is used to send and receive data
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
if (res < 0) {
LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
break;
}
//binder_parse parses the received data
res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
if (res == 0) {
LOGE("binder_loop: unexpected reply?!\n");
break;
}
if (res < 0) {
LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
break;
}
}
}
What stands out in binder_parse is that it invokes the func callback and binder_send_reply. binder_send_reply sends the reply back; the func callback is the svcmgr_handler installed in main, defined as follows:
int svcmgr_handler(struct binder_state *bs,
struct binder_txn *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
unsigned len;
void *ptr;
uint32_t strict_policy;
LOGI("svc_server pid=%d ipc_client pid=%d target=%p code=%d uid=%d\n",getpid(),
txn->sender_pid,txn->target, txn->code, txn->sender_euid);
if (txn->target != svcmgr_handle)
return -1;
// Equivalent to Parcel::enforceInterface(), reading the RPC
// header with the strict mode policy mask and the interface name.
// Note that we ignore the strict_policy and don't propagate it
// further (since we do no outbound RPCs anyway).
strict_policy = bio_get_uint32(msg);
s = bio_get_string16(msg, &len);
if ((len != (sizeof(svcmgr_id) / 2)) ||
memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
fprintf(stderr,"invalid id %s\n", str8(s));
return -1;
}
switch(txn->code) {
case SVC_MGR_GET_SERVICE:
case SVC_MGR_CHECK_SERVICE:
s = bio_get_string16(msg, &len);
ptr = do_find_service(bs, s, len);
if (!ptr)
break;
bio_put_ref(reply, ptr);
return 0;
case SVC_MGR_ADD_SERVICE:
s = bio_get_string16(msg, &len);
ptr = bio_get_ref(msg);
if (do_add_service(bs, s, len, ptr, txn->sender_euid))
return -1;
break;
case SVC_MGR_LIST_SERVICES: {
unsigned n = bio_get_uint32(msg);
si = svclist;
while ((n-- > 0) && si)
si = si->next;
if (si) {
bio_put_string16(reply, si->name);
return 0;
}
return -1;
}
default:
LOGE("unknown code %d\n", txn->code);
return -1;
}
bio_put_uint32(reply, 0);
return 0;
}
From this we can clearly see that add_service is handed off to do_add_service, which simply adds a new node to a list.
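For reference, here is a trimmed sketch of do_add_service and the svcinfo list it maintains, based on service_manager.c; the svc_can_register permission check and death-notification bookkeeping are omitted:
struct svcinfo {
    struct svcinfo *next;
    void *ptr;            // the binder reference received via bio_get_ref()
    unsigned len;
    uint16_t name[0];     // UTF-16 service name follows the struct
};

struct svcinfo *svclist;  // head of the singly linked list

int do_add_service(struct binder_state *bs, uint16_t *s, unsigned len,
                   void *ptr, unsigned uid)
{
    struct svcinfo *si = find_svc(s, len);   // already registered?
    if (si) {
        si->ptr = ptr;   // (the real code also releases the old reference)
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) return -1;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->ptr = ptr;
        si->next = svclist;                  // push onto the list head
        svclist = si;
    }
    binder_acquire(bs, ptr);                 // keep a strong ref on the service
    return 0;
}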
IV. Coming full circle: IServiceManager revisited
Let us now review how a service registers itself with servicemanager via binder:
1. Every process has one ProcessState instance, which holds an fd for the /dev/binder device; through it the process's threads can talk to servicemanager. The ProcessState instance also keeps a table mapping handles to BpBinder objects, so a handle can be resolved to its BpBinder.
2. Through defaultServiceManager, every process obtains a BpServiceManager instance, which wraps a BpBinder: the one with handle 0 in the ProcessState table.
3. The current thread's IPCThreadState uses the process-held binder fd and the ioctl system call to interact with the server side that the BpBinder's handle refers to.
4. Keep the two layers distinct: BC_TRANSACTION and BR_REPLY are commands of the binder mechanism itself, whereas ADD_SERVICE and CHECK_SERVICE are codes agreed between client and server and have nothing to do with binder.
Looking again at BpServiceManager in IServiceManager.cpp: it is the proxy class for servicemanager and runs in the client process. A process that wants to offer a service first obtains a BpServiceManager instance through defaultServiceManager. BpServiceManager provides the getService, checkService, addService, and listServices operations, and all of them are implemented through the BpBinder it contains. In this sense BpBinder encapsulates the Binder transport, and the client side uses it to communicate with the server side.
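To make the client path concrete, this is roughly how BpServiceManager implements checkService, mirroring the addService shown earlier (a sketch following IServiceManager.cpp):
virtual sp<IBinder> checkService(const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    // Same path as addService: BpBinder(handle 0) -> IPCThreadState -> ioctl.
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    // The reply carries the service's binder reference (cf. bio_put_ref()
    // on the servicemanager side); unflatten it back into an IBinder.
    return reply.readStrongBinder();
}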
V. Service
So far we have covered communication between a service and servicemanager. There is another side to this: how does an application, acting as client, communicate with a service acting as server?
Look again at MediaPlayerService::instantiate():
void MediaPlayerService::instantiate() {
defaultServiceManager()->addService(
String16("media.player"), new MediaPlayerService());
}
The prototype of addService is:
virtual status_t addService(const String16& name, const sp<IBinder>& service)
As we understand it, this function registers a service with servicemanager: name is the service's name, and service is the service itself, the instance clients will talk to. So what exactly is this service? Consider the MediaPlayerService class, defined in MediaPlayerService.h. From the class diagram in reference 2, MediaPlayerService derives from BnMediaPlayerService, which derives from BnInterface<IMediaPlayerService> (defined in IMediaPlayerService.h). BnInterface, like BpInterface, is defined in IInterface.h and multiply inherits from IMediaPlayerService and BBinder; IMediaPlayerService in turn derives from IInterface. BBinder is the server-side counterpart, defined in Binder.h; it is responsible for receiving the client's requests, which shows in its transact implementation.
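For comparison with the BpInterface shown earlier, BnInterface is declared in IInterface.h as follows:
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
    // Unlike BpBinder, a Bn object can answer queryLocalInterface() with
    // itself, because the real implementation lives in this process.
    virtual sp<IInterface> queryLocalInterface(const String16& _descriptor);
    virtual const String16& getInterfaceDescriptor() const;
protected:
    virtual IBinder* onAsBinder();
};
BBinder::transact itself looks like this: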
status_t BBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
data.setDataPosition(0);
status_t err = NO_ERROR;
switch (code) {
case PING_TRANSACTION:
reply->writeInt32(pingBinder());
break;
default:
err = onTransact(code, data, reply, flags);
break;
}
if (reply != NULL) {
reply->setDataPosition(0);
}
return err;
}
onTransact is a virtual function that derived classes may override to handle their own codes. The question now becomes: what drives BBinder::transact? Where is it called from? The answer lies at the end of main in main_mediaserver.cpp:
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
Besides the main thread, this starts one more thread. Each such thread calls talkWithDriver and then executeCommand; when executeCommand handles BR_TRANSACTION, it contains the following fragment:
if (tr.target.ptr) {
sp<BBinder> b((BBinder*)tr.cookie);
const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
if (error < NO_ERROR) reply.setError(error);
} else {
const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
if (error < NO_ERROR) reply.setError(error);
}
if ((tr.flags & TF_ONE_WAY) == 0) {
LOG_ONEWAY("Sending reply to %d!", mCallingPid);
sendReply(reply, 0);
} else {
LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
}
In other words, talkWithDriver sends the current thread's commands to the driver and receives the responses, which are handed to executeCommand. While handling BR_TRANSACTION, it recovers the BBinder from the transaction data and calls its transact; this is how the server side handles a client's request. sendReply then sends the result back to the client, and from its implementation we can see that it actually issues a BC_REPLY command:
status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
status_t err;
status_t statusBuffer;
err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
if (err < NO_ERROR) return err;
return waitForResponse(NULL, NULL);
}
This is still the mediaserver process. Compare with the registration flow: there, BpServiceManager acted as the client and used a BpBinder to talk to servicemanager; here, the Service acts as the server and uses a BBinder to talk to its clients.
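To illustrate what a Bn-side onTransact typically looks like, here is a handler for a made-up service; the names (BnMyService, IMyService, HELLO_TRANSACTION, sayHello) are hypothetical, but the shape follows the BnXxx classes found throughout the framework:
// Hypothetical BnMyService::onTransact; the names are illustrative only.
status_t BnMyService::onTransact(uint32_t code, const Parcel& data,
                                 Parcel* reply, uint32_t flags)
{
    switch (code) {
    case HELLO_TRANSACTION: {
        // Verify the caller wrote the expected interface token first,
        // matching the writeInterfaceToken() call on the proxy side.
        CHECK_INTERFACE(IMyService, data, reply);
        String16 who = data.readString16();
        reply->writeInt32(sayHello(who));   // marshal the result back
        return NO_ERROR;
    }
    default:
        // Unknown codes fall through to BBinder; e.g. PING_TRANSACTION is
        // already handled in BBinder::transact above.
        return BBinder::onTransact(code, data, reply, flags);
    }
}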
It is the handle that identifies the two endpoints, sender and destination, of a communication.
VI. Binder's layered structure
VII. Binder in practice
References
1. File list
frameworks/base/media/mediaserver/main_mediaserver.cpp
frameworks/base/include/binder/IServiceManager.h
frameworks/base/libs/binder/IServiceManager.cpp
frameworks/base/libs/binder/ProcessState.cpp
frameworks/base/include/binder/ProcessState.h
frameworks/base/include/binder/IInterface.h
frameworks/base/media/libmediaplayerservice/MediaPlayerService.cpp
frameworks/base/libs/binder/BpBinder.cpp
frameworks/base/include/binder/BpBinder.h
frameworks/base/libs/binder/IPCThreadState.cpp
frameworks/base/include/binder/IPCThreadState.h
frameworks/base/cmds/servicemanager/service_manager.c
frameworks/base/cmds/servicemanager/binder.c
frameworks/base/media/libmediaplayerservice/MediaPlayerService.h
frameworks/base/include/binder/Binder.h
2. Binder class diagram