Link to the series outline: Topic Outline — Android Framework Overview
Summary & notes on this chapter's key points:
This chapter focuses on the TestServer part of the Binder C++ code. We analyze its five key points and thereby gain a deeper understanding of two key classes, ProcessState and IPCThreadState.
Analysis of TestServer key points
Here we walk through the core code of TestServer, organized around five key points:
int main(void)
{
//...
// Key point 1: initialize binder
sp<ProcessState> proc(ProcessState::self());
// Key point 2: obtain the BpServiceManager
sp<IServiceManager> sm = defaultServiceManager();
// Key point 3: add the service
sm->addService(String16("hello"), new BnHelloService(sockets[1]));
// Key point 4: spawn a new thread and start handling messages reported by the driver
ProcessState::self()->startThreadPool();
// Key point 5: the main thread loops and handles messages reported by the driver
IPCThreadState::self()->joinThreadPool(); // analyzed via the helper class IPCThreadState
return 0;
}
1 Starting the analysis from the use of ProcessState
In the TestServer from the earlier C++ demo, the first place ProcessState is used is this line:
sp<ProcessState> proc(ProcessState::self());
ProcessState uses the singleton pattern here; its self() method is implemented as follows:
sp<ProcessState> ProcessState::self()
{
Mutex::Autolock _l(gProcessMutex);
if (gProcess != NULL) {
return gProcess;
}
gProcess = new ProcessState;
return gProcess;
}
The singleton pattern ensures that a process has only one ProcessState object. When the ProcessState object is created its constructor runs first, so let's analyze the constructor:
ProcessState::ProcessState()
: mDriverFD(open_driver()) /* open the binder driver here */
, mVMStart(MAP_FAILED) /* initialize the mmap address to MAP_FAILED */
, mManagesContexts(false)
, mBinderContextCheckFunc(NULL)
, mBinderContextUserData(NULL)
, mThreadPoolStarted(false)
, mThreadPoolSeq(1)
{
if (mDriverFD >= 0) {
// XXX Ideally, there should be a specific define for whether we
// have mmap (or whether we could possibly have the kernel module
// available).
#if !defined(HAVE_WIN32_IPC)
// mmap the binder, providing a chunk of virtual address space to receive transactions.
mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
if (mVMStart == MAP_FAILED) {
// *sigh*
ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
close(mDriverFD);
mDriverFD = -1;
}
#else
mDriverFD = -1;
#endif
}
LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}
One key step here is opening the binder driver; let's continue with open_driver():
static int open_driver()
{
int fd = open("/dev/binder", O_RDWR);
if (fd >= 0) {
fcntl(fd, F_SETFD, FD_CLOEXEC);
int vers;
status_t result = ioctl(fd, BINDER_VERSION, &vers); /* query the driver's protocol version */
if (result == -1) {
ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
close(fd);
fd = -1;
}
if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) { /* the protocol versions must match */
ALOGE("Binder driver protocol does not match user space protocol!");
close(fd);
fd = -1;
}
size_t maxThreads = 15; /* tell the driver that the maximum number of binder threads is 15 */
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
if (result == -1) {
ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
}
} else {
ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
}
return fd;
}
This concludes this part. To summarize, ProcessState::self() does the following:
- Obtains the object via the singleton pattern, guaranteeing a process opens the device only once
- Opens the binder device, with a check that the driver's protocol version matches
- Calls mmap to map a block of memory for receiving transaction data (the mapping size is BINDER_VM_SIZE, shown below)
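For reference, BINDER_VM_SIZE in this generation of ProcessState.cpp is defined as roughly 1 MB minus two pages (reproduced from memory, so treat it as approximate):
#define BINDER_VM_SIZE ((1*1024*1024) - (4096*2))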
2 Obtaining the ServiceManager
sp<IServiceManager> sm = defaultServiceManager();
2.1 The implementation of defaultServiceManager() is as follows:
sp<IServiceManager> defaultServiceManager()
{
if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
{
AutoMutex _l(gDefaultServiceManagerLock);
while (gDefaultServiceManager == NULL) {
/* this is the key step in obtaining gDefaultServiceManager */
gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL));
if (gDefaultServiceManager == NULL)
sleep(1);
}
}
return gDefaultServiceManager;
}
defaultServiceManager() also uses the singleton pattern when obtaining gDefaultServiceManager.
The key call is ProcessState::self()->getContextObject(NULL); let's analyze getContextObject, shown below:
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
return getStrongProxyForHandle(0);
}
Continuing with getStrongProxyForHandle, the code is as follows:
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
IBinder* b = e->binder;
/* binder is NULL only for a newly created entry */
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
/* special handling when handle is 0: ping the target first, and only use it if the ping succeeds */
Parcel data;
status_t status = IPCThreadState::self()->transact(0, IBinder::PING_TRANSACTION, data, NULL, 0);
if (status == DEAD_OBJECT)
return NULL;
}
/* create a BpBinder object and fill it into the handle_entry */
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
lookupHandleLocked looks up the entry corresponding to the given handle; if none exists, it creates a new entry and returns it (note: NULL is returned only on error), so under normal circumstances it always returns a non-NULL handle_entry. Its implementation is as follows:
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
const size_t N=mHandleToObject.size();
if (N <= (size_t)handle) {
handle_entry e;
e.binder = NULL;
e.refs = NULL;
status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
if (err < NO_ERROR) return NULL;
}
return &mHandleToObject.editItemAt(handle);
}
getStrongProxyForHandle returns a BpBinder object, so interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL)); is equivalent to interface_cast<IServiceManager>(new BpBinder(0));
Some further notes on BpBinder and BBinder:
- Both BpBinder and BBinder derive from IBinder
- BpBinder is the proxy class the client uses to interact with the server side, while BBinder represents the server side
- BpBinder and BBinder have a many-to-one relationship: a BpBinder can only interact with its corresponding BBinder (this becomes clear once you see that BpBinder is initialized with a handle)
- BpBinder initializes itself with a handle, and BBinders are distinguished by their handles, i.e. a BpBinder is bound to its BBinder through the handle
2.2 The BpBinder constructor:
BpBinder::BpBinder(int32_t handle)
: mHandle(handle)
, mAlive(1)
, mObitsSent(0)
, mObituaries(NULL)
{
extendObjectLifetime(OBJECT_LIFETIME_WEAK);
IPCThreadState::self()->incWeakHandle(handle);
}
One thing stands out: BpBinder does not interact with the binder driver here, even though communication necessarily requires talking to the driver. (In fact, the key point of interaction with the driver is talkWithDriver in IPCThreadState, analyzed further below.)
2.3 A closer look at interface_cast
interface_cast is implemented as follows:
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
}
The return value is the key, and what follows focuses on the internal implementation of asInterface. Since interface_cast is a function template, substituting IServiceManager for the template parameter gives the equivalent:
inline sp< IServiceManager > interface_cast(const sp<IBinder>& obj)
{
return IServiceManager::asInterface(obj);
}
A note here: BpBinder and BBinder implement the communication logic, while the business logic lives mainly in IServiceManager. The IServiceManager code is as follows:
class IServiceManager : public IInterface
{
public:
DECLARE_META_INTERFACE(ServiceManager);
virtual sp<IBinder> getService( const String16& name) const = 0;
virtual sp<IBinder> checkService( const String16& name) const = 0;
virtual status_t addService( const String16& name,const sp<IBinder>& service,bool allowIsolated = false) = 0;
/**
* Return list of all existing services.
*/
virtual Vector<String16> listServices() = 0;
enum {
GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
CHECK_SERVICE_TRANSACTION,
ADD_SERVICE_TRANSACTION,
LIST_SERVICES_TRANSACTION,
};
};
This is where communication and business logic are hooked together, and the key lies in two macros: DECLARE_META_INTERFACE and IMPLEMENT_META_INTERFACE, explained below.
DECLARE_META_INTERFACE is implemented as follows:
#define DECLARE_META_INTERFACE(INTERFACE) \
static const android::String16 descriptor; \
static android::sp<I##INTERFACE> asInterface( \
const android::sp<android::IBinder>& obj); \
virtual const android::String16& getInterfaceDescriptor() const; \
I##INTERFACE(); \
virtual ~I##INTERFACE();
Substituting ServiceManager for INTERFACE, the expansion looks like this:
#define DECLARE_META_INTERFACE(ServiceManager) \
static const android::String16 descriptor; \
static android::sp<IServiceManager > asInterface( \
const android::sp<android::IBinder>& obj); \
virtual const android::String16& getInterfaceDescriptor() const; \
IServiceManager(); \
virtual ~IServiceManager();
DECLARE_META_INTERFACE declares these members, while IMPLEMENT_META_INTERFACE provides their implementation. The IMPLEMENT_META_INTERFACE code is as follows:
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
const android::String16 I##INTERFACE::descriptor(NAME); \
const android::String16& \
I##INTERFACE::getInterfaceDescriptor() const { \
return I##INTERFACE::descriptor; \
} \
android::sp<I##INTERFACE> I##INTERFACE::asInterface( \
const android::sp<android::IBinder>& obj) \
{ \
android::sp<I##INTERFACE> intr; \
if (obj != NULL) { \
intr = static_cast<I##INTERFACE*>( \
obj->queryLocalInterface( \
I##INTERFACE::descriptor).get()); \
if (intr == NULL) { \
intr = new Bp##INTERFACE(obj); \
} \
} \
return intr; \
} \
I##INTERFACE::I##INTERFACE() { } \
I##INTERFACE::~I##INTERFACE() { } \
Substituting ServiceManager for INTERFACE and "android.os.IServiceManager" for NAME gives:
#define IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager") \
const android::String16 IServiceManager::descriptor("android.os.IServiceManager");\
const android::String16& \
IServiceManager::getInterfaceDescriptor() const { \
return IServiceManager::descriptor; \
} \
android::sp<IServiceManager > IServiceManager::asInterface( \
const android::sp<android::IBinder>& obj) \
{ \
android::sp<IServiceManager> intr; \
if (obj != NULL) { \
intr = static_cast<IServiceManager *>( \
obj->queryLocalInterface( \
IServiceManager::descriptor).get()); \
if (intr == NULL) { \
intr = new BpServiceManager(obj); \
} \
} \
return intr; \
} \
IServiceManager::IServiceManager() { } \
IServiceManager::~IServiceManager() { } \
At this point the implementation of IServiceManager::asInterface is laid bare. What it does is create a BpServiceManager object from the BpBinder object passed in.
So interface_cast<IServiceManager>(new BpBinder(0)); is now equivalent to new BpServiceManager(new BpBinder(0));. Here is an overview of the IServiceManager family (a small illustrative sketch follows this list):
- IServiceManager, BpServiceManager, and BnServiceManager are all concerned with the business logic
- BnServiceManager inherits from both BBinder and IServiceManager and can communicate with the binder device directly
- BnServiceManager is an abstract class; a subclass must implement the corresponding functionality
- BpServiceManager inherits from BpInterface, which in turn inherits from BpRefBase; BpRefBase's mRemote member is of IBinder type, which is where BpBinder comes in
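To make the Bp/Bn split concrete, here is a minimal, purely illustrative sketch of what a hypothetical IHelloService (matching the demo's BnHelloService in spirit) could look like. The class names, transaction code, and descriptor string are assumptions for illustration, not the demo's actual code:
class IHelloService : public IInterface
{
public:
    DECLARE_META_INTERFACE(HelloService);
    virtual void sayHello() = 0;
    enum { HELLO_SVR_CMD_SAYHELLO = IBinder::FIRST_CALL_TRANSACTION };
};
/* Business-side proxy: packs the request and hands it to mRemote (a BpBinder) */
class BpHelloService : public BpInterface<IHelloService>
{
public:
    BpHelloService(const sp<IBinder>& impl) : BpInterface<IHelloService>(impl) {}
    virtual void sayHello()
    {
        Parcel data, reply;
        data.writeInterfaceToken(IHelloService::getInterfaceDescriptor());
        remote()->transact(HELLO_SVR_CMD_SAYHELLO, data, &reply);
    }
};
/* Server side: BnInterface<IHelloService> derives from BBinder; onTransact unpacks the Parcel and dispatches to the business method */
class BnHelloService : public BnInterface<IHelloService>
{
public:
    virtual status_t onTransact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
        switch (code) {
        case HELLO_SVR_CMD_SAYHELLO:
            CHECK_INTERFACE(IHelloService, data, reply);
            sayHello();
            reply->writeInt32(0); /* no exception */
            return NO_ERROR;
        default:
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};
/* In the .cpp file, this generates descriptor / getInterfaceDescriptor() / asInterface() */
IMPLEMENT_META_INTERFACE(HelloService, "android.test.IHelloService");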
2.4 Analyzing BpServiceManager
Looking at the BpServiceManager implementation:
class BpServiceManager : public BpInterface<IServiceManager>
{
public:
BpServiceManager(const sp<IBinder>& impl)
: BpInterface<IServiceManager>(impl) /* calls the base class constructor */
{
}
...
};
Next, look at the BpInterface implementation:
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase /* inherits both the business interface and BpRefBase */
{
public:
BpInterface(const sp<IBinder>& remote);
protected:
virtual IBinder* onAsBinder();
};
Next, the BpRefBase constructor:
BpRefBase::BpRefBase(const sp<IBinder>& o)
: mRemote(o.get()), mRefs(NULL), mState(0)
{
extendObjectLifetime(OBJECT_LIFETIME_WEAK);
if (mRemote) {
mRemote->incStrong(this); // Removed on first IncStrong().
mRefs = mRemote->createWeak(this); // Held for our entire lifetime.
}
}
At this point, the mRemote member inside BpServiceManager points to the BpBinder. BpServiceManager implements the IServiceManager business logic with BpBinder acting as its representative for communication, so both the communication and the business platforms are now in place.
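For completeness, here is a hedged sketch of how a client would later look up the service registered by this TestServer, going through the same BpServiceManager and interface_cast machinery (IHelloService and sayHello are the hypothetical names from the sketch above):
sp<IServiceManager> sm = defaultServiceManager(); /* a BpServiceManager wrapping new BpBinder(0) */
sp<IBinder> binder = sm->getService(String16("hello")); /* a BpBinder referring to the "hello" service */
sp<IHelloService> hello = interface_cast<IHelloService>(binder); /* wraps it in a BpHelloService */
hello->sayHello(); /* business call, which ends up in BpBinder::transact() */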
3 Adding a service: flow analysis
sm->addService(String16("hello"), new BnHelloService(sockets[1]));
Here sm is the BpServiceManager object; let's continue with addService:
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated)
{
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
data.writeStrongBinder(service);
data.writeInt32(allowIsolated ? 1 : 0);
/* remote() returns the mRemote member, i.e. BpBinder(0) */
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}
A few notes:
- addService is a business-layer function; the communication work is handed off to BpBinder
- The difference between the business layer and the communication layer: the business layer packs the request information, while the communication layer transmits it
The key point of addService is therefore BpBinder's transact method, so next we analyze the communication layer's work.
3.1 The communication layer: BpBinder's transport
As noted earlier, BpBinder contains no code that talks to the binder device directly, so let's look at its transact method:
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// Once a binder has died, it will never come back to life.
if (mAlive) {
status_t status = IPCThreadState::self()->transact(mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
Next we analyze IPCThreadState::self()->transact(); but first, self() and the constructor:
IPCThreadState* IPCThreadState::self()
{
if (gHaveTLS) { /* false the first time through */
restart:
const pthread_key_t k = gTLS;
/* fetch this thread's IPCThreadState via pthread_getspecific */
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
/* if non-NULL return it directly; otherwise create a new IPCThreadState */
if (st) return st;
return new IPCThreadState;
}
if (gShutdown) return NULL;
pthread_mutex_lock(&gTLSMutex);
if (!gHaveTLS) {
if (pthread_key_create(&gTLS, threadDestructor) != 0) {
pthread_mutex_unlock(&gTLSMutex);
return NULL;
}
gHaveTLS = true;
}
pthread_mutex_unlock(&gTLSMutex);
goto restart;
}
A note on TLS (Thread Local Storage); a minimal standalone sketch follows this list:
- Each thread has its own slot, and these slots are not shared between threads
- The contents are read/written via pthread_getspecific/pthread_setspecific
- pthread_getspecific is what returns the thread's IPCThreadState object
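The following is a minimal standalone sketch (plain pthread code, not Android source) of the same TLS pattern that IPCThreadState::self() relies on; perThreadState and the malloc'd slot are stand-ins for illustration:
#include <pthread.h>
#include <stdlib.h>

static pthread_key_t gKey;
static pthread_once_t gOnce = PTHREAD_ONCE_INIT;

static void makeKey() { pthread_key_create(&gKey, free); /* destructor runs per thread at exit */ }

void* perThreadState()
{
    pthread_once(&gOnce, makeKey);
    void* st = pthread_getspecific(gKey); /* NULL the first time this thread calls in */
    if (st == NULL) {
        st = malloc(sizeof(int)); /* stand-in for "new IPCThreadState" */
        pthread_setspecific(gKey, st); /* IPCThreadState does this in its own constructor */
    }
    return st; /* the same pointer for this thread from now on */
}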
3.2 The IPCThreadState constructor
The code is as follows:
IPCThreadState::IPCThreadState()
: mProcess(ProcessState::self()),
mMyThreadId(androidGetTid()),
mStrictModePolicy(0),
mLastTransactionBinderFlags(0)
{
/* in the constructor, store this object into TLS */
pthread_setspecific(gTLS, this);
clearCaller();
/* the receive (mIn) and send (mOut) command buffers */
mIn.setDataCapacity(256);
mOut.setDataCapacity(256);
}
Some notes on IPCThreadState:
- Each thread has its own IPCThreadState object, and each IPCThreadState has its own mIn and mOut Parcels
- mIn receives the data coming from the binder device; mOut holds the data to be sent to the binder device
3.3 The transact method
The code is as follows:
status_t IPCThreadState::transact(int32_t handle,uint32_t code, const Parcel& data,Parcel* reply, uint32_t flags)
{
status_t err = data.errorCheck();
flags |= TF_ACCEPT_FDS;
if (err == NO_ERROR) {
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
if (err != NO_ERROR) {
if (reply) reply->setError(err);
return (mLastError = err);
}
if ((flags & TF_ONE_WAY) == 0) {
/* synchronous call: wait for the reply (use a throw-away Parcel if the caller passed none) */
if (reply) {
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
} else {
/* one-way call: only wait for the driver to acknowledge the command */
err = waitForResponse(NULL, NULL);
}
return err;
}
The two key methods here are writeTransactionData (builds the data to send) and waitForResponse (sends the message and waits for the reply). Let's analyze them in turn. writeTransactionData writes the transaction data; its implementation:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr; /* the structure used to communicate with the binder device */
tr.target.handle = handle; /* the handle identifies the target, i.e. the destination of the transaction */
tr.code = code;
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = statusBuffer;
tr.offsets_size = 0;
tr.data.ptr.offsets = NULL;
} else {
return (mLastError = err);
}
/* write the data into mOut; nothing is sent yet */
mOut.writeInt32(cmd);
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
Next, the part that sends the request and receives the reply; waitForResponse is implemented as follows:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
int32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break; /* talkWithDriver talks to the underlying binder device and sends the request */
/* everything below handles the reply once it arrives */
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = mIn.readInt32();
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
if (!reply && !acquireResult) goto finish;
break;
...
case BR_DEAD_REPLY:
...
case BR_FAILED_REPLY:
...
case BR_ACQUIRE_RESULT:
...
case BR_REPLY:
...
default:
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
if (err != NO_ERROR) {
if (acquireResult) *acquireResult = err;
if (reply) reply->setError(err);
mLastError = err;
}
return err;
}
If a reply comes back right after the request is sent, waitForResponse handles part of it, but the main work is delegated to executeCommand.
executeCommand is implemented as follows:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
BBinder* obj;
RefBase::weakref_type* refs;
status_t result = NO_ERROR;
switch (cmd) {
case BR_ERROR:
...
case BR_TRANSACTION:
{
binder_transaction_data tr;
result = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(result == NO_ERROR,"Not enough command data for brTRANSACTION");
if (result != NO_ERROR) break;
Parcel buffer;
buffer.ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(size_t), freeBuffer, this);
const pid_t origPid = mCallingPid;
const uid_t origUid = mCallingUid;
mCallingPid = tr.sender_pid;
mCallingUid = tr.sender_euid;
int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
if (gDisableBackgroundScheduling) {
if (curPrio > ANDROID_PRIORITY_NORMAL) {
setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
}
} else {
if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
set_sched_policy(mMyThreadId, SP_BACKGROUND);
}
}
Parcel reply;
if (tr.target.ptr) {
sp<BBinder> b((BBinder*)tr.cookie);
const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
if (error < NO_ERROR) reply.setError(error);
} else {
/* the_context_object is a global in IPCThreadState, set via setTheContextObject() */
const status_t error = the_context_object->transact
(tr.code, buffer, &reply, tr.flags);
if (error < NO_ERROR) reply.setError(error);
}
if ((tr.flags & TF_ONE_WAY) == 0) {
sendReply(reply, 0);
}
mCallingPid = origPid;
mCallingUid = origUid;
}
break;
case BR_DEAD_BINDER:
{ /* the binder driver reports that a service has died; only the Bp side receives this */
BpBinder *proxy = (BpBinder*)mIn.readInt32();
proxy->sendObituary();
mOut.writeInt32(BC_DEAD_BINDER_DONE);
mOut.writeInt32((int32_t)proxy);
} break;
...
case BR_SPAWN_LOOPER:
/* on the driver's instruction, spawn a new thread to communicate with binder */
mProcess->spawnPooledThread(false);
break;
default:
printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
result = UNKNOWN_ERROR;
break;
}
if (result != NO_ERROR) {
mLastError = result;
}
return result;
}
Interaction with the driver: here we analyze talkWithDriver, since it is the foundation of all interaction with the lower layer. Its implementation:
status_t IPCThreadState::talkWithDriver(bool doReceive) /* note: doReceive defaults to true when no argument is given */
{
if (mProcess->mDriverFD <= 0) {
return -EBADF;
}
binder_write_read bwr; /* build the argument passed to the binder device */
// Is the read buffer empty?
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
/* fill in the outgoing (write) half of the request */
bwr.write_size = outAvail;
bwr.write_buffer = (long unsigned int)mOut.data();
// This is what we'll read.
if (doReceive && needRead) {
/* set up the receive buffer; any incoming data will be written into it */
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (long unsigned int)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
// Return immediately if there is nothing to do.
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
if (mProcess->mDriverFD <= 0) {
err = -EBADF;
}
} while (err == -EINTR);
if (err >= NO_ERROR) {
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < (ssize_t)mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else
mOut.setDataSize(0);
}
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
return NO_ERROR;
}
return err;
}
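For reference, the bwr argument handed to the BINDER_WRITE_READ ioctl has roughly the following layout (the exact field types vary by kernel version; older headers use plain signed/unsigned long, newer UAPI headers use binder_size_t/binder_uintptr_t):
struct binder_write_read {
    binder_size_t write_size; /* bytes available in write_buffer (from mOut) */
    binder_size_t write_consumed; /* bytes the driver actually consumed */
    binder_uintptr_t write_buffer; /* points at mOut.data() */
    binder_size_t read_size; /* capacity of read_buffer (mIn.dataCapacity()) */
    binder_size_t read_consumed; /* bytes the driver wrote back */
    binder_uintptr_t read_buffer; /* points at mIn.data() */
};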
4 Creating a new thread
ProcessState::self()->startThreadPool();
startThreadPool is implemented as follows:
void ProcessState::startThreadPool()
{
AutoMutex _l(mLock);
if (!mThreadPoolStarted) {
mThreadPoolStarted = true; /* a process can start the thread pool only once */
spawnPooledThread(true);
}
}
Continuing with spawnPooledThread:
void ProcessState::spawnPooledThread(bool isMain)
{
if (mThreadPoolStarted) {
String8 name = makeBinderThreadName();
sp<Thread> t = new PoolThread(isMain); /* for this analysis, isMain is true */
t->run(name.string());
}
}
PoolThread is a class defined inside ProcessState; its implementation is as follows:
class PoolThread : public Thread
{
public:
PoolThread(bool isMain)
: mIsMain(isMain)
{
}
protected:
virtual bool threadLoop()
{
/* note: every newly created pool thread runs this method */
IPCThreadState::self()->joinThreadPool(mIsMain);
return false;
}
const bool mIsMain;
};
So ProcessState creates a new thread, and that thread in turn runs IPCThreadState's joinThreadPool.
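As a side note, the thread name passed to run() comes from makeBinderThreadName(); in this generation of the source it looks roughly like the following (reproduced from memory, so treat it as approximate):
String8 ProcessState::makeBinderThreadName() {
    int32_t s = android_atomic_add(1, &mThreadPoolSeq); /* mThreadPoolSeq starts at 1 */
    String8 name;
    name.appendFormat("Binder_%X", s); /* e.g. "Binder_1", "Binder_2", ... */
    return name;
}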
5 IPCThreadState's interaction with the driver layer
IPCThreadState::self()->joinThreadPool();
5.1 The joinThreadPool method
As required by Android, every newly created binder thread calls this function. Its implementation:
void IPCThreadState::joinThreadPool(bool isMain)
{
/* if isMain is true this thread enters as a main looper; the command is only written into mOut here and sent to the driver later */
mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
set_sched_policy(mMyThreadId, SP_FOREGROUND);
status_t result;
do {
processPendingDerefs(); // release BBinder objects whose dereferences are pending (dead objects)
// now get the next command to be processed, waiting if necessary
result = getAndExecuteCommand();
if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
abort();
}
if(result == TIMED_OUT && !isMain) {
break;
}
} while (result != -ECONNREFUSED && result != -EBADF);
mOut.writeInt32(BC_EXIT_LOOPER);
talkWithDriver(false);
}
5.2 The processPendingDerefs and getAndExecuteCommand methods
There are two important methods here, processPendingDerefs and getAndExecuteCommand. Taking them in turn: processPendingDerefs releases references on BBinder objects that are pending dereference (i.e. dead objects), implemented as follows:
void IPCThreadState::processPendingDerefs()
{
if (mIn.dataPosition() >= mIn.dataSize()) {
size_t numPending = mPendingWeakDerefs.size();
if (numPending > 0) {
for (size_t i = 0; i < numPending; i++) {
RefBase::weakref_type* refs = mPendingWeakDerefs[i];
refs->decWeak(mProcess.get());
}
mPendingWeakDerefs.clear();
}
numPending = mPendingStrongDerefs.size();
if (numPending > 0) {
for (size_t i = 0; i < numPending; i++) {
BBinder* obj = mPendingStrongDerefs[i];
obj->decStrong(mProcess.get());
}
mPendingStrongDerefs.clear();
}
}
}
getAndExecuteCommand sends commands and reads incoming requests:
status_t IPCThreadState::getAndExecuteCommand()
{
status_t result;
int32_t cmd;
result = talkWithDriver();
if (result >= NO_ERROR) {
size_t IN = mIn.dataAvail();
if (IN < sizeof(int32_t)) return result;
cmd = mIn.readInt32();
result = executeCommand(cmd); // handle the message
set_sched_policy(mMyThreadId, SP_FOREGROUND);
}
return result;
}
In other words, whether you go through ProcessState::self()->startThreadPool() or IPCThreadState::self()->joinThreadPool(), everything essentially boils down to IPCThreadState::self()->joinThreadPool(), which ends up calling talkWithDriver and thereby interacting with the binder driver.
6 Summary
- Binder itself supports multithreading and synchronous operation
- So far there are two threads serving the service: one started from the startThreadPool() thread pool, and the main thread which called joinThreadPool() itself
- The relationship between Binder communication logic and business logic: Binder is only the communication mechanism; the business logic can sit on top of binder or on any other IPC mechanism
- Why Binder feels complex: through its wrappers, Android cleverly fuses the business layer and the communication layer together
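Putting it all together, the call chains covered in this chapter are roughly:
Main thread:
    ProcessState::self()             -> open("/dev/binder"), mmap()
    defaultServiceManager()          -> interface_cast<IServiceManager>(new BpBinder(0)) -> BpServiceManager
    BpServiceManager::addService()   -> BpBinder::transact() -> IPCThreadState::transact()
                                     -> writeTransactionData() + waitForResponse() -> talkWithDriver() -> ioctl(BINDER_WRITE_READ)
    IPCThreadState::joinThreadPool() -> loop: getAndExecuteCommand() -> talkWithDriver() / executeCommand()
Pool thread (from ProcessState::startThreadPool()):
    spawnPooledThread(true) -> PoolThread::threadLoop() -> IPCThreadState::joinThreadPool(true) -> same loop as above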