Continued from the previous part...
This article pulls together existing material from around the web plus my own understanding; corrections are welcome.
3. Running MediaService
From the analysis in 2.6 we know that defaultServiceManager() returns a BpServiceManager,
and that after MediaPlayerService is instantiated, it calls BpServiceManager's addService function.
During this process, service_manager receives the addService request and records the corresponding information in a service list it maintains.
At this point we can see that service_manager has a binder_loop function (covered in the latter part of 2.8) dedicated to receiving requests from the binder driver. Although service_manager does not derive from BnServiceManager, it clearly fulfills BnServiceManager's role.
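The bookkeeping described above (addService stores a name-to-handle entry, getService looks it up later) can be sketched as a toy registry. The real service_manager is C code keeping a linked list of svcinfo structs; the class, names, and handle values below are illustrative only.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Toy stand-in for service_manager's service list: a name -> handle map.
// addService records the entry; getService looks it up for a client.
class ServiceRegistry {
public:
    bool addService(const std::string& name, uint32_t handle) {
        // refuse duplicate names, as the real service list does
        return services_.emplace(name, handle).second;
    }
    int64_t getService(const std::string& name) const {
        auto it = services_.find(name);
        return it == services_.end() ? -1 : static_cast<int64_t>(it->second);
    }
private:
    std::map<std::string, uint32_t> services_;
};
```

MediaPlayerService registering itself as "media.player" corresponds to one addService call; a client's later getService query returns the same handle.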
Likewise, the MediaPlayerService we created, i.e. a BnMediaPlayerService, should have the same two capabilities:
1. open the binder device;
2. run a looper of its own and sit waiting for requests.
However, MediaPlayerService's constructor contains no explicit opening of the binder device, so we look at its parent classes, i.e. what the BnXXX side does.
3.1 MediaPlayerService opens the binder device
Path: frameworks/av/media/libmediaplayerservice/MediaPlayerService.h
MediaPlayerService derives from BnMediaPlayerService, which in turn derives from BnInterface<IMediaPlayerService>, so next we chase BnMediaPlayerService and BnInterface.
Path: frameworks/native/include/binder/IInterface.h
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
virtual sp<IInterface> queryLocalInterface(const String16& _descriptor);
virtual const String16& getInterfaceDescriptor() const;
protected:
virtual IBinder* onAsBinder();
};
Substituting INTERFACE = IMediaPlayerService, this instantiates to:
class BnInterface : public IMediaPlayerService, public BBinder
{
...
}
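To make the substitution concrete, here is a minimal, compilable stand-in showing what the BnInterface template expands to. The class bodies are placeholders, not the real AOSP implementations.

```cpp
#include <cassert>
#include <type_traits>

// Minimal stand-ins mirroring the AOSP names; bodies are placeholders.
struct BBinder { virtual ~BBinder() {} };
struct IMediaPlayerService { virtual ~IMediaPlayerService() {} };

template <typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder {};

// In AOSP this declaration is generated by a macro; roughly:
class BnMediaPlayerService : public BnInterface<IMediaPlayerService> {};
```

So BnMediaPlayerService ends up with both bases: the IMediaPlayerService methods it must implement, and the BBinder machinery for receiving transactions.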
So what is BBinder? Is it the counterpart of BpBinder?
BBinder::BBinder() : mExtras(nullptr)
{
// the Bn side that pairs with BpXXX
// yet nothing in here opens the binder device
}
But every service ends up with a binder device fd.
...
Going back to the beginning of main_mediaserver: ProcessState had already opened the binder device there.
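The "one open per process" idea behind ProcessState can be sketched as a classic once-only singleton. openBinderOnce() below is a fake stand-in for the real open("/dev/binder") plus mmap done in the ProcessState constructor; the fd value is invented.

```cpp
#include <cassert>

// Once-per-process singleton sketch: no matter how many times self() is
// called, the "driver" is opened exactly once.
class ProcessState {
public:
    static ProcessState* self() {
        static ProcessState instance;  // constructed on first use, once
        return &instance;
    }
    int driverFd() const { return fd_; }
    static int openCount;              // exposed only for this demo
private:
    ProcessState() : fd_(openBinderOnce()) {}
    static int openBinderOnce() { ++openCount; return 3; /* fake fd */ }
    int fd_;
};
int ProcessState::openCount = 0;
```

Every Bn or Bp object in the process then shares that single fd instead of opening the device again.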
3.2 The looper
Opening the binder device is a per-process affair; opening it once per process is enough.
At the very start of main in chapter 2, there is the message-loop (looper) pair of calls:
>> ProcessState::self()->startThreadPool();
>> IPCThreadState::self()->joinThreadPool();
First, startThreadPool:
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
void ProcessState::spawnPooledThread(bool isMain)
{
if (mThreadPoolStarted) {
...
sp<Thread> t = new PoolThread(isMain);
// isMain is true here: create the pool thread and start it running
t->run(name.string());
}
}
// PoolThread derives from the Thread class
class PoolThread : public Thread
{
public:
explicit PoolThread(bool isMain)
: mIsMain(isMain)
{
}
...
Path: system/core/libutils/Threads.cpp
Thread::Thread(bool canCallJava)
: mCanCallJava(canCallJava),
mThread(thread_id_t(-1)),
mLock("Thread::mLock"),
mStatus(NO_ERROR),
mExitPending(false), mRunning(false)
{
}
// At this point no thread exists yet; PoolThread::run is then called, which actually runs the base class's run
status_t Thread::run(const char* name, int32_t priority, size_t stack)
{
LOG_ALWAYS_FATAL_IF(name == nullptr, "thread name not provided to Thread::run");
Mutex::Autolock _l(mLock);
...
mStatus = NO_ERROR;
mExitPending = false;
mThread = thread_id_t(-1);
...
bool res;
if (mCanCallJava) {
>> res = createThreadEtc(_threadLoop,
this, name, priority, stack, &mThread);
} else {
res = androidCreateRawThreadEtc(_threadLoop,
this, name, priority, stack, &mThread);
}
...
// At last, a thread is created inside run. Meanwhile, back in main, the main thread continues with:
IPCThreadState::self()->joinThreadPool();
But first let's chase _threadLoop:
int Thread::_threadLoop(void* user)
{
Thread* const self = static_cast<Thread*>(user);
sp<Thread> strong(self->mHoldSelf);
wp<Thread> weak(strong);
self->mHoldSelf.clear();
#if defined(__ANDROID__)
// this is very useful for debugging with gdb
self->mTid = gettid();
#endif
    do {
        bool result;
        if (first) {
            first = false;
            ...
            if (result && !self->exitPending()) {
                // Binder threads (and maybe others) rely on threadLoop
                // running at least once after a successful ::readyToRun()
                // (unless, of course, the thread has already been asked to exit
                // at that point).
                // This is because threads are essentially used like this:
                //   (new ThreadSubclass())->run();
                // The caller therefore does not retain a strong reference to
                // the thread and the thread would simply disappear after the
                // successful ::readyToRun() call instead of entering the
                // threadLoop at least once.
                result = self->threadLoop();
                // calls its own threadLoop
            }
        } else {
            result = self->threadLoop();
        }
Since the thread object we created is a PoolThread, this invokes PoolThread's threadLoop function.
// This is a new thread, so a new IPCThreadState object is necessarily created for it (thread-local storage, TLS)
virtual bool threadLoop()
{
IPCThreadState::self()->joinThreadPool(mIsMain);
return false;
}
const bool mIsMain;
};
// Both the main thread and the worker thread call joinThreadPool, so we follow it
void IPCThreadState::joinThreadPool(bool isMain)
{
LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());
>> mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
status_t result;
do {
processPendingDerefs();
// now get the next command to be processed, waiting if necessary
result = getAndExecuteCommand();
...
} while (result != -ECONNREFUSED && result != -EBADF);
LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%d\n",
(void*)pthread_self(), getpid(), result);
>> mOut.writeInt32(BC_EXIT_LOOPER);
talkWithDriver(false);
}
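The shape of joinThreadPool — announce yourself to the driver, then loop "read a command, execute it" until the driver reports the fd is dead — can be mimicked with a toy driver queue. The deque stands in for talkWithDriver(), and the CMD_* values are invented, not the real BC_/BR_ protocol constants.

```cpp
#include <cassert>
#include <cerrno>
#include <deque>

// Toy model of the joinThreadPool command loop.
enum Cmd { CMD_WORK, CMD_DEAD };

int joinThreadPool(std::deque<Cmd>& driver, int& executed) {
    // real code: mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    int result = 0;
    do {
        if (driver.empty()) { result = -ECONNREFUSED; break; }
        Cmd cmd = driver.front();              // getAndExecuteCommand():
        driver.pop_front();                    //   talkWithDriver() ...
        if (cmd == CMD_DEAD) result = -EBADF;  // driver went away
        else ++executed;                       //   ... executeCommand(cmd)
    } while (result != -ECONNREFUSED && result != -EBADF);
    // real code: mOut.writeInt32(BC_EXIT_LOOPER);
    return result;
}
```

Like the real loop, it only exits on -ECONNREFUSED or -EBADF, i.e. when the binder fd is no longer usable.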
So here is the loop; but it seems two threads are both running it?
First, getAndExecuteCommand:
status_t IPCThreadState::getAndExecuteCommand()
{
status_t result;
int32_t cmd;
result = talkWithDriver();
if (result >= NO_ERROR) {
size_t IN = mIn.dataAvail();
if (IN < sizeof(int32_t)) return result;
>> cmd = mIn.readInt32();
IF_LOG_COMMANDS() {
alog << "Processing top-level Command: "
<< getReturnString(cmd) << endl;
}
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount++;
if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs == 0) {
mProcess->mStarvationStartTimeMs = uptimeMillis();
}
pthread_mutex_unlock(&mProcess->mThreadCountLock);
>> result = executeCommand(cmd);
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount--;
if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs != 0) {
int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
if (starvationTimeMs > 100) {
ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
mProcess->mMaxThreads, starvationTimeMs);
}
mProcess->mStarvationStartTimeMs = 0;
}
pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
pthread_mutex_unlock(&mProcess->mThreadCountLock);
}
return result;
}
getAndExecuteCommand calls into executeCommand, so we keep chasing:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
BBinder* obj;
RefBase::weakref_type* refs;
status_t result = NO_ERROR;
...
case BR_TRANSACTION:
{
binder_transaction_data tr;
result = mIn.read(&tr, sizeof(tr));
// a command has arrived; it parses as BR_TRANSACTION, and the payload that follows is read
...
        if (tr.target.ptr) {
            if (reinterpret_cast<RefBase::weakref_type*>(
                    tr.target.ptr)->attemptIncStrong(this)) {
                // a BBinder is used here
                error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                        &reply, tr.flags);
                reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
            } else {
                error = UNKNOWN_TRANSACTION;
            }
        } else {
            error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
        }
Next, chase BBinder.
Path: frameworks/native/libs/binder/Binder.cpp
status_t BBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
data.setDataPosition(0);
...
// it simply calls its own onTransact function
err = onTransact(code, data, reply, flags);
break;
}
BnMediaPlayerService derives from BBinder, and since onTransact is virtual, transact dispatches to the derived class's onTransact.
Finally, a look at onTransact (BBinder's default version is shown here):
status_t BBinder::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t /*flags*/)
{
// As mentioned at the start of 3.1, BnMediaPlayerService derives from BBinder and IMediaPlayerService;
// its own onTransact override distinguishes each IMediaPlayerService function by transaction code.
// BBinder's default onTransact, shown here, only handles generic codes such as SHELL_COMMAND_TRANSACTION.
switch (code) {
...
case SHELL_COMMAND_TRANSACTION: {
int in = data.readFileDescriptor();
int out = data.readFileDescriptor();
int err = data.readFileDescriptor();
int argc = data.readInt32();
Vector<String16> args;
for (int i = 0; i < argc && data.dataAvail() > 0; i++) {
args.add(data.readString16());
}
sp<IShellCallback> shellCallback = IShellCallback::asInterface(
data.readStrongBinder());
sp<IResultReceiver> resultReceiver = IResultReceiver::asInterface(
data.readStrongBinder());
...
if (resultReceiver != NULL) {
resultReceiver->send(INVALID_OPERATION);
}
}
...
default:
return UNKNOWN_TRANSACTION;
}
}
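The Bn-side dispatch pattern — onTransact switches on the transaction code and forwards to the virtual function the concrete service implements — can be sketched like this. The codes, names, and the std::string standing in for a reply Parcel are all illustrative, not the real IMediaPlayerService transaction codes.

```cpp
#include <cassert>
#include <string>

enum { CREATE = 1 };
constexpr int NO_ERROR = 0;
constexpr int UNKNOWN_TRANSACTION = -1;

struct BnExampleService {
    virtual ~BnExampleService() {}
    virtual std::string create() = 0;      // implemented by the real service
    int onTransact(int code, std::string* reply) {
        switch (code) {
        case CREATE:
            *reply = create();             // unmarshal args, call impl, marshal reply
            return NO_ERROR;
        default:
            return UNKNOWN_TRANSACTION;    // like BBinder's default handling
        }
    }
};

struct ExampleService : BnExampleService { // plays the MediaPlayerService role
    std::string create() override { return "player"; }
};
```

The thread-pool loop delivers the code to onTransact; the switch is what connects a wire-level command number to a virtual C++ call on the derived service.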
Summary: by now we can see that the BnXXX side's onTransact receives commands and dispatches them to the derived class's functions, which do the actual work.
One peculiarity: after startThreadPool and joinThreadPool there really are two threads, the main thread and a worker thread, both running the message loop, and both with isMain set to true. This is stock Google behavior. Presumably the idea is that one thread might get overloaded, so two threads share the work; that explanation seems reasonable.
4. How MediaPlayerService is used
How does a MediaPlayerClient interact with MediaPlayerService? To use MediaPlayerService, a BpMediaPlayerService must be created first.
Path: frameworks/av/media/libmedia/IMediaDeathNotifier.cpp
>> /*static*/ const sp<IMediaPlayerService> IMediaDeathNotifier::getMediaPlayerService()
{
if (sMediaPlayerService == 0) {
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
// ask ServiceManager for the service's information; the result comes back in binder
do {
binder = sm->getService(String16("media.player"));
if (binder != 0) {
break;
}
ALOGW("Media player service not published, waiting...");
usleep(500000); // 0.5 s
} while (true);
...
binder->linkToDeath(sDeathNotifier);
// interface_cast converts this binder into a BpMediaPlayerService.
// The binder itself exists only to talk to the binder device; it has nothing at all to do with IMediaPlayerService's functionality.
sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
}
ALOGE_IF(sMediaPlayerService == 0, "no media player service!?");
return sMediaPlayerService;
}
// This is a Bridge pattern: BpMediaPlayerService uses this binder to communicate with BnMediaPlayerService
Binder is really just an interface for talking to the binder device; the upper-layer IMediaPlayerService merely treats it like a socket. The binder and the upper-level IMediaPlayerService class are easy to confuse.
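What interface_cast<IXxx>(binder) boils down to can be sketched as: ask the binder for a local interface; if it has one (a Bn object in the same process), use it directly, otherwise wrap the remote binder in a BpXxx proxy. Everything below is a simplified stand-in for the real asInterface machinery (which uses sp<> and descriptor strings).

```cpp
#include <cassert>
#include <memory>
#include <string>

struct IExample { virtual ~IExample() {} virtual std::string who() = 0; };

// queryLocalInterface() returns non-null only when the binder is a local
// Bn object living in this process.
struct IBinder {
    virtual ~IBinder() {}
    virtual IExample* queryLocalInterface() { return nullptr; }
};

struct BpExample : IExample {           // proxy: the real one marshals calls over binder
    explicit BpExample(IBinder*) {}
    std::string who() override { return "proxy"; }
};

struct BnExample : IExample, IBinder {  // local implementation
    IExample* queryLocalInterface() override { return this; }
    std::string who() override { return "local"; }
};

// Roughly what interface_cast<IExample> / IExample::asInterface does:
std::shared_ptr<IExample> interface_cast_example(IBinder* binder) {
    if (binder == nullptr) return nullptr;
    if (IExample* local = binder->queryLocalInterface())
        return std::shared_ptr<IExample>(local, [](IExample*) {});  // non-owning
    return std::make_shared<BpExample>(binder);
}
```

This is why the binder handed back by getService can be "converted" into a BpMediaPlayerService: the cast does not change the binder, it wraps it.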
Note: the native-layer usage (supplementary material)
getMediaPlayerService is C++-level code:
int main()
{
    getMediaPlayerService();
    // Can calling this function directly get us a BpMediaPlayerService?
    // No. Why not? Because the binder driver has not been opened yet! In a Java
    // application, however, Google has already wrapped all of this for you.
    // So pure native-layer code must do something like the following:
    sp<ProcessState> proc(ProcessState::self());
    // This line is not strictly required, since many code paths need ProcessState
    // and will create it automatically.
    getMediaPlayerService();
    // A message loop is still needed; otherwise, if the Bn side sends a
    // notification, how would you ever receive it?
    ProcessState::self()->startThreadPool();
    // Whether the main thread also runs the message loop is up to you. Typically it
    // waits for messages from other sources instead, e.g. commands arriving over a
    // socket, and uses them to control MediaPlayerService.
}
5. Summary
That completes the Binder analysis. Having read it, you should be able to do the following:
>> When writing your own Service, know how the system ends up calling your functions: yes, two threads sit there continuously pulling commands from the binder device and invoking your functions. So it is a multithreading problem.
>> When chasing a bug, know how a function called on the Client side finally reaches the remote Service: once the Client-side trace is done, switch to the Service and look at the corresponding function. The calls are synchronous, i.e. a Client call blocks until the Service returns.