Binder Internals from a Linux Perspective (Part 3)
Above the driver layer sits the C++ application layer. Android provides the binder library so that processes can perform IPC conveniently without talking to the driver directly.
The Binder library
As usual there are two sides, client and server. The server side corresponds to BnInterface and the client side to BpInterface, which map to binder_node and binder_ref in the driver respectively.
# frameworks/native/include/binder/
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
    virtual sp<IInterface>      queryLocalInterface(const String16& _descriptor);
    virtual const String16&     getInterfaceDescriptor() const;

protected:
    typedef INTERFACE           BaseInterface;
    virtual IBinder*            onAsBinder();
};

template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
    explicit                    BpInterface(const sp<IBinder>& remote);

protected:
    typedef INTERFACE           BaseInterface;
    virtual IBinder*            onAsBinder();
};
Note that BnInterface inherits from BBinder, while BpInterface inherits from BpRefBase. Let's look at BBinder and BpRefBase next.
# Binder.h
class BBinder : public IBinder
{
public:
    .....
    virtual const String16& getInterfaceDescriptor() const;

    // NOLINTNEXTLINE(google-default-arguments)
    virtual status_t    transact(   uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0) final;

protected:
    virtual             ~BBinder();

    // NOLINTNEXTLINE(google-default-arguments)
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
    .....
};
First, BBinder inherits from IBinder, the interface that provides cross-process communication. Two of its methods matter most: transact and onTransact. A service implements onTransact when it subclasses BBinder; when the binder driver delivers a client request, the local Binder object's transact() is invoked, and transact() in turn calls onTransact(), which is where the service's actual functionality runs. A quick sketch makes the division of labor concrete.
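Here is a minimal sketch of a hypothetical service; IDemo, BnDemo, DemoService and the ADD transaction code are illustrative names, not part of AOSP:

// Hypothetical interface with one method that adds two ints (illustrative only).
class IDemo : public IInterface {
public:
    DECLARE_META_INTERFACE(Demo)  // pairs with IMPLEMENT_META_INTERFACE in a .cpp
    enum { ADD = IBinder::FIRST_CALL_TRANSACTION };
    virtual int32_t add(int32_t a, int32_t b) = 0;
};

class BnDemo : public BnInterface<IDemo> {
protected:
    // BBinder::transact() lands here after the driver delivers a client request.
    status_t onTransact(uint32_t code, const Parcel& data,
                        Parcel* reply, uint32_t flags = 0) override {
        switch (code) {
            case ADD: {
                CHECK_INTERFACE(IDemo, data, reply);
                int32_t a = data.readInt32();
                int32_t b = data.readInt32();
                reply->writeInt32(add(a, b));
                return NO_ERROR;
            }
            default:  // let BBinder handle PING_TRANSACTION, INTERFACE_TRANSACTION, ...
                return BBinder::onTransact(code, data, reply, flags);
        }
    }
};

class DemoService : public BnDemo {
public:
    int32_t add(int32_t a, int32_t b) override { return a + b; }  // the actual service logic
};

With that picture in mind, next is BpRefBase: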
class BpRefBase : public virtual RefBase
{
protected:
    explicit                BpRefBase(const sp<IBinder>& o);
    virtual                 ~BpRefBase();
    virtual void            onFirstRef();
    virtual void            onLastStrongRef(const void* id);
    virtual bool            onIncStrongAttempted(uint32_t flags, const void* id);

    inline IBinder*         remote()                { return mRemote; }
    inline IBinder*         remote() const          { return mRemote; }

private:
                            BpRefBase(const BpRefBase& o);
    BpRefBase&              operator=(const BpRefBase& o);

    IBinder* const          mRemote;
    RefBase::weakref_type*  mRefs;
    std::atomic<int32_t>    mState;
};
BpRefBase derives from RefBase, the C++ reference-counting base class. Among its members is mRemote, which in practice points to a BpBinder object.
# BpBinder.h
class BpBinder : public IBinder
{
    ......
public:
    static BpBinder*    create(int32_t handle);

    virtual status_t    transact(   uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0) final;

    int32_t             handle() const;
    ......
private:
    const int32_t       mHandle;
};
The most important member of BpBinder is mHandle. Every binder_ref in the driver has a handle, and BpBinder's mHandle corresponds to that handle. When BpBinder performs IPC via transact(), mHandle locates the matching binder_ref, which leads to the binder_node and finally to the target service.
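Staying with the hypothetical IDemo from above, the client-side proxy would look roughly like this (a sketch, not AOSP code); the proxy never touches the handle itself, it just calls remote()->transact() and lets the mHandle inside BpBinder do the routing:

class BpDemo : public BpInterface<IDemo> {
public:
    explicit BpDemo(const sp<IBinder>& impl) : BpInterface<IDemo>(impl) {}

    int32_t add(int32_t a, int32_t b) override {
        Parcel data, reply;
        data.writeInterfaceToken(IDemo::getInterfaceDescriptor());
        data.writeInt32(a);
        data.writeInt32(b);
        // remote() is the mRemote BpBinder; its mHandle selects the
        // binder_ref -> binder_node chain that ends at the service's BBinder.
        remote()->transact(ADD, data, &reply);
        return reply.readInt32();
    }
};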
ServiceManager
The ServiceManager discussed here is the C++ ServiceManager; its job is to manage all services. Registering and looking up a service is a good end-to-end illustration of the IPC flow.
ServiceManager startup
ServiceManager is started during system init from servicemanager.rc; its entry point is main() in main.cpp. This differs quite a bit from older releases.
# frameworks/native/cmds/servicemanager/main.cpp
int main(int argc, char** argv) {
#ifdef __ANDROID_RECOVERY__
    android::base::InitLogging(argv, android::base::KernelLogger);
#endif

    if (argc > 2) {
        LOG(FATAL) << "usage: " << argv[0] << " [binder driver]";
    }
    const char* driver = argc == 2 ? argv[1] : "/dev/binder";

    LOG(INFO) << "Starting sm instance on " << driver;

    sp<ProcessState> ps = ProcessState::initWithDriver(driver); // open the binder driver
    ps->setThreadPoolMaxThreadCount(0);
    ps->setCallRestriction(ProcessState::CallRestriction::FATAL_IF_NOT_ONEWAY);

    sp<ServiceManager> manager = sp<ServiceManager>::make(std::make_unique<Access>()); // create the ServiceManager
    if (!manager->addService("manager", manager, false /*allowIsolated*/, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT).isOk()) {
        LOG(ERROR) << "Could not self register servicemanager";
    } // add itself to the manager as a service

    IPCThreadState::self()->setTheContextObject(manager); // store manager as IPCThreadState's global context object
    ps->becomeContextManager(); // register this process as the binder driver's context manager

    sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);

    BinderCallback::setupTo(looper);
    ClientCallbackCallback::setupTo(looper, manager);

#ifndef VENDORSERVICEMANAGER
    if (!SetProperty("servicemanager.ready", "true")) {
        LOG(ERROR) << "Failed to set servicemanager ready property";
    }
#endif

    // enter the poll loop
    while(true) {
        looper->pollAll(-1);
    }

    // should not be reached
    return EXIT_FAILURE;
}
ProcessState
Let's look at ProcessState first; as the name says, it represents per-process state. main() opens the binder driver via initWithDriver("/dev/binder").
# frameworks/native/libs/binder/ProcessState.cpp
sp<ProcessState> ProcessState::initWithDriver(const char* driver)
{
    return init(driver, true /*requireDefault*/);
}

sp<ProcessState> ProcessState::init(const char *driver, bool requireDefault)
{
    if (driver == nullptr) {
        std::lock_guard<std::mutex> l(gProcessMutex);
        if (gProcess) {
            verifyNotForked(gProcess->mForked);
        }
        return gProcess;
    }

    [[clang::no_destroy]] static std::once_flag gProcessOnce;
    std::call_once(gProcessOnce, [&](){
        if (access(driver, R_OK) == -1) {
            ALOGE("Binder driver %s is unavailable. Using /dev/binder instead.", driver);
            driver = "/dev/binder";
        }

        // we must install these before instantiating the gProcess object,
        // otherwise this would race with creating it, and there could be the
        // possibility of an invalid gProcess object forked by another thread
        // before these are installed
        int ret = pthread_atfork(ProcessState::onFork, ProcessState::parentPostFork,
                                 ProcessState::childPostFork);
        LOG_ALWAYS_FATAL_IF(ret != 0, "pthread_atfork error %s", strerror(ret));

        std::lock_guard<std::mutex> l(gProcessMutex);
        gProcess = sp<ProcessState>::make(driver); // construct the ProcessState singleton
    });

    if (requireDefault) {
        // Detect if we are trying to initialize with a different driver, and
        // consider that an error. ProcessState will only be initialized once above.
        LOG_ALWAYS_FATAL_IF(gProcess->getDriverName() != driver,
                            "ProcessState was already initialized with %s,"
                            " can't initialize with %s.",
                            gProcess->getDriverName().c_str(), driver);
    }

    verifyNotForked(gProcess->mForked);
    return gProcess;
}
# StrongPointer.h
template <typename T>
template <typename... Args>
sp<T> sp<T>::make(Args&&... args) {
    T* t = new T(std::forward<Args>(args)...);
    sp<T> result;
    result.m_ptr = t;
    t->incStrong(t);
    return result;
}
The binder device setup is triggered by **sp<ProcessState>::make(driver)**. This construction path goes through the template in StrongPointer.h, which takes the first strong reference at the same time as the new, so the raw pointer is never exposed without a reference.
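As a quick illustration (Foo is a made-up class), any RefBase-derived type can be built this way:

#include <utils/RefBase.h>

using namespace android;

struct Foo : public RefBase {
    explicit Foo(int v) : value(v) {}
    int value;
};

// new Foo(42) plus the first incStrong() happen in one step, so there is
// no window where the object exists without a strong reference.
sp<Foo> foo = sp<Foo>::make(42);

Now, the ProcessState constructor itself: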
ProcessState::ProcessState(const char* driver)
      : mDriverName(String8(driver)),
        mDriverFD(-1),
        mVMStart(MAP_FAILED),
        mThreadCountLock(PTHREAD_MUTEX_INITIALIZER),
        mThreadCountDecrement(PTHREAD_COND_INITIALIZER),
        mExecutingThreadsCount(0),
        mWaitingForThreads(0),
        mMaxThreads(DEFAULT_MAX_BINDER_THREADS),
        mCurrentThreads(0),
        mKernelStartedThreads(0),
        mStarvationStartTimeMs(0),
        mForked(false),
        mThreadPoolStarted(false),
        mThreadPoolSeq(1),
        mCallRestriction(CallRestriction::NONE) {
    // unlike older releases, the open() now happens in the constructor body
    // rather than in the initializer list
    base::Result<int> opened = open_driver(driver);

    if (opened.ok()) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        // BINDER_VM_SIZE = (1*1024*1024) - (4096*2): the default buffer of 1MB - 8KB
        mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE,
                        opened.value(), 0);
        if (mVMStart == MAP_FAILED) {
            close(opened.value());
            // *sigh*
            opened = base::Error()
                     << "Using " << driver << " failed: unable to mmap transaction memory.";
            mDriverName.clear();
        }
    }

#ifdef __ANDROID__
    LOG_ALWAYS_FATAL_IF(!opened.ok(), "Binder driver '%s' could not be opened. Terminating: %s",
                        driver, opened.error().message().c_str());
#endif

    if (opened.ok()) {
        mDriverFD = opened.value();
    }
}
static base::Result<int> open_driver(const char* driver) {
    int fd = open(driver, O_RDWR | O_CLOEXEC); // open the binder device
    if (fd < 0) {
        return base::ErrnoError() << "Opening '" << driver << "' failed";
    }

    int vers = 0;
    status_t result = ioctl(fd, BINDER_VERSION, &vers); // query the binder protocol version
    if (result == -1) {
        close(fd);
        return base::ErrnoError() << "Binder ioctl to obtain version failed";
    }
    if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
        close(fd);
        return base::Error() << "Binder driver protocol(" << vers
                             << ") does not match user space protocol("
                             << BINDER_CURRENT_PROTOCOL_VERSION
                             << ")! ioctl() return value: " << result;
    }

    size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
    result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads); // set the max number of binder threads
    if (result == -1) {
        ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
    }

    uint32_t enable = DEFAULT_ENABLE_ONEWAY_SPAM_DETECTION;
    result = ioctl(fd, BINDER_ENABLE_ONEWAY_SPAM_DETECTION, &enable); // new in Android 12: toggle oneway spam detection
    if (result == -1) {
        ALOGE_IF(ProcessState::isDriverFeatureEnabled(
                     ProcessState::DriverFeature::ONEWAY_SPAM_DETECTION),
                 "Binder ioctl to enable oneway spam detection failed: %s", strerror(errno));
    }

    return fd;
}
So opening the binder device and initializing ProcessState happen exactly once: std::call_once turns the initialization into a process-wide singleton.
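The same pattern in isolation, with made-up names (Demo, gOnce, gInstance standing in for ProcessState, gProcessOnce, gProcess), looks like this:

#include <mutex>

struct Demo { /* ... */ };

static std::once_flag gOnce;
static Demo* gInstance = nullptr;  // plays the role of gProcess

Demo* demoInstance() {
    // std::call_once guarantees the lambda runs exactly once even if several
    // threads race here; every later caller returns the cached instance.
    std::call_once(gOnce, [] { gInstance = new Demo(); });
    return gInstance;
}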
manager->addService
Back in main(), the ServiceManager is created the same way, via sp<ServiceManager>::make(std::make_unique<Access>()), and manager->addService then registers the manager itself as a service. ServiceManager inherits from BnServiceManager; the Bn prefix tells us this is the local Binder object, i.e. the object a registering service ultimately talks to.
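For contrast, an ordinary native service cannot call addService on a local object; it goes through the Binder proxy returned by defaultServiceManager(). A sketch, reusing the hypothetical DemoService from earlier (the name "demo" is illustrative):

#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>

using namespace android;

int main() {
    // Registration is itself a binder transaction to servicemanager.
    defaultServiceManager()->addService(String16("demo"),
                                        sp<DemoService>::make());

    // Serve incoming transactions on this process's binder threads.
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
    return 0;
}

servicemanager itself skips all of that: it already is the manager, so it simply inserts itself into its own service map with a plain in-process call.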
IPCThreadState::self()->setTheContextObject(manager)
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS.load(std::memory_order_acquire)) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }

    // Racey, heuristic test for simultaneous shutdown.
    if (gShutdown.load(std::memory_order_relaxed)) {
        ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
        return nullptr;
    }

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS.load(std::memory_order_relaxed)) {
        int key_create_value = pthread_key_create(&gTLS, threadDestructor);
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
                  strerror(key_create_value));
            return nullptr;
        }
        gHaveTLS.store(true, std::memory_order_release);
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
IPCThreadState::IPCThreadState()
      : mProcess(ProcessState::self()),
        mServingStackPointer(nullptr),
        mServingStackPointerGuard(nullptr),
        mWorkSource(kUnsetWorkSource),
        mPropagateWorkSource(false),
        mIsLooper(false),
        mIsFlushing(false),
        mStrictModePolicy(0),
        mLastTransactionBinderFlags(0),
        mCallRestriction(mProcess->mCallRestriction) {
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
TLS (thread-local storage) is used to check whether an instance already exists for the calling thread, which makes IPCThreadState a per-thread singleton. Its ProcessState comes from ProcessState::self(), which goes through ProcessState::init() to fetch the process-wide singleton. The constructor also sets the capacities of mIn and mOut, the two Parcels that buffer data during IPC. Finally, setTheContextObject() stores the ServiceManager as IPCThreadState's the_context_object.
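The same pthread TLS idiom in miniature (all names here are illustrative): one instance per thread, created lazily and destroyed by the key's destructor when the thread exits.

#include <pthread.h>

struct ThreadState { int callCount = 0; };

static pthread_key_t gKey;
static pthread_once_t gKeyOnce = PTHREAD_ONCE_INIT;

static void destroyState(void* st) { delete static_cast<ThreadState*>(st); }
static void makeKey() { pthread_key_create(&gKey, destroyState); }

ThreadState* selfState() {
    pthread_once(&gKeyOnce, makeKey);
    auto* st = static_cast<ThreadState*>(pthread_getspecific(gKey));
    if (!st) {
        st = new ThreadState();
        pthread_setspecific(gKey, st);  // IPCThreadState does this in its constructor
    }
    return st;
}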
becomeContextManager()
bool ProcessState::becomeContextManager()
{
    AutoMutex _l(mLock);

    flat_binder_object obj {
        .flags = FLAT_BINDER_FLAG_TXN_SECURITY_CTX,
    };

    int result = ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR_EXT, &obj);

    // fallback to original method
    if (result != 0) { // if the extended call fails, fall back to the legacy way of registering as context manager
        android_errorWriteLog(0x534e4554, "121035042");

        int unused = 0;
        result = ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR, &unused);
    }

    if (result == -1) {
        ALOGE("Binder ioctl to become context manager failed: %s\n", strerror(errno));
    }

    return result == 0;
}
An ioctl marks the calling process as the driver's context manager. Here is the corresponding driver-side implementation:
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ......
    switch (cmd) {
    case BINDER_SET_CONTEXT_MGR_EXT: {
        struct flat_binder_object fbo;

        if (copy_from_user(&fbo, ubuf, sizeof(fbo))) {
            ret = -EINVAL;
            goto err;
        }
        ret = binder_ioctl_set_ctx_mgr(filp, &fbo);
        if (ret)
            goto err;
        break;
    }
    case BINDER_SET_CONTEXT_MGR:
        ret = binder_ioctl_set_ctx_mgr(filp, NULL);
        if (ret)
            goto err;
        break;
    ......
}
static int binder_ioctl_set_ctx_mgr(struct file *filp,
                                    struct flat_binder_object *fbo)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    struct binder_context *context = proc->context; // one binder_context per binder device
    struct binder_node *new_node;
    kuid_t curr_euid = current_euid();

    mutex_lock(&context->context_mgr_node_lock);
    if (context->binder_context_mgr_node) { // the context manager can only be set once
        pr_err("BINDER_SET_CONTEXT_MGR already set\n");
        ret = -EBUSY;
        goto out;
    }
    ret = security_binder_set_context_mgr(proc->cred); // check the caller is allowed to set the context manager
    if (ret < 0)
        goto out;
    if (uid_valid(context->binder_context_mgr_uid)) {
        if (!uid_eq(context->binder_context_mgr_uid, curr_euid)) {
            pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
                   from_kuid(&init_user_ns, curr_euid),
                   from_kuid(&init_user_ns,
                             context->binder_context_mgr_uid));
            ret = -EPERM;
            goto out;
        }
    } else {
        context->binder_context_mgr_uid = curr_euid;
    }
    new_node = binder_new_node(proc, fbo); // binder_new_node() creates a binder_node via binder_init_node_ilocked()
    if (!new_node) {
        ret = -ENOMEM;
        goto out;
    }
    binder_node_lock(new_node);
    new_node->local_weak_refs++;
    new_node->local_strong_refs++;
    new_node->has_strong_ref = 1;
    new_node->has_weak_ref = 1;
    context->binder_context_mgr_node = new_node; // install the new node as binder_context's binder_context_mgr_node
    binder_node_unlock(new_node);
    binder_put_node(new_node);
out:
    mutex_unlock(&context->context_mgr_node_lock);
    return ret;
}
BinderCallback
class BinderCallback : public LooperCallback {
public:
    static sp<BinderCallback> setupTo(const sp<Looper>& looper) {
        sp<BinderCallback> cb = sp<BinderCallback>::make();

        int binder_fd = -1;
        IPCThreadState::self()->setupPolling(&binder_fd); // register this thread as a looper and fetch the binder fd
        LOG_ALWAYS_FATAL_IF(binder_fd < 0, "Failed to setupPolling: %d", binder_fd);

        int ret = looper->addFd(binder_fd,
                                Looper::POLL_CALLBACK,
                                Looper::EVENT_INPUT,
                                cb,
                                nullptr /*data*/); // watch the fd via the Looper so events arrive in handleEvent
        LOG_ALWAYS_FATAL_IF(ret != 1, "Failed to add binder FD to Looper");

        return cb;
    }

    int handleEvent(int /* fd */, int /* events */, void* /* data */) override {
        IPCThreadState::self()->handlePolledCommands();
        return 1;  // Continue receiving callbacks.
    }
};
status_t IPCThreadState::setupPolling(int* fd)
{
    if (mProcess->mDriverFD < 0) {
        return -EBADF;
    }

    mOut.writeInt32(BC_ENTER_LOOPER); // queue BC_ENTER_LOOPER to tell the driver this thread enters the loop
    flushCommands();
    *fd = mProcess->mDriverFD;
    pthread_mutex_lock(&mProcess->mThreadCountLock);
    mProcess->mCurrentThreads++;
    pthread_mutex_unlock(&mProcess->mThreadCountLock);
    return 0;
}
So the thread first announces itself with BC_ENTER_LOOPER and then receives messages in handleEvent(), which hands the actual processing to IPCThreadState::self()->handlePolledCommands():
status_t IPCThreadState::handlePolledCommands()
{
    status_t result;

    do {
        result = getAndExecuteCommand(); // the function that does the real work
    } while (mIn.dataPosition() < mIn.dataSize());

    processPendingDerefs();
    flushCommands();
    return result;
}
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver(); // read from / write to the binder driver
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail(); // how much data arrived
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
            mProcess->mStarvationStartTimeMs == 0) {
            mProcess->mStarvationStartTimeMs = uptimeMillis();
        }
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        result = executeCommand(cmd); // dispatch the command returned by the driver

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        .....
    }
    return result;
}
At the end of main(), the while loop around looper->pollAll(-1) drives everything. setupTo() attaches BinderCallback and ClientCallbackCallback to the Looper; BinderCallback reaches the driver through handleEvent() -> handlePolledCommands() -> talkWithDriver(), and the individual commands are finally dispatched in executeCommand(). ClientCallbackCallback works along the same lines. A stripped-down equivalent of this Looper usage follows below.
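This is ordinary android::Looper usage; a condensed sketch (EchoCallback and runLoop are made-up names, and draining the fd is left out):

#include <utils/Looper.h>

using namespace android;

class EchoCallback : public LooperCallback {
    int handleEvent(int fd, int /*events*/, void* /*data*/) override {
        // Drain fd here; BinderCallback instead calls handlePolledCommands().
        return 1;  // keep receiving callbacks for this fd
    }
};

void runLoop(int fd) {
    sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);
    looper->addFd(fd, Looper::POLL_CALLBACK, Looper::EVENT_INPUT,
                  sp<EchoCallback>::make(), nullptr);
    while (true) looper->pollAll(-1);  // block until an fd becomes readable
}

At this point servicemanager has finished starting up.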
References
《Android系统源代码情景分析》 (Android System Source Code Scenario Analysis)
[Gityuan's blog](http://gityuan.com/2015/11/07/binder-start-sm/)