Android Binder Scenario-Based Source Code Analysis: Registering and Unregistering Binder Callbacks

In day-to-day development we often use Binder for cross-process communication. A common scenario is registering a Binder callback with the server side. For example:
IActivityManager has a pair of methods with which the client registers or unregisters an IProcessObserver Binder callback with the server process hosting AMS:

public void registerProcessObserver(android.app.IProcessObserver observer) throws android.os.RemoteException;
public void unregisterProcessObserver(android.app.IProcessObserver observer) throws android.os.RemoteException;

If the callback is implemented as an anonymous inner class, a member inner class, or a local class declared inside a method, and registered directly as a member, it can cause a memory leak.
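
A minimal sketch of the risky pattern is below (IProcessObserver and ActivityManager.getService() are hidden system APIs, so this compiles only against framework/internal sources, and the observer's exact callback set varies by release; LeakyActivity is a made-up name):

public class LeakyActivity extends android.app.Activity {
    // Anonymous inner class: it holds an implicit reference to the
    // enclosing LeakyActivity instance.
    private final android.app.IProcessObserver mObserver =
            new android.app.IProcessObserver.Stub() {
        @Override
        public void onForegroundActivitiesChanged(int pid, int uid, boolean fg) { }
        @Override
        public void onProcessDied(int pid, int uid) { }
    };

    @Override
    protected void onResume() {
        super.onResume();
        try {
            // While the server (and the Binder driver) reference mObserver's
            // proxy, a JNI global reference keeps mObserver -- and therefore
            // this Activity -- alive.
            android.app.ActivityManager.getService().registerProcessObserver(mObserver);
        } catch (android.os.RemoteException ignored) { }
    }
}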

The following sections analyze this in detail. For the layout of the relevant source directories, see http://gityuan.com/2015/10/31/binder-prepare/

1. Creating the Binder object

Create a Binder object of type IProcessObserver. Since IProcessObserver.Stub is abstract, in practice this is an anonymous (or named) subclass:

IProcessObserver mIProcessObserver = new android.app.IProcessObserver.Stub() { /* implement the callbacks */ };

android.app.IProcessObserver.Stub is a subclass of Binder. Let's look at Binder's constructor:

frameworks/base/core/java/android/os/Binder.java

/* mObject is used by native code, do not remove or rename */
//this field holds the pointer to the native JavaBBinderHolder
private long mObject;
public Binder() {
    init();
    //the log below says it plainly: anonymous inner classes, member inner
    //classes, and local classes in methods risk causing leaks unless they
    //are static
    if (FIND_POTENTIAL_LEAKS) {
        final Class<? extends Binder> klass = getClass();
        if ((klass.isAnonymousClass() || klass.isMemberClass() || klass.isLocalClass()) &&
            (klass.getModifiers() & Modifier.STATIC) == 0) {
            Log.w(TAG, "The following Binder class should be static or leaks might occur: " +
                  klass.getCanonicalName());
        }
    }
}

The corresponding JNI method:

frameworks/base/core/jni/android_util_Binder.cpp

static void android_os_Binder_init(JNIEnv* env, jobject obj)
{
    //create a JavaBBinderHolder and keep a raw pointer to it
    JavaBBinderHolder* jbh = new JavaBBinderHolder();
    if (jbh == NULL) {
        jniThrowException(env, "java/lang/OutOfMemoryError", NULL);
        return;
    }
    ALOGV("Java Binder %p: acquiring first ref on holder %p", obj, jbh);
    jbh->incStrong((void*)android_os_Binder_init);
    // store the JavaBBinderHolder pointer into the Java Binder object's mObject field
    env->SetLongField(obj, gBinderOffsets.mObject, (jlong)jbh);
}

So far only the JavaBBinderHolder has been created. Pay special attention to JavaBBinderHolder's get() method, which is invoked when the Binder object is actually used.

frameworks/base/core/jni/android_util_Binder.cpp

sp<JavaBBinder> get(JNIEnv* env, jobject obj)
{
    AutoMutex _l(mLock);
    sp<JavaBBinder> b = mBinder.promote();
    if (b == NULL) {
        //lazily create a JavaBBinder here, managed by a smart pointer
        b = new JavaBBinder(env, obj);
        mBinder = b;
        ALOGV("Creating JavaBinder %p (refs %p) for Object %p, weakCount=%" PRId32 "\n",
              b.get(), b->getWeakRefs(), obj, b->getWeakRefs()->getWeakCount());
    }

    return b;
}
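
The promote-or-recreate logic in get() can be pictured with a small Java analogue (illustrative only; Holder and Factory are invented names standing in for JavaBBinderHolder and the JavaBBinder constructor):

import java.lang.ref.WeakReference;

class Holder<T> {
    interface Factory<T> { T create(); }
    private WeakReference<T> mBinder = new WeakReference<>(null);

    // Analogue of JavaBBinderHolder::get(): cache the wrapper weakly and
    // recreate it only when the old one has been collected.
    synchronized T get(Factory<T> factory) {
        T b = mBinder.get();          // mBinder.promote()
        if (b == null) {              // first use, or wrapper already gone
            b = factory.create();     // new JavaBBinder(env, obj)
            mBinder = new WeakReference<>(b);
        }
        return b;
    }
}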

The JavaBBinder constructor:

frameworks/base/core/jni/android_util_Binder.cpp

JavaBBinder(JNIEnv* env, jobject object)
    : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object))
    //note this line: env->NewGlobalRef(object) creates a JNI global reference.
    //unless env->DeleteGlobalRef(object) is called later, the Java-layer object
    //(here the observer) can never be garbage-collected.
{
    ALOGV("Creating JavaBBinder %p\n", this);
    android_atomic_inc(&gNumLocalRefs);
    incRefsCreated(env);
}
2. Using the Binder object: register
public void registerProcessObserver(android.app.IProcessObserver observer) throws android.os.RemoteException;

When the client calls IActivityManager.registerProcessObserver, the IActivityManager instance is actually an ActivityManagerProxy wrapping a BinderProxy (mRemote), and the IProcessObserver observer is the Binder object about to be sent across:

public void registerProcessObserver(android.app.IProcessObserver observer) throws android.os.RemoteException
{
    android.os.Parcel _data = android.os.Parcel.obtain();
    android.os.Parcel _reply = android.os.Parcel.obtain();
    try {
        _data.writeInterfaceToken(DESCRIPTOR);
        _data.writeStrongBinder((((observer!=null))?(observer.asBinder()):(null)));
        mRemote.transact(Stub.TRANSACTION_registerProcessObserver, _data, _reply, 0);
        _reply.readException();
    }
    finally {
        _reply.recycle();
        _data.recycle();
    }
}

Cross-process transport always goes through Parcel. Note this line in the code above:

_data.writeStrongBinder((((observer!=null))?(observer.asBinder()):(null)));

_data here is the Java-layer Parcel object.
Once the Binder has been written down into the native layer, mRemote.transact(Stub.TRANSACTION_registerProcessObserver, _data, _reply, 0) is called.

Let's look at Parcel.java's writeStrongBinder method:
frameworks/base/core/java/android/os/Parcel.java

public final void writeStrongBinder(IBinder val) {
    //delegate to the native method
    nativeWriteStrongBinder(mNativePtr, val);
}
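
As a quick illustration of the write/read pair, the following sketch round-trips a local Binder through a Parcel (it has to run inside an Android process, e.g. in an instrumentation test):

import android.os.Binder;
import android.os.IBinder;
import android.os.Parcel;

public class ParcelBinderDemo {
    public static void roundTrip() {
        IBinder binder = new Binder();    // a local Binder object
        Parcel p = Parcel.obtain();
        p.writeStrongBinder(binder);      // -> nativeWriteStrongBinder -> flatten_binder
        p.setDataPosition(0);
        IBinder out = p.readStrongBinder();
        // In the same process the BINDER_TYPE_BINDER branch applies and the
        // original object comes back, so this prints true.
        System.out.println(out == binder);
        p.recycle();
    }
}

Across a process boundary the kernel rewrites the object's type to BINDER_TYPE_HANDLE and the reader gets a BinderProxy instead, as analyzed below.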

frameworks/base/core/jni/android_os_Parcel.cpp

static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object)
{
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        //ibinderForJavaObject: here object is the Java-layer Binder, i.e. the IProcessObserver observer
        const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
        if (err != NO_ERROR) {
            signalExceptionForError(env, clazz, err);
        }
    }
}

Two methods are involved here: ibinderForJavaObject and Parcel's writeStrongBinder.
First, ibinderForJavaObject:

frameworks/base/core/jni/android_util_Binder.cpp

//here obj is the Java-layer Binder object, i.e. the IProcessObserver observer
sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
    if (obj == NULL) return NULL;

    //mClass points to the Java-layer Binder class
    if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) { 
        JavaBBinderHolder* jbh = (JavaBBinderHolder*)
            env->GetLongField(obj, gBinderOffsets.mObject);
        //JavaBBinderHolder's get() returns a JavaBBinder, which extends BBinder.
        //inside JavaBBinder a global reference was created via env->NewGlobalRef(object),
        //which holds the Java-layer Binder, i.e. the IProcessObserver observer.
        //unless env->DeleteGlobalRef(object) is called, the observer can never be freed
        return jbh != NULL ? jbh->get(env, obj) : NULL; 
    }
    //mClass points to the Java-layer BinderProxy class
    if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) { 
        return (IBinder*)
            //return the BpBinder whose address is stored in mObject
            env->GetLongField(obj, gBinderProxyOffsets.mObject); 
    }
    }
    ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);
    return NULL;
}

Later we will analyze the env->DeleteGlobalRef(object) call that releases the reference to the Java-layer object, i.e. the observer.

At this point we can see that the Java-layer Binder object is pinned by a JNI global reference (NewGlobalRef) held in the native layer. This is the source of the memory-leak risk.

ibinderForJavaObject calls JavaBBinderHolder's get(), which returns a JavaBBinder object; JavaBBinder extends BBinder and stands for the Binder object passed in from the Java layer.

Next, Parcel's writeStrongBinder:
frameworks/native/libs/binder/Parcel.cpp

status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    //a ProcessState instance is passed in; val is the JavaBBinder, which holds
    //the JNI global reference (env->NewGlobalRef(object)) to the Java-layer Binder
    return flatten_binder(ProcessState::self(), val, this);
}

frameworks/native/libs/binder/Parcel.cpp

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;
    ......
    if (binder != NULL) {
        //binder is a BBinder; localBinder() returns this, so local is non-NULL and the else branch below runs
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            //fill in the fields of the flat_binder_object obj
            obj.type = BINDER_TYPE_BINDER;
            //address of the weak-reference-count object inside the BBinder
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            //address of the local BBinder object itself
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }
    //the helper finish_flatten_binder writes the flat_binder_object obj into the Parcel out
    return finish_flatten_binder(binder, obj, out);
}

frameworks/native/libs/binder/Parcel.cpp

inline static status_t finish_flatten_binder(
    const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
    //write the flat_binder_object into the Parcel out
    return out->writeObject(flat, false);
}

Next, how does this get delivered to the AMS server side?
Look at this line:
mRemote.transact(Stub.TRANSACTION_registerProcessObserver, _data, _reply, 0);
mRemote is actually the BinderProxy inside ActivityManagerProxy. BinderProxy's transact calls transactNative, which lands in this native method:

frameworks/base/core/jni/android_util_Binder.cpp

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
        //code identifies which method is being called; dataObj carries the arguments
{
    if (dataObj == NULL) {
        jniThrowNullPointerException(env, NULL);
        return JNI_FALSE;
    }

    //get the native Parcel behind the Java Parcel object
    Parcel* data = parcelForJavaObject(env, dataObj);
    if (data == NULL) {
        return JNI_FALSE;
    }
    //get the Parcel that will receive the reply
    Parcel* reply = parcelForJavaObject(env, replyObj);
    if (reply == NULL && replyObj != NULL) {
        return JNI_FALSE;
    }
    //fetch the previously created native BpBinder from the Java BinderProxy.
    //this target is the BpBinder behind ActivityManagerProxy
    IBinder* target = (IBinder*)
        env->GetLongField(obj, gBinderProxyOffsets.mObject);
    ......
    //send the request to the server process hosting AMS via the BpBinder
    status_t err = target->transact(code, *data, reply, flags);
    ......
    if (err == NO_ERROR) {
        return JNI_TRUE;
    } else if (err == UNKNOWN_TRANSACTION) {
        return JNI_FALSE;
    }

    signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
    return JNI_FALSE;
}

frameworks/native/libs/binder/BpBinder.cpp

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        //delegate to IPCThreadState's transact
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

BpBinder::transact calls IPCThreadState's transact. mHandle is the client-side handle; the client uses this handle to bind to the corresponding Binder reference object inside the Binder driver. mHandle is initialized when the proxy for the server-side Binder is created, and ultimately maps onto the server-side Binder object. The creation of BinderProxy is analyzed later.
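
For intuition, this is roughly what the generated proxy boils down to: a raw transact() on the server's IBinder. A hedged sketch -- the descriptor string and the transaction-code offset are placeholders for the real generated constants:

import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

public class RawTransactDemo {
    // amsBinder: the IBinder behind ActivityManagerProxy; observer: our callback.
    public static void register(IBinder amsBinder, android.app.IProcessObserver observer)
            throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeInterfaceToken("android.app.IActivityManager");
            data.writeStrongBinder(observer.asBinder());
            // The real code uses the generated constant
            // Stub.TRANSACTION_registerProcessObserver; this offset is a placeholder.
            amsBinder.transact(IBinder.FIRST_CALL_TRANSACTION + 0, data, reply, 0);
            reply.readException();
        } finally {
            reply.recycle();
            data.recycle();
        }
    }
}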

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    ......
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        //on the outgoing leg of the IPC the command is BC_TRANSACTION; the receiving side sees BR_TRANSACTION
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    ......
    if ((flags & TF_ONE_WAY) == 0) {
        ......
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        ......
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}

writeTransactionData writes the relevant data into mOut, then waitForResponse is called; waitForResponse calls talkWithDriver to communicate with the kernel Binder driver.

frameworks/native/libs/binder/IPCThreadState.cpp

//Parameters:
//cmd: BC_TRANSACTION on the outgoing leg of the IPC (the receiving side sees BR_TRANSACTION)
//handle: the client-side handle; the client uses it to bind to the corresponding
//Binder reference object inside the Binder driver. It was assigned when the proxy
//for the server-side Binder was obtained, and ultimately maps to the server-side Binder
//code: identifies which method is being called
//data: the client's arguments; the flat_binder_object obj inside data has type=BINDER_TYPE_BINDER
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    //handle, code and data are all packed into the binder_transaction_data struct tr;
    //the flat_binder_object inside data has type=BINDER_TYPE_BINDER.
    //cmd and tr are then packed into mOut: the first 32 bits are cmd,
    //followed by the binder_transaction_data struct tr, which carries the handle
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle; //ultimately resolves to the server-side Binder
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    //pack data into the binder_transaction_data tr
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        ......
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}

data is packed into the binder_transaction_data tr; then cmd (BC_TRANSACTION here) is written into mOut first, followed by tr. mOut is what carries this information toward the server: the kernel reads the client's user-space data from it.
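
The layout just described can be pictured with a ByteBuffer (purely illustrative: the command value and the field order/padding are simplified assumptions, not the real driver ABI):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MOutLayoutSketch {
    public static ByteBuffer build(int handle, int code) {
        final int BC_TRANSACTION = 0; // placeholder; the real value comes from an _IOW macro
        ByteBuffer mOut = ByteBuffer.allocate(128).order(ByteOrder.nativeOrder());
        mOut.putInt(BC_TRANSACTION);  // first: the 32-bit command
        mOut.putInt(handle);          // then binder_transaction_data: tr.target.handle
        mOut.putInt(code);            // tr.code -- which remote method to call
        // ... flags, sender ids, data_size, buffer/offsets pointers follow in the real struct
        return mOut;
    }
}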

Next, waitForResponse, which calls talkWithDriver (whose parameter doReceive defaults to true):

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        //uses talkWithDriver's default argument, doReceive=true
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

The key point: waitForResponse calls talkWithDriver(true) and blocks inside talkWithDriver, waiting for the result to come back.

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    //mIn is the buffer that data from the binder driver is read into.
    //mIn.dataPosition() >= mIn.dataSize() means the data from the previous driver
    //round-trip has been fully consumed (the position is at the end), so needRead=true
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    //the write half of binder_write_read bwr stores mOut's size and the address of its data.
    //in mOut, the first 32 bits are cmd, followed by the binder_transaction_data
    //struct tr, which carries the handle
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    ......
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(__ANDROID__)
        //ioctl talks to the kernel binder driver. Parameters:
        //1) mProcess->mDriverFD: the current process's binder device file descriptor
        //2) BINDER_WRITE_READ: the ioctl command
        //3) &bwr: the data pointer, wrapping the mOut handed over from IPCThreadState::transact;
        //the first 32 bits of mOut are cmd=BC_TRANSACTION, which is used later on
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
                        << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        ......
        return NO_ERROR;
    }

    return err;
}

talkWithDriver is responsible both for sending IPC data to the Binder driver and for receiving IPC requests coming from it.

Into the Binder driver:

kernel/msm-4.4/driver/android/binder.c

//for this path, cmd is BINDER_WRITE_READ (BC_TRANSACTION sits inside the buffer)
//arg is the data pointer, i.e. the bwr built around IPCThreadState's mOut; the first 32 bits of mOut are BC_TRANSACTION
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	/*pr_info("binder_ioctl: %d:%d %x %lx\n",
			proc->pid, current->pid, cmd, arg);*/

	trace_binder_ioctl(cmd, arg);
	......
    //the thread in the client process executing this call
	thread = binder_get_thread(proc);
	......
	switch (cmd) {
	case BINDER_WRITE_READ:
		ret = binder_ioctl_write_read(filp, cmd, arg, thread);
		if (ret)
			goto err;
		break;
	......
	}
	ret = 0;
err:
	if (thread)
		thread->looper_need_return = false;
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		pr_info("%d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}

The cmd passed down by talkWithDriver's ioctl is BINDER_WRITE_READ, so the BINDER_WRITE_READ case executes, calling binder_ioctl_write_read:

kernel/msm-4.4/driver/android/binder.c

//arg is the data pointer, i.e. the bwr wrapping IPCThreadState's mOut; the first 32 bits of mOut are BC_TRANSACTION
static int binder_ioctl_write_read(struct file *filp,
				unsigned int cmd, unsigned long arg,
				struct binder_thread *thread)
{
	int ret = 0;
	struct binder_proc *proc = filp->private_data;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg; //cast the incoming arg into the user-space pointer ubuf
	struct binder_write_read bwr;

	if (size != sizeof(struct binder_write_read)) {
		ret = -EINVAL;
		goto out;
	}
    // copy the user-space data into kernel space:
    //copy ubuf into bwr (sizeof(bwr) bytes) -- effectively handing IPCThreadState's mOut
    //to bwr: its first 32 bits are BC_TRANSACTION, followed by the
    //binder_transaction_data struct tr, which carries the handle
	if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
		ret = -EFAULT;
		goto out;
	}
	......

	if (bwr.write_size > 0) {
        // process the command protocol data sent from user space.
        //bwr.write_size > 0 because the client's user space passed data down;
        //bwr.write_buffer is the start address of mOut's data
		ret = binder_thread_write(proc, thread,
					  bwr.write_buffer,
					  bwr.write_size,
					  &bwr.write_consumed);
		trace_binder_write_done(ret);
		if (ret < 0) {
			bwr.read_consumed = 0;
			if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
				ret = -EFAULT;
			goto out;
		}
	}
        
	if (bwr.read_size > 0) {
         //the current thread of this process blocks in binder_thread_read, waiting to be woken again
		ret = binder_thread_read(proc, thread, bwr.read_buffer,
					 bwr.read_size,
					 &bwr.read_consumed,
					 filp->f_flags & O_NONBLOCK);
		trace_binder_read_done(ret);
		binder_inner_proc_lock(proc);
		if (!binder_worklist_empty_ilocked(&proc->todo))
			binder_wakeup_proc_ilocked(proc);
		binder_inner_proc_unlock(proc);
		if (ret < 0) {
			if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
				ret = -EFAULT;
			goto out;
		}
	}
	......
    // copy the kernel-space bwr back out to user space
	if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
		ret = -EFAULT;
		goto out;
	}
out:
	return ret;
}

kernel/msm-4.4/driver/android/binder.c

//control struct for the data exchanged between user space and the binder driver
struct binder_write_read {
    binder_size_t write_size; // size of the command data user space sends to the driver
    binder_size_t write_consumed; // how much of that data the driver consumed
    binder_size_t write_buffer; // address of the command data buffer
    binder_size_t read_size; // size of the buffer receiving command data returned by the driver
    binder_size_t read_consumed; // size of the command data the driver returned to user space
    binder_size_t read_buffer; // address of the buffer receiving the driver's command data
};

First, how does binder_thread_read block? We will come back to binder_thread_write afterwards.
kernel/msm-4.4/driver/android/binder.c

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      binder_uintptr_t binder_buffer, size_t size,
			      binder_size_t *consumed, int non_block)
{
	void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	binder_inner_proc_lock(proc);
	wait_for_proc_work = binder_available_for_proc_work_ilocked(thread);
	binder_inner_proc_unlock(proc);
    //about to wait: mark the thread as waiting
	thread->looper |= BINDER_LOOPER_STATE_WAITING;

	trace_binder_wait_for_work(wait_for_proc_work,
				   !!thread->transaction_stack,
				   !binder_worklist_empty(proc, &thread->todo));
	if (wait_for_proc_work) {
		if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
					BINDER_LOOPER_STATE_ENTERED))) {
			......
            //interruptible wait
			wait_event_interruptible(binder_user_error_wait,
						 binder_stop_on_user_error < 2);
		}
		binder_restore_priority(current, proc->default_priority);
	}

	if (non_block) {
		if (!binder_has_work(thread, wait_for_proc_work))
			ret = -EAGAIN;
	} else {
         //block here, waiting to be woken up
		ret = binder_wait_for_work(thread, wait_for_proc_work);
	}
    //once woken, clear the waiting flag
	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
	......
	return 0;
}

kernel/msm-4.4/driver/android/binder.c

static int binder_wait_for_work(struct binder_thread *thread,
				bool do_proc_work)
{
	DEFINE_WAIT(wait);
	struct binder_proc *proc = thread->proc;
	int ret = 0;

	freezer_do_not_count();
	binder_inner_proc_lock(proc);
	for (;;) {
		prepare_to_wait(&thread->wait, &wait, TASK_INTERRUPTIBLE);
		if (binder_has_work_ilocked(thread, do_proc_work))
			break;
		if (do_proc_work)
             //add this thread to the waiting queue
			list_add(&thread->waiting_thread_node,
				 &proc->waiting_threads);
		binder_inner_proc_unlock(proc);
        //yield to the scheduler and sleep
		schedule();
		binder_inner_proc_lock(proc);
		list_del_init(&thread->waiting_thread_node);
         //woken up; check whether a pending signal caused it
		if (signal_pending(current)) {
			ret = -ERESTARTSYS;
			break;
		}
	}
    //done waiting
	finish_wait(&thread->wait, &wait);
	binder_inner_proc_unlock(proc);
	freezer_count();

	return ret;
}
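
The shape of this loop -- take the lock, check for work, otherwise sleep until the peer enqueues something and wakes us -- maps onto the classic Java monitor pattern. A simplified analogue, ignoring freezer and signal handling:

import java.util.ArrayDeque;
import java.util.Queue;

class BinderWaitModel {
    private final Object workLock = new Object();
    private final Queue<Runnable> todo = new ArrayDeque<>();

    // Analogue of binder_wait_for_work(): sleep until the todo list is non-empty.
    Runnable waitForWork() throws InterruptedException {
        synchronized (workLock) {
            while (todo.isEmpty()) {      // binder_has_work_ilocked()
                workLock.wait();          // prepare_to_wait() + schedule()
            }
            return todo.poll();
        }
    }

    // Analogue of the peer enqueuing a transaction and waking the reader.
    void enqueue(Runnable w) {
        synchronized (workLock) {
            todo.add(w);
            workLock.notify();            // wake_up_interruptible()
        }
    }
}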

Now back to binder_thread_write. At this point proc is still the client process.
kernel/msm-4.4/driver/android/binder.c

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
	uint32_t cmd;
	struct binder_context *context = proc->context;
	void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error.cmd == BR_OK) {
		int ret;
         //the first 32 bits of mOut: BC_TRANSACTION
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		......
		switch (cmd) {
		......
                ......
		case BC_TRANSACTION_SG:
		case BC_REPLY_SG: {
			struct binder_transaction_data_sg tr;

			if (copy_from_user(&tr, ptr, sizeof(tr)))
				return -EFAULT;
			ptr += sizeof(tr);
			binder_transaction(proc, thread, &tr.transaction_data,
					   cmd == BC_REPLY_SG, tr.buffers_size);
			break;
		}
		case BC_TRANSACTION:
		case BC_REPLY: {
			struct binder_transaction_data tr;
             //copy the data starting at mOut's second word, i.e. the
             //binder_transaction_data struct tr (which carries the handle), into tr
			if (copy_from_user(&tr, ptr, sizeof(tr)))
				return -EFAULT;
			ptr += sizeof(tr);
             //cmd == BC_REPLY evaluates to false here
			binder_transaction(proc, thread, &tr,
					   cmd == BC_REPLY, 0);
			break;
		}

		......
		
		}
		*consumed = ptr - buffer;
	}
	return 0;
}

On to binder_transaction: proc is the client process, thread is the client thread, tr is the data passed down from user space, and reply is false.

kernel/msm-4.4/driver/android/binder.c

static void binder_transaction(struct binder_proc *proc,
			       struct binder_thread *thread,
			       struct binder_transaction_data *tr, int reply,
			       binder_size_t extra_buffers_size)
{
	int ret;
	struct binder_transaction *t;
	struct binder_work *tcomplete;
	binder_size_t *offp, *off_end, *off_start;
	binder_size_t off_min;
	u8 *sg_bufp, *sg_buf_end;
	struct binder_proc *target_proc = NULL;
	struct binder_thread *target_thread = NULL;
	struct binder_node *target_node = NULL;
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;
	......
	if (reply) {
		......
	} else {
        //a non-zero target handle means this is a client calling a server.
        //tr traces back to mOut, whose binder_transaction_data carries the handle;
        //that handle corresponds to the server-side Binder
		if (tr->target.handle) {
			struct binder_ref *ref;

			/*
			 * There must already be a strong ref
			 * on this node. If so, do a strong
			 * increment on the node to ensure it
			 * stays alive until the transaction is
			 * done.
			 */
			binder_proc_lock(proc);
             //look up the binder_ref corresponding to tr->target.handle
			ref = binder_get_ref_olocked(proc, tr->target.handle,
						     true);
			if (ref) {
                  //through the binder_ref's node member, find the target Binder's
                  //binder_node (target_node), and via that node find target_proc
				target_node = binder_get_node_refs_for_txn(
						ref->node, &target_proc,
						&return_error);
			} else {
				......
			}
			binder_proc_unlock(proc);
		} else {
			......
		}
		......
         //check whether this call expects a reply;
		if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
			struct binder_transaction *tmp;

			tmp = thread->transaction_stack;
			if (tmp->to_thread != thread) {
				spin_lock(&tmp->lock);
				......
				spin_unlock(&tmp->lock);
				binder_inner_proc_unlock(proc);
				return_error = BR_FAILED_REPLY;
				return_error_param = -EPROTO;
				return_error_line = __LINE__;
				goto err_bad_call_stack;
			}
			while (tmp) {
				struct binder_thread *from;

				spin_lock(&tmp->lock);
				from = tmp->from;
				if (from && from->proc == target_proc) {
                      //find the target thread via the transaction_stack (not entered on the first transaction);
					atomic_inc(&from->tmp_ref);
					target_thread = from;
					spin_unlock(&tmp->lock);
					break;
				}
				spin_unlock(&tmp->lock);
				tmp = tmp->from_parent;
			}
		}
		binder_inner_proc_unlock(proc);
	}
	if (target_thread)
		e->to_thread = target_thread->pid;
	e->to_proc = target_proc->pid;

	/* TODO: reuse incoming transaction for reply */
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	......
	binder_stats_created(BINDER_STAT_TRANSACTION);
	spin_lock_init(&t->lock);

	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	......
	binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

	t->debug_id = t_debug_id;
	......
	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;
	else
		t->from = NULL;
	t->sender_euid = task_euid(proc->tsk);
	t->to_proc = target_proc;
	t->to_thread = target_thread;
	t->code = tr->code;
	t->flags = tr->flags;
	if (!(t->flags & TF_ONE_WAY) &&
	    binder_supported_policy(current->policy)) {
		/* Inherit supported policies for synchronous transactions */
		t->priority.sched_policy = current->policy;
		t->priority.prio = current->normal_prio;
	} else {
		/* Otherwise, fall back to the default priority */
		t->priority = target_proc->default_priority;
	}

	trace_binder_transaction(reply, t, target_node);
    //binder_alloc_new_buf allocates a buffer in the target process's mmap area;
    //copy_from_user below then copies the user-space data into it, so the target process can read it directly;
	t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
		tr->offsets_size, extra_buffers_size,
		!reply && (t->flags & TF_ONE_WAY));
	if (IS_ERR(t->buffer)) {
		/*
		 * -ESRCH indicates VMA cleared. The target is dying.
		 */
		return_error_param = PTR_ERR(t->buffer);
		return_error = return_error_param == -ESRCH ?
			BR_DEAD_REPLY : BR_FAILED_REPLY;
		return_error_line = __LINE__;
		t->buffer = NULL;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	trace_binder_transaction_alloc_buf(t->buffer);
	off_start = (binder_size_t *)(t->buffer->data +
				      ALIGN(tr->data_size, sizeof(void *)));
	offp = off_start;
    //copy_from_user copies the user-space data into the freshly allocated t->buffer->data,
    //so the target process can read it directly. Recall tr.data.ptr.buffer = data.ipcData()
    //in Parcel.cpp: the user-space Parcel data holds a flat_binder_object with type BINDER_TYPE_BINDER
	if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
			   tr->data.ptr.buffer, tr->data_size)) {
		binder_user_error("%d:%d got transaction with invalid data ptr\n",
				proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		return_error_param = -EFAULT;
		return_error_line = __LINE__;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, (const void __user *)(uintptr_t)
			   tr->data.ptr.offsets, tr->offsets_size)) {
		binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
				proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		return_error_param = -EFAULT;
		return_error_line = __LINE__;
		goto err_copy_data_failed;
	}
	......
	off_end = (void *)off_start + tr->offsets_size;
	sg_bufp = (u8 *)(PTR_ALIGN(off_end, sizeof(void *)));
	sg_buf_end = sg_bufp + extra_buffers_size;
	off_min = 0;
	for (; offp < off_end; offp++) {
		struct binder_object_header *hdr;
		size_t object_size = binder_validate_object(t->buffer, *offp);

		if (object_size == 0 || *offp < off_min) {
			......
			return_error = BR_FAILED_REPLY;
			return_error_param = -EINVAL;
			return_error_line = __LINE__;
			goto err_bad_offset;
		}
         //get the address of the struct flat_binder_object;
         //offp holds the object's offset from the start of the data -- here our flat_binder_object, whose hdr->type is BINDER_TYPE_BINDER
		hdr = (struct binder_object_header *)(t->buffer->data + *offp);
		off_min = *offp + object_size;
		switch (hdr->type) {
		case BINDER_TYPE_BINDER:
		case BINDER_TYPE_WEAK_BINDER: {
			struct flat_binder_object *fp;
             //recover the flat_binder_object from its header
			fp = to_flat_binder_object(hdr);
             //see binder_translate_binder below
			ret = binder_translate_binder(fp, t, thread);
			if (ret < 0) {
				return_error = BR_FAILED_REPLY;
				return_error_param = ret;
				return_error_line = __LINE__;
				goto err_translate_failed;
			}
		} break;
		......
		}
	}
        
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    //queue the work on the client process's / client thread's todo list, so as to add a reference on the transferred Binder object
	binder_enqueue_work(proc, tcomplete, &thread->todo);
	t->work.type = BINDER_WORK_TRANSACTION;

	if (reply) {
		......
	} else if (!(t->flags & TF_ONE_WAY)) {
		BUG_ON(t->buffer->async_transaction != 0);
		binder_inner_proc_lock(proc);
		t->need_reply = 1;
		t->from_parent = thread->transaction_stack;
		thread->transaction_stack = t;
		binder_inner_proc_unlock(proc);
         //wake up the target server process
		if (!binder_proc_transaction(t, target_proc, target_thread)) {
			binder_inner_proc_lock(proc);
			binder_pop_transaction_ilocked(thread, t);
			binder_inner_proc_unlock(proc);
			goto err_dead_proc_or_thread;
		}
	} else {
		BUG_ON(target_node == NULL);
		BUG_ON(t->buffer->async_transaction != 1);
		if (!binder_proc_transaction(t, target_proc, NULL))
			goto err_dead_proc_or_thread;
	}
	......
	return;

......
}

The binder_translate_binder method:

kernel/msm-4.4/drivers/android/binder.c

static int binder_translate_binder(struct flat_binder_object *fp,
				   struct binder_transaction *t,
				   struct binder_thread *thread)
{
	struct binder_node *node;
	struct binder_proc *proc = thread->proc; //the client process, via the client thread
	struct binder_proc *target_proc = t->to_proc; //the server process
	struct binder_ref_data rdata;
	int ret = 0;
     //build a binder_node for the Binder entity the client passed in --
     //the android.app.IProcessObserver observer the client created with new
	node = binder_get_node(proc, fp->binder);
	if (!node) {
         //since this Binder entity is being passed for the first time,
         //binder_get_node returns NULL and a new node must be created
		node = binder_new_node(proc, fp);
		if (!node)
			return -ENOMEM;
	}
	......
    //increase the refcount on the binder_node of the client's Binder; the ref work goes on the client thread's todo list
	ret = binder_inc_ref_for_node(target_proc, node,
			fp->hdr.type == BINDER_TYPE_BINDER,
			&thread->todo, &rdata);
	if (ret)
		goto done;
    //when the client's Binder was written with writeStrongBinder, the flat_binder_object's
    //type was BINDER_TYPE_BINDER; here it is rewritten to BINDER_TYPE_HANDLE
	if (fp->hdr.type == BINDER_TYPE_BINDER)
		fp->hdr.type = BINDER_TYPE_HANDLE;
	else
		fp->hdr.type = BINDER_TYPE_WEAK_HANDLE;
	fp->binder = 0;
    //assign the handle from rdata.desc
	fp->handle = rdata.desc;
	fp->cookie = 0;

	trace_binder_transaction_node_to_ref(t, node, &rdata);
	binder_debug(BINDER_DEBUG_TRANSACTION,
		     "        node %d u%016llx -> ref %d desc %d\n",
		     node->debug_id, (u64)node->ptr,
		     rdata.debug_id, rdata.desc);
done:
	binder_put_node(node);
	return ret;
}

node = binder_get_node(proc, fp->binder) obtains (or binder_new_node creates) the binder_node for the Binder entity the client passed in; binder_inc_ref_for_node then increments its reference count.

Now binder_inc_ref_for_node:

kernel/msm-4.4/drivers/android/binder.c

//proc is the server process target_proc
//node is the binder_node of the Binder entity the client passed in
//strong is true
static int binder_inc_ref_for_node(struct binder_proc *proc,
			struct binder_node *node,
			bool strong,
			struct list_head *target_list,
			struct binder_ref_data *rdata)
{
	struct binder_ref *ref;
	struct binder_ref *new_ref = NULL;
	int ret = 0;

	binder_proc_lock(proc);
    //look up (or create) a binder_ref for the binder_node of the client's Binder entity
	ref = binder_get_ref_for_node_olocked(proc, node, NULL);
	if (!ref) {
		binder_proc_unlock(proc);
		new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
		if (!new_ref)
			return -ENOMEM;
		binder_proc_lock(proc);
		ref = binder_get_ref_for_node_olocked(proc, node, new_ref);
	}
    //use the ref to add a strong reference to the node
	ret = binder_inc_ref_olocked(ref, strong, target_list);
	*rdata = ref->data;
	binder_proc_unlock(proc);
	if (new_ref && ref != new_ref)
		/*
		 * Another thread created the ref first so
		 * free the one we allocated
		 */
		kfree(new_ref);
	return ret;
}

kernel/msm-4.4/drivers/android/binder.c

/**
 * binder_inc_ref_olocked() - increment the ref for given handle
 * @ref:         ref to be incremented
 * @strong:      if true, strong increment, else weak
 * @target_list: list to queue node work on
 *
 * Increment the ref. @ref->proc->outer_lock must be held on entry
 *
 * Return: 0, if successful, else errno
 */
static int binder_inc_ref_olocked(struct binder_ref *ref, int strong,
				  struct list_head *target_list)
{
	int ret;

	if (strong) {
        //strong is true
		if (ref->data.strong == 0) {
            //on the ref's 0 -> 1 transition, bump the refcount on the binder_node of the client's Binder
			ret = binder_inc_node(ref->node, 1, 1, target_list);
			if (ret)
				return ret;
		}
		ref->data.strong++;
	} else {
		if (ref->data.weak == 0) {
			ret = binder_inc_node(ref->node, 0, 1, target_list);
			if (ret)
				return ret;
		}
		ref->data.weak++;
	}
	return 0;
}
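
The rule encoded here -- the node's count changes only on a ref's 0 -> 1 or 1 -> 0 transition -- can be captured in a toy model (single-threaded, no locking; all names invented):

class RefModel {
    int nodeStrong;           // binder_node strong count
    int refStrong;            // binder_ref strong count (per process)

    void incRef() {
        if (refStrong == 0) nodeStrong++;   // binder_inc_node(...)
        refStrong++;
    }

    void decRef() {
        refStrong--;
        if (refStrong == 0) nodeStrong--;   // binder_dec_node(...)
    }
}

So however many references one process takes on the same Binder, that process costs the binder_node only a single strong reference.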

Everything above happens in the client process.

Next: how the BinderProxy object is created.

We saw above that binder_transaction calls binder_proc_transaction to wake the server -- AMS in this scenario. AMS had been blocked in binder_thread_read, which kept the ioctl inside IPCThreadState's talkWithDriver blocked in user space. When the AMS process starts up it calls IPCThreadState's joinThreadPool, whose loop calls getAndExecuteCommand, which in turn calls talkWithDriver; so while there is nothing to process, the loop blocks in ioctl. Once woken up in binder_thread_read, the AMS server process continues with the logic below.
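
The server loop just described can be modelled in a few lines (a rough analogue: the BlockingQueue stands in for the driver's todo list plus the blocking ioctl):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BinderThreadModel {
    private final BlockingQueue<Runnable> driverTodo = new LinkedBlockingQueue<>();

    // Analogue of a client transaction waking the server.
    public void deliver(Runnable transaction) {
        driverTodo.add(transaction);
    }

    // Analogue of IPCThreadState::joinThreadPool(): loop forever, block in the
    // "driver" until a command arrives, then execute it.
    public void joinThreadPool() throws InterruptedException {
        while (true) {
            Runnable cmd = driverTodo.take(); // like ioctl blocking in binder_thread_read
            cmd.run();                        // like executeCommand(BR_TRANSACTION)
        }
    }
}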

kernel/msm-4.4/drivers/android/binder.c

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      binder_uintptr_t binder_buffer, size_t size,
			      binder_size_t *consumed, int non_block)
{
	......
	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

	if (ret)
		return ret;

	while (1) {
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w = NULL;
		struct list_head *list = NULL;
		struct binder_transaction *t = NULL;
		struct binder_thread *t_from;
		......
		w = binder_dequeue_work_head_ilocked(list);
		switch (w->type) {
		......
		}
		......
		BUG_ON(t->buffer == NULL);
		if (t->buffer->target_node) {
			struct binder_node *target_node = t->buffer->target_node;
			struct binder_priority node_prio;

			tr.target.ptr = target_node->ptr;
			tr.cookie =  target_node->cookie;
			node_prio.sched_policy = target_node->sched_policy;
			node_prio.prio = target_node->min_priority;
			binder_transaction_priority(current, t, node_prio,
						    target_node->inherit_rt);
             //the command for the receiving side is set to BR_TRANSACTION
			cmd = BR_TRANSACTION;
		} else {
			tr.target.ptr = 0;
			tr.cookie = 0;
			cmd = BR_REPLY;
		}
		......
         //write the command
		if (put_user(cmd, (uint32_t __user *)ptr)) {
			if (t_from)
				binder_thread_dec_tmpref(t_from);
			return -EFAULT;
		}
		ptr += sizeof(uint32_t);
         //copy the data to user space: the protocol and its payload are written into a
         //user-space buffer provided by the AMS server process, then control returns to the server's user space
		if (copy_to_user(ptr, &tr, sizeof(tr))) {
			if (t_from)
				binder_thread_dec_tmpref(t_from);
			return -EFAULT;
		}
		ptr += sizeof(tr);
		......
		break;
	}
    ......
	return 0;
}

The server process handles the protocol in executeCommand, called from IPCThreadState's getAndExecuteCommand.

The getAndExecuteCommand method:

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        //read out the command
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs == 0) {
            mProcess->mStarvationStartTimeMs = uptimeMillis();
        }
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
        //execute the actual logic
        result = executeCommand(cmd);

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs != 0) {
            int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
            if (starvationTimeMs > 100) {
                ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
                      mProcess->mMaxThreads, starvationTimeMs);
            }
            mProcess->mStarvationStartTimeMs = 0;
        }
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
    }

    return result;
}

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    //cmd corresponds to the client's BC_TRANSACTION; here it is BR_TRANSACTION
    switch ((uint32_t)cmd) {
    ......
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            //here is the data the client sent over
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
            ......
            Parcel reply;
            status_t error;
            ......
            if (tr.target.ptr) {
                // We only have a weak reference on the target object, so we must first try to
                // safely acquire a strong reference before doing anything else with it.
                if (reinterpret_cast<RefBase::weakref_type*>(
                        tr.target.ptr)->attemptIncStrong(this)) {
                    //tr.cookie is the BBinder on the AMS side -- the JavaBBinder
                    //created when AMS was initialized. The base class's transact
                    //calls the subclass's onTransact;
                    //see JavaBBinder's onTransact
                    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                            &reply, tr.flags);
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
                    error = UNKNOWN_TRANSACTION;
                }

            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }
            ......
        }
        break;
    ......
    return result;
}

JavaBBinder's onTransact method:

frameworks/base/core/jni/android_util_Binder.cpp

virtual status_t onTransact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0)
{
    JNIEnv* env = javavm_to_jnienv(mVM);

    ALOGV("onTransact() on %p calling object %p in env %p vm %p\n", this, mObject, env, mVM);

    IPCThreadState* thread_state = IPCThreadState::self();
    const int32_t strict_policy_before = thread_state->getStrictModePolicy();

    //this calls back up into Java:
    //mObject is the Java-layer AMS object,
    //gBinderOffsets.mExecTransact is Binder's execTransact method (this Binder being AMS),
    //and code selects the method -- eventually android.app.IActivityManager.Stub#onTransact runs
    jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact,
                                          code, reinterpret_cast<jlong>(&data), 
                                          reinterpret_cast<jlong>(reply), flags);

    ......
    return res != JNI_FALSE ? NO_ERROR : UNKNOWN_TRANSACTION;
}

Back in the Java layer, android.app.IActivityManager.Stub#onTransact:

case TRANSACTION_registerProcessObserver:
{
    data.enforceInterface(DESCRIPTOR);
    android.app.IProcessObserver _arg0;
    _arg0 = android.app.IProcessObserver.Stub.asInterface(data.readStrongBinder());
    this.registerProcessObserver(_arg0);
    reply.writeNoException();
    return true;
}

_arg0 is the interface wrapper around the BinderProxy that the AMS server process creates for the IProcessObserver the client registered. Inside asInterface(data.readStrongBinder()), data.readStrongBinder() is Parcel.java's readStrongBinder method:

frameworks/base/core/java/android/os/Parcel.java

/**
* Read an object from the parcel at the current dataPosition().
*/
public final IBinder readStrongBinder() {
    return nativeReadStrongBinder(mNativePtr);
}

which calls android_os_Parcel.cpp's android_os_Parcel_readStrongBinder:

frameworks/base/core/jni/android_os_Parcel.cpp

static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr)
{
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        return javaObjectForIBinder(env, parcel->readStrongBinder());
    }
    return NULL;
}

parcel->readStrongBinder() returns a BpBinder object, and
javaObjectForIBinder(env, parcel->readStrongBinder()) converts that BpBinder into a Java-layer BinderProxy. Let's see how the BpBinder object gets created.

First, parcel->readStrongBinder():

frameworks/native/libs/binder/Parcel.cpp

status_t Parcel::readStrongBinder(sp<IBinder>* val) const
{
    status_t status = readNullableStrongBinder(val);
    if (status == OK && !val->get()) {
        status = UNEXPECTED_NULL;
    }
    return status;
}

This calls readNullableStrongBinder, which simply delegates to unflatten_binder:

frameworks/native/libs/binder/Parcel.cpp

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    //read back the flat_binder_object stored in the Parcel
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        //the kernel rewrote flat->type to BINDER_TYPE_HANDLE, so the BINDER_TYPE_HANDLE branch runs
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                //finish_unflatten_binder() just returns NO_ERROR; nothing to see there
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

The proc->getStrongProxyForHandle call is what actually obtains the BpBinder proxy for the Binder object:

frameworks/native/libs/binder/ProcessState.cpp

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
        	// handle 0 has special meaning: it is ServiceManager's handle.
            // here handle is non-zero; it was set in binder.c's binder_translate_binder via fp->handle = rdata.desc
            if (handle == 0) {
                ......
            }
            // the first time a proxy is requested for a given Binder object,
            // a new BpBinder is built from its handle
            // and returned as the result
            b = new BpBinder(handle); 
            // and cached in the binder field of the handle_entry e
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            // on subsequent calls, the cached BpBinder is returned via result
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

lookupHandleLocked() first consults the local cache. If a BpBinder has already been created for this handle, it is returned directly. Even if not, lookupHandleLocked() still creates a handle_entry, with e->binder empty; then new BpBinder(handle) runs and the new object is put into the cache. [From: binder 情景分析 - service 的注册(上)]

A word about handle_entry first.
[From: Android ServiceManager 代理对象的获取; the content comes from 罗升阳《Android系统源代码情景分析》, Chapter 5, Section 5.7]
The Binder library maintains, per process, a list of Binder proxy objects of type handle_entry, keyed by handle, covering all Binder proxy objects inside the process. The list is stored in the ProcessState member mHandleToObject, defined as follows.
Source path: /frameworks/native/include/binder/ProcessState.h

struct handle_entry {
   IBinder* binder;
   RefBase::weakref_type* refs;
};
...
Vector<handle_entry> mHandleToObject;

Each Binder proxy object is described by a handle_entry struct, whose two members, binder and refs, point to the Binder proxy object and its weak-reference-count object respectively.

getStrongProxyForHandle 函数中,调用 lookupHandleLocked 函数,来检查成员变量 mHandleToObject 中是否已经存在一个与句柄值 handle 对应的 handle_entry 结构体,实现如下:
源码路径: /frameworks/native/include/binder/ProcessState.h

ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N=mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}

A Binder proxy object's handle also serves as its index into mHandleToObject. lookupHandleLocked first checks whether handle is greater than or equal to the size of mHandleToObject. If so, the if branch is hit: mHandleToObject does not yet contain a handle_entry for this handle, so the code inserts the required handle_entry structs into the list, and at the end the handle_entry matching the handle is returned to the caller.

Note: handle_entry structs inserted this way are not yet associated with any Binder proxy object; their binder and refs members are still NULL.

So getStrongProxyForHandle returns the BpBinder cached by lookupHandleLocked(); if there is no cached one, it does new BpBinder(handle), returns it, and puts it into the cache. In other words, one Binder object always maps to one and the same BpBinder proxy.

getStrongProxyForHandlelookupHandleLocked的分析参考:
【1:罗升阳《Android系统源代码情景分析》第5章第5.7节】
【2:Android ServiceManager 代理对象的获取 , 这篇文章内容其实也来自罗升阳书中】
【3:Android系统进程间通信(IPC)机制Binder中的Client获得Server远程接口过程源代码分析 罗升阳】
【4:Android Binder机制(四) defaultServiceManager()的实现
【5:binder 情景分析 - service 的注册(上)

Now back to android_os_Parcel.cpp: android_os_Parcel_readStrongBinder calls javaObjectForIBinder, passing in the BpBinder obtained above.
The method lives in android_util_Binder.cpp:
frameworks/base/core/jni/android_util_Binder.cpp

jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;
    //check whether val is a JavaBBinder; in our scenario val is a BpBinder, so this is false
    if (val->checkSubclass(&gBinderOffsets)) {
        // One of our own!
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }

    // For the rest of the function we will hold this lock, to serialize
    // looking/creation/destruction of Java proxies for native Binder proxies.
    AutoMutex _l(mProxyLock);

    // Someone else's...  do we know about it?
    //findObject looks in gBinderProxyOffsets for a BinderProxy already created for and attached to val
    //(1) for a given Binder object this is NULL the first time through
    //(2) findObject returns a WeakReference to a BinderProxy
    jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
    if (object != NULL) {
    	//(3) call the WeakReference's get() to try to obtain the real BinderProxy
        jobject res = env->CallObjectMethod(object, gWeakReferenceOffsets.mGet);
        if (res != NULL) {
            ALOGV("objectForBinder %p: found existing %p!\n", val.get(), res);
            //(4) if the BinderProxy inside the WeakReference has not been collected, return it
            return res;
        }
        LOGDEATH("Proxy object %p of IBinder %p no longer in working set!!!", object, val.get());
        android_atomic_dec(&gNumProxyRefs);
        val->detachObject(&gBinderProxyOffsets);
        env->DeleteGlobalRef(object);
    }
    //gBinderProxyOffsets.mClass is BinderProxy.class,
    //gBinderProxyOffsets.mConstructor is BinderProxy's constructor
    //(5) if no existing BinderProxy was found above,
    // create a Java-layer BinderProxy here;
    // BinderProxy objects are only ever created from native code -- the Java layer has no entry point for creating them
    object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
    if (object != NULL) {
        //initialize the new BinderProxy object's fields
        LOGDEATH("objectForBinder %p: created new proxy %p !\n", val.get(), object);
        // The proxy holds a reference to the native object.
        env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
        val->incStrong((void*)javaObjectForIBinder);

        // The native object needs to hold a weak reference back to the
        // proxy, so we can retrieve the same proxy if it is still active.
        jobject refObject = env->NewGlobalRef(
                env->GetObjectField(object, gBinderProxyOffsets.mSelf));
        //(6) attach the Java-layer BinderProxy to the native BpBinder via gBinderProxyOffsets, so the same proxy can be fetched directly next time
        val->attachObject(&gBinderProxyOffsets, refObject,
                jnienv_to_javavm(env), proxy_cleanup);

        // Also remember the death recipients registered on this proxy
        sp<DeathRecipientList> drl = new DeathRecipientList;
        drl->incStrong((void*)javaObjectForIBinder);
        env->SetLongField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jlong>(drl.get()));

        // Note that a new object reference has been created.
        android_atomic_inc(&gNumProxyRefs);
        incRefsCreated(env);
    }
    //(7) return the Java-layer BinderProxy
    return object;
}

We said earlier that the same Binder object maps to the same BpBinder; here we see that the same BpBinder almost always yields the same BinderProxy. So the same Binder object corresponds to the same BinderProxy.

The analysis of javaObjectForIBinder draws on:
[1: 罗升阳《Android系统源代码情景分析》, Chapter 5, Section 5.10.1]
[2: 理解Binder通信原理及常见问题6]

At this point the AMS server side holds a BinderProxy for the Binder object the client sent over.
The Binder object's path through the whole registration flow is:
Binder ——> binder_node ——> binder_ref ——> handle ——> AMS ——> BpBinder(mHandle) ——> BinderProxy ——> IProcessObserver ——> RemoteCallbackList
Walking this chain backwards explains why the Binder stays referenced and cannot be reclaimed.
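
On the server side this is roughly how observers end up in a RemoteCallbackList (a hedged sketch; AMS's real bookkeeping is more involved and ProcessObserverRegistry is a made-up name):

import android.app.IProcessObserver;
import android.os.RemoteCallbackList;

class ProcessObserverRegistry {
    private final RemoteCallbackList<IProcessObserver> mObservers =
            new RemoteCallbackList<>();

    void registerProcessObserver(IProcessObserver observer) {
        // register() keys on observer.asBinder() -- the BinderProxy -- and
        // also links a death recipient so dead clients are pruned.
        mObservers.register(observer);
    }

    void unregisterProcessObserver(IProcessObserver observer) {
        // Works only if the client passes the *same* Binder it registered:
        // the same client-side Binder always maps to the same BinderProxy here.
        mObservers.unregister(observer);
    }

    void dispatchProcessDied(int pid, int uid) {
        int n = mObservers.beginBroadcast();
        for (int i = 0; i < n; i++) {
            try {
                mObservers.getBroadcastItem(i).onProcessDied(pid, uid);
            } catch (android.os.RemoteException ignored) { }
        }
        mObservers.finishBroadcast();
    }
}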

3. Using the Binder object: unregister

When unregister is called, the AMS server removes the entry from the RemoteCallbackList; once the BinderProxy is garbage-collected, its finalize() method runs.

See android.os.BinderProxy#finalize:
frameworks/base/core/java/android/os/Binder.java

@Override
protected void finalize() throws Throwable {
    try {
        destroy();
    } finally {
        super.finalize();
    }
}

which calls private native final void destroy();

frameworks/base/core/jni/android_util_Binder.cpp

static void android_os_BinderProxy_destroy(JNIEnv* env, jobject obj)
{
    // Don't race with construction/initialization
    AutoMutex _l(mProxyLock);
    //obj is the Java-layer BinderProxy; b is its underlying BpBinder
    IBinder* b = (IBinder*)
            env->GetLongField(obj, gBinderProxyOffsets.mObject);
    DeathRecipientList* drl = (DeathRecipientList*)
            env->GetLongField(obj, gBinderProxyOffsets.mOrgue);

    LOGDEATH("Destroying BinderProxy %p: binder=%p drl=%p\n", obj, b, drl);
    if (b != nullptr) {
        env->SetLongField(obj, gBinderProxyOffsets.mObject, 0);
        env->SetLongField(obj, gBinderProxyOffsets.mOrgue, 0);
        drl->decStrong((void*)javaObjectForIBinder);
        b->decStrong((void*)javaObjectForIBinder);
    }

    IPCThreadState::self()->flushCommands();
}

The key line is b->decStrong((void*)javaObjectForIBinder). Where does decStrong come from? BpBinder extends IBinder, IBinder extends RefBase, and decStrong is inherited from RefBase.

system/core/libutils/RefBase.cpp

void RefBase::decStrong(const void* id) const
{
    weakref_impl* const refs = mRefs;
    refs->removeStrongRef(id);
    const int32_t c = refs->mStrong.fetch_sub(1, std::memory_order_release);
#if PRINT_REFS
    ......
    if (c == 1) {
        std::atomic_thread_fence(std::memory_order_acquire);
        //this line ends up in BpBinder's override of onLastStrongRef
        refs->mBase->onLastStrongRef(id);
        int32_t flags = refs->mFlags.load(std::memory_order_relaxed);
        if ((flags&OBJECT_LIFETIME_MASK) == OBJECT_LIFETIME_STRONG) {
            delete this;
            // The destructor does not delete refs in this case.
        }
    }
    ......
}

BpBinder's onLastStrongRef method:
frameworks/native/libs/binder/BpBinder.cpp

void BpBinder::onLastStrongRef(const void* /*id*/)
{
    ALOGV("onLastStrongRef BpBinder %p handle %d\n", this, mHandle);
    IF_ALOGV() {
        printRefs();
    }
    IPCThreadState* ipc = IPCThreadState::self();
    if (ipc) ipc->decStrongHandle(mHandle);
}

This calls IPCThreadState's decStrongHandle, which lowers the strong reference count:
frameworks/native/libs/binder/IPCThreadState.cpp

void IPCThreadState::decStrongHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::decStrongHandle(%d)\n", handle);
    mOut.writeInt32(BC_RELEASE);
    mOut.writeInt32(handle);
}

BC_RELEASE是指令,handle对应着Client端创建的IProcessObservor对象,IPCThreadState类的成员函数decStrongHandle将降低Binder引用对象的强引用计数的操作缓存在内部的一个成员变量mOut中,等到下次使用IO控制命令ioctl BINDER_WRITE_READ进入到Binder驱动程序时,再请求Binder驱动程序降低对应的Binder引用对象的强引用计数,mOut会在和talkWithDriver中转换成 binder_write_read结构体bwr和内核通信
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
As a result, the Binder object is not reclaimed immediately after unregistration. If it holds a strong reference to an Activity, Fragment, View, etc., a leak will follow. The fix is to stop implementing it as an anonymous inner class: implement the observer as a standalone (or static nested) class that reaches the Activity/Fragment/View only through a WeakReference, as shown below.
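A leak-safe version might look like this (a sketch, with the same hidden-API caveats as above):

import android.app.Activity;
import android.app.IProcessObserver; // hidden system API; sketch only
import android.util.Log;
import java.lang.ref.WeakReference;

// A standalone class (or a static nested one): no implicit reference to an
// Activity. Even while the native JavaBBinder pins this object through its
// JNI global ref, the Activity remains collectable.
class SafeProcessObserver extends IProcessObserver.Stub {
    private final WeakReference<Activity> mHost;

    SafeProcessObserver(Activity host) {
        mHost = new WeakReference<>(host);
    }

    @Override
    public void onForegroundActivitiesChanged(int pid, int uid, boolean fg) {
        Activity host = mHost.get();
        if (host == null) return; // Activity already collected; nothing to do
        Log.i("SafeObserver", "pid " + pid + " changed; title=" + host.getTitle());
    }

    @Override
    public void onProcessDied(int pid, int uid) {
        // Newer platforms declare additional callbacks; override as required.
    }
}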

Now look at binder_thread_write in binder.c, which reads bwr.write_buffer:
kernel/msm-4.4/drivers/android/binder.c

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
	uint32_t cmd;
	struct binder_context *context = proc->context;
	void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error.cmd == BR_OK) {
		int ret;

		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		trace_binder_command(cmd);
		......
		switch (cmd) {
		case BC_INCREFS:
		case BC_ACQUIRE:
		case BC_RELEASE:
		case BC_DECREFS: {
			uint32_t target;
			const char *debug_string;
             // for BC_RELEASE, strong is true
			bool strong = cmd == BC_ACQUIRE || cmd == BC_RELEASE;
             // for BC_RELEASE, increment is false
			bool increment = cmd == BC_INCREFS || cmd == BC_ACQUIRE;
			struct binder_ref_data rdata;

			if (get_user(target, (uint32_t __user *)ptr))
				return -EFAULT;

			ptr += sizeof(uint32_t);
			ret = -1;
			if (increment && !target) {
				......
			}
             //ret is -1 here, and in C any non-zero value counts as true, so this branch is taken
			if (ret)
                 //increment or decrement the ref for this handle; in this flow, a decrement
				ret = binder_update_ref_for_handle(
						proc, target, increment, strong,
						&rdata);
			......
		}
		......
		*consumed = ptr - buffer;
	}
	return 0;
}

kernel/msm-4.4/drivers/android/binder.c

/**
 * Increment or decrement the ref for a given handle
 * @proc:	proc containing the ref
 * @desc:	the handle associated with the ref (here, the Binder being released)
 * @increment:	true=inc reference, false=dec reference
 * @strong:	true=strong reference, false=weak reference
 * @rdata:	the id/refcount data for the ref
 *
 * Given a proc and ref handle, increment or decrement the ref
 * according to "increment" arg.
 *
 * Return: 0 if successful, else errno
 */
static int binder_update_ref_for_handle(struct binder_proc *proc,
		uint32_t desc, bool increment, bool strong,
		struct binder_ref_data *rdata)
{
	int ret = 0;
	struct binder_ref *ref;
	bool delete_ref = false;

	binder_proc_lock(proc);
	ref = binder_get_ref_olocked(proc, desc, strong);
	if (!ref) {
		ret = -EINVAL;
		goto err_no_ref;
	}
    //increment is false here, so the else branch runs and the count is decremented
	if (increment)
		ret = binder_inc_ref_olocked(ref, strong, NULL);
	else
		delete_ref = binder_dec_ref_olocked(ref, strong);

	if (rdata)
		*rdata = ref->data;
	binder_proc_unlock(proc);

	if (delete_ref)
		binder_free_ref(ref);
	return ret;

err_no_ref:
	binder_proc_unlock(proc);
	return ret;
}

The binder_dec_ref_olocked method:
kernel/msm-4.4/drivers/android/binder.c

static bool binder_dec_ref_olocked(struct binder_ref *ref, int strong)
{
	if (strong) {
		......
         //decrement the ref's strong count
		ref->data.strong--;
		if (ref->data.strong == 0)
			binder_dec_node(ref->node, strong, 1);
	} else {
		......
	}
	if (ref->data.strong == 0 && ref->data.weak == 0) {
		binder_cleanup_ref_olocked(ref);
		return true;
	}
	return false;
}
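The two-level counting here is easy to model (a plain-Java sketch of the idea, not the driver code): each binder_ref keeps its own strong count; only when it reaches zero does the binder_node lose a reference, and only when the node has none left is the owning process told to drop its JavaBBinder.

// Sketch of the driver's two-level reference counting.
class Node {
    int strongRefs;

    void decStrong() {
        if (--strongRefs == 0) {
            // Real driver: queue a BINDER_WORK_NODE item on the owner's todo
            // list and wake it, so it eventually receives BR_RELEASE.
            System.out.println("node unreferenced -> notify owning process");
        }
    }
}

class Ref {
    final Node node;
    int strong;

    Ref(Node node, int strong) { this.node = node; this.strong = strong; }

    /** Mirrors binder_dec_ref_olocked: returns true if the ref can be freed. */
    boolean decStrong() {
        if (--strong == 0) node.decStrong();
        return strong == 0; // weak counting omitted in this sketch
    }
}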

When ref->data.strong reaches zero, binder_dec_node is called, and once both the strong and weak counts are zero, binder_cleanup_ref_olocked runs as well; both of these end up calling binder_dec_node_nilocked.

For an analysis of binder_dec_node_nilocked, see https://www.cnblogs.com/hrhguanli/p/3905462.html
kernel/msm-4.4/drivers/android/binder.c

static bool binder_dec_node_nilocked(struct binder_node *node,
				     int strong, int internal)
{
	struct binder_proc *proc = node->proc;

	BUG_ON(!spin_is_locked(&node->lock));
	if (proc)
		BUG_ON(!spin_is_locked(&proc->inner_lock));
	if (strong) {
		if (internal)
             //decrement internal_strong_refs, the node's count of strong references held via binder_refs by other processes
			node->internal_strong_refs--;
		else
             //decrement local_strong_refs, the node's count of strong references within the owning process
			node->local_strong_refs--;
		if (node->local_strong_refs || node->internal_strong_refs)
			return false;
	} else {
		if (!internal)
			node->local_weak_refs--;
		if (node->local_weak_refs || node->tmp_refs ||
				!hlist_empty(&node->refs))
			return false;
	}
    //reaching here means the node's strong or weak count has dropped to zero;
    //if has_strong_ref or has_weak_ref is still 1, the owning process still holds a reference to release
	if (proc && (node->has_strong_ref || node->has_weak_ref)) {
         //check whether a BINDER_WORK_NODE work item is already queued on the todo list of the process that owns this Binder local object
		if (list_empty(&node->work.entry)) {
             //queue the work item
			binder_enqueue_work_ilocked(&node->work, &proc->todo);
             //proc owns the Binder awaiting release; wake that process up
			binder_wakeup_proc_ilocked(proc);
		}
	} else {
		if (hlist_empty(&node->refs) && !node->local_strong_refs &&
		    !node->local_weak_refs && !node->tmp_refs) {
			if (proc) {
				binder_dequeue_work_ilocked(&node->work);
				rb_erase(&node->rb_node, &proc->nodes);
				binder_debug(BINDER_DEBUG_INTERNAL_REFS,
					     "refless node %d deleted\n",
					     node->debug_id);
			} else {
				BUG_ON(!list_empty(&node->work.entry));
				spin_lock(&binder_dead_nodes_lock);
				/*
				 * tmp_refs could have changed so
				 * check it again
				 */
				if (node->tmp_refs) {
					spin_unlock(&binder_dead_nodes_lock);
					return false;
				}
				hlist_del(&node->dead_node);
				spin_unlock(&binder_dead_nodes_lock);
				binder_debug(BINDER_DEBUG_INTERNAL_REFS,
					     "dead node %d deleted\n",
					     node->debug_id);
			}
			return true;
		}
	}
	return false;
}

binder_wakeup_proc_ilocked(proc) wakes up the process that owns the Binder awaiting release. That process had been blocked in binder_thread_read; in user space, its thread was blocked in talkWithDriver, called from IPCThreadState's getAndExecuteCommand. The binder_thread_read call now resumes.

kernel/msm-4.4/drivers/android/binder.c

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      binder_uintptr_t binder_buffer, size_t size,
			      binder_size_t *consumed, int non_block)
{
	......
	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

	if (ret)
		return ret;

	while (1) {
		uint32_t cmd;
		......
		w = binder_dequeue_work_head_ilocked(list);

		switch (w->type) {
		......
		case BINDER_WORK_NODE: {
			struct binder_node *node = container_of(w, struct binder_node, work);
			int strong, weak;
			binder_uintptr_t node_ptr = node->ptr;
			binder_uintptr_t node_cookie = node->cookie;
			int node_debug_id = node->debug_id;
			int has_weak_ref;
			int has_strong_ref;
			void __user *orig_ptr = ptr;

			BUG_ON(proc != node->proc);
             //strong is 1 if the node still has any strong references, 0 otherwise
			strong = node->internal_strong_refs ||
					node->local_strong_refs;
             //weak is 1 if the node still has any weak references, 0 otherwise
			weak = !hlist_empty(&node->refs) ||
					node->local_weak_refs ||
					node->tmp_refs || strong;
			has_strong_ref = node->has_strong_ref;
			has_weak_ref = node->has_weak_ref;

			......
			if (!ret && !strong && has_strong_ref)
                 //use the BR_RELEASE protocol to ask the owning process to drop a strong reference on the Binder local object
				ret = binder_put_node_cmd(
						proc, thread, &ptr, node_ptr,
						node_cookie, node_debug_id,
						BR_RELEASE, "BR_RELEASE");
			......
			if (ret)
				return ret;
		} break;
		......
		}

		......
                //write the protocol code
		if (put_user(cmd, (uint32_t __user *)ptr)) {
			if (t_from)
				binder_thread_dec_tmpref(t_from);
			return -EFAULT;
		}
		ptr += sizeof(uint32_t);
         //copy the data to user space: write the protocol code and payload into a user-space
         //buffer provided by the process that owns the Binder local object (the callback's
         //server; in our scenario, the client app), then return to that process's user space
		if (copy_to_user(ptr, &tr, sizeof(tr))) {
			if (t_from)
				binder_thread_dec_tmpref(t_from);
			return -EFAULT;
		}
		......
		break;
	}

......
	return 0;
}

The owning process (the callback's "server"; in our scenario, the client app that created the IProcessObserver) handles the protocol in IPCThreadState::executeCommand, invoked from getAndExecuteCommand():
frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        ......

        result = executeCommand(cmd);

        ......
    }

    return result;
}

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    //we sent BC_RELEASE earlier, so the matching reply protocol here is BR_RELEASE
    switch ((uint32_t)cmd) {
    ......
    case BR_RELEASE:
        refs = (RefBase::weakref_type*)mIn.readPointer();
        obj = (BBinder*)mIn.readPointer();
        ALOG_ASSERT(refs->refBase() == obj,
                   "BR_RELEASE: object %p does not match cookie %p (expected %p)",
                   refs, obj, refs->refBase());
        IF_LOG_REMOTEREFS() {
            LOG_REMOTEREFS("BR_RELEASE from driver on %p", obj);
            obj->printRefs();
        }
        mPendingStrongDerefs.push(obj);
        break;

    ......
    return result;
}

The server process is in no hurry to handle BR_DECREFS and BR_RELEASE: it caches them in IPCThreadState's member variables mPendingStrongDerefs and mPendingWeakDerefs, and only processes them just before the next BINDER_WRITE_READ ioctl into the Binder driver. Raising a Binder local object's reference count is urgent and must happen immediately, or the object could be destroyed prematurely; lowering it is not urgent (at worst the object lives a little longer), and deferring it lets the server handle more important work first. (Adapted from 罗升阳《Android系统源代码情景分析》.)
mPendingStrongDerefs is drained in processPendingDerefs, which runs inside the joinThreadPool loop.

frameworks/native/libs/binder/IPCThreadState.cpp

// When we've cleared the incoming command queue, process any pending derefs
void IPCThreadState::processPendingDerefs()
{
    if (mIn.dataPosition() >= mIn.dataSize()) {
        size_t numPending = mPendingWeakDerefs.size();
        if (numPending > 0) {
            for (size_t i = 0; i < numPending; i++) {
                RefBase::weakref_type* refs = mPendingWeakDerefs[i];
                refs->decWeak(mProcess.get());
            }
            mPendingWeakDerefs.clear();
        }

        numPending = mPendingStrongDerefs.size();
        if (numPending > 0) {
            for (size_t i = 0; i < numPending; i++) {
                BBinder* obj = mPendingStrongDerefs[i];
                obj->decStrong(mProcess.get());
            }
            mPendingStrongDerefs.clear();
        }
    }
}

Focus on obj->decStrong(mProcess.get()). Again, BBinder inherits from IBinder, IBinder inherits from RefBase, and decStrong comes from RefBase; the BBinder* obj here is actually a JavaBBinder.
system/core/libutils/RefBase.cpp

void RefBase::decStrong(const void* id) const
{
    weakref_impl* const refs = mRefs;
    refs->removeStrongRef(id); // no-op outside of debug builds
    //decrement the strong count
    const int32_t c = refs->mStrong.fetch_sub(1, std::memory_order_release);
    //c is the value before the decrement, so the count has just reached 0
    if (c == 1) {
        std::atomic_thread_fence(std::memory_order_acquire);
        refs->mBase->onLastStrongRef(id);
        int32_t flags = refs->mFlags.load(std::memory_order_relaxed);
        if ((flags&OBJECT_LIFETIME_MASK) == OBJECT_LIFETIME_STRONG) {
            //lifetime is governed by the strong count (the default), so delete the object, which runs its destructor
            delete this;
            // The destructor does not delete refs in this case.
        }
    }
    
    refs->decWeak(id);
}

JavaBBinder's destructor:
frameworks/base/core/jni/android_util_Binder.cpp

virtual ~JavaBBinder()
{
    ALOGV("Destroying JavaBBinder %p\n", this);
    android_atomic_dec(&gNumLocalRefs);
    JNIEnv* env = javavm_to_jnienv(mVM);
    env->DeleteGlobalRef(mObject);
}

env->DeleteGlobalRef(mObject) releases the global reference to mObject, i.e. the IProcessObserver Binder object the client passed in. From this point on, the IProcessObserver can finally be garbage-collected.
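To observe this from the app side, a rough probe might look like the following (a sketch: am stands for whatever IActivityManager handle you have, System.gc() is only a hint, and the hidden APIs make the result indicative at best):

// Rough collectability probe (reuses SafeProcessObserver from the sketch above).
SafeProcessObserver observer = new SafeProcessObserver(activity);
am.registerProcessObserver(observer);
am.unregisterProcessObserver(observer);

WeakReference<IProcessObserver> probe = new WeakReference<>(observer);
observer = null;
System.gc();

// Often still alive right after unregister: the server-side BinderProxy has
// not been finalized and the BC_RELEASE/BR_RELEASE round-trip has not
// completed, so the JavaBBinder's global ref still pins the object.
Log.d("BinderDemo", "collected = " + (probe.get() == null));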


The analysis of getStrongProxyForHandle and lookupHandleLocked draws on:
【1: 罗升阳《Android系统源代码情景分析》, Chapter 5, Section 5.7】
【2: Android ServiceManager 代理对象的获取 (its content also comes from 罗升阳's book)】
【3: Android系统进程间通信(IPC)机制Binder中的Client获得Server远程接口过程源代码分析, 罗升阳】
【4: Android Binder机制(四) defaultServiceManager()的实现】
【5: binder 情景分析 - service 的注册(上)】

Further reading:

Android跨进程内存泄漏 (this one contains a few analysis errors)

Binder之ServiceManager

Android:binder记录

图解Android - Binder 和 Service

【Android】从匿名服务的生命周期来研究binder

Android Java Binder 通信机制

Binder机制情景分析之深入驱动

Binder机制,从Java到C (5. IBinder对象传递形式)

Android Binder机制(十一) getService详解03之 请求的反馈

《深入理解Android(卷2)》笔记 6.第二章 深入理解Java Binder

干货 | 彻底理解ANDROID BINDER通信架构(上)

Andorid Binder进程间通信—Binder本地对象,实体对象,引用对象,代理对象的引用计数

android智能指针

理解Refbase强弱引用

Binder系列—开篇

Binder系列1—Binder Driver初探

Binder系列2—Binder Driver再探
