Understanding the Binder-related ProcessState and IPCThreadState in Android 5.0.

    This article is only a personal study note. Most of the content is drawn from other people's excellent write-ups; it is excerpted here and recorded in my own way of understanding.
    The articles I respect and recommend the most:
           http://blog.csdn.net/luoshengyang/article/details/6618363     Luo Shengyang's Binder series
           http://blog.csdn.net/innost/article/details/47208049          Innost's Binder walkthrough
           https://my.oschina.net/youranhongcha/blog/149575              Hou Liang's Binder series


1. ProcessState

         ProcessState is designed as a singleton, so a process runs its constructor only once, which in turn means that the binder device is opened only once per process (e.g. per server process). The construction is triggered the first time ProcessState::self() is called.
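         For reference, the singleton accessor looks roughly like this (a simplified sketch of ProcessState::self() from the Android 5.0 sources):

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;            // already constructed: reuse the process-wide instance
    }
    gProcess = new ProcessState;    // first call: run the constructor below, which opens /dev/binder
    return gProcess;
}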
   
   
ProcessState::ProcessState() // constructor
    : mDriverFD(open_driver())      // the binder device is opened in the initializer list
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
#if !defined(HAVE_WIN32_IPC)        // guard restored; the original excerpt kept only the matching #else/#endif
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        // Mapping the device file into the process's virtual address space is what lets
        // transaction data be read directly, without an extra copy.
        // BINDER_VM_SIZE = ((1*1024*1024) - (4096*2)), i.e. 1MB - 8KB
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            //......
        }
#else
        mDriverFD = -1;
#endif
    }
    //......
}
        Another interesting member of ProcessState is mHandleToObject, declared in the header as Vector<handle_entry> mHandleToObject. It is the table that records all of this process's BpBinder objects, and it is very important: in the Binder architecture an application process finds the BpBinder it needs through a "binder handle", and as this table shows, the handle value is simply an index into the vector.
    
    
            struct handle_entry {
                IBinder* binder;
                RefBase::weakref_type* refs;
            };
       The binder field is what records the BpBinder object.
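       To make the handle-to-BpBinder mapping concrete, below is a trimmed sketch of how ProcessState::getStrongProxyForHandle() uses this vector (simplified from the Android 5.0 sources; error handling and the special treatment of handle 0 are omitted):

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);

    // lookupHandleLocked() grows mHandleToObject until 'handle' is a valid index,
    // then returns &mHandleToObject.editItemAt(handle).
    handle_entry* e = lookupHandleLocked(handle);
    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            // No proxy cached for this handle yet: create a BpBinder that remembers
            // the handle value and cache it at index 'handle' of the vector.
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result = b;
            e->refs->decWeak(this);
        }
    }
    return result;
}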
  
2. IPCThreadState

              IPCThreadState is where the actual interaction with the Binder driver happens. It is a per-thread singleton, stored in the thread's local storage area. Because its constructor and destructor are declared private, as shown below, other classes cannot instantiate it directly; the instance is obtained through self().

private:
    IPCThreadState();
    ~IPCThreadState();
  
  
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {     // false the first time through
restart:
        const pthread_key_t k = gTLS;
        // TLS stands for Thread Local Storage (the C-level counterpart of Java's ThreadLocal).
        // Since pthread_getspecific() is used here, pthread_setspecific() must be called
        // somewhere else; it happens in the constructor below.
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }
    //......
}
          Constructor: shown below. The member to pay special attention to is mProcess, which records which process the current thread belongs to. Because ProcessState::self() returns the process-wide singleton, IPCThreadState gets hold of the file descriptor that was returned when the binder device was opened, and it is ultimately through this fd, via ioctl, that IPCThreadState talks to the driver.
  
  
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),   // crucial: grabs the process-wide ProcessState (and with it the binder fd)
      mMyThreadId(androidGetTid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);    // store this instance in TLS so that every thread gets its own IPCThreadState
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
         2.1  The rough flow when the client side of IPCThreadState sends a request to the driver:
                A proxy-side call such as BpServiceManager::addService() has no transport mechanism of its own; it relies on BpBinder. Every proxy therefore holds an mRemote, which is a BpBinder object, and calls its transact() method with a specific command code; the driver reacts according to that code. A typical proxy method is sketched below, followed by BpBinder::transact() itself.
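                As an illustration only (IFoo, BpFoo and DO_SOMETHING are hypothetical names, not from the Android sources), a proxy method usually packs its arguments into a Parcel and hands them to the BpBinder stored by the proxy via remote()->transact():

// Hypothetical proxy method of BpFoo : public BpInterface<IFoo>
status_t BpFoo::doSomething(int32_t value)
{
    Parcel data, reply;
    // The interface token lets the server-side onTransact() check who is calling.
    data.writeInterfaceToken(IFoo::getInterfaceDescriptor());
    data.writeInt32(value);
    // remote() returns the IBinder handed in at construction time; for a remote
    // service that is the BpBinder whose transact() is shown below.
    status_t err = remote()->transact(DO_SOMETHING /* hypothetical code */, data, &reply);
    if (err != NO_ERROR) return err;
    return reply.readInt32();
}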
  
  
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        // When a service registers itself via the service manager's addService(), control arrives
        // here with mHandle == 0 and code == ADD_SERVICE_TRANSACTION. The code is the parameter
        // passed in from above; mHandle was fixed when this BpBinder was constructed. In general a
        // proxy class looks like BpXXX : public BpInterface<IXXX>, and its constructor takes an
        // IBinder; for a cross-process call that IBinder is exactly this BpBinder. The BpInterface
        // template passes it on to BpRefBase, which is where the BpBinder ends up being stored.
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
             As can be seen, BpBinder ultimately hands the work to IPCThreadState: it fetches the calling thread's instance through IPCThreadState::self() and calls its transact(). Before looking at that function, the constructor chain hinted at in the comments above is worth a quick illustration.
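             Below is a trimmed sketch of how the BpBinder handed to a BpXXX proxy ends up stored as mRemote (simplified from IInterface.h and Binder.cpp; this is what makes remote() inside a proxy method return the BpBinder):

// Simplified: BpInterface passes the IBinder straight down to BpRefBase.
template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}

BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    // mRemote is what remote() returns, so remote()->transact(...) in a proxy method
    // lands in BpBinder::transact() with the handle chosen when the BpBinder was
    // created (handle 0 for the service manager).
}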
   
   
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    // flags arrives as 0 because BpBinder::transact() passes its default value.
    flags |= TF_ACCEPT_FDS;     // TF_ACCEPT_FDS = 0x10, defined in the kernel binder.h
    //......
    if (err == NO_ERROR) {
        // Prepare a struct binder_transaction_data, the payload that will be handed to the
        // Binder driver: handle, code and data are packed and written into this
        // IPCThreadState's mOut Parcel. This is where the send is staged.
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    //......
    // If TF_ONE_WAY is not set, a reply is expected.
    if ((flags & TF_ONE_WAY) == 0) {    // flags is 0x10 and TF_ONE_WAY is 0x01, so we do take this branch
        //......
        // Wait for the reply; use the caller's Parcel if provided, otherwise a temporary one.
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        //......
    } else {
        // One-way transactions end up here: no reply is awaited.
        err = waitForResponse(NULL, NULL);
    }
    return err;
}
          IPCThreadState::transact() thus does two main things. First, it packs the data to be transmitted into a binder_transaction_data structure (a fixed layout, which is what lets the driver parse it) and writes it into the Parcel mOut; this is the serialized data the current IPCThreadState will push to the driver. Second, the real exchange with the driver, via ioctl, happens inside waitForResponse(). The layout of binder_transaction_data is sketched below.
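          For reference, the structure filled in by writeTransactionData() looks roughly like this (paraphrased from the kernel uapi header linux/android/binder.h; the field comments are mine):

struct binder_transaction_data {
    union {
        __u32            handle;    // target handle of an outgoing BC_TRANSACTION
        binder_uintptr_t ptr;       // target binder pointer of an incoming BR_TRANSACTION
    } target;
    binder_uintptr_t cookie;        // local BBinder cookie, used on the receiving side
    __u32            code;          // transaction code, e.g. ADD_SERVICE_TRANSACTION
    __u32            flags;         // e.g. TF_ONE_WAY, TF_ACCEPT_FDS
    pid_t            sender_pid;
    uid_t            sender_euid;
    binder_size_t    data_size;     // size of the serialized Parcel data
    binder_size_t    offsets_size;  // size of the offsets array (positions of flat_binder_objects)
    union {
        struct {
            binder_uintptr_t buffer;    // pointer to the Parcel data (tr.data.ptr.buffer below)
            binder_uintptr_t offsets;   // pointer to the offsets array
        } ptr;
        __u8 buf[8];
    } data;
};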
  
  
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;     // the data that travels through the Binder driver
    // Fill in the structure.
    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;     // NOTE: this code goes inside binder_transaction_data; do not confuse it with cmd below
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        // For the addService() example, data.ipcData() holds roughly what was written by:
        //   writeInt32(IPCThreadState::self()->getStrictModePolicy() | STRICT_MODE_PENALTY_GATHER);
        //   writeString16("android.os.IServiceManager");
        //   writeString16("media.player");
        //   writeStrongBinder(new MediaPlayerService());
        tr.data.ptr.buffer = data.ipcData();        // the actual payload to transmit
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t); // size of the offsets array
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        //......
    } else {
        return (mLastError = err);
    }
    // Everything sent to the driver eventually goes through the Parcel mOut.
    mOut.writeInt32(cmd);       // cmd is BC_TRANSACTION; the driver reads it back in binder_thread_write()
    mOut.write(&tr, sizeof(tr));
    return NO_ERROR;
}
          Do not confuse the cmd written here with the code that was passed down from BpXXXX: cmd is the command used when talking to the binder device file through ioctl, and the driver defines how each command is handled. The writeStrongBinder() shown above is also crucial and will be analysed later. Next, look at waitForResponse().
  
  
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        // (In the addService example, quoting the referenced articles:) on one pass, when both
        // bwr.write_size and bwr.read_size are 0, talkWithDriver() does nothing and returns here,
        // where the next integer read from mIn is BR_TRANSACTION_COMPLETE. On the following pass
        // bwr.write_size is 0 but bwr.read_size is not, so binder_ioctl() goes straight into
        // binder_thread_read(); there thread->transaction_stack != 0 and thread->todo is empty,
        // so the thread sleeps in
        //     wait_event_interruptible(thread->wait, binder_has_thread_work(thread))
        // waiting for the (already awakened) Service Manager to wake it up with the reply.
        //
        // talkWithDriver() is the call that actually exchanges data with the Binder driver.
        if ((err=talkWithDriver()) < NO_ERROR) break;
        // Back from the driver: mOut has been sent, and whatever the driver returned is now in mIn.
        err = mIn.errorCheck();     // in the addService flow the first value read from mIn is BR_NOOP, a no-op
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        cmd = mIn.readInt32();      // read the command the driver placed in the read buffer
        //......
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);      // note: incoming requests such as BR_TRANSACTION are handled here
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    //......
    return err;
}
            What we care about here is the sending path, i.e. talkWithDriver(). Keep in mind the member mProcess->mDriverFD: it is through this fd that the exchange with the binder device file finally happens. The binder_write_read structure wraps the data exchanged with the driver one more time; as the code below shows, everything ultimately goes through ioctl, and the Binder driver then dispatches on the individual cmd values it finds inside. The structure itself is sketched next.
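            For reference, binder_write_read is a small fixed structure (paraphrased from the kernel uapi header; the comments are mine):

struct binder_write_read {
    binder_size_t    write_size;      // number of bytes to write (from mOut)
    binder_size_t    write_consumed;  // filled in by the driver: how much it consumed
    binder_uintptr_t write_buffer;    // points at mOut.data()
    binder_size_t    read_size;       // capacity available for the driver to write back (into mIn)
    binder_size_t    read_consumed;   // filled in by the driver: how much it wrote back
    binder_uintptr_t read_buffer;     // points at mIn.data()
};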
  
  
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    // binder_write_read is the structure used to exchange data with the binder device.
    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    // First fill in the write side with the data to be sent to the driver.
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();  // important: the cmd + binder_transaction_data written earlier end up in this buffer

    // This is what we'll read.
    if (doReceive && needRead) {
        // Fill in the receive buffer: whatever comes back lands directly in mIn.
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    //......
    // Return immediately if there is nothing to do.
    status_t err;
    do {
        //......
#if defined(HAVE_ANDROID_OS)
        // Communication with the binder device is done through ioctl, not read/write.
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)   // the actual exchange with the driver
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        //......
    } while (err == -EINTR);    // retry only if the ioctl was interrupted by a signal
    //......
    if (err >= NO_ERROR) {
        // Back from the driver: the bookkeeping is in bwr, and the reply data itself
        // was written into the buffer provided by mIn.
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);    // clear mOut once the driver has consumed it, ready for the next call
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);     // record how much the driver wrote into mIn
            mIn.setDataPosition(0);
        }
        //......
        return NO_ERROR;
    }
    // Control then returns to waitForResponse(); for addService this completes the send side of the flow.
    return err;
}
         Note that bwr adds one more layer of wrapping: the cmd plus the binder_transaction_data that were written into mOut become the write buffer via bwr.write_buffer = (uintptr_t)mOut.data(), and mIn supplies the receive buffer via bwr.read_buffer = (uintptr_t)mIn.data().

3. The ProcessState::self()->startThreadPool() and IPCThreadState::self()->joinThreadPool() operations
     3.1  ProcessState::startThreadPool()
   
   
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
        As shown, it simply calls spawnPooledThread(true).
  
  
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        // Create a pool thread and run it, much like a Java Thread.
        sp<Thread> t = new PoolThread(isMain);  // PoolThread derives from Thread
        t->run(name.string());                  // PoolThread::run is really the base class Thread::run
    }
}
       Thread::run() eventually calls the subclass's threadLoop(), which here is PoolThread::threadLoop():
  
  
virtual bool threadLoop()
{
    // mIsMain is true here because spawnPooledThread(true) was called.
    IPCThreadState::self()->joinThreadPool(mIsMain);
    return false;
}
      So this path also ends in IPCThreadState::self()->joinThreadPool(). The standard sequence on the native side is:
          ProcessState::self()->startThreadPool();
          IPCThreadState::self()->joinThreadPool();
      In both cases the parameter isMain is true, meaning the Binder thread was created by the application itself rather than at the request of the Binder driver. Note also that the header declares void joinThreadPool(bool isMain = true), so the default is true. A minimal server built on this pattern is sketched below, after which the implementation of IPCThreadState::joinThreadPool() follows.
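      A minimal sketch of the usual native service entry point (FooService and the service name "foo" are hypothetical; the real pattern can be seen in entry points such as main_mediaserver.cpp):

// Hypothetical native service entry point showing the standard boilerplate.
#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>

using namespace android;

int main(int /*argc*/, char** /*argv*/)
{
    // Opens /dev/binder and mmaps it (the ProcessState constructor runs exactly once here).
    sp<ProcessState> proc(ProcessState::self());

    // Register our service with the service manager (FooService is a hypothetical BnFoo subclass).
    defaultServiceManager()->addService(String16("foo"), new FooService());

    // Spawn the first pooled Binder thread...
    ProcessState::self()->startThreadPool();
    // ...and turn the main thread into a Binder thread too; this call loops in
    // talkWithDriver()/executeCommand() and normally never returns.
    IPCThreadState::self()->joinThreadPool();
    return 0;
}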
    
    
void IPCThreadState::joinThreadPool(bool isMain)
{
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    set_sched_policy(mMyThreadId, SP_FOREGROUND);
    status_t result;
    do {    // loop here, waiting for work
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();    // reads the next command from the driver and executes it

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ......
        }
        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);
    ......
    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
         The function ends up in an effectively endless loop, interacting with the Binder driver through talkWithDriver(): in practice it waits inside talkWithDriver() for client requests and then handles them via executeCommand(). Inside getAndExecuteCommand(), it is ultimately BBinder::transact() that really processes a client request.
       
  
  
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        result = executeCommand(cmd);

        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool. The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace. Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}
        As shown, this ends up in executeCommand(int32_t cmd).
  
  
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    switch (cmd) {
    ......
    default:
        printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }

    if (result != NO_ERROR) {
        mLastError = result;
    }
    return result;
}
       An incoming request falls into the BR_TRANSACTION case, which is pulled out below for a closer look.
   
   
case BR_TRANSACTION:    // an incoming Binder transaction
    {
        binder_transaction_data tr;
        result = mIn.read(&tr, sizeof(tr));
        ALOG_ASSERT(result == NO_ERROR,
            "Not enough command data for brTRANSACTION");
        if (result != NO_ERROR) break;

        Parcel buffer;
        buffer.ipcSetDataReference(
            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
            tr.data_size,
            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
            tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

        const pid_t origPid = mCallingPid;
        const uid_t origUid = mCallingUid;
        const int32_t origStrictModePolicy = mStrictModePolicy;
        const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

        mCallingPid = tr.sender_pid;
        mCallingUid = tr.sender_euid;
        mLastTransactionBinderFlags = tr.flags;

        int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
        if (gDisableBackgroundScheduling) {
            if (curPrio > ANDROID_PRIORITY_NORMAL) {
                // We have inherited a reduced priority from the caller, but do not
                // want to run in that state in this process. The driver set our
                // priority already (though not our scheduling class), so bounce
                // it back to the default before invoking the transaction.
                setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
            }
        } else {
            if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                // We want to use the inherited priority from the caller.
                // Ensure this thread is in the background scheduling class,
                // since the driver won't modify scheduling classes for us.
                // The scheduling group is reset to default by the caller
                // once this method returns after the transaction is complete.
                set_sched_policy(mMyThreadId, SP_BACKGROUND);
            }
        }

        //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);
        // The command read above was BR_TRANSACTION; its payload has been read into buffer.
        Parcel reply;
        status_t error;
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_TRANSACTION thr " << (void*)pthread_self()
                 << " / obj " << tr.target.ptr << " / code "
                 << TypeCode(tr.code) << ": " << indent << buffer
                 << dedent << endl
                 << "Data addr = "
                 << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
                 << ", offsets addr="
                 << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
        }
        if (tr.target.ptr) {
            sp<BBinder> b((BBinder*)tr.cookie);     // note: binder_transaction_data.cookie carries the local BBinder pointer
            error = b->transact(tr.code, buffer, &reply, tr.flags);
        } else {
            // the_context_object is a global defined in IPCThreadState.cpp,
            // set through setTheContextObject().
            error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
        }

        //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
        //     mCallingPid, origPid, origUid);
        if ((tr.flags & TF_ONE_WAY) == 0) {
            LOG_ONEWAY("Sending reply to %d!", mCallingPid);
            if (error < NO_ERROR) reply.setError(error);
            sendReply(reply, 0);
        } else {
            LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
        }

        mCallingPid = origPid;
        mCallingUid = origUid;
        mStrictModePolicy = origStrictModePolicy;
        mLastTransactionBinderFlags = origTransactionBinderFlags;

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                 << tr.target.ptr << ": " << indent << reply << dedent << endl;
        }
    }
    break;
        Next, the implementation of BBinder::transact():
  
  
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
    case PING_TRANSACTION:
        reply->writeInt32(pingBinder());
        break;
    default:
        err = onTransact(code, data, reply, flags);     // virtual dispatch: whichever subclass was instantiated handles it
        break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}
        Control then reaches the subclass's onTransact(). In other words, once IPCThreadState has received a request from a client, it calls BBinder::transact() with the unpacked arguments, and BBinder::transact() ends up in the subclass's onTransact() (for example BnMediaPlayerService::onTransact()), which is where the client's request is really handled. A generic server-side dispatcher is sketched below.
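        To round off the flow, here is a minimal sketch of what a server-side onTransact() typically looks like (BnFoo, IFoo, DO_SOMETHING and doSomething() are hypothetical names matching the proxy sketch earlier, not code from the Android sources):

// Hypothetical server-side stub: BnFoo : public BnInterface<IFoo>
status_t BnFoo::onTransact(uint32_t code, const Parcel& data,
                           Parcel* reply, uint32_t flags)
{
    switch (code) {
        case DO_SOMETHING: {                        // the same code the proxy passed to transact()
            CHECK_INTERFACE(IFoo, data, reply);     // verify the interface token written by the proxy
            int32_t value = data.readInt32();       // unmarshal arguments in the order the proxy wrote them
            int32_t result = doSomething(value);    // call into the real implementation
            reply->writeInt32(result);              // marshal the reply; it travels back via BC_REPLY
            return NO_ERROR;
        }
        default:
            // Unknown codes fall through to the base class (which handles PING, DUMP, etc.).
            return BBinder::onTransact(code, data, reply, flags);
    }
}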
            
     
         
