OpenHarmony's Binder C++ Framework

Development Steps

  1. Add dependencies

    SDK dependencies:

    # IPC scenario
    external_deps = [
      "ipc:ipc_single",
    ]
    
    # RPC scenario
    external_deps = [
      "ipc:ipc_core",
    ]
    

    In addition, the refbase implementation that IPC/RPC depends on lives in the common utility library; add a dependency on utils as well:

    external_deps = [
     "c_utils:utils",
    ]
    
  2. Define the IPC interface ITestAbility

    The SA interface inherits from the IPC base interface IRemoteBroker. It defines the descriptor, the business functions, and the message codes; the business functions must be implemented on both the Proxy side and the Stub side.

    #include "iremote_broker.h"
    
    //定义消息码
    const int TRANS_ID_PING_ABILITY = 5
    
    const std::string DESCRIPTOR = "test.ITestAbility";
    
    class ITestAbility : public IRemoteBroker {
    public:
        // DECLARE_INTERFACE_DESCRIPTOR 是必需的,入参需使用 std::u16string。在 IRemoteStub 构造函数中会调用此元接口,初始化父类
        DECLARE_INTERFACE_DESCRIPTOR(to_utf16(DESCRIPTOR));
        virtual int TestPingAbility(const std::u16string &dummy) = 0; // 定义业务函数
    };
    
  3. Define and implement the server-side TestAbilityStub

    This class is the IPC-framework-facing part of the implementation and must inherit IRemoteStub. As the end that receives requests, the Stub overrides OnRemoteRequest to handle calls from the client.

    #include "iability_test.h"
    #include "iremote_stub.h"
    
    class TestAbilityStub : public IRemoteStub<ITestAbility> {
    public:
        virtual int OnRemoteRequest(uint32_t code, MessageParcel &data, MessageParcel &reply, MessageOption &option) override;
        int TestPingAbility(const std::u16string &dummy) override;
    };
    
    int TestAbilityStub::OnRemoteRequest(uint32_t code,
        MessageParcel &data, MessageParcel &reply, MessageOption &option)
    {
        switch (code) {
            case TRANS_ID_PING_ABILITY: {
                std::u16string dummy = data.ReadString16();
                int result = TestPingAbility(dummy);
                reply.WriteInt32(result);
                return 0;
            }
            default:
                return IPCObjectStub::OnRemoteRequest(code, data, reply, option);
        }
    }
    
  4. Define TestAbility, the concrete server-side implementation of the business functions

    #include "iability_server_test.h"
    
    class TestAbility : public TestAbilityStub {
    public:
        int TestPingAbility(const std::u16string &dummy);
    };
    
    int TestAbility::TestPingAbility(const std::u16string &dummy) {
        return 0;
    }
    
  5. Define and implement the client-side TestAbilityProxy

    This class is the Proxy-side implementation. It inherits IRemoteProxy and calls SendRequest to send requests to the Stub side, exposing the server's capabilities to callers.

    #include "iability_test.h"
    #include "iremote_proxy.h"
    #include "iremote_object.h"
    
    class TestAbilityProxy : public IRemoteProxy<ITestAbility> {
    public:
        explicit TestAbilityProxy(const sptr<IRemoteObject> &impl);
        int TestPingAbility(const std::u16string &dummy) override;
    private:
        static inline BrokerDelegator<TestAbilityProxy> delegator_; // enables later use of the iface_cast macro
    };
    
    TestAbilityProxy::TestAbilityProxy(const sptr<IRemoteObject> &impl)
        : IRemoteProxy<ITestAbility>(impl)
    {
    }
    
    int TestAbilityProxy::TestPingAbility(const std::u16string &dummy){
        MessageOption option;
        MessageParcel dataParcel, replyParcel;
        dataParcel.WriteString16(dummy);
        int error = Remote()->SendRequest(TRANS_ID_PING_ABILITY, dataParcel, replyParcel, option);
        int result = (error == ERR_NONE) ? replyParcel.ReadInt32() : -1;
        return result;
    }
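
    For fire-and-forget calls the proxy can send one-way messages. A minimal sketch, assuming a hypothetical TestPingAbilityAsync method added to the interface: constructing MessageOption with TF_ASYNC makes SendRequest return without waiting for the Stub, and replyParcel stays empty.

    int TestAbilityProxy::TestPingAbilityAsync(const std::u16string &dummy)
    {
        MessageOption option(MessageOption::TF_ASYNC); // one-way: do not wait for a reply
        MessageParcel dataParcel, replyParcel;
        dataParcel.WriteString16(dummy);
        return Remote()->SendRequest(TRANS_ID_PING_ABILITY, dataParcel, replyParcel, option);
    }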
    
  6. Register and start the SA

    The SA registers its TestAbilityStub instance with the SystemAbilityManager through the AddSystemAbility interface; the registration parameters differ for in-device and distributed SAs.

    // register on the local device
    auto samgr = SystemAbilityManagerClient::GetInstance().GetSystemAbilityManager();
    samgr->AddSystemAbility(saId, new TestAbility());
    
    // in a networked scenario, the SA is synchronized to the other devices
    auto samgr = SystemAbilityManagerClient::GetInstance().GetSystemAbilityManager();
    ISystemAbilityManager::SAExtraProp saExtra;
    saExtra.isDistributed = true; // mark as a distributed SA
    int result = samgr->AddSystemAbility(saId, new TestAbility(), saExtra);
    
  7. Obtain and invoke the SA

    Use the SystemAbilityManager's GetSystemAbility method to obtain the IRemoteObject proxy of the target SA, then construct a TestAbilityProxy from it.

    // get the proxy of an SA registered on the local device
    sptr<ISystemAbilityManager> samgr = SystemAbilityManagerClient::GetInstance().GetSystemAbilityManager();
    sptr<IRemoteObject> remoteObject = samgr->GetSystemAbility(saId);
    sptr<ITestAbility> testAbility = iface_cast<ITestAbility>(remoteObject); // use the iface_cast macro to convert to the concrete type
    
    // get the proxy of an SA registered on another device
    sptr<ISystemAbilityManager> samgr = SystemAbilityManagerClient::GetInstance().GetSystemAbilityManager();
    
    // networkId identifies the target device in the networked scenario; it can be obtained via GetLocalNodeDeviceInfo
    sptr<IRemoteObject> remoteObject = samgr->GetSystemAbility(saId, networkId);
    sptr<TestAbilityProxy> proxy(new TestAbilityProxy(remoteObject)); // construct the concrete proxy directly
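
    With a proxy in hand, the business call is an ordinary virtual call. A minimal usage sketch, using the testAbility obtained in the local-device example above (the distributed proxy works the same way):

    if (testAbility != nullptr) {
        int result = testAbility->TestPingAbility(u"ping"); // marshalled and dispatched to the Stub
    }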
    

How It Works

Inheritance relationships of the classes involved:

[Figure: class inheritance diagram]

The classes and their roles:

• RefBase: reference counting for smart pointers.
• Parcelable: marks an object as serializable; its subclasses can be transferred across processes over Binder. The companion Parcel class holds the data itself.
• IRemoteObject: parent class of IPC objects; defines the SendRequest interface and is never instantiated directly.
• IPCObjectProxy: the proxy class, instantiated by the Binder framework.
• IRemoteProxy: holds a proxy instance; serves as the parent class of the business class.
• IPCObjectStub: the stub class; objects are created through its subclasses.
• IRemoteStub: subclass of IPCObjectStub; serves as the parent class of the business class.
• IPCThreadSkeleton: thread-level singleton, mainly used to obtain the IRemoteInvoker object.
• InvokerFactory: factory class used to create IRemoteInvoker objects.
• IRemoteInvoker: abstract remote-invoker class, with two implementations depending on the protocol.
• BinderInvoker: parses Binder communication.
• BinderConnector: singleton responsible for talking to the Binder driver.
• IPCWorkThread: starts a Binder worker thread.
• IPCWorkThreadPool: thread pool that manages all Binder worker threads.
• IPCProcessSkeleton: singleton responsible for spawning threads.
• IRemoteBroker: parent class of the business class; assists type conversion, turning an IRemoteProxy object into a business-class object.
• BrokerDelegator: defined as a static member in the proxy-side business class; creates the business-class object from an IRemoteProxy object.

Sending a Message

Call sequence diagram:

[Figure: sequence diagram for sending a message]

When the proxy side sends a message, it ends up in BinderInvoker's SendRequest function:

int BinderInvoker::SendRequest(int handle, uint32_t code, MessageParcel &data, MessageParcel &reply,
    MessageOption &option)
{
    ...
    if (!WriteTransaction(BC_TRANSACTION, flags, handle, code, data, nullptr)) {
        ...
        return IPC_INVOKER_WRITE_TRANS_ERR;
    }

    if ((flags & TF_ONE_WAY) != 0) {
        error = WaitForCompletion(nullptr);
    } else {
        error = WaitForCompletion(&reply);
    }
    ...
    return error;
}

A Binder transaction is written to the driver with the BINDER_WRITE_READ ioctl command. The command's argument is a binder_write_read struct, which holds pointers to the write and read buffers.
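
For reference, here is the struct roughly as the kernel UAPI header declares it (abridged; the comments are mine). The diagram below shows how the two buffers are laid out.

struct binder_write_read {
    binder_size_t    write_size;     /* bytes available in write_buffer */
    binder_size_t    write_consumed; /* bytes the driver consumed */
    binder_uintptr_t write_buffer;
    binder_size_t    read_size;      /* capacity of read_buffer */
    binder_size_t    read_consumed;  /* bytes the driver filled in */
    binder_uintptr_t read_buffer;
};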

           |   binder_write_read   |
               /               \
|     write_buffer     |     read_buffer     |
| cmd |      data      | cmd |     data      |
              /                      \
         | user data|           | user data|
  • cmd: a 4-byte command; the sending side uses macros prefixed BC_, the receiving side macros prefixed BR_.
  • data: when cmd is BC_TRANSACTION/BR_TRANSACTION/BC_REPLY/BR_REPLY, this part is described by binder_transaction_data (sketched below); the data actually crossing the process boundary is referenced through a pointer into a separate piece of memory, so writing the command itself copies no payload.
  • user data: the function arguments, i.e. the payload actually transferred across processes.
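
For orientation, binder_transaction_data from the kernel UAPI header looks roughly like this (abridged; comments mine):

struct binder_transaction_data {
    union {
        __u32            handle; /* target handle, filled in by the sender */
        binder_uintptr_t ptr;    /* target object, filled in for the receiver */
    } target;
    binder_uintptr_t cookie;     /* for local objects: pointer to the stub */
    __u32 code;                  /* transaction code, e.g. TRANS_ID_PING_ABILITY */
    __u32 flags;                 /* e.g. TF_ONE_WAY, TF_ACCEPT_FDS */
    __kernel_pid_t   sender_pid;
    __kernel_uid32_t sender_euid;
    binder_size_t data_size;     /* bytes in the data area */
    binder_size_t offsets_size;  /* bytes in the offset area */
    union {
        struct {
            binder_uintptr_t buffer;  /* points at the user data */
            binder_uintptr_t offsets; /* points at the object-offset records */
        } ptr;
        __u8 buf[8];
    } data;
};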

The flattened user data is organized in memory as follows:

offs0          data0
+--------------+----------------------------------------------------------+
|i|j|..........|.....| flat_binder_object |......| flat_binder_object |...|
+--------------+----------------------------------------------------------+
|<-offs_avail->|<-i->|                           |                        |
               |<----------------j-------------->|                        |
               |<-----------------------data_avail----------------------->|
  • offs0: the offset record area; each record occupies sizeof(binder_size_t) bytes.
  • data0: the payload area, holding both plain data and flat_binder_object instances.
  • i: means a flat_binder_object is stored at address data0+i.
  • offs_avail: the number of entries in the offset area.
  • data_avail: the number of bytes in the data area.

Why store the offsets of each flat_binder_object separately? Because the driver must process every flat_binder_object but cannot tell which bytes are plain data and which form a flat_binder_object, their offsets within the data area have to be recorded. All other data is handled purely in user space: the sender and receiver simply read it back in the agreed order.
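
Seen from the Parcel API, only remote objects add entries to the offset area. A minimal sketch (testStub is assumed to be a sptr<IRemoteObject> pointing at some stub):

MessageParcel data;
data.WriteInt32(42);              // plain data: goes into the data area only
data.WriteString16(u"payload");   // plain data: goes into the data area only
data.WriteRemoteObject(testStub); // writes a flat_binder_object and records its offset
// data.GetOffsetsSize() is now 1, and data.GetObjectOffsets() points at the
// offset entry that WriteTransaction below hands to the driver.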

Writing the flattened data into the send buffer:

bool BinderInvoker::WriteTransaction(int cmd, uint32_t flags, int32_t handle, uint32_t code, const MessageParcel &data,
    const int32_t *status)
{
    binder_transaction_data tr {};
    tr.target.handle = (uint32_t)handle;
    tr.code = code;
    tr.flags = flags;
    tr.flags |= TF_ACCEPT_FDS;
    if (data.GetDataSize() > 0) {
        // Send this parcel's data through the binder.
        tr.data_size = data.GetDataSize();
        tr.data.ptr.buffer = (binder_uintptr_t)data.GetData();
        tr.offsets_size = data.GetOffsetsSize() * sizeof(binder_size_t);
        tr.data.ptr.offsets = data.GetObjectOffsets();
    } else if (status != nullptr) {
        // Send this parcel's status through the binder.
        tr.flags |= TF_STATUS_CODE;
        tr.data_size = sizeof(int32_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(status);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    }

    if (!output_.WriteInt32(cmd)) {
        ZLOGE(LABEL, "WriteTransaction Command failure");
        return false;
    }
    return output_.WriteBuffer(&tr, sizeof(binder_transaction_data));
}

Sending the data with the BINDER_WRITE_READ command:

int BinderInvoker::TransactWithDriver(bool doRead)
{
    if ((binderConnector_ == nullptr) || (!binderConnector_->IsDriverAlive())) {
        ZLOGE(LABEL, "%{public}s: Binder Driver died", __func__);
        return IPC_INVOKER_CONNECT_ERR;
    }

    binder_write_read bwr;
    const bool readAvail = input_.GetReadableBytes() == 0;
    const size_t outAvail = (!doRead || readAvail) ? output_.GetDataSize() : 0;

    bwr.write_size = (binder_size_t)outAvail;
    bwr.write_buffer = output_.GetData();

    if (doRead && readAvail) {
        bwr.read_size = input_.GetDataCapacity();
        bwr.read_buffer = input_.GetData();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) {
        return ERR_NONE;
    }

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    int error = binderConnector_->WriteBinder(BINDER_WRITE_READ, &bwr);
    if (bwr.write_consumed > 0) {
        if (bwr.write_consumed < output_.GetDataSize()) {
            // we still have some bytes not been handled.
        } else {
            output_.FlushBuffer();
        }
    }
    if (bwr.read_consumed > 0) {
        input_.SetDataSize(bwr.read_consumed);
        input_.RewindRead(0);
    }
    if (error != ERR_NONE) {
        ZLOGE(LABEL, "TransactWithDriver result = %{public}d", error);
    }

    return error;
}

This calls the driver's ioctl interface to move the data into the kernel, which then copies it into the target process's shared memory region.

int BinderConnector::WriteBinder(unsigned long request, void *value)
{
    int err = -EINTR;

    while (err == -EINTR) {
        if (ioctl(driverFD_, request, value) >= 0) {
            err = ERR_NONE;
        } else {
            err = -errno;
        }
        ...
    }

    return err;
}

Handling the server's reply message:

int BinderInvoker::HandleReply(MessageParcel *reply)
{
    const size_t readSize = sizeof(binder_transaction_data);
    const uint8_t *buffer = input_.ReadBuffer(readSize);
    if (buffer == nullptr) {
        ZLOGE(LABEL, "HandleReply read tr failed");
        return IPC_INVOKER_INVALID_DATA_ERR;
    }
    const binder_transaction_data *tr = reinterpret_cast<const binder_transaction_data *>(buffer);
    ...
    if (tr->data_size > 0) {
        ...
        reply->ParseFrom(tr->data.ptr.buffer, tr->data_size);
    }
    ...
    return ERR_NONE;
}

This extracts the payload so that it can be unflattened later.

Receiving a Message

Sequence diagram for receiving a message:

[Figure: message-receiving sequence diagram]

First, the service thread must join the Binder loop:

void BinderInvoker::JoinThread(bool initiative)
{
    isMainWorkThread = initiative;
    output_.WriteUint32(initiative ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    StartWorkLoop();
    output_.WriteUint32(BC_EXIT_LOOPER);
    // pass in nullptr directly
    FlushCommands(nullptr);
    ZLOGE(LABEL, "Current Thread %d is leaving", getpid());
}

Writing BC_ENTER_LOOPER or BC_REGISTER_LOOPER tells the driver that the current thread is a service thread (one capable of servicing transactions); the thread then loops, reading data from the driver and handling messages from the proxy side.

void BinderInvoker::StartWorkLoop()
{
    int error;
    do {
        error = TransactWithDriver();
        if (error < ERR_NONE && error != -ECONNREFUSED && error != -EBADF) {
            ZLOGE(LABEL, "returned unexpected error %d, aborting", error);
            break;
        }
        uint32_t cmd = input_.ReadUint32();
        int userError = HandleCommands(cmd);
        if ((userError == -ERR_TIMED_OUT || userError == IPC_INVOKER_INVALID_DATA_ERR) && !isMainWorkThread) {
            break;
        }
    } while (error != -ECONNREFUSED && error != -EBADF && !stopWorkThread);
}

When the proxy side sends a message with the BC_TRANSACTION command, the server receives a BR_TRANSACTION command and handles it in the OnTransaction function.

void BinderInvoker::OnTransaction(const uint8_t *buffer)
{
    const binder_transaction_data *tr = reinterpret_cast<const binder_transaction_data *>(buffer);
    ...
    if (tr->target.ptr != 0) {
        auto *refs = reinterpret_cast<RefCounter *>(tr->target.ptr);
        int count = 0;
        if ((refs != nullptr) && (tr->cookie) && (refs->AttemptIncStrongRef(this, count))) {
            auto *targetObject = reinterpret_cast<IPCObjectStub *>(tr->cookie);
            if (targetObject != nullptr) {
                error = targetObject->SendRequest(tr->code, *data, reply, option);
                service = Str16ToStr8(targetObject->GetObjectDescriptor());
                targetObject->DecStrongRef(this);
            }
        }
    } else { // system ability manager (samgr) case
        auto targetObject = IPCProcessSkeleton::GetCurrent()->GetRegistryObject(); // get the service manager
        if (targetObject == nullptr) {
            ZLOGE(LABEL, "Invalid samgr stub object");
        } else {
            error = targetObject->SendRequest(tr->code, *data, reply, option);
        }
        service = "samgr";
    }
    ...
    if (!(flagValue & TF_ONE_WAY)) {
        SendReply(reply, 0, error);
    }
    ...
}

samgr source: foundation/systemabilitymgr/samgr/services/samgr/native/source/main.cpp

cookie stores a pointer to the business object; its parent class's SendRequest function is invoked:

int IPCObjectStub::SendRequest(uint32_t code, MessageParcel &data, MessageParcel &reply, MessageOption &option)
{
    int result = ERR_NONE;
    switch (code) {
        case PING_TRANSACTION: {
            ...
            break;
        }
        case INTERFACE_TRANSACTION: {
            ...
            break;
        }
        ...
        default:
            result = OnRemoteRequest(code, data, reply, option);
            break;
    }

    return result;
}

SendRequest implements the handling of generic commands; business-specific codes fall through to the subclass's OnRemoteRequest implementation.

For a synchronous message, a reply must also be sent back to the client.

int BinderInvoker::SendReply(MessageParcel &reply, uint32_t flags, int32_t result)
{
    int error = WriteTransaction(BC_REPLY, flags, -1, 0, reply, &result);
    if (error < ERR_NONE) {
        return error;
    }

    return WaitForCompletion();
}

Note that the reply is sent with handle = -1 and code = 0; the kernel finds the target thread from the transaction stack.

The Message Loop

Sequence diagram for starting the message loop:

[Figure: message-loop startup sequence diagram]

Only service objects need a message loop. As the class diagram above shows, IPCObjectStub is a subclass of RefBase, so the first time it is strongly referenced by a smart pointer, its OnFirstStrongRef function is called.

void IPCObjectStub::OnFirstStrongRef(const void *objectId)
{
    IPCProcessSkeleton *current = IPCProcessSkeleton::GetCurrent();

    if (current != nullptr) {
        current->AttachObject(this);
    }
}

IPCProcessSkeleton follows the singleton pattern; GetCurrent returns the unique instance.

IPCProcessSkeleton *IPCProcessSkeleton::GetCurrent()
{
    if (instance_ == nullptr) {
        std::lock_guard<std::mutex> lockGuard(procMutex_);
        if (instance_ == nullptr) {
            IPCProcessSkeleton *temp = new (std::nothrow) IPCProcessSkeleton();
            if (temp == nullptr) {
                ZLOGE(LOG_LABEL, "create IPCProcessSkeleton object failed");
                return nullptr;
            }
            if (temp->SetMaxWorkThread(DEFAULT_WORK_THREAD_NUM)) {
                temp->SpawnThread(IPCWorkThread::SPAWN_ACTIVE);
            }
            instance_ = temp;
        }
    }

    return instance_;
}

Instantiating IPCProcessSkeleton also sets the size of the Binder thread pool and spawns the first Binder thread.

bool IPCWorkThreadPool::SpawnThread(int policy, int proto)
{
    std::lock_guard<std::mutex> lock(mutex_);
    if (!(proto == IRemoteObject::IF_PROT_DEFAULT && idleThreadNum_ > 0) &&
        !(proto == IRemoteObject::IF_PROT_DATABUS && idleSocketThreadNum_ > 0)) {
        return false;
    }
    std::string threadName = MakeThreadName(proto);
    ZLOGD(LOG_LABEL, "SpawnThread Name= %{public}s", threadName.c_str());

    if (threads_.find(threadName) == threads_.end()) {
        auto ipcThread = new (std::nothrow) IPCWorkThread(threadName);
        if (ipcThread == nullptr) {
            ZLOGE(LOG_LABEL, "create IPCWorkThread object failed");
            return false;
        }
        sptr<IPCWorkThread> newThread = sptr<IPCWorkThread>(ipcThread);
        threads_[threadName] = newThread;
        if (proto == IRemoteObject::IF_PROT_DEFAULT) {
            idleThreadNum_--;
            ZLOGD(LOG_LABEL, "SpawnThread, now idleThreadNum_ =%d", idleThreadNum_);
        }
        if (proto == IRemoteObject::IF_PROT_DATABUS) {
            idleSocketThreadNum_--;
            ZLOGD(LOG_LABEL, "SpawnThread, now idleSocketThreadNum_ =%d", idleSocketThreadNum_);
        }
        newThread->Start(policy, proto, threadName);
        return true;
    }
    return false;
}

Starting the thread:

void IPCWorkThread::Start(int policy, int proto, std::string threadName)
{
    policy_ = policy;
    proto_ = proto;
    threadName_ = threadName;
    pthread_t threadId;
    int ret = pthread_create(&threadId, NULL, &IPCWorkThread::ThreadHandler, this);
    if (ret != 0) {
        ZLOGE(LOG_LABEL, "create thread failed");
    }
    ZLOGD(LOG_LABEL, "create thread, policy=%d, proto=%d", policy, proto);
    if (pthread_detach(threadId) != 0) {
        ZLOGE(LOG_LABEL, "detach error");
    }
}

The thread's work function then runs:

void *IPCWorkThread::ThreadHandler(void *args)
{
    IPCWorkThread *threadObj = (IPCWorkThread *)args;
    IRemoteInvoker *invoker = IPCThreadSkeleton::GetRemoteInvoker(threadObj->proto_);
    threadObj->threadName_ += "_" + std::to_string(syscall(SYS_gettid));
    int32_t ret = prctl(PR_SET_NAME, threadObj->threadName_.c_str());
    if (ret != 0) {
        ZLOGE(LOG_LABEL, "set thread name: %{public}s fail, ret: %{public}d",
            threadObj->threadName_.c_str(), ret);
    }
    ZLOGD(LOG_LABEL, "proto_=%{public}d,policy_=%{public}d, name: %{public}s, ret: %{public}d",
        threadObj->proto_, threadObj->policy_, threadObj->threadName_.c_str(), ret);
    if (invoker != nullptr) {
        switch (threadObj->policy_) {
            case SPAWN_PASSIVE:
                invoker->JoinThread(false); // threads spawned at the driver's request
                break;
            case SPAWN_ACTIVE:
                invoker->JoinThread(true); // the first thread takes this branch
                break;
            case PROCESS_PASSIVE:
                invoker->JoinProcessThread(false);
                break;
            case PROCESS_ACTIVE:
                invoker->JoinProcessThread(true);
                break;
            default:
                ZLOGE(LOG_LABEL, "policy_ = %{public}d", threadObj->policy_);
                break;
        }
    }

    IPCProcessSkeleton *current = IPCProcessSkeleton::GetCurrent();
    if (current != nullptr) {
        current->OnThreadTerminated(threadObj->threadName_);
    }
    return nullptr;
}

This calls BinderInvoker's JoinThread, which processes Binder transactions in the thread's main loop. The subsequent steps were analyzed above in the message-receiving scenario.

Proxy Creation

Flattening an IRemoteObject:

bool BinderInvoker::FlattenObject(Parcel &parcel, const IRemoteObject *object) const
{
    if (object == nullptr) {
        return false;
    }
    flat_binder_object flat;
    if (object->IsProxyObject()) {
        const IPCObjectProxy *proxy = reinterpret_cast<const IPCObjectProxy *>(object);
        const int32_t handle = proxy ? static_cast<int32_t>(proxy->GetHandle()) : -1;
        flat.hdr.type = BINDER_TYPE_HANDLE;
        flat.binder = 0;
        flat.handle = (uint32_t)handle;
        flat.cookie = proxy ? static_cast<binder_uintptr_t>(proxy->GetProto()) : 0;
    } else {
        flat.hdr.type = BINDER_TYPE_BINDER;
        flat.binder = reinterpret_cast<uintptr_t>(object->GetRefCounter());
        flat.cookie = reinterpret_cast<uintptr_t>(object);
    }

    flat.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    bool status = parcel.WriteBuffer(&flat, sizeof(flat_binder_object));
    if (!status) {
        ZLOGE(LABEL, "Fail to flatten object");
#ifndef BUILD_PUBLIC_VERSION
        ReportDriverEvent(DbinderErrorCode::COMMON_DRIVER_ERROR, std::string(DbinderErrorCode::ERROR_TYPE),
            DbinderErrorCode::IPC_DRIVER, std::string(DbinderErrorCode::ERROR_CODE),
            DbinderErrorCode::FLATTEN_OBJECT_FAILURE);
#endif
    }
    return status;
}

A flat_binder_object is created; depending on the IRemoteObject's type, the binder, handle, and cookie fields are filled in and the object is written into the Parcel.
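
The flat_binder_object struct from the kernel UAPI header, for reference (abridged; comments mine):

struct flat_binder_object {
    struct binder_object_header hdr; /* type: BINDER_TYPE_BINDER, BINDER_TYPE_HANDLE, ... */
    __u32 flags;                     /* e.g. FLAT_BINDER_FLAG_ACCEPTS_FDS */
    union {
        binder_uintptr_t binder;     /* local object (BINDER_TYPE_BINDER) */
        __u32 handle;                /* remote object (BINDER_TYPE_HANDLE) */
    };
    binder_uintptr_t cookie;         /* local object: pointer back to the stub */
};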

Unflattening an IRemoteObject:

sptr<IRemoteObject> BinderInvoker::UnflattenObject(Parcel &parcel)
{
    const uint8_t *buffer = parcel.ReadBuffer(sizeof(flat_binder_object));
    if (buffer == nullptr) {
        ZLOGE(LABEL, "UnflattenObject null object buffer");
        return nullptr;
    }

    IPCProcessSkeleton *current = IPCProcessSkeleton::GetCurrent();
    if (current == nullptr) {
        return nullptr;
    }

    sptr<IRemoteObject> remoteObject = nullptr;
    auto *flat = reinterpret_cast<const flat_binder_object *>(buffer);
    switch (flat->hdr.type) {
        case BINDER_TYPE_BINDER: { // the service lives in the caller's own process
            remoteObject = reinterpret_cast<IRemoteObject *>(flat->cookie);
            if (!current->IsContainsObject(remoteObject)) {
                remoteObject = nullptr;
            }
            break;
        }
        case BINDER_TYPE_HANDLE: {
            remoteObject = current->FindOrNewObject(flat->handle);
            break;
        }
        default:
            ZLOGE(LABEL, "%s: unknown binder type %u", __func__, flat->hdr.type);
            remoteObject = nullptr;
            break;
    }

    return remoteObject;
}

If the type is BINDER_TYPE_BINDER, the service and the client are in the same process, and cookie is cast directly into the IRemoteObject. If the type is BINDER_TYPE_HANDLE, it is a binder object sent over from a remote server, and a proxy class must be created.

sptr<IRemoteObject> IPCProcessSkeleton::FindOrNewObject(int handle)
{
    sptr<IRemoteObject> result = nullptr;
    std::u16string descriptor = MakeHandleDescriptor(handle); // IPCObjectProxy[handle]
    ...
    {
        result = QueryObject(descriptor);
        if (result == nullptr) {
            ...
            // OnFirstStrongRef will be called.
            result = new (std::nothrow) IPCObjectProxy(handle, descriptor); // the proxy descriptor cannot identify the service; only the business class descriptor can
            if (result == nullptr) {
                ZLOGE(LOG_LABEL, "new IPCObjectProxy failed!");
                return result;
            }
            AttachObject(result.GetRefPtr());
        }
    }
    sptr<IPCObjectProxy> proxy = reinterpret_cast<IPCObjectProxy *>(result.GetRefPtr());
    proxy->WaitForInit();
#ifndef CONFIG_IPC_SINGLE
    if (proxy->GetProto() == IRemoteObject::IF_PROT_ERROR) {
        ZLOGE(LOG_LABEL, "init rpc proxy:%{public}d failed", handle);
        return nullptr;
    }
#endif
    return result;
}

So the proxy object, the IPCObjectProxy instance, is created during unflattening. (It is always created passively; a binder call must have occurred before it exists.)

Why does the server implement the business class by inheriting from IPCObjectStub, while the client keeps the IPCObjectProxy as a member variable of the business class? On the client, the IPCObjectProxy is instantiated by the framework, which knows nothing about the concrete business and therefore cannot locate a business class. Holding it as a member, with help from IRemoteBroker, makes it possible to convert an IPCObjectProxy into a business-class object. On the server, the business class is instantiated by the business code itself and the business logic is ultimately executed by that class, so using composition there would be a needless detour. (The end goal is a business-class object: the server gets there in one step, the proxy side needs two.)
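
The mechanism behind that second step can be pictured with a small registry. The following is a simplified illustration in plain C++, not the real framework code, and all names are made up: BrokerDelegator's constructor registers a factory under the interface descriptor, and iface_cast looks the factory up to wrap the proxy object in the business class.

#include <functional>
#include <map>
#include <memory>
#include <string>

struct RemoteObject {};                          // stand-in for IRemoteObject/IPCObjectProxy
struct Broker { virtual ~Broker() = default; };  // stand-in for IRemoteBroker

using Factory = std::function<std::shared_ptr<Broker>(std::shared_ptr<RemoteObject>)>;

static std::map<std::string, Factory> &Registry()
{
    static std::map<std::string, Factory> registry; // descriptor -> proxy factory
    return registry;
}

// What BrokerDelegator<T> does, conceptually: its constructor runs at static
// initialization time and registers T's factory under T's descriptor.
template <typename T>
struct Delegator {
    Delegator()
    {
        Registry()[T::Descriptor()] = [](std::shared_ptr<RemoteObject> impl) {
            return std::make_shared<T>(impl); // wrap the proxy object in the business class
        };
    }
};

// What iface_cast<T> does, conceptually: find the factory by descriptor and
// hand it the IPCObjectProxy received from unflattening.
template <typename T>
std::shared_ptr<T> IfaceCast(std::shared_ptr<RemoteObject> impl)
{
    auto it = Registry().find(T::Descriptor());
    return it == Registry().end() ? nullptr
                                  : std::static_pointer_cast<T>(it->second(impl));
}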

About descriptors: they are mainly used to verify that a proxy matches its service. On the proxy side, GetDescriptor returns the proxy's own descriptor and GetInterfaceDescriptor returns the remote service's descriptor. On the server side, GetDescriptor, GetInterfaceDescriptor, and GetObjectDescriptor all return the service's descriptor. Validation can then work like this: before sending each message, the proxy writes its descriptor into the message header with WriteInterfaceToken; when the service receives the message, it reads the descriptor back with ReadInterfaceToken and compares it with its own. If they match, processing continues; otherwise an error is returned.
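
A minimal sketch of that check, reusing the ITestAbility example from earlier (the error values are placeholders):

// Proxy side: prepend the interface token to every request.
int TestAbilityProxy::TestPingAbility(const std::u16string &dummy)
{
    MessageParcel data, reply;
    MessageOption option;
    if (!data.WriteInterfaceToken(GetDescriptor())) { // write the descriptor into the header
        return -1;
    }
    data.WriteString16(dummy);
    int error = Remote()->SendRequest(TRANS_ID_PING_ABILITY, data, reply, option);
    return (error == ERR_NONE) ? reply.ReadInt32() : -1;
}

// Stub side: validate the token before dispatching on the message code.
int TestAbilityStub::OnRemoteRequest(uint32_t code, MessageParcel &data,
    MessageParcel &reply, MessageOption &option)
{
    if (data.ReadInterfaceToken() != GetObjectDescriptor()) {
        return -1; // descriptor mismatch: reject before touching the payload
    }
    // ... dispatch on code as in step 3 ...
    return IPCObjectStub::OnRemoteRequest(code, data, reply, option);
}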

Application in HDI

Inheritance relationships of the classes involved:

[Figure: HDI class inheritance diagram]

Server-Side Implementation

In the UHDF framework a service is represented by HdfRemoteService; the service object is created through HdfRemoteServiceObtain.

struct HdfRemoteService *HdfRemoteServiceObtain(struct HdfObject *object, struct HdfRemoteDispatcher *dispatcher)
{
    struct HdfRemoteService *service = HdfRemoteAdapterObtain();
    if ((service != NULL) && (service->dispatcher == NULL)) {
        service->dispatcher = dispatcher;
        service->target = object;
    }
    return service;
}

This obtains an HdfRemoteService instance and sets the dispatcher for the service's messages. The service instance itself is held by an HdfRemoteServiceHolder. (HdfRemoteService is never instantiated directly: it is a companion object, produced only when a proxy or stub class is instantiated.)

struct HdfRemoteService *HdfRemoteAdapterObtain(void)
{
    struct HdfRemoteServiceHolder *holder = new HdfRemoteServiceHolder();
    holder->remote_ = new HdfRemoteServiceStub(&holder->service_); // stub
    return &holder->service_;
}

This allocates an HdfRemoteServiceHolder, which contains both the HdfRemoteService instance and a pointer to an IRemoteObject, thereby binding the UHDF service to its counterpart in the Binder framework.
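
A simplified sketch of the holder's shape (the real definition lives in the UHDF adapter sources and carries more members):

struct HdfRemoteServiceHolder {
    struct HdfRemoteService service_;        // the C-side service struct handed to UHDF code
    OHOS::sptr<OHOS::IRemoteObject> remote_; // the C++-side stub (or proxy) object
};

The HdfRemoteServiceStub class is defined as follows: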

class HdfRemoteServiceStub : public OHOS::IPCObjectStub {
public:
    explicit HdfRemoteServiceStub(struct HdfRemoteService *service);
    int OnRemoteRequest(uint32_t code,
        OHOS::MessageParcel &data, OHOS::MessageParcel &reply, OHOS::MessageOption &option) override;
    ~HdfRemoteServiceStub();
    int32_t Dump(int32_t fd, const std::vector<std::u16string> &args) override;
private:
    struct HdfRemoteService *service_;
};

In the Binder framework, OnRemoteRequest is the routing function for service messages; HdfRemoteServiceStub's implementation forwards the client messages it receives to the HdfRemoteDispatcher:

int HdfRemoteServiceStub::OnRemoteRequest(uint32_t code,
    OHOS::MessageParcel &data, OHOS::MessageParcel &reply, OHOS::MessageOption &option)
{
    (void)option;
    if (service_ == nullptr) {
        return HDF_ERR_INVALID_OBJECT;
    }

    int ret = HDF_FAILURE;
    struct HdfSBuf *dataSbuf = ParcelToSbuf(&data);
    struct HdfSBuf *replySbuf = ParcelToSbuf(&reply);

    struct HdfRemoteDispatcher *dispatcher = service_->dispatcher;
    if (dispatcher != nullptr && dispatcher->Dispatch != nullptr) {
        ret = dispatcher->Dispatch(reinterpret_cast<HdfRemoteService *>(service_->target), code, dataSbuf, replySbuf);
    } else {
        HDF_LOGE("dispatcher or dispatcher->Dispatch is null, flags is: %{public}d", option.GetFlags());
    }

    HdfSbufRecycle(dataSbuf);
    HdfSbufRecycle(replySbuf);
    return ret;
}

Client-Side Implementation

In the UHDF framework the client is likewise represented by HdfRemoteService; HdfRemoteAdapterBind produces the client object.

struct HdfRemoteService *HdfRemoteAdapterBind(OHOS::sptr<OHOS::IRemoteObject> binder)
{
    struct HdfRemoteService *remoteService = nullptr;
    static HdfRemoteDispatcher dispatcher = {
        .Dispatch = HdfRemoteAdapterDispatch,
        .DispatchAsync = HdfRemoteAdapterDispatchAsync,
    };

    struct HdfRemoteServiceHolder *holder = new HdfRemoteServiceHolder();
    if (holder != nullptr) {
        holder->remote_ = binder; // proxy
        remoteService = &holder->service_;
        remoteService->dispatcher = &dispatcher;
        remoteService->index = (uint64_t)binder.GetRefPtr();
        return remoteService;
    }
    return nullptr;
}

HdfRemoteAdapterBind takes a pointer to an IRemoteObject; the actual argument is of type IPCObjectProxy, which, as the Binder framework analysis showed, is created by the framework. Unlike the server side, where the HdfRemoteDispatcher is passed in as a parameter, the client installs a fixed dispatcher implementation that is used to send messages:

static int HdfRemoteAdapterDispatch(struct HdfRemoteService *service,
    int code, HdfSBuf *data, HdfSBuf *reply)
{
    return HdfRemoteAdapterOptionalDispatch(service, code, data, reply, true);
}

The source of HdfRemoteAdapterOptionalDispatch:

static int HdfRemoteAdapterOptionalDispatch(struct HdfRemoteService *service, int code,
    HdfSBuf *data, HdfSBuf *reply, bool sync)
{
    ...
    struct HdfRemoteServiceHolder *holder = reinterpret_cast<struct HdfRemoteServiceHolder *>(service);
    if (dataParcel != nullptr) {
        OHOS::sptr<OHOS::IRemoteObject> remote = holder->remote_;
        if (remote != nullptr) {
            return remote->SendRequest(code, *dataParcel, *replyParcel, option); // send the message through the proxy class
        }
    }
    return HDF_FAILURE;
}

So the client sends messages through HdfRemoteService->dispatcher->Dispatch.
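
A hedged usage sketch of that path, where service is an HdfRemoteService * obtained from HdfRemoteAdapterBind and CMD_SAMPLE is a placeholder command code (error handling omitted):

struct HdfSBuf *data = HdfSbufTypedObtain(SBUF_IPC);
struct HdfSBuf *reply = HdfSbufTypedObtain(SBUF_IPC);
HdfSbufWriteString(data, "hello");   // marshal the request
int status = service->dispatcher->Dispatch(service, CMD_SAMPLE, data, reply);
HdfSbufRecycle(data);
HdfSbufRecycle(reply);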

The Device Service Manager

In the Binder framework there is only one service manager (samgr). In OpenHarmony, samgr indexes services by ID, and service IDs are centrally managed. To keep UHDF services separate from system services, UHDF services have their own service manager, which indexes services by name. This device service manager is itself an ordinary service and must first register with samgr.

int DevSvcManagerStubStart(struct IDevSvcManager *svcmgr)
{
    ...
    static struct HdfRemoteDispatcher dispatcher = {.Dispatch = DevSvcManagerStubDispatch};
    inst->remote = HdfRemoteServiceObtain((struct HdfObject *)inst, &dispatcher);
    if (inst->remote == NULL) {
        HDF_LOGE("failed to obtain device service manager remote service");
        return HDF_ERR_MALLOC_FAIL;
    }
    if (!HdfRemoteServiceSetInterfaceDesc(inst->remote, "HDI.IServiceManager.V1_0")) {
        HDF_LOGE("%{public}s: failed to init interface desc", __func__);
        HdfRemoteServiceRecycle(inst->remote);
        return HDF_ERR_INVALID_OBJECT;
    }

    inst->recipient.OnRemoteDied = DevSvcManagerOnServiceDied;
    int ret = HdfRemoteServiceRegister(DEVICE_SERVICE_MANAGER_SA_ID, inst->remote);
    if (ret != 0) {
        HDF_LOGE("failed to publish device service manager, %{public}d", ret);
        HdfRemoteServiceRecycle(inst->remote);
        inst->remote = NULL;
    } else {
        HDF_LOGI("publish device service manager success");
        inst->started = true;
    }
    ...
}

It first creates the service object, specifying DevSvcManagerStubDispatch as the message dispatcher, then registers the service with samgr under the ID DEVICE_SERVICE_MANAGER_SA_ID. HdfRemoteServiceRegister calls the HdfRemoteAdapterAddSa function:

int HdfRemoteAdapterAddSa(int32_t saId, struct HdfRemoteService *service)
{
    if (service == nullptr) {
        return HDF_ERR_INVALID_PARAM;
    }

    auto saManager = OHOS::SystemAbilityManagerClient::GetInstance().GetSystemAbilityManager(); // get the service manager proxy
    const int32_t waitTimes = 50;
    const int32_t sleepInterval = 20000;
    int32_t timeout = waitTimes;
    while (saManager == nullptr && (timeout > 0)) { // if samgr is not up yet, retry 50 times at 20 ms intervals
        HDF_LOGI("waiting for samgr...");
        usleep(sleepInterval);
        saManager = OHOS::SystemAbilityManagerClient::GetInstance().GetSystemAbilityManager();
        timeout--;
    }

    if (saManager == nullptr) {
        HDF_LOGE("failed to get sa manager, waiting timeot");
        return HDF_FAILURE;
    }
    struct HdfRemoteServiceHolder *holder = reinterpret_cast<struct HdfRemoteServiceHolder *>(service); // recover the enclosing holder
    int ret = saManager->AddSystemAbility(saId, holder->remote_); // register the HdfRemoteServiceStub object
    (void)OHOS::IPCSkeleton::GetInstance().SetMaxWorkThreadNum(g_remoteThreadMax++);
    HDF_LOGI("add sa %{public}d, ret = %{public}s", saId, (ret == 0) ? "succ" : "fail");

    return HDF_SUCCESS;
}

It first obtains the samgr proxy. Because the HdfRemoteService object is instantiated as a member of an HdfRemoteServiceHolder, the associated HdfRemoteServiceStub object can be recovered from the HdfRemoteService pointer and registered with samgr as the service entity. The service's messages are then handled by DevSvcManagerStubDispatch:

int DevSvcManagerStubDispatch(struct HdfRemoteService *service, int code, struct HdfSBuf *data, struct HdfSBuf *reply)
{
    int ret = HDF_FAILURE;
    struct DevSvcManagerStub *stub = (struct DevSvcManagerStub *)service;
    if (stub == NULL) {
        HDF_LOGE("DevSvcManagerStubDispatch failed, object is null, code is %{public}d", code);
        return ret;
    }
    struct IDevSvcManager *super = (struct IDevSvcManager *)&stub->super;
    HDF_LOGD("DevSvcManagerStubDispatch called: code=%{public}d", code);
    switch (code) {
        case DEVSVC_MANAGER_ADD_SERVICE:
            ret = DevSvcManagerStubAddService(super, data);  // register a service
            break;
        case DEVSVC_MANAGER_UPDATE_SERVICE:
            ret = DevSvcManagerStubUpdateService(super, data);
            break;
        case DEVSVC_MANAGER_GET_SERVICE:
            ret = DevSvcManagerStubGetService(super, data, reply); // get a service
            break;
        case DEVSVC_MANAGER_REMOVE_SERVICE:
            ret = DevSvcManagerStubRemoveService(super, data);
            break;
        case DEVSVC_MANAGER_REGISTER_SVCLISTENER:
            ret = DevSvcManagerStubRegisterServListener(super, data);
            break;
        case DEVSVC_MANAGER_UNREGISTER_SVCLISTENER:
            ret = DevSvcManagerStubUnregisterServListener(super, data);
            break;
        case DEVSVC_MANAGER_LIST_ALL_SERVICE:
            ret = DevSvcManagerStubListAllService(super, data, reply);
            break;
        case DEVSVC_MANAGER_LIST_SERVICE_BY_INTERFACEDESC:
            ret = DevSvcManagerStubListServiceByInterfaceDesc(super, data, reply);
            break;
        default:
            ret = HdfRemoteServiceDefaultDispatch(stub->remote, code, data, reply);
            break;
    }
    return ret;
}

How is the device service manager used? First, create the device service manager's proxy object:

struct HdfObject *DevSvcManagerProxyCreate(void)
{
    static struct IDevSvcManager *instance = NULL;
    if (instance == NULL) {
        struct HdfRemoteService *remote = HdfRemoteServiceGet(DEVICE_SERVICE_MANAGER_SA_ID);
        if (remote != NULL) {
            ...
            instance = DevSvcManagerProxyObtain(remote); // store remote in DevSvcManagerProxy's remote member
        }
    }
    return (struct HdfObject *)instance;
}

It first obtains the device service manager's proxy from samgr and then creates the DevSvcManagerProxy object. HdfRemoteServiceGet calls HdfRemoteAdapterGetSa to fetch the service:

struct HdfRemoteService *HdfRemoteAdapterGetSa(int32_t saId)
{
    auto saManager = OHOS::SystemAbilityManagerClient::GetInstance().GetSystemAbilityManager(); // get the samgr proxy
    if (saManager == nullptr) {
        HDF_LOGE("failed to get sa manager");
        return nullptr;
    }
    OHOS::sptr<OHOS::IRemoteObject> remote = saManager->GetSystemAbility(saId); // get the service
    constexpr int32_t waitTimes = 50;
    constexpr int32_t sleepInterval = 20000;
    int32_t timeout = waitTimes;
    while (remote == nullptr && (timeout > 0)) {
        HDF_LOGD("waiting for saId %{public}d", saId);
        usleep(sleepInterval);
        remote = saManager->GetSystemAbility(saId);
        timeout--;
    }
    if (remote != nullptr) {
        return HdfRemoteAdapterBind(remote); // bind the service to an HdfRemoteService
    } else {
        HDF_LOGE("failed to get sa %{public}d", saId);
    }
    return nullptr;
}

It first obtains the service's proxy object (an IPCObjectProxy) through samgr, then binds it to an HdfRemoteService object with HdfRemoteAdapterBind, returning the HdfRemoteService object that UHDF needs.

The DevSvcManagerProxy constructor wires up all of the service interfaces:

void DevSvcManagerProxyConstruct(struct DevSvcManagerProxy *inst, struct HdfRemoteService *remote)
{
    inst->pvtbl.AddService = DevSvcManagerProxyAddService;
    inst->pvtbl.UpdateService = DevSvcManagerProxyUpdateService;
    inst->pvtbl.GetService = DevSvcManagerProxyGetService;
    inst->pvtbl.RemoveService = DevSvcManagerProxyRemoveService;
    inst->remote = remote;
    inst->recipient.OnRemoteDied = DevSvcManagerProxyOnRemoteDied;
    HdfRemoteServiceAddDeathRecipient(remote, &inst->recipient);
}

Registering a Service

The AddService interface on the device service manager's proxy side is implemented as follows:

static int DevSvcManagerProxyAddService(
    struct IDevSvcManager *inst, struct HdfDeviceObject *service, const struct HdfServiceInfo *servInfo)
{
    ...
    int status = HDF_FAILURE;
    struct HdfSBuf *data = HdfSbufTypedObtain(SBUF_IPC);
    struct HdfSBuf *reply = HdfSbufTypedObtain(SBUF_IPC);
    do {
        if (data == NULL || reply == NULL) {
            HDF_LOGE("Add service failed, failed to obtain sbuf");
            break;
        }
        if (!HdfRemoteServiceWriteInterfaceToken(serviceProxy->remote, data) ||
            WriteServiceInfo(data, service, servInfo) != HDF_SUCCESS) { // the key step
            break;
        }
        status =
            serviceProxy->remote->dispatcher->Dispatch(serviceProxy->remote, DEVSVC_MANAGER_ADD_SERVICE, data, reply);
        HDF_LOGI("servmgr add service %{public}s, result is %{public}d", servInfo->servName, status);
    } while (0);

    HdfSbufRecycle(reply);
    HdfSbufRecycle(data);
    return status;
}

It first flattens the service information (including the service name) and the service object, then sends the data to the server with the DEVSVC_MANAGER_ADD_SERVICE command. The key question is how WriteServiceInfo flattens the service object:

static int WriteServiceInfo(
    struct HdfSBuf *data, struct HdfDeviceObject *service, const struct HdfServiceInfo *servInfo)
{
    ...
    struct HdfDeviceNode *devNode =
        HDF_SLIST_CONTAINER_OF(struct HdfDeviceObject, service, struct HdfDeviceNode, deviceObject);
    struct DeviceServiceStub *deviceFullService = (struct DeviceServiceStub *)devNode;
    if (deviceFullService->remote == NULL) {
        HDF_LOGE("%{public}s: device service is broken", __func__);
        return ret;
    }

    if (HdfSbufWriteRemoteService(data, deviceFullService->remote) != HDF_SUCCESS) {
        HDF_LOGE("Add service failed, failed to write remote object");
        return ret;
    }
    ...

    return HDF_SUCCESS;
}

The conversion from HdfDeviceObject to HdfDeviceNode is a UHDF framework detail; look instead at the HdfSbufWriteRemoteService call, whose implementation is:

static int32_t SbufMParcelImplWriteRemoteService(struct HdfSBufImpl *sbuf, const struct HdfRemoteService *service)
{
    if (service == nullptr) {
        return HDF_ERR_INVALID_PARAM;
    }
    MessageParcel *parcel = MParcelCast(sbuf);
    const struct HdfRemoteServiceHolder *holder = reinterpret_cast<const struct HdfRemoteServiceHolder *>(service);
    return parcel->WriteRemoteObject(holder->remote_) ? HDF_SUCCESS : HDF_FAILURE;
}

From the earlier analysis, holder->remote_ is exactly the HdfRemoteServiceStub object, so at this point the flow is back in the generic Binder framework.

Getting a Service

The GetService interface on the device service manager's proxy side is implemented as follows:

struct HdfObject *DevSvcManagerProxyGetService(struct IDevSvcManager *inst, const char *svcName)
{
    ...
    struct HdfRemoteService *remoteService = NULL;
    struct DevSvcManagerProxy *serviceProxy = (struct DevSvcManagerProxy *)inst;
    do {
        ...
        dispatcher = serviceProxy->remote->dispatcher;
        if (!HdfRemoteServiceWriteInterfaceToken(serviceProxy->remote, data) || !HdfSbufWriteString(data, svcName)) {
            break;
        }
        status = dispatcher->Dispatch(serviceProxy->remote, DEVSVC_MANAGER_GET_SERVICE, data, reply);
        if (status == HDF_SUCCESS) {
            remoteService = HdfSbufReadRemoteService(reply);
        }
    } while (0);
    ...
}

It first sends the service name to the server with the DEVSVC_MANAGER_GET_SERVICE command, then extracts the service's proxy object from the server's reply. HdfSbufReadRemoteService is implemented by:

static struct HdfRemoteService *SbufMParcelImplReadRemoteService(struct HdfSBufImpl *sbuf)
{
    auto remote = MParcelCast(sbuf)->ReadRemoteObject();
    if (remote == nullptr) {
        HDF_LOGE("%{public}s: read remote object fail", __func__);
        return nullptr;
    }
    return HdfRemoteAdapterBind(remote);
}

It first reads the IPCObjectProxy object out of the buffer with ReadRemoteObject, then creates an HdfRemoteService object via HdfRemoteAdapterBind.

Unlike the C++ framework, the client and the service here are not implemented as classes: an HDF client or service corresponds to an HdfRemoteService object, with no concrete business bound to it.
