Tarantool Kubernetes Operator

Kubernetes has already become a de-facto standard for running stateless applications, mainly because it can reduce time-to-market for new features. Launching stateful applications, such as databases or stateful microservices, is still a complex task, but companies have to meet the competition and maintain a high delivery rate. So they create a demand for such solutions.


We want to introduce our solution for launching stateful Tarantool Cartridge clusters: Tarantool Kubernetes Operator, more under the cut.


  1. Instead of a Thousand Words
  2. What the Operator Actually Does
  3. A Little About the Details
  4. How the Operator Works
  5. What the Operator Creates
  6. Summary

Tarantool is an open-source DBMS and an application server all-in-one. As a database, it has many unique characteristics: high efficiency of hardware utilization, flexible data schema, support for both in-memory and disk storage, and the possibility of extension using Lua language. As an application server, it allows you to move the application code as close to the data as possible with minimum response time and maximum throughput. Moreover, Tarantool has an extensive ecosystem providing ready-to-use modules for solving application problems: sharding, queue, modules for easy development (cartridge, luatest), solutions for operation (metrics, ansible), just to name a few.


For all its merits, the capabilities of a single Tarantool instance are always limited. You would have to create tens and hundreds of instances in order to store terabytes of data and process millions of requests, which already implies a distributed system with all its typical problems. To solve them, we have Tarantool Cartridge, which is a framework designed to hide all sorts of difficulties when writing distributed applications. It allows developers to concentrate on the business value of the application. Cartridge provides a robust set of components for automatic cluster orchestration, automatic data distribution, WebUI for operation, and developer tools.


Tarantool is not only about technologies, but also about a team of engineers working on the development of turnkey enterprise systems, out-of-the-box solutions, and support for open-source components.


Globally, all our tasks can be divided into two areas: developing new systems and improving existing solutions. For example, there is an extensive database from a well-known vendor. To scale it for reading, a Tarantool-based eventually consistent cache is placed behind it. Or vice versa: to scale writes, Tarantool is installed in a hot/cold configuration: while the data is «cooling down», it is dumped to cold storage and, at the same time, into the analytics queue. Or a light version of an existing system (a functional backup) is written to back up the «hot» data by using data replication from the main system. Learn more from the T+ 2019 conference reports.


All of these systems have one thing in common: they are somewhat difficult to operate. Well, there are many exciting things: to quickly create a cluster of 100+ instances backing up in 3 data centers; to update the application that stores data with no downtime or maintenance drawdowns; to create a backup and restore in order to prepare for a possible accident or human mistakes; to ensure hidden component failover; to organize configuration management…



Tarantool Cartridge, which has literally just been released into open source, considerably simplifies distributed system development: it supports component clustering, service discovery, configuration management, instance failure detection and automatic failover, replication topology management, and sharding components.

It would be so great if we could operate all of this as quickly as we develop it. Kubernetes makes it possible, but a specialized operator would make life even more comfortable.


Today we introduce the alpha version of Tarantool Kubernetes Operator.


Instead of a Thousand Words

We have prepared a small example based on Tarantool Cartridge, and we are going to work with it. It is a simple application called a distributed key-value storage with HTTP-interface. After start-up, we have the following:


Where


  • Routers are part of the cluster responsible for accepting and processing incoming HTTP requests;


  • Storages are part of the cluster responsible for storing and processing data; three shards are installed out of the box, each one having a master and a replica.


To balance incoming HTTP traffic on the routers, a Kubernetes Ingress is used. The data is distributed in the storage at the level of Tarantool itself using the vshard component.


We need Kubernetes 1.14+, though minikube will do. It is also handy to have kubectl. To start the operator, create a ServiceAccount, a Role, and a RoleBinding:


$ kubectl create -f https://raw.githubusercontent.com/tarantool/tarantool-operator/0.0.1/deploy/service_account.yaml
$ kubectl create -f https://raw.githubusercontent.com/tarantool/tarantool-operator/0.0.1/deploy/role.yaml
$ kubectl create -f https://raw.githubusercontent.com/tarantool/tarantool-operator/0.0.1/deploy/role_binding.yaml

Tarantool Operator extends Kubernetes API with its resource definitions, so let’s create them:


$ kubectl create -f https://raw.githubusercontent.com/tarantool/tarantool-operator/0.0.1/deploy/crds/tarantool_v1alpha1_cluster_crd.yaml
$ kubectl create -f https://raw.githubusercontent.com/tarantool/tarantool-operator/0.0.1/deploy/crds/tarantool_v1alpha1_role_crd.yaml
$ kubectl create -f https://raw.githubusercontent.com/tarantool/tarantool-operator/0.0.1/deploy/crds/tarantool_v1alpha1_replicasettemplate_crd.yaml

Everything is ready to start the operator, so here it goes:


$ kubectl create -f https://raw.githubusercontent.com/tarantool/tarantool-operator/0.0.1/deploy/operator.yaml

We are waiting for the operator to start, and then we can proceed with starting the application:

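One way to "wait for the operator" is to block on Pod readiness. A minimal sketch, assuming the Deployment from operator.yaml labels its Pod with name=tarantool-operator (check the manifest for your version):

```shell
# Block until the operator Pod reports Ready, or fail after two minutes.
# The label selector is an assumption taken from deploy/operator.yaml.
wait_operator_ready() {
    kubectl wait --for=condition=Ready pod \
        -l name=tarantool-operator --timeout=120s
}
# Usage: wait_operator_ready
```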

$ kubectl create -f https://raw.githubusercontent.com/tarantool/tarantool-operator/0.0.1/examples/kv/deployment.yaml

The example's YAML file declares an Ingress to the web UI; it is available at cluster_ip/admin/cluster. When at least one Ingress Pod is ready and running, you can go there to watch how new instances are added to the cluster and how its topology changes.

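If no Ingress controller is running (common on a bare minikube), the web UI can also be reached by port-forwarding a router Pod. A sketch; 8081 is Cartridge's default HTTP port, and the Pod name in the usage line is an assumption, so check `kubectl get pods` first:

```shell
# Forward a local port to a router Pod's HTTP port, then open
# http://localhost:8081/admin/cluster in a browser.
ui_port_forward() {
    kubectl port-forward "pod/$1" 8081:8081
}
# Usage: ui_port_forward routers-0-0
```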

We are waiting for the cluster to be ready:


$ kubectl describe clusters.tarantool.io examples-kv-cluster

We are waiting for the following cluster Status:


…
Status:
  State:  Ready
…
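The wait can be scripted instead of re-running kubectl describe by hand. A sketch that polls the state shown above; it assumes the field is serialized as .status.state in the CR's JSON (kubectl describe capitalizes names, so verify the path with `kubectl get -o json`):

```shell
# Poll the Cluster resource until status.state reads Ready.
wait_cluster_ready() {
    local name="$1"
    until [ "$(kubectl get clusters.tarantool.io "$name" \
                -o jsonpath='{.status.state}' 2>/dev/null)" = "Ready" ]; do
        sleep 5
    done
}
# Usage: wait_cluster_ready examples-kv-cluster
```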

That is all, and the application is ready to use!


Do you need more storage space? Then, let’s add some shards:


$ kubectl scale roles.tarantool.io storage --replicas=3

If shards cannot handle the load, then let’s increase the number of instances in the shard by editing the replica set template:


$ kubectl edit replicasettemplates.tarantool.io storage-template

Let us set the .spec.replicas value to two in order to increase the number of instances in each replica set to two.

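For automation, the same change can be made without an interactive editor via a merge patch. A sketch; the .spec.replicas path is the field edited above:

```shell
# Set .spec.replicas on a ReplicasetTemplate non-interactively.
set_template_replicas() {
    local template="$1" n="$2"
    kubectl patch replicasettemplates.tarantool.io "$template" \
        --type merge -p "{\"spec\": {\"replicas\": $n}}"
}
# Usage: set_template_replicas storage-template 2
```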

If a cluster is no longer needed, just delete it together with all the resources:


$ kubectl delete clusters.tarantool.io examples-kv-cluster

Did something go wrong? Create a ticket, and we will quickly work on it.


What the Operator Actually Does

The start-up and operation of the Tarantool Cartridge cluster is a story of performing specific actions in a specific order at a specific time.


The cluster itself is managed primarily via the admin API: GraphQL over HTTP. You can undoubtedly go a level lower and give commands directly through the console, but this doesn't happen very often.


For example, this is how the cluster starts:


  1. We deploy the required number of Tarantool instances, for example, using systemd.

  2. Then we connect the instances into membership:


    mutation {
        probe_instance: probe_server(uri: "storage:3301")
    }
  3. Then we assign the roles to the instances and specify the instance and replica set identifiers. The GraphQL API is used for this purpose:


    mutation {
         join_server(
            uri:"storage:3301",
            instance_uuid: "cccccccc-cccc-4000-b000-000000000001",
            replicaset_uuid: "cccccccc-0000-4000-b000-000000000000",
            roles: ["storage"],
            timeout: 5
        )
    }
  4. Finally, we bootstrap the component responsible for sharding using the API:


    mutation {
        bootstrap_vshard
     
        cluster {
            failover(enabled:true)
        }
    }
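Under the hood, these mutations are plain GraphQL-over-HTTP calls. A minimal sketch with curl, assuming Cartridge's default GraphQL endpoint path /admin/api; the router host and port in the commented line are placeholders:

```shell
# Build the probe_server mutation as a JSON request body.
QUERY='mutation { probe_instance: probe_server(uri: "storage:3301") }'
BODY=$(python3 -c 'import json, sys; print(json.dumps({"query": sys.argv[1]}))' "$QUERY")
echo "$BODY"

# Send it to the cluster admin API (uncomment against a live cluster):
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$BODY" http://routers:8081/admin/api
```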

Easy, right?


Everything is more interesting when it comes to cluster expansion. The Routers role from the example scales easily: create more instances, join them to an existing cluster, and you're done! The Storages role is somewhat trickier. The storage is sharded, so when adding or removing instances, it is necessary to rebalance the data by moving it to the new instances or away from the deleted ones. Failing to do so would result in either underloaded instances or lost data. And what if there is not just one, but a dozen clusters with different topologies?


In general, this is all that Tarantool Operator handles. The user describes the necessary state of the Tarantool Cartridge cluster, and the operator translates it into a set of actions applied to the K8s resources and into certain calls to the Tarantool cluster administrator API in a specific order at a specific time. It also tries to hide all the details from the user.


A Little About the Details

While working with the Tarantool Cartridge cluster administrator API, both the order of the calls and their destination are essential. Why is that?


Tarantool Cartridge contains its topology storage, service discovery component and configuration component. Each instance of the cluster stores a copy of the topology and configuration in a YAML file.


servers:
    d8a9ce19-a880-5757-9ae0-6a0959525842:
      uri: storage-2-0.examples-kv-cluster:3301
      replicaset_uuid: 8cf044f2-cae0-519b-8d08-00a2f1173fcb
    497762e2-02a1-583e-8f51-5610375ebae9:
      uri: storage-0-0.examples-kv-cluster:3301
      replicaset_uuid: 05e42b64-fa81-59e6-beb2-95d84c22a435
…
vshard:
  bucket_count: 30000
...

Updates are applied consistently using the two-phase commit mechanism. A successful update requires a 100% quorum: every instance must respond. Otherwise, it rolls back. What does this mean in terms of operation? In terms of reliability, all the requests to the administrator API that modify the cluster state should be sent to a single instance, or the leader, because otherwise we risk getting different configurations on different instances. Tarantool Cartridge does not know how to do leader election (not just yet), but Tarantool Operator can, and for you this is just a fun fact, because the operator does everything.


Every instance should also have a fixed identity, i.e. a set of instance_uuid and replicaset_uuid, as well as advertise_uri. If a storage suddenly restarts and one of these parameters changes, you run the risk of breaking the quorum; the operator takes care of this as well.


How the Operator Works

The purpose of the operator is to bring the system into the user-defined state and maintain the system in this state until new directions are given. In order for the operator to be able to work, it needs:


  1. The description of the system status.

  2. The code that would bring the system into this state.

  3. A mechanism for integrating this code into k8s (for example, to receive state change notifications).


The Tarantool Cartridge cluster is described in terms of K8s using a Custom Resource Definition (CRD). The operator needs three custom resources united under the tarantool.io/v1alpha1 group:


  • Cluster is a top-level resource that corresponds to a single Tarantool Cartridge cluster.

  • Role is a user role in terms of Tarantool Cartridge.


  • Replicaset Template is a template for creating StatefulSets (I will tell you a bit later why they are stateful; not to be confused with K8s ReplicaSet).

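For illustration, the top of such a Cluster resource might look as follows. The kind and apiVersion follow the CRD file names above; any spec fields are deliberately omitted here, since examples/kv/deployment.yaml in the operator repository is the authoritative schema:

```shell
# Write a minimal, illustrative Cluster manifest and show it.
cat > /tmp/example-cluster.yaml <<'EOF'
apiVersion: tarantool.io/v1alpha1
kind: Cluster
metadata:
  name: examples-kv-cluster
EOF
cat /tmp/example-cluster.yaml
# kubectl apply -f /tmp/example-cluster.yaml
```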

All of these resources directly reflect the Tarantool Cartridge cluster description model. Having a common dictionary makes it easier to communicate with the developers and to understand what they would like to see in production.


The code that brings the system to the given state is the Controller in terms of K8s. In case of Tarantool Operator, there are several controllers:


  • Cluster Controller is responsible for interacting with the Tarantool Cartridge cluster; it connects instances to the cluster and disconnects instances from the cluster.

  • Role Controller is the user role controller responsible for creating StatefulSets from the template and maintaining the predefined number of them.


What is a controller like? It is a set of code that gradually puts the world around itself in order. A Cluster Controller would schematically look like:


An entry point is a test to see if a corresponding Cluster resource exists for an event. Does it exist? «No» means quitting. «Yes» means moving onto the next block and taking Ownership of the user roles. When the Ownership of a role is taken, it quits and goes the second time around. It goes on and on until it takes the Ownership of all the roles. When the ownership is taken, it's time to move to the next block of operations. And the process goes on until the last block. After that, we can assume that the controlled system is in the defined state.


In general, everything is quite simple. However, it is important to determine the success criteria for passing each stage. For example, the cluster join operation is considered successful not only when it returns a hypothetical success=true, but also when it returns an error like «already joined».


And the last part of this mechanism is the integration of the controller with K8s. From a bird's eye view, the entire K8s consists of a set of controllers that generate events and respond to them. These events are organized into queues that we can subscribe to. It would schematically look like:


The user calls kubectl create -f tarantool_cluster.yaml, and the corresponding Cluster resource is created. The Cluster Controller is notified of the Cluster resource creation. And the first thing it is trying to do is to find all the Role resources that should be part of this cluster. If it does, then it assigns the Cluster as the Owner for the Role and updates the Role resource. Role Controller receives a Role update notification, understands that the resource has its Owner, and starts creating StatefulSets. This is the way it works: the first event triggers the second one, the second event triggers the third one, and so on until one of them stops. You can also set a time trigger, for example, every 5 seconds.


This is how the operator is organized: we create a custom resource and write the code that responds to the events related to the resources.


What the Operator Creates

The operator actions ultimately result in creating K8s Pods and containers. In the Tarantool Cartridge cluster deployed on K8s, all Pods are connected to StatefulSets.


Why StatefulSet? As I mentioned earlier, every Tarantool Cluster instance keeps a copy of the cluster topology and configuration. And now and then an application server has some dedicated state of its own, for example, for queues or reference data, which already amounts to a full state. StatefulSet also guarantees that Pod identities are preserved, which is important when clustering instances: instances should have fixed identities, otherwise we risk losing the quorum upon restart.


When all cluster resources are ready and in the desired state, they reflect the following hierarchy:


The arrows indicate the Owner-Dependant relationship between resources. It is necessary, for example, for the Garbage Collector to clean up after the Cluster removal.


In addition to StatefulSets, Tarantool Operator creates a Headless Service for the leader election, and the instances communicate with each other over this service.


Tarantool Operator is based on the Operator Framework, and the operator code is written in Golang, so there is nothing special here.


Summary

That's pretty much all there is to it. We are waiting for your feedback and tickets. We can't do without them — it is the alpha version after all. What is next? The next step is a lot of polishing:


  • Unit, E2E tests;


  • Chaos Monkey tests;


  • stress tests;


  • backup/restore;


  • external topology provider.


Each of these topics is broad on its own and deserves a separate article, so please wait for updates!


Translated from: https://habr.com/en/company/mailru/blog/472428/
