7.Kafka系列之设计思想(五)-副本

4.7 Replication复制

Kafka replicates the log for each topic’s partitions across a configurable number of servers (you can set this replication factor on a topic-by-topic basis). This allows automatic failover to these replicas when a server in the cluster fails so messages remain available in the presence of failures.

Kafka在多个可配置服务器上为每个主题的分区复制日志(您可以逐个主题地设置此复制因子)。这允许在集群中的服务器发生故障时自动故障转移到这些副本,因此消息在出现故障时仍然可用

Other messaging systems provide some replication-related features, but, in our (totally biased) opinion, this appears to be a tacked-on thing, not heavily used, and with large downsides: replicas are inactive, throughput is heavily impacted, it requires fiddly manual configuration, etc. Kafka is meant to be used with replication by default—in fact we implement un-replicated topics as replicated topics where the replication factor is one.

其他消息系统提供了一些与复制相关的功能,但是,在我们(完全有偏见的)看来,这似乎是一个附加的东西,没有被大量使用,并且有很大的缺点:副本不活跃,吞吐量受到严重影响,它需要繁琐的手动配置等。Kafka 默认与复制一起使用——事实上,我们将未复制的主题实现为复制因子为 1 的复制主题

The unit of replication is the topic partition. Under non-failure conditions, each partition in Kafka has a single leader and zero or more followers. The total number of replicas including the leader constitute the replication factor. All writes go to the leader of the partition, and reads can go to the leader or the followers of the partition. Typically, there are many more partitions than brokers and the leaders are evenly distributed among brokers. The logs on the followers are identical to the leader’s log—all have the same offsets and messages in the same order (though, of course, at any given time the leader may have a few as-yet unreplicated messages at the end of its log)

复制单元是主题分区。在非故障情况下,Kafka 中的每个分区都有一个领导者和零个或多个追随者。包括领导者在内的副本总数构成复制因子。所有写入都到分区的领导者,读取可以到分区的领导者或追随者。通常,分区比 broker 多得多,领导者平均分布在各个 broker 之间。追随者上的日志与领导者的日志相同:都具有相同的偏移量和相同顺序的消息(当然,在任何给定时间,领导者可能在其日志末尾有一些尚未复制的消息)
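As a toy sketch (not Kafka's actual implementation), the relationship between a leader log and its follower logs under a replication factor of three can be pictured like this:

```python
# Toy model of one topic partition with replication factor 3:
# one leader log plus two follower logs mirroring its prefix.
leader_log = ["m0", "m1", "m2", "m3", "m4"]  # offsets 0..4
follower_a = leader_log[:4]                  # caught up through offset 3
follower_b = leader_log[:5]                  # fully caught up

replication_factor = 1 + 2  # leader + followers

# Followers hold an identical prefix of the leader's log: same offsets,
# same messages, same order; the leader may have a few extra at the end.
assert follower_a == leader_log[:len(follower_a)]
assert follower_b == leader_log[:len(follower_b)]
```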

Followers consume messages from the leader just as a normal Kafka consumer would and apply them to their own log. Having the followers pull from the leader has the nice property of allowing the follower to naturally batch together log entries they are applying to their log

追随者像普通的 Kafka 消费者一样从领导者那里消费消息,并将它们应用到自己的日志中。让追随者从领导者那里拉取有一个很好的特性:追随者可以自然地将正在应用到自身日志的日志条目批量处理

As with most distributed systems, automatically handling failures requires a precise definition of what it means for a node to be “alive.” In Kafka, a special node known as the “controller” is responsible for managing the registration of brokers in the cluster. Broker liveness has two conditions:

与大多数分布式系统一样,自动处理故障需要精确定义节点“存活”的含义。在 Kafka 中,一个称为“控制器”的特殊节点负责管理集群中代理的注册。Broker 活跃度有两个条件:

1.Brokers must maintain an active session with the controller in order to receive regular metadata updates.
Broker 必须与控制器保持活跃的会话,以便接收定期的元数据更新

2.Brokers acting as followers must replicate the writes from the leader and not fall “too far” behind.
作为跟随者的 broker 必须复制来自领导者的写入,并且不能落后“太远”

What is meant by an “active session” depends on the cluster configuration. For KRaft clusters, an active session is maintained by sending periodic heartbeats to the controller. If the controller fails to receive a heartbeat before the timeout configured by broker.session.timeout.ms expires, then the node is considered offline.

“活动会话”的含义取决于集群配置。对于 KRaft 集群,通过向控制器发送定期心跳来维持活动会话。如果控制器在 broker.session.timeout.ms 所配置的超时到期之前未能收到心跳,则该节点被视为离线
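A minimal sketch of the liveness check described above. The 9-second default for `broker.session.timeout.ms` is an assumption here; check your cluster's configuration for the actual value:

```python
BROKER_SESSION_TIMEOUT_MS = 9_000  # assumed default of broker.session.timeout.ms

def is_session_active(last_heartbeat_ms: float, now_ms: float,
                      timeout_ms: int = BROKER_SESSION_TIMEOUT_MS) -> bool:
    """The controller considers a broker offline once no heartbeat
    has arrived within the configured session timeout."""
    return (now_ms - last_heartbeat_ms) <= timeout_ms

# A broker that heartbeated 3 s ago is alive; one silent for 10 s is not.
assert is_session_active(last_heartbeat_ms=0, now_ms=3_000)
assert not is_session_active(last_heartbeat_ms=0, now_ms=10_000)
```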

For clusters using Zookeeper, liveness is determined indirectly through the existence of an ephemeral node which is created by the broker on initialization of its Zookeeper session. If the broker loses its session after failing to send heartbeats to Zookeeper before expiration of zookeeper.session.timeout.ms, then the node gets deleted. The controller would then notice the node deletion through a Zookeeper watch and mark the broker offline

对于使用 Zookeeper 的集群,活性是通过代理在其 Zookeeper 会话初始化时创建的临时节点的存在间接确定的。如果代理在 zookeeper.session.timeout.ms 到期之前未能向 Zookeeper 发送心跳而丢失其会话,则该临时节点将被删除。控制器随后会通过 Zookeeper 监视(watch)注意到节点删除,并将该代理标记为离线

We refer to nodes satisfying these two conditions as being “in sync” to avoid the vagueness of “alive” or “failed”. The leader keeps track of the set of “in sync” replicas, which is known as the ISR. If either of these conditions fails to be satisfied, then the broker will be removed from the ISR. For example, if a follower dies, then the controller will notice the failure through the loss of its session, and will remove the broker from the ISR. On the other hand, if the follower lags too far behind the leader but still has an active session, then the leader can also remove it from the ISR. The determination of lagging replicas is controlled through the replica.lag.time.max.ms configuration. Replicas that cannot catch up to the end of the log on the leader within the max time set by this configuration are removed from the ISR

我们将满足这两个条件的节点称为“同步”节点,以避免“活着”或“失败”的含糊不清。领导者跟踪一组“同步”副本,称为 ISR。如果这些条件中的任何一个未能满足,则该代理将从 ISR 中移除。例如,如果一个跟随者宕机,控制器将通过其会话的丢失注意到该故障,并将该代理从 ISR 中移除。另一方面,如果跟随者落后领导者太远但仍有活动会话,领导者也可以将其从 ISR 中移除。滞后副本的判定由 replica.lag.time.max.ms 配置控制。无法在此配置设置的最长时间内赶上领导者日志末尾的副本将从 ISR 中移除
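The ISR-shrink rule can be sketched as follows; this is a simplification, and the 30-second default for `replica.lag.time.max.ms` is an assumption:

```python
REPLICA_LAG_TIME_MAX_MS = 30_000  # assumed default of replica.lag.time.max.ms

def update_isr(isr, last_caught_up_ms, now_ms,
               max_lag_ms=REPLICA_LAG_TIME_MAX_MS):
    """Drop replicas that have not caught up to the leader's log end
    within max_lag_ms; return the new ISR."""
    return {r for r in isr
            if now_ms - last_caught_up_ms[r] <= max_lag_ms}

isr = {"broker1", "broker2", "broker3"}
last_caught_up = {"broker1": 100_000, "broker2": 95_000, "broker3": 40_000}
# broker3 last caught up 60 s ago, so it is removed from the ISR.
assert update_isr(isr, last_caught_up, now_ms=100_000) == {"broker1", "broker2"}
```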

In distributed systems terminology we only attempt to handle a “fail/recover” model of failures where nodes suddenly cease working and then later recover (perhaps without knowing that they have died). Kafka does not handle so-called “Byzantine” failures in which nodes produce arbitrary or malicious responses (perhaps due to bugs or foul play)

在分布式系统术语中,我们只尝试处理“失败/恢复”故障模型,即节点突然停止工作然后恢复(可能不知道它们已经宕机过)。Kafka 不处理所谓的“拜占庭式”故障,在这种情况下,节点会产生任意或恶意的响应(可能是由于缺陷或恶意行为)

We can now more precisely define that a message is considered committed when all replicas in the ISR for that partition have applied it to their log. Only committed messages are ever given out to the consumer. This means that the consumer need not worry about potentially seeing a message that could be lost if the leader fails. Producers, on the other hand, have the option of either waiting for the message to be committed or not, depending on their preference for tradeoff between latency and durability. This preference is controlled by the acks setting that the producer uses. Note that topics have a setting for the “minimum number” of in-sync replicas that is checked when the producer requests acknowledgment that a message has been written to the full set of in-sync replicas. If a less stringent acknowledgement is requested by the producer, then the message can be committed, and consumed, even if the number of in-sync replicas is lower than the minimum (e.g. it can be as low as just the leader).

我们现在可以更精确地定义:当该分区的 ISR 中的所有副本都将一条消息应用到它们的日志时,该消息被视为已提交。只有已提交的消息才会发送给消费者。这意味着消费者不必担心看到一条在领导者失败时可能丢失的消息。另一方面,生产者可以选择是否等待消息被提交,这取决于他们对延迟和持久性之间权衡的偏好。此偏好由生产者使用的 acks 设置控制。请注意,主题具有同步副本“最小数量”的设置,当生产者请求确认消息已写入完整的同步副本集时,将检查该设置。如果生产者请求的确认级别不那么严格,那么即使同步副本的数量低于该最小值(例如,可以低到只有领导者一个),消息也可以被提交并被消费
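A simplified sketch of this commit rule: the committed point (Kafka calls it the high watermark) is the minimum log-end offset across the ISR, since a message counts as committed only once every ISR member has it:

```python
def high_watermark(log_end_offsets: dict) -> int:
    """A message is committed once every ISR member has applied it,
    so the committed point is the minimum log-end offset in the ISR."""
    return min(log_end_offsets.values())

# Leader is at offset 10, followers at 8 and 9: only offsets below 8
# are committed and visible to consumers.
leo = {"leader": 10, "follower1": 8, "follower2": 9}
assert high_watermark(leo) == 8
```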

The guarantee that Kafka offers is that a committed message will not be lost, as long as there is at least one in sync replica alive, at all times.

Kafka 提供的保证是:只要始终至少有一个同步副本处于活动状态,已提交的消息就不会丢失

Kafka will remain available in the presence of node failures after a short fail-over period, but may not remain available in the presence of network partitions

在短暂的故障转移期后,Kafka 将在出现节点故障时保持可用,但在出现网络分区时可能无法保持可用。

Replicated Logs: Quorums, ISRs, and State Machines (Oh my!)复制日志:仲裁、ISR 和状态机(天哪!)

At its heart a Kafka partition is a replicated log. The replicated log is one of the most basic primitives in distributed data systems, and there are many approaches for implementing one. A replicated log can be used by other systems as a primitive for implementing other distributed systems in the state-machine style.

Kafka 分区的核心是一个复制的日志。复制日志是分布式数据系统中最基本的原语之一,实现它的方法有很多种。复制的日志可以被其他系统用作以状态机形式实现其他分布式系统的原语

A replicated log models the process of coming into consensus on the order of a series of values (generally numbering the log entries 0, 1, 2, …). There are many ways to implement this, but the simplest and fastest is with a leader who chooses the ordering of values provided to it. As long as the leader remains alive, all followers need to only copy the values and ordering the leader chooses.

复制的日志模拟了对一系列值的顺序达成共识的过程(通常将日志条目编号为 0、1、2 …)。有很多方法可以实现这一点,但最简单和最快的方法是使用一个领导者来选择提供给它的值的顺序。只要领导者还活着,所有追随者只需要复制领导者选择的值和顺序。
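The leader-based approach above can be sketched as a toy replicated log (a simplification, not Kafka's code): the leader fixes the ordering of values, and followers only copy that ordering.

```python
class ReplicatedLog:
    """Minimal leader-driven replicated log: the leader assigns the
    order of entries; followers copy them in that order."""
    def __init__(self, num_followers: int = 2):
        self.leader = []
        self.followers = [[] for _ in range(num_followers)]

    def append(self, value):
        self.leader.append(value)  # the leader alone chooses the ordering

    def replicate(self):
        for f in self.followers:
            f.extend(self.leader[len(f):])  # followers copy what they miss

log = ReplicatedLog()
for v in ["a", "b", "c"]:
    log.append(v)
log.replicate()
assert all(f == log.leader for f in log.followers)
```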

Of course if leaders didn’t fail we wouldn’t need followers! When the leader does die we need to choose a new leader from among the followers. But followers themselves may fall behind or crash so we must ensure we choose an up-to-date follower. The fundamental guarantee a log replication algorithm must provide is that if we tell the client a message is committed, and the leader fails, the new leader we elect must also have that message. This yields a tradeoff: if the leader waits for more followers to acknowledge a message before declaring it committed then there will be more potentially electable leaders.

当然,如果领导者不会失败,我们就不需要追随者了!当领导者确实宕机时,我们需要从追随者中选择一个新的领导者。但是追随者本身可能会落后或崩溃,所以我们必须确保选择一个数据最新的追随者。日志复制算法必须提供的基本保证是:如果我们告诉客户端一条消息已提交,而领导者失败了,我们选出的新领导者也必须拥有该消息。这会产生一个权衡:如果领导者在宣布一条消息已提交之前等待更多追随者的确认,那么可供选举的领导者就会更多。

If you choose the number of acknowledgements required and the number of logs that must be compared to elect a leader such that there is guaranteed to be an overlap, then this is called a Quorum.

如果您选择所需的确认数量和选举领导者时必须比较的日志数量,使得两者之间保证有重叠,那么这称为法定人数(Quorum)

A common approach to this tradeoff is to use a majority vote for both the commit decision and the leader election. This is not what Kafka does, but let’s explore it anyway to understand the tradeoffs. Let’s say we have 2f+1 replicas. If f+1 replicas must receive a message prior to a commit being declared by the leader, and if we elect a new leader by electing the follower with the most complete log from at least f+1 replicas, then, with no more than f failures, the leader is guaranteed to have all committed messages. This is because among any f+1 replicas, there must be at least one replica that contains all committed messages. That replica’s log will be the most complete and therefore will be selected as the new leader. There are many remaining details that each algorithm must handle (such as precisely defined what makes a log more complete, ensuring log consistency during leader failure or changing the set of servers in the replica set) but we will ignore these for now.

这种权衡的一种常见方法是对提交决定和领导者选举都使用多数表决。这不是 Kafka 所做的,但无论如何让我们探索它以了解权衡。假设我们有 2f+1 个副本。如果 f+1 个副本必须在领导者声明提交之前收到消息,并且如果我们通过从至少 f+1 个副本中选择具有最完整日志的跟随者来选举新的领导者,那么,在不超过 f 次失败的情况下,领导者保证拥有所有已提交的消息。这是因为在任何 f+1 个副本中,必须至少有一个包含所有已提交消息的副本。该副本的日志将是最完整的,因此将被选为新的领导者。每个算法都必须处理许多剩余的细节(例如精确定义什么使日志更完整、确保领导者失败期间日志的一致性、或更改副本集中的服务器集合),但我们现在将忽略这些
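The overlap argument can be checked exhaustively for a small case. With 2f+1 replicas, any f+1 chosen for election must intersect the f+1 that acknowledged a commit:

```python
from itertools import combinations

f = 2
replicas = list(range(2 * f + 1))       # 2f+1 = 5 replicas
write_quorum = set(replicas[: f + 1])   # the f+1 that acknowledged a commit

# Every possible set of f+1 election participants overlaps the write
# quorum, so at least one candidate holds every committed message.
assert all(set(c) & write_quorum
           for c in combinations(replicas, f + 1))
```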

This majority vote approach has a very nice property: the latency is dependent on only the fastest servers. That is, if the replication factor is three, the latency is determined by the faster follower not the slower one.

这种多数表决方法有一个非常好的特性:延迟仅取决于最快的服务器。也就是说,如果复制因子为 3,则延迟由较快的跟随者而不是较慢的跟随者决定

There are a rich variety of algorithms in this family including ZooKeeper’s Zab, Raft, and Viewstamped Replication. The most similar academic publication we are aware of to Kafka’s actual implementation is PacificA from Microsoft.

这个家族中有丰富多样的算法,包括 ZooKeeper 的 Zab、Raft 和 Viewstamped Replication。据我们所知,与 Kafka 的实际实现最相似的学术出版物是来自 Microsoft 的 PacificA

The downside of majority vote is that it doesn’t take many failures to leave you with no electable leaders. To tolerate one failure requires three copies of the data, and to tolerate two failures requires five copies of the data. In our experience having only enough redundancy to tolerate a single failure is not enough for a practical system, but doing every write five times, with 5x the disk space requirements and 1/5th the throughput, is not very practical for large volume data problems. This is likely why quorum algorithms more commonly appear for shared cluster configuration such as ZooKeeper but are less common for primary data storage. For example in HDFS the namenode’s high-availability feature is built on a majority-vote-based journal, but this more expensive approach is not used for the data itself.

多数表决的不利之处在于,不需要很多次故障就会让你没有可选举的领导者。容忍一次故障需要三份数据,容忍两次故障需要五份数据。根据我们的经验,只有足够容忍单次故障的冗余对于实际系统来说是不够的,但是每次写入五份、磁盘空间要求是 5 倍、吞吐量是 1/5,对于大容量数据问题来说并不是很实用。这可能就是为什么仲裁算法更常出现在 ZooKeeper 等共享集群配置中,而不太常见于主数据存储的原因。例如,在 HDFS 中,namenode 的高可用性功能建立在基于多数投票的日志之上,但这种更昂贵的方法并不用于数据本身

Kafka takes a slightly different approach to choosing its quorum set. Instead of majority vote, Kafka dynamically maintains a set of in-sync replicas (ISR) that are caught-up to the leader. Only members of this set are eligible for election as leader. A write to a Kafka partition is not considered committed until all in-sync replicas have received the write. This ISR set is persisted in the cluster metadata whenever it changes. Because of this, any replica in the ISR is eligible to be elected leader. This is an important factor for Kafka’s usage model where there are many partitions and ensuring leadership balance is important. With this ISR model and f+1 replicas, a Kafka topic can tolerate f failures without losing committed messages.

Kafka 采用略微不同的方法来选择其仲裁集。Kafka 不是多数表决,而是动态维护一组追上了领导者的同步副本(ISR)。只有这个集合中的成员才有资格被选为领导者。在所有同步副本都收到写入之前,对 Kafka 分区的写入不会被视为已提交。每当 ISR 集发生变化时,它都会被持久化到集群元数据中。正因如此,ISR 中的任何副本都有资格被选举为领导者。这对 Kafka 的使用模型是一个重要因素:其中有许多分区,并且确保领导权均衡很重要。使用此 ISR 模型和 f+1 个副本,Kafka 主题可以容忍 f 次失败而不会丢失已提交的消息

For most use cases we hope to handle, we think this tradeoff is a reasonable one. In practice, to tolerate f failures, both the majority vote and the ISR approach will wait for the same number of replicas to acknowledge before committing a message (e.g. to survive one failure a majority quorum needs three replicas and one acknowledgement and the ISR approach requires two replicas and one acknowledgement). The ability to commit without the slowest servers is an advantage of the majority vote approach. However, we think it is ameliorated by allowing the client to choose whether they block on the message commit or not, and the additional throughput and disk space due to the lower required replication factor is worth it.

对于我们希望处理的大多数用例,我们认为这种权衡是合理的。实际上,为了容忍 f 次失败,多数表决和 ISR 方法都将在提交消息之前等待相同数量的副本确认(例如,为了在一次失败中幸存下来,多数仲裁需要三个副本和一个确认,而 ISR 方法需要两个副本和一个确认)。在没有最慢服务器参与的情况下提交的能力是多数表决方法的一个优势。但是,我们认为通过允许客户端选择是否阻塞等待消息提交可以改善这种情况,并且由于所需复制因子较低而带来的额外吞吐量和磁盘空间是值得的
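The replica-count arithmetic in this comparison is simple enough to state directly:

```python
def majority_vote_replicas(f: int) -> int:
    return 2 * f + 1  # replicas needed to tolerate f failures by majority vote

def isr_replicas(f: int) -> int:
    return f + 1      # replicas needed to tolerate f failures with the ISR model

# To survive one failure: a majority quorum needs 3 copies, the ISR
# approach needs 2; to survive two failures: 5 versus 3.
assert majority_vote_replicas(1) == 3 and isr_replicas(1) == 2
assert majority_vote_replicas(2) == 5 and isr_replicas(2) == 3
```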

Another important design distinction is that Kafka does not require that crashed nodes recover with all their data intact. It is not uncommon for replication algorithms in this space to depend on the existence of “stable storage” that cannot be lost in any failure-recovery scenario without potential consistency violations. There are two primary problems with this assumption. First, disk errors are the most common problem we observe in real operation of persistent data systems and they often do not leave data intact. Secondly, even if this were not a problem, we do not want to require the use of fsync on every write for our consistency guarantees as this can reduce performance by two to three orders of magnitude. Our protocol for allowing a replica to rejoin the ISR ensures that before rejoining, it must fully re-sync again even if it lost unflushed data in its crash.

另一个重要的设计区别是 Kafka 不要求崩溃的节点恢复时所有数据都完好无损。在这一领域中,复制算法依赖于“稳定存储”的存在并不少见,即一种在任何故障恢复场景中都不能丢失、否则可能违反一致性的存储。这个假设有两个主要问题。首先,磁盘错误是我们在持久数据系统的实际运维中观察到的最常见问题,它们通常不会让数据保持完整。其次,即使这不是问题,我们也不希望为了一致性保证而在每次写入时都要求使用 fsync,因为这会使性能降低两到三个数量级。我们允许副本重新加入 ISR 的协议确保:在重新加入之前,它必须再次完全重新同步,即使它在崩溃中丢失了未刷新的数据
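A toy sketch of the rejoin rule (the truncate-then-fetch shape is an illustration of the idea, not Kafka's actual fetch protocol): a crashed replica may have lost or garbled its unflushed tail, so before rejoining the ISR it discards everything past its last point of agreement with the leader and re-fetches from there.

```python
def resync_with_leader(leader_log, replica_log):
    """Truncate the replica's divergent tail, then copy the rest of
    the leader's log so both logs are identical before ISR rejoin."""
    n = 0
    while (n < len(replica_log) and n < len(leader_log)
           and replica_log[n] == leader_log[n]):
        n += 1
    return replica_log[:n] + leader_log[n:]

leader = ["a", "b", "c", "d"]
crashed = ["a", "b", "x"]  # unflushed tail was lost/corrupted in the crash
assert resync_with_leader(leader, crashed) == leader
```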

Unclean leader election: What if they all die?不干净的领导人选举:如果他们都死了怎么办?

Note that Kafka’s guarantee with respect to data loss is predicated on at least one replica remaining in sync. If all the nodes replicating a partition die, this guarantee no longer holds.

请注意,Kafka 对数据丢失的保证基于至少一个副本保持同步。如果复制分区的所有节点都死亡,则此保证不再有效

However a practical system needs to do something reasonable when all the replicas die. If you are unlucky enough to have this occur, it is important to consider what will happen. There are two behaviors that could be implemented:
然而,当所有副本都死亡时,实际系统需要做一些合理的事情。如果您不幸发生这种情况,请务必考虑会发生什么。有两种行为可以实现

1.Wait for a replica in the ISR to come back to life and choose this replica as the leader (hopefully it still has all its data).
等待 ISR 中的一个副本恢复生机并选择这个副本作为领导者(希望它仍然拥有所有数据)

2.Choose the first replica (not necessarily in the ISR) that comes back to life as the leader.
选择第一个恢复生命的副本(不一定在 ISR 中)作为领导者

This is a simple tradeoff between availability and consistency. If we wait for replicas in the ISR, then we will remain unavailable as long as those replicas are down. If such replicas were destroyed or their data was lost, then we are permanently down. If, on the other hand, a non-in-sync replica comes back to life and we allow it to become leader, then its log becomes the source of truth even though it is not guaranteed to have every committed message. By default from version 0.11.0.0, Kafka chooses the first strategy and favor waiting for a consistent replica. This behavior can be changed using configuration property unclean.leader.election.enable, to support use cases where uptime is preferable to consistency.

这是可用性和一致性之间的简单权衡。如果我们等待 ISR 中的副本,那么只要这些副本宕机,分区就将保持不可用状态。如果此类副本被毁坏或其数据丢失,那么分区将永久不可用。另一方面,如果一个不同步的副本恢复上线并且我们允许它成为领导者,那么它的日志就会成为事实来源,即使它不能保证拥有每条已提交的消息。默认情况下,从 0.11.0.0 版本开始,Kafka 选择第一种策略,倾向于等待一致的副本。可以使用配置属性 unclean.leader.election.enable 更改此行为,以支持正常运行时间优于一致性的用例
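The two behaviors can be sketched as one election function (a simplification of the controller's actual logic):

```python
def elect_leader(replicas_alive, isr, unclean_enabled=False):
    """Pick a new leader after failure. With unclean election disabled
    (the default since 0.11.0.0), only ISR members qualify; enabling it
    trades consistency for availability."""
    candidates = [r for r in replicas_alive if r in isr]
    if candidates:
        return candidates[0]
    if unclean_enabled and replicas_alive:
        return replicas_alive[0]  # may be missing committed messages
    return None                   # partition stays unavailable

assert elect_leader(["b2", "b3"], isr={"b3"}) == "b3"
assert elect_leader(["b2"], isr={"b3"}) is None
assert elect_leader(["b2"], isr={"b3"}, unclean_enabled=True) == "b2"
```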

This dilemma is not specific to Kafka. It exists in any quorum-based scheme. For example in a majority voting scheme, if a majority of servers suffer a permanent failure, then you must either choose to lose 100% of your data or violate consistency by taking what remains on an existing server as your new source of truth.

这种困境并不是 Kafka 特有的。它存在于任何基于群体的方案中。例如,在多数表决方案中,如果大多数服务器遭受永久性故障,那么您必须选择丢失 100% 的数据,或者通过将现有服务器上保留的内容作为新的真实来源来违反一致性

Availability and Durability Guarantees可用性和持久性保证

When writing to Kafka, producers can choose whether they wait for the message to be acknowledged by 0,1 or all (-1) replicas. Note that “acknowledgement by all replicas” does not guarantee that the full set of assigned replicas have received the message. By default, when acks=all, acknowledgement happens as soon as all the current in-sync replicas have received the message. For example, if a topic is configured with only two replicas and one fails (i.e., only one in sync replica remains), then writes that specify acks=all will succeed. However, these writes could be lost if the remaining replica also fails. Although this ensures maximum availability of the partition, this behavior may be undesirable to some users who prefer durability over availability. Therefore, we provide two topic-level configurations that can be used to prefer message durability over availability:

写入 Kafka 时,生产者可以选择等待消息被 0、1 还是所有 (-1) 副本确认。请注意,“所有副本的确认”并不能保证所有被分配的副本都已收到消息。默认情况下,当 acks=all 时,一旦当前所有同步副本都收到消息,就会进行确认。例如,如果一个主题只配置了两个副本并且其中一个失败了(即只剩下一个同步副本),那么指定 acks=all 的写入将成功。但是,如果剩余副本也发生故障,这些写入可能会丢失。虽然这确保了分区的最大可用性,但对于一些更看重持久性而非可用性的用户来说,这种行为可能是不受欢迎的。因此,我们提供了两个主题级配置,可用于使消息持久性优先于可用性:

1.Disable unclean leader election - if all replicas become unavailable, then the partition will remain unavailable until the most recent leader becomes available again. This effectively prefers unavailability over the risk of message loss. See the previous section on Unclean Leader Election for clarification.
禁用不干净的领导者选举——如果所有副本都不可用,那么分区将保持不可用状态,直到最近的领导者再次可用。这实际上更倾向于不可用而不是消息丢失的风险。请参阅上一节关于 Unclean Leader Election 的说明

2.Specify a minimum ISR size - the partition will only accept writes if the size of the ISR is above a certain minimum, in order to prevent the loss of messages that were written to just a single replica, which subsequently becomes unavailable. This setting only takes effect if the producer uses acks=all and guarantees that the message will be acknowledged by at least this many in-sync replicas. This setting offers a trade-off between consistency and availability. A higher setting for minimum ISR size guarantees better consistency since the message is guaranteed to be written to more replicas which reduces the probability that it will be lost. However, it reduces availability since the partition will be unavailable for writes if the number of in-sync replicas drops below the minimum threshold
指定最小 ISR 大小 - 如果 ISR 的大小超过某个最小值,分区将仅接受写入,以防止丢失仅写入单个副本的消息,该副本随后变得不可用。此设置仅在生产者使用 acks=all 并保证消息将被至少这么多同步副本确认时才生效。此设置提供了一致性和可用性之间的权衡。最小 ISR 大小的较高设置可保证更好的一致性,因为可以保证将消息写入更多副本,从而降低丢失消息的可能性。但是,它会降低可用性,因为如果同步副本的数量低于最小阈值,分区将不可用于写入
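The interaction between acks and the minimum ISR size can be sketched as a simple acceptance check (a simplification; the real broker-side validation is more involved):

```python
def accept_write(acks: str, isr_size: int, min_insync: int) -> bool:
    """Topic-level min.insync.replicas is only enforced for producers
    using acks=all (equivalently acks=-1)."""
    if acks in ("all", "-1"):
        return isr_size >= min_insync
    return True  # acks=0 / acks=1 writes are not gated on ISR size

# 3 replicas with min.insync.replicas=2: acks=all writes are rejected
# once the ISR shrinks to just the leader.
assert accept_write("all", isr_size=2, min_insync=2)
assert not accept_write("all", isr_size=1, min_insync=2)
assert accept_write("1", isr_size=1, min_insync=2)
```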

Replica Management副本管理

The above discussion on replicated logs really covers only a single log, i.e. one topic partition. However a Kafka cluster will manage hundreds or thousands of these partitions. We attempt to balance partitions within a cluster in a round-robin fashion to avoid clustering all partitions for high-volume topics on a small number of nodes. Likewise we try to balance leadership so that each node is the leader for a proportional share of its partitions.

上面关于复制日志的讨论实际上只涉及单个日志,即一个主题分区。然而,一个 Kafka 集群会管理成百上千个这样的分区。我们尝试以轮询(round-robin)方式在集群内平衡分区,以避免将高吞吐量主题的所有分区都集中在少数节点上。同样,我们尝试平衡领导权,使每个节点按比例担任其分区的领导者
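A toy version of round-robin leader placement (Kafka's real assignment also spreads the follower replicas and accounts for racks, which is omitted here):

```python
from collections import Counter

def assign_leaders(num_partitions: int, brokers: list) -> dict:
    """Round-robin leader placement so each broker leads a
    proportional share of partitions."""
    return {p: brokers[p % len(brokers)] for p in range(num_partitions)}

leaders = assign_leaders(6, ["b0", "b1", "b2"])
counts = Counter(leaders.values())
assert all(c == 2 for c in counts.values())  # 6 partitions over 3 brokers
```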

It is also important to optimize the leadership election process as that is the critical window of unavailability. A naive implementation of leader election would end up running an election per partition for all partitions a node hosted when that node failed. As discussed above in the section on replication, Kafka clusters have a special role known as the “controller” which is responsible for managing the registration of brokers. If the controller detects the failure of a broker, it is responsible for electing one of the remaining members of the ISR to serve as the new leader. The result is that we are able to batch together many of the required leadership change notifications which makes the election process far cheaper and faster for a large number of partitions. If the controller itself fails, then another controller will be elected

优化领导者选举过程也很重要,因为这是不可用的关键窗口。领导者选举的朴素实现会在某个节点失败时,为该节点托管的所有分区逐个运行选举。正如上面关于复制的部分所讨论的,Kafka 集群有一个特殊的角色,称为“控制器”,负责管理 broker 的注册。如果控制器检测到某个 broker 发生故障,它会负责从 ISR 的剩余成员中选举一个作为新的领导者。这样我们能够将许多所需的领导权变更通知批量处理在一起,这使得针对大量分区的选举过程成本更低、速度更快。如果控制器本身出现故障,则会选举出另一个控制器
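A sketch of the batched failover described above: on a single broker failure, the controller re-elects leaders for all affected partitions in one pass rather than one election at a time (a simplification of the real controller).

```python
def handle_broker_failure(failed, leaders, isrs):
    """Elect new leaders for every partition led by the failed broker,
    picking a surviving ISR member; None means the partition is offline."""
    new_leaders = dict(leaders)
    for partition, leader in leaders.items():
        if leader == failed:
            survivors = [r for r in isrs[partition] if r != failed]
            new_leaders[partition] = survivors[0] if survivors else None
    return new_leaders

leaders = {"t-0": "b1", "t-1": "b1", "t-2": "b2"}
isrs = {"t-0": ["b1", "b2"], "t-1": ["b1", "b3"], "t-2": ["b2", "b3"]}
assert handle_broker_failure("b1", leaders, isrs) == {
    "t-0": "b2", "t-1": "b3", "t-2": "b2"}
```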