Notes on the CRAQ Paper

CRAQ Implementation

Link: "Object Storage on CRAQ: High-throughput chain replication for read-mostly workloads" paper summary - BrianLeeLXT - 博客园

Core Ideas

CRAQ itself relies on ZooKeeper in several places, for example for failure recovery and for handling split-brain.

ZooKeeper provides APIs for tracking cluster membership and for building leader election, service discovery, and similar functionality.

Dubbo's service-discovery module, for instance, is implemented on top of ZooKeeper.

CRAQ and Raft are different mechanisms for achieving fault tolerance. CRAQ reduces the write load on any single node by spreading writes along the chain: each node forwards a write only to its direct successor, one hop at a time, rather than the leader sending it to all followers.

Every CRAQ node can serve reads, whereas in Raft-like algorithms only the leader serves reads.
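The write path described above can be sketched as a toy simulation (an in-memory model with a single key and no failures; the class and method names are illustrative, not from the paper):

```python
# Minimal sketch of chain replication's write path: each node forwards
# a write only to its direct successor; reaching the tail means commit.

class Node:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.successor = None  # set when the chain is assembled

    def write(self, value):
        self.value = value
        if self.successor is not None:
            # Forward down the chain: one message to one peer,
            # instead of the head broadcasting to every replica.
            return self.successor.write(value)
        return "committed"  # tail reached: the write is committed

# Assemble a 3-node chain: head -> mid -> tail.
head, mid, tail = Node("head"), Node("mid"), Node("tail")
head.successor, mid.successor = mid, tail

status = head.write("v1")
assert status == "committed"
assert [n.value for n in (head, mid, tail)] == ["v1", "v1", "v1"]
```

Note how the head's outgoing load is constant regardless of chain length; in a leader-broadcast scheme the leader's load grows with the number of replicas.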

Why can CRAQ serve reads from replicas linearizably but Raft/ZooKeeper/&c cannot?
  Relies on being a chain, so that *all* nodes see each
    write before the write commits, so nodes know about
    all writes that might have committed, and thus know when
    to ask the tail.
  Raft/ZooKeeper can't do this because leader can proceed with a mere
    majority, so can commit without all followers seeing a write,
    so followers are not aware when they have missed a committed write.
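The clean-versus-dirty read rule above can be sketched in a toy model (hypothetical in-memory classes; the real CRAQ tracks this per object across the chain):

```python
# Sketch of CRAQ's per-node read rule: a node keeps possibly several
# versions of an object; if the newest is "clean" (known committed) it
# answers locally, otherwise it asks the tail which version committed.

class Tail:
    def __init__(self):
        self.committed_version = 0  # highest committed version number

class CraqNode:
    def __init__(self, tail):
        self.versions = {0: "v0"}   # version number -> value
        self.dirty = set()          # versions seen but not known committed
        self.tail = tail

    def apply_write(self, version, value):
        self.versions[version] = value
        self.dirty.add(version)     # seen, but commit not yet known

    def ack_commit(self, version):
        self.dirty.discard(version) # commit ack propagated back up

    def read(self):
        latest = max(self.versions)
        if latest not in self.dirty:
            return self.versions[latest]  # clean: answer locally
        # Dirty: the node knows a write *might* have committed,
        # so it asks the tail for the committed version number.
        return self.versions[self.tail.committed_version]

tail = Tail()
node = CraqNode(tail)
node.apply_write(1, "v1")       # write in flight, not yet committed
assert node.read() == "v0"      # dirty -> serve the tail's committed version
tail.committed_version = 1
node.ack_commit(1)
assert node.read() == "v1"      # clean -> served locally, no tail query
```

The chain property is what makes this safe: because every node sees a write before it commits, a node's dirty set covers every write that could possibly have committed, so it always knows when it must consult the tail.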

Although CRAQ offers stronger read capability, this does not mean CRAQ is superior to Raft-style algorithms.

Does that mean CRAQ is strictly more powerful than Raft &c?
  No.
  All CRAQ replicas have to participate for any write to commit.
  If a node isn't reachable, CRAQ must wait.
  So not immediately fault-tolerant in the way that ZK and Raft are.
  CR has the same limitation.
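The contrast in commit conditions can be reduced to two one-line predicates (a toy model, not either system's actual code):

```python
# Toy comparison of commit conditions: chain replication (and CRAQ)
# commit a write only once every replica has seen it, while Raft
# commits as soon as a bare majority has.

def chain_can_commit(acks, replicas):
    return acks == replicas          # all nodes must participate

def raft_can_commit(acks, replicas):
    return acks > replicas // 2      # a majority suffices

# With 5 replicas and one unreachable node (only 4 acks):
assert raft_can_commit(4, 5)         # Raft keeps committing
assert not chain_can_commit(4, 5)    # the chain must wait
```

This is exactly why CRAQ needs an external configuration manager to cut the unreachable node out of the chain before writes can resume.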


Raft/ZooKeeper and CRAQ can be used together

How can we safely make use of a replication system that can't handle partition?
  A single "configuration manager" must choose head, chain, tail.
  Everyone (servers, clients) must obey or stop.
    Regardless of who they locally think is alive/dead.
  A configuration manager is a common and useful pattern.
    It's the essence of how GFS (master) and VMware-FT (test-and-set server) work.
    Usually Paxos/Raft/ZK for config service,
      data sharded over many replica groups,
      CR or something else fast for each replica group.
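The configuration-manager pattern above can be sketched as follows (the names `ChainConfig`, `epoch`, and `remove_node` are illustrative assumptions; in practice the manager's state would itself be replicated with Paxos/Raft/ZK):

```python
# Sketch of the pattern: a single configuration manager is the sole
# authority on chain membership. Servers and clients act only on the
# config it publishes, never on their own failure suspicions.

from dataclasses import dataclass

@dataclass(frozen=True)
class ChainConfig:
    epoch: int    # increases every time the chain changes
    nodes: tuple  # ordered: nodes[0] is the head, nodes[-1] is the tail

class ConfigManager:
    def __init__(self, nodes):
        self.config = ChainConfig(epoch=1, nodes=tuple(nodes))

    def remove_node(self, dead):
        # Only the manager decides who is in the chain; a server that
        # merely *suspects* a peer is dead must keep obeying the old
        # config (or stop) until a new epoch is published.
        survivors = tuple(n for n in self.config.nodes if n != dead)
        self.config = ChainConfig(self.config.epoch + 1, survivors)
        return self.config

mgr = ConfigManager(["s1", "s2", "s3"])
cfg = mgr.remove_node("s2")
assert cfg.epoch == 2
assert (cfg.nodes[0], cfg.nodes[-1]) == ("s1", "s3")  # new head and tail
```

The epoch number is what lets everyone reject stale instructions: any message tagged with an old epoch can be safely ignored, which is how the system avoids split-brain even though the chain itself cannot tolerate partitions.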
