Preface
This post is part of my 2020 MIT 6.824 coursework. I completed the Raft Lab 3A experiment and passed the tests; for the earlier Raft implementation labs, see Raft Lab 2A, Raft Lab 2B, and Raft Lab 2C.
Lab 3A implements a key/value storage service at the application layer, using the Raft library built in the previous labs to provide strong consistency. Concretely, a client (Clerk) sends Get/Put/Append requests to a kvserver, and each kvserver is associated with one Raft peer. The lab centers on implementing the key/value service itself, i.e., server.go and client.go.
1. Key/value Service
1.1 Flow Diagram
The overall flow is:
- The client sends requests to the kvserver through two RPCs, Get and PutAppend.
- On receiving a request, the kvserver calls Raft's Start and waits for the command to come back on applyCh (if this Raft peer is no longer the leader, it returns an error).
- Once the command arrives on applyCh, the server applies it; for a Put, for example, it stores the key/value pair in the kvserver's map, then notifies the handler waiting in step 2 through a channel.
- After receiving the notification from step 3, the handler fills in the reply and returns it to the client.

This is, of course, a very simplified picture, and many details are discussed later. Get follows essentially the same path as PutAppend, so only a bare-bones PutAppend flow diagram is drawn here to give a high-level overview.
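The steps above can be sketched as the following self-contained simulation of the server side. The names `Op`, `start`, `applyLoop`, and `notifyCh` are my own choices for this sketch, not names the lab requires; in the real lab, `start` is `kv.rf.Start` and `applyCh` carries `raft.ApplyMsg` values from the actual Raft peer.

```go
package main

import (
	"fmt"
	"sync"
)

// Op mirrors the command a kvserver would hand to Raft's Start.
type Op struct {
	Key, Value, Kind string
}

type KVServer struct {
	mu       sync.Mutex
	data     map[string]string
	notifyCh map[int]chan Op // log index -> channel the RPC handler waits on
	applyCh  chan Op         // stands in for Raft's applyCh in this sketch
	nextIdx  int
}

// start stands in for kv.rf.Start: register a notify channel under the
// entry's log index, then submit the command for "replication".
func (kv *KVServer) start(op Op) (int, chan Op) {
	kv.mu.Lock()
	defer kv.mu.Unlock()
	idx := kv.nextIdx
	kv.nextIdx++
	ch := make(chan Op, 1)
	kv.notifyCh[idx] = ch
	go func() { kv.applyCh <- op }() // simulate Raft committing the entry
	return idx, ch
}

// applyLoop consumes committed commands in index order, applies them to
// the map, and wakes the handler waiting on the matching channel.
func (kv *KVServer) applyLoop() {
	idx := 0
	for op := range kv.applyCh {
		kv.mu.Lock()
		switch op.Kind {
		case "Put":
			kv.data[op.Key] = op.Value
		case "Append":
			kv.data[op.Key] += op.Value
		}
		ch := kv.notifyCh[idx]
		kv.mu.Unlock()
		ch <- op
		idx++
	}
}

func main() {
	kv := &KVServer{
		data:     map[string]string{},
		notifyCh: map[int]chan Op{},
		applyCh:  make(chan Op),
	}
	go kv.applyLoop()
	// PutAppend handler flow: submit, then block until the command is applied.
	_, ch := kv.start(Op{Key: "x", Value: "1", Kind: "Put"})
	<-ch
	fmt.Println(kv.data["x"]) // prints "1"
}
```

The key design point this illustrates: the RPC handler never writes to the map directly. It only submits the command and waits; all writes happen in a single apply loop, so every replica applies the same commands in the same order.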
2. Client
2.1 Arguments
In common.go, two fields are added: SequenceId and ClientId. They exist to achieve linearizability under concurrency, i.e., to guarantee that the service executes each client operation exactly once and in order.
You will need to uniquely identify client operations to ensure that the key/value service executes each one just once.
```go
type PutAppendArgs struct {
	Key   string
	Value string
	Op    string // "Put" or "Append"
	// You'll have to add definitions here.
	// Field names must start with capital letters,
	// otherwise RPC will break.
	SequenceId int64
	ClientId   int64
}

type GetArgs struct {
	Key string
	// You'll have to add definitions here.
	SequenceId int64
	ClientId   int64
}
```
2.2 Clerk Fields
Following the lab's hint, the Clerk also needs to remember the leaderId.
You will probably have to modify your Clerk to remember which server turned out to be the leader for the last RPC, and send the next RPC to that server first. This will avoid wasting time searching for the leader on every RPC, which may help you pass some of the tests quickly enough.
```go
type Clerk struct {
	servers []*labrpc.ClientEnd
	// You will have to modify this struct.
	mu         sync.Mutex
	leaderId   int
	clientId   int64
	sequenceId int64
}
```
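Each Clerk's clientId must be (probabilistically) unique across all clients. The lab skeleton's client.go ships an `nrand` helper for exactly this, which can be assigned in the Clerk's constructor; a self-contained sketch:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// nrand mirrors the helper from the lab skeleton's client.go:
// a random integer in [0, 2^62), drawn from a cryptographic source
// so that independent Clerks almost surely pick distinct ids.
func nrand() int64 {
	max := big.NewInt(int64(1) << 62)
	bigx, _ := rand.Int(rand.Reader, max)
	return bigx.Int64()
}

func main() {
	// In MakeClerk this would be: ck.clientId = nrand()
	clientId := nrand()
	fmt.Println(clientId >= 0) // prints "true"
}
```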
2.3 The Clerk's Get/PutAppend Requests
Both RPCs simply send the request to the remembered leader. If that server turns out not to be the leader, the Clerk advances leaderId and retries; if reply.Err == OK the call succeeded and the Clerk returns.
```go
func (ck *Clerk) Get(key string) string {
	// You will have to modify this function.
	args := GetArgs{
		Key: key, ClientId: ck.clientId, SequenceId: atomic.AddInt64(&ck.sequenceId, 1)}
	leaderId := ck.currentLeader()
	for {
		reply := GetReply{}
		if ck.servers[leaderId].Call("KVServer.Get", &args, &reply) {
			if reply.Err == OK {
				return reply.Value
			} else if reply.Err == ErrNoKey {
				return ""
			}
		}
		leaderId = ck.changeLeader()
		time.Sleep(1 * time.Millisecond)
	}
}
```
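`currentLeader` and `changeLeader` above are small helpers on the Clerk; their names are my own, as the lab does not prescribe them. A self-contained sketch of what they would look like, assuming a simple round-robin scan over the server list:

```go
package main

import (
	"fmt"
	"sync"
)

// A trimmed Clerk with just the fields these helpers need.
type Clerk struct {
	mu       sync.Mutex
	leaderId int
	nservers int // len(ck.servers) in the real Clerk
}

// currentLeader returns the last known leader under the lock,
// so concurrent Get/PutAppend calls see a consistent value.
func (ck *Clerk) currentLeader() int {
	ck.mu.Lock()
	defer ck.mu.Unlock()
	return ck.leaderId
}

// changeLeader advances to the next server, wrapping around,
// and remembers it for the next RPC.
func (ck *Clerk) changeLeader() int {
	ck.mu.Lock()
	defer ck.mu.Unlock()
	ck.leaderId = (ck.leaderId + 1) % ck.nservers
	return ck.leaderId
}

func main() {
	ck := &Clerk{nservers: 3}
	fmt.Println(ck.currentLeader()) // 0
	fmt.Println(ck.changeLeader())  // 1
	fmt.Println(ck.changeLeader())  // 2
	fmt.Println(ck.changeLeader())  // 0, wrapped around
}
```

Remembering the leader between RPCs is what the lab hint asks for: the first request after a leader change may still scan, but every later request goes straight to the right server.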