Since Kafka 0.8, a High Availability (HA) mechanism has been available: the failure of a single broker no longer takes down the service. As clusters keep growing, HA is essential for a message broker.
broker startup
As noted in "Kafka源码阅读 —— KafkaController(1)", KafkaServer.startup is called when a broker starts. It creates a KafkaHealthcheck object and calls KafkaHealthcheck.startup, which registers a ZooKeeper session-expiry listener: whenever an expired session is successfully re-established, KafkaHealthcheck.register() is invoked. register() is also called directly at broker startup; the code is as follows:
```scala
def register() {
  // determine the hostname to advertise for this broker
  val advertisedHostName =
    if (advertisedHost == null || advertisedHost.trim.isEmpty)
      InetAddress.getLocalHost.getCanonicalHostName
    else
      advertisedHost
  val jmxPort = System.getProperty("com.sun.management.jmxremote.port", "-1").toInt
  // write this broker's info to the ephemeral znode /brokers/ids/[brokerId]
  ZkUtils.registerBrokerInZk(zkClient, brokerId, advertisedHostName, advertisedPort, zkSessionTimeoutMs, jmxPort)
}
```
register() creates the ephemeral node [brokerId] under /brokers/ids/ and writes the broker's hostname, port, and other information into it. When the broker dies, its ZooKeeper session expires and the ephemeral node, together with its data, is deleted automatically.
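The HA behaviour here rests on ZooKeeper's ephemeral-node semantics. Below is a toy Python model of just that mechanism, assuming nothing about the real client API: `ZkSim` and its methods are invented for illustration (the real code goes through ZkUtils and zkclient), but the session-expiry behaviour it mimics is what deletes /brokers/ids/[brokerId] when a broker dies.

```python
# Toy model of ZooKeeper ephemeral-node semantics used by register().
# ZkSim is a made-up name for illustration; Kafka uses ZkUtils/zkclient.

class ZkSim:
    def __init__(self):
        self.nodes = {}  # path -> (data, owning session)

    def create_ephemeral(self, path, data, session):
        # An ephemeral node is tied to the session that created it.
        self.nodes[path] = (data, session)

    def expire_session(self, session):
        # On session expiry ZooKeeper deletes every ephemeral node the
        # session owned -- this is what removes /brokers/ids/[brokerId].
        self.nodes = {p: (d, s) for p, (d, s) in self.nodes.items() if s != session}

    def children(self, parent):
        # List child node names under a parent path, like getChildren().
        prefix = parent.rstrip("/") + "/"
        return sorted(p[len(prefix):] for p in self.nodes if p.startswith(prefix))

zk = ZkSim()
zk.create_ephemeral("/brokers/ids/0", {"host": "b0", "port": 9092}, session="s0")
zk.create_ephemeral("/brokers/ids/1", {"host": "b1", "port": 9092}, session="s1")
print(zk.children("/brokers/ids"))   # ['0', '1']
zk.expire_session("s1")              # broker 1 "dies"
print(zk.children("/brokers/ids"))   # ['0']
```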
Any change to the children of the ZooKeeper path /brokers/ids/ triggers BrokerChangeListener.handleChildChange on the controller; the code is as follows:
```scala
def handleChildChange(parentPath: String, currentBrokerList: java.util.List[String]) {
  inLock(controllerContext.controllerLock) {
    if (hasStarted.get) {
      ControllerStats.leaderElectionTimer.time {
        try {
          // current broker ids, derived from the child znode names
          val curBrokerIds = currentBrokerList.map(_.toInt).toSet
          // brokers that just came online
          val newBrokerIds = curBrokerIds -- controllerContext.liveOrShuttingDownBrokerIds
          // fetch broker info for the new brokers
          val newBrokerInfo = newBrokerIds.map(ZkUtils.getBrokerInfo(zkClient, _))
          val newBrokers = newBrokerInfo.filter(_.isDefined).map(_.get)
          // brokers missing from the new list are considered dead
          val deadBrokerIds = controllerContext.liveOrShuttingDownBrokerIds -- curBrokerIds
          // refresh the cached set of live brokers
          controllerContext.liveBrokers = curBrokerIds.map(ZkUtils.getBrokerInfo(zkClient, _)).filter(_.isDefined).map(_.get)
          // open a channel to each new broker in the channel manager
          newBrokers.foreach(controllerContext.controllerChannelManager.addBroker(_))
          // drop the channels to brokers that went away
          deadBrokerIds.foreach(controllerContext.controllerChannelManager.removeBroker(_))
          if (newBrokerIds.size > 0)
            // bring the newly started brokers into the cluster
            controller.onBrokerStartup(newBrokerIds.toSeq)
          if (deadBrokerIds.size > 0)
            // handle the brokers that died
            controller.onBrokerFailure(deadBrokerIds.toSeq)
        } catch {
          case e: Throwable => error("Error while handling broker changes", e)
        }
      }
    }
  }
}
```
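The heart of handleChildChange is a pair of set differences between the child list read from /brokers/ids and the controller's cached broker set. A minimal Python sketch of that computation (the function name `diff_brokers` is ours; the two results correspond to newBrokerIds and deadBrokerIds above):

```python
# Sketch of the broker-diff logic in handleChildChange: compare the child
# znode names from /brokers/ids with the controller's cached broker-id set.
def diff_brokers(current_children, live_or_shutting_down):
    # Child znode names are strings like "1"; the cache holds ints.
    cur = {int(b) for b in current_children}
    new_brokers = cur - live_or_shutting_down    # newBrokerIds
    dead_brokers = live_or_shutting_down - cur   # deadBrokerIds
    return new_brokers, dead_brokers

# Broker 4 appeared, broker 3's ephemeral node vanished:
new, dead = diff_brokers(["1", "2", "4"], {1, 2, 3})
print(new, dead)  # {4} {3}
```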
When a broker starts, controllerChannelManager.addBroker opens a channel to it, over which the controller sends its three kinds of control requests (LeaderAndIsrRequest, StopReplicaRequest, and UpdateMetadataRequest). onBrokerStartup is then invoked to bring the broker into the cluster, proceeding as follows:

- Send an UpdateMetadataRequest to the new brokers. Every broker holds a MetadataCache object containing the state of every partition in the cluster plus the list of alive brokers;
- Move all replicas on the new brokers to the OnlineReplica state (when a broker goes offline, its replicas move to OfflineReplica). The OfflineReplica -> OnlineReplica transition sends a LeaderAndIsrRequest to the replica; on receiving it, the replica starts syncing its log from the leader and, once caught up, rejoins the ISR;
- Trigger the transition of partitions in the NewPartition and OfflinePartition states to OnlinePartition. Both NewPartition -> OnlinePartition and OfflinePartition -> OnlinePartition can fail when no broker hosting a replica is alive, so the transition must be retried whenever a new broker comes online;
- For partitions with an in-flight reassignment, invoke onPartitionReassignment.
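The replica transitions above can be pictured as a small state machine. A hedged Python sketch follows: the state names are Kafka's, but the transition table is a simplified subset, and the real ReplicaStateMachine also sends LeaderAndIsrRequest/StopReplicaRequest as side effects of each transition, which this toy omits.

```python
# Toy replica state machine; a simplified subset of Kafka's ReplicaStateMachine.
VALID_TRANSITIONS = {
    "NewReplica":     {"OnlineReplica", "OfflineReplica"},
    "OnlineReplica":  {"OfflineReplica"},
    "OfflineReplica": {"OnlineReplica", "NonExistentReplica"},
}

def handle_state_change(state, target):
    # Reject transitions not in the table, as handleStateChanges would.
    if target not in VALID_TRANSITIONS.get(state, set()):
        raise ValueError("illegal transition %s -> %s" % (state, target))
    return target

# A broker comes back: its offline replicas go online again.
print(handle_state_change("OfflineReplica", "OnlineReplica"))  # OnlineReplica
```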
broker failure
When a broker crashes, its ZooKeeper session expires and the ephemeral node is deleted, so the handleChildChange above calls controller.onBrokerFailure; the code is as follows:
```scala
def onBrokerFailure(deadBrokers: Seq[Int]) {
  // ... (code omitted)

  // partitions whose leader was on one of the dead brokers
  val partitionsWithoutLeader = controllerContext.partitionLeadershipInfo.filter(partitionAndLeader =>
    deadBrokersSet.contains(partitionAndLeader._2.leaderAndIsr.leader) &&
      !deleteTopicManager.isTopicQueuedUpForDeletion(partitionAndLeader._1.topic)).keySet
  // a partition is offline when its leader is offline: move those partitions to OfflinePartition
  partitionStateMachine.handleStateChanges(partitionsWithoutLeader, OfflinePartition)
  // the partitions above are now offline; trigger leader re-election for them
  partitionStateMachine.triggerOnlinePartitionStateChange()

  // ... (code omitted)

  // take the replicas on the dead brokers offline, i.e. move them to OfflineReplica
  replicaStateMachine.handleStateChanges(activeReplicasOnDeadBrokers, OfflineReplica)

  // ... (code omitted)
}
```
In the code above, the main steps performed when a broker goes offline are:

1. Move every partition whose leader replica was on a dead broker to the OfflinePartition state. Whether a partition is online hinges entirely on whether its leader replica is online;
2. Trigger the NewPartition/OfflinePartition -> OnlinePartition transition. This re-elects a leader for the partitions taken offline in the previous step; once a new leader exists, a LeaderAndIsrRequest is sent to all relevant brokers, so both producers and consumers switch over seamlessly. Of course, leader election can fail, e.g. when no replica is alive; the partition then stays in OfflinePartition until some broker comes back online;
3. Move all replicas on the dead brokers to the OfflineReplica state. This sends a StopReplicaRequest to the broker hosting the replica and removes the replica from its partition's ISR (a step that may itself trigger a leader re-election), and finally sends a LeaderAndIsrRequest to the partition's remaining replicas.
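The re-election in step 2 can be sketched as follows. This is modeled loosely on Kafka's OfflinePartitionLeaderSelector, under the assumption that a live ISR member is preferred and that falling back to any live assigned replica (an "unclean" election, which can lose data) is only done when explicitly allowed; `elect_leader` and its signature are ours for illustration.

```python
# Hedged sketch of offline-partition leader election: prefer a live replica
# from the ISR; only fall back to a live assigned replica if unclean election
# is acceptable. Returns (new leader or None, new ISR).
def elect_leader(assigned, isr, live, unclean_ok=False):
    live_isr = [r for r in isr if r in live]
    if live_isr:
        return live_isr[0], live_isr                     # clean election
    if unclean_ok:
        live_assigned = [r for r in assigned if r in live]
        if live_assigned:
            return live_assigned[0], [live_assigned[0]]  # may lose data
    return None, []                  # election fails; partition stays offline

print(elect_leader([1, 2, 3], [1, 2], live={2, 3}))  # (2, [2])
print(elect_leader([1, 2, 3], [1], live={3}))        # (None, [])
```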