The main job of mk-assignments is to produce the mapping from Executor to node+port, i.e. to assign each Executor to a specific port on a specific node, and to perform the corresponding scheduling work. The annotated code follows:
;; nimbus is the nimbus-data object; :scratch-topology-id is the id of the Topology that needs rescheduling
(defnk mk-assignments [nimbus :scratch-topology-id nil]
  (let [conf (:conf nimbus) ;; extract conf, storm-cluster-state and inimbus from nimbus-data and bind them locally
        storm-cluster-state (:storm-cluster-state nimbus)
        ^INimbus inimbus (:inimbus nimbus)
        ;; read all active Topologies from ZooKeeper and collect their ids
        topology-ids (.active-storms storm-cluster-state)
        ;; for each id in the set above, call read-topology-details to fetch the
        ;; topology-details from nimbus-data, collected into a <topology-id, topology-details> map
        topologies (into {} (for [tid topology-ids] {tid (read-topology-details nimbus tid)}))
        ;; build a Topologies object from the <topology-id, topology-details> map
        topologies (Topologies. topologies)
        ;; read the ids of all Topologies that already have assigned resources
        assigned-topology-ids (.assignments storm-cluster-state nil)
        ;; For a Topology that has an assignment but must be rescheduled (identified by
        ;; scratch-topology-id), its previous assignment is ignored, so all slots it occupied
        ;; are treated as free slots that can be scheduled again.
        existing-assignments (into {} (for [tid assigned-topology-ids]
                                        (when (or (nil? scratch-topology-id) (not= tid scratch-topology-id))
                                          {tid (.assignment-info storm-cluster-state tid nil)})))
        ;; call compute-new-topology->executor->node+port to compute a new schedule for all
        ;; Topologies, returning topology->executor->node+port
        topology->executor->node+port (compute-new-topology->executor->node+port
                                        nimbus
                                        existing-assignments
                                        topologies
                                        scratch-topology-id)
        ;; current system time in seconds
        now-secs (current-time-secs)
        ;; call basic-supervisor-details-map to read all SupervisorInfo from ZooKeeper
        ;; and convert it into a <supervisor-id, SupervisorDetails> map (details in 1)
        basic-supervisor-details-map (basic-supervisor-details-map storm-cluster-state)
        ;; process each entry of topology->executor->node+port, adding start times etc.
        ;; to build the final assignments, returning a <topology-id, Assignment> map
        new-assignments (into {} (for [[topology-id executor->node+port] topology->executor->node+port
                                       ;; look up the Topology's existing assignment by topology-id
                                       :let [existing-assignment (get existing-assignments topology-id)
                                             ;; extract all nodes from executor->node+port
                                             all-nodes (->> executor->node+port vals (map first) set)
                                             ;; resolve each node's hostname, returning a <node, hostname> map
                                             node->host (->> all-nodes
                                                             (mapcat (fn [node]
                                                                       (if-let [host (.getHostName inimbus basic-supervisor-details-map node)]
                                                                         [[node host]]
                                                                         )))
                                                             (into {}))
                                             ;; merge the <node, host> map of the existing assignment with the
                                             ;; <node, hostname> map above to get the full <node, host> relation;
                                             ;; when a node appears in both, the newly resolved hostname wins
                                             all-node->host (merge (:node->host existing-assignment) node->host)
                                             ;; call changed-executors to compare executor->node+port against the
                                             ;; existing-assignment and compute all reassigned Executors
                                             reassign-executors (changed-executors (:executor->node+port existing-assignment) executor->node+port)
                                             ;; merge the executor->start-time-secs map of the existing assignment
                                             ;; with an entry of now-secs for every reassigned Executor,
                                             ;; yielding the up-to-date <executor, start-time-secs> map
                                             start-times (merge (:executor->start-time-secs existing-assignment)
                                                                (into {}
                                                                      (for [id reassign-executors]
                                                                        [id now-secs]
                                                                        )))]]
                                   ;; build the Assignment object; its arguments are the Topology's root directory
                                   ;; on the Nimbus server, the <node, host> map, the new executor->node+port
                                   ;; mapping, and the new <executor, start-time-secs> map
                                   {topology-id (Assignment.
                                                  (master-stormdist-root conf topology-id)
                                                  (select-keys all-node->host all-nodes)
                                                  executor->node+port
                                                  start-times)}))]
    ;; for each entry of the newly computed <topology-id, assignment> map, check whether the
    ;; new schedule differs from the one currently in effect; if not, just log a message,
    ;; otherwise update the assignment stored for that Topology in ZooKeeper
    (doseq [[topology-id assignment] new-assignments
            :let [existing-assignment (get existing-assignments topology-id)
                  topology-details (.getById topologies topology-id)]]
      (if (= existing-assignment assignment)
        (log-debug "Assignment for " topology-id " hasn't changed")
        (do
          (log-message "Setting new assignment for topology id " topology-id ": " (pr-str assignment))
          (.set-assignment! storm-cluster-state topology-id assignment)
          )))
    ;; for each entry of new-assignments, first compute the newly added slots,
    ;; convert them into WorkerSlot objects to obtain a <topology-id, worker-slots> map,
    ;; and finally call assignSlots on inimbus to assign the slots
    (->> new-assignments
         (map (fn [[topology-id assignment]]
                (let [existing-assignment (get existing-assignments topology-id)]
                  [topology-id (map to-worker-slot (newly-added-slots existing-assignment assignment))]
                  )))
         (into {})
         (.assignSlots inimbus topologies))
    ))
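The helpers changed-executors and newly-added-slots used above boil down to simple map and set differences between the old and new executor->node+port mappings. A minimal Java sketch of that idea, using plain java.util collections as stand-ins for Storm's Clojure data structures (all names here are illustrative, not Storm's actual API):

```java
import java.util.*;
import java.util.stream.*;

public class AssignmentDiff {
    // Reassigned executors: entries of the new mapping that are absent from,
    // or map to a different node+port than, the old mapping.
    static Set<String> changedExecutors(Map<String, String> oldAssign,
                                        Map<String, String> newAssign) {
        return newAssign.entrySet().stream()
            .filter(e -> !e.getValue().equals(oldAssign.get(e.getKey())))
            .map(Map.Entry::getKey)
            .collect(Collectors.toSet());
    }

    // Newly added slots: node+port values that appear in the new assignment
    // but not in the old one.
    static Set<String> newlyAddedSlots(Map<String, String> oldAssign,
                                       Map<String, String> newAssign) {
        Set<String> slots = new HashSet<>(newAssign.values());
        slots.removeAll(oldAssign.values());
        return slots;
    }

    public static void main(String[] args) {
        Map<String, String> oldAssign = Map.of("e1", "node1:6700", "e2", "node1:6701");
        Map<String, String> newAssign = Map.of("e1", "node1:6700", "e2", "node2:6700", "e3", "node2:6700");
        System.out.println(changedExecutors(oldAssign, newAssign)); // e2 (moved) and e3 (new)
        System.out.println(newlyAddedSlots(oldAssign, newAssign));  // node2:6700
    }
}
```

Only the reassigned executors ("e2" and "e3" above) get a fresh start time, and only the newly added slots are handed to assignSlots.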
During this process, if a slot's Executors have not timed out but the Supervisor's ZooKeeper heartbeat has, the slot is still considered valid and may be assigned. In the worst case, the Executors assigned to it will time out, and in the next scheduling round they will not be assigned to it again.
The detailed steps of mk-assignments are as follows:
1. Read all active Topologies from ZooKeeper.
2. Read the current assignments from ZooKeeper to obtain the ids of all Topologies that already have assigned resources.
3. Compute new assignments for the Topologies:
3.1 Call compute-topology->executors to obtain the executors of every already-assigned topology.
3.2 Call update-all-heartbeats to update the heartbeats of every Topology.
3.3 Call compute-topology->alive-executors to filter topology->executors, keeping only the alive executors.
3.4 Call compute-supervisor->dead-ports to find the dead ports.
3.5 Call compute-topology->scheduler-assignment to convert the assignments in ZooKeeper into SchedulerAssignment objects.
3.6 Call missing-assignment-topologies to find the Topologies that need to be reassigned.
3.7 Call all-scheduling-slots to obtain the available slots on all Supervisor nodes.
3.8 Call read-all-supervisor-details to obtain the SupervisorDetails of every Supervisor node.
3.9 Build a backtype.storm.scheduler.Cluster.
3.10 Call scheduler.schedule to schedule all the Topologies.
3.11 Call compute-topology->executor->node+port to convert the SchedulerAssignment back into an Assignment, logging the reassignments.
4. Merge the executor->start-time-secs map of each existing assignment with an entry of the current time for every reassigned Executor to obtain the up-to-date <executor, start-time-secs> map, and add the start-times and related information to produce new-assignments.
5. Call set-assignment! to write the new assignment results to ZooKeeper.
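The start-time merge in step 4 can be pictured as: every executor that already had a start time keeps it, and every reassigned executor is stamped with the current time (the reassigned entries overwrite existing ones, matching Clojure's merge semantics where the later map wins). A minimal Java sketch under those assumptions; the names are illustrative, not Storm's actual API:

```java
import java.util.*;

public class StartTimes {
    // Merge the existing executor->start-time-secs map with fresh timestamps
    // for the reassigned executors; reassigned entries overwrite existing ones.
    static Map<String, Long> mergeStartTimes(Map<String, Long> existing,
                                             Set<String> reassigned,
                                             long nowSecs) {
        Map<String, Long> merged = new HashMap<>(existing);
        for (String executor : reassigned) {
            merged.put(executor, nowSecs); // a reassigned executor starts "now"
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Long> existing = Map.of("e1", 100L, "e2", 100L);
        Map<String, Long> merged = mergeStartTimes(existing, Set.of("e2", "e3"), 200L);
        System.out.println(merged); // e1 keeps 100; e2 and e3 get 200
    }
}
```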
mk-assignments performs a new round of task scheduling for all Topologies in the cluster. It first checks the resources occupied by running Topologies to decide whether any of them have problems and need to be reassigned; then, based on the resources currently available in the system, it assigns tasks to newly submitted Topologies. mk-assignments writes all assignment information to ZooKeeper; each Supervisor periodically checks this assignment information and performs the corresponding scheduling actions.
Note: these are my organized notes from studying Li Ming's Storm source-code analysis and Chen Minmin's Storm技术内幕与大数据实现 (Storm Internals and Big Data Implementation).