As a Storm beginner, the first thing to understand is how Storm deploys and submits a topology. In other words: what does Storm do after we run the command `storm jar myjarpath mytopologyclass args`? I went through the official Apache Storm documentation; below is my translation of it, along with some of my own understanding.
Official link: Lifecycle of a Storm Topology
[Translation] Lifecycle of a Storm Topology
(NOTE: this article is based on Storm 0.7.1. Later versions changed many things, such as the split between tasks and executors, and the source path moving from src/ to storm-core/src.)
This article explains in detail the lifecycle of a topology after you run `storm jar`: uploading the topology to Nimbus, supervisors starting and stopping workers, workers and tasks setting themselves up, how Nimbus monitors topologies, and how a topology is shut down when it is killed.
A few important notes:
1. The topology that actually runs is different from the user-defined one, because implicit streams and an acker bolt are added to it.
2. The actual topology is created by the system-topology! function, which is used both when Nimbus creates tasks for the topology and when workers route messages.
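To make note 2 concrete, here is a minimal Python sketch (not Storm's actual code) of what system-topology! conceptually does: keep the user's components and add a system-level acker bolt wired to ack/fail streams. The component name `__acker` matches Storm's convention, but the dict layout and stream names here are illustrative assumptions.

```python
# Hypothetical sketch of system-topology!: augment the user's topology with
# a system acker bolt and the implicit ack/fail streams it listens on.
def system_topology(user_topology):
    topology = dict(user_topology)            # don't mutate the user's input
    components = dict(topology["components"])
    # Storm names the system acker bolt "__acker"; it tracks tuple trees so
    # spouts can be told when a tuple is fully processed or has failed.
    components["__acker"] = {"type": "bolt",
                             "streams": ["__ack_ack", "__ack_fail"]}
    topology["components"] = components
    return topology

user = {"components": {
    "sentence-spout": {"type": "spout", "streams": ["default"]},
    "split-bolt":     {"type": "bolt",  "streams": ["default"]},
}}
real = system_topology(user)   # the topology that actually runs
```

The user-defined topology is left untouched; only the "real" topology handed to Nimbus and the workers carries the extra system components.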
1. Launching a Topology
- The `storm jar` command runs the main method of your submitted class with the specified arguments. The only notable thing it does is set the `storm.jar` environment variable, which StormSubmitter uses later to upload the jar.
def jar(jarfile, klass, *args):
    """Syntax: [storm jar topology-jar-path class ...]
    Runs the main method of class with the specified arguments.
    The storm jars and configs in ~/.storm are put on the classpath.
    The process is configured so that StormSubmitter
    (http://nathanmarz.github.com/storm/doc/backtype/storm/StormSubmitter.html)
    will upload the jar at topology-jar-path when the topology is submitted.
    """
    exec_storm_class(
        klass,
        jvmtype="-client",
        extrajars=[jarfile, CONF_DIR, STORM_DIR + "/bin"],
        args=args,
        childopts="-Dstorm.jar=" + jarfile)
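As a rough sketch of what `exec_storm_class` ends up assembling: the topology jar, the `~/.storm` config dir, and Storm's bin/ go on the classpath, and `-Dstorm.jar` records which jar to upload later. The `build_storm_command` helper and the example paths below are hypothetical; only the flags mirror the call above.

```python
import os

def build_storm_command(klass, jvmtype, extrajars, args, childopts):
    # Assemble a java invocation the way exec_storm_class conceptually does:
    # JVM type, child opts (-Dstorm.jar=...), classpath entries, then the
    # user's class and its arguments. Assumes a plain `java` launcher.
    classpath = os.pathsep.join(extrajars)
    return ["java", jvmtype, childopts, "-cp", classpath, klass] + list(args)

cmd = build_storm_command(
    klass="mytopologyclass",
    jvmtype="-client",
    extrajars=["myjarpath", "~/.storm", "/usr/lib/storm/bin"],
    args=["arg1"],
    childopts="-Dstorm.jar=myjarpath")
```

Because `storm.jar` is a plain system property, StormSubmitter can read it from inside the running JVM without any coordination with the launcher script.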
- When you submit a topology with StormSubmitter.submitTopology, StormSubmitter does the following:
- First, StormSubmitter uploads the jar to Nimbus through the Nimbus Thrift interface (if it has not been uploaded already).
- beginFileUpload returns a path in Nimbus's inbox (presumably the location where the jar ends up on Nimbus).
- uploadChunk uploads the jar 15 KB at a time.
- finishFileUpload is called when the upload is complete.
- The implementation of these three calls is shown below.
- Second, StormSubmitter calls submitTopology on the Nimbus Thrift interface.
- The topology configuration is serialized as JSON.
- The submitTopology call includes the Nimbus inbox path where the jar was uploaded.
- submitJar, the jar-upload step described above:
// how the submitter uploads the jar
public static String submitJar(Map conf, String localJar) {
    NimbusClient client = NimbusClient.getConfiguredClient(conf);
    try {
        String uploadLocation = client.getClient().beginFileUpload();
        LOG.info("Uploading topology jar " + localJar + " to assigned location: " + uploadLocation);
        BufferFileInputStream is = new BufferFileInputStream(localJar);
        while(true) {
            byte[] toSubmit = is.read();
            if(toSubmit.length == 0) break;
            client.getClient().uploadChunk(uploadLocation, ByteBuffer.wrap(toSubmit));
        }
        client.getClient().finishFileUpload(uploadLocation);
        LOG.info("Successfully uploaded topology jar to assigned location: " + uploadLocation);
        return uploadLocation;
    } catch(Exception e) {
        throw new RuntimeException(e);
    } finally {
        client.close();
    }
}
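The begin/chunk/finish protocol in submitJar can be simulated in Python to make the chunking concrete. `FakeNimbus`, its method names, and the inbox path below are illustrative stand-ins for the Thrift client, not Storm's actual API; the 15 KB chunk size comes from the text above.

```python
CHUNK_SIZE = 15 * 1024  # the client reads the jar 15 KB at a time

class FakeNimbus:
    """Toy stand-in for the Nimbus Thrift interface (illustrative only)."""
    def __init__(self):
        self.files = {}
    def begin_file_upload(self):
        location = "/nimbus/inbox/stormjar-1234.jar"  # hypothetical inbox path
        self.files[location] = bytearray()
        return location
    def upload_chunk(self, location, chunk):
        self.files[location].extend(chunk)
    def finish_file_upload(self, location):
        self.files[location] = bytes(self.files[location])

def submit_jar(nimbus, jar_bytes):
    # Same loop shape as submitJar above: read fixed-size chunks until done.
    location = nimbus.begin_file_upload()
    for offset in range(0, len(jar_bytes), CHUNK_SIZE):
        nimbus.upload_chunk(location, jar_bytes[offset:offset + CHUNK_SIZE])
    nimbus.finish_file_upload(location)
    return location

nimbus = FakeNimbus()
data = bytes(40 * 1024)        # a fake 40 KB jar -> three chunks
loc = submit_jar(nimbus, data)
```

Splitting the upload into small chunks keeps each Thrift message bounded, so arbitrarily large jars can be shipped over the same RPC interface.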
- Nimbus receives the topology submission. code
- Nimbus normalizes the topology configuration. The main purpose of normalization is to ensure that every single task has the same serialization registrations, which is critical for serialization to work correctly. code
- Nimbus sets up the topology's static state. code
- Jars and configs are kept on Nimbus's local filesystem because they are too big for ZooKeeper. They are copied into {nimbus local dir}/stormdist/{topology id}.
- setup-storm-static writes the task -> component mapping into ZK.
- setup-heartbeats creates a ZK directory in which tasks can heartbeat.
- Nimbus calls mk-assignments to assign tasks to machines. The assignment contains:
- master-code-dir: used by the supervisors to download the correct jars/configs for the topology from Nimbus.
- task->node+port: a map from a task id to the worker that task should be running on. A worker is identified by a node/port pair: node says which machine, and port says which process on that machine.
- node->host: a map from a node id to a hostname, so workers know which machines the other workers they need to talk to are on. Node ids identify supervisors, so that one or more supervisors can run on a single machine.
- task->start-time-secs: a map from a task id to the timestamp at which Nimbus launched that task. Nimbus uses this when monitoring topologies: tasks are given a longer heartbeat timeout when they are first launched, configurable via "nimbus.task.launch.secs".
;; the mk-assignments function
(defnk mk-assignments [nimbus storm-id :scratch? false]
  (log-debug "Determining assignment for " storm-id)
  (let [conf (:conf nimbus)
        storm-cluster-state (:storm-cluster-state nimbus)
        callback (fn [& ignored] (transition! nimbus storm-id :monitor))
        node->host (get-node->host storm-cluster-state callback)
        existing-assignment (.assignment-info storm-cluster-state storm-id nil)
        task->node+port (compute-new-task->node+port conf storm-id existing-assignment
                                                     storm-cluster-state callback
                                                     (:task-heartbeats-cache nimbus)
                                                     scratch?)
        all-node->host (merge (:node->host existing-assignment) node->host)
        reassign-ids (changed-ids (:task->node+port existing-assignment) task->node+port)
        now-secs (current-time-secs)
        start-times (merge (:task->start-time-secs existing-assignment)
                           (into {} (for [id reassign-ids] [id now-secs])))
        assignment (Assignment.
                     (master-stormdist-root conf storm-id)
                     (select-keys all-node->host (map first (vals task->node+port)))
                     task->node+port
                     start-times)]
    ;; tasks figure out what tasks to talk to by looking at topology at runtime
    ;; only log/set when there's been a change to the assignment
    (if (= existing-assignment assignment)
      (log-debug "Assignment for " storm-id " hasn't changed")
      (do
        (log-message "Setting new assignment for storm id " storm-id ": " (pr-str assignment))
        (.set-assignment! storm-cluster-state storm-id assignment)))))
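The bookkeeping inside mk-assignments can be sketched in Python: tasks that are new or whose node/port moved count as reassigned and get a fresh start time (used later by the launch-timeout heartbeat check), everything else keeps its old timestamp, and jars/configs are addressed under the stormdist root. Function names are translated from the Clojure above; the sample ids and times are made up.

```python
import os

def changed_ids(old_task_to_slot, new_task_to_slot):
    # A task is "reassigned" if it is new or its worker (node/port) moved.
    return {tid for tid, slot in new_task_to_slot.items()
            if old_task_to_slot.get(tid) != slot}

def merge_start_times(old_start_times, reassigned_ids, now_secs):
    # Reassigned tasks are stamped with the current time; the rest keep
    # their old timestamps, matching the merge in mk-assignments.
    merged = dict(old_start_times)
    merged.update({tid: now_secs for tid in reassigned_ids})
    return merged

def master_stormdist_root(nimbus_local_dir, storm_id):
    # Where Nimbus keeps this topology's jars/configs on local disk:
    # {nimbus local dir}/stormdist/{topology id}
    return os.path.join(nimbus_local_dir, "stormdist", storm_id)

old = {1: ("node-a", 6700), 2: ("node-a", 6701)}
new = {1: ("node-a", 6700), 2: ("node-b", 6700), 3: ("node-b", 6701)}
reassigned = changed_ids(old, new)          # task 2 moved, task 3 is new
starts = merge_start_times({1: 100, 2: 100}, reassigned, now_secs=200)
code_dir = master_stormdist_root("/var/storm", "mytopology-1-1")
```

Only the changed tasks get new start times, which is why freshly launched tasks can be given the longer "nimbus.task.launch.secs" timeout without resetting the clock for healthy ones.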
- Once the topology is assigned, it is initially in deactivated mode. start-storm writes a StormBase record (shown below) into ZK; with that data in place, the cluster knows the topology is active and spouts begin emitting tuples.
(defn- start-storm [storm-name storm-cluster-state storm-id]
  (log-message "Activating " storm-name ": " storm-id)
  (.activate-storm! storm-cluster-state
                    storm-id
                    (StormBase. storm-name
                                (current-time-secs)
                                {:type :active})))
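The activation step can be mimicked with a toy cluster-state store. `FakeClusterState` is purely illustrative; the record fields mirror the StormBase constructor arguments in the Clojure above (name, launch time, status).

```python
import time

class FakeClusterState:
    """Toy stand-in for the ZK-backed cluster state (illustrative only)."""
    def __init__(self):
        self.storm_bases = {}
    def activate_storm(self, storm_id, storm_base):
        self.storm_bases[storm_id] = storm_base

def start_storm(storm_name, cluster_state, storm_id, now_secs):
    # Mirrors start-storm: write a StormBase-like record with active status.
    cluster_state.activate_storm(storm_id, {
        "storm-name": storm_name,
        "launch-time-secs": now_secs,
        "status": {"type": "active"},
    })

state = FakeClusterState()
start_storm("mytopology", state, "mytopology-1-1", now_secs=int(time.time()))
```

Because activation is just a record in shared state, every supervisor and worker can discover that the topology went active without Nimbus contacting them directly.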
- TODO: cluster state diagram (showing all the nodes and the components on each).
- The supervisor runs two functions in the background to synchronize cluster state with local state; as a result:
Workers start themselves up via the mk-worker function. code
Tasks launch themselves via the mk-task function. code
2. Monitoring a Topology
- Nimbus monitors the topology throughout its lifetime.
- It creates a recurring timer thread (schedule-recurring) to monitor the topology.
- Nimbus's behavior is modeled as a finite state machine. code
- The "monitor" event is fired on a topology every "nimbus.monitor.freq.secs" seconds; it calls reassign-topology through reassign-transition. code
- reassign-topology calls mk-assignments to (incrementally) update the topology.
- mk-assignments checks heartbeats and reassigns workers where necessary.
- Any reassignment changes the state in ZK, which triggers the supervisors to synchronize and start/stop workers.
(schedule-recurring (:timer nimbus)
                    0
                    (conf NIMBUS-MONITOR-FREQ-SECS)
                    (fn []
                      (doseq [storm-id (.active-storms (:storm-cluster-state nimbus))]
                        (transition! nimbus storm-id :monitor))
                      (do-cleanup nimbus)))
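A deterministic sketch of the recurring monitor event above, with loop iterations standing in for "nimbus.monitor.freq.secs" intervals (the function and topology names here are illustrative, not Storm's API):

```python
def run_monitor_ticks(active_storms, ticks):
    # Each iteration plays the role of one nimbus.monitor.freq.secs interval:
    # every active topology receives a :monitor transition (which in Nimbus
    # leads to reassign-topology -> mk-assignments), then cleanup runs once.
    transitions = []
    cleanups = 0
    for _ in range(ticks):
        for storm_id in active_storms:
            transitions.append((storm_id, ":monitor"))
        cleanups += 1
    return transitions, cleanups

transitions, cleanups = run_monitor_ticks(["topo-1", "topo-2"], ticks=3)
```

Driving the ticks from a loop instead of a real timer keeps the behavior easy to follow: each topology is visited once per interval, and cleanup runs once per interval regardless of how many topologies are active.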
3. Killing a Topology
- The "storm kill" command kills a topology by calling a method on the Nimbus Thrift interface.
(defn -main [& args]
  (with-command-line args
    "Kill a topology"
    [[wait w "Override the amount of time to wait after deactivating before killing" nil]
     posargs]
    (let [name (first posargs)
          opts (KillOptions.)]
      (if wait (.set_wait_secs opts (Integer/parseInt wait)))
      (with-configured-nimbus-connection nimbus
        (.killTopologyWithOpts nimbus name opts)
        (log-message "Killed topology: " name)))))
- Nimbus receives the kill command. code
- Nimbus applies the kill transition to the topology (as I understand it). code
- The kill-transition function changes the topology's status to killed and schedules a remove event to fire after "wait time seconds". code
- The wait time defaults to the topology's message timeout, but can be overridden with the -w flag to storm kill.
- This causes the topology to be deactivated for the wait period before it actually shuts down, giving it time to finish processing whatever it is currently working on.
Changing the status during the kill transition makes the kill protocol fault-tolerant: on startup, if a topology's status is "killed", Nimbus schedules a remove event to clean it up. code
Removing a topology clears out its assignment and static state from ZK. code
- A separate cleanup thread runs do-cleanup, which deletes the local heartbeat directory and the directory where the jars/configs are stored. code
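The kill-and-cleanup flow can be sketched with a simulated clock: kill deactivates the topology immediately, the remove event only fires once wait_secs (defaulting here to an assumed 30-second message timeout) have passed, and cleanup then deletes the topology's local directories. All class, function, and directory names below are illustrative.

```python
import os
import shutil
import tempfile

TOPOLOGY_MESSAGE_TIMEOUT_SECS = 30  # assumed default; `storm kill -w` overrides

class KillTransition:
    """Toy model of the killed -> removed state machine (illustrative)."""
    def __init__(self, wait_secs=None):
        # Without an override, the wait equals the topology message timeout,
        # so in-flight tuples get a chance to finish processing.
        self.wait_secs = (TOPOLOGY_MESSAGE_TIMEOUT_SECS
                          if wait_secs is None else wait_secs)
        self.state = "active"
        self.killed_at = None

    def kill(self, now):
        self.state = "killed"      # topology is deactivated immediately
        self.killed_at = now

    def tick(self, now):
        # The remove event fires once the wait time has elapsed.
        if self.state == "killed" and now - self.killed_at >= self.wait_secs:
            self.state = "removed"

def do_cleanup(nimbus_local_dir, dead_topology_ids):
    # Delete each dead topology's jars/configs and heartbeat directories
    # from Nimbus's local filesystem (the "heartbeats" dir name is assumed).
    for topo_id in dead_topology_ids:
        for subdir in ("stormdist", "heartbeats"):
            shutil.rmtree(os.path.join(nimbus_local_dir, subdir, topo_id),
                          ignore_errors=True)

t = KillTransition()
t.kill(now=0)
t.tick(now=10)   # still within the wait: stays "killed"
t.tick(now=30)   # wait elapsed: becomes "removed"

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "stormdist", "topo-1"))
os.makedirs(os.path.join(root, "heartbeats", "topo-1"))
do_cleanup(root, ["topo-1"])
```

Keeping the status in shared state between "killed" and "removed" is what makes the protocol fault-tolerant: if Nimbus restarts mid-kill, it sees the "killed" status and simply reschedules the remove event.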
Reference: http://blog.csdn.net/weijonathan/article/details/18792719