Machine layout: download the Storm tarball into /home/hadoop/apps on each of mini1, mini2, and mini3.
The roles in this post are:
mini1 nimbus
mini2 nimbus, supervisor
mini3 supervisor
1. Download apache-storm-1.0.2.tar.gz from:
http://archive.apache.org/dist/storm/apache-storm-1.0.2/
2. Extract the tarball: tar -zxvf apache-storm-1.0.2.tar.gz
3. Create a symlink to the versioned directory: ln -s apache-storm-1.0.2 storm
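Steps 2–3 can be rehearsed in a scratch directory first (a sketch; the versioned directory stands in for the real extracted tarball):

```shell
# Scratch-dir demo of the symlink pattern: the link points at the
# versioned directory, so a later upgrade only repoints the link.
WORK=$(mktemp -d)
cd "$WORK"
mkdir apache-storm-1.0.2          # stands in for the extracted tarball
ln -s apache-storm-1.0.2 storm    # stable path, e.g. /home/hadoop/apps/storm
readlink storm
```

On the real machines this is simply: cd /home/hadoop/apps && tar -zxvf apache-storm-1.0.2.tar.gz && ln -s apache-storm-1.0.2 storm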
4. Add the environment variables to /etc/profile
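The lines to append to /etc/profile might look like this (a sketch; STORM_HOME follows the symlink created above):

```shell
# Append to /etc/profile on all three machines, then run: source /etc/profile
export STORM_HOME=/home/hadoop/apps/storm
export PATH=$PATH:$STORM_HOME/bin
```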
5. Prerequisites: Java 7 and Python 2.6.6
6. Edit Storm's configuration file (conf/storm.yaml)
HA cluster:
Note: YAML is whitespace-sensitive, so mind the spaces!
storm.zookeeper.servers:
- "mini1"
- "mini2"
- "mini3"
nimbus.seeds: ["mini1", "mini2"]
ui.port: 9999
storm.local.dir: "/home/hadoop/data/storm"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
Non-HA cluster:
storm.zookeeper.servers:
- "mini1"
- "mini2"
- "mini3"
nimbus.seeds: ["mini1"]
ui.port: 9999
storm.local.dir: "/home/hadoop/data/storm"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
Note: ui.port is set to 9999 here, a custom value, to avoid the conflict on port 8080, which both the Storm UI and Spark use by default.
Configure mini2 and mini3 the same way; not repeated here.
7. Create the local data directory: mkdir -p /home/hadoop/data/storm
8. Start the Storm cluster
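Steps 6–7 on the remaining machines can be scripted as a dry run that just prints the required commands (hostnames and paths follow this post; pipe the output to sh, or drop the echo, to actually execute):

```shell
# Dry run: build the commands that sync storm.yaml to the other
# nodes and create the local data dir on every node.
CONF=/home/hadoop/apps/storm/conf/storm.yaml
CMDS=""
for host in mini2 mini3; do
  CMDS="$CMDS
scp $CONF $host:${CONF%/*}/"
done
for host in mini1 mini2 mini3; do
  CMDS="$CMDS
ssh $host mkdir -p /home/hadoop/data/storm"
done
echo "$CMDS"
```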
HA:
mini1 nimbus
mini2 nimbus supervisor
mini3 supervisor
Start on mini1: nohup bin/storm nimbus >/dev/null 2>&1 &
Start on mini2: nohup bin/storm nimbus >/dev/null 2>&1 &
Start on mini2 and mini3: nohup bin/storm supervisor >/dev/null 2>&1 &
Start on mini1: nohup bin/storm ui >/dev/null 2>&1 &
Start on mini1, mini2, and mini3: nohup bin/storm logviewer >/dev/null 2>&1 &
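The HA start-up matrix above can be captured in a small lookup, handy inside a start-all script (a sketch; hostnames follow this post):

```shell
# Which Storm daemons run on which host in the HA layout.
ha_daemons() {
  case "$1" in
    mini1) echo "nimbus ui logviewer" ;;
    mini2) echo "nimbus supervisor logviewer" ;;
    mini3) echo "supervisor logviewer" ;;
  esac
}
# On each machine, a start script could then run:
#   for d in $(ha_daemons "$(hostname)"); do
#     nohup bin/storm "$d" >/dev/null 2>&1 &
#   done
ha_daemons mini2
```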
Non-HA:
mini1 (master) nimbus
mini2 (worker) supervisor
mini3 (worker) supervisor
mini1: nohup bin/storm nimbus >/dev/null 2>&1 &
mini2, mini3: nohup bin/storm supervisor >/dev/null 2>&1 &
mini1: nohup bin/storm ui >/dev/null 2>&1 &
mini1, mini2, mini3: nohup bin/storm logviewer >/dev/null 2>&1 &
Check the UI in a browser: http://192.168.124.10:9999
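A quick reachability check of the UI: the address is just the nimbus host plus the custom ui.port (the IP is assumed to be mini1's; the curl line is commented out so the snippet is safe to run anywhere):

```shell
UI_HOST=192.168.124.10   # mini1's address in this post
UI_PORT=9999             # the custom ui.port from storm.yaml
UI_URL="http://$UI_HOST:$UI_PORT"
echo "$UI_URL"
# On a machine that can reach the cluster:
# curl -s -o /dev/null -w '%{http_code}\n' "$UI_URL"
```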