Prerequisites:
JDK 1.7+
ZooKeeper cluster
Cluster layout (host | IP | ZooKeeper | Storm roles):
hadoop1 | 192.168.0.131 | zk | nimbus, core |
hadoop2 | 192.168.0.132 | zk | Supervisor |
hadoop3 | 192.168.0.133 | zk | Supervisor |
hadoop4 | 192.168.0.134 |    | Supervisor |
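The hostnames in the table must resolve on every node. Assuming no DNS is available, the matching /etc/hosts entries (to be added on all four machines) would be:

```
192.168.0.131 hadoop1
192.168.0.132 hadoop2
192.168.0.133 hadoop3
192.168.0.134 hadoop4
```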
1. Download Storm 1.2.3
http://archive.apache.org/dist/storm/apache-storm-1.2.3/
2. Extract and symlink
tar -zxvf apache-storm-1.2.3.tar.gz -C apps/
cd apps/
ln -s apache-storm-1.2.3 storm
3. Edit the configuration file
cd storm/conf/
vi storm.yaml
# ZooKeeper ensemble
storm.zookeeper.servers:
  - "hadoop1"
  - "hadoop2"
  - "hadoop3"
# Local state dir for nimbus/supervisor (created automatically on startup)
storm.local.dir: "/home/omc/apps/storm/status"
# Log directory
storm.log.dir: "/home/omc/apps/storm/logs"
# Nimbus host(s)
nimbus.seeds: ["hadoop1"]
# Supervisor slot ports, i.e. the worker ports (one worker per port)
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
# UI bind address and port; these are the defaults and can be left unchanged
ui.host: 0.0.0.0
ui.port: 8080
4. Add environment variables (switch to the root account)
vi /etc/profile
# Storm path
export STORM_HOME=/home/omc/apps/storm
export PATH=$PATH:$STORM_HOME/bin
source /etc/profile
5. Distribute Storm and the environment file to the other nodes (then run `source /etc/profile` on each)
scp -r storm/ omc@hadoop2:`pwd`
scp -r storm/ omc@hadoop3:`pwd`
scp -r storm/ omc@hadoop4:`pwd`
scp /etc/profile root@hadoop2:/etc/profile
scp /etc/profile root@hadoop3:/etc/profile
scp /etc/profile root@hadoop4:/etc/profile
6. Start the ZooKeeper ensemble (on every ZK node: hadoop1 through hadoop3)
./zkServer.sh start
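Since ZooKeeper runs on hadoop1 through hadoop3, the start command has to be issued on each of those nodes. A dry-run loop sketch from one box (it only prints the ssh commands; remove the `echo` to execute, and adjust the zkServer.sh path to your ZooKeeper install):

```shell
# Dry run: print the per-node ZooKeeper start commands
for h in hadoop1 hadoop2 hadoop3; do
  echo ssh "omc@${h}" "zkServer.sh start"
done
```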
7. Start Storm
7.1 Start nimbus (on hadoop1)
storm nimbus &
jps
7.2 Start supervisor (on hadoop2 through hadoop4)
storm supervisor &
jps
7.3 Start the UI (on hadoop1)
storm ui &
jps
http://hadoop1:8080
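Per the cluster table, `jps` should show `nimbus` and `core` (the UI) on hadoop1 and `Supervisor` on hadoop2 through hadoop4. A small helper sketch for checking that (hypothetical, not part of Storm; feed it the captured `jps` output of a node):

```shell
# expect_daemon JPS_OUTPUT DAEMON_NAME
# Reports whether the named Storm daemon (nimbus, Supervisor, core)
# appears in a node's jps output.
expect_daemon() {
  if echo "$1" | grep -qw "$2"; then
    echo "$2 running"
  else
    echo "$2 MISSING"
  fi
}

# Usage, e.g. with out=$(ssh omc@hadoop2 jps):
out="4242 Supervisor
4301 Jps"
expect_daemon "$out" Supervisor   # prints "Supervisor running"
```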