Preface:
The cluster runs on three Ubuntu 14.04 virtual machines:
nimbus 192.168.0.180
supervisor 192.168.0.181
supervisor 192.168.0.182
1. Install a JDK (1.6+; this cluster uses jdk1.7.0_76). Add the exports below to ~/.bashrc or /etc/profile:
#SET JDK
export JAVA_HOME=/home/hadoop/software/jdk1.7.0_76
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
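With the exports in place (the jdk1.7.0_76 path is this cluster's install location; adjust to yours), a quick sanity check might look like:

```shell
# Reload the profile and confirm the JDK on PATH is the one we configured
source ~/.bashrc
java -version     # expect: java version "1.7.0_76"
which java        # expect: /home/hadoop/software/jdk1.7.0_76/bin/java
echo "$JAVA_HOME"
```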
2. Install ZooKeeper
Unpack zookeeper-3.4.6 on all three machines; conf/zoo.cfg:
hadoop@master:~/software/zookeeper-3.4.6/conf$ cat zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/hadoop/software/zookeeper-3.4.6/var/data
dataLogDir=/home/hadoop/software/zookeeper-3.4.6/var/log
# the port at which the clients will connect
clientPort=2181
server.1=192.168.0.180:2888:3888
server.2=192.168.0.181:2888:3888
server.3=192.168.0.182:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
maxSessionTimeout=120000
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
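zoo.cfg alone is not enough for a replicated ensemble: each node also needs a myid file in dataDir whose number matches its server.N line. A sketch for this layout (run with id 1 on .180, 2 on .181, 3 on .182):

```shell
# Create dataDir and the per-node myid file (id 1 shown; use 2/3 on the others)
DATA_DIR=/home/hadoop/software/zookeeper-3.4.6/var/data
mkdir -p "$DATA_DIR"
echo 1 > "$DATA_DIR/myid"

# Start ZooKeeper on every node, then check roles
bin/zkServer.sh start
bin/zkServer.sh status   # one node should report "leader", the others "follower"
```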
3. Install ZeroMQ
Download and unpack, then: ./configure && make && make install
(Note: Storm 0.9+ uses Netty as the default transport, so on 1.0.3 the ZeroMQ/JZMQ steps are only needed if you deliberately switch to the zmq transport.)
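Spelled out as a full build (the 2.1.7 tarball name is an assumption; substitute whatever version you downloaded), including the ldconfig step that is easy to forget:

```shell
tar -zxvf zeromq-2.1.7.tar.gz      # file name assumed; use your download
cd zeromq-2.1.7
./configure
make
sudo make install
sudo ldconfig                      # refresh the linker cache so libzmq.so is found
```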
4. Install JZMQ
After downloading and unpacking:
./autogen.sh
./configure && make && make install
Note: my build errored out saying pkg-config was required; I fixed it with:
apt-get install cmake
apt-get install libgtk2.0-dev
apt-get install pkg-config
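The apt-get calls above can be collapsed into one. The packages the JZMQ autotools build actually depends on are roughly the following (the exact list is partly an assumption, a standard autotools toolchain; cmake and libgtk2.0-dev are likely incidental):

```shell
sudo apt-get update
# pkg-config plus the usual autogen.sh prerequisites
sudo apt-get install -y pkg-config build-essential libtool autoconf automake
```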
5. Install Storm
Unpack apache-storm-1.0.3 on the nimbus machine, finish the configuration, then scp the directory to the two supervisor machines.
Configuration file conf/storm.yaml:
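The unpack-and-copy step, spelled out (paths follow this cluster's ~/software layout; the tarball name is assumed from the version used here):

```shell
cd ~/software
tar -zxvf apache-storm-1.0.3.tar.gz
# ...edit apache-storm-1.0.3/conf/storm.yaml, then push to both supervisors:
scp -r apache-storm-1.0.3 hadoop@192.168.0.181:~/software/
scp -r apache-storm-1.0.3 hadoop@192.168.0.182:~/software/
```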
hadoop@master:~/software/apache-storm-1.0.3/conf$ cat storm.yaml
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
    - "192.168.0.180"
    - "192.168.0.181"
    - "192.168.0.182"
storm.local.dir: "/mnt/storm/"
nimbus.seeds: ["192.168.0.180"]
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
ui.port: 8081
#
#
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
# - org.mycompany.MyType
# - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
# - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
# - "server1"
# - "server2"
## Metrics Consumers
# topology.metrics.consumer.register:
# - class: "org.apache.storm.metric.LoggingMetricsConsumer"
# parallelism.hint: 1
# - class: "org.mycompany.MyMetricsConsumer"
# parallelism.hint: 1
# argument:
# - endpoint: "metrics-collector.mycompany.org"
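One easy-to-miss prerequisite: storm.local.dir above points at /mnt/storm/, which must exist and be writable by the user running the daemons on every node (here, the hadoop user):

```shell
# Create the local state directory on nimbus and both supervisors
sudo mkdir -p /mnt/storm
sudo chown hadoop:hadoop /mnt/storm
```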
6. Start the cluster
On the nimbus machine:
hadoop@master:~/software/apache-storm-1.0.3$ bin/storm nimbus > /dev/null 2>&1 &
Start the UI:
hadoop@master:~/software/apache-storm-1.0.3$ bin/storm ui > /dev/null 2>&1 &
On the supervisor machines:
hadoop@slaver01:~/software/apache-storm-1.0.3$ bin/storm supervisor > /dev/null 2>&1 &
hadoop@slaver02:~/software/apache-storm-1.0.3$ bin/storm supervisor > /dev/null 2>&1 &
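After starting everything, jps on each host is a quick way to confirm the daemons are alive (in Storm 1.x the processes typically show up as nimbus, supervisor, and core for the UI, though the exact names can vary):

```shell
# On the nimbus host: expect nimbus (and core, if the UI is running)
jps
# On each supervisor host: expect supervisor
jps
# If a daemon is missing, check its log, e.g.:
tail -n 50 ~/software/apache-storm-1.0.3/logs/nimbus.log
```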
7. Check the Storm web UI:
http://192.168.0.180:8081/index.html
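The UI can also be checked from the command line; besides the HTML page, the Storm 1.x UI exposes a REST API on the same port:

```shell
curl -s http://192.168.0.180:8081/index.html | head -n 5
curl -s http://192.168.0.180:8081/api/v1/cluster/summary   # JSON cluster summary
```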