Software versions
jstorm 2.1.1
jdk 1.8.40
zk 3.4.6
Prerequisites
Set up a ZooKeeper cluster first; see: http://blog.csdn.net/jamal117/article/details/54709608
Install the JDK.
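Before building the cluster, it is worth confirming both prerequisites from a shell. A minimal check, assuming ZooKeeper listens on its default client port 2181 (the IP below is one of the ZooKeeper hosts from the config; substitute your own):

```shell
java -version                       # should report a 1.8 JVM
echo ruok | nc 100.81.74.96 2181    # a healthy ZooKeeper node answers "imok"
```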
Cluster setup
- After unpacking JStorm, edit the configuration file conf/storm.yaml:

```yaml
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
- "100.81.74.96"
- "100.81.76.198"
- "100.81.80.234"
- "100.81.78.117"
- "100.81.80.242"
storm.zookeeper.root: "/jstorm"
# cluster.name: "default"
#nimbus.host/nimbus.host.start.supervisor is being used by $JSTORM_HOME/bin/start.sh
#it only support IP, please don't set hostname
# For example
# nimbus.host: "10.132.168.10, 10.132.168.45"
nimbus.host: "100.81.74.96"
nimbus.host.start.supervisor: false
nimbus.childopts: "-Xmx256m"
supervisor.childopts: "-Xmx256m"
worker.childopts: "-Xmx128m"
# %JSTORM_HOME% is the jstorm home directory
storm.local.dir: "/home/admin/data/jstorm_data"
# please set absolute path, default path is JSTORM_HOME/logs
# jstorm.log.dir: "absolute path"
# java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"
# if supervisor.slots.ports is null,
# the port list will be generated by cpu cores and system memory size
# for example,
# there are cpu_num = system_physical_cpu_num/supervisor.slots.port.cpu.weight
# there are mem_num = system_physical_memory_size/(worker.memory.size * supervisor.slots.port.mem.weight)
# The final port number is min(cpu_num, mem_num)
# supervisor.slots.ports.base: 6800
# supervisor.slots.port.cpu.weight: 1.2
# supervisor.slots.port.mem.weight: 0.7
# supervisor.slots.ports: null
supervisor.slots.ports:
- 6800
- 6801
- 6802
- 6803
# Default disable user-define classloader
# If there are jar conflict between jstorm and application,
# please enable it
# topology.enable.classloader: false
# enable supervisor use cgroup to make resource isolation
# Before enable it, you should make sure:
# 1. Linux version (>= 2.6.18)
# 2. Have installed cgroup (check the file's existence:/proc/cgroups)
# 3. You should start your supervisor on root
# You can get more about cgroup:
# http://t.cn/8s7nexU
# supervisor.enable.cgroup: false
### Netty will send multiple messages in one batch
### Setting true will improve throughput, but more latency
# storm.messaging.netty.transfer.async.batch: true
### if this setting is true, it will use disruptor as internal queue, which size is limited
### otherwise, it will use LinkedBlockingDeque as internal queue, which size is unlimited
### generally when this setting is true, the topology will be more stable,
### but when there is a data loop flow, for example A -> B -> C -> A
### and the data flow occur blocking, please set this as false
# topology.buffer.size.limited: true
### default worker memory size, unit is byte
# worker.memory.size: 2147483648
# Metrics Monitor
# topology.performance.metrics: it is the switch flag for performance
# purpose. When it is disabled, the data of timer and histogram metrics
# will not be collected.
# topology.alimonitor.metrics.post: If it is disable, metrics data
# will only be printed to log. If it is enabled, the metrics data will be
# posted to alimonitor besides printing to log.
# topology.performance.metrics: true
# topology.alimonitor.metrics.post: false
# UI MultiCluster
# Following is an example of multicluster UI configuration
# ui.clusters:
#     - {
#         name: "jstorm",
#         zkRoot: "/jstorm",
#         zkServers:
#             [ "localhost"],
#         zkPort: 2181,
#       }
```
- Configure the JStorm environment variables in /etc/profile.
- Create a ~/.jstorm directory and copy storm.yaml into it. Then copy the UI war from the JStorm package into Tomcat and start it.
- Package the configured JStorm distribution and copy it to every machine in the cluster.
- On the nimbus node, start nimbus: nohup jstorm nimbus &
- On each supervisor node, start the supervisor: nohup jstorm supervisor &
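The steps above can be sketched as a shell session. The install path and host list below are assumptions; adjust them to your environment.

```shell
# Assumed install location -- adjust to your layout,
# and append these two lines to /etc/profile to make them persistent
export JSTORM_HOME=/opt/jstorm-2.1.1
export PATH=$PATH:$JSTORM_HOME/bin

# The jstorm client reads its configuration from ~/.jstorm
mkdir -p ~/.jstorm
cp $JSTORM_HOME/conf/storm.yaml ~/.jstorm/

# Distribute the configured package to the other nodes (host list assumed)
for host in 100.81.76.198 100.81.80.234 100.81.78.117 100.81.80.242; do
  scp -r $JSTORM_HOME $host:/opt/
done

# On the nimbus node (100.81.74.96 in the config above):
nohup jstorm nimbus &
# On each supervisor node:
nohup jstorm supervisor &
```

Afterwards, `jps` should show a `NimbusServer` process on the nimbus node and a `Supervisor` process on each supervisor node.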