- Download the tarballs, extract them with tar -zxvf, and rename the extracted directories to short names
- vim /etc/profile
- Add the following configuration, then reboot or run source /etc/profile
JAVA_HOME=/opt/jdk
ZOOKEEPER_HOME=/opt/zookeeper
STORM_HOME=/opt/storm
KAFKA_HOME=/opt/kafka
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$STORM_HOME/bin:$KAFKA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export STORM_HOME
export ZOOKEEPER_HOME
export KAFKA_HOME
export PATH
export CLASSPATH
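The export block above can be applied in one step. A minimal sketch, assuming the software was unpacked to /opt/jdk, /opt/zookeeper, /opt/storm and /opt/kafka as above; it writes to a scratch file first so the result can be checked before touching /etc/profile:

```shell
# Scratch file; once verified, append its contents to /etc/profile
PROFILE=/tmp/bigdata-env.sh
cat > "$PROFILE" <<'EOF'
export JAVA_HOME=/opt/jdk
export ZOOKEEPER_HOME=/opt/zookeeper
export STORM_HOME=/opt/storm
export KAFKA_HOME=/opt/kafka
export PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$STORM_HOME/bin:$KAFKA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
EOF
. "$PROFILE"          # same effect as 'source /etc/profile' after appending
echo "$JAVA_HOME"     # should print /opt/jdk
```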
hosts configuration
1. vim /etc/hosts
2. Add the following entries
192.168.199.128 s0
192.168.199.129 s1
192.168.199.130 s2
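The three entries above can be staged with a heredoc and checked before appending them to /etc/hosts on every node; a small sketch:

```shell
# Scratch file; append its contents to /etc/hosts on all three nodes
HOSTS=/tmp/hosts.add
cat > "$HOSTS" <<'EOF'
192.168.199.128 s0
192.168.199.129 s1
192.168.199.130 s2
EOF
grep -c '^192\.168\.199\.' "$HOSTS"   # prints 3: one line per node
```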
zookeeper configuration
1. cd /opt/zookeeper/conf
2. cp zoo_sample.cfg zoo.cfg
3. Add the following configuration items
dataDir=/tmp/zookeeper/data
dataLogDir=/tmp/zookeeper/log
server.0=s0:2888:3888
server.1=s1:2888:3888
server.2=s2:2888:3888
4. In /tmp/zookeeper/data, create a file named myid whose content is the numeric id from this host's server.N entry (0 on s0, 1 on s1, 2 on s2)
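Step 4 can be scripted; a minimal sketch, where ID must be set by hand on each node to match its server.N line in zoo.cfg:

```shell
ID=0   # 0 on s0, 1 on s1, 2 on s2 -- must match the server.N entries in zoo.cfg
mkdir -p /tmp/zookeeper/data /tmp/zookeeper/log
echo "$ID" > /tmp/zookeeper/data/myid
cat /tmp/zookeeper/data/myid   # prints 0 on s0
```

After this, running zkServer.sh start on every node and then zkServer.sh status should report one leader and two followers.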
storm configuration
1. vim /opt/storm/conf/storm.yaml (every configuration line must start with a leading space)
2. Add the following configuration
storm.zookeeper.servers:
- "s0"
- "s1"
- "s2"
nimbus.host: "s0"
storm.local.dir: "/opt/storm"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
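Because every line of storm.yaml must carry the leading space, hand-editing is easy to get wrong. A sketch that writes the block verbatim to a scratch file and counts the indented lines (copy it over /opt/storm/conf/storm.yaml once checked):

```shell
# Scratch copy; the real file is /opt/storm/conf/storm.yaml
YAML=/tmp/storm.yaml
cat > "$YAML" <<'EOF'
 storm.zookeeper.servers:
     - "s0"
     - "s1"
     - "s2"
 nimbus.host: "s0"
 storm.local.dir: "/opt/storm"
 supervisor.slots.ports:
     - 6700
     - 6701
     - 6702
     - 6703
EOF
grep -c '^ ' "$YAML"   # prints 11: every line starts with a space
```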
kafka configuration
1. vim /opt/kafka/config/server.properties
2. Add the following configuration
broker.id=0  # broker id; must be different on each host in the cluster
host.name=s0
Add three entries under the Log Retention Policy section:
message.max.bytes=5048576
default.replication.factor=2
replica.fetch.max.bytes=5048576
zookeeper.connect=s0:2181,s1:2181,s2:2181
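The per-host values (broker.id and host.name must differ on every node) can be rewritten with sed. A sketch against a scratch copy; the path below is for illustration only, the real file is /opt/kafka/config/server.properties:

```shell
CFG=/tmp/server.properties   # scratch copy for illustration
printf 'broker.id=0\nhost.name=s0\n' > "$CFG"
# On s1, for example, rewrite both per-host keys:
sed -i 's/^broker\.id=.*/broker.id=1/; s/^host\.name=.*/host.name=s1/' "$CFG"
grep '^broker.id=' "$CFG"   # prints broker.id=1
```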