-
Prepare three virtual machines (and configure the hosts file on each)
172.28.128.16 master.hive.com master
172.28.128.17 node01.hive.com node01
172.28.128.18 node02.hive.com node02
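One way to apply these entries on every VM (run as root; the heredoc simply appends the three lines shown above, then checks that the names resolve):

```shell
# Append the cluster hostnames to /etc/hosts on each of the three VMs.
cat >> /etc/hosts <<'EOF'
172.28.128.16 master.hive.com master
172.28.128.17 node01.hive.com node01
172.28.128.18 node02.hive.com node02
EOF
# Confirm name resolution works.
ping -c 1 node01.hive.com
```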
-
Install JDK 1.8
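One way to do this, assuming a CentOS/RHEL-family VM (the guide later uses systemctl and firewalld, so yum should be available):

```shell
# Install OpenJDK 8 from the distribution repositories and verify it.
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
java -version   # should report version 1.8.0_xxx
```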
-
Install ZooKeeper
-
[all] Create the installation directory
mkdir /opt/cluster
-
[master] Download the package
http://archive.cloudera.com/cdh5/cdh/5/zookeeper-3.4.5-cdh5.16.2.tar.gz
-
[master] Extract the package
tar -zxvf zookeeper-3.4.5-cdh5.16.2.tar.gz -C /opt/cluster/
-
[master] Create the data directories
mkdir /opt/cluster/zookeeper-3.4.5-cdh5.16.2/data
mkdir /opt/cluster/zookeeper-3.4.5-cdh5.16.2/data/zk
mkdir /opt/cluster/zookeeper-3.4.5-cdh5.16.2/data/log
-
[master] Edit zoo.cfg
cd /opt/cluster/zookeeper-3.4.5-cdh5.16.2/conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
# set/append the following:
dataDir=/opt/cluster/zookeeper-3.4.5-cdh5.16.2/data/zk
dataLogDir=/opt/cluster/zookeeper-3.4.5-cdh5.16.2/data/log
server.1=master.hive.com:2888:3888
server.2=node01.hive.com:2888:3888
server.3=node02.hive.com:2888:3888
-
[master] Distribute the configured ZooKeeper directory from master to the other two nodes
scp -r /opt/cluster/zookeeper-3.4.5-cdh5.16.2 node01.hive.com:/opt/cluster/
scp -r /opt/cluster/zookeeper-3.4.5-cdh5.16.2 node02.hive.com:/opt/cluster/
-
[all] Edit the myid file (the value must match that host's server.N entry in zoo.cfg)
On master, write 1 into myid:
vim /opt/cluster/zookeeper-3.4.5-cdh5.16.2/data/zk/myid
On node01, write 2 into myid:
vim /opt/cluster/zookeeper-3.4.5-cdh5.16.2/data/zk/myid
On node02, write 3 into myid:
vim /opt/cluster/zookeeper-3.4.5-cdh5.16.2/data/zk/myid
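The three vim edits reduce to a single echo per host; again, the number written must agree with the server.N line for that host in zoo.cfg:

```shell
ZK_DATA=/opt/cluster/zookeeper-3.4.5-cdh5.16.2/data/zk
echo 1 > $ZK_DATA/myid   # run on master.hive.com
echo 2 > $ZK_DATA/myid   # run on node01.hive.com
echo 3 > $ZK_DATA/myid   # run on node02.hive.com
```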
-
[all] Start the zkServer service
/opt/cluster/zookeeper-3.4.5-cdh5.16.2/bin/zkServer.sh start
-
[all] Check the status (one node should report Mode: leader, the other two Mode: follower)
/opt/cluster/zookeeper-3.4.5-cdh5.16.2/bin/zkServer.sh status
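Besides zkServer.sh status, each server can be probed with ZooKeeper's four-letter-word commands (this assumes nc/netcat is installed on the client machine):

```shell
# A healthy server answers "imok".
echo ruok | nc master.hive.com 2181
# Shows Mode: leader or follower, plus connection statistics.
echo stat | nc master.hive.com 2181
```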
-
-
[master] Do the base configuration on master, then distribute it to the other nodes
-
Enter the directory
cd /opt/cluster/
-
Download and extract the package
wget https://www-eu.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz
tar zxvf kafka_2.12-2.3.0.tgz
-
Grant permissions
chmod -R 777 kafka_2.12-2.3.0
-
Edit the configuration file (the values below are for master; the hostnames must match this cluster's hosts file)
vim /opt/cluster/kafka_2.12-2.3.0/config/server.properties
# Modify the following settings.
# broker.id must be unique across the cluster; for convenience, use the last
# octet of each node's IP (16 for master, 17 for node01, 18 for node02).
broker.id=16
# Leave the port unchanged for now.
port=9092
# Set host.name to this machine's hostname.
host.name=master.hive.com
# Optional: write the data logs to a specific location.
log.dirs=/tmp/kafka-logs
# Must point at your own ZooKeeper ensemble.
zookeeper.connect=master.hive.com:2181,node01.hive.com:2181,node02.hive.com:2181
# Must be set when configuring a cluster.
listeners=PLAINTEXT://master.hive.com:9092
-
[master] Distribute the configuration (afterwards, broker.id, host.name, and listeners must be changed on each node so every broker is unique)
scp -r /opt/cluster/kafka_2.12-2.3.0 node01.hive.com:/opt/cluster/
scp -r /opt/cluster/kafka_2.12-2.3.0 node02.hive.com:/opt/cluster/
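After the copy, each node still holds master's server.properties, so the per-broker settings must be adjusted. A sketch for node01, assuming master's file uses broker.id 16 and hostname master.hive.com (repeat on node02 with 18 and node02.hive.com):

```shell
# Rewrite the three per-broker settings in place on node01.
sed -i \
  -e 's/^broker.id=.*/broker.id=17/' \
  -e 's/^host.name=.*/host.name=node01.hive.com/' \
  -e 's|^listeners=.*|listeners=PLAINTEXT://node01.hive.com:9092|' \
  /opt/cluster/kafka_2.12-2.3.0/config/server.properties
```

Note that zookeeper.connect is intentionally left untouched: it is the same on every broker.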
-
-
[all] Make sure the firewall is off
systemctl stop firewalld
systemctl status firewalld
systemctl disable firewalld
-
[all] Start the Kafka service on each node of the cluster
/opt/cluster/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /opt/cluster/kafka_2.12-2.3.0/config/server.properties
-
[all] Check that the services are running on each node
jps    # each node should list a Kafka process alongside QuorumPeerMain
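A simple smoke test once all brokers are up (Kafka 2.3 still accepts the --zookeeper flag for topic management; the topic name here is arbitrary):

```shell
cd /opt/cluster/kafka_2.12-2.3.0
# Create a topic replicated across all three brokers and inspect it.
bin/kafka-topics.sh --create --zookeeper master.hive.com:2181 \
  --replication-factor 3 --partitions 3 --topic smoke-test
bin/kafka-topics.sh --describe --zookeeper master.hive.com:2181 --topic smoke-test
# Produce one message, then read it back.
echo hello | bin/kafka-console-producer.sh \
  --broker-list master.hive.com:9092 --topic smoke-test
bin/kafka-console-consumer.sh --bootstrap-server master.hive.com:9092 \
  --topic smoke-test --from-beginning --max-messages 1
```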
-
At this point, the Kafka installation is complete.