Server preparation
kafka01 + zookeeper               | 10.0.80.11
kafka02 + zookeeper               | 10.0.80.12
kafka03 + zookeeper               | 10.0.80.13
elasticsearch (master) + logstash | 10.0.80.21
elasticsearch (node) + logstash   | 10.0.80.22
elasticsearch (node) + logstash   | 10.0.80.23
JDK installation
Omitted.
Kafka cluster installation and configuration
Extract the packages
Create the data directories
mkdir /data/{kafka-logs,zookeeper} -pv
mkdir /usr/local/kafka/
tar xzvf apache-zookeeper-3.5.9-bin.tar.gz -C /usr/local/kafka/
tar xzvf kafka_2.12-2.2.1.tgz -C /usr/local/kafka/
Edit the configuration files
ZooKeeper configuration
cd /usr/local/kafka/apache-zookeeper-3.5.9-bin/conf
cp zoo_sample.cfg zoo.cfg
cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=10.0.80.11:2888:3888
server.2=10.0.80.12:2888:3888
server.3=10.0.80.13:2888:3888
Copy the ZooKeeper directory from kafka01 to kafka02 and kafka03
scp -P 40022 -r apache-zookeeper-3.5.9-bin/ root@10.0.80.12:/usr/local/kafka/
scp -P 40022 -r apache-zookeeper-3.5.9-bin/ root@10.0.80.13:/usr/local/kafka/
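The two transfers above can also be written as a loop over the target hosts. A minimal sketch; the `echo` keeps it a dry run that only prints the commands:

```shell
# dry-run sketch of the transfer to kafka02 and kafka03;
# remove `echo` to perform the copies for real
for host in 10.0.80.12 10.0.80.13; do
  echo scp -P 40022 -r apache-zookeeper-3.5.9-bin/ root@"${host}":/usr/local/kafka/
done
```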
Create the myid files: write 1, 2, and 3 on the three servers respectively
# on kafka01
echo 1 > /data/zookeeper/myid
# on kafka02
echo 2 > /data/zookeeper/myid
# on kafka03
echo 3 > /data/zookeeper/myid
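Instead of typing the id by hand on each node, it can be derived from the node's own address. A sketch that assumes the 10.0.80.11–13 addressing scheme above, where the last octet minus 10 gives the server number:

```shell
# derive the ZooKeeper myid from the node's IP address
# (assumption: last octet minus 10 yields the id, as for 10.0.80.11-13)
ip=10.0.80.11                  # substitute the local node's address
myid=$(( ${ip##*.} - 10 ))     # strip everything up to the last dot
echo "$myid"                   # this value goes into /data/zookeeper/myid
```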
Kafka configuration
cat config/server.properties
broker.id=1   # 2 on kafka02, 3 on kafka03
listeners=PLAINTEXT://10.0.80.11:9092   # each broker listens on its own IP
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
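Only `broker.id` and `listeners` differ between the three brokers, so the per-node file can be patched from a shared copy instead of edited by hand. A minimal sketch; the template filename `server.properties.tmpl` and the two variables are assumptions, not part of the setup above:

```shell
# patch the two per-broker values into a copy of a shared template
# (server.properties.tmpl, BROKER_ID and NODE_IP are hypothetical names)
BROKER_ID=2
NODE_IP=10.0.80.12
sed -e "s/^broker.id=.*/broker.id=${BROKER_ID}/" \
    -e "s|^listeners=.*|listeners=PLAINTEXT://${NODE_IP}:9092|" \
    server.properties.tmpl > server.properties
```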