Install and configure a Kafka cluster on three servers in the same private network.
The internal addresses are:
10.174.32.122
10.117.15.224
10.168.96.198
1. Install ZooKeeper. On each server, download ZooKeeper into /usr/local/.
ZooKeeper releases: https://archive.apache.org/dist/zookeeper
# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
Unpack it:
# tar -xzvf zookeeper-3.4.10.tar.gz
Enter the unpacked directory and rename the sample configuration file zoo_sample.cfg to zoo.cfg:
# cd zookeeper-3.4.10
# cd conf/
# ls
configuration.xsl log4j.properties zoo_sample.cfg
# mv zoo_sample.cfg zoo.cfg
Edit zoo.cfg: point dataDir at a persistent directory and append the ensemble member list. The resulting file:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/var/lib/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=10.174.32.122:2888:3888
server.2=10.117.15.224:2888:3888
server.3=10.168.96.198:2888:3888
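One step the configuration above leaves implicit: each node must also have a myid file inside dataDir whose content matches its server.N entry, or the ensemble members cannot identify themselves at startup. On each server, write the matching id (1 on 10.174.32.122, 2 on 10.117.15.224, 3 on 10.168.96.198):

```shell
# On 10.174.32.122 (server.1); write "2" and "3" on the other two nodes.
mkdir -p /var/lib/zookeeper
echo 1 > /var/lib/zookeeper/myid
```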
Start the ZooKeeper service on each node. First confirm that nothing else is listening on port 2181:
# netstat -na|grep 2181
# bin/zkServer.sh start conf/zoo.cfg
ZooKeeper JMX enabled by default
Using config: conf/zoo.cfg
Starting zookeeper ... STARTED
# netstat -ntlp|grep 2181
tcp 0 0 0.0.0.0:2181 0.0.0.0:* LISTEN 1040/java
# netstat -ntlp|grep 2888
# netstat -ntlp|grep 3888
tcp 0 0 10.174.32.122:3888 0.0.0.0:* LISTEN 1040/java
Verify that the ZooKeeper ensemble is reachable by connecting with the CLI:
# bin/zkCli.sh -server 10.174.32.122:2181,10.117.15.224:2181,10.168.96.198:2181
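Each node can also report its role in the ensemble directly; in a healthy three-node ensemble one server reports leader and the other two follower:

```shell
# Run from the zookeeper-3.4.10 directory on each server.
bin/zkServer.sh status
# One node should print "Mode: leader", the other two "Mode: follower".
```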
2. Install Kafka. On each server, download Kafka into /usr/local/:
# wget https://archive.apache.org/dist/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz
Unpack it:
# tar -xzvf kafka_2.11-0.11.0.1.tgz
Enter the config directory and, on each node, edit the following properties in server.properties (broker.id must be unique per broker, and listeners must use that node's own address):
broker.id=0
log.dirs=/var/lib/kafka
zookeeper.connect=10.174.32.122:2181,10.117.15.224:2181,10.168.96.198:2181
listeners=PLAINTEXT://10.174.32.122:9092
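Only broker.id and listeners differ from node to node; zookeeper.connect and log.dirs are identical on all three. The per-node values:

```properties
# 10.174.32.122
broker.id=0
listeners=PLAINTEXT://10.174.32.122:9092

# 10.117.15.224
broker.id=1
listeners=PLAINTEXT://10.117.15.224:9092

# 10.168.96.198
broker.id=2
listeners=PLAINTEXT://10.168.96.198:9092
```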
Start Kafka on each node:
# bin/kafka-server-start.sh -daemon config/server.properties
# netstat -ntlp|grep 9092
tcp 0 0 10.174.32.122:9092 0.0.0.0:* LISTEN 1333/java
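Each broker registers itself under /brokers/ids in ZooKeeper, so listing that znode is a quick way to confirm the whole cluster is up:

```shell
# From the zookeeper-3.4.10 directory on any node.
bin/zkCli.sh -server 10.174.32.122:2181 ls /brokers/ids
# With all three brokers running this should list [0, 1, 2].
```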
List the existing topics:
# bin/kafka-topics.sh --list --zookeeper 10.174.32.122:2181,10.117.15.224:2181,10.168.96.198:2181
test1
test2
Create a new topic:
# bin/kafka-topics.sh --create --topic test3 --zookeeper 10.174.32.122:2181,10.117.15.224:2181,10.168.96.198:2181 --partitions 1 --replication-factor 3
Created topic "test3".
# bin/kafka-topics.sh --list --zookeeper 10.174.32.122:2181,10.117.15.224:2181,10.168.96.198:2181
test1
test2
test3
Inspect the topic's details:
# bin/kafka-topics.sh --describe --topic test3 --zookeeper 10.174.32.122:2181,10.117.15.224:2181,10.168.96.198:2181
Topic:test3 PartitionCount:1 ReplicationFactor:3 Configs:
Topic: test3 Partition: 0 Leader: 0 Replicas: 0,2,1 Isr: 0,2,1
Test producing and consuming.
Produce:
# bin/kafka-console-producer.sh --topic test3 --broker-list 10.174.32.122:9092,10.117.15.224:9092,10.168.96.198:9092
>1111111
>lihaile
>hello
>nihaoa
>adfdsfdsfds
>lihailewodege
>
Consume:
# bin/kafka-console-consumer.sh --topic test3 --bootstrap-server 10.174.32.122:9092,10.117.15.224:9092,10.168.96.198:9092
hello
nihaoa
adfdsfdsfds
dafdsfdsfdsf
lihailewodege
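Note that the console consumer starts from the latest offset by default, which is why messages produced before it connected do not appear above. To replay the whole topic from the start, add --from-beginning:

```shell
bin/kafka-console-consumer.sh --topic test3 \
  --bootstrap-server 10.174.32.122:9092,10.117.15.224:9092,10.168.96.198:9092 \
  --from-beginning
```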
The Kafka cluster setup is now complete!
Kafka server.properties parameter reference