Install and use a ZooKeeper cluster and a Kafka cluster

1 Prepare three servers: configure /etc/hosts so they can ping each other by name, and install the JDK.

vim /etc/hosts
192.168.67.8 kafka08
192.168.67.9 kafka09
192.168.67.10 kafka10

ping kafka08
ping kafka09
ping kafka10

yum install java-1.8.0-openjdk.x86_64 -y
java -version

2 Download locations

2.1 ZooKeeper downloads
http://zookeeper.apache.org/releases.html
2.2 Kafka downloads
http://kafka.apache.org/downloads.html

3 Install and configure ZooKeeper

3.1 Install and configure node 1

[root@kafka08 ~]# mkdir -p /app/soft
[root@kafka08 ~]# cd /app/soft
[root@kafka08 soft]# rz -E
[root@kafka08 soft]# ll
total 84128
-rw-r--r-- 1 root root 49475271 Oct 21  2019 kafka_2.11-1.0.0.tgz
-rw-r--r-- 1 root root 36668066 Oct 21  2019 zookeeper-3.4.11.tar.gz
[root@kafka08 soft]# tar zxf zookeeper-3.4.11.tar.gz -C /app/
[root@kafka08 app]# ln -s /app/zookeeper-3.4.11/ /app/zookeeper
[root@kafka08 app]# ll
total 4
drwxr-xr-x  2 root root    65 Jan 24 17:49 soft
lrwxrwxrwx  1 root root    22 Jan 24 18:06 zookeeper -> /app/zookeeper-3.4.11/
drwxr-xr-x 10  502 games 4096 Nov  2  2017 zookeeper-3.4.11
[root@kafka08 app]# mkdir -p /app/data/zookeeper
[root@kafka08 app]# cp /app/zookeeper/conf/zoo_sample.cfg /app/zookeeper/conf/zoo.cfg 
[root@kafka08 app]# vim /app/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/app/data/zookeeper
clientPort=2181
server.1=192.168.67.8:2888:3888
server.2=192.168.67.9:2888:3888
server.3=192.168.67.10:2888:3888
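For reference, here is the same zoo.cfg annotated field by field (the port roles described are standard ZooKeeper behavior, not specific to this setup; comments must stay on their own lines, since the config parser does not allow trailing comments):

```properties
# tickTime: base time unit in ms; heartbeats use one tick
tickTime=2000
# initLimit: ticks a follower may take to connect and sync at startup
initLimit=10
# syncLimit: ticks a follower may lag behind the leader before it is dropped
syncLimit=5
# dataDir: where snapshots and the myid file live
dataDir=/app/data/zookeeper
# clientPort: port clients connect to
clientPort=2181
# server.N=host:2888:3888 -- 2888 is the follower<->leader sync port,
# 3888 the leader-election port; N must match that node's myid
server.1=192.168.67.8:2888:3888
server.2=192.168.67.9:2888:3888
server.3=192.168.67.10:2888:3888
```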

[root@kafka08 app]# echo "1" > /app/data/zookeeper/myid
[root@kafka08 app]# ls -lh /app/data/zookeeper/
total 4.0K
-rw-r--r-- 1 root root 2 Jan 24 18:14 myid

[root@kafka08 app]# cat /app/data/zookeeper/myid
1

3.2 Install the rsync server on kafka08 and the rsync client on kafka09 and kafka10 (steps omitted).

3.3 Install and configure ZooKeeper on nodes 2 and 3

[root@kafka08 app]# rsync -auz /app/ 192.168.67.9:/app/
[root@kafka08 app]# rsync -auz /app/ 192.168.67.10:/app/
[root@kafka09 app]# vim /app/data/zookeeper/myid 
[root@kafka09 app]# cat /app/data/zookeeper/myid 
2
[root@kafka10 soft]# vim /app/data/zookeeper/myid 
[root@kafka10 soft]# cat /app/data/zookeeper/myid 
3
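The per-node myid writes above can be sketched as a loop. This is a minimal local simulation (assumption: run on one machine, with temp directories standing in for each node's /app/data/zookeeper; on the real cluster each node writes only its own file):

```shell
#!/bin/sh
# Simulate writing each node's myid file; the id must match the
# server.N entry for that node in zoo.cfg.
base=$(mktemp -d)
for id in 1 2 3; do
    mkdir -p "$base/node$id/data/zookeeper"
    echo "$id" > "$base/node$id/data/zookeeper/myid"
done
cat "$base/node2/data/zookeeper/myid"   # prints: 2
```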

3.4 Start ZooKeeper on each node

[root@kafka08 app]# /app/zookeeper/bin/zkServer.sh start
[root@kafka09 app]# /app/zookeeper/bin/zkServer.sh start
[root@kafka10 app]# /app/zookeeper/bin/zkServer.sh start

[root@kafka08 app]# /app/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@kafka09 app]# /app/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@kafka10 soft]# /app/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /app/zookeeper/bin/../conf/zoo.cfg
Mode: leader
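To check a node's role with less noise, the Mode line can be pulled out of the status output with a small helper (a sketch; grep and awk assumed available):

```shell
#!/bin/sh
# Extract the role ("leader" or "follower") from zkServer.sh status output.
zk_mode() {
    grep '^Mode:' | awk '{print $2}'
}

# Demonstrated on the node 3 output captured above; on a live node you
# would run:  /app/zookeeper/bin/zkServer.sh status 2>&1 | zk_mode
printf 'ZooKeeper JMX enabled by default\nUsing config: /app/zookeeper/bin/../conf/zoo.cfg\nMode: leader\n' | zk_mode   # prints: leader
```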

3.5 Basic ZooKeeper commands

Create data on node 1, then verify it on the other nodes.

[root@kafka08 app]# /app/zookeeper/bin/zkCli.sh -server 192.168.67.8:2181
[zk: 192.168.67.8:2181(CONNECTED) 0] create /test "hello"
Created /test

Verify the data on node 2

[root@kafka09 app]# /app/zookeeper/bin/zkCli.sh -server 192.168.67.9:2181
[zk: 192.168.67.9:2181(CONNECTED) 0]  get /test
hello
cZxid = 0x100000002
ctime = Sun Jan 24 18:35:36 HKT 2021
mZxid = 0x100000002
mtime = Sun Jan 24 18:35:36 HKT 2021
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0

Verify the data on node 3

[root@kafka10 soft]# /app/zookeeper/bin/zkCli.sh -server 192.168.67.10:2181
[zk: 192.168.67.10:2181(CONNECTED) 0] get /test
hello
cZxid = 0x100000002
ctime = Sun Jan 24 18:35:36 HKT 2021
mZxid = 0x100000002
mtime = Sun Jan 24 18:35:36 HKT 2021
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0

4 Install and test Kafka

4.1 Configure node 1

[root@kafka08 soft]# cd /app/soft
[root@kafka08 soft]# tar zxf kafka_2.11-1.0.0.tgz -C /app/
[root@kafka08 soft]# ln -s /app/kafka_2.11-1.0.0/ /app/kafka
[root@kafka08 soft]# mkdir /app/kafka/logs
[root@kafka08 soft]# vim /app/kafka/config/server.properties
broker.id=1
listeners=PLAINTEXT://192.168.67.8:9092
log.dirs=/app/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181

4.2 Copy the Kafka files to nodes 2 and 3

[root@kafka08 soft]# rsync -auz /app/kafka* 192.168.67.9:/app/
[root@kafka08 soft]# rsync -auz /app/kafka* 192.168.67.10:/app/

4.3 Edit the node 2 configuration

[root@kafka09 app]# vim /app/kafka/config/server.properties
broker.id=2
listeners=PLAINTEXT://192.168.67.9:9092
log.dirs=/app/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181

4.4 Edit the node 3 configuration

[root@kafka10 soft]# vim /app/kafka/config/server.properties
broker.id=3
listeners=PLAINTEXT://192.168.67.10:9092
log.dirs=/app/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181
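Since the files were rsync'd from node 1, the two per-node fields can also be patched with sed instead of being edited by hand. A sketch, shown against a temp copy so it is safe to dry-run (the node id and IP are example values; on a real node you would target /app/kafka/config/server.properties):

```shell
#!/bin/sh
# Patch broker.id and the listener address for a given node.
cfg=$(mktemp)
printf 'broker.id=1\nlisteners=PLAINTEXT://192.168.67.8:9092\n' > "$cfg"

node_id=3
node_ip=192.168.67.10
sed -i \
    -e "s/^broker.id=.*/broker.id=$node_id/" \
    -e "s#^listeners=.*#listeners=PLAINTEXT://$node_ip:9092#" \
    "$cfg"
cat "$cfg"
```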

4.5 Start Kafka on each node

On node 1, start in the foreground first so any errors are easy to spot in the log output.

[root@kafka08 soft]# /app/kafka/bin/kafka-server-start.sh  /app/kafka/config/server.properties

When the last line of output shows the KafkaServer id and the word "started", startup succeeded; you can then stop it and restart it in the background.

[root@kafka08 soft]# /app/kafka/bin/kafka-server-start.sh -daemon /app/kafka/config/server.properties
[root@kafka08 soft]# tail -f /app/kafka/logs/server.log

Start nodes 2 and 3

[root@kafka09 app]# /app/kafka/bin/kafka-server-start.sh -daemon /app/kafka/config/server.properties
[root@kafka09 app]# tail -f /app/kafka/logs/server.log

[root@kafka10 app]# /app/kafka/bin/kafka-server-start.sh -daemon /app/kafka/config/server.properties
[root@kafka10 app]# tail -f /app/kafka/logs/server.log

4.6 Verify the processes

Node 1
[root@kafka08 soft]# jps
27296 QuorumPeerMain
28355 Jps
28282 Kafka
Node 2
[root@kafka09 app]# jps
35568 Jps
35364 Kafka
32216 QuorumPeerMain
Node 3
[root@kafka10 soft]# jps
27505 Jps
27053 QuorumPeerMain
27439 Kafka

4.7 Test: create a topic

Create a topic named kafkatest with 3 partitions and a replication factor of 3; this can be run on any node.

[root@kafka08 soft]# /app/kafka/bin/kafka-topics.sh  --create  --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181 --partitions 3 --replication-factor 3 --topic kafkatest

Output
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Created topic "kafkatest".

4.8 Test: describe the topic

This can be done from any Kafka node.
Describe from node 1

[root@kafka08 soft]# /app/kafka/bin/kafka-topics.sh  --describe  --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181 --topic kafkatest

Output
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Topic:kafkatest    PartitionCount:3    ReplicationFactor:3    Configs:
    Topic: kafkatest    Partition: 0    Leader: 3    Replicas: 3,2,1    Isr: 3,2,1
    Topic: kafkatest    Partition: 1    Leader: 1    Replicas: 1,3,2    Isr: 1,3,2
    Topic: kafkatest    Partition: 2    Leader: 2    Replicas: 2,1,3    Isr: 2,1,3

Describe from node 2

[root@kafka09 app]# /app/kafka/bin/kafka-topics.sh  --describe  --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181 --topic kafkatest

Output
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Topic:kafkatest    PartitionCount:3    ReplicationFactor:3    Configs:
    Topic: kafkatest    Partition: 0    Leader: 3    Replicas: 3,2,1    Isr: 3,2,1
    Topic: kafkatest    Partition: 1    Leader: 1    Replicas: 1,3,2    Isr: 1,3,2
    Topic: kafkatest    Partition: 2    Leader: 2    Replicas: 2,1,3    Isr: 2,1,3

Describe from node 3

[root@kafka10 soft]# /app/kafka/bin/kafka-topics.sh  --describe  --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181 --topic kafkatest

Output
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Topic:kafkatest    PartitionCount:3    ReplicationFactor:3    Configs:
    Topic: kafkatest    Partition: 0    Leader: 3    Replicas: 3,2,1    Isr: 3,2,1
    Topic: kafkatest    Partition: 1    Leader: 1    Replicas: 1,3,2    Isr: 1,3,2
    Topic: kafkatest    Partition: 2    Leader: 2    Replicas: 2,1,3    Isr: 2,1,3

What this shows: kafkatest has three partitions, numbered 0, 1 and 2. Partition 0's leader is broker 3 (the leader column holds a broker.id), it has three replicas, and all three appear in the Isr column (ISR, in-sync replicas, i.e. replicas eligible to be elected leader).
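When scripting against this, the partition-to-leader mapping can be scraped from the --describe output with awk. A sketch, assuming the Kafka 1.0 output format shown above:

```shell
#!/bin/sh
# Print "partition -> leader" pairs from kafka-topics.sh --describe output.
leaders() {
    awk '/Partition:/ {print $4, "->", $6}'
}

# Demonstrated on one line of the output above; on a live cluster:
#   /app/kafka/bin/kafka-topics.sh --describe ... | leaders
printf '\tTopic: kafkatest\tPartition: 0\tLeader: 3\tReplicas: 3,2,1\tIsr: 3,2,1\n' | leaders   # prints: 0 -> 3
```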

4.9 Test: delete a topic

[root@kafka08 soft]# /app/kafka/bin/kafka-topics.sh  --delete  --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181 --topic kafkatest

Output
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Topic kafkatest is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

Verify that it was actually deleted

[root@kafka08 soft]# /app/kafka/bin/kafka-topics.sh  --describe  --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181 --topic kafkatest

Batch deletion

/app/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181
# Copy the topics to be deleted into the file topic.txt, then:
vim delete_topic.sh
for i in $(cat topic.txt); do
    /app/kafka/bin/kafka-topics.sh --delete --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181 --topic "$i"
done

4.10 Test: list all topics

First create two topics

[root@kafka08 soft]# /app/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181 --partitions 3 --replication-factor 3 --topic kafkatest
[root@kafka08 soft]# /app/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181 --partitions 3 --replication-factor 3 --topic kafkatest2

Then list all topics

[root@kafka08 soft]# /app/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181

Output
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
kafkatest
kafkatest2

4.11 Test: send messages from the console

Send some messages. Note: the port is Kafka's 9092, not ZooKeeper's 2181.

[root@kafka08 soft]# /app/kafka/bin/kafka-console-producer.sh --broker-list  192.168.67.8:9092,192.168.67.9:9092,192.168.67.10:9092 --topic  kafkatest

OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
>hello
>ben
>

Consume the messages from the other Kafka nodes

Open a new terminal on node 1, and on nodes 2 and 3, then run one of the following commands to consume the messages. The --zookeeper form uses the old consumer (deprecated in newer Kafka releases); the --bootstrap-server form uses the current consumer.

/app/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.67.8:2181,192.168.67.9:2181,192.168.67.10:2181 --topic kafkatest --from-beginning

or

/app/kafka/bin/kafka-console-consumer.sh --bootstrap-server  192.168.67.8:9092,192.168.67.9:9092,192.168.67.10:9092 --topic kafkatest --from-beginning


Troubleshooting:

1 CDH Kafka clients cannot consume, with no errors reported (and the network is fine)

Fix: check which Kafka cluster the application host connects to, and add the cluster's IPs and hostnames to the application host's /etc/hosts.
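The fix can be scripted. A sketch that appends this cluster's mappings, shown against a temp file so it can be tried safely (on the application host the target would be /etc/hosts, and you should check the entries are not already present):

```shell
#!/bin/sh
hosts=$(mktemp)   # stand-in for /etc/hosts
cat >> "$hosts" <<'EOF'
192.168.67.8 kafka08
192.168.67.9 kafka09
192.168.67.10 kafka10
EOF
grep -c kafka "$hosts"   # prints: 3
```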

2 Enable JMX in the startup script

# vim /app/kafka/bin/kafka-server-start.sh
if [ "x$JMX_PORT" = "x" ]; then
    export JMX_PORT="9999"
fi
Reference: http://blog.nsfocus.net/jmx-real-time-monitoring/
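The `[ "x$JMX_PORT" = "x" ]` guard only applies the default when the variable is unset or empty, so a port exported by the operator beforehand wins. A quick local check of that behavior (safe to run anywhere):

```shell
#!/bin/sh
unset JMX_PORT
if [ "x$JMX_PORT" = "x" ]; then export JMX_PORT="9999"; fi
echo "$JMX_PORT"   # prints: 9999

JMX_PORT=7777
if [ "x$JMX_PORT" = "x" ]; then export JMX_PORT="9999"; fi
echo "$JMX_PORT"   # prints: 7777
```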

3 Check the logs:

# tail -f /app/kafka/logs/server.log
# tail -f /app/zookeeper/zookeeper.out

web1 For browsing message contents from a web console, kafka-magic is recommended

docker search kafka-magic
docker pull digitsy/kafka-magic
docker images
docker run -d -p 8800:80 digitsy/kafka-magic

web2 On Windows, kafkatool is the recommended management tool

Download:

https://www.kafkatool.com/download.html


web3 Install and configure kafka-manager

1 Download: https://github.com/yahoo/kafka-manager/releases

2 Upload and extract it

3 Edit the configuration
cd ~/bigdata/kafka-manager-1.3.0.8/conf
vi application.conf
Set: kafka-manager.zkhosts="master:2181,slave1:2181,slave2:2181" (substitute your own ZooKeeper hosts, e.g. kafka08:2181,kafka09:2181,kafka10:2181)

Account:

Password:

Start it
1. cd ~/bigdata/kafka-manager-1.3.0.8
2. nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=9000 &   (9000 is the default port; change it if you like)
Note: the kafka-manager binary must be given execute permission, or it fails with a permission error.

https://blog.csdn.net/weixin_42411818/article/details/99444207
