Kafka (2) Cluster Setup

1. Start the ZooKeeper bundled with Kafka

Reference: https://blog.csdn.net/justlpf/article/details/127261664?utm_medium=distribute.pc_relevant.none-task-blog-2defaultbaidujs_baidulandingword~default-0-127261664-blog-127495317.pc_relevant_3mothn_strategy_recovery&spm=1001.2101.3001.4242.1&utm_relevant_index=3

Pick three machines on which passwordless SSH login has already been set up, then add their host entries:
vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.218.22 node1
10.1.218.26 node2
10.1.218.24 node3
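
A quick sanity check that name resolution and the passwordless SSH mentioned above both work can be run from node1 (an optional check, not in the original):

# each command should print the remote hostname without asking for a password
for host in node2 node3; do ssh $host hostname; done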

Environment

1. Java environment (JDK 1.8) installed on all three servers.
2. Kafka package: kafka_2.12-3.0.0.tgz
3. The three servers are node1, node2, and node3.

Upload kafka_2.12-3.0.0.tgz to /opt/server/kafka on node1 and extract it there.

Write a file-sync script:
vi xsync

#!/bin/bash

#1. Check the number of arguments
if [ $# -lt 1 ]
then
    echo "Not enough arguments!"
    exit 1
fi

#2. Loop over the other machines in the cluster
for host in node2 node3
do
    echo ====================  $host  ====================
    #3. Loop over every file/directory given and send it

    for file in "$@"
    do
        #4. Check whether the file exists
        if [ -e "$file" ]
            then
                #5. Get the absolute parent directory
                pdir=$(cd -P "$(dirname "$file")"; pwd)

                #6. Get the file name
                fname=$(basename "$file")
                ssh "$host" "mkdir -p $pdir"
                rsync -av "$pdir/$fname" "$host:$pdir"
            else
                echo "$file does not exist!"
        fi
    done
done

Grant execute permission:

chmod +x xsync

This script pushes the given files from node1 to node2 and node3.
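
For example, after extracting Kafka on node1 you can push the whole installation to node2 and node3 in one go (a usage sketch, not part of the original steps):

# hypothetical invocation: distribute the extracted Kafka directory to node2 and node3
./xsync /opt/server/kafka/kafka_2.12-3.0.0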

On node1, node2, and node3, create the ZooKeeper data directory and the myid file:

cd /opt/server/kafka/kafka_2.12-3.0.0
mkdir zk_kfk_data
cd zk_kfk_data
vi myid
The myid file contains 1 on node1, 2 on node2, and 3 on node3.
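
If you would rather do this from node1 over SSH than log in to each machine, a loop along these lines works (a sketch that relies on the passwordless SSH set up earlier and on the id-to-host mapping listed above):

# write myid = 1/2/3 into the data dir on node1/node2/node3
id=1
for host in node1 node2 node3
do
    ssh $host "mkdir -p /opt/server/kafka/kafka_2.12-3.0.0/zk_kfk_data && echo $id > /opt/server/kafka/kafka_2.12-3.0.0/zk_kfk_data/myid"
    id=$((id+1))
done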

Edit the ZooKeeper configuration file on node1, node2, and node3:

vi zookeeper.properties

dataDir=/opt/server/kafka/kafka_2.12-3.0.0/zk_kfk_data
maxClientCnxns=0
tickTime=2000
initLimit=10
syncLimit=5

server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

The zookeeper.properties file is identical on all three machines. dataDir is ZooKeeper's data directory, and server.1, server.2, server.3 describe the cluster members.
Port 2888 is used for communication between ZooKeeper servers (followers talking to the leader).
Port 3888 is used for leader election between the servers.
tickTime: the basic heartbeat interval.
It is the interval at which heartbeats are exchanged between ZooKeeper servers, and between clients and servers; one heartbeat is sent every tickTime.
tickTime is in milliseconds.
ZooKeeper also has a session concept similar to sessions in web development, and the minimum session timeout it accepts is twice the tickTime.
initLimit: the leader-follower initial connection limit.
The maximum number of ticks a follower (F) may take to connect and sync to the leader (L) when it first joins; with tickTime=2000 and initLimit=10 that is 20 seconds.
syncLimit: the leader-follower sync limit.
The maximum number of ticks allowed between a request and its reply when a follower syncs with the leader; with syncLimit=5 that is 10 seconds.
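
One line the snippet above does not show is the client port. Kafka's bundled config/zookeeper.properties ships with it set, and the port-2181 check below assumes it is kept (a note added here for completeness):

# client listener port, from the default file shipped with Kafka
clientPort=2181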

Start ZooKeeper on each of the machines:
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &>>zookeeper.log &

[root@node1 kafka_2.12-3.0.0]# netstat -ant |grep 2181
tcp6       0      0 :::2181                 :::*                    LISTEN  

ZooKeeper is listening on port 2181, which means it started successfully.

2. Start the Kafka cluster

On node1:

cd /opt/server/kafka/kafka_2.12-3.0.0
mkdir kafka-logs-1

Edit server.properties:

broker.id=0
# advertised.host.name / advertised.port were removed in Kafka 3.x; advertise via listeners instead
advertised.listeners=PLAINTEXT://node1:9092
log.dirs=/opt/server/kafka/kafka_2.12-3.0.0/kafka-logs-1
num.partitions=40
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=24
zookeeper.connect=node1:2181,node2:2181,node3:2181/kafka

Repeat the same steps on node2 and node3, with broker.id=1 on node2 and broker.id=2 on node3 (and advertised.listeners pointing at the local hostname). Note the /kafka chroot at the end of zookeeper.connect: all Kafka metadata will live under the /kafka path in ZooKeeper, which is why that prefix is needed when browsing ZooKeeper later.
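
For example, the lines that differ on node2 would look like this (a sketch following the pattern above, not copied from the original; node3 is analogous with broker.id=2):

broker.id=1
advertised.listeners=PLAINTEXT://node2:9092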

Start command:
kafka_2.12-3.0.0]# nohup bin/kafka-server-start.sh config/server.properties  &>>kafka.log &


[root@node1 kafka_2.12-3.0.0]# netstat -ant |grep 9092
tcp6       0      0 :::9092                 :::*                    LISTEN     
tcp6       0      0 10.1.218.22:51018       10.1.218.22:9092        ESTABLISHED
tcp6       0      0 10.1.218.22:9092        10.1.218.22:51018       ESTABLISHED

3. Inspect the Kafka node information in ZooKeeper

[root@node1 kafka_2.12-3.0.0]# bin/zookeeper-shell.sh node1:2181,node2:2181,node3:2181
Connecting to node1:2181,node2:2181,node3:2181
Welcome to ZooKeeper!
JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
ls
ls [-s] [-w] [-R] path
ls /
[kafka, zookeeper]
ls /kafka
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification]

ls /brokers/ids
Node does not exist: /brokers/ids
ls /kafka/brokers/ids
[0, 1, 2]
ls /kafka/brokers/topics
[]
get /kafka/brokers/ids/0
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://node1:9092"],"jmx_port":-1,"features":{},"host":"node1","timestamp":"1673862098693","port":9092,"version":5}

Start/stop script for the Kafka cluster:

#!/bin/bash
basePath=/opt/server/kafka/kafka_2.12-3.0.0

case $1 in
"start"){
  for host in node1 node2 node3
  do
    echo "Starting Kafka on $host"
    ssh $host "$basePath/bin/kafka-server-start.sh -daemon $basePath/config/server.properties"
  done
};;
"stop"){
  for host in node1 node2 node3
  do
     echo "Stopping Kafka on $host"
     ssh $host "$basePath/bin/kafka-server-stop.sh"
  done
};;
esac
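
Saved as, say, kf.sh (a name chosen here for illustration) and made executable, the script is used like this:

chmod +x kf.sh
./kf.sh start    # start the broker on node1, node2 and node3
./kf.sh stop     # stop all three brokers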

Create a topic:

[root@node1 kafka_2.12-3.0.0]# bin/kafka-topics.sh --bootstrap-server node1:9092,node2:9092,node3:9092 --topic first --create --partitions 1 --replication-factor 3
Created topic first.
[root@node1 kafka_2.12-3.0.0]# bin/kafka-topics.sh --bootstrap-server node1:9092,node2:9092,node3:9092 --topic first --describe
Topic: first    TopicId: IcU6kbglSSWOkruuSmgryg PartitionCount: 1       ReplicationFactor: 3    Configs: segment.bytes=1073741824
        Topic: first    Partition: 0    Leader: 1       Replicas: 1,0,2 Isr: 1,0,2
[root@node1 kafka_2.12-3.0.0]# 

Increase the partition count:

[root@node1 kafka_2.12-3.0.0]# bin/kafka-topics.sh --bootstrap-server node1:9092,node2:9092,node3:9092 --topic first --alter --partitions 3
[root@node1 kafka_2.12-3.0.0]# bin/kafka-topics.sh --bootstrap-server node1:9092,node2:9092,node3:9092 --topic first --describe
Topic: first    TopicId: IcU6kbglSSWOkruuSmgryg PartitionCount: 3       ReplicationFactor: 3    Configs: segment.bytes=1073741824
        Topic: first    Partition: 0    Leader: 1       Replicas: 1,0,2 Isr: 1,0,2
        Topic: first    Partition: 1    Leader: 2       Replicas: 2,1,0 Isr: 2,1,0
        Topic: first    Partition: 2    Leader: 0       Replicas: 0,2,1 Isr: 0,2,1

The Leader handles all read and write requests for the given partition.
Replicas lists the brokers that hold a copy of the partition, regardless of whether a broker is the leader or even currently alive; dead brokers are still listed.
Isr (in-sync replicas) is the subset of Replicas that are alive and have caught up with the leader. For partition 0 above, broker 1 is the leader, the replicas live on brokers 1, 0, and 2, and all three are currently in sync.

Simulate producing and consuming

On node1:

[root@node1 kafka_2.12-3.0.0]# bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server node1:9092
abc 
hello

On node2:

[root@node2 kafka_2.12-3.0.0]#  bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server node2:9092
abc
hello

Simulate a consumer group consuming a topic and inspect its offsets

Start a producer. The topic test-topic was created with 40 partitions (matching num.partitions=40 set earlier):

 kafka_2.12-3.0.0]#  bin/kafka-console-producer.sh --topic test-topic --bootstrap-server node1:9092,node2:9092,node3:9092

Start a consumer:

 bin/kafka-console-consumer.sh --topic test-topic --group my-group  --bootstrap-server node1:9092,node2:9092,node3:9092

Check the offsets:
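
The listing below is produced by kafka-consumer-groups.sh --describe, the same command shown in full further down:

bin/kafka-consumer-groups.sh --describe --bootstrap-server node1:9092,node2:9092,node3:9092 --group my-group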

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                              HOST            CLIENT-ID
my-group        test-topic      38         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      15         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      8          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      17         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      31         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      22         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      25         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      4          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      5          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      18         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
...... (remaining rows omitted; 40 partitions in total)

After the producer sends a message, the consumer receives it, and the offsets now look like this:

[root@node3 kafka_2.12-3.0.0]# bin/kafka-consumer-groups.sh --describe --bootstrap-server node1:9092,node2:9092,node3:9092 --group my-group

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                              HOST            CLIENT-ID
my-group        test-topic      38         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      15         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      8          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      17         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      31         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      22         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      25         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      4          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      5          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      18         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      34         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      32         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      16         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      29         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      39         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      2          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      23         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      13         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      6          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      28         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      3          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      12         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      24         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      10         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      1          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      11         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      36         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      33         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      14         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      27         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      20         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      21         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      7          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      9          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      30         1               1               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      0          0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      35         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      26         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      19         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
my-group        test-topic      37         0               0               0               consumer-my-group-1-aacccf11-d714-47ed-852f-32123d88eebe /10.1.218.26    consumer-my-group-1
[root@node3 kafka_2.12-3.0.0]# 

Stop the consumer, then have the producer send 6 more messages:

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID     HOST            CLIENT-ID
my-group        test-topic      38         0               0               0               -               -               -
my-group        test-topic      15         0               0               0               -               -               -
my-group        test-topic      8          0               0               0               -               -               -
my-group        test-topic      17         0               1               1               -               -               -
my-group        test-topic      31         0               0               0               -               -               -
my-group        test-topic      22         0               0               0               -               -               -
my-group        test-topic      25         0               0               0               -               -               -
my-group        test-topic      4          0               2               2               -               -               -
my-group        test-topic      5          0               0               0               -               -               -
my-group        test-topic      18         0               0               0               -               -               -
my-group        test-topic      34         0               0               0               -               -               -
my-group        test-topic      32         0               0               0               -               -               -
my-group        test-topic      16         0               0               0               -               -               -
my-group        test-topic      29         0               0               0               -               -               -
my-group        test-topic      39         0               1               1               -               -               -
my-group        test-topic      2          0               0               0               -               -               -
my-group        test-topic      23         0               0               0               -               -               -
my-group        test-topic      13         0               0               0               -               -               -
my-group        test-topic      6          0               0               0               -               -               -
my-group        test-topic      28         0               0               0               -               -               -
my-group        test-topic      3          0               0               0               -               -               -
my-group        test-topic      12         0               0               0               -               -               -
my-group        test-topic      24         0               0               0               -               -               -
my-group        test-topic      10         0               0               0               -               -               -
my-group        test-topic      1          0               0               0               -               -               -
my-group        test-topic      11         0               0               0               -               -               -
my-group        test-topic      36         0               0               0               -               -               -
my-group        test-topic      33         0               0               0               -               -               -
my-group        test-topic      14         0               0               0               -               -               -
my-group        test-topic      27         0               0               0               -               -               -
my-group        test-topic      20         0               0               0               -               -               -
my-group        test-topic      21         0               1               1               -               -               -
my-group        test-topic      7          0               0               0               -               -               -
my-group        test-topic      9          0               1               1               -               -               -
my-group        test-topic      30         1               1               0               -               -               -
my-group        test-topic      0          0               0               0               -               -               -
my-group        test-topic      35         0               0               0               -               -               -
my-group        test-topic      26         0               0               0               -               -               -
my-group        test-topic      19         0               0               0               -               -               -
my-group        test-topic      37         0               0               0               -               -               -

The total lag is 6: partition 4 lags by 2, and partitions 9, 17, 21, and 39 lag by 1 each.

Offsets are tracked from the consumer's point of view: an offset records how far a consumer has read in a given partition of a given topic, and every offset commit is the consumer reporting its consumption progress back to Kafka.

Committed offsets are stored on the broker side in the internal __consumer_offsets topic.

Commits can be synchronous or asynchronous. A synchronous commit blocks the consumer and hurts throughput, so asynchronous commits are generally used; the trade-off is that an asynchronous commit may fail, which can lead to messages being consumed more than once.
On the producing side, idempotence, transactions, and the acks mechanism can guarantee reliability and idempotence, but on the consuming side these offset-commit issues mean consumption is not automatically idempotent.

The consuming side therefore needs business-level deduplication or transactional consumption.
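
To see the link between consuming and committing for yourself, one option is to run the console consumer with auto-commit disabled; it should read messages without advancing the group's committed offset, so the lag remains visible in kafka-consumer-groups.sh. This is an illustrative sketch, not part of the original walkthrough:

# consume without committing offsets (enable.auto.commit is a standard consumer config)
bin/kafka-console-consumer.sh --topic test-topic --group my-group \
  --consumer-property enable.auto.commit=false \
  --bootstrap-server node1:9092,node2:9092,node3:9092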
