Kafka Cluster Configuration and Visualization

Configuring the Kafka Cluster

  • Download Kafka from https://kafka.apache.org/downloads


  • Upload the archive to /export/software on the master node and extract it
[root@master software]# tar -zxf kafka_2.12-3.5.0.tgz -C /export/servers/
  • Create a log directory on the master node
[root@master kafka_2.12-3.5.0]# mkdir logs
[root@master kafka_2.12-3.5.0]# pwd
/export/servers/kafka_2.12-3.5.0/logs
  • Enter the Kafka installation directory and edit config/server.properties
[root@master config]# pwd
/export/servers/kafka_2.12-3.5.0/config

broker.id=1
listeners=PLAINTEXT://master:9092
log.dirs=/export/servers/kafka_2.12-3.5.0/logs
zookeeper.connect=master:2181,hadoop01:2181,hadoop02:2181
delete.topic.enable=true
  • Distribute the configured Kafka directory to hadoop01 and hadoop02 (each broker then needs a unique broker.id and its own listener address)
scp -r /export/servers/kafka_2.12-3.5.0 hadoop01:/export/servers
scp -r /export/servers/kafka_2.12-3.5.0 hadoop02:/export/servers
  • On hadoop01, edit server.properties
broker.id=3
listeners=PLAINTEXT://hadoop01:9092
  • On hadoop02, edit server.properties (a scripted way to apply these per-node edits is sketched after this list)
broker.id=5
listeners=PLAINTEXT://hadoop02:9092
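The per-node edits above can be scripted. A minimal sketch, assuming passwordless SSH from master and the paths used in this guide; the helper below is hypothetical, not part of Kafka:

#!/bin/bash
# Hypothetical helper: set a unique broker.id and listener address on each node.
declare -A IDS=([master]=1 [hadoop01]=3 [hadoop02]=5)
CFG=/export/servers/kafka_2.12-3.5.0/config/server.properties
for host in master hadoop01 hadoop02; do
  ssh "$host" "sed -i 's/^broker.id=.*/broker.id=${IDS[$host]}/' $CFG"
  ssh "$host" "sed -i 's#^listeners=.*#listeners=PLAINTEXT://$host:9092#' $CFG"
done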

Configure environment variables on the master node

$ vim /etc/profile 

# kafka
export KAFKA_HOME=/export/servers/kafka_2.12-3.5.0
export PATH=$PATH:$KAFKA_HOME/bin

$ source /etc/profile  # reload the profile so the changes take effect
  • Check that the environment variables took effect: type kafka- and press Tab twice to list the Kafka scripts
[root@master config]# kafka-
kafka-acls.sh                       kafka-delegation-tokens.sh          kafka-log-dirs.sh                   kafka-server-start.sh
kafka-broker-api-versions.sh        kafka-delete-records.sh             kafka-metadata-quorum.sh            kafka-server-stop.sh
kafka-cluster.sh                    kafka-dump-log.sh                   kafka-metadata-shell.sh             kafka-storage.sh
kafka-configs.sh                    kafka-e2e-latency.sh                kafka-mirror-maker.sh               kafka-streams-application-reset.sh
kafka-console-consumer.sh           kafka-features.sh                   kafka-producer-perf-test.sh         kafka-topics.sh
kafka-console-producer.sh           kafka-get-offsets.sh                kafka-reassign-partitions.sh        kafka-transactions.sh
kafka-consumer-groups.sh            kafka-jmx.sh                        kafka-replica-verification.sh       kafka-verifiable-consumer.sh
kafka-consumer-perf-test.sh         kafka-leader-election.sh            kafka-run-class.sh                  kafka-verifiable-producer.sh
[root@master config]# kafka-
  • Distribute /etc/profile to hadoop01 and hadoop02 (xsync is a custom sync script; a minimal sketch follows this list)
$ xsync /etc/profile
  • Log in to hadoop01 and hadoop02 and source the profile
$ source /etc/profile
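xsync is not a standard tool; on setups like this one it is a small distribution script kept in a personal bin directory. A minimal sketch, assuming rsync is installed and passwordless SSH is configured:

#!/bin/bash
# Hypothetical xsync: copy each argument to the same path on the other nodes.
for host in hadoop01 hadoop02; do
  for path in "$@"; do
    rsync -av "$path" "$host:$(dirname "$path")/"
  done
done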

Starting ZooKeeper

  • Start ZooKeeper on master, hadoop01, and hadoop02, then check the status on each node once all three are up (a helper that does this in one pass is sketched after the output below)
[root@master ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.7.1/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@master ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.7.1/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
[root@master ~]# 
[root@hadoop01 config]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.7.1/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop01 config]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.7.1/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
[root@hadoop01 config]# 
[root@hadoop02 config]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.7.1/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop02 config]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.7.1/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
[root@hadoop02 config]# 
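Logging in to each node gets tedious; the myzk.sh helper that appears later in this guide can wrap this. A minimal sketch, assuming passwordless SSH and that /etc/profile sets JAVA_HOME (non-interactive SSH shells may not load it automatically):

#!/bin/bash
# Hypothetical myzk.sh: run zkServer.sh with the given action (start/status/stop) on every node.
for host in master hadoop01 hadoop02; do
  echo "---------- zookeeper $host $1 ----------"
  ssh "$host" "source /etc/profile && /export/servers/zookeeper-3.7.1/bin/zkServer.sh $1"
done

Usage: myzk.sh start, then myzk.sh status once all three nodes are up.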

Starting Kafka

  • Start Kafka on master, hadoop01, and hadoop02; the -daemon flag runs the broker in the background
# Enter the /export/servers/kafka_2.12-3.5.0/config directory
[root@master ~]# cd /export/servers/kafka_2.12-3.5.0/config

# Start command; -daemon runs Kafka in the background
[root@master config]# kafka-server-start.sh -daemon server.properties 
[root@master config]# jps
2961 QuorumPeerMain
3670 Jps
3628 Kafka
[root@master config]# 

# Repeat on hadoop01 and hadoop02

# Stop Kafka
$ kafka-server-stop.sh
  • Test with the custom helper scripts (mykafka.sh is listed in the next section; a sketch of xcall follows the output below)
[root@master bin]# ls
docker-compose  jps  myhadoop-start  myhadoop-stop  mykafka.sh  myshutdown  myzk.sh  xcall  xsync
[root@master bin]# mykafka.sh start
---------- kafka  master start ----------
---------- kafka  hadoop01 start ----------
---------- kafka  hadoop02 start ----------
[root@master bin]# xcall
=============== master ===============
5127 Jps
5002 Kafka
4269 QuorumPeerMain
=============== hadoop01 ===============
4594 Kafka
4731 Jps
3885 QuorumPeerMain
=============== hadoop02 ===============
4564 Kafka
3863 QuorumPeerMain
4699 Jps
[root@master bin]# 
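xcall is another custom helper. A minimal sketch that runs a command (defaulting to jps) on every node and prints the banner format shown above, assuming passwordless SSH and that /etc/profile puts jps on the PATH:

#!/bin/bash
# Hypothetical xcall: run a command (default: jps) on all nodes.
cmd=${*:-jps}
for host in master hadoop01 hadoop02; do
  echo "=============== $host ==============="
  ssh "$host" "source /etc/profile && $cmd"
done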

Kafka start/stop script

  • The script handles both start and stop (usage notes follow the listing)
#!/bin/bash

case $1 in
"start"){
	for i in master hadoop01 hadoop02
	do
		echo "---------- kafka  $i start ----------"
		ssh $i "/export/servers/kafka_2.12-3.5.0/bin/kafka-server-start.sh -daemon /export/servers/kafka_2.12-3.5.0/config/server.properties"
	done
};;

"stop"){
	for i in master hadoop01 hadoop02
	do
		echo "---------- kafka  $i stop ----------"
		ssh $i "/export/servers/kafka_2.12-3.5.0/bin/kafka-server-stop.sh"
	done
};;

*)
	echo "Usage: $0 {start|stop}"
;;
esac
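To install the script, save it (e.g. as /root/bin/mykafka.sh, matching the bin directory shown in the test above), make it executable, and make sure that directory is on the PATH:

$ vim /root/bin/mykafka.sh   # paste the script above
$ chmod +x /root/bin/mykafka.sh
$ mykafka.sh start
$ mykafka.sh stop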

Log in to the ZooKeeper client and check /brokers/ids (the ids should match the broker.id values configured above)

[root@master ~]# zkCli.sh

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, flink, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /brokers 
[ids, seqid, topics]
[zk: localhost:2181(CONNECTED) 3] ls /brokers/ids 
[1, 3, 5]
[zk: localhost:2181(CONNECTED) 4] 
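The same check can be run without entering the interactive shell; zkCli.sh accepts a command on the command line and exits after running it:

$ zkCli.sh -server master:2181 ls /brokers/ids
$ zkCli.sh -server master:2181 get /brokers/ids/1   # prints the broker's registration JSON (host, port, endpoints)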

Verifying Kafka

  • kafka-console-producer.sh produces messages
# Publish to a topic from the master node
# (--broker-list is the legacy flag; recent Kafka versions prefer --bootstrap-server)
[root@master config]# kafka-console-producer.sh --topic topic1 --broker-list master:9092
>fsdaf
>hello
  • kafka-console-consumer.sh consumes messages
# From hadoop01, connect to the topic published on master
[root@hadoop01 config]# kafka-console-consumer.sh --bootstrap-server master:9092 --topic topic1
hello
test
  • Once connected, the consumer receives messages on the topic in real time; by default it only sees messages produced after it attaches, which is why the output above does not exactly match the producer input (explicit topic management and replay commands are sketched after this list)
  • Inspect consumer groups and their details
[root@master ~]# kafka-consumer-groups.sh --bootstrap-server master:9092 --list
console-consumer-79171
[root@master ~]# kafka-consumer-groups.sh --bootstrap-server master:9092 --describe --group console-consumer-79171

GROUP                  TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                           HOST             CLIENT-ID
console-consumer-79171 topic1          0          -               7               -               console-consumer-6d8bbc8d-d579-46bf-8838-c6671d895d92 /192.168.159.130 console-consumer

# CURRENT-OFFSET: the consumer group's current committed offset
# LOG-END-OFFSET: the end offset (high watermark, HW) of the topic partition
# LAG: the number of messages the group has not yet consumed
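Producing to topic1 worked without creating it first because Kafka auto-creates topics by default (auto.create.topics.enable=true). Topics can also be managed explicitly, and a consumer can replay history with --from-beginning; these flags are standard in Kafka 3.x (topic2 below is just an example name):

# Create a topic with explicit partition and replica counts
$ kafka-topics.sh --bootstrap-server master:9092 --create --topic topic2 --partitions 3 --replication-factor 3
# List and describe topics
$ kafka-topics.sh --bootstrap-server master:9092 --list
$ kafka-topics.sh --bootstrap-server master:9092 --describe --topic topic1
# Consume from the beginning of the log instead of only new messages
$ kafka-console-consumer.sh --bootstrap-server master:9092 --topic topic1 --from-beginning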


Visualization tool: kafka-eagle (EFAK)

  • Download: https://github.com/smartloli/kafka-eagle-bin/archive/v2.1.0.tar.gz
  • Upload to the master node and extract
$ cd /export/software
$ tar -zxvf kafka-eagle-bin-2.1.0.tar.gz  # extracts an inner efak-web-2.1.0-bin.tar.gz archive
$ tar -zxvf efak-web-2.1.0-bin.tar.gz -C /export/servers
  • Enter the EFAK installation directory
[root@master efak-web-2.1.0]# pwd
/export/servers/efak-web-2.1.0
  • Configure the KE_HOME environment variable
$ vim /etc/profile
# kafka eagle web
export KE_HOME=/export/servers/efak-web-2.1.0
export PATH=$PATH:$KE_HOME/bin
$ source /etc/profile
  • Edit $KE_HOME/conf/system-config.properties
efak.zk.cluster.alias=cluster1

cluster1.zk.list=master:2181,hadoop01:2181,hadoop02:2181

######################################
# The two blocks below configure EFAK's metadata database. SQLite is the
# default; MySQL is recommended for heavier workloads. SQLite is used here.
# kafka sqlite jdbc driver address
######################################
efak.driver=org.sqlite.JDBC
efak.url=jdbc:sqlite:/export/servers/efak-web-2.1.0/db/ke.db
efak.username=root
efak.password=123456

######################################
# kafka mysql jdbc driver address
######################################
#efak.driver=com.mysql.cj.jdbc.Driver
#efak.url=jdbc:mysql://localhost:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
#efak.username=root
#efak.password=123456
  • Start EFAK
$ ke.sh start
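ke.sh also accepts status, stop, and restart. On a successful start it prints the web address; in EFAK 2.x the UI listens on port 8048 by default:

$ ke.sh status    # check whether EFAK is running
$ ke.sh stop
# Web UI (default): http://master:8048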


  • Log in to the web page; the default username and password are admin / 123456

