Kafka

A Kafka cluster with the EFAK graphical monitoring console

Installation

ZooKeeper installation

This guide covers Kafka 2.6.0, which still depends on an external ZooKeeper ensemble.

  1. Download ZooKeeper
    Download ZooKeeper 3.5.10.
  2. Create a zookeeper folder, extract the archive into it, and copy the extracted directory three times, naming the copies zk1, zk2, and zk3. In each copy's conf folder, check whether zoo.cfg exists; if not, copy zoo_sample.cfg to zoo.cfg.
  3. Edit the three configuration files in turn as follows (only the directories and clientPort differ between instances):
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# ZooKeeper data and transaction-log directories
# (use .../zk2 and .../zk3 for the other two instances)
dataDir=D:/zookeeper/data/zk1
dataLogDir=D:/zookeeper/log/zk1
# the port at which the clients will connect
# (use 2188 and 2189 for zk2 and zk3)
clientPort=2187
# Ports for leader election and intra-ensemble communication
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
  4. In each of the three instances' data directories, create a file named myid (no file extension) containing 1, 2, or 3 respectively, matching that instance's server.N entry.
  5. Go into each instance's bin directory and run zkServer.cmd to start the three servers.
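The per-instance differences above (data directories, clientPort, and the myid value) lend themselves to a small script. A minimal sketch, assuming a Unix-like shell (e.g. Git Bash on Windows) and using a relative zookeeper directory as a stand-in for D:/zookeeper:

```shell
#!/bin/sh
# Generate the three instance-specific files; only dataDir/dataLogDir,
# clientPort, and the myid contents differ between zk1, zk2, and zk3.
BASE=zookeeper            # stand-in for D:/zookeeper
for i in 1 2 3; do
  mkdir -p "$BASE/data/zk$i" "$BASE/log/zk$i" "$BASE/zk$i/conf"
  cat > "$BASE/zk$i/conf/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$BASE/data/zk$i
dataLogDir=$BASE/log/zk$i
clientPort=$((2186 + i))
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889
EOF
  # myid must match the server.N entry for this instance
  echo "$i" > "$BASE/data/zk$i/myid"
done
```

Once all three are running, each instance's role can be checked with the srvr four-letter command, e.g. `echo srvr | nc localhost 2187` (srvr is on the default 4lw whitelist in 3.5.x); one instance should report Mode: leader and the other two Mode: follower.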

Kafka installation

  1. Download Kafka
    Download kafka_2.13-2.6.0.
  2. Create a kafka folder, extract the archive into it, and copy the extracted directory three times, naming the copies k1, k2, and k3. Go into each copy's config folder.
  3. Edit server.properties as follows:
# Unique id for each broker in the cluster (use 1 and 2 for k2 and k3)
broker.id=0
# Listener this broker binds to; clients must connect using this address.
# Use ports 9098 and 9099 for k2 and k3.
listeners=PLAINTEXT://192.168.1.2:9097
advertised.listeners=PLAINTEXT://192.168.1.2:9097
# host.name and port are legacy settings that the broker ignores when
# listeners is set, and broker.list is not a broker property at all;
# these three lines can be dropped.
#broker.list=192.168.1.2:9097,192.168.1.2:9098,192.168.1.2:9099
#host.name=192.168.1.2
#port=9097

num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

log.dirs=D:\kafka\log\k1
# With three brokers, raise the replication factors for the internal topics;
# leaving them at the default of 1 creates a single point of failure.
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
# ZooKeeper ensemble addresses
zookeeper.connect=localhost:2187,localhost:2188,localhost:2189
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

  4. In each instance's bin/windows folder, open a cmd window and run:
kafka-server-start.bat ../../config/server.properties
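Only three values differ between the k1/k2/k3 copies of server.properties: broker.id, the listener port, and log.dirs. A sketch that generates the per-broker overrides from one template (relative paths stand in for the D:\kafka layout; 192.168.1.2 is the LAN address used above):

```shell
#!/bin/sh
# Write the per-broker settings for brokers k1..k3.
HOST=192.168.1.2          # assumed LAN address from the config above
for i in 1 2 3; do
  id=$((i - 1))           # broker.id must be unique: 0, 1, 2
  port=$((9096 + i))      # listener ports: 9097, 9098, 9099
  mkdir -p "kafka/k$i/config"
  cat > "kafka/k$i/config/server.properties" <<EOF
broker.id=$id
listeners=PLAINTEXT://$HOST:$port
advertised.listeners=PLAINTEXT://$HOST:$port
log.dirs=kafka/log/k$i
zookeeper.connect=localhost:2187,localhost:2188,localhost:2189
EOF
done
```

With all three brokers up, creating a replicated topic makes a quick smoke test: `kafka-topics.bat --create --topic test --partitions 3 --replication-factor 3 --bootstrap-server 192.168.1.2:9097`.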

EFAK download and installation

For whatever reason, the latest EFAK release (3.0.1) kept failing on Windows but runs fine on Linux, so this section deploys EFAK on a Linux host.

  1. Download
    Official site

  2. The archive needs to be extracted twice (the outer package contains another tar.gz); then go into the conf folder.

  3. Modify the following settings in system-config.properties:

# ZooKeeper addresses of the monitored Kafka cluster
efak.zk.cluster.alias=cluster1
cluster1.zk.list=192.168.1.2:2187,192.168.1.2:2188,192.168.1.2:2189
cluster1.efak.offset.storage=kafka
#cluster2.efak.offset.storage=zk
# MySQL database for EFAK's own metadata
efak.driver=com.mysql.cj.jdbc.Driver
efak.url=jdbc:mysql://192.168.1.2:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
efak.username=root
efak.password=123456
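EFAK creates its own tables on first start, but the ke database named in efak.url must already exist. A sketch that writes a one-time init script for it (init-ke.sql is a hypothetical filename; feed it to the MySQL root account configured above):

```shell
#!/bin/sh
# Write the one-time MySQL setup for EFAK's metadata store.
# Apply it with: mysql -u root -p < init-ke.sql
cat > init-ke.sql <<'EOF'
-- Database named in efak.url; utf8mb4 matches the UTF-8 JDBC parameters
CREATE DATABASE IF NOT EXISTS ke DEFAULT CHARACTER SET utf8mb4;
EOF
```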
  4. Modify the Kafka startup script kafka-server-start.bat. EFAK reads broker metrics over JMX, so each broker must export a JMX_PORT; heap options are commonly tuned at the same time:
IF ["%KAFKA_HEAP_OPTS%"] EQU [""] (
    rem detect OS architecture
    wmic os get osarchitecture | find /i "32-bit" >nul 2>&1
    IF NOT ERRORLEVEL 1 (
        rem 32-bit OS
        set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
    ) ELSE (
        rem 64-bit OS
        rem set KAFKA_HEAP_OPTS=-Xmx1G -Xms1G
        rem Note: -XX:PermSize was removed in JDK 8 (PermGen is gone);
        rem drop it, or use -XX:MaxMetaspaceSize, on modern JVMs.
        set KAFKA_HEAP_OPTS=-Xms2G -Xmx2G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70
        set JMX_PORT=9987
    )
)

Note that the above is the Windows script; on Linux, make the equivalent change in kafka-server-start.sh:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
    export JMX_PORT="9999"
fi
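The [ "x$KAFKA_HEAP_OPTS" = "x" ] guard in the script above is a portable "unset or empty" test; the same default-if-unset logic can be sketched with POSIX parameter expansion:

```shell
#!/bin/sh
# Equivalent to the guard above: keep an existing KAFKA_HEAP_OPTS,
# otherwise fall back to the tuned defaults.
export KAFKA_HEAP_OPTS="${KAFKA_HEAP_OPTS:--Xms2G -Xmx2G -XX:+UseG1GC -XX:MaxGCPauseMillis=200}"
export JMX_PORT="${JMX_PORT:-9999}"
echo "$KAFKA_HEAP_OPTS"
```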
  5. Go into EFAK's bin directory and start it:
./ke.sh start
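If startup succeeds, ./ke.sh status checks the daemon, and the web console listens on port 8048 by default (initial credentials on a fresh install are admin / 123456). A trivial sketch printing the console URL for the host configured above:

```shell
#!/bin/sh
EFAK_HOST=192.168.1.2    # assumed host from system-config.properties above
EFAK_PORT=8048           # EFAK's default web-console port
echo "EFAK console: http://${EFAK_HOST}:${EFAK_PORT}"
```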