Installing a Kafka 2.6.0 Cluster on Linux, with a Hands-On Example

I. Environment Preparation and Installation

Make sure the JDK is already installed on your Linux servers. If it is not, refer to my earlier tutorial on installing JDK 1.8 on Linux (covering both the rpm and tar.gz methods).
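
A quick sanity check of the JDK on each node, for example:

[root@host-192-168-11-21 ~]# java -version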

1. Download

The official Kafka download page is https://kafka.apache.org/downloads; the release used here is the latest at the time of writing. In the package name kafka_2.12-2.6.0, 2.12 is the Scala version (Kafka is written in Scala) and 2.6.0 is the Kafka version.

After downloading, upload the archive to a directory of your choice on the server (mine is /data), extract it, and rename the extracted directory to kafka to make later operations easier.
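
If the server has internet access, you can also fetch the archive directly on the box; the URL below assumes the standard Apache archive layout for this release:

[root@host-192-168-11-21 data]# wget https://archive.apache.org/dist/kafka/2.6.0/kafka_2.12-2.6.0.tgz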

2. Extract the archive

[root@host-192-168-11-22 data]# tar -xzvf kafka_2.12-2.6.0.tgz 

To make later operations easier, I renamed the extracted directory to kafka:

[root@host-192-168-11-23 data]# mv kafka_2.12-2.6.0/ kafka

II. Modifying the ZooKeeper and Kafka Configuration Files

The downloaded Kafka distribution ships with a bundled ZooKeeper, so you can either build the cluster directly with the bundled ZooKeeper or install ZooKeeper separately.

A standalone ZooKeeper installation is recommended and will be covered in a later post; this article uses the bundled ZooKeeper.

The bundled ZooKeeper's scripts and configuration file names differ slightly from a standalone ZooKeeper: it is started with the zookeeper-server-start.sh script in the bin directory and stopped with zookeeper-server-stop.sh, and its configuration file is config/zookeeper.properties, whose parameters you can modify as needed.

One point to stress first: the Kafka log (message data) directory and the ZooKeeper data directory both default to /tmp, whose contents are lost on reboot, so it is best to point both at custom paths.
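
For example, to create the directories used in the configuration below (run on every node):

[root@host-192-168-11-21 ~]# mkdir -p /data/kafka/data/kfkzookeeper
[root@host-192-168-11-21 ~]# mkdir -p /data/kafka/data/kafka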

1. ZooKeeper configuration file

Go into the kafka/config directory and modify zookeeper.properties as follows:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
# 
#    http://www.apache.org/licenses/LICENSE-2.0
# 
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
# ZooKeeper data directory
dataDir=/data/kafka/data/kfkzookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
# comment this out
#maxClientCnxns=0

# Disable the adminserver by default to avoid port conflicts.
# Set the port to something non-conflicting if choosing to enable this
admin.enableServer=false
# admin.serverPort=8080

# Connection parameters, added as follows
# tickTime is ZooKeeper's basic time unit, in milliseconds
tickTime=2000
# initLimit is the Leader-Follower initial connection time limit, tickTime*10
initLimit=10
# syncLimit is the Leader-Follower sync time limit, tickTime*5
syncLimit=5

# ZooKeeper ensemble members (server.<myid>=host:peerPort:electionPort);
# the three entries below are the IPs of my three cluster servers
server.1=192.168.11.21:2888:3888
server.2=192.168.11.22:2888:3888
server.3=192.168.11.23:2888:3888

2. ZooKeeper myid file

On each of the three servers, create a myid file under the ZooKeeper dataDir configured above. The file must contain only the number from the corresponding server.N entry in the configuration; it is how each node identifies itself in the ensemble. Taking the 192.168.11.21 server as an example:

[root@host-192-168-11-21 ~]# cd /data/kafka/data/kfkzookeeper
[root@host-192-168-11-21 kfkzookeeper]# echo 1 > myid

After creating it, check the file; it should contain nothing but that single number.
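
Likewise on the other two nodes, matching the server.2 and server.3 entries above:

[root@host-192-168-11-22 kfkzookeeper]# echo 2 > myid
[root@host-192-168-11-23 kfkzookeeper]# echo 3 > myid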

3. Kafka configuration file

Go into the kafka/config directory and modify server.properties as follows:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
# broker.id is the broker's numeric ID; each broker in the cluster must have a different ID, and it must be an integer
broker.id=1

# enable topic deletion (default is false)
delete.topic.enable=true

############################# Socket Server Settings #############################

# listeners is the listen address; to serve clients outside the local machine, set it to the host's own IP address
# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
# number of threads the server uses for receiving requests and sending responses
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
# number of threads the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
# send buffer (SO_SNDBUF) size used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
# receive buffer (SO_RCVBUF) size used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
# maximum amount of data a single request can carry
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# log.dirs is the log (message data) directory
log.dirs=/data/kafka/data/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# num.partitions is the default number of partitions for a newly created topic;
# more partitions allow greater consumption parallelism, but also mean more files on the brokers
num.partitions=3

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
# number of threads per data directory used for log recovery at startup and flushing at shutdown;
# increase this value for installations with data directories on RAID arrays
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
# Internal topic settings
# Replication factor for the group metadata internal topics __consumer_offsets and __transaction_state; to ensure availability, set it greater than 1 (e.g. 3) in production.
# Similarly, default.replication.factor is the default number of replicas for automatically created topics; if one replica fails another can keep serving, so 3 is a reasonable value.
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
# number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
# maximum time a message can sit in the log before a flush is forced
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
# minimum age of a log file before it becomes eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
# size-based log retention policy; works independently of log.retention.hours
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
# maximum size of a log segment file; when this size is reached, a new segment is created
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
# interval at which log segments are checked to see whether they can be deleted according to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# ZooKeeper cluster connection string; Kafka and ZooKeeper run on the same servers here, so these are the addresses of this cluster
zookeeper.connect=192.168.11.21:2181,192.168.11.22:2181,192.168.11.23:2181

# Timeout in ms for connecting to zookeeper
# timeout in ms for connecting to ZooKeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
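
Copy server.properties to each of the three nodes and change at least broker.id per broker. If you also set listeners, it must point to each node's own address; the snippet below is a plausible layout, assuming each broker listens on its own IP at port 9092 (the port used by the console producer/consumer commands later in this article):

# on 192.168.11.21
broker.id=1
listeners=PLAINTEXT://192.168.11.21:9092

# on 192.168.11.22
broker.id=2
listeners=PLAINTEXT://192.168.11.22:9092

# on 192.168.11.23
broker.id=3
listeners=PLAINTEXT://192.168.11.23:9092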

III. Starting the ZooKeeper and Kafka Cluster Services

1. Start ZooKeeper

[root@host-192-168-11-21 kafka]# bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

The -daemon flag starts ZooKeeper in the background instead of printing the startup log to the console (the same applies below); the output is saved to the logs/zookeeper.out file under the Kafka directory. Run this on each of the three servers.

2. Stop ZooKeeper

[root@host-192-168-11-21 kafka]# bin/zookeeper-server-stop.sh

3. Start Kafka (again, on all three servers)

[root@host-192-168-11-21 kafka]# bin/kafka-server-start.sh -daemon config/server.properties

4. Check the startup log

Go into the logs directory under the kafka directory:

[root@host-192-168-11-21 logs]# cat server.log

5. Check the Kafka process

Run jps to confirm that the Kafka and ZooKeeper (QuorumPeerMain) processes are running:

[root@host-192-168-11-21 logs]# jps
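
Optionally, you can also confirm that all three brokers have registered with ZooKeeper by querying the /brokers/ids znode with the bundled zookeeper-shell.sh; a healthy cluster should list the three broker IDs:

[root@host-192-168-11-21 kafka]# bin/zookeeper-shell.sh 192.168.11.21:2181 ls /brokers/ids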

6. Stop Kafka

[root@host-192-168-11-21 kafka]# bin/kafka-server-stop.sh

7. Create a topic

Here both the partition count and replication factor are set to 3; they can also be omitted, in which case the defaults from the Kafka configuration file are used.

[root@host-192-168-11-21 kafka]# bin/kafka-topics.sh --create --zookeeper 192.168.11.21:2181 --replication-factor 3 --partitions 3 --topic test

8. List topics

[root@host-192-168-11-21 kafka]# bin/kafka-topics.sh --list --zookeeper 192.168.11.21:2181

9. Describe a topic

[root@host-192-168-11-21 kafka]# bin/kafka-topics.sh --zookeeper 192.168.11.21:2181 --describe  --topic test

10. Create a producer on one server

[root@host-192-168-11-21 kafka]# bin/kafka-console-producer.sh --broker-list 192.168.11.21:9092 --topic test

11. Create a consumer on another server

[root@host-192-168-11-22 kafka]# bin/kafka-console-consumer.sh --bootstrap-server 192.168.11.22:9092 --topic test

Or consume from the beginning:

[root@host-192-168-11-23 kafka]# bin/kafka-console-consumer.sh --bootstrap-server 192.168.11.22:9092 --topic test --from-beginning
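
Lines typed into the producer console are delivered to the consumer console. For example, entering the following sample messages at the producer's > prompt should make both of them appear on the consumer side:

>hello kafka
>this is a test message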

12. Delete a topic

[root@host-192-168-11-22 kafka]# bin/kafka-topics.sh --zookeeper 192.168.11.22:2181 --delete  --topic test

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Problem 1: the server has too little memory

On servers with little memory, startup may fail with an error like: os::commit_memory(0x00000000e0000000, 536870912, 0) failed; error='Not enough space' (errno=12)

Solution: reduce memory usage by editing the heap settings in bin/zookeeper-server-start.sh, lowering the -Xmx512M -Xms512M values (the same approach works for bin/kafka-server-start.sh).
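
In that script the heap is set through the KAFKA_HEAP_OPTS variable; it could be lowered to something like the following (the exact numbers are only an illustration):

export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"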

Problem 2: "hostname: Unknown name or service" error

Cause: a hostname is configured in /etc/hostname, but there is no corresponding entry in the /etc/hosts file.

Solution: edit the /etc/hosts file and add a mapping from the IP address to the hostname.

[root@host-192-168-11-21 ~]# vi /etc/hosts
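
For example, entries like the following (hostnames taken from the shell prompts used throughout this article):

192.168.11.21 host-192-168-11-21
192.168.11.22 host-192-168-11-22
192.168.11.23 host-192-168-11-23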

Problem 3: Kafka fails to start with kafka.common.InconsistentClusterIdException

Cause: if Kafka or the cluster is shut down abnormally, the cluster ID recorded in the meta.properties file under the log directory can become inconsistent with the one stored in ZooKeeper, and this file then blocks the next startup.

Solutions: Option 1, delete the meta.properties file under the log directory. Option 2, clear the log directory entirely; this loses the data, so only do it if the log data does not matter. Option 3, change the log directory path in server.properties to another location and let Kafka create a new log directory there.
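
With the log.dirs value used in this article, option 1 would look like this, for example:

[root@host-192-168-11-21 kafka]# rm -f /data/kafka/data/kafka/meta.properties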
