1. Download and upload the installation packages

Upload to: /opt/app/middles
Kafka:
 https://archive.apache.org/dist/kafka/2.4.1/kafka_2.12-2.4.1.tgz

Zookeeper(apache-zookeeper-3.5.7-bin.tar.gz):
 http://archive.apache.org/dist/zookeeper/zookeeper-3.5.7
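If the servers have Internet access, the packages can also be downloaded in place instead of being uploaded; a minimal sketch using the URLs above (the exact ZooKeeper archive filename is assumed from the release directory listing):

cd /opt/app/middles
wget https://archive.apache.org/dist/kafka/2.4.1/kafka_2.12-2.4.1.tgz
wget http://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz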

2. Install JDK 8

cd /opt/app/middles
tar -zxvf jdk-8u201-linux-x64.tar.gz
mv jdk1.8.0_201/ jdk8

# Append JAVA_HOME to the global PATH
cat <<EOF >> /etc/profile
JAVA_HOME=/opt/app/middles/jdk8
CLASSPATH=.:\$JAVA_HOME/lib/tools.jar
PATH=\$JAVA_HOME/bin:\$PATH
export JAVA_HOME CLASSPATH PATH
EOF
EOF

# Reload the profile
source /etc/profile
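
A quick check that the JDK is on the PATH (the version string assumes the 8u201 build used above):

java -version
# Expected: java version "1.8.0_201"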

3. Install the ZooKeeper cluster

3.1 Extract ZooKeeper

cd /opt/app/middles
tar -xvf apache-zookeeper-3.5.7-bin.tar.gz

3.2 Prepare the configuration file

cat > /opt/app/middles/apache-zookeeper-3.5.7-bin/conf/zoo.cfg << EOF
tickTime=2000
dataDir=/opt/app/middles/apache-zookeeper-3.5.7-bin/data
dataLogDir=/opt/app/middles/apache-zookeeper-3.5.7-bin/logs
clientPort=2181
initLimit=10
syncLimit=5
server.1=192.168.137.204:2888:3888
server.2=192.168.137.205:2888:3888
server.3=192.168.137.206:2888:3888
admin.serverPort=8080
EOF
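
The dataDir and dataLogDir directories above may not exist yet, and the next step writes a myid file into dataDir, so create them on every node first. A minimal sketch, using the paths from zoo.cfg:

mkdir -p /opt/app/middles/apache-zookeeper-3.5.7-bin/data
mkdir -p /opt/app/middles/apache-zookeeper-3.5.7-bin/logs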

3.3 Assign myid

#### Run on 192.168.137.204
echo 1 > /opt/app/middles/apache-zookeeper-3.5.7-bin/data/myid

#### Run on 192.168.137.205
echo 2 > /opt/app/middles/apache-zookeeper-3.5.7-bin/data/myid

#### Run on 192.168.137.206
echo 3 > /opt/app/middles/apache-zookeeper-3.5.7-bin/data/myid
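
Each node's myid must match its server.N entry in zoo.cfg; a quick per-node check:

cat /opt/app/middles/apache-zookeeper-3.5.7-bin/data/myid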

3.4 Change the admin server's 8080 port or disable ZooKeeper's embedded admin console (optional)

ZooKeeper 3.5.7 ships with an embedded admin console, started via Jetty, which occupies port 8080. As configured above, the admin.serverPort entry in zoo.cfg controls this port, so you can change it there. (For browsing ZooKeeper itself, the ZooKeeper plugin in IDEA is recommended instead.)
Alternatively, stop this service outright by adding "-Dzookeeper.admin.enableServer=false" to the startup script, as follows:

vi /opt/app/middles/apache-zookeeper-3.5.7-bin/bin/zkServer.sh
Find the line -cp "$CLASSPATH" $JVMFLAGS $ZOOMAIN "$ZOOCFG" > "$_ZOO_DAEMON_OUT" 2>&1 < /dev/null & (around line 161)
and add "-Dzookeeper.admin.enableServer=false" after $JVMFLAGS.
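
Instead of editing by hand, the same change can be scripted; a hedged sketch (it assumes the relevant lines in zkServer.sh still match the $JVMFLAGS $ZOOMAIN pattern quoted above, and keeps a .bak backup):

sed -i.bak 's/\$JVMFLAGS \$ZOOMAIN/$JVMFLAGS "-Dzookeeper.admin.enableServer=false" $ZOOMAIN/' \
    /opt/app/middles/apache-zookeeper-3.5.7-bin/bin/zkServer.sh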

3.5 Start ZooKeeper on all three nodes

# Start / stop / check status (run the start on all three nodes)
/opt/app/middles/apache-zookeeper-3.5.7-bin/bin/zkServer.sh start
/opt/app/middles/apache-zookeeper-3.5.7-bin/bin/zkServer.sh stop
/opt/app/middles/apache-zookeeper-3.5.7-bin/bin/zkServer.sh status
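
Once all three nodes are up, status should report one leader and two followers. A hedged remote check using the srvr four-letter command (whitelisted by default in 3.5.x; requires nc):

echo srvr | nc 192.168.137.204 2181 | grep Mode
# Expected: Mode: leader on one node, Mode: follower on the other two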

4. Install the Kafka cluster

4.1 Extract and modify the configuration file

cd /opt/app/middles
tar -zxvf kafka_2.12-2.4.1.tgz

The cluster mostly works with the default configuration; here I changed broker.id, num.partitions, offsets.topic.replication.factor, zookeeper.connect, and zookeeper.connection.timeout.ms.

broker.id: the unique identifier of the current node; sequential numbers are recommended for easier management.
num.partitions sets the default number of partitions per topic, and default.replication.factor sets the number of replicas. I have three nodes, so I set both to 3; adjust them to your own situation and needs.
zookeeper.connect lists the nodes of the external ZooKeeper ensemble. Unless configured otherwise, ZooKeeper uses port 2181 by default.
zookeeper.connection.timeout.ms: set according to your network conditions; the default is usually fine.
default.replication.factor: the number of replicas, 1 by default; set it to 3 if you need high availability.

The following parameters should also be set; with a replication factor of 1 there is no high availability. Note that transaction.state.log.min.isr=3 means transactional writes fail as soon as any one of the three brokers is down; 2 is a common compromise.

default.replication.factor=3
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3

The detailed configuration of the three nodes is as follows:

4.1.1 Extract and configure 192.168.137.204

Note: the IPs in each configuration file must match the IP of the server the file lives on; do not mix them up.

#### You can edit vi /opt/app/middles/kafka_2.12-2.4.1/config/server.properties by hand:
# 1. Search for "192.168.137.204" and replace it with the new node's IP;
# 2. Search for "broker.id=" and set it to 2 and 3 on node 2 and node 3 respectively.
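
The same substitution can be scripted; a minimal sketch for node 2 (it assumes node 1's file as input, and rewrites only the :9092 listener addresses so that zookeeper.connect keeps all three hosts):

sed -e 's/192\.168\.137\.204:9092/192.168.137.205:9092/' \
    -e 's/^broker\.id=1/broker.id=2/' \
    /opt/app/middles/kafka_2.12-2.4.1/config/server.properties > /tmp/server.properties.node2

The full server.properties used on 192.168.137.204 follows: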

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://192.168.137.204:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://192.168.137.204:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/app/middles/kafka_2.12-2.4.1/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=3

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1, such as 3, is recommended to ensure availability.
default.replication.factor=3
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=24

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.137.204:2181,192.168.137.205:2181,192.168.137.206:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
delete.topic.enable=true
auto.create.topics.enable=false


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0



4.2 Kafka cluster start and stop commands

Run on each of the three nodes in turn:

# Start
/opt/app/middles/kafka_2.12-2.4.1/bin/kafka-server-start.sh -daemon /opt/app/middles/kafka_2.12-2.4.1/config/server.properties

# Stop
/opt/app/middles/kafka_2.12-2.4.1/bin/kafka-server-stop.sh
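
A hedged way to confirm that all three brokers have registered is to list the broker ids in ZooKeeper with the zkCli.sh installed earlier:

echo "ls /brokers/ids" | /opt/app/middles/apache-zookeeper-3.5.7-bin/bin/zkCli.sh -server 192.168.137.204:2181
# Expected to include: [1, 2, 3]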

4.3 Common Kafka commands

The commands below are run from the Kafka bin directory (/opt/app/middles/kafka_2.12-2.4.1/bin).

## List topics
./kafka-topics.sh --list --bootstrap-server 192.168.137.204:9092,192.168.137.205:9092,192.168.137.206:9092

## Create a topic
./kafka-topics.sh --create --bootstrap-server 192.168.137.204:9092,192.168.137.205:9092,192.168.137.206:9092 --topic topictest

## Delete a topic; prefer the --bootstrap-server form of the command
./kafka-topics.sh --delete --bootstrap-server 192.168.137.204:9092,192.168.137.205:9092,192.168.137.206:9092 --topic topictest

## Describe a topic
./kafka-topics.sh --describe --bootstrap-server 192.168.137.204:9092,192.168.137.205:9092,192.168.137.206:9092 --topic topictest

## Start a console producer for testing
./kafka-console-producer.sh --broker-list 192.168.137.204:9092,192.168.137.205:9092,192.168.137.206:9092 --topic topictest

## Start a console consumer for testing
./kafka-console-consumer.sh --bootstrap-server 192.168.137.204:9092,192.168.137.205:9092,192.168.137.206:9092 --from-beginning --topic topictest

## Reset a group's offsets to a point in time
## Without authentication
./kafka-consumer-groups.sh --bootstrap-server 192.168.137.204:9092,192.168.137.205:9092,192.168.137.206:9092 --group YOUR_GROUPID --reset-offsets --topic YOUR_TOPIC --to-datetime yyyy-MM-ddTHH:mm:ss.SSS+08:00 --execute
## With authentication
./kafka-consumer-groups.sh --bootstrap-server 192.168.137.204:9092,192.168.137.205:9092,192.168.137.206:9092 --group YOUR_GROUPID --reset-offsets --topic YOUR_TOPIC --to-datetime yyyy-MM-ddTHH:mm:ss.SSS+08:00 --execute --command-config test-client.properties
## Contents of test-client.properties
bootstrap.servers=192.168.137.204:9092,192.168.137.205:9092,192.168.137.206:9092
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="YOUR_USERNAME" password="YOUR_PASS";
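
A minimal end-to-end smoke test, assuming the topictest topic created above exists:

echo "hello kafka" | ./kafka-console-producer.sh --broker-list 192.168.137.204:9092 --topic topictest
./kafka-console-consumer.sh --bootstrap-server 192.168.137.204:9092 --from-beginning --topic topictest --max-messages 1
# Expected output: hello kafka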