Kafka Series
Chapter 1: Kafka Basic Concepts
Kafka Cluster Deployment in Practice
- Kafka Series
- Foreword
- I. Preparing the Environment
- II. Setup Steps
- 1. Edit the ZooKeeper configuration file zoo.cfg, start ZooKeeper with zkServer.cmd, and connect to it with ZooInspector
- 2. Prepare a server.properties for each broker; taking kafka-1 as an example, the items to change are broker.id, port, listeners, log.dirs and zookeeper.connect
- 3. Start the three Kafka brokers with .\bin\windows\kafka-server-start.bat and check the three registered nodes in ZooKeeper
- 4. Install the kafka-manager tool to monitor the Kafka cluster
- 5. Sample code
- 5.1 pom.xml
- 5.2 Configuration file
- 5.3 Consumer
- 5.4 Producer
- Summary
Foreword
This post walks through building a three-broker Kafka cluster on Windows backed by ZooKeeper, monitoring it with kafka-manager, and exercising it with a small Spring Boot producer/consumer demo. The configuration and code below can be used as a reference.
I. Preparing the Environment
OS: Windows 10
Kafka: 2.13
ZooKeeper: 3.6.2
Docker Desktop for Windows: 3.0.4
II. Setup Steps
1. Edit the ZooKeeper configuration file zoo.cfg, run zkServer.cmd to start ZooKeeper, then connect to it with ZooInspector to confirm it is running. A minimal zoo.cfg sketch is shown below.
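The post does not show the zoo.cfg contents; here is a minimal sketch for a single standalone ZooKeeper node, assuming an example dataDir and the clientPort 2181 that the brokers' zookeeper.connect points at:

# zoo.cfg -- minimal standalone setup (dataDir is an example path)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=D:/zookeeper/data
clientPort=2181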
2. Prepare a server.properties for each broker. Taking kafka-1 as an example, the items to change are broker.id, port, listeners, log.dirs and zookeeper.connect (the matching values for kafka-2 and kafka-3 are sketched after the file):
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
port=9001
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9001
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs1
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
broker.list=localhost:9001,localhost:9002,localhost:9003
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
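kafka-2 and kafka-3 use the same file with only the per-broker values changed. A sketch of the differing lines, following the port plan in broker.list above (the log directory names are examples):

# kafka-2
broker.id=2
port=9002
listeners=PLAINTEXT://:9002
log.dirs=/tmp/kafka-logs2

# kafka-3
broker.id=3
port=9003
listeners=PLAINTEXT://:9003
log.dirs=/tmp/kafka-logs3

zookeeper.connect=localhost:2181 stays the same for all three brokers.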
3. Start the three Kafka brokers, each with its own properties file, using .\bin\windows\kafka-server-start.bat .\config\server.properties; once they are up, the three registered broker nodes can be seen in ZooKeeper. The three start commands are sketched below.
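Assuming the three property files are saved as server-1.properties, server-2.properties and server-3.properties (the file names are illustrative), run each command in its own terminal:

.\bin\windows\kafka-server-start.bat .\config\server-1.properties
.\bin\windows\kafka-server-start.bat .\config\server-2.properties
.\bin\windows\kafka-server-start.bat .\config\server-3.properties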
4. Install the kafka-manager tool to monitor the state of the Kafka cluster.
docker-compose.yml configuration:
version: '3'
services:
  kafka-manager:
    image: sheepkiller/kafka-manager
    ports:
      - 9000:9000
    environment:
      ZK_HOSTS: 192.168.101.4:2181
Run docker-compose up and open localhost:9000; in the kafka-manager UI, add a cluster pointing at the same ZooKeeper address to see the three brokers.
5. Sample code
5.1 pom.xml
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.72</version>
<scope>compile</scope>
</dependency>
</dependencies>
5.2 Configuration file
server:
  port: 58080
  servlet:
    context-path: /
# kafka configuration
spring:
  application:
    name: spring-kafka-demo
  kafka:
    admin:
      client-id: 1
    num:
      partitions: 3
      replication: 2
    template:
      default-topic: test.order.topic
    producer:
      acks: 0
      bootstrap-servers: 127.0.0.1:9001,127.0.0.1:9002,127.0.0.1:9003
      batch-size: 100
      buffer-memory: 33554432
      key-serializer: org.apache.kafka.common.serialization.IntegerSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      client-id: 1002
    consumer:
      client-id: 1001
      auto-commit-interval: 100
      enable-auto-commit: true
      key-deserializer: org.apache.kafka.common.serialization.IntegerDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      auto-offset-reset: earliest
      bootstrap-servers: 127.0.0.1:9001,127.0.0.1:9002,127.0.0.1:9003
    listener:
      ack-mode: batch
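The test.order.topic topic has to exist with three partitions before the demo runs (the log output below shows records on partitions 0, 1 and 2). One way to create it from the application is a NewTopic bean picked up by Spring Boot's auto-configured KafkaAdmin; this is only a sketch, assuming the partitions: 3 / replication: 2 values above are intended for topic creation, and the class name is illustrative:

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaTopicConfig {

    // KafkaAdmin creates NewTopic beans on application startup if they do not exist yet.
    @Bean
    public NewTopic orderTopic() {
        // 3 partitions, replication factor 2 -- matching the values in the yaml above
        return new NewTopic("test.order.topic", 3, (short) 2);
    }
}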
5.3 Consumer
import java.util.Optional;

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class KafkaConsumer {

    // Listens on the same topic the producer writes to; groupId matches the demo setup.
    @KafkaListener(topics = "test.order.topic", groupId = "1")
    public void receiveMessage(ConsumerRecord<?, ?> record) {
        Optional<?> kafkaMessage = Optional.ofNullable(record);
        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            log.info("receive msg: {}", message);
        }
    }
}
5.4 Producer
import java.util.concurrent.atomic.AtomicInteger;

import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
@Slf4j
@EnableScheduling
public class KafkaProducer {

    private static final AtomicInteger adder = new AtomicInteger(0);

    @Autowired
    private KafkaTemplate<Integer, String> kafkaTemplate;

    // The key is the message's hashCode, an Integer matching the configured IntegerSerializer.
    public void send(String topic, String message) {
        kafkaTemplate.send(topic, message.hashCode(), message);
    }

    // Publishes a new Order as JSON every 5 seconds (Order is sketched after this class).
    @Scheduled(fixedRate = 5000)
    public void sendMessage() {
        send("test.order.topic", JSON.toJSONString(createOrder()));
    }

    private Order createOrder() {
        Order order = new Order();
        order.setId(adder.getAndIncrement());
        order.setName("order" + adder);
        return order;
    }
}
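The Order class is not shown in the post; here is a minimal sketch matching the fields the producer sets and the JSON visible in the log output (Lombok @Data keeps it short, and lombok is already a dependency):

import lombok.Data;

// Demo payload: only the two fields that appear in the log output below.
@Data
public class Order {
    private Integer id;
    private String name;
}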
After the application starts, the console prints logs like the following:
receive msg: ConsumerRecord(topic = test.order.topic, partition = 1, leaderEpoch = 0, offset = 17, CreateTime = 1610470037027, serialized key size = 4, serialized value size = 26, headers = RecordHeaders(headers = [], isReadOnly = false), key = 354222495, value = {"id":35,"name":"order36"})
receive msg: ConsumerRecord(topic = test.order.topic, partition = 1, leaderEpoch = 0, offset = 18, CreateTime = 1610470042023, serialized key size = 4, serialized value size = 26, headers = RecordHeaders(headers = [], isReadOnly = false), key = -490248415, value = {"id":36,"name":"order37"})
receive msg: ConsumerRecord(topic = test.order.topic, partition = 1, leaderEpoch = 0, offset = 19, CreateTime = 1610470047013, serialized key size = 4, serialized value size = 26, headers = RecordHeaders(headers = [], isReadOnly = false), key = -1334719325, value = {"id":37,"name":"order38"})
receive msg: ConsumerRecord(topic = test.order.topic, partition = 2, leaderEpoch = 0, offset = 12, CreateTime = 1610470052018, serialized key size = 4, serialized value size = 26, headers = RecordHeaders(headers = [], isReadOnly = false), key = 2115777061, value = {"id":38,"name":"order39"})
receive msg: ConsumerRecord(topic = test.order.topic, partition = 0, leaderEpoch = 0, offset = 10, CreateTime = 1610470057023, serialized key size = 4, serialized value size = 26, headers = RecordHeaders(headers = [], isReadOnly = false), key = 1271326332, value = {"id":39,"name":"order40"})
Summary
That covers setting up the Kafka cluster and a simple consumer and producer example.