Kafka producer idempotent sends (log-recover); consumer duplicate consumption (configuration)

springboot-kafka-producer (the producer settings below are an excerpt of Spring Boot's KafkaProperties.Producer)

		/**
		 * Number of acknowledgments the producer requires the leader to have received
		 * before considering a request complete.
		 */
		private String acks;

		/**
		 * Default batch size in bytes (batch.size); records are batched per
		 * partition up to this size before sending.
		 */
		private Integer batchSize;

		/**
		 * Comma-delimited list of host:port pairs to use for establishing the initial
		 * connection to the Kafka cluster.
		 */
		private List<String> bootstrapServers;

		/**
		 * Total bytes of memory the producer can use to buffer records waiting to be sent
		 * to the server.
		 */
		private Long bufferMemory;

		/**
		 * Id to pass to the server when making requests; used for server-side logging.
		 */
		private String clientId;

		/**
		 * Compression type for all data generated by the producer.
		 */
		private String compressionType;

		/**
		 * Serializer class for keys.
		 */
		private Class<?> keySerializer = StringSerializer.class;

		/**
		 * Serializer class for values.
		 */
		private Class<?> valueSerializer = StringSerializer.class;

		/**
		 * When greater than zero, enables retrying of failed sends.
		 */
		private Integer retries;
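
These Spring Boot fields map onto the native producer configs (acks, batch.size, bootstrap.servers, and so on). Below is a minimal sketch, not taken from the post, of the idempotent-send setup the title refers to, using the plain Kafka Java client; the bootstrap address is a placeholder, and in Spring Boot the same keys would go under spring.kafka.producer.* (enable.idempotence via spring.kafka.producer.properties.*).

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Idempotence requires acks=all and retries > 0; the broker then
        // de-duplicates retried batches by producer id + sequence number.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Retries caused by transient broker errors will not produce duplicates.
            producer.send(new ProducerRecord<>("topic_zzh_test", "key", "value"));
        }
    }
}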

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

Assume there is a single Kafka cluster consisting of one broker, i.e. one physical machine. On this broker, set log.dirs=/tmp/kafka-logs in $KAFKA_HOME/config/server.properties to choose where Kafka stores its message files, and create a topic named topic_zzh_test with 4 partitions ($KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper localhost:2181 --partitions 4 --topic topic_zzh_test --replication-factor 1; with only one broker the replication factor cannot exceed 1). Four directories then appear under /tmp/kafka-logs:

drwxr-xr-x 2 root root 4096 Apr 10 16:10 topic_zzh_test-0
drwxr-xr-x 2 root root 4096 Apr 10 16:10 topic_zzh_test-1
drwxr-xr-x 2 root root 4096 Apr 10 16:10 topic_zzh_test-2
drwxr-xr-x 2 root root 4096 Apr 10 16:10 topic_zzh_test-3

Each partition is an ordered, immutable sequence of records that is continually appended to—a structured commit log. The records in the partitions are each assigned a sequential id number called the offset that uniquely identifies each record within the partition.
The Kafka cluster durably persists all published records—whether or not they have been consumed—using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka’s performance is effectively constant with respect to data size so storing data for a long time is not a problem.
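
The retention period mentioned above is an ordinary broker or topic config (log.retention.hours broker-wide, retention.ms per topic). As a hedged sketch, assuming a Kafka 2.3+ AdminClient and a placeholder bootstrap address, the two-day policy could be applied to the example topic like this:

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RetentionConfigDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "topic_zzh_test");
            // retention.ms = 2 days, matching the two-day example above;
            // segments older than this become eligible for deletion.
            AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", String.valueOf(2L * 24 * 60 * 60 * 1000)),
                    AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> update =
                    Map.of(topic, List.of(setRetention));
            admin.incrementalAlterConfigs(update).all().get();
        }
    }
}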


HW and LEO. LEO is short for LogEndOffset: the position of the last message in each partition's log. HW is short for HighWatermark: the offset up to which consumers are allowed to read in the partition. HW is tied to the multi-replica mechanism covered below.
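
LEO is internal to the brokers and their replicas, but HW is observable from a client: under the default read_uncommitted isolation level, KafkaConsumer.endOffsets() reports the high watermark of each partition. A small sketch (placeholder bootstrap address):

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class HighWatermarkDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("topic_zzh_test", 0);
            // Under the default read_uncommitted isolation level, endOffsets()
            // reports the high watermark (HW) of each partition.
            Map<TopicPartition, Long> hw = consumer.endOffsets(Collections.singleton(tp));
            System.out.println("HW of " + tp + " = " + hw.get(tp));
        }
    }
}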

In fact, the only metadata retained on a per-consumer basis is the offset or position of that consumer in the log. This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads records, but, in fact, since the position is controlled by the consumer it can consume records in any order it likes. For example a consumer can reset to an older offset to reprocess data from the past or skip ahead to the most recent record and start consuming from “now”.
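
A short sketch of that offset control with the Java consumer; the bootstrap address is a placeholder, and assign() is used instead of group subscription so the seeks are explicit:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SeekDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("topic_zzh_test", 0);
            consumer.assign(Collections.singleton(tp));
            consumer.seekToBeginning(Collections.singleton(tp)); // reprocess from the start
            // consumer.seek(tp, 42L);                           // or jump to a chosen offset
            // consumer.seekToEnd(Collections.singleton(tp));    // or start from "now"
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.offset() + ": " + record.value());
            }
        }
    }
}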
Kafka's replication protocol guarantees that if the leader fails or crashes, a new leader is elected and client writes continue to succeed. Kafka ensures the new leader is chosen from the ISR (In-Sync Replicas, the set of followers that have caught up with the leader; see the next section). The leader maintains and tracks how far each follower in the ISR lags behind. When a producer sends a message to the broker, the leader writes it and replicates it to the followers; a message is considered committed only once it has been replicated to all in-sync replicas. Replication latency is therefore bounded by the slowest follower, so it is important to detect slow replicas quickly: if a follower falls too far behind or fails, the leader removes it from the ISR.
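
The per-partition ISR can be inspected with the AdminClient, which makes it easy to see a slow or failed follower drop out of the list. A minimal sketch (placeholder bootstrap address):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class IsrDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singleton("topic_zzh_test"))
                    .all().get().get("topic_zzh_test");
            for (TopicPartitionInfo p : desc.partitions()) {
                // A follower that lags too far is removed from the ISR by the leader,
                // so isr() shrinking below replicas() signals a slow or failed replica.
                System.out.println("partition " + p.partition()
                        + " leader=" + p.leader() + " isr=" + p.isr());
            }
        }
    }
}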
Consumers label themselves with a consumer group name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.

Consumer duplicate consumption
https://www.jianshu.com/p/4e00dff97f39
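
Duplicate consumption usually stems from at-least-once delivery: records are processed but the consumer crashes or rebalances before committing its offset, so they are redelivered. The usual configuration-level mitigation, sketched below with placeholder group id, topic, and address, is to disable auto-commit and commit synchronously after processing; this narrows the duplicate window, but true exactly-once still requires the handler itself to be idempotent:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        // Disable auto-commit so offsets only advance after processing succeeds.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("topic_zzh_test"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Process the record; a crash here still replays this batch,
                    // so the handler should be idempotent (e.g. a keyed upsert).
                    System.out.println(record.offset() + ": " + record.value());
                }
                consumer.commitSync();
            }
        }
    }
}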

Producer idempotent sends
https://blog.csdn.net/alex_xfboy/article/details/82988259
