Kafka Principles and Practice (3): spring-kafka Producer Source Code

This article examines how the Spring-Kafka producer works, starting from the KafkaProducer send model and walking through its two phases: putting data into the pool and sending it with NIO. By following the KafkaTemplate class, the construction of KafkaProducer and the details of how it sends data, it explains how Spring-Kafka actually delivers messages, and closes with a summary of the main producer flow and the modules involved.

 


Series index

Kafka Principles and Practice (1): Principles, a 10-Minute Introduction

Kafka Principles and Practice (2): A Simple spring-kafka Exercise

Kafka Principles and Practice (3): spring-kafka Producer Source Code

Kafka Principles and Practice (4): spring-kafka Consumer Source Code

Kafka Principles and Practice (5): spring-kafka Configuration Explained

Kafka Principles and Practice (6): Summary and Takeaways

 

Contents of this article

1. The KafkaProducer send model
2. The KafkaTemplate send template
3. KafkaProducer
  3.1 Constructing a KafkaProducer
  3.2 How KafkaProducer sends data

============== Main text =====================

Because our project is built on Spring Cloud and inherits spring-boot-starter, the spring-kafka version supported by default is 1.1.7, so this article analyzes the spring-kafka 1.1.7 source. Although the official release is already at 2.0, the core methods analyzed here are essentially unchanged (see the official spring-kafka site).

1. The KafkaProducer send model

As the send-model diagram shows, a send request initiated by KafkaTemplate can be broken into the following steps (a toy sketch of the batching phase follows the two lists below):

 

Phase 1: Putting data into the pool

1. KafkaProducer starts sending the message

2. The producer interceptors intercept the record

3. The serializers serialize the key and value

4. The partitioner picks a partition for the record

5. The record is appended to the record accumulator (RecordAccumulator)

Phase 2: Sending data with NIO

6. When the accumulated records reach the batch-size threshold, or a new RecordBatch is created, the Sender thread is woken up immediately and executes its run method

7. Inside the Sender, the RecordBatches to send are pulled from the accumulator's Deque and converted into ClientRequest objects

8. Still inside the Sender, the NetworkClient converts each ClientRequest into a RequestSend (the Send interface) and calls the Selector to stage it in the corresponding KafkaChannel (NetworkClient maintains the channel map Map<String, KafkaChannel> channels)

9. NIO actually sends the message: (1) Selector.select(); (2) the Send data (ByteBuffer[]) held in the KafkaChannel is written to the channel's write channel, a GatheringByteChannel
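To make the batching half of this model concrete, here is a toy sketch of steps 5 to 7. It is not the real Kafka code: the class below is invented for illustration, it keeps a single deque instead of one per topic-partition, and it ignores details such as draining a partially filled batch when linger.ms expires. It only shows the pattern of appending records to batches in a deque and waking a background sender thread when a batch is ready.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.LockSupport;

// Toy stand-in for the RecordAccumulator + Sender pair (illustration only, not Kafka classes).
public class MiniAccumulator {

    private final Deque<StringBuilder> deque = new ArrayDeque<>();
    private final int batchSize = 16 * 1024; // bytes, plays the role of batch.size
    private final Thread sender = new Thread(this::drainLoop, "mini-sender");

    public MiniAccumulator() {
        sender.setDaemon(true);
        sender.start();
    }

    // Step 5: append to the newest batch; step 6: wake the sender on a full or newly created batch.
    public synchronized void append(String record) {
        StringBuilder last = deque.peekLast();
        if (last == null || last.length() >= batchSize) {
            last = new StringBuilder();
            deque.addLast(last);
            LockSupport.unpark(sender); // a new batch was created
        }
        last.append(record).append('\n');
        if (last.length() >= batchSize) {
            LockSupport.unpark(sender); // the batch-size threshold was reached
        }
    }

    // Step 7, simplified: the sender drains full batches from the head of the deque and "sends" them.
    private void drainLoop() {
        while (true) {
            StringBuilder ready = null;
            synchronized (this) {
                StringBuilder first = deque.peekFirst();
                if (first != null && first.length() >= batchSize) {
                    ready = deque.pollFirst();
                }
            }
            if (ready != null) {
                System.out.println("sending a batch of " + ready.length() + " bytes");
            } else {
                LockSupport.park(); // nothing ready; wait until append() wakes us
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MiniAccumulator accumulator = new MiniAccumulator();
        for (int i = 0; i < 100_000; i++) {
            accumulator.append("record-" + i);
        }
        Thread.sleep(200); // give the daemon sender a moment to drain the full batches
    }
}

In the real producer the accumulator keeps one such deque per topic-partition, and the Sender does not print the batch but turns it into a ClientRequest and pushes it through NetworkClient and the NIO Selector, which is what steps 8 and 9 describe.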

 

2. The KafkaTemplate template

spring-kafka provides the simple KafkaTemplate class: you just call its send methods, and the only requirement is that the container knows about the bean (see the XML bean configuration in part 2 of this series).
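For reference, here is a minimal Java wiring sketch of roughly what those XML bean definitions amount to; the broker address localhost:9092 and the String serializers are illustrative assumptions, not part of the original article. The container just needs a ProducerFactory bean and a KafkaTemplate bean built on top of it. The KafkaTemplate source these rely on follows.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

public class KafkaTemplateWiring {

    // Builds a producer factory and a template, the two beans the container needs to know about.
    public static KafkaTemplate<String, String> buildTemplate() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(config);
        return new KafkaTemplate<>(producerFactory); // autoFlush defaults to false
    }
}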

public class KafkaTemplate<K, V> implements KafkaOperations<K, V> {

    ...

    /**
     * Create an instance using the supplied producer factory and autoFlush false.
     * @param producerFactory the producer factory.
     */
    public KafkaTemplate(ProducerFactory<K, V> producerFactory) {
        this(producerFactory, false);
    }

    /**
     * Create an instance using the supplied producer factory and autoFlush setting.
     * Set autoFlush to true if you wish to synchronously interact with Kafka, calling
     * {@link java.util.concurrent.Future#get()} on the result.
     * @param producerFactory the producer factory.
     * @param autoFlush true to flush after each send.
     */
    public KafkaTemplate(ProducerFactory<K, V> producerFactory, boolean autoFlush) {
        this.producerFactory = producerFactory;
        this.autoFlush = autoFlush;
    }

    ...

    /**
     * Send the producer record.
     * @param producerRecord the producer record.
     * @return a Future for the {@link RecordMetadata}.
     */
    protected ListenableFuture<SendResult<K, V>> doSend(final ProducerRecord<K, V> producerRecord) {
        final Producer<K, V> producer = getTheProducer();
        if (this.logger.isTraceEnabled()) {
            this.logger.trace("Sending: " + producerRecord);
        }
        final SettableListenableFuture<SendResult<K, V>> future = new SettableListenableFuture<>();
        producer.send(producerRecord, new Callback() {

            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                try {
                    if (exception == null) {
                        future.set(new SendResult<>(producerRecord, metadata));
                        if (KafkaTemplate.this.producerListener != null
                                && KafkaTemplate.this.producerListener.isInterestedInSuccess()) {
                            KafkaTemplate.this.producerListener.onSuccess(producerRecord.topic(),
                                    producerRecord.partition(), producerRecord.key(), producerRecord.value(), metadata);
                        }
                    }
                    else {
                        future.setException(new KafkaProducerException(producerRecord, "Failed to send", exception));
                        if (KafkaTemplate.this.producerListener != null) {
                            KafkaTemplate.this.producerListener.onError(producerRecord.topic(),
                                    producerRecord.partition(),
                                    producerRecord.key(),
                                    producerRecord.value(),
                                    exception);
                        }
                    }
                }
                finally {
                    producer.close();
                }
            }

        });
        if (this.autoFlush) {
            flush();
        }
        if (this.logger.isTraceEnabled()) {
            this.logger.trace("Sent: " + producerRecord);
        }
        return future;
    }

    ...

}

Key points in the KafkaTemplate source

1. The constructors take a ProducerFactory and an autoFlush flag (when autoFlush is true, the buffered records are flushed, i.e. sent immediately, after each send).

2. The send method doSend has two core points:

1) producer.send(producerRecord, Callback), where producer is a KafkaProducer;

2) the Callback's onCompletion runs when the send finishes and drives either the success path (future.set, producerListener.onSuccess) or the error path (future.setException, producerListener.onError).
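Seen from the caller's side, the same machinery looks like the following minimal usage sketch; the topic name "demo-topic" and the String key/value types are illustrative assumptions. The ListenableFuture returned by send() is the future that doSend completes inside its Callback.

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

public class SendExample {

    public void sendOne(KafkaTemplate<String, String> kafkaTemplate) {
        // send() wraps the arguments in a ProducerRecord and delegates to doSend() shown above
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send("demo-topic", "key", "value");
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> result) {
                // completed via future.set(new SendResult<>(...)); metadata carries partition and offset
                System.out.println("sent, offset=" + result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                // completed via future.setException(new KafkaProducerException(...))
                System.err.println("send failed: " + ex.getMessage());
            }
        });
    }
}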

3. KafkaProducer

3.1 Constructing a KafkaProducer

@SuppressWarnings({"unchecked", "deprecation"})
private KafkaProducer(ProducerConfig config, Serializer<K> keySerializer, Serializer<V> valueSerializer) {
    try {
        log.trace("Starting the Kafka producer");
        Map<String, Object> userProvidedConfigs = config.originals();
        this.producerConfig = config;
        this.time = new SystemTime();

        clientId = config.getString(ProducerConfig.CLIENT_ID_CONFIG);
        if (clientId.length() <= 0)
            clientId = "producer-" + PRODUCER_CLIENT_ID_SEQUENCE.getAndIncrement();
        Map<String, String> metricTags = new LinkedHashMap<String, String>();
        metricTags.put("client-id", clientId);
        MetricConfig metricConfig = new MetricConfig().samples(config.getInt(ProducerConfig.METRICS_NUM_SAMPLES_CONFIG))
                .timeWindow(config.getLong(ProducerConfig.METRICS_SAMPLE_WINDOW_MS_CONFIG), TimeUnit.MILLISECONDS)
                .tags(metricTags);
        List<MetricsReporter> reporters = config.getConfiguredInstances(ProducerConfig.METRIC_REPORTER_CLASSES_CONFIG,
                MetricsReporter.class);
        reporters.add(new JmxReporter(JMX_PREFIX));
        this.metrics = new Metrics(metricConfig, reporters, time);
        this.partitioner = config.getConfiguredInstance(ProducerConfig.PARTITIONER_CLASS_CONFIG, Partitioner.class);
        long retryBackoffMs = config.getLong(ProducerConfig.RETRY_BACKOFF_MS_CONFIG);
        if (keySerializer == null) {
            this.keySerializer = config.getConfiguredInstance(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    Serializer.class);
            this.keySerializer.configure(config.originals(), true);
        } else {
            config.ignore(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG);
            this.keySerializer = keySerializer;
        }
        if (valueSerializer == null) {
            this.valueSerializer = config.getConfiguredInstance(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    Serializer.class);
            this.valueSerializer.configure(config.originals(), false);
        } else {
            config.ignore(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG);
            this.valueSerializer = valueSerializer;
        }

        // load interceptors and make sure they get clientId
        userProvidedConfigs.put(ProducerConfig.CLIENT_ID_CONFIG, clientId);
        List<ProducerInterceptor<K, V>> interceptorList = (List) (new ProducerConfig(userProvidedConfigs)).getConfiguredInstances(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,
                ProducerInterceptor.class);
        this.interceptors = interceptorList.isEmpty() ? null : new ProducerInterceptors<>(interceptorList);
        ...
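The listing is cut off at this point; in kafka-clients 0.10.x the remainder of the constructor goes on to create the RecordAccumulator, the cluster metadata, the NetworkClient and the background Sender thread that together implement the send model from section 1. To illustrate what the interceptor-loading code at the end of the excerpt picks up, here is a minimal ProducerInterceptor sketch; the class name and its near-empty method bodies are hypothetical, and such a class would be registered through the interceptor.classes property read above.

import java.util.Map;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hypothetical interceptor, registered via the interceptor.classes producer property.
public class LoggingProducerInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // Invoked before serialization and partitioning (step 2 of the send model);
        // may return a modified record, here it is passed through unchanged.
        return record;
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // Invoked when the broker acknowledges the record, or when the send fails.
    }

    @Override
    public void close() {
        // Release any resources the interceptor holds.
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // Receives the producer configuration, including the client.id injected by the constructor above.
    }
}

Its onSend hook runs in step 2 of the send model, before serialization and partitioning, while onAcknowledgement is driven from the same completion path as the Callback used in KafkaTemplate.doSend.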