【Producer】A walkthrough of the send() message-sending source code

Both the metadata refresh (waitOnMetadata) and appending records to the cache involve a lot of detail, so waitOnMetadata is analyzed in the next post, and the RecordAccumulator's data structures and append/drain process will get a dedicated post of their own.
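Before diving into the internals, here is a minimal usage sketch of the API being discussed (the broker address localhost:9092 and the topic name demo-topic are illustrative assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SendDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("demo-topic", "key-1", "value-1");
            // send() is asynchronous: it only appends the record to the RecordAccumulator and returns a Future;
            // the callback runs once the broker acknowledges the batch (or the send fails)
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("sent to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
    }
}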

@Override
public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback) {
    // run the configured interceptors first (onSend)
    ProducerRecord<K, V> interceptedRecord = this.interceptors.onSend(record);
    return doSend(interceptedRecord, callback);
}
    /**
     * Append the record to the RecordAccumulator cache queue and wake up the sender thread.
     *
     * @param record
     * @param callback
     * @return
     */
    private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
        TopicPartition tp = null;
        try {
            throwIfProducerClosed();
            // first make sure the metadata for the topic is available
            // before sending, make sure metadata for this topic is available
            ClusterAndWaitTime clusterAndWaitTime;
            try {
                /**
                 * Wait for the metadata to be updated.
                 *
                 * This refreshes the Kafka Cluster information: brokers, topic/partition
                 * assignments, and the related bookkeeping the producer needs.
                 */
                clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs);
            } catch (KafkaException e) {
                if (metadata.isClosed())
                    throw new KafkaException("Producer closed while send in progress", e);
                throw e;
            }
            long remainingWaitMs = Math.max(0, maxBlockTimeMs - clusterAndWaitTime.waitedOnMetadataMs);
            // the Cluster returned with the metadata
            Cluster cluster = clusterAndWaitTime.cluster;
            byte[] serializedKey;
            try {
                // serialize the key
                serializedKey = keySerializer.serialize(record.topic(), record.headers(), record.key());
            } catch (ClassCastException cce) {
                throw new SerializationException("Can't convert key of class " + record.key().getClass().getName()
                    + " to class " + producerConfig.getClass(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG).getName()
                    + " specified in key.serializer", cce);
            }
            byte[] serializedValue;
            try {
                // serialize the value
                serializedValue = valueSerializer.serialize(record.topic(), record.headers(), record.value());
            } catch (ClassCastException cce) {
                throw new SerializationException("Can't convert value of class " + record.value().getClass().getName()
                    + " to class " + producerConfig.getClass(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG).getName()
                    + " specified in value.serializer", cce);
            }
            // compute the partition number [see 2.1]
            int partition = partition(record, serializedKey, serializedValue, cluster);
            tp = new TopicPartition(record.topic(), partition);

            setReadOnly(record.headers());
            Header[] headers = record.headers().toArray();
            // estimate the serialized size of the record
            int serializedSize = AbstractRecords.estimateSizeInBytesUpperBound(apiVersions.maxUsableProduceMagic(),
                compressionType, serializedKey, serializedValue, headers);
            // validate the record size: it must not exceed the configured max.request.size or buffer.memory
            ensureValidRecordSize(serializedSize);
            long timestamp = record.timestamp() == null ? time.milliseconds() : record.timestamp();
            log.trace("Sending record {} with callback {} to topic {} partition {}", record, callback, record.topic(),
                partition);
            // producer callback will make sure to call both 'callback' and interceptor callback
            // the wrapped callback, see [2.2]
            Callback interceptCallback = new InterceptorCallback<>(callback, this.interceptors, tp);

            if (transactionManager != null && transactionManager.isTransactional())
                transactionManager.maybeAddPartitionToTransaction(tp);
            // append the record to the RecordAccumulator cache
            RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
                serializedValue, headers, interceptCallback, remainingWaitMs);
            // if the batch is full or a new batch was just created, wake the sender thread to send
            if (result.batchIsFull || result.newBatchCreated) {
                log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch",
                    record.topic(), partition);
                // wake the sender thread blocked on selector.select()
                this.sender.wakeup();
            }
            return result.future;
            // handling exceptions and record the errors;
            // for API exceptions return them in the future,
            // for other exceptions throw directly
        } catch (ApiException e) {
            log.debug("Exception occurred during message send:", e);
            if (callback != null)
                callback.onCompletion(null, e);
            this.errors.record();
            this.interceptors.onSendError(record, tp, e);
            return new FutureFailure(e);
        } catch (InterruptedException e) {
            this.errors.record();
            this.interceptors.onSendError(record, tp, e);
            throw new InterruptException(e);
        } catch (BufferExhaustedException e) {
            this.errors.record();
            this.metrics.sensor("buffer-exhausted-records").record();
            this.interceptors.onSendError(record, tp, e);
            throw e;
        } catch (KafkaException e) {
            this.errors.record();
            this.interceptors.onSendError(record, tp, e);
            throw e;
        } catch (Exception e) {
            // we notify interceptor about all exceptions, since onSend is called before anything else in this method
            this.interceptors.onSendError(record, tp, e);
            throw e;
        }
    }
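The configuration values referenced in doSend can all be tuned on the producer: max.block.ms bounds maxBlockTimeMs (waitOnMetadata plus buffer allocation), key.serializer / value.serializer drive the serialization step, and max.request.size / buffer.memory are the limits checked by ensureValidRecordSize. A hedged sketch of the corresponding producer Properties, with purely illustrative values:

// Illustrative values only, not recommendations
Properties props = new Properties();
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());   // keySerializer above
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); // valueSerializer above
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60_000);         // maxBlockTimeMs: bound on waitOnMetadata + buffer allocation
props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1_048_576);  // upper bound checked by ensureValidRecordSize
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33_554_432L);   // total RecordAccumulator memory, also checked there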
2.1 Computing the partition number
    private int partition(ProducerRecord<K, V> record, byte[] serializedKey, byte[] serializedValue, Cluster cluster) {
       // partition number explicitly set on the record, if any
        Integer partition = record.partition();
        // use the record's partition if one was specified, otherwise ask the partitioner to compute it
        return partition != null ? partition : partitioner.partition(record.topic(), record.key(), serializedKey,
            record.value(), serializedValue, cluster);
    }

As the source shows, the sender thread is woken up to transmit data whenever the current batch in the RecordAccumulator is full or a new batch has just been created.
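Whether a batch counts as full is governed mainly by batch.size, while linger.ms bounds how long a non-full batch may sit before the sender drains it anyway. Continuing the Properties sketch above, with example values:

// Illustrative batching settings (example values)
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16_384); // bytes per partition batch; a full batch sets batchIsFull and wakes the sender
props.put(ProducerConfig.LINGER_MS_CONFIG, 5);       // extra time a non-full batch may wait before being sent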

For the detailed partitioning rules, see the dedicated post:
Partitioning with the partitioner

2.2 The callback:
private InterceptorCallback(Callback userCallback, ProducerInterceptors<K, V> interceptors, TopicPartition tp) {
    this.userCallback = userCallback;
    this.interceptors = interceptors;
    this.tp = tp;
}

public void onCompletion(RecordMetadata metadata, Exception exception) {
    metadata = metadata != null ? metadata
        : new RecordMetadata(tp, -1, -1, RecordBatch.NO_TIMESTAMP, Long.valueOf(-1L), -1, -1);
    // first invoke each interceptor's onAcknowledgement, then the user Callback
    this.interceptors.onAcknowledgement(metadata, exception);
    if (this.userCallback != null)
        this.userCallback.onCompletion(metadata, exception);
}

As shown above, the interceptors are iterated before the user Callback is invoked. By default there are none; if the user has configured one or more interceptors, their onAcknowledgement methods are called in the configured order, and only then is the Callback executed.
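To illustrate that ordering, here is a minimal, hypothetical interceptor: onSend runs in send() before the record is serialized and partitioned in doSend, and onAcknowledgement runs inside InterceptorCallback.onCompletion() before the user callback.

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Minimal interceptor sketch: tags each value on the way out and logs the ack before the user callback runs
public class LoggingInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // called in send() before doSend(); may return a modified record
        return new ProducerRecord<>(record.topic(), record.partition(), record.timestamp(),
                record.key(), "[traced] " + record.value(), record.headers());
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // called from InterceptorCallback.onCompletion() before the user Callback
        if (exception != null) {
            System.err.println("send failed: " + exception.getMessage());
        } else {
            System.out.println("acked: " + metadata.topic() + "-" + metadata.partition() + "@" + metadata.offset());
        }
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}

It would be registered with props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, LoggingInterceptor.class.getName()); multiple comma-separated interceptor classes run in the listed order, matching the iteration described above.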
