RecordAccumulator Analysis
Contents
Kafka Producer Source Code Analysis, Part 1: KafkaProducer
Kafka Producer Source Code Analysis, Part 2: RecordAccumulator
Kafka Producer Source Code Analysis, Part 3: NIO
Kafka Producer Source Code Analysis, Part 4: Sender
Kafka Producer Source Code Analysis, Part 5: Summary
Introduction
This class is the first stop for every message the Kafka producer sends, so it is the natural follow-up to KafkaProducer. Without further ado, let's get straight to it.
Recap
In the previous post we saw that the doSend method of KafkaProducer first serializes the record's key and value, then hands the record off to the message accumulator, which is the class we analyze today: RecordAccumulator.
// append
RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
        serializedValue, headers, interceptCallback, remainingWaitMs);
if (result.batchIsFull || result.newBatchCreated) {
    log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
    this.sender.wakeup();
}
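Before stepping into append, it helps to picture the accumulator's core data structure: a concurrent map from each TopicPartition to a double-ended queue of ProducerBatch objects. Below is a minimal sketch of that layout together with the getOrCreateDeque lookup that append relies on; the names follow the real source, but the snippet is trimmed to just what this post needs, and a ConcurrentHashMap stands in for Kafka's internal CopyOnWriteMap.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: one deque of batches per topic-partition (simplified stand-in
// for the real field, which uses Kafka's CopyOnWriteMap)
private final ConcurrentMap<TopicPartition, Deque<ProducerBatch>> batches = new ConcurrentHashMap<>();

private Deque<ProducerBatch> getOrCreateDeque(TopicPartition tp) {
    Deque<ProducerBatch> d = this.batches.get(tp);
    if (d != null)
        return d;
    d = new ArrayDeque<>();
    // putIfAbsent settles the race: if another thread registered a deque
    // first, discard ours and use the existing one
    Deque<ProducerBatch> previous = this.batches.putIfAbsent(tp, d);
    if (previous == null)
        return d;
    else
        return previous;
}

Because the map handles its own thread safety, append only needs to lock the per-partition deque, which is why many user threads can call it concurrently.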
2.1 The append method
Let's step into the append method and take a closer look.
/**
 * Add a record to the accumulator, return the append result
 * <p>
 * The append result will contain the future metadata, and flag for whether the appended batch is full or a new batch is created
 * <p>
 *
 * @param tp The topic/partition to which this record is being sent
 * @param timestamp The timestamp of the record
 * @param key The key for the record
 * @param value The value for the record
 * @param headers the Headers for the record
 * @param callback The user-supplied callback to execute when the request is complete
 * @param maxTimeToBlock The maximum time in milliseconds to block for buffer memory to be available
 */
public RecordAppendResult append(TopicPartition tp,
                                 long timestamp,
                                 byte[] key,
                                 byte[] value,
                                 Header[] headers,
                                 Callback callback,
                                 long maxTimeToBlock) throws InterruptedException {
    // We keep track of the number of appending threads to make sure we do not miss batches in
    // abortIncompleteBatches().
    // The count of appending threads lives in an AtomicInteger: incremented here, decremented in the finally block
    appendsInProgress.incrementAndGet();
    ByteBuffer buffer = null;
    if (headers == null) headers = Record.EMPTY_HEADERS;
    try {
        // check if we have an in-progress batch
        // Look up the deque of ProducerBatch objects for this topic-partition; create one if none exists
        Deque<ProducerBatch> dq = getOrCreateDeque(tp);
        // Lock the deque pessimistically
        synchronized (dq) {
            if (closed)
                throw new KafkaException("Producer closed while send in progress");
            // Try to append the record to the last batch in the deque and return the result;
            // returns null if the deque holds no usable batch (detailed in 2.2)
            RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callback, dq);
            if (appendResult != null)
                return appendResult;
        }
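        // Note: the synchronized block deliberately ends here. free.allocate below can
        // block waiting for memory, and holding the deque lock across that wait would
        // stall every other thread appending to this partition.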
        // we don't have an in-progress record batch try to allocate a new batch
        // No existing batch can take the record, so prepare to create a new one
        byte maxUsableMagic = apiVersions.maxUsableProduceMagic();
        // Take the larger of the configured batch size (batch.size) and the upper-bound
        // estimate of the bytes needed to hold a record with the given fields
        int size = Math.max(this.batchSize, AbstractRecords.estimateSizeInBytesUpperBound(maxUsableMagic, compression, key, value, headers));
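        // Worked example (numbers assumed, not from this post): with the default
        // batch.size of 16384 bytes, a record estimated at ~100 bytes still gets a
        // 16384-byte buffer that later records can share, while a single 1 MB record
        // gets a ~1 MB buffer that holds only itself.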
log.trace("Allocating a new {} byte message buffer for topic {} partition {}", size, tp.topic(), tp.partition());
// 分配给定大小的缓冲区,如果没有足够内存则抛出异常
buffer = free.allocate(size, maxTimeToBlock);
// 再次给双端队列加上悲观锁
synchronized (dq) {
// Need to check if producer is closed again after grabbing the dequeue lock.
if (closed)
throw new KafkaException("Producer closed while send in progress");
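            // Another thread may have created a batch while this one was blocked in
            // free.allocate, so try a plain append once more before building a new
            // batch from the freshly allocated buffer.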
            // Try to append the record to the last batch in the deque and return the result;
            // returns null if the record still does not fit (detailed in 2.2)
            RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callback, dq);