Let's return to our demo. The next step is to actually send data, and here we look at the asynchronous send path:
producer.send(new ProducerRecord<>(topic,
        messageNo,
        messageStr), new DemoCallBack(startTime, messageNo, messageStr));
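The second argument is the callback that will be invoked once the send completes or fails. A minimal sketch of such a callback follows; the field names are assumptions for illustration and the demo's actual DemoCallBack may differ slightly.
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

class DemoCallBack implements Callback {
    private final long startTime;
    private final int key;
    private final String message;

    DemoCallBack(long startTime, int key, String message) {
        this.startTime = startTime;
        this.key = key;
        this.message = message;
    }

    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        // invoked by the producer's I/O thread once the broker responds (or the send fails)
        long elapsedTime = System.currentTimeMillis() - startTime;
        if (metadata != null) {
            System.out.println("message(" + key + ", " + message + ") sent to partition("
                    + metadata.partition() + "), offset(" + metadata.offset() + ") in " + elapsedTime + " ms");
        } else {
            exception.printStackTrace();
        }
    }
}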
Let's step into the send method:
public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback) {
    // intercept the record, which can be potentially modified; this method does not throw exceptions
    ProducerRecord<K, V> interceptedRecord = this.interceptors == null ? record : this.interceptors.onSend(record);
    // TODO the key call: delegate to doSend
    return doSend(interceptedRecord, callback);
}
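Before handing off to doSend, send() runs the record through any configured ProducerInterceptors via onSend, which may return a modified record. A minimal sketch of such an interceptor (a hypothetical class, not part of the demo) could look like this; it is registered through the interceptor.classes producer config.
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class TimestampPrefixInterceptor implements ProducerInterceptor<Integer, String> {
    @Override
    public ProducerRecord<Integer, String> onSend(ProducerRecord<Integer, String> record) {
        // return a (possibly modified) record; this is the interceptedRecord that doSend receives
        return new ProducerRecord<>(record.topic(), record.partition(), record.key(),
                System.currentTimeMillis() + "|" + record.value());
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // called once the broker acknowledges the record, or the send fails
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}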
Now let's step into doSend, which contains the core flow of the producer. Here is the overall flow:
private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
    TopicPartition tp = null;
    try {
        // first make sure the metadata for the topic is available
        /**
         * Step 1: synchronously wait for the metadata to be fetched.
         * maxBlockTimeMs is the longest we are allowed to wait.
         */
        ClusterAndWaitTime clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs);
        // clusterAndWaitTime.waitedOnMetadataMs is how long the metadata fetch took
        // maxBlockTimeMs - time already spent = time remaining for the rest of the send
        long remainingWaitMs = Math.max(0, maxBlockTimeMs - clusterAndWaitTime.waitedOnMetadataMs);
        // the updated cluster metadata
        Cluster cluster = clusterAndWaitTime.cluster;
        /**
         * Step 2: serialize the key and the value.
         */
        byte[] serializedKey;
        try {
            serializedKey = keySerializer.serialize(record.topic(), record.key());
        } catch (ClassCastException cce) {
            throw new SerializationException("Can't convert key of class " + record.key().getClass().getName() +
                    " to class " + producerConfig.getClass(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG).getName() +
                    " specified in key.serializer");
        }
        byte[] serializedValue;
        try {
            serializedValue = valueSerializer.serialize(record.topic(), record.value());
        } catch (ClassCastException cce) {
            throw new SerializationException("Can't convert value of class " + record.value().getClass().getName() +
                    " to class " + producerConfig.getClass(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG).getName() +
                    " specified in value.serializer");
        }
        /**
         * Step 3: let the partitioner pick the partition this record should go to.
         */
        int partition = partition(record, serializedKey, serializedValue, cluster);
        int serializedSize = Records.LOG_OVERHEAD + Record.recordSize(serializedKey, serializedValue);
        /**
         * Step 4: check that the record does not exceed the maximum size.
         * When the KafkaProducer was initialized, a config was set for how large a single
         * record the producer may send; the default is 1 MB, and it is often raised in production.
         */
        ensureValidRecordSize(serializedSize);
        /**
         * Step 5: build the TopicPartition object from the metadata.
         */
        tp = new TopicPartition(record.topic(), partition);
        long timestamp = record.timestamp() == null ? time.milliseconds() : record.timestamp();
        log.trace("Sending record {} with callback {} to topic {} partition {}", record, callback, record.topic(), partition);
        // producer callback will make sure to call both 'callback' and interceptor callback
        /**
         * Step 6: bind a callback to every record, since we are sending asynchronously.
         */
        Callback interceptCallback = this.interceptors == null ? callback : new InterceptorCallback<>(callback, this.interceptors, tp);
        /**
         * Step 7: append the record to the accumulator (a 32 MB in-memory buffer);
         * the accumulator groups records into batches, and batches are what actually get sent.
         */
        RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey, serializedValue, interceptCallback, remainingWaitMs);
        // if the batch is full,
        // or a new batch was just created
        if (result.batchIsFull || result.newBatchCreated) {
            log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
            /**
             * Step 8: wake up the sender thread; it is the thread that actually sends the data.
             */
            this.sender.wakeup();
        }
        return result.future;
        // handling exceptions and record the errors;
        // for API exceptions return them in the future,
        // for other exceptions throw directly
    } catch (ApiException e) {
        log.debug("Exception occurred during message send:", e);
        if (callback != null)
            callback.onCompletion(null, e);
        this.errors.record();
        if (this.interceptors != null)
            this.interceptors.onSendError(record, tp, e);
        return new FutureFailure(e);
    } catch (InterruptedException e) {
        this.errors.record();
        if (this.interceptors != null)
            this.interceptors.onSendError(record, tp, e);
        throw new InterruptException(e);
    } catch (BufferExhaustedException e) {
        this.errors.record();
        this.metrics.sensor("buffer-exhausted-records").record();
        if (this.interceptors != null)
            this.interceptors.onSendError(record, tp, e);
        throw e;
    } catch (KafkaException e) {
        this.errors.record();
        if (this.interceptors != null)
            this.interceptors.onSendError(record, tp, e);
        throw e;
    } catch (Exception e) {
        // we notify interceptor about all exceptions, since onSend is called before anything else in this method
        if (this.interceptors != null)
            this.interceptors.onSendError(record, tp, e);
        throw e;
    }
}
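The knobs that the steps above depend on are plain producer configs. A rough sketch follows; the broker address and the values are illustrative (mostly the defaults), not prescriptions.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// step 1: maxBlockTimeMs, the longest waitOnMetadata (and a full accumulator) may block
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60000);
// step 2: key/value serializers
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.IntegerSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
// step 4: the per-record size limit checked by ensureValidRecordSize, default 1 MB
props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);
// step 7: the accumulator's total buffer (default 32 MB) and the size of each batch (default 16 KB)
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
KafkaProducer<Integer, String> producer = new KafkaProducer<>(props);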
Notice how many catch blocks there are here. Looking at the lower-level code, you can see that when an exception occurs it is thrown straight up the call stack and caught by this core method; Kafka defines quite a few custom exceptions for this purpose.
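From the caller's point of view, the two error paths matter: ApiExceptions are reported through the returned Future (and the callback), while exceptions such as InterruptException are thrown from send() itself. A minimal sketch (the topic name and record values here are hypothetical):
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

Future<RecordMetadata> future = producer.send(new ProducerRecord<>("demo-topic", 1, "hello"));
try {
    // ApiException path: surfaces here wrapped in an ExecutionException
    RecordMetadata metadata = future.get();
    System.out.println("sent to partition " + metadata.partition() + " at offset " + metadata.offset());
} catch (ExecutionException e) {
    e.getCause().printStackTrace();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}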