1 Kafka API
1.1 Producer API
1.1.1 Message send flow
Kafka's Producer sends messages asynchronously. The send path involves two threads — the main thread and the Sender thread — plus a shared buffer, the RecordAccumulator. The main thread appends messages to the RecordAccumulator, and the Sender thread continuously pulls batches from the RecordAccumulator and sends them to the Kafka brokers.
Related parameters:
batch.size: the Sender sends a batch only once it has accumulated batch.size bytes of data.
linger.ms: if the accumulated data falls short of batch.size, the Sender sends it anyway after waiting linger.ms.
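In code, these two parameters map to ProducerConfig.BATCH_SIZE_CONFIG and ProducerConfig.LINGER_MS_CONFIG. A minimal sketch of tuning them (the values here are illustrative, not recommendations):

Properties properties = new Properties();
properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384); // send once a batch reaches 16 KB
properties.put(ProducerConfig.LINGER_MS_CONFIG, 100);    // or after waiting at most 100 ms

Whichever threshold is hit first triggers the send; the full producer examples below show these settings in context.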
1.1.2 Asynchronous send API
1) Import the dependency
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.6.2</version>
</dependency>
2) Write the code
Classes needed:
KafkaProducer: create a producer object, used to send data
ProducerConfig: provides the full set of configuration parameter names
ProducerRecord: each message must be wrapped in a ProducerRecord object
1. API without a callback
package com.jackyan.kafka.producer;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class CustomProducer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop101:9092,hadoop102:9092,hadoop103:9092");
        properties.put(ProducerConfig.ACKS_CONFIG, "all");
        // properties.put(ProducerConfig.RETRIES_CONFIG, 1); // number of retries
        // properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384); // batch size in bytes
        // properties.put(ProducerConfig.LINGER_MS_CONFIG, 100); // max wait time in ms
        // properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432); // RecordAccumulator buffer size in bytes
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("test1", i + "", "messages" + i));
        }
        producer.close();
    }
}
2. API with a callback
The callback fires when the producer receives the ack, and it is invoked asynchronously. It takes two parameters, a RecordMetadata and an Exception: if the Exception is null, the send succeeded; if it is non-null, the send failed.
Note: failed sends are retried automatically by the producer, so there is no need to retry manually inside the callback.
package com.jackyan.kafka.producer;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

/**
 * Producer with a send callback
 */
public class CallbackProducer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop101:9092,hadoop102:9092,hadoop103:9092");
        properties.put(ProducerConfig.ACKS_CONFIG, "all");
        // properties.put(ProducerConfig.RETRIES_CONFIG, 1); // number of retries
        // properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384); // batch size in bytes
        // properties.put(ProducerConfig.LINGER_MS_CONFIG, 100); // max wait time in ms
        // properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432); // RecordAccumulator buffer size in bytes
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("test1", i + "", "messages" + i), (metadata, exception) -> {
                if (exception == null) {
                    System.out.println("success topic:" + metadata.topic() + " partition:" + metadata.partition() + " offset:" + metadata.offset());
                } else {
                    exception.printStackTrace();
                }
            });
        }
        producer.close();
    }
}
1.1.3 Synchronous send API
Synchronous sending means that after a message is sent, the current thread blocks until the ack is returned.
Since the send method returns a Future object, we can achieve the synchronous effect by simply calling the Future's get method.
package com.jackyan.kafka.producer;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

/**
 * Synchronous send: send() returns a Future, and calling get() on it
 * blocks the main thread until the ack comes back.
 */
public class SyncProducer {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop101:9092,hadoop102:9092,hadoop103:9092");
        properties.put(ProducerConfig.ACKS_CONFIG, "all");
        // properties.put(ProducerConfig.RETRIES_CONFIG, 1); // number of retries
        // properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384); // batch size in bytes
        properties.put(ProducerConfig.LINGER_MS_CONFIG, 1000); // max wait time in ms
        // properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432); // RecordAccumulator buffer size in bytes
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("test1", i + "", "messages" + i), (metadata, exception) -> {
                if (exception == null) {
                    System.out.println("success topic:" + metadata.topic() + " partition:" + metadata.partition() + " offset:" + metadata.offset());
                } else {
                    exception.printStackTrace();
                }
            }).get(); // block until the ack for this record returns
        }
        producer.close();
    }
}
1.2 Consumer API
Reliability is easy to guarantee on the consuming side, because data is persisted in Kafka, so there is no need to worry about data loss.
However, a consumer may fail mid-consumption (power loss, a crash, and so on), and after recovering it needs to resume from the position it had reached before the failure. The consumer therefore has to keep a running record of the offset it has consumed up to, so that it can continue from there after recovery.
Maintaining the offset is thus an issue that must be considered whenever a Consumer consumes data.
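As a minimal sketch of what resuming from a recorded offset looks like at the API level (the topic "test1" and partition 0 are placeholders, and the consumer is assumed to be configured like the examples below):

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import java.util.Collections;

static void resumeFromCommitted(KafkaConsumer<String, String> consumer) {
    TopicPartition tp = new TopicPartition("test1", 0); // placeholder topic/partition
    consumer.assign(Collections.singletonList(tp));
    // read back the offset last committed for this partition, if any
    OffsetAndMetadata committed = consumer.committed(Collections.singleton(tp)).get(tp);
    if (committed != null) {
        consumer.seek(tp, committed.offset()); // continue right after the last committed record
    }
}

With subscribe() and a group id, Kafka performs this positioning automatically on startup and rebalance; the manual assign/seek form above is shown only to make the mechanism visible.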
1.2.1 Manual offset commits
1) Import the dependency
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.6.2</version>
</dependency>
2) Write the code
Classes needed:
KafkaConsumer: create a consumer object, used to consume data
ConsumerConfig: provides the full set of configuration parameter names
ConsumerRecord: each record received is wrapped in a ConsumerRecord object
package com.jackyan.kafka.consumer;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class CustomConsumer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop101:9092,hadoop102:9092,hadoop103:9092");
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "test");
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); // disable auto-commit
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Arrays.asList("test1", "test2"));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.format("offset=%d, key=%s, value=%s\n", record.offset(), record.key(), record.value());
            }
            consumer.commitSync(); // synchronous commit
        }
    }
}
3) Code analysis:
There are two ways to commit offsets manually: commitSync (synchronous commit) and commitAsync (asynchronous commit). Both commit the highest offset of the batch returned by the current poll. The difference is that commitSync retries on failure until the commit succeeds (though it can still ultimately fail for unrecoverable reasons), whereas commitAsync has no retry mechanism, so the commit may fail.
Both commitSync and commitAsync can still cause records to be consumed more than once: if the consumer fails after processing a batch but before the commit takes effect, those records are re-delivered on restart.
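For completeness, a minimal sketch of commitAsync with a completion callback, using the consumer from the example above (the error handling here is illustrative):

consumer.commitAsync((offsets, exception) -> {
    if (exception != null) {
        // commitAsync does not retry: log the failure and let a later commit advance the offset
        System.err.println("offset commit failed for " + offsets + ": " + exception);
    }
});

A common pattern is commitAsync inside the poll loop for throughput, with a final commitSync on shutdown for certainty.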
1.2.2 Automatic offset commits
To let us focus on our own business logic, Kafka also provides automatic offset committing.
The relevant parameters are:
enable.auto.commit: whether to enable automatic offset commits
auto.commit.interval.ms: the interval between automatic offset commits
package com.jackyan.kafka.consumer;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class AutoCommitConsumer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop101:9092,hadoop102:9092,hadoop103:9092");
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "test");
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true); // enable auto-commit
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 1000); // auto-commit interval in ms
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Arrays.asList("test1", "test2"));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.format("offset=%d, key=%s, value=%s\n", record.offset(), record.key(), record.value());
            }
            // with auto-commit enabled there is nothing to commit here; to switch to
            // manual commits, set ENABLE_AUTO_COMMIT_CONFIG to false and call one of:
            // consumer.commitSync();
            // consumer.commitAsync();
        }
    }
}
1.3 Custom Interceptors
1.3.1 How interceptors work
The producer interceptor was introduced in Kafka 0.10 and is mainly used to implement custom control logic on the clients side.
For the producer, interceptors give the user a chance to apply custom processing to messages before they are sent and before the producer's callback logic runs, such as modifying the message. The producer also lets the user configure multiple interceptors that act on the same message in order, forming an interceptor chain. The interface to implement is org.apache.kafka.clients.producer.ProducerInterceptor, which defines the following methods:
(1) configure(configs)
Called to read configuration and initialize state.
(2) onSend(ProducerRecord):
This method is wrapped into KafkaProducer.send, so it runs on the user's main thread. The producer guarantees it is called before the message is serialized and before its partition is computed. The user may do anything to the message here, but it is best not to change the topic or partition the message belongs to, or the target-partition computation will be affected.
(3) onAcknowledgement(RecordMetadata, Exception):
Called after the message has been successfully sent from the RecordAccumulator to the Kafka broker, or when the send fails, and normally before the producer's callback logic is triggered. onAcknowledgement runs on the producer's I/O thread, so do not put heavy logic into it, or the producer's send throughput will suffer.
(4) close:
Closes the interceptor; mainly used for resource cleanup.
As noted above, interceptors may run on multiple threads, so the implementation must take care of its own thread safety. Also, if multiple interceptors are configured, the producer calls them in the specified order, and it merely catches any exception an interceptor throws and writes it to the error log rather than propagating it upward. Keep this in mind when using interceptors.
1.3.2 Interceptor case study
1) Requirement:
Build a simple interceptor chain of two interceptors. The first prepends a timestamp to the front of the message value before the message is sent; the second updates the count of successful or failed sends after each message completes.
2) Implementation
(1) Timestamp interceptor
package com.jackyan.kafka.interceptor;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Map;

public class TimeInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // prepend a timestamp to the message value
        return new ProducerRecord<>(record.topic(), record.partition(), record.timestamp(), record.key(),
                System.currentTimeMillis() + "-" + record.value());
    }

    @Override
    public void onAcknowledgement(RecordMetadata recordMetadata, Exception e) {
    }

    @Override
    public void close() {
    }

    @Override
    public void configure(Map<String, ?> map) {
    }
}
(2) Counter interceptor: count successful and failed sends, and print both counters when the producer is closed
package com.jackyan.kafka.interceptor;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Map;

public class CounterInterceptor implements ProducerInterceptor<String, String> {

    // updated from the producer's I/O thread (see the onAcknowledgement note in 1.3.1)
    private int successCount = 0;
    private int errorCount = 0;

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        return record;
    }

    @Override
    public void onAcknowledgement(RecordMetadata recordMetadata, Exception e) {
        // count successes and failures
        if (e == null) {
            successCount++;
        } else {
            errorCount++;
        }
    }

    @Override
    public void close() {
        // print the results
        System.out.println("Successful sent: " + successCount);
        System.out.println("Failed sent: " + errorCount);
    }

    @Override
    public void configure(Map<String, ?> map) {
    }
}
3) Write the producer
package com.jackyan.kafka.producer;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

public class InterceptorProducer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop101:9092,hadoop102:9092,hadoop103:9092");
        properties.put(ProducerConfig.ACKS_CONFIG, "all");
        // properties.put(ProducerConfig.RETRIES_CONFIG, 1); // number of retries
        // properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384); // batch size in bytes
        // properties.put(ProducerConfig.LINGER_MS_CONFIG, 100); // max wait time in ms
        // properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432); // RecordAccumulator buffer size in bytes
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        // build the interceptor chain; interceptors are invoked in list order
        List<String> interceptors = new ArrayList<>();
        interceptors.add("com.jackyan.kafka.interceptor.TimeInterceptor");
        interceptors.add("com.jackyan.kafka.interceptor.CounterInterceptor");
        properties.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, interceptors);

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("test1", i + "", "messages" + i));
        }
        // the producer must be closed; otherwise the interceptors' close() is never called
        producer.close();
    }
}
4) Test
(1) Start a consumer on Kafka, then run the Java client program. The consumer prints the timestamped messages; the two counter lines are printed on the producer console when producer.close() runs:
1638018839804-messages0
1638018840094-messages1
1638018840095-messages2
1638018840095-messages3
1638018840095-messages4
1638018840095-messages5
1638018840095-messages6
1638018840095-messages7
1638018840095-messages8
1638018840095-messages9
Successful sent: 10
Failed sent: 0