A consumer pulls 50 records per poll and commits the offset of each record individually
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.*;

public class OffsetCommitExample {
    private static final String TOPIC = "your_topic";
    private static final String GROUP_ID = "your_group_id";
    private static final String BOOTSTRAP_SERVERS = "your_bootstrap_servers";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP_ID);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Cap each poll at 50 records and disable auto-commit,
        // so offsets only advance when we commit them manually
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(TOPIC));
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // Process the message
                    processMessage(record);
                    // Commit the offset of this single message; the committed
                    // value is the offset of the NEXT record to read, hence + 1
                    Map<TopicPartition, OffsetAndMetadata> offsets = Collections.singletonMap(
                            new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1));
                    consumer.commitSync(offsets);
                }
            }
        } finally {
            consumer.close();
        }
    }

    private static void processMessage(ConsumerRecord<String, String> record) {
        // Message-processing logic
        System.out.println("Processing message: " + record.value());
    }
}
We create a Kafka consumer and subscribe to the specified topic. In the main loop, poll() pulls a batch of messages from the topic, and each record is processed in turn. After processing a message, we call commitSync() with just that record's partition and offset (plus one, since the committed value is the next offset to read). This way, a consumer that restarts or is rebalanced will not reprocess messages whose offsets have already been committed.
Committing synchronously after every single message adds a blocking round trip to the broker per record, which can hurt throughput significantly, so this trade-off between durability and performance should be weighed and measured against actual requirements.
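A common middle ground is to commit once per poll batch, using the highest offset seen (plus one) for each partition. The core of that batching logic can be sketched without a broker; the RecordMeta type and helper below are hypothetical stand-ins for Kafka's ConsumerRecord and TopicPartition, used only to illustrate the offset bookkeeping:

```java
import java.util.*;

public class BatchCommitSketch {
    // Hypothetical stand-in for a ConsumerRecord's coordinates
    record RecordMeta(String topic, int partition, long offset) {}

    // For each "topic-partition" key, keep the highest offset seen + 1:
    // the position that would be passed to a single commitSync() call
    static Map<String, Long> offsetsToCommit(List<RecordMeta> batch) {
        Map<String, Long> commits = new HashMap<>();
        for (RecordMeta r : batch) {
            String key = r.topic() + "-" + r.partition();
            commits.merge(key, r.offset() + 1, Math::max);
        }
        return commits;
    }

    public static void main(String[] args) {
        List<RecordMeta> batch = List.of(
                new RecordMeta("t", 0, 10),
                new RecordMeta("t", 0, 11),
                new RecordMeta("t", 1, 5));
        // One commit map for the whole batch instead of one commit per record
        System.out.println(offsetsToCommit(batch)); // {t-0=12, t-1=6}
    }
}
```

With this approach a 50-record poll costs one commitSync() round trip instead of 50, at the price of reprocessing up to one batch of messages after a crash.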