Kafka offset configuration and manual commit
- When a Kafka consumer instance first consumes a topic for which its group has no committed offset, the default reset policy is `latest`:
- latest - automatically reset the offset to the latest offset
- earliest - automatically reset the offset to the earliest offset
- none - throw an exception to the consumer if no previous offset is found for the consumer's group
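The three policies can be illustrated with a plain-Java sketch. This is a hypothetical helper, not the Kafka client's actual code: it decides the starting position from an optional committed offset and the partition's earliest/latest offsets.

```java
import java.util.OptionalLong;

// Hypothetical illustration of auto.offset.reset semantics; not Kafka client code.
public class ResetPolicySketch {
    // beginning/end stand in for the partition's earliest and latest offsets
    public static long startingOffset(String policy, OptionalLong committed,
                                      long beginning, long end) {
        if (committed.isPresent()) {
            return committed.getAsLong();      // a committed offset always wins
        }
        switch (policy) {
            case "earliest": return beginning; // reset to the earliest offset
            case "latest":   return end;       // reset to the latest offset
            case "none":     throw new IllegalStateException(
                    "no previous offset found for this consumer group");
            default:         throw new IllegalArgumentException(policy);
        }
    }
}
```

Note that the reset policy only applies when there is no committed offset; once the group has committed, the consumer resumes from the committed position regardless of the policy.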
1. Automatic offset commit
```java
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class KafkaConsumerOffset03 {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "ip:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "g3");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // commit offsets automatically, every 10 seconds
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 10000);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("topic03"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.topic() + "\t" + record.partition() + ","
                        + record.offset() + "\t" + record.key() + " " + record.value());
            }
        }
    }
}
```
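With auto-commit enabled, the client commits the current position from inside `poll()` once `auto.commit.interval.ms` has elapsed since the last commit. The cadence can be sketched with a hypothetical plain-Java simulation (not the Kafka client's internals):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simulation of the auto-commit cadence; the real client commits
// from poll() once auto.commit.interval.ms has elapsed since the last commit.
public class AutoCommitSketch {
    // Given the timestamp (ms) of each poll and the consumed position at that
    // poll, return the positions that would be auto-committed.
    public static List<Long> committedOffsets(long[] pollTimesMs, long[] positions,
                                              long intervalMs) {
        List<Long> committed = new ArrayList<>();
        long lastCommit = pollTimesMs[0];
        for (int i = 0; i < pollTimesMs.length; i++) {
            if (pollTimesMs[i] - lastCommit >= intervalMs) {
                committed.add(positions[i]);   // commit the current position
                lastCommit = pollTimesMs[i];
            }
        }
        return committed;
    }
}
```

The point of the sketch: records consumed between two commit ticks are not individually committed, so a crash between ticks means they are re-delivered after restart.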
2. Manual offset commit
- When the consumer manages offsets itself, disable auto-commit. Note that the committed offset must always be the offset of the record just consumed plus 1, because the committed offset is the position from which Kafka will fetch data on the consumer's next poll.
```java
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class KafkaConsumerOffset04 {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "ip:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "g4");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // disable auto-commit: offsets are committed manually below
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("topic03"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            if (!records.isEmpty()) {
                Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (ConsumerRecord<String, String> record : records) {
                    String topic = record.topic();
                    int partition = record.partition();
                    long offset = record.offset();
                    // commit offset + 1: the committed offset is the NEXT position to fetch
                    offsets.put(new TopicPartition(topic, partition), new OffsetAndMetadata(offset + 1));
                    System.out.println(topic + "\t" + partition + "," + offset + "\t"
                            + record.key() + " " + record.value() + " " + record.timestamp());
                }
                // commit once per batch, not once per record
                consumer.commitAsync(offsets, (map, e) ->
                        System.out.println("offsets:" + map + "\t exception:" + e));
            }
        }
    }
}
```
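The offset + 1 rule above can be checked with a small plain-Java sketch of the bookkeeping, no broker required (hypothetical helper class, not part of the Kafka API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical bookkeeping showing why the committed offset is offset + 1:
// the commit records the NEXT position to fetch, not the last one consumed.
public class ManualCommitSketch {
    // key = "topic-partition", value = next offset to fetch
    private final Map<String, Long> committed = new HashMap<>();

    public void consume(String topic, int partition, long offset) {
        // after consuming the record at `offset`, the next fetch starts at offset + 1
        committed.put(topic + "-" + partition, offset + 1);
    }

    public long nextFetchPosition(String topic, int partition) {
        return committed.getOrDefault(topic + "-" + partition, 0L);
    }
}
```

If the last consumed offset itself were committed instead, that record would be fetched and processed a second time after a restart.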