How does a Kafka consumer rewind and re-read messages from a given offset? The following document provides a demo:
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
Code snippet:
SimpleConsumer consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
FetchRequest req = new FetchRequestBuilder()
        .clientId(clientName)
        // Note: this fetchSize of 100000 might need to be increased if large batches are written to Kafka
        .addFetch(a_topic, a_partition, readOffset, 100000)
        .build();
FetchResponse fetchResponse = consumer.fetch(req);
The client builds a FetchRequest and starts fetching messages from readOffset. SimpleConsumer is deprecated and slated for removal; the recommended replacement is
org.apache.kafka.clients.consumer.KafkaConsumer#seek
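With the new consumer API, rewinding is typically done by combining assign() with seek(). A minimal sketch (the broker address, topic name, partition, and target offset below are illustrative assumptions, and a reachable broker is required to actually run it):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "DemoConsumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("demo-topic", 0); // hypothetical topic
            // assign() skips the group rebalance protocol, so seek() takes effect immediately
            consumer.assign(Collections.singleton(tp));
            consumer.seek(tp, 42L); // rewind the fetch position to offset 42 (illustrative)
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}
```

Note that seek() only moves the consumer's in-memory fetch position; the committed offset stored on the broker changes only when the consumer later commits.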
An alternative approach is to overwrite the group's committed offset directly with commitSync:
@Test
public void testOffset() throws InterruptedException {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaProperties.KAFKA_SERVER_LIST);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "DemoConsumer");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.IntegerDeserializer");
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(props);
    TopicPartition tp = new TopicPartition(KafkaProperties.TOPIC, 0);
    consumer.assign(Collections.singleton(tp));

    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    // Reset the committed offset for this partition to 0
    offsets.put(tp, new OffsetAndMetadata(0, "reset"));
    consumer.commitSync(offsets);
    LockSupport.park();
}
Resetting offsets from the command line: https://gist.github.com/marwei/cd40657c481f94ebe273ecc16601674b
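The command-line route uses the kafka-consumer-groups.sh tool that ships with Kafka. A typical invocation looks like the sketch below (the bootstrap server, group, and topic are placeholders; the consumer group must be inactive for the reset to apply):

```shell
# Dry run: show what each partition's offset would be reset to
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group DemoConsumer --topic demo-topic \
  --reset-offsets --to-earliest --dry-run

# Apply the reset; other strategies include --to-offset <n>,
# --to-datetime <ISO8601>, and --shift-by <n>
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group DemoConsumer --topic demo-topic \
  --reset-offsets --to-earliest --execute
```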