References:
http://www.cnblogs.com/fxjwind/p/3794255.html and the official Kafka documentation.
Frankly, I find Kafka's official documentation pretty messy. ~~~~(>_<)~~~~ I had tried to learn it several times before and gave up because of the docs. But since my company uses Kafka fairly heavily, it is worth understanding, and part of our code still uses the old consumer API, so let's start with that. The old API comes in two flavors, high level and low level. The difference is that the former is simpler: you don't have to manage offsets yourself, because it automatically reads the consumer group's last offset from ZooKeeper.
Notes on multiple partitions and multiple consumers:
1. If there are more consumers than partitions, the extra consumers are wasted: Kafka's design does not allow concurrent consumption within a single partition, so the number of consumers should not exceed the number of partitions.
2. If there are fewer consumers than partitions, one consumer will be assigned multiple partitions. The key here is to balance the consumer count against the partition count, otherwise data will be pulled unevenly across partitions.
Ideally the partition count is an integer multiple of the consumer count, which is why the partition count matters: picking something like 24 makes it easy to choose a consumer count.
3. If a consumer reads from multiple partitions, there is no ordering guarantee across them. Kafka only guarantees order within a single partition; across partitions, the order depends on how you read.
4. Adding or removing consumers, brokers, or partitions triggers a rebalance, after which the partitions assigned to each consumer may change.
5. With the high-level API, reads block when no data is available.
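The divisibility point above can be illustrated with a quick sketch of range-style assignment, in the spirit of the old consumer's default range strategy (the class name and counts below are my own, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

// Range-style assignment sketch: the first (numPartitions % numConsumers)
// consumers each get one extra partition, which is why a partition count
// that is a multiple of the consumer count gives an even spread.
public class RangeAssignmentSketch {
    static List<List<Integer>> assign(int numPartitions, int numConsumers) {
        List<List<Integer>> result = new ArrayList<>();
        int base = numPartitions / numConsumers;
        int extra = numPartitions % numConsumers;
        int next = 0;
        for (int c = 0; c < numConsumers; c++) {
            int count = base + (c < extra ? 1 : 0); // uneven when not divisible
            List<Integer> mine = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                mine.add(next++);
            }
            result.add(mine);
        }
        return result;
    }

    public static void main(String[] args) {
        // 24 partitions divide evenly among 2, 3, 4, 6, 8 or 12 consumers
        System.out.println(assign(24, 6)); // every consumer gets 4 partitions
        // 5 consumers over 24 partitions: four consumers get 5, one gets 4
        System.out.println(assign(24, 5));
    }
}
```

With 24 partitions and 5 consumers, one consumer does 20% less work than the others, and with more consumers than partitions some consumers receive nothing at all, which is exactly notes 1 and 2.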
Straight to the implementation, verified to run on my local machine:
import com.example.file.demo.component.StreamProcessor;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
@Configuration
public class KafkaConfig {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfig.class);

    @Autowired
    private StreamProcessor streamProcessor;

    @Bean(destroyMethod = "shutdown")
    public ConsumerConnector consumerConnector() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2000");
        // The initial offset is invalid by default; this setting controls how the
        // offset is reset when it is invalid. The default is largest (the latest),
        // so without this setting you will not see data produced before this
        // consumer group first connected.
        props.put("auto.offset.reset", "smallest");
        props.put("group.id", "test");
        // Old-consumer property name (the new consumer calls it
        // enable.auto.commit); it defaults to true.
        props.put("auto.commit.enable", "true");
        props.put("zookeeper.session.timeout.ms", "400");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        ConsumerConfig consumerConfig = new ConsumerConfig(props);
        return kafka.consumer.Consumer.createJavaConsumerConnector(consumerConfig);
    }

    @Bean
    public CommandLineRunner startConsumeRunner(ConsumerConnector consumerConnector) {
        return new CommandLineRunner() {
            @Override
            public void run(String... strings) throws Exception {
                Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
                // Which topic to read and how many threads (streams) to use for it
                topicCountMap.put("test", 1);
                Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumerConnector
                        .createMessageStreams(topicCountMap);
                List<KafkaStream<byte[], byte[]>> streams = consumerMap.get("test");
                streams.forEach(e -> streamProcessor.Process(e));
            }
        };
    }
}
This uses CommandLineRunner to kick off consumption at startup.
The consumer then processes the data it reads asynchronously:
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.message.MessageAndMetadata;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;
import java.nio.charset.StandardCharsets;
@Component
public class StreamProcessor {

    private static final Logger LOGGER = LoggerFactory.getLogger(StreamProcessor.class);

    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @Async
    public void Process(KafkaStream<byte[], byte[]> stream) {
        LOGGER.info("start to process kafka stream, thread: {}", Thread.currentThread().getName());
        ConsumerIterator<byte[], byte[]> consumerIterator = stream.iterator();
        // hasNext() blocks while there are no messages
        while (consumerIterator.hasNext()) {
            MessageAndMetadata<byte[], byte[]> msgData = consumerIterator.next();
            byte[] data = msgData.message();
            String body = new String(data, StandardCharsets.UTF_8);
            LOGGER.info("receive msg: {}", body);
        }
    }
}
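One caveat: @Async only takes effect if async processing is enabled in the Spring context; otherwise Process() runs on the caller's thread and the blocking iterator would stall the CommandLineRunner. A minimal sketch (the class name AsyncConfig is my own; any @Configuration class in the app would do):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;

// Without @EnableAsync somewhere in the context, the @Async annotation on
// StreamProcessor.Process() is silently ignored.
@Configuration
@EnableAsync
public class AsyncConfig {
}
```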
That is the high-level consumer implementation. The low-level API requires you to manage partitions, offsets, and so on yourself; I won't cover it here.
Characteristics of the high-level API:
1) Data that has been consumed cannot be consumed again; to re-consume it, you have to switch to another group.
2) To record the consumption position, the offset of each TopicAndPartition must be committed. Two commit targets are supported:
① commit to ZooKeeper (frequent ZooKeeper operations are fairly inefficient)
② commit to the Kafka cluster itself
Note: in early Kafka versions, offsets were stored in ZooKeeper by default, but tracking consumer/group progress this way required frequent ZooKeeper reads and writes, and frequent reads/writes through the zkclient API are inherently quite inefficient. Newer Kafka versions therefore changed this: offsets are stored by default in an internal topic named __consumer_offsets inside the Kafka cluster.
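To my understanding, the old high-level consumer can opt into Kafka-backed offset storage via configuration (available from Kafka 0.8.2 onward); a sketch of the relevant properties, with the connection values being illustrative:

```java
import java.util.Properties;

// Sketch: direct the old high-level consumer's offset commits to the
// __consumer_offsets topic in Kafka instead of ZooKeeper.
public class OffsetStorageConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "test");
        // Commit offsets to Kafka rather than ZooKeeper
        props.put("offsets.storage", "kafka");
        // During migration, also commit offsets to ZooKeeper so tooling that
        // still reads ZooKeeper sees the group's progress; disable once done
        props.put("dual.commit.enabled", "true");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps());
    }
}
```

These properties are then passed to ConsumerConfig exactly as in the KafkaConfig example above.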