In my project I only wrote the code that reads data from Kafka, so that is what I am sharing here for readers who need it. As for writing data to Kafka, I will cover that once I have studied it; in the meantime there are plenty of good examples online.
The first method is more involved; the second is comparatively simple.
I am using a Spring Boot project.
Add the dependency:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.2.1</version>
</dependency>
package com.economic.system.aggregation.common.utils;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import org.springframework.boot.CommandLineRunner;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;

/**
 * Starts together with the application and connects to Kafka on startup.
 * @author yhy
 */
@Component // Registering the class as a component and implementing CommandLineRunner makes run() execute once the application has started
@Order(value = 1)
public class KafkaConfiguration implements CommandLineRunner {

    /** Consumer group id */
    private String GROUP_ID = "eosp";
    /** Topic to consume */
    // @Value("${kafka.topic}")
    private String TOPIC = "netcompany";
    /** Number of consumer threads; configure this from the topic's partition count, and keep it <= the number of partitions */
    private int TOPIC_COUNTMAP = 1;
    private ExecutorService executorService;

    @Override
    public void run(String... args) throws Exception {
        Properties properties = new Properties();
        // ZooKeeper ensemble; the 0.8 high-level consumer connects through ZooKeeper
        properties.put("zookeeper.connect",
                "hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181,hadoop5:2181");
        // Consumer group id; consumers in the same group share the topic's partitions,
        // so each message is delivered to only one consumer within the group
        properties.put("group.id", GROUP_ID);
        // properties.put("auto.offset.reset", "largest");
        ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
        // One entry per topic; the value is how many streams (consumer threads) to create
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(TOPIC, TOPIC_COUNTMAP);
        Map<String, List<KafkaStream<byte[], byte[]>>> messageStreams =
                consumer.createMessageStreams(topicCountMap);
        executorService = Executors.newFixedThreadPool(TOPIC_COUNTMAP);
        // Hand each stream to its own worker thread
        List<KafkaStream<byte[], byte[]>> streams = messageStreams.get(TOPIC);
        for (KafkaStream<byte[], byte[]> stream : streams) {
            executorService.submit(new ConsumerThread(stream));
        }
        // For a quick test you can iterate a single stream inline instead of using
        // the thread pool (comment out the for loop above first):
        // ConsumerIterator<byte[], byte[]> iterator = stream.iterator();
        // while (iterator.hasNext()) {
        //     String message = new String(iterator.next().message());
        //     // Note: without this commit the consumer may well re-read messages from the beginning
        //     consumer.commitOffsets();
        //     System.out.println(message);
        // }
    }
}
The Kafka consumer thread class:
package com.economic.system.aggregation.common.utils;

import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.message.MessageAndMetadata;

/**
 * Kafka consumer worker thread.
 * @author yhy
 */
public class ConsumerThread implements Runnable {

    private KafkaStream<byte[], byte[]> stream;

    /**
     * @param stream the stream this worker consumes
     */
    public ConsumerThread(KafkaStream<byte[], byte[]> stream) {
        this.stream = stream;
    }

    @Override
    public void run() {
        ConsumerIterator<byte[], byte[]> iterator = this.stream.iterator();
        // hasNext() blocks until the next message arrives
        while (iterator.hasNext()) {
            MessageAndMetadata<byte[], byte[]> message = iterator.next();
            // int partition = message.partition();
            // String topic = message.topic();
            String messageT = new String(message.message());
            System.out.println("Received message: " + messageT);
        }
    }
}
With this in place you can receive data from Kafka.
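One detail the class above leaves open: run() starts the worker threads but never stops them or the pool. The one-thread-per-stream pattern and an orderly shutdown can be sketched with the JDK alone; here a BlockingQueue stands in for a KafkaStream, and the class and method names (ShutdownSketch, worker, runDemo) plus the "POISON" sentinel are my own illustration, not part of the Kafka API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ShutdownSketch {

    /** One worker per stream; the queue stands in for a KafkaStream. */
    static Runnable worker(BlockingQueue<String> stream, List<String> sink) {
        return () -> {
            try {
                while (true) {
                    String msg = stream.take();      // blocks, like ConsumerIterator.hasNext()
                    if ("POISON".equals(msg)) {      // explicit shutdown signal
                        break;
                    }
                    sink.add(msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
    }

    /** Feeds two messages through two workers, shuts the pool down, returns the count processed. */
    static int runDemo() throws Exception {
        int streamCount = 2;                          // plays the role of TOPIC_COUNTMAP
        ExecutorService pool = Executors.newFixedThreadPool(streamCount);
        List<BlockingQueue<String>> queues = new ArrayList<>();
        List<String> sink = new CopyOnWriteArrayList<>();
        for (int i = 0; i < streamCount; i++) {
            BlockingQueue<String> queue = new LinkedBlockingQueue<>();
            queues.add(queue);
            pool.submit(worker(queue, sink));         // mirrors executorService.submit(new ConsumerThread(stream))
        }
        queues.get(0).put("message-1");
        queues.get(1).put("message-2");
        // Orderly shutdown: signal every worker, then drain the pool
        for (BlockingQueue<String> queue : queues) {
            queue.put("POISON");
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return sink.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("processed " + runDemo() + " messages");
    }
}
```

In the real consumer the equivalent of the sentinel is calling consumer.shutdown() on the ConsumerConnector, which unblocks the iterators, followed by executorService.shutdown().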
The second method: a simple Spring Boot configuration.
The Kafka listener service looks like this:
package com.economic.system.aggregation.service;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.economic.system.aggregation.service.observer.internal.CleanDataSubject;

/**
 * Kafka message consumer.
 *
 * @author
 */
@Service("KafkaConsumer")
public class KafkaConsumerListener {

    /** Logger */
    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConsumerListener.class);

    @Autowired
    private CleanDataSubject cleanDataSubject;

    @KafkaListener(topics = {"netcompany"})
    public void processMessage(String content) {
        try {
            this.handle(content);
        } catch (Exception e) {
            LOGGER.error(e.getMessage(), e);
        }
    }

    private void handle(String content) {
        // content is the payload of each message as it arrives
    }
}
You also need to add the @EnableKafka annotation to the Spring Boot startup class, applicationStart.
And include this dependency:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>1.1.1.RELEASE</version>
</dependency>
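The listener still needs to know where the brokers are. With Spring Boot's Kafka auto-configuration this usually goes in application.properties; a minimal sketch, assuming a recent Boot version with spring-kafka on the classpath. Note that spring-kafka connects to the brokers directly rather than to ZooKeeper, and the broker port 9092 here is the Kafka default, an assumption on my part (the host names follow the cluster used in the first method):

```properties
# Kafka brokers (not the ZooKeeper ensemble) - port 9092 is an assumed default
spring.kafka.bootstrap-servers=hadoop1:9092,hadoop2:9092,hadoop3:9092
# Consumer group, matching the GROUP_ID used in the first method
spring.kafka.consumer.group-id=eosp
# Where to start reading when no committed offset exists
spring.kafka.consumer.auto-offset-reset=latest
```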
Much simpler, isn't it?