In the previous section we finished setting up the Kafka and ZooKeeper clusters. In this chapter we integrate Kafka into the code and implement the business logic that runs once the consumer picks up a message from Kafka.
1. Integrate Kafka into Spring Boot
Add the Kafka client dependency (the 2.9.2 in the artifact id is the Scala version this build was compiled against):
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.9.2</artifactId>
    <version>0.8.1</version>
</dependency>
Register a custom listener in the startup class; it runs our initialization logic when the servlet context comes up:
@Bean
public ServletListenerRegistrationBean<InitListener> servletListenerRegistrationBean() {
    ServletListenerRegistrationBean<InitListener> registrationBean =
            new ServletListenerRegistrationBean<InitListener>();
    registrationBean.setListener(new InitListener());
    return registrationBean;
}
The listener:
/**
 * Listener that runs initialization when the system starts
 * @author Administrator
 */
public class InitListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        // Expose the Spring ApplicationContext through our static holder,
        // then start the Kafka consumer on a background thread
        ServletContext sc = sce.getServletContext();
        ApplicationContext context = WebApplicationContextUtils.getWebApplicationContext(sc);
        SpringContext.setApplicationContext(context);
        new Thread(new KafkaConsumer("test3")).start();
    }

    public void contextDestroyed(ServletContextEvent sce) {
    }

}
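One caveat: the listener above starts a bare Thread and never stops it, so contextDestroyed leaves the consumer running until the JVM dies. A minimal sketch of the same start/stop pattern using an ExecutorService (the class and method names here are illustrative, not from the project):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative lifecycle holder: contextInitialized would call start(),
// contextDestroyed would call stop()
class ManagedConsumerLifecycle {

    // Single-threaded executor that owns the long-running consumer task
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Submit the consumer task (e.g. new KafkaConsumer("test3"))
    public void start(Runnable consumerTask) {
        executor.submit(consumerTask);
    }

    // Interrupt the consumer thread and wait briefly for it to exit
    public void stop() {
        executor.shutdownNow();
        try {
            executor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public boolean isStopped() {
        return executor.isShutdown();
    }
}
```

The consumer loop itself would also need to check Thread.interrupted() for this shutdown to take effect; the sketch only shows the lifecycle side.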
/**
 * Static holder for the Spring application context
 * @author Administrator
 */
public class SpringContext {

    private static ApplicationContext applicationContext;

    public static ApplicationContext getApplicationContext() {
        return applicationContext;
    }

    public static void setApplicationContext(ApplicationContext applicationContext) {
        SpringContext.applicationContext = applicationContext;
    }

}
/**
 * Kafka consumer: opens one stream for the given topic and hands each
 * stream to a message-processing thread
 * @author Administrator
 */
public class KafkaConsumer implements Runnable {

    private ConsumerConnector consumerConnector;
    private String topic;

    public KafkaConsumer(String topic) {
        this.consumerConnector = Consumer.createJavaConsumerConnector(
                createConsumerConfig());
        this.topic = topic;
    }

    @SuppressWarnings("rawtypes")
    public void run() {
        // Ask for one stream (one consumer thread) for our topic
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, 1);
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
                consumerConnector.createMessageStreams(topicCountMap);
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);
        // Process each stream on its own thread
        for (KafkaStream stream : streams) {
            new Thread(new KafkaMessageProcessor(stream)).start();
        }
    }

    /**
     * Create the Kafka ConsumerConfig
     * @return the consumer configuration
     */
    private static ConsumerConfig createConsumerConfig() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "192.168.1.51:2181,192.168.1.52:2181,192.168.1.53:2181");
        props.put("group.id", "cache-group");
        props.put("zookeeper.session.timeout.ms", "40000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        return new ConsumerConfig(props);
    }

}
2. Implement the business logic
(1) Two services send data-change messages: the product info service and the product shop info service. Each message carries the service name and a product id.
(2) When a message is received, pull the data from the corresponding service by product id. Here we simulate this step in a simplified way: the returned data is hard-coded, instead of actually writing and calling separate services.
(3) Product info: id, name, price, picture list, specification, after-sales info, color, size.
(4) Product shop info: a second dimension, used to simulate splitting cache data by dimension: id, shop name, shop level, shop good-comment rate.
(5) After pulling each piece of data, assemble it into a JSON string and store it both in ehcache and in the redis cache.
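The two payloads described in (3) and (4) can be carried by two plain DTO classes. A minimal sketch, where the field names follow the JSON keys used in the hard-coded data later in this section; the shapes are assumptions, the project's real ProductInfo/ShopInfo classes may differ:

```java
// Product info DTO: fields match the list in (3); shopId links a product to its shop
class ProductInfo {
    private Long id;
    private String name;
    private Double price;
    private String pictureList;   // comma-separated picture names, e.g. "a.jpg,b.jpg"
    private String specification;
    private String service;       // after-sales information
    private String color;
    private String size;
    private Long shopId;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Double getPrice() { return price; }
    public void setPrice(Double price) { this.price = price; }
    public String getPictureList() { return pictureList; }
    public void setPictureList(String pictureList) { this.pictureList = pictureList; }
    public String getSpecification() { return specification; }
    public void setSpecification(String specification) { this.specification = specification; }
    public String getService() { return service; }
    public void setService(String service) { this.service = service; }
    public String getColor() { return color; }
    public void setColor(String color) { this.color = color; }
    public String getSize() { return size; }
    public void setSize(String size) { this.size = size; }
    public Long getShopId() { return shopId; }
    public void setShopId(Long shopId) { this.shopId = shopId; }
}

// Shop info DTO: the second cache dimension from (4)
class ShopInfo {
    private Long id;
    private String name;
    private Integer level;
    private Double goodCommentRate;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Integer getLevel() { return level; }
    public void setLevel(Integer level) { this.level = level; }
    public Double getGoodCommentRate() { return goodCommentRate; }
    public void setGoodCommentRate(Double goodCommentRate) { this.goodCommentRate = goodCommentRate; }
}
```

Plain getter/setter beans like these are what fastjson's JSONObject.parseObject(json, Class) binds to by key name.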
@SuppressWarnings("rawtypes")
public class KafkaMessageProcessor implements Runnable {

    private KafkaStream kafkaStream;
    private CacheService cacheService;

    public KafkaMessageProcessor(KafkaStream kafkaStream) {
        this.kafkaStream = kafkaStream;
        this.cacheService = (CacheService) SpringContext.getApplicationContext()
                .getBean("cacheService");
    }

    @SuppressWarnings("unchecked")
    public void run() {
        ConsumerIterator<byte[], byte[]> it = kafkaStream.iterator();
        while (it.hasNext()) {
            String message = new String(it.next().message());
            // First, parse the message into a JSON object
            JSONObject messageJSONObject = JSONObject.parseObject(message);
            // Extract the identifier of the service this message came from
            String serviceId = messageJSONObject.getString("serviceId");
            // Dispatch by service id
            if ("productInfoService".equals(serviceId)) {
                processProductInfoChangeMessage(messageJSONObject);
            } else if ("shopInfoService".equals(serviceId)) {
                processShopInfoChangeMessage(messageJSONObject);
            }
        }
    }

    /**
     * Process a product info change message
     * @param messageJSONObject the parsed message
     */
    private void processProductInfoChangeMessage(JSONObject messageJSONObject) {
        // Extract the product id
        Long productId = messageJSONObject.getLong("productId");
        // Call the product info service's interface
        // Simulated with a hard-coded result, as if we had called getProductInfo?productId=1
        String productInfoJSON = "{\"id\": 1, \"name\": \"iphone7 phone\", \"price\": 5599, \"pictureList\":\"a.jpg,b.jpg\", \"specification\": \"iphone7 specification\", \"service\": \"iphone7 after-sales service\", \"color\": \"red,white,black\", \"size\": \"5.5\", \"shopId\": 1}";
        ProductInfo productInfo = JSONObject.parseObject(productInfoJSON, ProductInfo.class);
        // Write to the local ehcache first, then to the redis cache
        cacheService.saveProductInfo2LocalCache(productInfo);
        System.out.println("===================Product info just saved to local cache: " + cacheService.getProductInfoFromLocalCache(productId));
        cacheService.saveProductInfo2ReidsCache(productInfo);
    }

    /**
     * Process a shop info change message
     * @param messageJSONObject the parsed message
     */
    private void processShopInfoChangeMessage(JSONObject messageJSONObject) {
        // Extract the product id and shop id
        Long productId = messageJSONObject.getLong("productId");
        Long shopId = messageJSONObject.getLong("shopId");
        // Call the shop info service's interface
        // Simulated with a hard-coded result, as if we had called getShopInfo?shopId=1
        String shopInfoJSON = "{\"id\": 1, \"name\": \"Xiao Wang's phone shop\", \"level\": 5, \"goodCommentRate\":0.99}";
        ShopInfo shopInfo = JSONObject.parseObject(shopInfoJSON, ShopInfo.class);
        // Write to the local ehcache first, then to the redis cache
        cacheService.saveShopInfo2LocalCache(shopInfo);
        System.out.println("===================Shop info just saved to local cache: " + cacheService.getShopInfoFromLocalCache(shopId));
        cacheService.saveShopInfo2ReidsCache(shopInfo);
    }

}
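For reference, the messages the processor above expects look roughly like this (illustrative examples; the dispatch only requires serviceId plus productId, and shopId for the shop message):

```
{"serviceId": "productInfoService", "productId": 1}
{"serviceId": "shopInfoService", "productId": 1, "shopId": 1}
```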
3. Test the business logic
(1) Create a Kafka topic.
(2) Start a Kafka producer from the command line.
(3) Start the application; the consumer begins listening on the Kafka topic. (On Windows, if the brokers register themselves in ZooKeeper by hostname, map those hostnames to IPs in C:\Windows\System32\drivers\etc\hosts so the consumer can reach them.)
(4) From the producer, send two messages: one product info service message and one product shop info service message.
(5) Check that both messages are received, that the two data records are pulled (simulated), and that the data is written both to ehcache and to the redis cache.
(6) Observe ehcache through the printed log lines; verify redis by connecting to it manually and querying.
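Steps (1) and (2) can be done with the scripts that ship with Kafka 0.8.1, for example (adjust hosts and ports to your cluster; 9092 is assumed to be the broker port, and test3 is the topic name the consumer was started with):

```shell
# (1) Create the topic on the cluster
bin/kafka-topics.sh --create \
    --zookeeper 192.168.1.51:2181,192.168.1.52:2181,192.168.1.53:2181 \
    --replication-factor 1 --partitions 1 --topic test3

# (2) Start a console producer; each line you type is sent as one message
bin/kafka-console-producer.sh --broker-list 192.168.1.51:9092 --topic test3
```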