Preface
Kafka was originally developed at LinkedIn. It is a distributed, partitioned, replicated, multi-subscriber commit-log system coordinated through ZooKeeper (it can also be used as an MQ system), and is commonly used for web/nginx logs, access logs, messaging services, and so on. LinkedIn open-sourced it through the Apache Foundation, where it entered the incubator in 2011 and graduated to a top-level project in 2012.
Its main application scenarios are log collection systems and messaging systems.
Kafka's main design goals are:
- Provide message persistence in O(1) time, guaranteeing constant-time access performance even for terabyte-scale data.
- High throughput: support transferring 100K messages per second on a single, inexpensive commodity machine.
- Support message partitioning across Kafka servers and distributed consumption, while guaranteeing ordered delivery within each partition.
- Support both offline and real-time data processing.
- Scale out: support online horizontal scaling.
Step 1: Prepare the environment
You need to download and install both Kafka and ZooKeeper.
Both can be downloaded from the official Apache sites (kafka.apache.org and zookeeper.apache.org); this post uses zookeeper-3.4.14 and kafka_2.11-2.2.1.
A few things to watch during installation:
1. Install ZooKeeper first.
1.1 Unzip the ZooKeeper archive you downloaded.
1.2 Open zookeeper-3.4.14\conf and rename zoo_sample.cfg to zoo.cfg.
1.3 Open zoo.cfg in a text editor.
1.4 Change the value of dataDir to "./zookeeper-3.4.14/data".
1.5 Add the following environment variables:
ZOOKEEPER_HOME: (your ZooKeeper directory)
Path: append ";%ZOOKEEPER_HOME%\bin;" to the existing value
1.6 Run ZooKeeper: open a cmd window and execute zkserver.
1.7 Leave this cmd window open.
2. Next, install Kafka.
2.1 Unzip the Kafka archive you downloaded.
2.2 Open kafka_2.11-2.2.1\config.
2.3 Open server.properties in a text editor.
2.4 Change the value of log.dirs to "./logs".
2.5 Open a new cmd window.
2.6 Change into the Kafka directory: (your Kafka directory)
2.7 Run: .\bin\windows\kafka-server-start.bat .\config\server.properties
2.8 Leave this cmd window open as well.
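With ZooKeeper and the broker both running, it is worth sanity-checking the installation from the command line before writing any Java. As a sketch, the commands below (run from the Kafka directory in a third cmd window) create a topic named `test` — the same topic the code later in this post sends to — and then list the broker's topics; the single-partition, no-replication settings are just minimal choices for a one-machine setup:

```shell
:: Create the "test" topic with one partition and no replication
.\bin\windows\kafka-topics.bat --create --zookeeper 127.0.0.1:2181 ^
    --replication-factor 1 --partitions 1 --topic test

:: Verify the topic now exists; "test" should appear in the output
.\bin\windows\kafka-topics.bat --list --zookeeper 127.0.0.1:2181
```

If topic auto-creation is enabled in server.properties (the default), this step is optional, but creating the topic explicitly makes partition and replication settings visible.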
Step 2: Coding
1. Add the pom dependency to the project (the version is managed by the Spring Boot parent):
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
2. Add the configuration to the project's application.properties:
#=============== producer =======================
# Only one local broker was started above, so a single address is enough
spring.kafka.producer.bootstrap-servers=127.0.0.1:9092
spring.kafka.producer.retries=1
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.properties.max.request.size=2097152
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
#=============== consumer =======================
spring.kafka.consumer.bootstrap-servers=127.0.0.1:9092
spring.kafka.consumer.auto-offset-reset=earliest
# The two group-id.* keys below are custom properties, read by the KafkaConfig class via @Value
# consumer group one
spring.kafka.consumer.group-id.one=test
# consumer group two
spring.kafka.consumer.group-id.two=test2
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
#======= set consumer max fetch bytes: 2*1024*1024 =============
spring.kafka.consumer.properties.max.partition.fetch.bytes=2097152
3. Write the configuration class:
package com.example.demo.config;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.*;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import java.util.Map;
/**
 * author: lizhaojie
 * created: 2019/9/27 10:25
 */
@Configuration
public class KafkaConfig {
@Autowired
KafkaProperties properties; // auto-binds the spring.kafka.* settings from application.properties (not the custom group-id.* keys)
@Value("${spring.kafka.consumer.group-id.one}")
private String groupOne;
@Value("${spring.kafka.consumer.group-id.two}")
private String groupTwo;
@Bean
public ProducerFactory<String, String> kafkaProducerFactory() {
Map<String, Object> producerProperties = properties.buildProducerProperties();
return new DefaultKafkaProducerFactory<>(producerProperties);
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
return new KafkaTemplate<>(kafkaProducerFactory());
}
/**
 * Two consumer factories, used to place consumers in different groups
 */
@Bean
public ConsumerFactory<String, String> kafkaConsumerFactoryOne() {
Map<String, Object> consumerProperties = properties.buildConsumerProperties();
consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, groupOne);
return new DefaultKafkaConsumerFactory<>(consumerProperties);
}
@Bean
public ConsumerFactory<String, String> kafkaConsumerFactoryTwo() {
Map<String, Object> consumerProperties = properties.buildConsumerProperties();
consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, groupTwo);
return new DefaultKafkaConsumerFactory<>(consumerProperties);
}
@Bean(name = "kafkaListenerContainerFactory") // must be named kafkaListenerContainerFactory: @KafkaListener looks up that bean name by default, so any other name causes a startup error
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactoryOne() {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(kafkaConsumerFactoryOne());
return factory;
}
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactoryTwo() {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(kafkaConsumerFactoryTwo());
return factory;
}
}
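The configuration above defines two listener container factories, but no listeners are shown that use them. As a minimal sketch (the class, method names, and log text are my own, not from the original project), consumers in the two groups could look like this:

```java
package com.example.demo.consumer;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    // Uses the default factory bean, "kafkaListenerContainerFactory" (group "test")
    @KafkaListener(topics = "test")
    public void listenGroupOne(String message) {
        System.out.println("group one received: " + message);
    }

    // Explicitly selects the second factory (group "test2"); since the two
    // listeners are in different groups, each receives every message on the topic
    @KafkaListener(topics = "test", containerFactory = "kafkaListenerContainerFactoryTwo")
    public void listenGroupTwo(String message) {
        System.out.println("group two received: " + message);
    }
}
```

This is exactly the scenario the two factories were built for: one message sent to the topic is delivered once to each group, rather than load-balanced between them.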
4. Write the Controller:
package com.example.demo.controller;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
/**
 * author: lizhaojie
 * created: 2019/9/27 12:49
 */
@RequestMapping(value = "/kafka")
@RestController
public class KafkaController {
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;
@RequestMapping("send")
@ResponseBody
public String sendMessage(HttpServletRequest request, HttpServletResponse response){
String message = request.getParameter("message");
try {
kafkaTemplate.send("test",message);
return "success";
}catch (Exception e){
e.printStackTrace();
return "error";
}
}
}
Test it with Postman and you're done.
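If you prefer the command line to Postman, a curl call does the same job (assuming the Spring Boot app runs on its default port 8080):

```shell
:: Should print "success" when both the broker and the Spring Boot app are running
curl "http://127.0.0.1:8080/kafka/send?message=hello"
```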