MQ (message queue): a container that holds messages in transit; it exposes produce and consume interfaces so that external callers can store and fetch data.
MQ models: point-to-point (P2P) and publish/subscribe (Pub/Sub).
Kafka: a message queue that follows the publish/subscribe model.
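The difference between the two models can be sketched in plain Java (an in-memory illustration only; the class and method names here are hypothetical, not Kafka APIs): in point-to-point, each message is consumed exactly once; in publish/subscribe, every subscriber gets its own copy.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class MqModes {

    // Point-to-point: a message is taken by exactly one consumer;
    // once polled, no other consumer will ever see it.
    static String p2pConsume(Queue<String> queue) {
        return queue.poll();
    }

    // Publish/subscribe: publishing delivers a copy of the message
    // to every subscriber's inbox.
    static void publish(List<List<String>> subscriberInboxes, String msg) {
        for (List<String> inbox : subscriberInboxes) {
            inbox.add(msg);
        }
    }

    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<>();
        queue.add("m1");
        System.out.println(p2pConsume(queue)); // m1
        System.out.println(queue.size());      // 0 -- consumed once, gone

        List<List<String>> subs = List.of(new ArrayList<>(), new ArrayList<>());
        publish(subs, "m1");
        System.out.println(subs.get(0).size() + "," + subs.get(1).size()); // 1,1
    }
}
```

Kafka implements the second model: a topic's messages are retained and each consumer group reads them independently.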
Kafka use cases:
- Operational metrics: Kafka is often used to record operational monitoring data, e.g. collecting metrics from distributed applications and producing centralized feeds of operational data such as alerts and reports.
- Log aggregation: Kafka can collect logs from many services across an organization and make them available in a standard format to multiple consumers.
- Messaging: decoupling producers from consumers, buffering messages, and so on.
Kafka terminology:
- Topic: messages are categorized by topic; a topic can be thought of as a queue.
- Producer: a message producer, i.e. a client that sends messages to a Kafka broker.
- Consumer: a message consumer, i.e. a client that fetches messages from a Kafka broker.
- Broker: a Kafka instance (server). One Kafka server is one broker; a cluster consists of multiple brokers, and one broker can host multiple topics.
- ZooKeeper: Kafka relies on a ZooKeeper ensemble to store cluster metadata.
Spring Boot + Kafka integration example:
Core dependencies in pom.xml:
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.75</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
Configuration file application.properties:
# Kafka broker address(es); multiple brokers can be listed, comma-separated
spring.kafka.bootstrap-servers=192.168.0.1:9092
# Number of producer retries
spring.kafka.producer.retries=0
# Producer batch size in bytes (not a message count)
spring.kafka.producer.batch-size=16384
# Total memory the producer may use for buffering, in bytes
spring.kafka.producer.buffer-memory=33554432
# Serializers for the message key and value
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# Default consumer group id
spring.kafka.consumer.group-id=consumer-group
# Start from the earliest offset when no committed offset exists
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
# Deserializers for the message key and value
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
# By default, startup fails if a listened-to topic does not exist; disable that
spring.kafka.listener.missing-topics-fatal=false
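Under the hood, Spring Boot translates these `spring.kafka.*` properties into the raw Kafka client configuration keys. A dependency-free sketch of that mapping for the producer side (key strings are the standard Kafka client config names; the broker address is the same placeholder used above):

```java
import java.util.Properties;

public class KafkaClientProps {
    // Equivalent raw client config for the producer properties above.
    static Properties producerProps() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "192.168.0.1:9092"); // spring.kafka.bootstrap-servers
        p.put("retries", "0");                          // spring.kafka.producer.retries
        p.put("batch.size", "16384");                   // bytes per batch
        p.put("buffer.memory", "33554432");             // total buffer, bytes
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return p;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("batch.size")); // 16384
    }
}
```

Seeing the mapping makes it clear that `batch-size` and `buffer-memory` are byte counts, matching the Kafka client's `batch.size` and `buffer.memory` settings.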
Key code:
@RestController
public class DemoController {
    @Autowired
    private DemoService demoService;

    @GetMapping("/hello")
    public Object hello() {
        return "hello";
    }

    @GetMapping("/send")
    public Object send() {
        demoService.send("hello");
        return "hello";
    }
}
@Service
public class DemoService {
    @Autowired
    private KafkaProducer kafkaProducer;

    public void send(String param) {
        Message message = new Message();
        message.setId(1L);
        message.setMsg(param);
        message.setSendTime(new Date());
        kafkaProducer.send(message);
    }
}
public class Message implements Serializable {
    private static final long serialVersionUID = 1L;

    private Long id;       // message id
    private String msg;    // message body
    private Date sendTime; // send timestamp

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getMsg() {
        return msg;
    }

    public void setMsg(String msg) {
        this.msg = msg;
    }

    public Date getSendTime() {
        return sendTime;
    }

    public void setSendTime(Date sendTime) {
        this.sendTime = sendTime;
    }
}
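By default, fastjson serializes a java.util.Date field as epoch milliseconds, which is why `sendTime` appears as a plain number in the log output at the end of this article. A dependency-free sketch of the JSON shape that `JSON.toJSONString(message)` produces for this POJO (hand-built here, not fastjson itself):

```java
import java.util.Date;

public class MessageJson {
    // Reproduces the JSON layout fastjson emits for the Message POJO:
    // fields in declaration order, Date rendered as epoch milliseconds.
    static String toJson(long id, String msg, Date sendTime) {
        return String.format("{\"id\":%d,\"msg\":\"%s\",\"sendTime\":%d}",
                id, msg, sendTime.getTime());
    }

    public static void main(String[] args) {
        System.out.println(toJson(1L, "hello", new Date(1612082552091L)));
        // {"id":1,"msg":"hello","sendTime":1612082552091}
    }
}
```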
Producer and consumer code:
@Component
public class KafkaProducer {
    private final Logger logger = LoggerFactory.getLogger(KafkaProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(Message message) {
        logger.info("send message=" + JSON.toJSONString(message));
        kafkaTemplate.send("topic1", JSON.toJSONString(message));
    }
}
@Component
public class KafkaConsumer {
    private final Logger logger = LoggerFactory.getLogger(KafkaConsumer.class);

    @Autowired
    private ConsumerFactory consumerFactory;

    /*@Bean
    public ConcurrentKafkaListenerContainerFactory filterContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Filtered (discarded) records are dropped and acknowledged
        factory.setAckDiscarded(true);
        // Filter strategy: returning true discards the record
        // (note: parseInt assumes a purely numeric payload)
        factory.setRecordFilterStrategy(consumerRecord -> {
            if (Integer.parseInt(consumerRecord.value().toString()) % 2 == 0) {
                return false; // even values pass through
            }
            return true; // odd values are filtered out
        });
        return factory;
    }*/

    // To enable filtering, uncomment the bean above and add
    // containerFactory = "filterContainerFactory" to the annotation.
    // Referencing that factory while the bean is commented out would
    // fail at startup with "no bean named 'filterContainerFactory'".
    //@KafkaListener(topics = {"topic1", "topic2"})
    @KafkaListener(topics = {"topic1"})
    public void listen(ConsumerRecord<?, ?> record) {
        Optional<?> message = Optional.ofNullable(record.value());
        if (message.isPresent()) {
            Object result = message.get();
            logger.info("record=" + record);
            logger.info("receive message=" + result);
            // listen on the topic, receive the message, and push it to the browser
        }
    }
}
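One caveat with the commented-out filter bean: it calls `Integer.parseInt` on the record value, which throws `NumberFormatException` for JSON payloads like the one this demo sends. The same even/odd logic can be written defensively as a plain predicate (stdlib only; the `RecordFilterStrategy` wiring itself stays as in the commented bean):

```java
import java.util.function.Predicate;

public class OddFilter {
    // Returns true when the record should be DISCARDED, matching
    // RecordFilterStrategy semantics (true = filter out).
    // Non-numeric payloads (e.g. JSON) are kept instead of crashing.
    static final Predicate<String> discard = value -> {
        try {
            return Integer.parseInt(value) % 2 != 0; // drop odd numbers
        } catch (NumberFormatException e) {
            return false; // not a number: keep the record
        }
    };

    public static void main(String[] args) {
        System.out.println(discard.test("2"));          // false -> kept
        System.out.println(discard.test("3"));          // true  -> discarded
        System.out.println(discard.test("{\"id\":1}")); // false -> kept
    }
}
```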
Output:
send message={"id":1,"msg":"hello","sendTime":1612082552091}
record=ConsumerRecord(topic = topic1, partition = 0, leaderEpoch = 0, offset = 5, CreateTime = 1612082552095, serialized key size = -1, serialized value size = 47, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = {"id":1,"msg":"hello","sendTime":1612082552091})
receive message={"id":1,"msg":"hello","sendTime":1612082552091}