1. Producing and consuming messages with the native client
Classes:
MsgConsumer - the native consumer (annotated with comments)
MsgProducer - the native producer (annotated with comments)
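A minimal sketch of what MsgProducer and MsgConsumer do with the native kafka-clients API (the broker address, topic name, and group id are placeholders, not taken from the project):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NativeKafkaSketch {
    public static void main(String[] args) {
        // producer side: string key/value serializers, one record sent
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "127.0.0.1:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("my-replicated-topic", "key", "hello"));
        }

        // consumer side: subscribe, poll once, print the values received
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "127.0.0.1:9092");
        consumerProps.put("group.id", "test1");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("my-replicated-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
```

Running it requires kafka-clients on the classpath and a broker listening on the configured address.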
2. Producing and consuming messages with Spring Boot
Classes:
KafkaController - sends messages via its send() method
MyConsumer - the message-consuming listener
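A minimal sketch of what those two classes look like with spring-kafka (the class and method names follow the description above; the topic name, endpoint path, and message body are assumptions):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class KafkaController {
    // the template bean defined in KafkaConfig below
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // sends one message to the topic
    @GetMapping("/send")
    public String send() {
        kafkaTemplate.send("my-replicated-topic", "hello from spring boot");
        return "ok";
    }
}

@Component
class MyConsumer {
    // receives messages from the same topic
    @KafkaListener(topics = "my-replicated-topic", groupId = "test1")
    public void listen(String message) {
        System.out.println("received: " + message);
    }
}
```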
The factory configuration class KafkaConfig is not included in this project; create it in the same package and copy the code below.
package com.kafka;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ContainerProperties;
import java.util.HashMap;
import java.util.Map;
/**
* @author luyao
* @create 2023-07-19 17:54
* @desc factory configuration
**/
@Configuration
public class KafkaConfig {
@Value("${spring.kafka.bootstrap-servers}")
private String servers;
@Value("${spring.kafka.consumer.auto-offset-reset}")
private String autoOffsetReset;
@Value("${spring.kafka.consumer.enable-auto-commit}")
private String enableAutoCommit;
@Bean("kafkaTemplate")
public KafkaTemplate<String, String> kafkaTemplate() {
KafkaTemplate<String, String> kafkaTemplate = new KafkaTemplate<>(producerFactory());
return kafkaTemplate;
}
public ProducerFactory<String, String> producerFactory() {
Map<String, Object> properties = new HashMap<>();
// broker connection address
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
// number of send retries; 0 disables the retry mechanism
properties.put(ProducerConfig.RETRIES_CONFIG, 50);
// batch size, in bytes
properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
// wait up to 1 ms to batch records together; fewer send requests, higher throughput
properties.put(ProducerConfig.LINGER_MS_CONFIG, 1);
// total memory (in bytes) the producer may use to buffer records waiting to be sent
properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
// key serializer
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
// value serializer
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
// refresh cluster metadata every 10 seconds
properties.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, 10000);
// optionally plug in a custom partitioner
//properties.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, "com.kafka.MyPartitioner");
return new DefaultKafkaProducerFactory<>(properties);
}
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>> testFactory() {
// create the message listener container factory
ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
Map<String, Object> properties = new HashMap<>();
properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
// consumer group id
properties.put(ConsumerConfig.GROUP_ID_CONFIG, "test1");
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
// the assignor must be given as its fully qualified class name (a String), not a bare identifier
properties.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, "com.github.grantneale.kafka.LagBasedPartitionAssignor");
DefaultKafkaConsumerFactory<Integer, String> consumerFactory = new DefaultKafkaConsumerFactory<>(properties);
// wire the consumer factory into the container factory
factory.setConsumerFactory(consumerFactory);
// number of listener threads running in the container
factory.setConcurrency(1);
// maximum time the consumer blocks waiting for records (default 1000 ms)
factory.getContainerProperties().setPollTimeout(3 * 1000);
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
return factory;
}
}
Custom partition assignment:
1. Add the dependency to pom.xml:
<dependency>
    <groupId>com.github.grantneale</groupId>
    <artifactId>kafka-lag-based-assignor</artifactId>
</dependency>
2. Specify the strategy:
partition.assignment.strategy = com.github.grantneale.kafka.LagBasedPartitionAssignor
The consumer will then use the partition assignment strategy you referenced.
1. As for repartitioning and rebalancing, I'll post an update on them later.
@KafkaListener(topics = "my-replicated-topic", containerFactory = "testFactory")
Here "testFactory" is the bean name of the factory, i.e. the name of the method below:
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>> testFactory()
By contrast, this:
@KafkaListener(topics = "my-replicated-topic", groupId = "good")
declares a consumer in consumer group "good" with all other properties left at their defaults, e.g. the default partition.assignment.strategy (RangeAssignor).
Passing containerFactory = "testFactory" makes the listener use the factory's configuration instead.
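Because testFactory sets AckMode.MANUAL_IMMEDIATE, a listener bound to it has to acknowledge each record itself. A sketch (class, method, and variable names are illustrative):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class ManualAckConsumer {
    @KafkaListener(topics = "my-replicated-topic", containerFactory = "testFactory")
    public void listen(ConsumerRecord<Integer, String> record, Acknowledgment ack) {
        System.out.println("received: " + record.value());
        // commit this record's offset right away (MANUAL_IMMEDIATE)
        ack.acknowledge();
    }
}
```

If acknowledge() is never called, the offset is not committed and the record will be redelivered after a restart.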
1. Modify partitions
Alter the topic to the desired number of partitions:
./bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --alter --topic my-replicated-topic --partitions 6
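To confirm the new partition count took effect, describe the topic (same ZooKeeper address as above):

```shell
./bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --describe --topic my-replicated-topic
```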
2. Migrate data
Use kafka-reassign-partitions.sh, the tool Kafka ships with, to migrate the data. Migration takes three steps.
Step 1: generate a migration plan
First create a topic.json by hand with the content below; "topics" can list more than one topic:
{
"topics": [
{"topic": "my-replicated-topic"}
],
"version": 1
}
Run:
./bin/kafka-reassign-partitions.sh --zookeeper 127.0.0.1:2181 --topics-to-move-json-file topic.json --broker-list "0,1,2,3,4" --generate
This plans a move of the topics in topic.json onto the brokers listed in --broker-list and prints an execution plan:
Current partition replica assignment
{"version":1,
"partitions":[....]
}
Proposed partition reassignment configuration
{"version":1,
"partitions":[.....]
}
Create a new file, reassignment.json, and save the "Proposed partition reassignment configuration" JSON into it. "Current partition replica assignment" is the current partition layout (keep a copy of it in case you need to roll back); "Proposed partition reassignment configuration" is the planned layout.
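For reference, the "partitions" array in reassignment.json holds entries of the following shape; the partition and replica numbers here are illustrative, your own --generate output will differ:

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-replicated-topic", "partition": 0, "replicas": [0, 1]},
    {"topic": "my-replicated-topic", "partition": 1, "replicas": [1, 2]}
  ]
}
```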
Step 2: execute the reassignment
./bin/kafka-reassign-partitions.sh --zookeeper 127.0.0.1:2181 --reassignment-json-file reassignment.json --execute
Step 3: verify the reassignment
./bin/kafka-reassign-partitions.sh --zookeeper 127.0.0.1:2181 --reassignment-json-file reassignment.json --verify