- Download Kafka from http://kafka.apache.org/downloads. The version used here is kafka-1.1.0; the file is kafka_2.11-1.1.0.tgz.
- Upload the archive to /usr/local/kafka and unpack it; the extracted path is /usr/local/kafka/kafka_2.11-1.1.0. Kafka needs ZooKeeper to run, so install ZooKeeper first; see my earlier installation write-up at https://blog.csdn.net/u011890101/article/details/82491770. Kafka's configuration connects to the local ZooKeeper by default; to connect to a ZooKeeper on another machine, open server.properties under /usr/local/kafka/kafka_2.11-1.1.0/config and change the zookeeper.connect=localhost:2181 address.
- Start Kafka: from /usr/local/kafka/kafka_2.11-1.1.0/bin, run ./kafka-server-start.sh -daemon ../config/server.properties.
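Because -daemon detaches the broker from the terminal, it is worth checking that it actually came up. A minimal check, assuming the default logs/ directory under the install path used above:

```shell
# Confirm the Kafka JVM is running (jps ships with the JDK)
jps | grep -i kafka

# Or confirm the broker is listening on its default port 9092
ss -tlnp | grep 9092

# If it did not start, the cause is usually in the server log
tail -n 50 /usr/local/kafka/kafka_2.11-1.1.0/logs/server.log
```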
- Create a topic: also from /usr/local/kafka/kafka_2.11-1.1.0/bin, run ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test_topic.
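Before wiring up Spring Boot, the topic can be smoke-tested with the console clients that ship with Kafka (paths assume the install directory above; note that in 1.1.0 the console producer still takes --broker-list rather than --bootstrap-server):

```shell
cd /usr/local/kafka/kafka_2.11-1.1.0/bin

# List topics to confirm test_topic was created
./kafka-topics.sh --list --zookeeper localhost:2181

# In one terminal: type a line and press Enter to publish it
./kafka-console-producer.sh --broker-list localhost:9092 --topic test_topic

# In another terminal: read the topic from the beginning
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test_topic --from-beginning
```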
- Spring Boot Kafka producer integration: after creating the project, add the spring-kafka dependency.
```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```
Add the following to the application configuration file (replace the placeholder with your broker's address):
```properties
kafka.producer.servers=<kafka-broker-ip>:9092
kafka.producer.retries=0
kafka.producer.batch.size=4096
kafka.producer.linger=1
kafka.producer.buffer.memory=40960
```
Create a KafkaProducerConfig class:
```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
@EnableKafka
public class KafkaProducerConfig {

    @Value("${kafka.producer.servers}")
    private String servers;
    @Value("${kafka.producer.retries}")
    private int retries;
    @Value("${kafka.producer.batch.size}")
    private int batchSize;
    @Value("${kafka.producer.linger}")
    private int linger;
    @Value("${kafka.producer.buffer.memory}")
    private int bufferMemory;

    // Build the producer properties from the values in the config file
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        props.put(ProducerConfig.RETRIES_CONFIG, retries);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, batchSize);
        props.put(ProducerConfig.LINGER_MS_CONFIG, linger);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, bufferMemory);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<String, String>(producerFactory());
    }
}
```
Create a test controller and inject the KafkaTemplate (the controller class name below is arbitrary):
```java
import javax.annotation.Resource;

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class KafkaTestController {

    @Resource
    private KafkaTemplate<String, String> template;

    @RequestMapping("test")
    public String test() {
        // Send "helloworld" to the test_topic created earlier
        template.send("test_topic", "helloworld");
        return "success";
    }
}
```
- Spring Boot Kafka consumer integration: after creating the project, add the Kafka dependency (same as for the producer).
```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```
Add the following to the application configuration file (YAML here):
```yaml
spring:
  kafka:
    consumer:
      enable-auto-commit: true
      group-id: applog
      auto-offset-reset: latest
      bootstrap-servers: <kafka-broker-ip>:9092
```
Write a test message consumer class, KafkaConsumer:
```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    // Invoked for every message that arrives on test_topic
    @KafkaListener(topics = {"test_topic"})
    public void receive(String msg) {
        System.out.println("receive: " + msg);
    }
}
```
- Hit the test API on the producer controller above to send a test message.
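Assuming the application runs on Spring Boot's default port 8080 (the port and the "test" mapping are taken from the controller sketch above), the round trip can be driven with curl:

```shell
# Trigger the producer endpoint; it returns "success" once the message
# has been handed to KafkaTemplate
curl http://localhost:8080/test

# The consumer application's console output should then show the
# received "helloworld" message
```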
Installing Kafka on CentOS, with Spring Boot integration