Kafka installation and a Java demo application


Download the pre-built Kafka archive (the file name does not contain "src"): Apache Kafka

Reference documentation: Apache Kafka

I placed the archive in the /home/wl/mq/ directory (my version is 2.5.1).

Extract it:

tar -xzf kafka_2.12-2.5.1.tgz
cd kafka_2.12-2.5.1

Kafka needs ZooKeeper to run, and the Kafka distribution ships with one.

Start the bundled ZooKeeper:

bin/zookeeper-server-start.sh config/zookeeper.properties

Background start command:

bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

Start the broker

First edit server.properties in the config directory and add the following setting:

advertised.listeners=PLAINTEXT://192.168.92.128:9092

192.168.92.128 is the IP address of my Kafka server. Without this setting, a Java client connecting to Kafka will report errors.
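
For context, a minimal sketch of the relevant listener settings; the 0.0.0.0 bind address below is my illustrative assumption, not part of the original setup:

# address the broker binds to and listens on
listeners=PLAINTEXT://0.0.0.0:9092
# address advertised to clients; must be reachable from the client machine
advertised.listeners=PLAINTEXT://192.168.92.128:9092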

Start command:

bin/kafka-server-start.sh  config/server.properties

Background start command:

bin/kafka-server-start.sh -daemon  config/server.properties

Stopping ZooKeeper and the Kafka server:

bin/kafka-server-stop.sh
bin/zookeeper-server-stop.sh

The Kafka environment is now ready.
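
The demo below relies on the broker's default auto.create.topics.enable=true, so the topic appears on first use. If you prefer to create it up front, a command along these lines should work (partition and replication counts are illustrative):

bin/kafka-topics.sh --create --bootstrap-server 192.168.92.128:9092 --replication-factor 1 --partitions 1 --topic kafka-topic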

The Java demo follows (my project is a Spring Boot project).

Add the dependency:

    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>2.5.1</version>
    </dependency>

Configure the producer and consumer in KafkaConfig.java:

package com.wl.mq.config;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.Properties;

/**
 * Created by Administrator on 2021/3/10.
 */
@Configuration
public class KafkaConfig {

    @Bean
    public Producer<String,String> producer(){
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.92.128:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        return new KafkaProducer<>(props);
    }

    @Bean
    public Consumer<String,String> consumerA(){
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "192.168.92.128:9092");
        props.setProperty("group.id", "consumer_a");
        props.setProperty("enable.auto.commit", "true");
        props.setProperty("auto.commit.interval.ms", "1000");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }


}

The group.id setting on consumerA corresponds to the consumer-group concept in RocketMQ.

In a clustered deployment, if several consumers listen to the same topic with the same group.id, each message on that topic is consumed only once, by whichever consumer in the group happens to receive it.

To get the topic broadcast semantics of ActiveMQ, the consumers listening to a topic should each be configured with a distinct group.id (that is, each listening consumer is a separate group instance); see the sketch below.
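
A minimal sketch of that broadcast pattern; the random UUID suffix is my illustrative assumption for giving every application instance its own group:

// Illustrative: each instance joins a one-member group, so every
// instance receives every message published to the topic.
Properties props = new Properties();
props.setProperty("bootstrap.servers", "192.168.92.128:9092");
props.setProperty("group.id", "broadcast_" + java.util.UUID.randomUUID());
props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
Consumer<String, String> broadcastConsumer = new KafkaConsumer<>(props);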

KafkaProduceService.java

package com.wl.mq.kafka;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

/**
 * Created by Administrator on 2021/3/10.
 */
@Service
public class KafkaProduceService {

    private Producer<String,String> producer;

    @Autowired
    public KafkaProduceService(Producer<String,String> producer){
        this.producer = producer;
    }

    public void sendMessage(String destination,String message){
        producer.send(new ProducerRecord<String, String>(destination,message));
    }

    public void sendMessage(String destination,String key,String message){
        producer.send(new ProducerRecord<String, String>(destination,key,message));
    }

    /**
     *   If a partition ID is specified, the record is sent to that partition.
     *   If no partition ID is given but a key is, the record is routed to a partition by hash(key).
     *   If neither a partition ID nor a key is given, records are spread across partitions round-robin.
     *   If both a partition ID and a key are given, the record goes only to the specified partition (the key does not affect routing).
     */
    public void sendMessage(String destination,Integer partition,String message){
        producer.send(new ProducerRecord<String, String>(destination,partition,null,message));
    }



}
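
For reference, the three overloads might be exercised like this (topic name and partition number are illustrative):

produceService.sendMessage("kafka-topic", "no key: spread round-robin across partitions");
produceService.sendMessage("kafka-topic", "order-42", "keyed: routed by hash(key)");
produceService.sendMessage("kafka-topic", 0, "pinned to partition 0");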

KafkaConsumerServiceA.java

package com.wl.mq.kafka;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.time.Duration;
import java.util.Collections;

/**
 * Created by Administrator on 2021/3/10.
 */
@Service
public class KafkaConsumerServiceA implements InitializingBean {

    private Consumer<String,String> consumerA;

    @Autowired
    public KafkaConsumerServiceA(Consumer<String,String> consumerA){
        this.consumerA = consumerA;
    }

    private void initConsumer(){
        consumerA.subscribe(Collections.singleton("kafka-topic"));
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    ConsumerRecords<String, String> records = consumerA.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println("================================================");
                        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
                        System.out.println("================================================");
                    }
                }
            }
        }).start();
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        initConsumer();
    }
}
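
One caveat the demo glosses over: KafkaConsumer is not thread-safe, and the polling thread above never terminates. A minimal shutdown sketch, assuming a shutdown() method is added to the service (the closed flag and method name are my additions; imports needed: java.util.concurrent.atomic.AtomicBoolean and org.apache.kafka.common.errors.WakeupException):

    private final AtomicBoolean closed = new AtomicBoolean(false);

    public void shutdown() {
        closed.set(true);
        consumerA.wakeup(); // makes a blocked poll() throw WakeupException
    }

    // and the while (true) loop becomes:
    try {
        while (!closed.get()) {
            ConsumerRecords<String, String> records = consumerA.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    } catch (WakeupException e) {
        // expected during shutdown; nothing to do
    } finally {
        consumerA.close(); // close on the same thread that polls
    }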

Test code

package com.wl.mq;

import com.wl.mq.kafka.KafkaProduceService;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

/**
 * Created by Administrator on 2021/3/10.
 */
@SpringBootTest(classes = Application.class)
@RunWith(SpringJUnit4ClassRunner.class)
//@Ignore
public class KafkaMqTest {

    @Autowired
    private KafkaProduceService produceService;

    @Test
    public void testSendMessage() throws Exception{
        String destination = "kafka-topic";
        String message = "hello this is kafka message";
        produceService.sendMessage(destination,message);
        Thread.sleep(1000000);
    }
}

Test result (screenshot omitted).

consumerA can also listen to additional topics (although I recommend that one consumer listen to only one topic). For example:

private void initConsumer(){
        // subscribe() replaces any earlier subscription, so all topics must be
        // passed in a single call (requires import java.util.Arrays)
        consumerA.subscribe(Arrays.asList("kafka-topic", "kafka-topic-1"));
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    ConsumerRecords<String, String> records = consumerA.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.topic());
                        System.out.println("================================================");
                        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
                        System.out.println("================================================");
                    }
                }
            }
        }).start();

    }

In a microservice system, different modules usually use different group.ids.

Suppose we have two services, an order service and a promotion service, and both need to listen to the same topic. Treat the consumerA above as the promotion service's consumer; below we create a consumerB to act as the order service's consumer.

Modify KafkaConfig.java as follows (adding a consumerB bean):

package com.wl.mq.config;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.Properties;

/**
 * Created by Administrator on 2021/3/10.
 */
@Configuration
public class KafkaConfig {

    @Bean
    public Producer<String,String> producer(){
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.92.128:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        return new KafkaProducer<>(props);
    }

    @Bean
    public Consumer<String,String> consumerA(){
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "192.168.92.128:9092");
        props.setProperty("group.id", "consumer_a");
        props.setProperty("enable.auto.commit", "true");
        props.setProperty("auto.commit.interval.ms", "1000");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }

    @Bean
    public Consumer<String,String> consumerB(){
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "192.168.92.128:9092");
        props.setProperty("group.id", "consumer_b");
        props.setProperty("enable.auto.commit", "true");
        props.setProperty("auto.commit.interval.ms", "1000");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }
}

Add KafkaConsumerServiceB.java (it listens to the same kafka-topic):

package com.wl.mq.kafka;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.time.Duration;
import java.util.Collections;

/**
 * Created by Administrator on 2021/3/10.
 */
@Service
public class KafkaConsumerServiceB implements InitializingBean {

    private Consumer<String,String> consumerB;

    @Autowired
    public KafkaConsumerServiceB(Consumer<String,String> consumerB){
        this.consumerB = consumerB;
    }

    private void initConsumer(){
        consumerB.subscribe(Collections.singleton("kafka-topic"));
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    ConsumerRecords<String, String> records = consumerB.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.topic());
                        System.out.println("======================consumerB==========================");
                        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
                        System.out.println("=======================kafka-topic=========================");
                    }
                }
            }
        }).start();

    }


    @Override
    public void afterPropertiesSet() throws Exception {
        initConsumer();
    }
}

Run the test above again:

    @Test
    public void testSendMessage() throws Exception{
        String destination = "kafka-topic";
        String message = "hello this is kafka message";
        produceService.sendMessage(destination,message);
        Thread.sleep(1000000);
    }

Test result (screenshot omitted):

Both consumerA and consumerB consumed the message from kafka-topic.

Package the project and deploy it on two servers, then run the test again: consumerA and consumerB each still consume the message only once. This avoids the problem of the same topic being consumed multiple times in a clustered deployment (something ActiveMQ needs virtual topics to solve).

#=====================================================

spring-kafka demo

Add the dependency:

<!-- https://mvnrepository.com/artifact/org.springframework.kafka/spring-kafka -->
    <dependency>
      <groupId>org.springframework.kafka</groupId>
      <artifactId>spring-kafka</artifactId>
      <version>2.5.1.RELEASE</version>
    </dependency>

Watch out for Spring version conflicts: here the spring-kafka version is 2.5.1.RELEASE (which depends on kafka-clients 2.5.0). My Spring Boot version was originally 2.0.8.RELEASE and the application failed to start; after upgrading Spring Boot to 2.3.7.RELEASE it started successfully.

With Spring Boot 2.2.x and above, test classes fail to launch in older versions of IDEA because of a JUnit version mismatch: Spring Boot 2.2.x+ defaults to JUnit 5, while older IDEA defaults to JUnit 4 (my IDEA 2017.2 failed to launch the test class).

To resolve the conflict, exclude the JUnit 5 API from spring-boot-starter-test, as follows:

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <version>${spring-boot-version}</version>
      <scope>test</scope>
      <exclusions>
        <exclusion>
          <groupId>org.junit.jupiter</groupId>
          <artifactId>junit-jupiter-api</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
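
Alternatively, with Spring Boot 2.2.x+ the test can be written directly against JUnit 5, since @SpringBootTest already carries the Spring test extension; a minimal sketch reusing the classes from the demo above:

package com.wl.mq;

import com.wl.mq.kafka.KafkaProduceService;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest(classes = Application.class)
public class KafkaMqJunit5Test {

    @Autowired
    private KafkaProduceService produceService;

    @Test
    public void testSendMessage() throws Exception {
        produceService.sendMessage("kafka-topic", "hello this is kafka message");
        Thread.sleep(10000); // give the asynchronous consumers time to print
    }
}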

Add the Kafka configuration to application.properties:

#=======================================================KAFKA MQ======================================================#
spring.kafka.bootstrap-servers=192.168.92.128:9092
# number of producer retries
spring.kafka.producer.retries=0
# ack level: how many partition replicas must acknowledge a write before the producer receives an ack (0, 1, or all/-1)
spring.kafka.producer.acks=all
# serializer and deserializer classes provided by Kafka
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# whether to auto-commit offsets
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
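
One property this config omits: if a @KafkaListener does not set groupId explicitly (the ones below do), the default consumer group has to come from here instead. A hedged example; the group name is illustrative:

# optional default consumer group, used when a listener does not set groupId
spring.kafka.consumer.group-id=default_group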

KafkaTemplateProduceService.java

package com.wl.mq.kafka;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

/**
 * Created by Administrator on 2021/3/10.
 */
@Service
public class KafkaTemplateProduceService {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public KafkaTemplateProduceService(KafkaTemplate<String,String> kafkaTemplate){
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendMessage(String destination,String message){
        kafkaTemplate.send(destination,message);
    }

}
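
KafkaTemplate.send() returns a ListenableFuture, so the send outcome can be observed instead of being fire-and-forget. A minimal sketch of such a method (my addition, not part of the original demo; imports needed: org.springframework.kafka.support.SendResult and org.springframework.util.concurrent.ListenableFuture):

    public void sendMessageWithCallback(String destination, String message) {
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(destination, message);
        future.addCallback(
                result -> System.out.println("sent to partition " + result.getRecordMetadata().partition()
                        + " at offset " + result.getRecordMetadata().offset()),
                ex -> System.err.println("send failed: " + ex.getMessage()));
    }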

The listener, KafkaListenerService.java:

package com.wl.mq.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

/**
 * Created by Administrator on 2021/3/10.
 */
@Service
public class KafkaListenerService {

    @KafkaListener(topics = "kafka-topic",groupId = "consumer_c")
    public void consumerAListener(ConsumerRecord<String, String> record){
        System.out.println(record.topic());
        System.out.println("======================kafka spring consumerC==========================");
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        System.out.println("=======================kafka-topic=========================");
    }

    @KafkaListener(topics = "kafka-topic",groupId = "consumer_d")
    public void consumerBListener(ConsumerRecord<String, String> record){
        System.out.println(record.topic());
        System.out.println("======================kafka spring consumerD==========================");
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        System.out.println("=======================kafka-topic=========================");
    }

}

Here we again listen to the kafka-topic topic, but with groupIds consumer_c and consumer_d respectively.

Run the same test code as above.

Test result (screenshot omitted): both listeners receive the message, as expected given their distinct groupIds.
