CentOS 7 Spring MVC Kafka tutorial: two configuration approaches (Spring XML and Java code) plus Kafka SASL authentication (username/password for producing and consuming)

For Kafka installation and enabling authentication, see the previous article:

CentOS 7 Kafka security authentication (username/password for producing and consuming) + systemctl start on boot

Spring version: 4.2.5.RELEASE
Kafka version: kafka_2.12-2.2.0
(Because of the Spring version, the newest Kafka client cannot be used: spring-kafka 2.3 requires Spring 5. Version 2.2.0 was tested and works.)

After several days of configuration and testing, there are two ways to integrate Kafka with Spring:
1. Pure Java code. Pros: convenient and flexible; listeners and consumers can be created freely, and consumption can be driven by annotations.
2. Pure XML configuration. Cons: creating consumers is less flexible and requires editing the XML; annotations cannot be used.

Spring configuration:

Add the dependency to pom.xml:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.0.RELEASE</version>
</dependency>
First, create a shared configuration file, kafka.properties:
################# Kafka shared configuration ##################
# Broker list (comma-separated for a cluster)
kafka.bootstrap.servers = ip:9092

# SASL authentication settings.
# Every configuration in this article includes these credentials. If SASL is not
# enabled on the Kafka broker, these three settings must be removed or the client
# cannot connect; if SASL is enabled, adjust the values to match your setup.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

################# Kafka producer configuration ##################
kafka.producer.acks = all
# Number of retries after a failed send
kafka.producer.retries = 3
kafka.producer.linger.ms = 10
# Producer buffer size in bytes (the Kafka default is 33554432, i.e. 32 MB)
kafka.producer.buffer.memory = 40960
# Batch size in bytes: when multiple records are sent to the same partition, the producer
# batches them into fewer requests, which helps both client and broker performance
kafka.producer.batch.size = 4096
# Default topic
kafka.producer.defaultTopic = topone
kafka.producer.key.serializer = org.apache.kafka.common.serialization.StringSerializer
kafka.producer.value.serializer = org.apache.kafka.common.serialization.StringSerializer


################# Kafka consumer configuration ##################
# If true, the consumer's offsets are committed periodically in the background
kafka.consumer.enable.auto.commit = true
# Concurrency of the listener container
kafka.consumer.concurrency = 3
# Auto-commit interval, used when enable.auto.commit=true
kafka.consumer.auto.commit.interval.ms=1000
# Consumer group ID. For publish-subscribe behavior (one producer, several independent
# consumers that should each receive the messages), every subscriber needs its own group;
# within a single group, only one consumer receives a given message.
kafka.consumer.group.id = sys_topone
kafka.alarm.topic = topone
# Timeout used to detect consumer failures when using Kafka's group management
kafka.consumer.session.timeout.ms = 30000
kafka.consumer.key.deserializer = org.apache.kafka.common.serialization.StringDeserializer
kafka.consumer.value.deserializer = org.apache.kafka.common.serialization.StringDeserializer
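These ${...} placeholders only resolve if kafka.properties is actually loaded into the Spring context. The XML files in Part 2 carry a commented-out <context:property-placeholder> for this; one way to wire it into your main Spring configuration is shown below (the classpath location is an assumption here, adjust it to your project layout, and the context namespace must be declared on the beans element):

<context:property-placeholder location="classpath:kafka.properties"/>

For the pure Java route in Part 1, the equivalent is putting @PropertySource("classpath:kafka.properties") on one of the @Configuration classes, as done below.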

Part 1: Configuring Kafka in Java code

1. Create the producer configuration class:
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

import java.util.HashMap;
import java.util.Map;

@Configuration
@EnableKafka
// One @PropertySource in the context is enough to resolve the @Value fields of both config classes; adjust the path to your layout
@PropertySource("classpath:kafka.properties")
public class KafkaProducerConfig {
    // Read kafka.properties values via the @Value annotation
    @Value("${kafka.bootstrap.servers}")
    private String kafka_bootstrap_servers;
    @Value("${kafka.producer.acks}")
    private String kafka_producer_acks;
    @Value("${kafka.producer.retries}")
    private String kafka_producer_retries;
    @Value("${kafka.producer.linger.ms}")
    private String kafka_producer_linger_ms;
    @Value("${kafka.producer.buffer.memory}")
    private String kafka_producer_buffer_memory;
    @Value("${kafka.producer.batch.size}")
    private String kafka_producer_batch_size;
    @Value("${kafka.producer.defaultTopic}")
    private String kafka_producer_defaultTopic;

    @Value("${security.protocol}")
    private String security_protocol;
    @Value("${sasl.mechanism}")
    private String sasl_mechanism;
    @Value("${sasl.jaas.config}")
    private String sasl_jaas_config;

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        KafkaTemplate<String, String> kafkaTemplate = new KafkaTemplate<String, String>(producerFactory());
        kafkaTemplate.setDefaultTopic(kafka_producer_defaultTopic);
        return kafkaTemplate;
    }

    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka_bootstrap_servers);
        properties.put(ProducerConfig.RETRIES_CONFIG, kafka_producer_retries);
        properties.put(ProducerConfig.BATCH_SIZE_CONFIG, kafka_producer_batch_size);
        properties.put(ProducerConfig.LINGER_MS_CONFIG, kafka_producer_linger_ms);
        properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, kafka_producer_buffer_memory);
        properties.put(ProducerConfig.ACKS_CONFIG, kafka_producer_acks);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        // SASL settings; remove these three lines if the broker does not have SASL enabled, otherwise the client cannot connect
        properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, security_protocol);
        properties.put(SaslConfigs.SASL_MECHANISM, sasl_mechanism);
        properties.put(SaslConfigs.SASL_JAAS_CONFIG,sasl_jaas_config);
        return new DefaultKafkaProducerFactory<>(properties);
    }

}
2. Create the consumer configuration class:
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

import java.util.HashMap;
import java.util.Map;

@Configuration
@EnableKafka
public class KafkaConsumerConfig {
    // Read kafka.properties values via the @Value annotation
    @Value("${kafka.bootstrap.servers}")
    private String kafka_bootstrap_servers;

    @Value("${kafka.consumer.enable.auto.commit}")
    private String kafka_consumer_enable_auto_commit;
    @Value("${kafka.consumer.concurrency}")
    private String kafka_consumer_concurrency;
    @Value("${kafka.consumer.auto.commit.interval.ms}")
    private String kafka_consumer_auto_commit_interval_ms;
    @Value("${kafka.consumer.group.id}")
    private String kafka_consumer_group_id;
    @Value("${kafka.consumer.session.timeout.ms}")
    private String kafka_consumer_session_timeout_ms;

    @Value("${security.protocol}")
    private String security_protocol;
    @Value("${sasl.mechanism}")
    private String sasl_mechanism;
    @Value("${sasl.jaas.config}")
    private String sasl_jaas_config;


    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(Integer.parseInt(kafka_consumer_concurrency)); // setConcurrency expects a number; the property is injected as a String
        factory.getContainerProperties().setPollTimeout(4000);
        return factory;
    }


    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,kafka_bootstrap_servers);
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, kafka_consumer_enable_auto_commit);
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, kafka_consumer_auto_commit_interval_ms);
        properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, kafka_consumer_session_timeout_ms);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, kafka_consumer_group_id);

        // SASL settings; remove these three lines if the broker does not have SASL enabled, otherwise the client cannot connect
        properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, security_protocol);
        properties.put(SaslConfigs.SASL_MECHANISM, sasl_mechanism);
        properties.put(SaslConfigs.SASL_JAAS_CONFIG,sasl_jaas_config);
        return new DefaultKafkaConsumerFactory<>(properties);
    }

    @Bean
    public KafkaListeners kafkaListeners() {
        return new KafkaListeners();
    }

}

3. Create the KafkaListeners class to consume messages:
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;

import java.util.Optional;

public class KafkaListeners {
    // You can also receive messages from a topic by adding @KafkaListener to methods in other classes; it is very convenient
    @KafkaListener(topics = {"aaa"})
    public void listen(ConsumerRecord<?, ?> record) {
        Optional<?> kafkaMessage = Optional.ofNullable(record.value());
        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            System.out.println("listen " + message);
        }
    }
    @KafkaListener(topics = {"bbb"})
    public void listen2(ConsumerRecord<?, ?> record) {
        Optional<?> kafkaMessage = Optional.ofNullable(record.value());
        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            System.out.println("listen2 " + message);
        }
    }
}
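Topic names do not have to be hard-coded: spring-kafka resolves property placeholders in the topics attribute, so (assuming kafka.properties is loaded as described earlier) a listener can reference the configured topic. A hypothetical variant:

    // Reads the topic name from kafka.properties instead of hard-coding it
    @KafkaListener(topics = {"${kafka.alarm.topic}"})
    public void listenAlarm(ConsumerRecord<?, ?> record) {
        System.out.println("alarm " + record.value());
    }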

4. Test
1. Create a test endpoint:
 	@Autowired
    KafkaTemplate kafkaTemplate;

    @RequestMapping(value = "/test")
    @ResponseBody
    public Object test(HttpServletRequest request1, HttpServletResponse response1) {
        kafkaTemplate.sendDefault("111111"); // sends to the default topic configured in kafka.properties
        kafkaTemplate.send("aaa", "22222"); // sends to a custom topic
        System.out.println("kafka message sent!");
        return "ok";
    }
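Note that send() and sendDefault() are asynchronous: the println above only means the record was handed to the producer, not that it was delivered. A minimal sketch for confirming delivery with the spring-kafka 2.2 callback API:

import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

    // Inside the controller; kafkaTemplate is the same injected bean as above
    ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send("aaa", "22222");
    future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
        @Override
        public void onSuccess(SendResult<String, String> result) {
            // RecordMetadata carries the topic, partition and offset of the acknowledged record
            System.out.println("sent: " + result.getRecordMetadata());
        }

        @Override
        public void onFailure(Throwable ex) {
            System.out.println("send failed: " + ex.getMessage());
        }
    });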

2. The listeners print the received messages to the console.

Part 2: Configuring Kafka with Spring XML resource files

This approach uses the same shared kafka.properties file created above.
Note: the following two files must be imported in your main Spring configuration:

<import resource="classpath:spring-kafka-producer.xml"/>
<import resource="classpath:spring-kafka-consumer.xml"/>
1. Create the producer configuration under the resources directory: spring-kafka-producer.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!--<context:property-placeholder location="classpath:kafka/kafka.properties" />-->
    <!-- Producer parameters -->
    <bean id="producerProperties" class="java.util.HashMap">
        <constructor-arg>
            <map>
                <!-- Kafka broker address(es); may be a cluster -->
                <entry key="bootstrap.servers" value="${kafka.bootstrap.servers}"/>
                <!-- Retries on send failure; retrying can cause the broker to receive duplicate messages -->
                <entry key="retries" value="${kafka.producer.retries}"/>
                <!-- Batch size in bytes per partition -->
                <entry key="batch.size" value="${kafka.producer.batch.size}"/>
                <!-- How long the producer waits for more records before sending a batch (default 0 ms) -->
                <entry key="linger.ms" value="${kafka.producer.linger.ms}"/>
                <!-- Memory the producer can use to buffer records; if records are produced faster than they can be sent to the broker, the producer blocks or throws an exception -->
                <entry key="buffer.memory" value="${kafka.producer.buffer.memory}"/>
                <!-- How many acknowledgements the producer requires the broker to send before considering a request complete -->
                <entry key="acks" value="${kafka.producer.acks}"/>
                <entry key="key.serializer" value="${kafka.producer.key.serializer}"/>
                <entry key="value.serializer" value="${kafka.producer.value.serializer}"/>

                <!-- Kafka SASL authentication settings; remove these three entries if the broker does not have SASL enabled, otherwise the client cannot connect -->
                <entry key="sasl.jaas.config" value="${sasl.jaas.config}"/>
                <entry key="security.protocol" value="${security.protocol}"/>
                <entry key="sasl.mechanism" value="${sasl.mechanism}"/>
            </map>
        </constructor-arg>
    </bean>

    <!-- producerFactory bean used to create the KafkaTemplate -->
    <bean id="producerFactory" class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
        <constructor-arg>
            <ref bean="producerProperties"/>
        </constructor-arg>
    </bean>

    <!-- Producer listener: receives success/failure callbacks for sent records (a sketch follows this file) -->
    <bean id="kafkaProducerListener" class="com.test.kafka.KafkaProducerListener"/>

    <!-- KafkaTemplate bean; inject it and use its send methods to publish messages -->
    <bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
        <constructor-arg ref="producerFactory"/>
        <constructor-arg name="autoFlush" value="true"/>
        <!-- Default topic -->
        <property name="defaultTopic" value="${kafka.producer.defaultTopic}"/>
        <property name="producerListener" ref="kafkaProducerListener"/>
    </bean>
</beans>
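The XML above references com.test.kafka.KafkaProducerListener, a class the original post does not show. A minimal sketch, assuming it implements spring-kafka's ProducerListener interface (callback signatures as of spring-kafka 2.2):

package com.test.kafka;

import org.apache.kafka.clients.producer.RecordMetadata;
import org.springframework.kafka.support.ProducerListener;

public class KafkaProducerListener implements ProducerListener<String, String> {

    @Override
    public void onSuccess(String topic, Integer partition, String key,
                          String value, RecordMetadata recordMetadata) {
        // Invoked after the broker acknowledges the record
        System.out.println("sent to " + topic + ", offset " + recordMetadata.offset());
    }

    @Override
    public void onError(String topic, Integer partition, String key,
                        String value, Exception exception) {
        // Invoked when the send fails
        System.out.println("send to " + topic + " failed: " + exception.getMessage());
    }
}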
2. Create the consumer configuration under the resources directory: spring-kafka-consumer.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- 1. Consumer parameters -->
    <!--<context:property-placeholder location="classpath*:kafka/kafka.properties" />-->
    <bean id="consumerProperties" class="java.util.HashMap">
        <constructor-arg>
            <map>
                <!-- Kafka broker address(es) -->
                <entry key="bootstrap.servers" value="${kafka.bootstrap.servers}" />
                <!-- Consumer group ID; consumers with the same group.id belong to the same group -->
                <entry key="group.id" value="${kafka.consumer.group.id}" />
                <!-- If true, the consumer periodically commits its offsets to Kafka in the background; after a restart it resumes from the last committed offset -->
                <entry key="enable.auto.commit" value="${kafka.consumer.enable.auto.commit}" />
                <!-- Timeout used to detect consumer failures when using Kafka's group management -->
                <entry key="session.timeout.ms" value="${kafka.consumer.session.timeout.ms}" />
                <entry key="auto.commit.interval.ms" value="${kafka.consumer.auto.commit.interval.ms}" />
                <entry key="retry.backoff.ms" value="100" />
                <entry key="key.deserializer" value="${kafka.consumer.key.deserializer}" />
                <entry key="value.deserializer" value="${kafka.consumer.value.deserializer}" />

                <!-- Kafka SASL authentication settings; remove these three entries if the broker does not have SASL enabled, otherwise the client cannot connect -->
                <entry key="sasl.jaas.config" value="${sasl.jaas.config}"/>
                <entry key="security.protocol" value="${security.protocol}"/>
                <entry key="sasl.mechanism" value="${sasl.mechanism}"/>
            </map>
        </constructor-arg>
    </bean>

    <!-- 2. Create the consumerFactory bean -->
    <bean id="consumerFactory"
          class="org.springframework.kafka.core.DefaultKafkaConsumerFactory" >
        <constructor-arg>
            <ref bean="consumerProperties" />
        </constructor-arg>
    </bean>

    <!-- 3. The listener bean -->
    <bean id="kafkaConsumerService" class="com.test.kafka.KafkaConsumerMessageListener" />

    <!-- 4. Container properties for the consumer -->
    <bean id="containerProperties" class="org.springframework.kafka.listener.ContainerProperties">
        <!-- topic -->
        <constructor-arg name="topics">
            <list>
                <!-- Topics to listen on; you can add several, and onMessage will receive messages from all of them -->
                <value>${kafka.alarm.topic}</value>
            </list>
        </constructor-arg>
        <property name="messageListener" ref="kafkaConsumerService" />
    </bean>
    <!-- 5. Concurrent message listener container -->
    <bean id="messageListenerContainer" class="org.springframework.kafka.listener.ConcurrentMessageListenerContainer" init-method="doStart" >
        <constructor-arg ref="consumerFactory" />
        <constructor-arg ref="containerProperties" />
        <property name="concurrency" value="${kafka.consumer.concurrency}" />
    </bean>



    <!-- If you don't want one class listening on several topics, duplicate steps 3-5 with new bean ids and receive the chosen topic in a separate listener class (a sketch of this second class follows below) -->
    <!-- Second consumer -->
    <!-- 3. The second listener bean -->
    <bean id="kafkaConsumerService2" class="com.test.kafka.KafkaConsumerListenser" />

    <!-- 4. Container properties for the second consumer -->
    <bean id="containerProperties2" class="org.springframework.kafka.listener.ContainerProperties">
        <!-- topic -->
        <constructor-arg name="topics">
            <list>
                <value>aaa</value>
            </list>
        </constructor-arg>
        <property name="messageListener" ref="kafkaConsumerService2" />
    </bean>
    <!-- 5. Second concurrent listener container; init-method="doStart" starts consuming when the context starts -->
    <bean id="messageListenerContainer2" class="org.springframework.kafka.listener.ConcurrentMessageListenerContainer" init-method="doStart" >
        <constructor-arg ref="consumerFactory" />
        <constructor-arg ref="containerProperties2" />
        <property name="concurrency" value="${kafka.consumer.concurrency}" />
    </bean>

</beans>
3. Send messages

Usage is the same as with the Java configuration:

    @Autowired
    KafkaTemplate kafkaTemplate;

    @RequestMapping(value = "/test")
    @ResponseBody
    public Object test(HttpServletRequest request1, HttpServletResponse response1) {
        kafkaTemplate.sendDefault("111111");
        kafkaTemplate.send("aaa","22222");
        System.out.println("kafka消息发送成功!");
        return "ok";
    }
4. Consume messages by implementing the onMessage method of the MessageListener interface

This listener receives messages from the topics configured in the XML:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.MessageListener;


public class KafkaConsumerMessageListener implements MessageListener<String, Object> {

    @Override
    public void onMessage(ConsumerRecord<String, Object> record) {
            System.out.println(" kafka接受到消息" + record.toString());
    }
}
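The second listener bean referenced in the XML (kafkaConsumerService2, class com.test.kafka.KafkaConsumerListenser, wired to topic aaa) follows the same pattern; a minimal sketch:

package com.test.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.MessageListener;

public class KafkaConsumerListenser implements MessageListener<String, Object> {

    @Override
    public void onMessage(ConsumerRecord<String, Object> record) {
        // Receives only messages from topic "aaa", as configured in containerProperties2
        System.out.println("kafka (topic aaa) received: " + record.value());
    }
}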
