Spring-Kafka: Producer and Consumer via XML Configuration

 1. Producer XML configuration

Step 1. producerProperties: the configuration map the producer needs;

Step 2. producerFactory: the producer factory bean, built from those properties;

Step 3. kafkaTemplate: the producer template, built from two constructor arguments, the producerFactory and the autoFlush flag. The XML wires exactly these two arguments, matching the two-argument constructor in the KafkaTemplate source.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
         http://www.springframework.org/schema/context
         http://www.springframework.org/schema/context/spring-context.xsd">
    <context:property-placeholder location="classpath*:application.properties" />

    <!-- Producer configuration parameters -->
    <!-- 1. producerProperties: the configuration map the producer needs -->
    <bean id="producerProperties" class="java.util.HashMap">
        <constructor-arg>
            <map>
                <entry key="bootstrap.servers" value="${bootstrap.servers}" />
                <entry key="retries" value="${retries}" />
                <entry key="batch.size" value="${batch.size}" />
                <entry key="linger.ms" value="${linger.ms}" />
                <entry key="buffer.memory" value="${buffer.memory}" />
                <entry key="acks" value="${acks}" />
                <entry key="key.serializer"
                       value="org.apache.kafka.common.serialization.StringSerializer" />
                <entry key="value.serializer"
                       value="org.apache.kafka.common.serialization.StringSerializer" />
            </map>
        </constructor-arg>
    </bean>


    <!-- The producerFactory bean used to create the kafkaTemplate -->
    <!-- 2. producerFactory: the producer factory, built from producerProperties -->
    <bean id="producerFactory"
          class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
        <constructor-arg>
            <ref bean="producerProperties" />
        </constructor-arg>
    </bean>

    <!-- The kafkaTemplate bean; inject it wherever you need to call template.send(...) -->
    <!-- 3. kafkaTemplate: built from the producerFactory and the autoFlush flag -->
    <bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
        <constructor-arg ref="producerFactory" />
        <constructor-arg name="autoFlush" value="true" />
        <property name="defaultTopic" value="default" />
    </bean>

</beans>

[Producer parameter configuration]: 

#============== kafka config: producer =======================
# broker cluster
bootstrap.servers=192.168.80.150:9092
# Consumer group ID (a consumer-side setting: within one group, each message is delivered to only one of the group's consumers)
group.id=test
# acks=all: send() completes only after all in-sync replicas have the record, the strongest delivery guarantee against data loss
acks=all
# Number of retries on send failure
retries=0
# Batch size in bytes: records sent to the same partition are merged into fewer requests, helping both client and broker throughput
batch.size=16384
# Upper bound on batching delay: after 1 ms a request is sent even if the batch is not full
linger.ms=1
# 32 MB send buffer
buffer.memory=33554432
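The producer settings above are what the XML's `producerProperties` HashMap bean ends up holding. A minimal plain-Java sketch of the same map (values copied from the properties file above; the class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ProducerConfigSketch {
    // Builds the same key/value pairs the <bean id="producerProperties"> map defines
    public static Map<String, Object> producerProperties() {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "192.168.80.150:9092");
        props.put("acks", "all");             // wait for all in-sync replicas
        props.put("retries", 0);              // no retries on failure
        props.put("batch.size", 16384);       // bytes per batch
        props.put("linger.ms", 1);            // max batching delay in ms
        props.put("buffer.memory", 33554432); // 32 MB send buffer
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```

Note that group.id is deliberately absent: it is a consumer-side setting and the producer ignores it.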

 2. Sending a message from the producer:

Step 1. Send the data to a given topic and partition with a key.

Step 2. Attach success and failure callbacks to the returned ListenableFuture.

package com.caox.kafka._03_spring_kafka_xml;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.FailureCallback;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SuccessCallback;

/**
 * Created by nazi on 2018/9/5.
 * @author nazi
 */
public class ProducerMain {
    public static void main(String[] argv) throws Exception {
        ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");
        KafkaTemplate<String, String> kafkaTemplate = context.getBean(KafkaTemplate.class);
        String key  = "test-key";
        String data = "this is a test message";
        ListenableFuture<SendResult<String, String>> listenableFuture = kafkaTemplate.send("topic-test4", 0, key, data);
        // callback on successful send
        SuccessCallback<SendResult<String, String>> successCallback = new SuccessCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                // success-path business logic
                System.out.println("success to send message !");
            }
        };
        // callback on failed send
        FailureCallback failureCallback = new FailureCallback() {
            @Override
            public void onFailure(Throwable ex) {
                // failure-path business logic
                ex.printStackTrace();
            }
        };
        listenableFuture.addCallback(successCallback, failureCallback);
    }



}

 3. Consumer XML configuration:

Step 1. consumerProperties -> consumerFactory: load the configuration and build the consumer factory;

Step 2. messageListener -> containerProperties: load the container configuration (topics) and attach the message listener;

Step 3. consumerFactory + containerProperties -> messageListenerContainer: combine the factory with the container configuration and listener to build a concurrent message listener container, which runs its doStart() init method on startup.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
         http://www.springframework.org/schema/context
         http://www.springframework.org/schema/context/spring-context.xsd">
    <context:property-placeholder location="classpath*:application.properties" />

    <!-- 1. Consumer configuration parameters -->
    <!-- consumerProperties -> consumerFactory: load the configuration and build the consumer factory -->
    <bean id="consumerProperties" class="java.util.HashMap">
        <constructor-arg>
            <map>
                <entry key="bootstrap.servers" value="${bootstrap.servers}" />
                <entry key="group.id" value="${group.id}" />
                <entry key="enable.auto.commit" value="${enable.auto.commit}" />
                <entry key="session.timeout.ms" value="${session.timeout.ms}" />
                <entry key="key.deserializer"
                       value="org.apache.kafka.common.serialization.StringDeserializer" />
                <entry key="value.deserializer"
                       value="org.apache.kafka.common.serialization.StringDeserializer" />
            </map>
        </constructor-arg>
    </bean>

    <!-- 2. The consumerFactory bean -->
    <bean id="consumerFactory"
          class="org.springframework.kafka.core.DefaultKafkaConsumerFactory" >
        <constructor-arg>
            <ref bean="consumerProperties" />
        </constructor-arg>
    </bean>

    <!-- 3. The consumer implementation bean (the business logic) -->
    <bean id="kafkaConsumerService" class="com.caox.kafka._03_spring_kafka_xml.KafkaConsumerServiceImpl3" />

    <!-- 4. Consumer container configuration -->
    <!-- messageListener -> containerProperties: container configuration (topics) plus the listener -->
    <bean id="containerProperties" class="org.springframework.kafka.listener.config.ContainerProperties">
        <!-- topics to subscribe to -->
        <constructor-arg name="topics">
            <list>
                <!--<value>${kafka.consumer.topic.credit.for.lease}</value>-->
                <!--<value>${loan.application.feedback.topic}</value>-->
                <!--<value>${templar.agreement.feedback.topic}</value>-->
                <!--<value>${templar.aggrement.active.feedback.topic}</value>-->
                <!--<value>${templar.aggrement.agreementRepaid.topic}</value>-->
                <value>${templar.aggrement.agreementWithhold.topic}</value>
                <!--<value>${templar.aggrement.agreementRepayRemind.topic}</value>-->
            </list>
        </constructor-arg>
        <property name="messageListener" ref="kafkaConsumerService" />
    </bean>

    <!-- 5. Concurrent message listener container, started via its doStart() method -->
    <!-- consumerFactory + containerProperties -> messageListenerContainer: build the container from the factory and the container configuration -->
    <bean id="messageListenerContainer" class="org.springframework.kafka.listener.ConcurrentMessageListenerContainer" init-method="doStart" >
        <constructor-arg ref="consumerFactory" />
        <constructor-arg ref="containerProperties" />
        <property name="concurrency" value="${concurrency}" />
    </bean>

</beans>

[Consumer parameter configuration]:  

#=============== consumer ===========================
# If true, the consumer's offsets are committed periodically in the background
enable.auto.commit=false
# Timeout used to detect consumer failures when using Kafka's group management
session.timeout.ms=15000
# Number of concurrent listener containers (should not exceed the topic's partition count, or the extra containers sit idle)
concurrency=3
templar.aggrement.agreementWithhold.topic=topic-test4
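The `${...}` placeholders in both XML files are resolved by `<context:property-placeholder>` from application.properties. Conceptually this is the same key/value parsing `java.util.Properties` does; a small stdlib sketch (the class name is illustrative, and the inline string stands in for the real file):

```java
import java.io.StringReader;
import java.util.Properties;

public class ConsumerPropsSketch {
    // Parses settings in the same application.properties format Spring's
    // placeholder resolver reads
    public static Properties load(String content) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(content));
        return props;
    }

    public static void main(String[] args) throws Exception {
        String content = "enable.auto.commit=false\n"
                + "session.timeout.ms=15000\n"
                + "concurrency=3\n"
                + "templar.aggrement.agreementWithhold.topic=topic-test4\n";
        Properties props = load(content);
        // The topic name the ${templar.aggrement.agreementWithhold.topic} placeholder resolves to
        System.out.println(props.getProperty("templar.aggrement.agreementWithhold.topic"));
    }
}
```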

 4. Consuming messages. Note: Options 2 and 3 must still implement MessageListener, otherwise the container fails with a parameter initialization exception. The reason: this setup wires the container in XML and invokes the bean through the MessageListener interface; without @EnableKafka (or <kafka:annotation-driven/>) the @KafkaListener annotation is never processed, so it is effectively inert here.

 4.1 [Option 1]: implement the MessageListener interface directly and override onMessage with the custom consumption logic.

package com.caox.kafka._03_spring_kafka_xml;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.MessageListener;

/**
 * Created by nazi on 2018/9/11.
 * @author nazi
 */
public class KafkaConsumerSerivceImpl implements MessageListener<String, String> {
    @Override
    public void onMessage(ConsumerRecord<String, String> data) {
        // dispatch on topic
        if ("topic-test4".equals(data.topic())) {
            // handling for topic-test4
            System.out.println("listen : " + " key:" + data.key() + " value: " + data.value());
        } else if ("topic-test5".equals(data.topic())) {
            // handling for topic-test5
        }
    }
}
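When many topics share one listener, the if/else chain above can be replaced by table-driven dispatch. A stdlib sketch (class and handler names are illustrative, not part of the original code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class TopicDispatch {
    // Maps topic name -> handler of (key, value); replaces the if/else chain
    private final Map<String, BiConsumer<String, String>> handlers = new HashMap<>();

    public void register(String topic, BiConsumer<String, String> handler) {
        handlers.put(topic, handler);
    }

    // Returns true if a handler for the topic was found and invoked
    public boolean dispatch(String topic, String key, String value) {
        BiConsumer<String, String> h = handlers.get(topic);
        if (h == null) {
            return false;
        }
        h.accept(key, value);
        return true;
    }
}
```

Inside onMessage this becomes a single `dispatch(data.topic(), data.key(), data.value())` call, and adding a topic no longer touches the listener body.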

 4.2 [Option 2]: put @KafkaListener on the method and set the topic (SpEL/placeholder expressions are supported). This makes it easy to split different topics into separate handlers with their own business logic (especially convenient when each handler has its own transaction).

package com.caox.kafka._03_spring_kafka_xml;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.MessageListener;

/**
 * Created by nazi on 2018/9/11.
 * @author nazi
 */

public class KafkaConsumerServiceImpl3 implements MessageListener<String,String> {
    @KafkaListener(topics = "${templar.aggrement.agreementWithhold.topic}")
    @Override
    public void onMessage(ConsumerRecord<String, String> stringStringConsumerRecord) {
        // consumption business logic
        System.out.println("listen 3 : " + " key:"+ stringStringConsumerRecord.key() + " value: " + stringStringConsumerRecord.value());
    }
}

 4.3 [Option 3]: put @KafkaListener on the class instead of the method, again with the topic set via a placeholder expression; the container still dispatches through the overridden MessageListener.onMessage.

package com.caox.kafka._03_spring_kafka_xml;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.MessageListener;

/**
 * Created by nazi on 2018/9/11.
 * @author nazi
 */
@KafkaListener(topics = "${templar.aggrement.agreementWithhold.topic}")
public class KafkaConsumerSerivceImpl2 implements MessageListener<String,String>{

    @Override
    public void onMessage(ConsumerRecord<String, String> data) {
        // consumption logic
        System.out.println("listen2 : " + " key:"+ data.key() + " value: " + data.value());
    }
}

[Supplement: sending and receiving an entity-class message]:

 [Producer]:

package com.caox.kafka._03_spring_kafka_xml;

import com.alibaba.fastjson.JSONObject;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.FailureCallback;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SuccessCallback;

/**
 * Created by nazi on 2018/9/5.
 * @author nazi
 */
public class ProducerMain {
    public static void main(String[] argv) throws Exception {
        ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");
        KafkaTemplate<String, String> kafkaTemplate = context.getBean(KafkaTemplate.class);
        String key  = "test-key";
        JSONObject body = new JSONObject();
        body.put("userId",110);
        body.put("name","阿丽塔");
        String data = body.toString();
        ListenableFuture<SendResult<String, String>> listenableFuture = kafkaTemplate.send("topic-test4", 0, key, data);
        // callback on successful send
        SuccessCallback<SendResult<String, String>> successCallback = new SuccessCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                // success-path business logic
                System.out.println("success to send message !");
            }
        };
        // callback on failed send
        FailureCallback failureCallback = new FailureCallback() {
            @Override
            public void onFailure(Throwable ex) {
                // failure-path business logic
                ex.printStackTrace();
            }
        };
        listenableFuture.addCallback(successCallback, failureCallback);
    }



}

 [Consumer]: 

package com.caox.kafka._03_spring_kafka_xml;

import com.caox.sharding.entity.User;
import lombok.extern.slf4j.Slf4j;
import net.sf.json.JSONObject;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.MessageListener;

/**
 * Created by nazi on 2018/9/11.
 * @author nazi
 */
@Slf4j
public class KafkaConsumerServiceImpl3 implements MessageListener<String,String> {
    @KafkaListener(topics = "${templar.aggrement.agreementWithhold.topic}")
    @Override
    public void onMessage(ConsumerRecord<String, String> stringStringConsumerRecord) {
        // consumption business logic: deserialize the JSON payload into a User
        String value = stringStringConsumerRecord.value();
        System.out.println("received value: " + value);
        JSONObject jsonObj = JSONObject.fromObject(value);
        User user = (User) JSONObject.toBean(jsonObj, User.class);
        System.out.println("listen 3 : " + " key:" + stringStringConsumerRecord.key() + " value: " + user.toString());
    }
    }
}
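The User class comes from com.caox.sharding.entity and is not shown in the post. A minimal hypothetical sketch compatible with the payload the producer builds (the field names userId and name are assumptions inferred from the producer's JSON, not the real class):

```java
// Hypothetical sketch of com.caox.sharding.entity.User; the real class is not
// shown in the post. Field names mirror the JSON the producer builds.
public class User {
    private Integer userId;
    private String name;

    public Integer getUserId() { return userId; }
    public void setUserId(Integer userId) { this.userId = userId; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override
    public String toString() {
        return "User{userId=" + userId + ", name='" + name + "'}";
    }
}
```

JSONObject.toBean requires a default constructor and standard getters/setters like these to populate the bean.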

[Send result log]: (screenshot omitted)