Spring Boot: configuring spring-kafka and the native Kafka client

spring-kafka

When setting up Kafka, pay attention to version compatibility. Let's first look at connecting Spring Boot to Kafka through spring-kafka.

1. Add the pom dependencies

<dependencies>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
	<!--<version>1.5.8.RELEASE</version>-->
</dependency>
<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-clients</artifactId>
	<version>0.10.2.0</version>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.kafka</groupId>
	<artifactId>spring-kafka</artifactId>
</dependency>
</dependencies>

2. Configuration parameters

## Consumer configuration parameters (start) ##
# If enable.auto.commit is true, the frequency (in milliseconds) at which consumer
# offsets are auto-committed to Kafka. Default 5000.
spring.kafka.consumer.auto-commit-interval

# What to do when there is no initial offset in Kafka, or the current offset no
# longer exists on the server. Default latest, i.e. automatically reset the offset
# to the latest offset. Valid values: latest, earliest, none.
spring.kafka.consumer.auto-offset-reset=latest

# Comma-separated list of ip:port pairs used to establish the initial connection
# to the Kafka cluster.
spring.kafka.consumer.bootstrap-servers

# ID passed to the server when making requests, used for server-side logging.
spring.kafka.consumer.client-id

# If true, the consumer's offsets are periodically committed in the background.
# Default true.
spring.kafka.consumer.enable-auto-commit=true

# Maximum time (ms) the server blocks before answering a fetch request if there is
# not enough data to satisfy fetch.min.bytes. Default 500.
spring.kafka.consumer.fetch-max-wait

# Minimum amount of data, in bytes, the server returns for a fetch request. Default 1.
spring.kafka.consumer.fetch-min-size

# Unique string identifying the consumer group this consumer belongs to.
spring.kafka.consumer.group-id

# Heartbeat interval, default 3000 ms.
spring.kafka.consumer.heartbeat-interval

# Deserializer class for keys.
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer

# Deserializer class for values.
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

# Maximum number of records returned in a single call to poll(). Default 500.
spring.kafka.consumer.max-poll-records
## Consumer configuration parameters (end) ##

## Producer configuration parameters (start) ##
# Number of acknowledgments the producer requires the leader to receive before
# considering a request complete; controls the durability of sent records.
# acks=0: the producer does not wait for any acknowledgment from the server; the
#   record is added to the socket buffer and considered sent immediately. There is
#   no guarantee the server has received the record, the retries setting has no
#   effect, and the offset returned for each record is always -1.
# acks=1: the leader writes the record to its local log and responds without waiting
#   for full acknowledgment from all replicas, so data can be lost before it is
#   replicated to all replica servers.
# acks=all: the leader waits for the full set of in-sync replicas to acknowledge
#   the record.
spring.kafka.producer.acks=1

# When multiple records are sent to the same partition, the producer batches them
# together into fewer requests. Default 16384 bytes.
spring.kafka.producer.batch-size=16384

spring.kafka.producer.bootstrap-servers

# Total bytes of memory the producer can use to buffer records waiting to be sent
# to the server. Default 33554432.
spring.kafka.producer.buffer-memory

spring.kafka.producer.client-id

# Compression type for all data generated by the producer. Accepts the standard
# compression codecs (gzip, snappy, lz4), as well as uncompressed and producer
# (keep the original codec set by the producer). Default producer.
spring.kafka.producer.compression-type=producer

spring.kafka.producer.key-serializer

spring.kafka.producer.value-serializer

# Number of retries on failure.
spring.kafka.producer.retries
## Producer configuration parameters (end) ##

## Listener configuration parameters ##
# AckMode of the listener; only takes effect when enable.auto.commit is false
# (see the sketch below).
spring.kafka.listener.ack-mode

# Number of threads to run in the listener containers.
spring.kafka.listener.concurrency

# Timeout (ms) used when polling the consumer.
spring.kafka.listener.poll-timeout

# When ackMode is COUNT or COUNT_TIME, the number of records between offset commits.
spring.kafka.listener.ack-count

# When ackMode is COUNT or COUNT_TIME, the time (ms) between offset commits.
spring.kafka.listener.ack-time
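
To illustrate how enable-auto-commit and ack-mode interact, here is a minimal sketch of a manual-acknowledgment listener. It assumes spring.kafka.consumer.enable-auto-commit=false and spring.kafka.listener.ack-mode=MANUAL_IMMEDIATE (or MANUAL); the class name ManualAckListener is only an example, the topic test is the one used throughout this article.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class ManualAckListener {

    // Requires enable-auto-commit=false and ack-mode=MANUAL_IMMEDIATE (or MANUAL),
    // otherwise the Acknowledgment parameter is not provided by the container.
    @KafkaListener(topics = {"test"})
    public void listen(String content, Acknowledgment ack) {
        // process the message, then commit the offset explicitly
        System.out.println("Received: " + content);
        ack.acknowledge();
    }
}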

Configuration example

kafka.consumer.zookeeper.connect=zookeeper-ip:2181
kafka.consumer.servers=kafka-ip:9092
kafka.consumer.enable.auto.commit=true
kafka.consumer.session.timeout=6000
kafka.consumer.auto.commit.interval=100
kafka.consumer.auto.offset.reset=latest
kafka.consumer.topic=test
kafka.consumer.group.id=test
kafka.consumer.concurrency=10
 
kafka.producer.servers=kafka-ip:9092
kafka.producer.retries=0
kafka.producer.batch.size=4096
kafka.producer.linger=1
kafka.producer.buffer.memory=40960
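
Note that the kafka.consumer.* and kafka.producer.* keys above are custom property names, bound with @Value in the configuration classes below; they are not the spring.kafka.* keys from section 2. If you relied on Spring Boot's Kafka auto-configuration instead, a roughly equivalent application.properties would look like the following sketch (host names and values are placeholders taken from the example above):

spring.kafka.bootstrap-servers=kafka-ip:9092
spring.kafka.consumer.group-id=test
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
spring.kafka.consumer.auto-offset-reset=latest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.producer.retries=0
spring.kafka.producer.batch-size=4096
spring.kafka.producer.buffer-memory=40960
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.listener.concurrency=10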

3. Add a KafkaConsumer configuration class

package com.databps.bigdaf.admin.config;
 
import com.databps.bigdaf.admin.manager.HomePageManager;
import com.databps.bigdaf.admin.vo.HomePageVo;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
 
import java.util.HashMap;
import java.util.Map;
/**
 * @author haipeng
 * @create 17-11-2 11:39 AM
 */
@Configuration
@EnableKafka
public class KafkaConsumerConfig {
    @Value("${kafka.consumer.servers}")
    private String servers;
    @Value("${kafka.consumer.enable.auto.commit}")
    private boolean enableAutoCommit;
    @Value("${kafka.consumer.session.timeout}")
    private String sessionTimeout;
    @Value("${kafka.consumer.auto.commit.interval}")
    private String autoCommitInterval;
    @Value("${kafka.consumer.group.id}")
    private String groupId;
    @Value("${kafka.consumer.auto.offset.reset}")
    private String autoOffsetReset;
    @Value("${kafka.consumer.concurrency}")
    private int concurrency;
    @Autowired
    private HomePageManager homePageManager;
    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(concurrency);
        factory.getContainerProperties().setPollTimeout(1500);
        return factory;
    }
 
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }
 
 
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
//        propsMap.put("zookeeper.connect", "master1.hdp.com:2181,master2.hdp.com:2181,slave1.hdp.com:2181");
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, enableAutoCommit);
        propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, autoCommitInterval);
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeout);
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
        return propsMap;
    }
}

4. Add a KafkaProducer configuration class

package com.databps.bigdaf.admin.config;
 
 
import java.util.HashMap;
import java.util.Map;
 
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
/**
 * @author haipeng
 * @create 17-11-2 11:37 AM
 */
@Configuration
@EnableKafka
public class KafkaProducerConfig {
    @Value("${kafka.producer.servers}")
    private String servers;
    @Value("${kafka.producer.retries}")
    private int retries;
    @Value("${kafka.producer.batch.size}")
    private int batchSize;
    @Value("${kafka.producer.linger}")
    private int linger;
    @Value("${kafka.producer.buffer.memory}")
    private int bufferMemory;
 
 
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        props.put(ProducerConfig.RETRIES_CONFIG, retries);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, batchSize);
        props.put(ProducerConfig.LINGER_MS_CONFIG, linger);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, bufferMemory);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }
 
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }
 
    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<String, String>(producerFactory());
    }
 
}

5. Producer call flow

(1) Inject the KafkaTemplate

  @Autowired
  private KafkaTemplate<String, String> kafkaTemplate;

(2) Send a message

    AuditVo auditVo=new AuditVo();
    long sortData=Long.parseLong(DateUtils.getNowDateTime());
    auditVo.setId("sdfdf");
    auditVo.setCmpyId(cmpyId);
    auditVo.setUser("whp");
    auditVo.setPluginIp("192.168.1.53");
    auditVo.setAccessTime(DateUtils.getNowDateTime());
    auditVo.setAccessType("WRITE");
    auditVo.setAction("write");
    auditVo.setAccessResult("success");
    auditVo.setServiceType("hbase");
    auditVo.setResourcePath("/whp");
    Gson gson=new Gson();
    kafkaTemplate.send("test", gson.toJson(auditVo));

6. Consuming: just add the @KafkaListener annotation to a method.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    @KafkaListener(topics = {"test"})
    public void processMessage(String content) {
        // print the consumed message payload
        System.out.println("Message consumed: " + content);
    }
}
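
If you need the record metadata (partition, offset, key) or want to turn the JSON payload back into the AuditVo sent in section 5, the listener method can also take the whole ConsumerRecord. A minimal sketch; AuditVo and its package come from the author's project and its import is omitted here, the Gson usage mirrors the producer side:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import com.google.gson.Gson;

@Component
public class AuditKafkaConsumer {

    private final Gson gson = new Gson();

    @KafkaListener(topics = {"test"})
    public void processRecord(ConsumerRecord<String, String> record) {
        // deserialize the JSON payload produced in section 5 (AuditVo is the author's class)
        AuditVo auditVo = gson.fromJson(record.value(), AuditVo.class);
        System.out.println("Consumed from partition " + record.partition()
                + " offset " + record.offset() + ": " + auditVo);
    }
}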

Using the native Kafka Java API

package com.example.demo.kafka;
 
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.junit.Test;
 
 
public class kafkaConsumer {
 
  private String topic="test";
 
  @Test
  public void producer() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "master1.hdp.com:6667");
    props.put("acks", "all");             // ack mode; "all" waits for every replica to commit, the slowest option
    props.put("retries", 0);              // whether to retry on failure; retries can produce duplicate records
    props.put("batch.size", 16384);       // batch buffer size per partition
    props.put("linger.ms", 1);            // how long to wait when the buffer is not full; 1 adds up to 1 ms of send latency
    props.put("buffer.memory", 33554432); // total memory the producer can use for buffering
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    Producer<String, String> producer = new KafkaProducer<>(props);
    producer.send(new ProducerRecord<String, String>(topic, "", Integer.toString(1)));
    producer.close();
  }
 
  private ConsumerConnector consumer;

  @Test
  public void kafkaConsumer() {
    Properties props = new Properties();
    // ZooKeeper connection
    props.put("zookeeper.connect", "master1.hdp.com:2181,master2.hdp.com:2181,slave1.hdp.com:2181");
    // group.id identifies the consumer group
    props.put("group.id", "jd-group");
    // ZooKeeper session timeout
    props.put("zookeeper.session.timeout.ms", "4000");
    props.put("zookeeper.sync.time.ms", "200");
    props.put("auto.commit.interval.ms", "1000");
    props.put("auto.offset.reset", "largest");
    // serializer class
    props.put("serializer.class", "kafka.serializer.StringEncoder");

    ConsumerConfig config = new ConsumerConfig(props);
    consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);

    Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
    topicCountMap.put("test", new Integer(1));

    StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
    StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());
    Map<String, List<KafkaStream<String, String>>> consumerMap =
        consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
    KafkaStream<String, String> stream = consumerMap.get("test").get(0);
    ConsumerIterator<String, String> it = stream.iterator();
    while (it.hasNext()) {
      System.out.println(it.next().message());
    }
  }
}
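
The consumer above uses the old ZooKeeper-based high-level consumer (kafka.consumer.*). With kafka-clients 0.10.x you can also use the newer org.apache.kafka.clients.consumer.KafkaConsumer, which talks to the brokers directly; the ConsumerRecord/ConsumerRecords imports above belong to that API. A minimal sketch, reusing the broker address and group id from the examples above:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewApiConsumer {

  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "master1.hdp.com:6667"); // same broker as the producer example
    props.put("group.id", "jd-group");
    props.put("enable.auto.commit", "true");
    props.put("auto.commit.interval.ms", "1000");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList("test"));
    try {
      while (true) {
        // poll(long) is the 0.10.x signature; newer clients take a Duration
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
          System.out.printf("partition=%d, offset=%d, value=%s%n",
              record.partition(), record.offset(), record.value());
        }
      }
    } finally {
      consumer.close();
    }
  }
}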

 

Native MQ (message queue) consumption means integrating an MQ service directly into the Spring Boot application and handling messages from the queue through Spring Boot's support. This usually involves the following steps:

1. Add the dependency: add the MQ client library to the project's pom.xml, e.g. for RabbitMQ, Apache Kafka or Redis.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId> <!-- for RabbitMQ; use the corresponding starter for other MQs -->
</dependency>

2. Configure the connection: set the MQ host, port, username, password, etc. in application.yml or application.properties.

spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: guest
    password: guest

3. Create a consumer component: write a Java class that implements MessageListener or uses the corresponding MQ API to receive and process messages.

import java.nio.charset.StandardCharsets;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class MyQueueConsumer {

    @RabbitListener(queues = "myQueue")
    public void consumeMessage(Message message) {
        String content = new String(message.getBody(), StandardCharsets.UTF_8);
        // process the received message
    }
}

4. Start the listener: annotate the Spring Boot entry class with @SpringBootApplication (which already includes @EnableAutoConfiguration).

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}