A First Look at Using Kafka from Java


Environment:

1. CentOS 7 x64 (with the JDK already installed)

2. Kafka version: kafka_2.10-0.10.2.1

 

1. Downloading Kafka

Official download page: http://kafka.apache.org/downloads

I went with the latest release at the time, kafka_2.10-0.10.2.1 (Scala 2.10, Kafka 0.10.2.1).

Mirror link: https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.2.1/kafka_2.10-0.10.2.1.tgz

After the download finishes, extract the archive:

 

[ping@Hadoop kafka]$ ll
total 37524
-rw-r--r--. 1 ping ping 38424081 Jun 10 18:28 kafka_2.10-0.10.2.1.tgz
[ping@Hadoop kafka]$ tar -zxvf kafka_2.10-0.10.2.1.tgz 

 

2. Editing the configuration files

[ping@Hadoop config]$ pwd
/home/ping/kafka/kafka_2.10-0.10.2.1/config
[ping@Hadoop config]$ ll
total 60
-rw-r--r--. 1 ping ping  906 Apr 21 09:23 connect-console-sink.properties
-rw-r--r--. 1 ping ping  909 Apr 21 09:23 connect-console-source.properties
-rw-r--r--. 1 ping ping 2760 Apr 21 09:23 connect-distributed.properties
-rw-r--r--. 1 ping ping  883 Apr 21 09:23 connect-file-sink.properties
-rw-r--r--. 1 ping ping  881 Apr 21 09:23 connect-file-source.properties
-rw-r--r--. 1 ping ping 1074 Apr 21 09:23 connect-log4j.properties
-rw-r--r--. 1 ping ping 2061 Apr 21 09:23 connect-standalone.properties
-rw-r--r--. 1 ping ping 1199 Apr 21 09:23 consumer.properties
-rw-r--r--. 1 ping ping 4369 Apr 21 09:23 log4j.properties
-rw-r--r--. 1 ping ping 1900 Apr 21 09:23 producer.properties
-rw-r--r--. 1 ping ping 5631 Apr 21 09:23 server.properties
-rw-r--r--. 1 ping ping 1032 Apr 21 09:23 tools-log4j.properties
-rw-r--r--. 1 ping ping 1023 Apr 21 09:23 zookeeper.properties
[ping@Hadoop config]$ 

For a quick demo you don't need to change any configuration at all. If you do need to adapt the setup to a different environment, the changes below are the main ones to make.

 

2.1 The ZooKeeper configuration file

Kafka depends on ZooKeeper for coordination; here I stick with the default configuration.

The corresponding configuration file is zookeeper.properties.

 

2.2 server.properties

The broker's main configuration is set in server.properties.

The three settings to focus on in this file are:

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

 

 

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181


# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

The file itself also documents each setting in some detail.

For the demo I only changed listeners=PLAINTEXT://:9092 to the host's own address, e.g. listeners=PLAINTEXT://192.168.0.95:9092.

2.3 producer.properties

As the name suggests, this file holds producer-side (message publishing) configuration.

The setting of interest: bootstrap.servers=localhost:9092

 

2.4 consumer.properties

The counterpart of producer.properties, holding consumer-side configuration.

Settings of interest:

# Zookeeper connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zookeeper.connect=127.0.0.1:2181


# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


#consumer group id  
group.id=test-consumer-group


#consumer timeout
#consumer.timeout.ms=5000

 

3. Starting Kafka

3.1 Starting ZooKeeper

As Kafka's coordinator, ZooKeeper has to be running before Kafka itself starts.

Run the following script from the bin directory, passing the path of the configuration file:

bin/zookeeper-server-start.sh config/zookeeper.properties 

 

[ping@Hadoop kafka_2.10-0.10.2.1]$ pwd
/home/ping/kafka/kafka_2.10-0.10.2.1
[ping@Hadoop kafka_2.10-0.10.2.1]$ ll
total 52
drwxr-xr-x. 3 ping ping  4096 Apr 21 09:24 bin
drwxr-xr-x. 2 ping ping  4096 Jun 10 19:45 config
drwxr-xr-x. 2 ping ping  4096 Jun 10 19:17 libs
-rw-r--r--. 1 ping ping 28824 Apr 21 09:23 LICENSE
drwxrwxr-x. 2 ping ping  4096 Jun 10 19:46 logs
-rw-r--r--. 1 ping ping   336 Apr 21 09:23 NOTICE
drwxr-xr-x. 2 ping ping    46 Apr 21 09:24 site-docs
[ping@Hadoop kafka_2.10-0.10.2.1]$ bin/zookeeper-server-start.sh config/zookeeper.properties &

 

3.2 Starting the Kafka server

 

The Kafka server is started with the following script:

 bin/kafka-server-start.sh config/server.properties

 

[ping@Hadoop kafka_2.10-0.10.2.1]$ bin/kafka-server-start.sh config/server.properties

 

3.3 Creating a topic

 

To produce and consume messages you need to specify a topic, which works something like a message group:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

The trailing test is the topic name.

 

[ping@Hadoop kafka_2.10-0.10.2.1]$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".

To list the topics that have been created so far, use:

 bin/kafka-topics.sh --list --zookeeper localhost:2181

For example:

[ping@Hadoop kafka_2.10-0.10.2.1]$  bin/kafka-topics.sh --list --zookeeper localhost:2181
test

4. Testing message production and consumption

4.1 Starting a message producer

The following command specifies which Kafka server and topic the produced messages will go to:

[ping@Hadoop kafka_2.10-0.10.2.1]$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test 

Once it is running, the console blocks and waits for input; each line you type is submitted to the Kafka server.

For example:

 

[ping@Hadoop kafka_2.10-0.10.2.1]$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test 
123
abc
haha


After each newline, the message is sent to Kafka, where it waits to be consumed.

4.2 Starting a message consumer

Consume the messages with the following command:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

It specifies which ZooKeeper instance coordinates the cluster and which topic to read.

Running the command consumes the messages on the test topic, as shown below:

 

[ping@Hadoop kafka_2.10-0.10.2.1]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
123
abc
haha

 

 

 

5. Calling Kafka from a Java client

 

A running Kafka setup occupies the ports below; make sure both are listening and that they are exposed on the right network interface:

tcp6       0      0 :::9092                 :::*                    LISTEN      33492/java        

tcp6       0      0 :::2181                 :::*                    LISTEN      33123/java   
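
Before wiring up Spring, a quick end-to-end check with the raw kafka-clients API (which spring-kafka pulls in transitively) can be useful. A minimal sketch, assuming the broker listens on the 192.168.0.95:9092 address configured earlier and the test topic exists; the class and package names are mine:

package com.haiyang.check; // hypothetical package, not part of the original project

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PlainProducerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.0.95:9092"); // assumption: address from server.properties
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        // Block on the returned Future so a connection problem surfaces immediately
        producer.send(new ProducerRecord<String, String>("test", "hello from plain Java")).get();
        producer.close();
    }
}

If this returns without an exception, the message should show up in the console consumer from section 4.2.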

 

5.1 A Spring Boot producer/consumer test

Add the spring-kafka Maven dependency to the Spring Boot project (with a Spring Boot 1.5+ parent the version is managed for you; otherwise specify a <version> explicitly):
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>



Add the following to application.yml (these are custom keys read via @Value in the classes below, not Spring Boot's built-in spring.kafka.* properties):

 

haiyang:
  kafka:
    binder:
      brokers: 192.168.31.222:9092
      zk-nodes: 192.168.31.222:2181
    group: test-group

 

Create KafkaProducersConfig:
package com.haiyang.config;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

import java.util.HashMap;
import java.util.Map;

@Configuration
@EnableKafka
public class KafkaProducersConfig {

    @Value("${haiyang.kafka.binder.brokers}")
    private String brokers;

    @Bean("kafkaTemplate")
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<String, String>(producerFactory());
    }

    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        // Batch up to 4 KB per partition and linger 1 ms to let small batches fill
        properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 4096);
        properties.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        // Total memory available for buffering records not yet sent
        properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 40960);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<String, String>(properties);
    }
}
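
With the template registered, any bean can inject it and publish messages. A minimal usage sketch (MessageSender is a hypothetical class, not part of the original post; send() in spring-kafka returns a ListenableFuture, so the callback reports the outcome asynchronously):

package com.haiyang.service; // hypothetical package

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.util.concurrent.ListenableFutureCallback;

@Service
public class MessageSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String message) {
        // send() is asynchronous; the callback reports success or failure
        kafkaTemplate.send("test", message).addCallback(
                new ListenableFutureCallback<SendResult<String, String>>() {
                    @Override
                    public void onSuccess(SendResult<String, String> result) {
                        System.out.println("sent, offset=" + result.getRecordMetadata().offset());
                    }

                    @Override
                    public void onFailure(Throwable ex) {
                        System.err.println("send failed: " + ex.getMessage());
                    }
                });
    }
}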

 

Create KafkaConsumerConfig:
package com.haiyang.config;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

import java.util.HashMap;
import java.util.Map;

@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    @Value("${haiyang.kafka.binder.brokers}")
    private String brokers;

    @Value("${haiyang.kafka.group}")
    private String group;

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<String, String>();
        factory.setConsumerFactory(consumerFactory());
        // Run four consumer threads; each consumes at most one partition at a time
        factory.setConcurrency(4);
        factory.getContainerProperties().setPollTimeout(4000);
        return factory;
    }

    @Bean
    public KafkaListeners kafkaListeners() {
        // Register the listener class so its @KafkaListener methods are picked up
        return new KafkaListeners();
    }

    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        // Auto-commit is off, so the listener container commits offsets itself
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
        properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, group);
        // Start from the latest offset when the group has no committed position
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        return new DefaultKafkaConsumerFactory<String, String>(properties);
    }
}
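
For comparison, the same consumption can be written without Spring as a plain kafka-clients poll loop. A minimal sketch (class name and group id are illustrative; here the client commits offsets itself, unlike the Spring container above):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PlainConsumerCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.31.222:9092"); // assumption: broker from application.yml
        props.put("group.id", "test-consumer-check"); // separate group with its own offsets
        props.put("enable.auto.commit", "true");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Collections.singletonList("test"));
        try {
            while (true) {
                // poll(long) is the 0.10.x signature; newer clients take a Duration
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        } finally {
            consumer.close();
        }
    }
}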

 

Create KafkaListeners

This class listens on the topic and consumes the incoming messages:
package com.haiyang.config;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;

import java.util.Optional;

public class KafkaListeners {

    // Invoked by the listener container for every record on the "test" topic
    @KafkaListener(topics = {"test"})
    public void testListener(ConsumerRecord<?, ?> record) {
        Optional<?> message = Optional.ofNullable(record.value());
        if (message.isPresent()) {
            System.out.println("get message from kafka: " + message.get());
        }
    }
}


This completes a basic Spring Kafka setup. Note that KafkaListeners is exposed as a @Bean in KafkaConsumerConfig; that registration is what lets the container discover its @KafkaListener method.

 

A test controller for sending messages

To make testing easy, create one more controller that publishes messages:

 

package com.haiyang.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class FeignController {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private static int index = 0;

    @RequestMapping("/testKafka")
    public void testKafka(String message) {
        // Send the supplied message, or a generated one when the parameter is absent
        kafkaTemplate.send("test", message != null ? message : "haha" + index++);
    }
}

 

 

5.2 Running the test

package com.haiyang;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringbootWithKafkaApplication {

   public static void main(String[] args) {
      SpringApplication.run(SpringbootWithKafkaApplication.class, args);
   }
}

 

Run the Spring Boot application class, then trigger a few messages from the browser (assuming the default server port, a URL like http://localhost:8080/testKafka?message=hello works); the listener prints each message to the console.

 

From this quick trial, driving Kafka from Spring Boot is pleasantly simple and convenient.
