Setting up a Kafka environment on Windows: sending and consuming messages


1. Install and run ZooKeeper

Kafka depends on ZooKeeper, so we need to install and run ZooKeeper before starting Kafka.

1.1 Download the release archive: http://mirror.bit.edu.cn/apache/zookeeper/

1.2 Extract the archive (this article extracts it to C:\kafkainstall\zookeeper-3.4.10)

1.3 In C:\kafkainstall\zookeeper-3.4.10\conf, rename zoo_sample.cfg to zoo.cfg

1.4 Open zoo.cfg in a text editor

1.5 Set the dataDir and dataLogDir paths:

dataDir=C:\data\logs\zookeeper

dataLogDir=C:\data\logs\zookeeper

1.6 Add the following system environment variable: ZOOKEEPER_HOME = C:\kafkainstall\zookeeper-3.4.10

Path: append ;%ZOOKEEPER_HOME%\bin; to the existing value

1.7 Run ZooKeeper: open cmd and execute the zkserver command. If it starts without errors, ZooKeeper is installed and listening on its default port 2181.

To verify the installation, Shift+right-click inside C:\kafkainstall\zookeeper-3.4.10\bin, choose "Open command window here", and run:

zkCli.cmd -server 127.0.0.1:2181

2. Install and run Kafka

2.1 Download the release archive: http://kafka.apache.org/downloads.html

2.2 Extract the archive (this article extracts it to C:\kafkainstall\kafka_2.11-1.0.2)

2.3 Open C:\kafkainstall\kafka_2.11-1.0.2\config\server.properties

2.4 Change the value of log.dirs to log.dirs=C:\kafkainstall\kafka_2.11-1.0.2\logs\kafka. It is safer to write the path with forward slashes (log.dirs=C:/kafkainstall/kafka_2.11-1.0.2/logs/kafka), because a backslash starts an escape sequence in .properties files and can silently mangle Windows paths.
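The backslash caveat can be checked with plain Java, since the broker parses server.properties with Java's standard Properties loader (a stdlib-only sketch, no Kafka code involved):

```java
import java.io.StringReader;
import java.util.Properties;

public class PropertiesEscapeDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Simulates a server.properties line written with raw Windows backslashes
        props.load(new StringReader("log.dirs=C:\\kafkainstall\\logs"));
        // Properties treats '\' as an escape character and drops it
        System.out.println(props.getProperty("log.dirs")); // C:kafkainstalllogs

        Properties ok = new Properties();
        ok.load(new StringReader("log.dirs=C:/kafkainstall/logs"));
        System.out.println(ok.getProperty("log.dirs"));    // C:/kafkainstall/logs
    }
}
```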

2.5 The .sh scripts in C:\kafkainstall\kafka_2.11-1.0.2\bin are for Unix shells; the windows subfolder contains the .bat scripts for running under Windows.

2.6 Shift+right-click on empty space in the C:\kafkainstall\kafka_2.11-1.0.2 folder and choose "Open command window here"

2.7 Enter and run the following command to start Kafka:

.\bin\windows\kafka-server-start.bat .\config\server.properties

 

3. Topics

3.1 Create a topic

Shift+right-click in the C:\kafkainstall\kafka_2.11-1.0.2\bin\windows folder and open a command prompt, then run:

 

.\kafka-topics.bat --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test

3.2 List topics

Running the list command should now show the new topic:

.\kafka-topics.bat --list --zookeeper 127.0.0.1:2181

3.3 Delete a topic

.\kafka-topics.bat --delete --zookeeper 127.0.0.1:2181 --topic test

4. Start a producer

4.1 Shift+right-click in the C:\kafkainstall\kafka_2.11-1.0.2\bin\windows folder and open a command prompt, then run:

.\kafka-console-producer.bat --broker-list 127.0.0.1:9092 --topic test1

(Note that this uses topic test1 rather than the test topic created in 3.1; with the broker default auto.create.topics.enable=true, test1 is created automatically on first use.)

5. Start a consumer

5.1 Shift+right-click in the C:\kafkainstall\kafka_2.11-1.0.2\bin\windows folder and open a command prompt, then run:

 

.\kafka-console-consumer.bat --zookeeper 127.0.0.1:2181 --topic test1

Messages containing Chinese characters produced with kafka-console-producer.bat arrive at the consumer as mojibake.

Cause and workaround: the Windows command line defaults to the GBK code page (936); running chcp shows the current code page.

chcp 65001 switches the console to UTF-8, but with that code page the producer exits as soon as Chinese text is entered, so a complete console-side fix has not been found.
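The mojibake can be reproduced in plain Java, without Kafka: the GBK console hands the producer GBK-encoded bytes, which the consumer side then decodes as UTF-8. A minimal stdlib demonstration:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String original = "消息";                                      // a Chinese test message
        byte[] gbkBytes = original.getBytes(Charset.forName("GBK"));   // what a GBK console produces
        String garbled = new String(gbkBytes, StandardCharsets.UTF_8); // decoded as UTF-8 on the other side
        System.out.println(garbled);                   // replacement characters, not the original text
        System.out.println(garbled.equals(original));  // false
    }
}
```

Producing the message from code (as in the later sections) avoids the console entirely, since StringSerializer always encodes UTF-8.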

6. Kafka command reference

The commands below use the .sh scripts as run on Linux; on Windows use the .bat equivalents in bin\windows (see 2.5).

6.1 Queries

 

## Describe topics in the cluster

 bin/kafka-topics.sh --describe --zookeeper

## List topics

 bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --list

## List consumer groups (new consumer API, 0.9+)

bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --list

## Show consumption details for a consumer group (only for offsets stored in ZooKeeper)

bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper localhost:2181 --group test

## Show consumption details for a consumer group (0.9+)

bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --describe --group test-consumer-group

 

6.2 Producing and consuming

## Console producer

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

## Console consumer (old consumer, via ZooKeeper)

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test

## New producer (0.9+)

 bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties

## New consumer (0.9+)

 bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --new-consumer --from-beginning --consumer.config config/consumer.properties

## Advanced: read from a specific partition and offset

bin/kafka-simple-consumer-shell.sh --broker-list localhost:9092 --topic test --partition 0 --offset 1234 --max-messages 10

6.3 Preferred replica (leader) election

bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot

6.4 Built-in producer performance test

bin/kafka-producer-perf-test.sh --topic test --num-records 100 --record-size 1 --throughput 100  --producer-props bootstrap.ser
 
7. Spring application configuration

7.1 Maven dependencies (pom.xml)

<!-- Kafka integration -->
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-kafka</artifactId>
    <version>2.2.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>1.2.0.RELEASE</version>
</dependency>

7.2 Producer configuration

applicationContext-kafka-producer.xml

 

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="http://www.springframework.org/schema/beans"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Producer configuration properties -->
    <bean id="producerProperties" class="java.util.HashMap">
        <constructor-arg>
            <map>
                <entry key="bootstrap.servers" value="127.0.0.1:9092" />
                <entry key="retries" value="3" />
                <entry key="batch.size" value="1" />
                <entry key="linger.ms" value="2" />
                <entry key="buffer.memory" value="1212" />
                <entry key="acks" value="all" />
                <entry key="key.serializer"
                       value="org.apache.kafka.common.serialization.StringSerializer" />
                <entry key="value.serializer"
                       value="org.apache.kafka.common.serialization.StringSerializer" />
            </map>
        </constructor-arg>
    </bean>

    <!-- Producer factory used to create the KafkaTemplate -->
    <bean id="producerFactory"
          class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
        <constructor-arg>
            <ref bean="producerProperties" />
        </constructor-arg>
    </bean>

    <!-- KafkaTemplate bean; inject it wherever you need to send messages -->
    <bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
        <constructor-arg ref="producerFactory" />
        <constructor-arg name="autoFlush" value="true" />
        <property name="defaultTopic" value="test1" />
    </bean>
</beans>

 

7.3 Consumer configuration

applicationContext-kafka-consumer.xml

 

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="http://www.springframework.org/schema/beans"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- 1. Consumer configuration properties -->
    <bean id="consumerProperties" class="java.util.HashMap">
        <constructor-arg>
            <map>
                <entry key="bootstrap.servers" value="127.0.0.1:9092" />
                <entry key="group.id" value="test111" />
                <entry key="enable.auto.commit" value="true" />
                <entry key="session.timeout.ms" value="10000" />
                <entry key="key.deserializer"
                       value="org.apache.kafka.common.serialization.StringDeserializer" />
                <entry key="value.deserializer"
                       value="org.apache.kafka.common.serialization.StringDeserializer" />
            </map>
        </constructor-arg>
    </bean>

    <!-- 2. Create the consumerFactory bean -->
    <bean id="consumerFactory"
          class="org.springframework.kafka.core.DefaultKafkaConsumerFactory" >
        <constructor-arg>
            <ref bean="consumerProperties" />
        </constructor-arg>
    </bean>

    <!-- 3. The message listener implementation -->
    <bean id="kafkaTestMQConsumer" class="com.wlqq.etc.invoice.service.mq.KafkaTestMQConsumer" />

    <!-- 4. Listener container configuration -->
    <bean id="containerProperties" class="org.springframework.kafka.listener.config.ContainerProperties">
        <!-- topic -->
        <constructor-arg name="topics">
            <list>
                <value>test1</value>
            </list>
        </constructor-arg>
        <property name="messageListener" ref="kafkaTestMQConsumer" />
    </bean>
    <!-- 5. Concurrent message listener container; started via its doStart() method -->
    <bean id="messageListenerContainer" class="org.springframework.kafka.listener.ConcurrentMessageListenerContainer" init-method="doStart" >
        <constructor-arg ref="consumerFactory" />
        <constructor-arg ref="containerProperties" />
    </bean>
</beans>

 

7.4 Sending a message
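With the kafkaTemplate bean from the producer configuration above, sending is a one-line call. A minimal sketch (the service class name is illustrative; it assumes the spring-kafka dependency from the pom section and the bean's defaultTopic of test1):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class KafkaSendService {

    // the kafkaTemplate bean defined in applicationContext-kafka-producer.xml
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String message) {
        // sends to the defaultTopic (test1) configured on the bean
        kafkaTemplate.sendDefault(message);
    }

    public void send(String topic, String message) {
        kafkaTemplate.send(topic, message);
    }
}
```

Because the bean is configured with autoFlush=true, each send is flushed immediately, which is convenient for testing but costs throughput.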

 

7.5 Consumer implementation class

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.listener.MessageListener;
import org.springframework.stereotype.Component;

@Component("KafkaTestMQConsumer")
public class KafkaTestMQConsumer implements MessageListener<Integer, String> {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    @Override
    public void onMessage(ConsumerRecord<Integer, String> record) {
        // The original try/catch for UnsupportedEncodingException was dead code:
        // nothing in this method throws it, so it would not even compile.
        logger.info("Received Kafka test message: data={}", record);
    }
}
 
 


8. Sending messages programmatically

8.1 Producer service

package com.wlqq.etc.reptile.service.mq;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import java.util.Properties;
import java.util.UUID;

@Service
public class KafkaService {

   private static final Logger logger = LoggerFactory.getLogger(KafkaService.class);

   @Value("${kafka.broker.list}")
   private String brokerList;
  
   @Value("${kafka.producer.type}")
   private String producerType;
  
   @Value("${kafka.queue.buffering.max.ms}")
   private String queueBufferMaxTime;
  
   @Value("${kafka.batch.num.messages}")
   private String messageBatchSize;
  
   @Value("${kafka.queue.buffering.max.messages}")
   private String queueBufferMaxMessages;
  
   @Value("${kafka.queue.enqueue.timeout.ms}")
   private String enqueueTimeout;

   private KafkaProducer<String, String> producer;

   @PostConstruct
   private void init() {
      logger.info("begin to init kafka producer");
      logger.info("kafka.metadata.broker.list" + "=" + brokerList);
      logger.info("kafka.producer.type" + "=" + producerType);
      logger.info("kafka.queue.buffering.max.ms" + "=" + queueBufferMaxTime);
      logger.info("kafka.batch.num.messages" + "=" + messageBatchSize);
      logger.info("kafka.queue.buffering.max.messages" + "=" + queueBufferMaxMessages);
      logger.info("kafka.queue.enqueue.timeout.ms" + "=" + enqueueTimeout);
     
     
      Properties props = new Properties();
      props.put("bootstrap.servers", brokerList);
      props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      props.put("acks", "1"); // "request.required.acks" is the old 0.8 name for this setting
      // The keys below are legacy Scala-producer (0.8) settings. The new KafkaProducer
      // ignores unknown keys (it only logs a warning); their closest modern equivalents
      // are linger.ms, batch.size, buffer.memory and max.block.ms.
      props.put("producer.type", producerType);
      props.put("queue.buffering.max.ms", queueBufferMaxTime);
      props.put("batch.num.messages", messageBatchSize);
      props.put("queue.buffering.max.messages", queueBufferMaxMessages);
      props.put("queue.enqueue.timeout.ms", enqueueTimeout);

      producer = new KafkaProducer<String, String>(props);
      logger.info("end to init kafka producer");
   }
  
   public void sendMessage(String topic, String message) {
      ProducerRecord<String, String> data = new ProducerRecord<String, String>(topic, UUID.randomUUID().toString(), message);
        producer.send(data);
   }

   @PreDestroy
   private void destroy() {
      producer.close();
   }
}
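sendMessage above uses a random UUID as the record key. The key determines the target partition: the producer hashes it and takes the result modulo the partition count, so random keys spread records evenly across partitions. A stdlib-only sketch of the principle (Kafka's DefaultPartitioner actually uses murmur2 over the serialized key bytes, not String.hashCode):

```java
import java.util.UUID;

public class PartitionSketch {
    // Illustrative only: hash the key, then take it modulo the partition count.
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        int numPartitions = 3;
        for (int i = 0; i < 5; i++) {
            String key = UUID.randomUUID().toString();
            System.out.println(key + " -> partition " + partitionFor(key, numPartitions));
        }
    }
}
```

A consequence worth knowing: records with the same key always land in the same partition, which is what gives per-key ordering; random UUID keys trade that ordering away for even load.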

9. Cluster setup reference

9.1 https://www.cnblogs.com/lentoo/p/7785004.html

 

 

 
