Kafka installation, deployment, and usage

Overview

  • http://kafka.apache.org/
  • A ZooKeeper-based distributed messaging system
    ps: these days it is mostly used as message middleware. Learning one such system well is enough, unless you develop middleware professionally.

Features

  • High throughput
  • High performance
  • Real-time
  • Reliable

Installation and deployment

  • Download the packages
    Kafka and ZooKeeper, both available from their official sites.
    Then edit the Kafka broker configuration file config/server.properties as needed (see the sketch below).
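A minimal sketch of the entries typically adjusted in config/server.properties for a single local broker (illustrative values, assumed rather than taken from this setup):

broker.id=0
listeners=PLAINTEXT://127.0.0.1:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181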

Startup

  • Start ZooKeeper
    cd into ZooKeeper's bin directory and run:
    ./zkServer.sh start
    Verify with jps (a JDK command that lists the PIDs of all running Java processes); a QuorumPeerMain process should be listed.

  • Start Kafka (the kafka broker)

cd into Kafka's bin directory and run:

./kafka-server-start.sh /Users/cuihailong/Downloads/tools/kafka_2.13-2.5.0/config/server.properties

When the startup log ends with a line like "[KafkaServer id=0] started", the broker is up.

Testing

  • Create a topic
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hello-kafka

  • Check that the topic was created
./bin/kafka-topics.sh --list --zookeeper localhost:2181

  • Start a console consumer
 ./bin/kafka-console-consumer.sh --topic hello-kafka  --bootstrap-server 127.0.0.1:9092 --from-beginning

If the command connects and sits waiting for records, the consumer is listening.

  • Start a console producer

./bin/kafka-console-producer.sh  --topic hello-kafka --bootstrap-server 127.0.0.1:9092

Type a message at the producer prompt; the consumer terminal should print it.

Kafka basics

What to cover: basic usage + common configuration + internals (at an overview level)

Basic concepts

  • Topic: a logical concept, generally made up of 1-n partitions (describes a business scenario)
  • Partition: the actual unit of message storage
  • Producer: the party that produces messages
  • Consumer: the party that consumes messages

Test scenario: drive the same operations from a Java client.

Java Client

build.gradle

implementation('org.apache.kafka:kafka-clients:2.5.0')
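For context, a minimal build.gradle dependencies sketch that also pulls in the test and utility libraries used by the snippets below (JUnit for @Test, AssertJ for assertThat, Guava for Maps, commons-lang3 for StringUtils; the exact versions are assumptions, not from the original):

dependencies {
    implementation('org.apache.kafka:kafka-clients:2.5.0')
    implementation('com.google.guava:guava:29.0-jre')          // Maps.newHashMap()
    implementation('org.apache.commons:commons-lang3:3.10')    // StringUtils
    testImplementation('junit:junit:4.13')                     // @Test
    testImplementation('org.assertj:assertj-core:3.16.1')      // assertThat
}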

Building an AdminClient

  • BuildAdminClient
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BuildAdminClient {

    public static AdminClient createAdminClient() {
        Properties props = new Properties();
        // address of the local broker started earlier
        props.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");

        return AdminClient.create(props);
    }
}

  • Test class
public class App {

    @Test
    public void testCreateAdminClient(){
        AdminClient adminClient = BuildAdminClient.createAdminClient();

        assertThat(adminClient).isNotNull();
        
    }
}

ps: if the test runs without errors, the client was created successfully.

Creating a topic

  • createTopic
    public static void createTopic(String topicName) {
        AdminClient adminClient = BuildAdminClient.createAdminClient();

        /**
         * String name              topic name
         * int numPartitions        number of partitions
         * short replicationFactor  replication factor; must not exceed the broker count
         */
        NewTopic topic = new NewTopic(topicName, 1, Short.parseShort("1"));

        CreateTopicsResult result = adminClient.createTopics(Collections.singletonList(topic));

        result.values().forEach((name, future) -> {
            System.out.println("TopicOperating.createTopic name =" + name);
        });

        // close resources
        adminClient.close();
    }
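Note that createTopics is asynchronous; result.values() only exposes the per-topic futures. To block until creation actually completes, you can wait on the combined future (this call is my addition, not in the original):

        result.all().get();   // throws if any creation failed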
  • Test
 @Test
    public void testCreateTopic(){
        TopicOperating.createTopic(TEST_TOPIC_NAME);
    }
  • Result

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
TopicOperating.createTopic name =hello-topic-mlong
  • Verify that the topic was created, e.g. by re-running the list command:

./bin/kafka-topics.sh --list --zookeeper localhost:2181

Listing topics

 /**
     * List all topics.
     * @throws Exception
     */
    public static void listTopics() throws Exception {
        AdminClient adminClient = BuildAdminClient.createAdminClient();

        // list all topics
        ListTopicsResult result = adminClient.listTopics();

        // extract the result: just the names...
        Set<String> names = result.names().get();

        for (String name : names) {
            System.out.println("topic name = " + name);
        }

        System.out.println("===========\n");

        // ...or the full listings
        Collection<TopicListing> listings = result.listings().get();
        for (TopicListing listing : listings) {
            System.out.println("listing = " + listing);
        }

        // close resources
        adminClient.close();
    }

  • Unit test
    @Test
    public void testListTopic() throws Exception {
        TopicOperating.listTopics();
    }

  • result
topic name = hello-topic-mlong1111
topic name = hello-kafka
topic name = hello-topic-mlong
===========

listing = (name=hello-topic-mlong1111, internal=false)
listing = (name=hello-kafka, internal=false)
listing = (name=hello-topic-mlong, internal=false)

Listing topics with options

 /**
     * List all topics, including Kafka-internal ones.
     * @throws Exception
     */
    public static void listTopicsWithOptions() throws Exception {
        AdminClient adminClient = BuildAdminClient.createAdminClient();

        ListTopicsOptions options = new ListTopicsOptions();
        // include internal topics; this must be set before listTopics() is called
        options.listInternal(true);

        // list all topics
        ListTopicsResult result = adminClient.listTopics(options);

        // extract the names
        Set<String> names = result.names().get();

        for (String name : names) {
            System.out.println("topic name = " + name);
        }

        System.out.println("===========\n");

        Collection<TopicListing> listings = result.listings().get();
        for (TopicListing listing : listings) {
            System.out.println("listing = " + listing);
        }

        // close resources
        adminClient.close();
    }

 @Test
    public void testListTopicWithOptions() throws Exception {
        TopicOperating.listTopicsWithOptions();
    }
  • result
topic name = hello-topic-mlong1111
topic name = hello-kafka
topic name = hello-topic-mlong
topic name = __consumer_offsets
===========

listing = (name=hello-topic-mlong1111, internal=false)
listing = (name=hello-kafka, internal=false)
listing = (name=hello-topic-mlong, internal=false)
listing = (name=__consumer_offsets, internal=true)

ps: listing = (name=__consumer_offsets, internal=true) is a Kafka-internal topic (it stores consumer offsets).

Deleting a topic

public static void removeTopic(String topicName) throws Exception {
        AdminClient adminClient = BuildAdminClient.createAdminClient();

        adminClient.deleteTopics(Collections.singletonList(topicName));

        adminClient.close();
    }
 @Test
    public void testRemoveTopic() throws Exception {
        TopicOperating.removeTopic("hello-topic-mlong1111");
    }
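deleteTopics is asynchronous as well; to block until the deletion is acknowledged, the same pattern applies (my addition, not in the original):

        adminClient.deleteTopics(Collections.singletonList(topicName)).all().get();

You can then re-run listTopics() to confirm the topic is gone.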

Describing a topic

 public static void describeTopic(String topicName) throws Exception {

        AdminClient adminClient = BuildAdminClient.createAdminClient();

        // fetch the topic's description
        DescribeTopicsResult result = adminClient.describeTopics(Collections.singletonList(topicName));
        // parse the result: Map<String, TopicDescription> ==> topicName : TopicDescription
        Map<String, TopicDescription> descriptionMap = result.all().get();

        descriptionMap.forEach((name, desc) -> System.out.printf("topic name =%s,desc = %s \n", name, desc));

        adminClient.close();
    }
 @Test
    public void testDescTopic() throws Exception {
        TopicOperating.describeTopic(TEST_TOPIC_NAME);
    }

result

topic name =hello-kafka,desc = (name=hello-kafka, internal=false, partitions=(partition=0, leader=127.0.0.1:9092 (id: 0 rack: null), replicas=127.0.0.1:9092 (id: 0 rack: null), isr=127.0.0.1:9092 (id: 0 rack: null)), authorizedOperations=null) 
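In the description, leader is the replica that serves reads and writes for the partition, replicas is the full replica set, and isr is the subset of replicas currently in sync with the leader.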

Describing topic configuration

 public static void describeConfig(String topicName) throws Exception {
        AdminClient adminClient = BuildAdminClient.createAdminClient();

        ConfigResource configResource = new ConfigResource(ConfigResource.Type.TOPIC, topicName);

        DescribeConfigsResult result = adminClient.describeConfigs(Collections.singletonList(configResource));

        Map<ConfigResource, Config> resourceConfigMap = result.all().get();

        resourceConfigMap.forEach((cr, c) -> {
            System.out.printf("desc topic config ConfigResource = %s,config = %s \n", cr, c);
        });

        adminClient.close();
    }

 @Test
    public void testDescConfig() throws Exception{
        TopicOperating.describeConfig(TEST_TOPIC_NAME);

    }

result (truncated; only part of the output is shown)

desc topic config ConfigResource = ConfigResource(type=TOPIC, name='hello-kafka'),config = Config(entries=[ConfigEntry(name=compression.type, value=producer, source=DEFAULT_CONFIG, isSensitive=false, isReadOnly=false, synonyms=[]), ...
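For example, compression.type=producer above means the broker retains whatever compression codec the producer used, and source=DEFAULT_CONFIG indicates the value comes from the broker default rather than a per-topic override.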

ps: if a config entry's meaning is unclear, search the official documentation.

Modifying partitions

 /**
     * Change the number of partitions for a topic. Note: the count can only be increased, never decreased.
     * @param topicName
     * @param partitionNum
     */
    public static void updateTopicPartition(String topicName, int partitionNum) {
        AdminClient adminClient = BuildAdminClient.createAdminClient();

        // build the partition-change request
        Map<String, NewPartitions> newPartitions = Maps.newHashMap();

        newPartitions.put(topicName, NewPartitions.increaseTo(partitionNum));

        // apply the change
        adminClient.createPartitions(newPartitions);

        adminClient.close();
    }


 @Test
    public void testUpdateTopicPartition(){
        TopicOperating.updateTopicPartition(TEST_TOPIC_NAME,2);
    }

result
The topic now has partition 1 in addition to partition 0, as the describe output below confirms.

Describing the topic again (before vs. after):

topic name =hello-kafka,
desc =
    (name=hello-kafka,
    internal=false,
    partitions=(partition=0, leader=127.0.0.1:9092 (id: 0 rack: null), replicas=127.0.0.1:9092 (id: 0 rack: null), isr=127.0.0.1:9092 (id: 0 rack: null)),
    authorizedOperations=null)

===================

topic name =hello-kafka,
desc =
    (name=hello-kafka,
    internal=false,
    partitions=(partition=0, leader=127.0.0.1:9092 (id: 0 rack: null), replicas=127.0.0.1:9092 (id: 0 rack: null), isr=127.0.0.1:9092 (id: 0 rack: null)),
    (partition=1, leader=127.0.0.1:9092 (id: 0 rack: null), replicas=127.0.0.1:9092 (id: 0 rack: null), isr=127.0.0.1:9092 (id: 0 rack: null)), authorizedOperations=null)

Modifying topic configuration

 public static void updateTopicConfig(String topicName) {
        AdminClient adminClient = BuildAdminClient.createAdminClient();

        Map<ConfigResource, Config> configMap = Maps.newHashMap();

        ConfigResource configResource = new ConfigResource(ConfigResource.Type.TOPIC, topicName);

        Config config = new Config(Collections.singletonList(new ConfigEntry("preallocate", "true")));

        configMap.put(configResource, config);

        // the newer API, somewhat more verbose:
        // adminClient.incrementalAlterConfigs();

        // alter the topic config via the old API (deprecated)
        adminClient.alterConfigs(configMap);

        adminClient.close();
    }
 @Test
    public void testUpdateTopicConfig(){
        TopicOperating.updateTopicConfig(TEST_TOPIC_NAME);
    }

After running it, re-run testDescConfig; the output should now show preallocate=true for this topic.

Sending a single message

public class ProducerOperating {

    private static final String TOPIC_NAME = "hello-kafka";

    public static void sendMsg(String key, String msg) {
        // build the Kafka producer configuration
        Properties prop = new Properties();

        prop.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        prop.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        prop.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // build the Kafka producer
        KafkaProducer<String, String> producer = new KafkaProducer<>(prop);

        // create a record: topic, message key, message value
        ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC_NAME, key, msg);

        // send the message
        producer.send(record);

        producer.close();
    }

}

    @Test
    public void testSimpleSend() {
        ProducerOperating.sendMsg("hello", "kafka");
    }

The console consumer receives the message.

Sending multiple messages (async)

 public static void sendMultipMsg(String key,String msg){

        Properties prop = new Properties();

        prop.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"127.0.0.1:9092");
        prop.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        prop.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class.getName());


        KafkaProducer<String, String> producer = new KafkaProducer<>(prop);

        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC_NAME,
                    key + i,
                    msg + i);

            producer.send(record);
        }

        producer.close();

    }

  @Test
    public void testMutipSend(){
        ProducerOperating.sendMultipMsg("test","send kafka msg");

    }

result: the listening console consumer prints the 100 records.

Async + blocking (simulated synchronous send)

  public static void sendMsgWithSync(String key,String msg) throws Exception {

        Properties prop = new Properties();

        prop.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"127.0.0.1:9092");
        prop.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        prop.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class.getName());


        KafkaProducer<String, String> producer = new KafkaProducer<>(prop);

        for (int i = 0; i < 100; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC_NAME,
                    key + i,
                    msg + i);

            // send() returns a Future; future.get() blocks, simulating a synchronous send
            Future<RecordMetadata> future = producer.send(record);
            RecordMetadata metadata = future.get();

            System.out.printf("send msg to topic = %s,partition =%d,offset = %s \n",metadata.topic(),metadata.partition(),metadata.offset());
        }

        producer.close();

    }
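The original has no test for this one; a matching test in the style of the others (name assumed):

   @Test
    public void testSendMsgWithSync() throws Exception {
        ProducerOperating.sendMsgWithSync("sync", "hello kafka");
    }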

Async + callback

  public static void sendMsgWithCallback(String key,String msg) throws Exception {

        Properties prop = new Properties();

        prop.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"127.0.0.1:9092");
        prop.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        prop.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class.getName());


        KafkaProducer<String, String> producer = new KafkaProducer<>(prop);

        for (int i = 0; i < 10; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC_NAME,
                    key + i,
                    msg + i);

            producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    // invoked once the send completes; either metadata or exception is non-null
                    System.out.printf("send msg to topic = %s,partition =%d,offset = %s \n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
        producer.close();
    }
   @Test
    public void testSendMsgWithCallback() throws Exception {
        ProducerOperating.sendMsgWithCallback("test","hello kafka");

    }

result: the callback fires once per record with its RecordMetadata.

Custom partitioner: route specific keys to specific partitions (a form of load balancing)

  • The partitioner
public class SimplePartioner implements Partitioner {

    /**
     * Assumes the topic has partitions 0-4 (five partitions).
     * @param topic
     * @param key
     * @param keyBytes
     * @param value
     * @param valueBytes
     * @param cluster
     * @return
     */
    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        // topic = hello-kafka

        // keys starting with "takeaway_" -> partitions 0-3
        // all other keys                 -> partition 4
        if (StringUtils.startsWith(key.toString(), "takeaway_")) {
            int code = Math.abs(key.toString().hashCode());
            return code % 4;
        }
        return 4;
    }

    @Override
    public void close() {

    }

    @Override
    public void configure(Map<String, ?> configs) {

    }
}
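For partition 4 to exist, hello-kafka needs at least five partitions; the earlier updateTopicPartition helper can expand it (this usage is my addition, not shown in the original):

    TopicOperating.updateTopicPartition("hello-kafka", 5);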
  • Sending
 /**
     * Send messages through the custom partitioner; it must be registered on the producer:
     *
     * prop.setProperty(ProducerConfig.PARTITIONER_CLASS_CONFIG, SimplePartioner.class.getName());
     * @param key
     * @param msg
     * @throws Exception
     */
    public static void sendMsgWithPartition(String key,String msg) throws Exception {

        Properties prop = new Properties();

        prop.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"127.0.0.1:9092");
        prop.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        prop.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class.getName());
        prop.setProperty(ProducerConfig.PARTITIONER_CLASS_CONFIG,SimplePartioner.class.getName());


        KafkaProducer<String, String> producer = new KafkaProducer<>(prop);

        for (int i = 0; i < 10; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC_NAME,
                    key + i,
                    msg + i);

            producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    System.out.printf("send msg to topic = %s,partition =%d,offset = %s \n",metadata.topic(),metadata.partition(),metadata.offset());


                }
            });
        }
        producer.close();
    }
  • Test
  @Test
    public void testSendWithPartition() throws Exception {
        ProducerOperating.sendMsgWithPartition("test","hello kafka");

    }
  • result

Because the key is not a takeaway key, every record lands in partition 4:

send msg to topic = hello-kafka,partition =4,offset = -1 
send msg to topic = hello-kafka,partition =4,offset = -1 
send msg to topic = hello-kafka,partition =4,offset = -1 
send msg to topic = hello-kafka,partition =4,offset = -1 
send msg to topic = hello-kafka,partition =4,offset = -1 
send msg to topic = hello-kafka,partition =4,offset = -1 
send msg to topic = hello-kafka,partition =4,offset = -1 
send msg to topic = hello-kafka,partition =4,offset = -1 
send msg to topic = hello-kafka,partition =4,offset = -1 
send msg to topic = hello-kafka,partition =4,offset = -1 
  • Test
 @Test
    public void testSendWithPartition() throws Exception {
        ProducerOperating.sendMsgWithPartition("takeaway_chl","hello kafka");

    }
  • result

Because the keys are takeaway keys, the records are spread across partitions 0-3:

send msg to topic = hello-kafka,partition =0,offset = 171 
send msg to topic = hello-kafka,partition =0,offset = 172 
send msg to topic = hello-kafka,partition =0,offset = 173 
send msg to topic = hello-kafka,partition =1,offset = 154 
send msg to topic = hello-kafka,partition =1,offset = 155 
send msg to topic = hello-kafka,partition =1,offset = 156 
send msg to topic = hello-kafka,partition =2,offset = -1 
send msg to topic = hello-kafka,partition =2,offset = -1 
send msg to topic = hello-kafka,partition =3,offset = -1 
send msg to topic = hello-kafka,partition =3,offset = -1 

Producer configuration

 prop.setProperty(ProducerConfig.ACKS_CONFIG, "all");               // wait for all in-sync replicas to acknowledge
        prop.setProperty(ProducerConfig.RETRIES_CONFIG, "0");              // no automatic retries
        prop.setProperty(ProducerConfig.BATCH_SIZE_CONFIG, "16384");       // max batch size, in bytes
        prop.setProperty(ProducerConfig.LINGER_MS_CONFIG, "16384");        // ms to wait for batching; 16384 ms looks like a copy of batch.size, typical values are a few ms
        prop.setProperty(ProducerConfig.BUFFER_MEMORY_CONFIG, "33554432"); // total producer buffer, in bytes

acks is a common interview topic. The strategies:

  • acks=0: the producer does not wait for any acknowledgement; fastest, but messages can be lost silently
  • acks=1: only the partition leader acknowledges; data is lost if the leader fails before followers replicate
  • acks=all (or -1): all in-sync replicas acknowledge; strongest durability, highest latency

Consumer

Receiving messages with auto-commit (the most basic form)

  private static final String TOPIC_NAME = "hello-kafka";

    public static void consumeForAutoCommit() {
        // consumer configuration
        Properties prop = new Properties();

        prop.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        prop.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        prop.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        prop.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        prop.setProperty(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        prop.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "test");

        // create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(prop);

        consumer.subscribe(Collections.singleton(TOPIC_NAME));

        // poll loop
        while (true) {
            // fetch records
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            Iterator<ConsumerRecord<String, String>> recordIterator = records.iterator();

            while (recordIterator.hasNext()) {
                ConsumerRecord<String, String> record = recordIterator.next();
                System.out.printf("topic = %s, key = %s,val =%s \n", record.topic(), record.key(), record.value());
            }
        }

        // unreachable while the loop runs forever
        // consumer.close();
    }
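Run this consumer in one process, then any of the producer tests above; each record is printed as it arrives.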

Manual commit vs. auto commit

Auto commit: enable.auto.commit=true
Manual commit: enable.auto.commit=false

A manual commit tells the broker "I have processed this message correctly."
If the business logic fails and the message needs to be processed again (possibly by another consumer), you simply do not commit: the broker receives no ack, considers the message unprocessed, and the offset does not advance, so the message can be consumed again.

A real-world scenario:
1. A message arrives; its content is processed, wrapped up, and written to MySQL.
2. The insert fails: MySQL jitter, a network issue, a conflict... the message must be retried.
3. So when the insert fails (effectRows == 0), consumer.commitAsync() is NOT called:

int effectRows = OrderTable.insert(orderDO);
if (effectRows > 0) {
    // the insert succeeded and the business logic completed, so commit manually now
    consumer.commitAsync();
}
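Putting it together, a minimal sketch of a manual-commit consumer (same config style as consumeForAutoCommit above; handleRecord is a hypothetical stand-in for real business logic such as the MySQL insert):

    public static void consumeForManualCommit() {
        Properties prop = new Properties();

        prop.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        prop.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        prop.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        prop.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // manual commit
        prop.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "test");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(prop);
        consumer.subscribe(Collections.singleton(TOPIC_NAME));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));

            boolean allProcessed = true;
            for (ConsumerRecord<String, String> record : records) {
                boolean ok = handleRecord(record);
                if (!ok) {
                    allProcessed = false;
                    break; // leave the offset where it is so the batch is redelivered
                }
            }

            if (allProcessed && !records.isEmpty()) {
                consumer.commitAsync(); // only ack once the whole batch succeeded
            }
        }
    }

    // hypothetical helper; replace with actual processing / persistence
    private static boolean handleRecord(ConsumerRecord<String, String> record) {
        System.out.printf("processing key = %s, val = %s \n", record.key(), record.value());
        return true;
    }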
