Kafka Operations

Kafka Commands

  • Create a topic
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
  • List topics
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
  • Start a console producer and send messages
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test
This is a message
This is another message
  • Start a console consumer
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
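The same topic management can also be done from Java. A minimal sketch using the AdminClient (the class name TopicAdmin and the localhost address are placeholders for this example, mirroring the CLI commands above):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicAdmin {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Create topic "test" with 1 partition and replication factor 1,
            // like kafka-topics.sh --create above
            admin.createTopics(Collections.singleton(new NewTopic("test", 1, (short) 1))).all().get();
            // List existing topics, like kafka-topics.sh --list
            System.out.println(admin.listTopics().names().get());
        }
    }
}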

Importing data into Kafka with Kafka Connect
Create a source file

 echo -e "foo\nbar" > test.txt
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties

Note: the three configuration files each hold a different set of settings; have a look at them under the config directory.
Final result:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
which yields:
{"schema":{"type":"string","optional":false},"payload":"foo"}


Kafka storage:

Operating Kafka from Java

producer

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties conf = new Properties();
// Kafka cluster (broker list)
conf.put("bootstrap.servers", "***:9092");
// Wait for all in-sync replicas to acknowledge
conf.put("acks", "all");
// Number of retries
conf.put("retries", 1);
// Batch size in bytes
conf.put("batch.size", 16384);
// How long to wait before sending a batch
conf.put("linger.ms", 1);
// RecordAccumulator buffer size in bytes
conf.put("buffer.memory", 33554432);
conf.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
conf.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<String, String>(conf);
for (int i = 0; i < 100; i++) {
    producer.send(new ProducerRecord<String, String>("first",
            Integer.toString(i), Integer.toString(i)));
}
producer.close();

Note: closing the producer flushes any records still buffered in memory; if the producer is not closed, those messages may never reach the broker. If the topic does not exist, it is created automatically (with the default broker settings).
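As a sketch of the same idea (reusing the conf Properties from the block above), try-with-resources guarantees the flush-and-close happens even if an exception is thrown, and flush() pushes buffered records out without closing:

try (KafkaProducer<String, String> producer = new KafkaProducer<String, String>(conf)) {
    producer.send(new ProducerRecord<String, String>("first", "key", "value"));
    producer.flush(); // optional: force any buffered records to the broker without closing
} // close() is called automatically here, flushing whatever is still buffered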

consumer

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "***:9092");
props.put("group.id", "test");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
consumer.subscribe(Arrays.asList("test"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        System.out.println(record);
    }
}

Producer with a callback

The callback lets you run some logic after the message has been sent.

for (int i = 0; i < 100; i++) {
    producer.send(new ProducerRecord<String, String>("test",
            Integer.toString(i), Integer.toString(i)), new Callback() {
        // Callback, invoked asynchronously when the producer receives the ack
        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            if (exception == null) {
                // No exception means the send succeeded
                System.out.println("success->" + metadata.offset());
            } else {
                exception.printStackTrace();
            }
        }
    });
}

Overloads

for (int i = 0; i < 2; i++) {
    producer.send(new ProducerRecord<String, String>("test", 1, "", ""), (metadata, exception) -> {
        System.out.println(metadata.partition() + ":" + metadata.offset());
    });
}

If a partition is specified, the record is sent straight to that partition; if not, the partition is chosen from the hash of the key.
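These cases correspond to different ProducerRecord constructors; a small sketch (topic name "test" as above):

// Explicit partition: the record goes straight to partition 0 of "test"
producer.send(new ProducerRecord<String, String>("test", 0, "key", "value"));
// No partition, with a key: the partition is derived from the hash of the key
producer.send(new ProducerRecord<String, String>("test", "key", "value"));
// No partition and no key: the producer spreads records across partitions on its own
producer.send(new ProducerRecord<String, String>("test", "value"));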

Custom partitioner

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class PartitonerT implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        // Send every record to partition 0
        return 0;
    }

    @Override
    public void close() {
    }

    @Override
    public void configure(Map<String, ?> configs) {
    }
}

Using the partitioner
conf.put("partitioner.class", "kafkatest.PartitonerT");
To look up the available configuration keys, check the ProducerConfig class.
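As a sketch, the same settings can be written with ProducerConfig constants instead of raw strings, so the compiler can check the keys:

conf.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "***:9092");
conf.put(ProducerConfig.ACKS_CONFIG, "all");
conf.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, "kafkatest.PartitonerT");
conf.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
conf.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");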

Synchronous send API

A synchronous send blocks the current thread after each message until the ack is returned.

for (int i = 0; i < 100; i++) {
    producer.send(new ProducerRecord<String, String>("first",
            Integer.toString(i), Integer.toString(i))).get();
}
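Since send() returns a Future, get() also surfaces a failed send as an exception. A minimal sketch of handling it (the try/catch and printed message are illustrative, assuming imports for RecordMetadata and java.util.concurrent.ExecutionException):

try {
    RecordMetadata meta = producer.send(
            new ProducerRecord<String, String>("first", "key", "value")).get();
    System.out.println("sent to partition " + meta.partition() + ", offset " + meta.offset());
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
}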

consumer API

The main thing the consumer side has to take care of is offset management.

Properties props = new Properties();
props.put("bootstrap.servers", "172.18.0.130:9092");
// Consumer group
props.put("group.id", "test");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
// Subscribe to one or more topics
consumer.subscribe(Arrays.asList("test"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        System.out.println(record);
    }
}

Resetting the offset

This takes effect when there is no usable committed offset for the group: for example the consumer group was changed, or the committed offset no longer exists on the cluster. The behaviour is described by Kafka's own AUTO_OFFSET_RESET_DOC:

  public static final String AUTO_OFFSET_RESET_DOC = "What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted): <ul><li>earliest: automatically reset the offset to the earliest offset<li>latest: automatically reset the offset to the latest offset</li><li>none: throw exception to the consumer if no previous offset is found for the consumer's group</li><li>anything else: throw exception to the consumer.</li></ul>";
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // consume from the earliest offset
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");   // consume from the latest offset

Automatic offset commit

props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");

If auto-commit is turned off, offsets have to be committed manually. Auto-commit also has a pitfall: if processing fails after the offset has already been committed, those records are effectively lost to the consumer.

Manual commit

There are two ways to commit offsets manually: commitSync (synchronous) and commitAsync (asynchronous). Both commit the highest offset of the batch returned by the current poll. The difference is that commitSync blocks the current thread until the commit succeeds and retries automatically on failure (though uncontrollable factors can still make a commit fail), while commitAsync has no retry mechanism, so its commit may fail.

  • Synchronous commit (lower throughput)
props.put("enable.auto.commit", "false"); // disable automatic offset commit
consumer.commitSync();
  • Asynchronous commit
// disable automatic offset commit
props.put("enable.auto.commit", "false");
// commit asynchronously
consumer.commitAsync(new OffsetCommitCallback() {
    @Override
    public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
        if (exception != null) {
            System.err.println("Commit failed for " + offsets);
        }
    }
});
  • Missed vs. duplicated consumption
    Whether offsets are committed synchronously or asynchronously, data can still be missed or consumed more than once: committing the offset before processing can lead to missed records, while processing before committing the offset can lead to duplicated records (see the sketch below).
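A common at-least-once pattern is to process the whole batch first and only then call commitSync, accepting possible re-consumption after a crash instead of data loss. A minimal sketch (process() is a hypothetical placeholder for your handling logic):

props.put("enable.auto.commit", "false");
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        process(record); // hypothetical processing step
    }
    // Commit only after the batch has been fully processed:
    // a crash before this line re-delivers the batch, but nothing is lost
    consumer.commitSync();
}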

Custom offset storage

ConsumerRebalanceListener

package com.atguigu.kafka.consumer;

import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import java.util.*;

public class CustomConsumer {

    private static Map<TopicPartition, Long> currentOffset = new HashMap<>();

    public static void main(String[] args) {
        // Create the configuration
        Properties props = new Properties();
        // Kafka cluster
        props.put("bootstrap.servers", "hadoop102:9092");
        // Consumers with the same group.id belong to the same consumer group
        props.put("group.id", "test");
        // Disable automatic offset commit
        props.put("enable.auto.commit", "false");
        // Key and value deserializers
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // Subscribe to the topic
        consumer.subscribe(Arrays.asList("first"), new ConsumerRebalanceListener() {

            // Called before a rebalance
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                commitOffset(currentOffset);
            }

            // Called after a rebalance
            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                currentOffset.clear();
                for (TopicPartition partition : partitions) {
                    // Seek to the most recently committed offset and resume from there
                    consumer.seek(partition, getOffset(partition));
                }
            }
        });

        while (true) {
            // Pull records
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
                currentOffset.put(new TopicPartition(record.topic(), record.partition()),
                        record.offset());
            }
            commitOffset(currentOffset); // commit through the custom store
        }
    }

    // Fetch the latest offset for a partition from the custom store
    private static long getOffset(TopicPartition partition) {
        return 0;
    }

    // Commit the offsets of all partitions owned by this consumer
    private static void commitOffset(Map<TopicPartition, Long> currentOffset) {
    }
}
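The two stubs above are where the custom storage would plug in. A minimal sketch, assuming offsets are kept in a local offsets.properties file (a made-up path; a real setup would usually use a database so the offset and the processed result can be committed atomically) and assuming java.io imports are added to the class:

private static final String OFFSET_FILE = "offsets.properties";

private static long getOffset(TopicPartition partition) {
    Properties stored = new Properties();
    try (FileInputStream in = new FileInputStream(OFFSET_FILE)) {
        stored.load(in);
    } catch (IOException e) {
        return 0; // no stored offsets yet: start from the beginning of the partition
    }
    String value = stored.getProperty(partition.topic() + "-" + partition.partition());
    return value == null ? 0 : Long.parseLong(value);
}

private static void commitOffset(Map<TopicPartition, Long> currentOffset) {
    Properties stored = new Properties();
    for (Map.Entry<TopicPartition, Long> entry : currentOffset.entrySet()) {
        // Store the next offset to read, i.e. last consumed offset + 1
        stored.setProperty(entry.getKey().topic() + "-" + entry.getKey().partition(),
                String.valueOf(entry.getValue() + 1));
    }
    try (FileOutputStream out = new FileOutputStream(OFFSET_FILE)) {
        stored.store(out, "consumer offsets");
    } catch (IOException e) {
        e.printStackTrace();
    }
}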

Custom interceptors

Example

// Time interceptor: prepends a timestamp to every message value
package com.atguigu.kafka.interceptor;

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class TimeInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public void configure(Map<String, ?> configs) {
    }

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // Build a new record with the current timestamp written at the front of the value
        return new ProducerRecord<>(record.topic(), record.partition(), record.timestamp(),
                record.key(), System.currentTimeMillis() + "," + record.value());
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
    }

    @Override
    public void close() {
    }
}
// Counter interceptor: counts successful and failed sends
package com.atguigu.kafka.interceptor;

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class CounterInterceptor implements ProducerInterceptor<String, String> {

    private int errorCounter = 0;
    private int successCounter = 0;

    @Override
    public void configure(Map<String, ?> configs) {
    }

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        return record;
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // Count successes and failures
        if (exception == null) {
            successCounter++;
        } else {
            errorCounter++;
        }
    }

    @Override
    public void close() {
        // Print the final counts
        System.out.println("Successfully sent: " + successCounter);
        System.out.println("Failed to send: " + errorCounter);
    }
}

producer

package com.atguigu.kafka.interceptor;

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class InterceptorProducer {

    public static void main(String[] args) throws Exception {
        // 1. Producer configuration
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop102:9092");
        props.put("acks", "all");
        props.put("retries", 3);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // 2. Build the interceptor chain (interceptors run in list order)
        List<String> interceptors = new ArrayList<>();
        interceptors.add("com.atguigu.kafka.interceptor.TimeInterceptor");
        interceptors.add("com.atguigu.kafka.interceptor.CounterInterceptor");
        props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, interceptors);

        // 3. Create the producer, send a few messages, then close it
        // (closing also calls each interceptor's close(), which prints the counters)
        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("first", "message" + i));
        }
        producer.close();
    }
}

Note: part of this material comes from video tutorials on Bilibili.
