Several Ways to Clean Up Messages in a Kafka Topic

Reference: https://cloud.tencent.com/developer/article/1590094

Quick deletion via configuration

  1. Before starting Kafka, set delete.topic.enable=true in server.properties.

  2. Run bin/kafka-topics.sh --delete --topic test --zookeeper zk:2181, or delete the topic with a cluster-management tool such as kafka-manager. If delete.topic.enable=true was not set before Kafka started, the topic is only "marked for deletion", and a corresponding child node is created under /admin/delete_topics in ZooKeeper; once the setting is added and Kafka is restarted, the topic is actually deleted.

  3. Advantage: Kafka itself performs the deletion; all you have to do is set delete.topic.enable=true in server.properties.

  4. Disadvantage: Kafka must be restarted for the configuration change to take effect.
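The "marked for deletion" state from step 2 can be verified directly in ZooKeeper. A minimal sketch, assuming the same zk:2181 address as above (newer ZooKeeper CLIs accept a single command as arguments):

```shell
# If delete.topic.enable=false, a deleted topic lingers under this node
# until the brokers are restarted with the setting enabled
zkCli.sh -server zk:2181 ls /admin/delete_topics
```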

# Defaults to false. Make sure there are no spaces around the equals sign, or the setting will not take effect
delete.topic.enable=true

# Equivalent environment variable for the Bitnami Helm chart (changing it triggers a restart)
KAFKA_CFG_DELETE_TOPIC_ENABLE=true


# Create a new topic, logstash_test, with 3 replicas
kafka-topics.sh --create --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic logstash_test --partitions 1 --replication-factor 3


# Describe topic logstash_test: the leader is broker 0 (broker.id=0), and the three replicas are on brokers 0, 2 and 1
I have no name!@ape-kafka-0:/$ kafka-topics.sh --describe --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic logstash_test
Topic: logstash_test    TopicId: 1j9d-WGVTzKTpGdTtO0YFQ PartitionCount: 1       ReplicationFactor: 3    Configs: flush.ms=1000,segment.bytes=1073741824,flush.messages=10000,max.message.bytes=1000012,retention.bytes=1073741824
        Topic: logstash_test    Partition: 0    Leader: 0       Replicas: 0,2,1 Isr: 0,2,1
	
# Inspect the topic in ZooKeeper
$ zkCli.sh -server localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /brokers/topics
[__consumer_offsets, frontend_invoke_queue, frontend_invoke_result_log, lake_add_namelist, lake_entrylog, logstash_test]
[zk: localhost:2181(CONNECTED) 1] ls /config/topics
[__consumer_offsets, frontend_invoke_queue, frontend_invoke_result_log, lake_add_namelist, lake_entrylog, logstash_test]

# List the topic's directory under log.dirs (as configured in server.properties)
I have no name!@ape-kafka-0:/$ ls /bitnami/kafka/data/logstash_test-0/
00000000000000000000.index  00000000000000000000.log  00000000000000000000.timeindex  leader-epoch-checkpoint  partition.metadata

# Delete topic logstash_test
I have no name!@ape-kafka-0:/$ kafka-topics.sh --delete --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic logstash_test

# Describe the topic again: the error below shows it has been deleted
I have no name!@ape-kafka-0:/$ kafka-topics.sh --describe --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic logstash_test
Error while executing topic command : Topic 'logstash_test' does not exist as expected
[2024-06-26 03:13:45,323] ERROR java.lang.IllegalArgumentException: Topic 'logstash_test' does not exist as expected
        at kafka.admin.TopicCommand$.kafka$admin$TopicCommand$$ensureTopicExists(TopicCommand.scala:399)
        at kafka.admin.TopicCommand$TopicService.describeTopic(TopicCommand.scala:311)
        at kafka.admin.TopicCommand$.main(TopicCommand.scala:62)
        at kafka.admin.TopicCommand.main(TopicCommand.scala)
 (kafka.admin.TopicCommand$)

# Check ZooKeeper again: logstash_test is gone there as well
[zk: localhost:2181(CONNECTED) 2] ls /brokers/topics
[__consumer_offsets, frontend_invoke_queue, frontend_invoke_result_log, lake_add_namelist, lake_entrylog]
[zk: localhost:2181(CONNECTED) 3] ls /config/topics
[__consumer_offsets, frontend_invoke_queue, frontend_invoke_result_log, lake_add_namelist, lake_entrylog]

# Check the log.dirs directory again: the logstash_test log files have also been removed
I have no name!@ape-kafka-0:/$ ls /bitnami/kafka/data/logstash_test*
ls: cannot access '/bitnami/kafka/data/logstash_test*': No such file or directory

Manually deleting the data

  1. Advantage: no Kafka restart is required; you delete the topic's log files on disk and then remove the corresponding nodes in ZooKeeper.
  2. Disadvantage: everything is deleted by hand, and recreating a topic with the same name afterwards causes problems (method one does not have this issue).
  3. This method is not recommended: it is crude, and if a program is still consuming the topic, Kafka ends up in a broken state.
# Create a new topic, logstash_test, with 3 replicas
I have no name!@ape-kafka-0:/$ kafka-topics.sh --create --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --replication-factor 3 --partitions 1 --topic logstash_test
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic logstash_test.

# Describe topic logstash_test: the leader is broker 1 (broker.id=1), and the three replicas are on brokers 1, 0 and 2
I have no name!@ape-kafka-0:/$ kafka-topics.sh --describe --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic logstash_test
Topic: logstash_test    TopicId: S7bPYklqRXy6GB8Qwq67_A PartitionCount: 1       ReplicationFactor: 3    Configs: flush.ms=1000,segment.bytes=1073741824,flush.messages=10000,max.message.bytes=1000012,retention.bytes=1073741824
        Topic: logstash_test    Partition: 0    Leader: 1       Replicas: 1,0,2 Isr: 1,0,2
	
# Inspect the topic in ZooKeeper
[zk: localhost:2181(CONNECTED) 0] ls /brokers/topics
[__consumer_offsets, frontend_invoke_queue, frontend_invoke_result_log, lake_add_namelist, lake_entrylog, logstash_test]
[zk: localhost:2181(CONNECTED) 1] ls /config/topics
[__consumer_offsets, frontend_invoke_queue, frontend_invoke_result_log, lake_add_namelist, lake_entrylog, logstash_test]

# List the topic's directory under log.dirs (as configured in server.properties)
I have no name!@ape-kafka-0:/$ ls /bitnami/kafka/data/logstash_test-0/
00000000000000000000.index  00000000000000000000.log  00000000000000000000.timeindex  leader-epoch-checkpoint  partition.metadata

# Delete the topic's nodes in ZooKeeper
[zk: localhost:2181(CONNECTED) 5] deleteall /brokers/topics/logstash_test
[zk: localhost:2181(CONNECTED) 6] deleteall /config/topics/logstash_test

# Delete the topic's log files (this must be done on every node in the Kafka cluster)
rm -rf /bitnami/kafka/data/logstash_test*

# List the remaining topics
I have no name!@ape-kafka-0:/$ kafka-topics.sh --list --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092
__consumer_offsets
frontend_invoke_queue
frontend_invoke_result_log
lake_add_namelist
lake_entrylog
logstash_test

# Describe topic logstash_test again: the topic still exists, so it now has to be deleted manually as well (its data is already gone)
I have no name!@ape-kafka-1:/$ kafka-topics.sh --describe --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic logstash_test
Topic: logstash_test    TopicId: S7bPYklqRXy6GB8Qwq67_A PartitionCount: 1       ReplicationFactor: 3    Configs: flush.ms=1000,segment.bytes=1073741824,flush.messages=10000,max.message.bytes=1000012,retention.bytes=1073741824
        Topic: logstash_test    Partition: 0    Leader: 1       Replicas: 1,0,2 Isr: 1,0,2

# Recreating a topic with the same name after manual deletion fails
I have no name!@ape-kafka-1:/$ kafka-topics.sh --create --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --replication-factor 3 --partitions 1 --topic logstash_test
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Error while executing topic command : Topic 'logstash_test' already exists.
[2024-06-26 03:38:34,038] ERROR org.apache.kafka.common.errors.TopicExistsException: Topic 'logstash_test' already exists.
 (kafka.admin.TopicCommand$)
 
# Deleting the topic also fails (it recovers after restarting Kafka)
I have no name!@ape-kafka-1:/$ kafka-topics.sh --delete --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic logstash_test
Error while executing topic command : This server does not host this topic-partition.
[2024-06-26 03:40:30,871] ERROR org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
 (kafka.admin.TopicCommand$)
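When the cluster is stuck in this inconsistent state, restarting the brokers clears the stale metadata. For the Bitnami Helm deployment shown in these transcripts, a rolling restart might look like the sketch below; the StatefulSet name ape-kafka is only inferred from the pod names above and may differ in your environment:

```shell
# Rolling-restart the brokers so they reload topic metadata
# ("ape-kafka" is an assumed StatefulSet name based on the pod names above)
kubectl rollout restart statefulset/ape-kafka
kubectl rollout status statefulset/ape-kafka
```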

Setting a deletion (retention) policy

  1. Note: if retention is set too aggressively, messages can be deleted before lagging consumers have read them.
  2. The relevant parameters are set in server.properties before Kafka starts:
# Log cleanup policy: delete or compact. Applies to expired data, or to logs that exceed the size limit; can be overridden by per-topic settings given at topic creation time
log.cleanup.policy=delete

# Note: there are two kinds of rules below, one time-based and one size-based. If both are configured, deletion is triggered as soon as either one is satisfied. Deletion always removes the oldest segments first

# How long messages are retained in Kafka (default 168 hours = 7 days); older log segments become eligible for deletion according to the policy
log.retention.hours=4

# When a partition's log grows beyond log.retention.bytes, the oldest segments are deleted
log.retention.bytes=37580963840

# Interval at which the log cleaner thread checks whether any segments violate the retention policy (default 300000 ms)
log.retention.check.interval.ms=1000
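The settings above are broker-wide. A gentler variant, not covered in the original steps and offered here as a sketch, is to purge a single topic by temporarily lowering its per-topic retention with kafka-configs.sh (the bootstrap address and topic name are illustrative):

```shell
# Temporarily make all existing messages in "test" eligible for deletion
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name test \
  --add-config retention.ms=1000

# Wait at least log.retention.check.interval.ms (plus segment roll time),
# then remove the override so the topic falls back to the broker default
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name test \
  --delete-config retention.ms
```

This avoids both the broker restart of method one and the manual file surgery of method two, at the cost of a short wait while the cleaner runs.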

Deleting data by offset

# Generate some data
# 1. Create a new topic, test, with 3 partitions and 1 replica
I have no name!@ape-kafka-0:/$ kafka-topics.sh --create --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic test --partitions 3 --replication-factor 1
Created topic test.

# 2. Produce 100 random messages
	kafka-verifiable-producer.sh --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic test --max-messages 100
	
# 3. Check how many messages the topic holds (--time -1 prints the latest offset per partition)
I have no name!@ape-kafka-0:/$ kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic test --time -1
test:0:0
test:1:100
test:2:0

# 4. Write the config file below: offsets 0 through 49 of partition 1 will be deleted; offset 50 and later are kept
cat <<EOF> offset.json
{"partitions":[{"topic":"test", "partition":1, "offset": 50}], "version":1}
EOF

# 5. Run the deletion
	kafka-delete-records.sh --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --offset-json-file offset.json


# 6. Consume the messages to verify the result; in this test, offsets 0-49 were indeed deleted
	kafka-console-consumer.sh --bootstrap-server ape-kafka-0.ape-kafka-headless:9092,ape-kafka-1.ape-kafka-headless:9092,ape-kafka-2.ape-kafka-headless:9092 --topic test --from-beginning
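Steps 4 and 5 can be wrapped in a small script. The sketch below uses the same illustrative values as above; TOPIC, PARTITION, and OFFSET are the parameters you would adapt:

```shell
# Build the offset-spec file: records below offset 50 in partition 1 of
# "test" will be deleted by kafka-delete-records.sh; offset 50 onward is kept
TOPIC="test"; PARTITION=1; OFFSET=50
cat > offset.json <<EOF
{"partitions":[{"topic":"$TOPIC","partition":$PARTITION,"offset":$OFFSET}],"version":1}
EOF
# Sanity-check that the file is valid JSON before handing it to Kafka
python3 -m json.tool offset.json > /dev/null && echo "offset.json OK"
```

Then run `kafka-delete-records.sh --bootstrap-server <brokers> --offset-json-file offset.json` exactly as in step 5.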