The topic was stuck in an under-replicated state, and server.log was flooded with:
[2020-11-30 19:00:00,006] WARN Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order. New: {epoch:0, offset:17990690}, Current: {epoch:4, offset:1547772} for Partition: __consumer_offsets-18 (kafka.server.epoch.LeaderEpochFileCache)
[2020-11-30 19:00:00,006] WARN Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order. New: {epoch:0, offset:139386934}, Current: {epoch:4, offset:6837455} for Partition: __consumer_offsets-9 (kafka.server.epoch.LeaderEpochFileCache)
[2020-11-30 19:00:00,006] WARN Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order. New: {epoch:0, offset:139386939}, Current: {epoch:4, offset:6837455} for Partition: __consumer_offsets-9 (kafka.server.epoch.LeaderEpochFileCache)
[2020-11-30 19:00:00,006] WARN Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order. New: {epoch:0, offset:139386940}, Current: {epoch:4, offset:6837455} for Partition: __consumer_offsets-9 (kafka.server.epoch.LeaderEpochFileCache)
[2020-11-30 19:00:00,007] WARN Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order. New: {epoch:0, offset:17990695}, Current: {epoch:4, offset:1547772} for Partition: __consumer_offsets-18 (kafka.server.epoch.LeaderEpochFileCache)
Inspection showed that the broker spread was uneven -- each broker reported only 33%. The cause: the cluster has three nodes, but every topic had been created with a single partition and a single replica. Updating the partition count to 3 (and then raising the replication factor, below) cleared the warnings.
Update the partition count (note that Kafka only allows increasing the partition count, never reducing it):
./kafka-topics.sh --zookeeper 127.0.0.1:2181 --alter --partitions 3 --topic user-log-test
Update the replication factor
Create a configuration file increase-replication-factor.json. Its fields are:
Parameter    Description
topic        the topic to operate on
partition    the partition within that topic
replicas     the broker ids that should hold the replicas; these are the broker.id values from each node's server.properties
{
  "partitions": [
    { "topic": "ba_spam_content", "partition": 0, "replicas": [1, 2] },
    { "topic": "ba_spam_content", "partition": 1, "replicas": [0, 2] },
    { "topic": "ba_spam_content", "partition": 2, "replicas": [1, 0] }
  ],
  "version": 1
}
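A plan like this just rotates broker ids across partitions. The hand-written file above can be rehearsed with a small generator -- this is a sketch, not an official tool: the topic name, broker ids 0-2, and the round-robin layout are taken from the example (the exact placement it produces differs slightly from the hand-written file), and python3 is used only to validate the JSON before it is fed to kafka-reassign-partitions.sh:

```shell
#!/bin/sh
# Sketch: generate a round-robin reassignment plan for 3 partitions with
# 2 replicas each across brokers 0,1,2 (values taken from the example).
TOPIC="ba_spam_content"
PARTITIONS=3
BROKERS=3
OUT="increase-replication-factor.json"

printf '{"version": 1, "partitions": [' > "$OUT"
p=0
while [ "$p" -lt "$PARTITIONS" ]; do
  r1=$(( p % BROKERS ))         # first replica: rotate through the brokers
  r2=$(( (p + 1) % BROKERS ))   # second replica: the next broker over
  [ "$p" -gt 0 ] && printf ', ' >> "$OUT"
  printf '{"topic": "%s", "partition": %d, "replicas": [%d, %d]}' \
    "$TOPIC" "$p" "$r1" "$r2" >> "$OUT"
  p=$(( p + 1 ))
done
printf ']}\n' >> "$OUT"

# Sanity-check the file before handing it to kafka-reassign-partitions.sh
python3 -m json.tool "$OUT" > /dev/null && echo "plan OK"
```

Validating the JSON first is cheap insurance: a malformed plan fails only when the reassignment tool parses it.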
Before the reassignment -- the original single-node setup -- every replica sat on broker 0 alone.
Execute the reassignment:
./bin/kafka-reassign-partitions.sh --zookeeper 127.0.0.1:2181 --reassignment-json-file increase-replication-factor.json --execute
# Check the reassignment status
./bin/kafka-reassign-partitions.sh --zookeeper 127.0.0.1:2181 --reassignment-json-file increase-replication-factor.json --verify
# Inspect the partition details
./kafka-topics.sh --zookeeper 127.0.0.1:2181 --describe --topic ba_spam_content
The --verify step reports that every partition reassignment completed successfully, and the --describe output now lists the replica assignments for each partition.
Configuring automatic deletion of old Kafka server logs
In the Kafka directory, create a file auto-delete-kafka-3days-ago-log.sh with the following content (double-check that the path points at Kafka's application-log directory, not at the data directory -- Kafka data segment files also end in .log):
#!/bin/sh
find /data1/kafka/kafka/logs/ -mtime +3 -name "*.log*" -exec rm -rf {} \;
Note: do not drop the trailing semicolon. The -exec clause must end with a pair of braces {}, a space, a backslash \ and a semicolon ; -- none of them can be omitted or mistyped.
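Before pointing that find command at real Kafka logs, it is worth rehearsing it in a throwaway directory to confirm it only removes old files. A sketch (the file names are made up for the test; touch -d is GNU coreutils):

```shell
#!/bin/sh
# Rehearse the cleanup in a scratch directory before running it for real.
DIR=$(mktemp -d)
touch -d "5 days ago" "$DIR/server.log.2020-11-25"  # old: should be deleted
touch "$DIR/server.log"                             # fresh: should survive

# Same expression as the cleanup script, just a different path
find "$DIR" -mtime +3 -name "*.log*" -exec rm -rf {} \;

ls "$DIR"    # only server.log remains
```

Remember that -mtime +3 matches files whose age, truncated to whole days, is strictly greater than 3 -- i.e. files at least four days old.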
Add a scheduled job so the script runs at 00:30 every day: run crontab -e and enter:
# Clean up the cluster's Kafka logs older than three days, at 00:30 daily
30 0 * * * kafka/auto-delete-kafka-3days-ago-log.sh
(Make the script executable with chmod +x, and prefer its absolute path here -- cron does not start in the Kafka directory.)
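The entry can also be installed without opening an editor. A sketch, assuming the script lives at the absolute path below (adjust it to your actual install location):

```shell
#!/bin/sh
# Sketch: build and sanity-check the crontab line; the absolute path is an
# assumption -- point it at wherever you created the cleanup script.
SCRIPT=/data1/kafka/kafka/auto-delete-kafka-3days-ago-log.sh
LINE="30 0 * * * $SCRIPT"

# A crontab entry needs five time fields followed by the command.
set -f              # disable globbing so the * fields are not expanded
set -- $LINE
[ "$#" -eq 6 ] && echo "cron line looks well-formed"

# Install it without an editor (uncomment on the broker host):
# ( crontab -l 2>/dev/null; echo "$LINE" ) | crontab -
```

Appending to the existing crontab -l output preserves any jobs already scheduled for that user.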