A collection of Kafka error-handling notes

For starting Kafka with Docker, see https://www.jianshu.com/p/9552871bb40a
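A minimal sketch of that Docker setup, assuming the wurstmeister/zookeeper and wurstmeister/kafka images used in the linked tutorial (192.168.0.4 is simply the host IP used in the examples below; KAFKA_ADVERTISED_HOST_NAME must be an address your clients can actually reach):

docker run -d --name zookeeper -p 2181:2181 wurstmeister/zookeeper
docker run -d --name kafka -p 9092:9092 --link zookeeper \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_ADVERTISED_HOST_NAME=192.168.0.4 \
    -e KAFKA_ADVERTISED_PORT=9092 \
    wurstmeister/kafka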

Check the Kafka version

$KAFKA_HOME/bin/kafka-consumer-groups.sh --version

Create a new topic named mykafka with 1 partition and a replication factor (replica count) of 1.

$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper 192.168.0.4:2181 --replication-factor 1 --partitions 1 --topic mykafka

List and describe topics

$KAFKA_HOME/bin/kafka-topics.sh --list --zookeeper zookeeper:2181

$KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper zookeeper:2181
Topic:mykafka   PartitionCount:1        ReplicationFactor:1     Configs:
        Topic: mykafka  Partition: 0    Leader: 1006    Replicas: 1006  Isr: 1006
Topic:test      PartitionCount:1        ReplicationFactor:1     Configs:
        Topic: test     Partition: 0    Leader: -1      Replicas: 1002  Isr: 1002

$KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper zookeeper:2181  # the container name or an IP address both work

Produce messages

$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list 192.168.0.4:9092 --topic mykafka

Consume messages

$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.4:9092 --topic mykafka --from-beginning

Check consumer group consumption (offsets and lag)

$KAFKA_HOME/bin/kafka-consumer-groups.sh --offsets --all-groups  --bootstrap-server kafka:9092 --describe

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                 HOST            CLIENT-ID
example         test_kafka      2          105             105             0               sarama-d77a07a0-4630-4716-ae0b-b4d9485e52a0 /172.18.0.1     sarama
example         test_kafka      4          20              20              0               sarama-d77a07a0-4630-4716-ae0b-b4d9485e52a0 /172.18.0.1     sarama
example         test_kafka      3          14              14              0               sarama-d77a07a0-4630-4716-ae0b-b4d9485e52a0 /172.18.0.1     sarama
example         test_kafka      1          101             101             0               sarama-d77a07a0-4630-4716-ae0b-b4d9485e52a0 /172.18.0.1     sarama
example         test_kafka      0          159             159             0               sarama-d77a07a0-4630-4716-ae0b-b4d9485e52a0 /172.18.0.1     sarama

# after the consumer is shut down
$KAFKA_HOME/bin/kafka-consumer-groups.sh --offsets --all-groups  --bootstrap-server kafka:9092 --describe

Consumer group 'example' has no active members.

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID     HOST            CLIENT-ID
example         test_kafka      0          159             159             0               -               -               -
example         test_kafka      2          105             106             1               -               -               -
example         test_kafka      1          101             101             0               -               -               -
example         test_kafka      4          20              20              0               -               -               -
example         test_kafka      3          14              14              0               -               -               -

Kafka's consumer mechanism

consumer group
Each consumer keeps track of its own offset.
Committed offsets are stored in the internal topic __consumer_offsets (older clients stored them in ZooKeeper).
Unlike other MQs, Kafka has no per-message acknowledgement mechanism; consuming simply advances the offset.
1. The broker receives the producer's write request, writes the message into a topic, and routes it to a partition according to the partitioning (router) logic.
2. The broker appends the message to a log file in one of the partition's segments, and records the message's physical offset in the corresponding index file.
3. When a message is read, the consumer sends the desired offset to the broker; the broker binary-searches by offset to locate the right index file and, from there, the message in the log file. The resulting on-disk layout is shown in the listing below.
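For example, a partition's directory can be inspected inside the broker container; a sketch assuming the wurstmeister image's default log directory (the actual path depends on the broker's log.dirs setting). Each segment shows up as a .log file with matching .index and .timeindex files:

ls /kafka/kafka-logs-*/test_kafka-0/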

A topic's partition count defaults to 1, which means all of that topic's data is written under a single directory.

Giving a topic multiple partitions increases the topic's message throughput; giving a partition multiple replicas improves data availability, so data is not lost when some nodes go down. An example topic using both is sketched below.

Within a consumer group, a message is consumed by only one consumer (consumers in the same group do not consume the same message twice).
Each partition keeps replicas on broker nodes; a message is written to the partition leader first and then replicated from the leader to the partition followers.
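For illustration only, a topic that uses both could be created like this (a sketch; --replication-factor 2 requires a cluster with at least 2 brokers, whereas the single-broker Docker setup above only supports a factor of 1):

$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 2 --partitions 3 --topic demo_topic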

Below are some of the errors encountered:

Running the console consumer when the topic does not exist yet:

WARN [Consumer clientId=consumer-console-consumer-69322-1, groupId=console-consumer-69322] Error while fetching metadata with correlation id 2 : {test_kafka=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Running the console producer after the broker port was changed:

WARN [Producer clientId=console-producer] Connection to node 100 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

Starting the command-line client against the wrong port:

$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list xx.xxx.xxx.xxx:9092 --topic test_kafka

Error:
WARN [Consumer clientId=consumer-console-consumer-2317-1, groupId=console-consumer-2317] Error while fetching metadata with correlation id 18 : {test_kafka=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

The console producer cannot find the topic: create the topic first

[2019-12-24 19:27:12,575] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 3 : {t=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)


KAFKA_ADVERTISED_HOST_NAME must not be set to an address that producers or consumers cannot reach (golang):
panic: dial tcp xxx5000: i/o timeout

goroutine 1 [running]:
main.main()
/main.go:33 +0x241
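A quick way to see what address the broker is actually advertising is to read its registration from ZooKeeper; a sketch (the broker id 100 matches the registration shown further down, adjust it to your own):

$KAFKA_HOME/bin/zookeeper-shell.sh zookeeper:2181 get /brokers/ids/100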

Golang code error when KAFKA_ADVERTISED_PORT is not specified:

kafka server: Request was for a topic or partition that does not exist on this broker

Golang code error when Kafka did not start successfully or the port cannot be reached:

 kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
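When this shows up, it helps to confirm from the client machine that the port is open and that the broker answers Kafka protocol requests; a sketch assuming the broker address used in the examples above:

nc -vz 192.168.0.4 9092
$KAFKA_HOME/bin/kafka-broker-api-versions.sh --bootstrap-server 192.168.0.4:9092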

Increasing partitions

bash-4.4# $KAFKA_HOME/bin/kafka-topics.sh --alter --zookeeper zookeeper:2181 --topic test_kafka --partitions 3
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
bash-4.4# $KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic test_kafka
Topic: test_kafka    PartitionCount: 3    ReplicationFactor: 1    Configs:
    Topic: test_kafka    Partition: 0    Leader: 100    Replicas: 100    Isr: 100
    Topic: test_kafka    Partition: 1    Leader: 100    Replicas: 100    Isr: 100
    Topic: test_kafka    Partition: 2    Leader: 100    Replicas: 100    Isr: 100

Decreasing partitions (not allowed):

$KAFKA_HOME/bin/kafka-topics.sh --alter --zookeeper zookeeper:2181 --topic test_kafka --partitions 4
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Error while executing topic command : The number of partitions for a topic can only be increased. Topic test_kafka currently has 5 partitions, 4 would not be an increase.
[2020-01-05 15:47:52,110] ERROR org.apache.kafka.common.errors.InvalidPartitionsException: The number of partitions for a topic can only be increased. Topic test_kafka currently has 5 partitions, 4 would not be an increase.
 (kafka.admin.TopicCommand$)
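Partition counts can only grow. If fewer partitions are really needed, the only option is to delete and recreate the topic, which drops its data and requires delete.topic.enable=true on the broker; a sketch:

$KAFKA_HOME/bin/kafka-topics.sh --delete --zookeeper zookeeper:2181 --topic test_kafka
$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 4 --topic test_kafka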

Kafka (version 2.4) information kept in ZooKeeper
1. View a topic

[zk: localhost:2181(CONNECTED) 8] get /brokers/topics/test_kafka
{"version":2,"partitions":{"0":[100]},"adding_replicas":{},"removing_replicas":{}}

2. View broker registration info

[zk: localhost:2181(CONNECTED) 11] ls /brokers/ids  # list the broker ids; each broker has a unique ID
[100]
[zk: localhost:2181(CONNECTED) 12] get /brokers/ids/100
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://localhost:9092"],"jmx_port":-1,"host":"localhost","timestamp":"1577885461148","port":9092,"version":4}
[zk: localhost:2181(CONNECTED) 14] get /controller_epoch
15  # increments by 1 each time the controller is re-elected

Error: Could not find or load main class kafka.tools.ConsumerOffsetChecker

bash-4.4# $KAFKA_HOME/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker  .....

Fix: https://blog.csdn.net/lukabruce/article/details/89210463
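The underlying cause is that kafka.tools.ConsumerOffsetChecker was deprecated and then dropped from newer Kafka releases; kafka-consumer-groups.sh covers the same use case, for example:

$KAFKA_HOME/bin/kafka-consumer-groups.sh --bootstrap-server kafka:9092 --describe --group example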

For reference, the kafka-consumer-groups.sh options in Kafka 2.4:

$KAFKA_HOME/bin/kafka-consumer-groups.sh
This tool helps to list all consumer groups, describe a consumer group, delete consumer group info, or reset consumer group offsets.
Option                                  Description
------                                  -----------
--all-groups                            Apply to all consumer groups.
--all-topics                            Consider all topics assigned to a
                                          group in the `reset-offsets` process.
--bootstrap-server <String: server to   REQUIRED: The server(s) to connect to.
  connect to>
--by-duration <String: duration>        Reset offsets to offset by duration
                                          from current timestamp. Format:
                                          'PnDTnHnMnS'
--command-config <String: command       Property file containing configs to be
  config property file>                   passed to Admin Client and Consumer.
--delete                                Pass in groups to delete topic
                                          partition offsets and ownership
                                          information over the entire consumer
                                          group. For instance --group g1 --
                                          group g2
--delete-offsets                        Delete offsets of consumer group.
                                          Supports one consumer group at the
                                          time, and multiple topics.
--describe                              Describe consumer group and list
                                          offset lag (number of messages not
                                          yet processed) related to given
                                          group.
--dry-run                               Only show results without executing
                                          changes on Consumer Groups.
                                          Supported operations: reset-offsets.
--execute                               Execute operation. Supported
                                          operations: reset-offsets.
--export                                Export operation execution to a CSV
                                          file. Supported operations: reset-
                                          offsets.
--from-file <String: path to CSV file>  Reset offsets to values defined in CSV
                                          file.
--group <String: consumer group>        The consumer group we wish to act on.
--help                                  Print usage information.
--list                                  List all consumer groups.
--members                               Describe members of the group. This
                                          option may be used with '--describe'
                                          and '--bootstrap-server' options
                                          only.
                                        Example: --bootstrap-server localhost:
                                          9092 --describe --group group1 --
                                          members
--offsets                               Describe the group and list all topic
                                          partitions in the group along with
                                          their offset lag. This is the
                                          default sub-action of and may be
                                          used with '--describe' and '--
                                          bootstrap-server' options only.
                                        Example: --bootstrap-server localhost:
                                          9092 --describe --group group1 --
                                          offsets
--reset-offsets                         Reset offsets of consumer group.
                                          Supports one consumer group at the
                                          time, and instances should be
                                          inactive
                                        Has 2 execution options: --dry-run
                                          (the default) to plan which offsets
                                          to reset, and --execute to update
                                          the offsets. Additionally, the --
                                          export option is used to export the
                                          results to a CSV format.
                                        You must choose one of the following
                                          reset specifications: --to-datetime,
                                          --by-period, --to-earliest, --to-
                                          latest, --shift-by, --from-file, --
                                          to-current.
                                        To define the scope use --all-topics
                                          or --topic. One scope must be
                                          specified unless you use '--from-
                                          file'.
--shift-by <Long: number-of-offsets>    Reset offsets shifting current offset
                                          by 'n', where 'n' can be positive or
                                          negative.
--state                                 Describe the group state. This option
                                          may be used with '--describe' and '--
                                          bootstrap-server' options only.
                                        Example: --bootstrap-server localhost:
                                          9092 --describe --group group1 --
                                          state
--timeout <Long: timeout (ms)>          The timeout that can be set for some
                                          use cases. For example, it can be
                                          used when describing the group to
                                          specify the maximum amount of time
                                          in milliseconds to wait before the
                                          group stabilizes (when the group is
                                          just created, or is going through
                                          some changes). (default: 5000)
--to-current                            Reset offsets to current offset.
--to-datetime <String: datetime>        Reset offsets to offset from datetime.
                                          Format: 'YYYY-MM-DDTHH:mm:SS.sss'
--to-earliest                           Reset offsets to earliest offset.
--to-latest                             Reset offsets to latest offset.
--to-offset <Long: offset>              Reset offsets to a specific offset.
--topic <String: topic>                 The topic whose consumer group
                                          information should be deleted or
                                          topic whose should be included in
                                          the reset offset process. In `reset-
                                          offsets` case, partitions can be
                                          specified using this format: `topic1:
                                          0,1,2`, where 0,1,2 are the
                                          partition to be included in the
                                          process. Reset-offsets also supports
                                          multiple topic inputs.
--verbose                               Provide additional information, if
                                          any, when describing the group. This
                                          option may be used with '--
                                          offsets'/'--members'/'--state' and
                                          '--bootstrap-server' options only.
                                        Example: --bootstrap-server localhost:
                                          9092 --describe --group group1 --
                                          members --verbose
--version                               Display Kafka version.
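A typical use of the reset options listed above, as a sketch (preview with --dry-run first, then apply with --execute; the group must have no active members):

$KAFKA_HOME/bin/kafka-consumer-groups.sh --bootstrap-server kafka:9092 --group example --topic test_kafka --reset-offsets --to-earliest --dry-run
$KAFKA_HOME/bin/kafka-consumer-groups.sh --bootstrap-server kafka:9092 --group example --topic test_kafka --reset-offsets --to-earliest --execute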
