python confluent_kafka: leave group caused by long consumer processing time


The problem: when a poll returns records and processing them takes too long, the consumer leaves the group.

The six parameters below form three pairs. Informally:

- 1 and 2 work together to tell the Kafka cluster the consumer's processing capacity: every batch of up to max.poll.records records must be fully processed within max.poll.interval.ms.
- 3 and 4 work together to tell the cluster how long a fetch may wait for data when the consumer has nothing to do; if the wait elapses with no data, an empty response is returned (so empty results must be handled).
- 5 and 6 work together to tell the cluster when the whole consumer should be considered dead, triggering a rebalance.

| Parameter | Meaning | Default | Notes |
| --- | --- | --- | --- |
| max.poll.interval.ms | maximum interval between polls | 300 s | each polled batch must be fully processed within this time |
| max.poll.records | maximum records per poll | 500 | must be set with the actual workload in mind |
| fetch.max.wait.ms | maximum wait time per fetch | | returns when either the wait time or the size threshold is reached first; if the time elapses with no data, an empty response is returned |
| fetch.min.bytes | minimum bytes per fetch | | returns when either the wait time or the size threshold is reached first |
| heartbeat.interval.ms | interval between heartbeats to the coordinator | 3 s | should be at most 1/3 of session.timeout.ms |
| session.timeout.ms | heartbeat timeout | 30 s | too large a value delays detection of genuinely dead consumers |
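The relationships between these pairs can be sanity-checked with a little arithmetic. The numbers below are the defaults from the table, not recommendations:

```python
# Defaults from the table above.
max_poll_interval_ms = 300_000   # 300 s
max_poll_records = 500
heartbeat_interval_ms = 3_000    # 3 s
session_timeout_ms = 30_000      # 30 s

# Parameters 1 & 2: the per-record time budget. If average processing time
# per record exceeds this, the consumer misses the poll deadline and
# leaves the group.
budget_ms = max_poll_interval_ms / max_poll_records
print(f"per-record budget: {budget_ms} ms")  # 600.0 ms

# Parameters 5 & 6: the heartbeat interval should be at most a third of
# the session timeout, so several heartbeats can be lost before the broker
# declares the consumer dead.
assert heartbeat_interval_ms <= session_timeout_ms / 3
```

With these defaults a consumer that needs, say, 1 s per record will exceed max.poll.interval.ms long before the batch is done; either lower max.poll.records or raise max.poll.interval.ms.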

auto.offset.reset values:

| Value | Meaning |
| --- | --- |
| earliest | if a partition has a committed offset, resume from it; otherwise consume from the beginning |
| latest | if a partition has a committed offset, resume from it; otherwise consume only newly produced messages |
| none | if every partition has a committed offset, resume from them; if any partition lacks one, throw an exception |
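The three policies amount to a per-partition fallback rule. A plain-Python illustration of the table above (not the library's API; `None` stands for "no committed offset"):

```python
def resume_positions(committed, policy, beginning=0):
    """Decide where each partition resumes consuming.

    committed: dict of partition -> committed offset, or None if none exists.
    policy:    'earliest', 'latest' or 'none', as in auto.offset.reset.
    Returns a dict of partition -> offset, where the string 'latest' stands
    for "only newly produced messages".
    """
    if policy == 'none' and any(off is None for off in committed.values()):
        # 'none': a single partition without a committed offset is an error
        raise RuntimeError("no committed offset for some partition")
    fallback = beginning if policy == 'earliest' else 'latest'
    return {p: off if off is not None else fallback
            for p, off in committed.items()}

print(resume_positions({0: 42, 1: None}, 'earliest'))  # {0: 42, 1: 0}
print(resume_positions({0: 42, 1: None}, 'latest'))    # {0: 42, 1: 'latest'}
```

Note that in all three cases a partition with a committed offset resumes from that offset; the policies only differ in what happens when no commit exists.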

```python
import traceback
from confluent_kafka import Consumer

from models.batchdownload import BatchDownload
from e_document import settings as config
from functools import wraps
from datetime import datetime
from multiprocessing import Pool
import time
import json
import os
from asyncdownload.asyncdownload import makePackage
from utils.logger import elogger

class consumer:
    def __init__(self, voucher_type_dict):
        conf = {
            'bootstrap.servers': f'{config.Zookeeper["Host"]}:{config.Zookeeper["Port"]}',
            'group.id': config.TopicGroup,
            'enable.auto.commit': False,
            'heartbeat.interval.ms': 3000,
            'session.timeout.ms': 30000,
            'max.poll.interval.ms': 30000,
            # 'auto.offset.reset': 'latest'
            'auto.offset.reset': 'earliest',
            'compression.type': 'gzip',        # enable compression
            'message.max.bytes': 10485760}     # raise the 1 MB limit so large messages get through
        self.cons = Consumer(conf)
        self.cons.subscribe([config.TopicName])
        self.voucher_type_dict = voucher_type_dict

    # consume messages from kafka and generate xml files
    def consumerMessage(self):
        while True:
            msg = self.cons.poll(1)
            if msg is None:
                continue
            if msg.error() is not None:
                print(msg.error())
                continue
            try:
                value = json.loads(msg.value())
                # commit before processing: at-most-once delivery
                print(self.cons.commit(message=msg, asynchronous=False))
                makePackage(value, self.voucher_type_dict)
            except Exception:
                # format_exc() captures the active exception; extract_stack()
                # would only record the current call stack
                elogger(log_name='asyncdownload').get_logger().warning(traceback.format_exc())

    def close(self):
        try:
            self.cons.close()
        except Exception:
            pass
```
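The loop above commits before `makePackage`, i.e. at-most-once: if processing fails after the commit, the message is lost. An at-least-once variant commits only after processing succeeds. A minimal sketch with poll/process/commit injected as plain callables, so the ordering can be seen (and tested) without a broker:

```python
def consume_at_least_once(poll, process, commit, max_messages=None):
    """Poll -> process -> commit, in that order.

    poll():       returns the next message value, or None when there is none.
    process(msg): does the real work; may raise.
    commit(msg):  commits the offset; called only after process() succeeded,
                  so a crash mid-processing redelivers the message
                  (at-least-once) instead of losing it (at-most-once).
    """
    handled = 0
    while max_messages is None or handled < max_messages:
        msg = poll()
        if msg is None:
            continue
        try:
            process(msg)
        except Exception:
            continue  # not committed: the message will be redelivered
        commit(msg)
        handled += 1
    return handled

# Stub run: the second message fails and is therefore never committed.
messages = iter(['a', 'boom', 'c'])
committed = []

def process(m):
    if m == 'boom':
        raise ValueError(m)

consume_at_least_once(lambda: next(messages, None), process,
                      committed.append, max_messages=2)
print(committed)  # ['a', 'c']
```

The trade-off: at-least-once can process the same message twice after a crash, so `makePackage` would need to be idempotent for this ordering to be safe.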

https://github.com/confluentinc/confluent-kafka-python/issues/552

Confluent Kafka is a Python client library for Apache Kafka, developed by Confluent. It provides an easy-to-use interface for interacting with Kafka clusters, allowing you to produce and consume messages. To use the confluent_kafka library, first install it:

```
pip install confluent-kafka
```

Once installed, import it in your Python code:

```python
from confluent_kafka import Producer, Consumer
```

To produce messages to a Kafka topic, create a `Producer` instance and use its `produce()` method. Here's an example:

```python
producer = Producer({'bootstrap.servers': 'localhost:9092'})

topic = 'my_topic'
message = 'Hello, Kafka!'

producer.produce(topic, message.encode('utf-8'))
producer.flush()
```

To consume messages from a Kafka topic, create a `Consumer` instance and use its `subscribe()` and `poll()` methods. Here's an example:

```python
consumer = Consumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'my_consumer_group',
    'auto.offset.reset': 'earliest'
})

consumer.subscribe(['my_topic'])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        print(f"Received message: {msg.value().decode('utf-8')}")
finally:
    consumer.close()
```

These are just basic examples to get you started with the confluent_kafka library; refer to the official documentation for more advanced usage and configuration options. Note that you need a running Kafka cluster to use them.
