Kafka FAQ

Source: https://cwiki.apache.org/confluence/display/KAFKA/FAQ


Producers
How should I set metadata.broker.list?

The broker list provided to the producer is only used for fetching metadata. Once the metadata response is received, the producer will send produce requests directly to the broker hosting the corresponding topic/partition, using the ip/port the broker registered in ZK. Any broker can serve metadata requests. The client is responsible for making sure that at least one of the brokers in metadata.broker.list is accessible. One way to achieve this is to use a VIP in a load balancer. If brokers change in a cluster, one can just update the hosts associated with the VIP.
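For example, in the producer config (host names below are placeholders):

metadata.broker.list=broker1.example.com:9092,broker2.example.com:9092
# or, if the brokers sit behind a load-balancer VIP:
# metadata.broker.list=kafka-vip.example.com:9092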


Why do I get QueueFullException in my producer when running in async mode?

This typically happens when the producer is trying to send messages quicker than the broker can handle. If the producer can’t block, one will have to add enough brokers so that they jointly can handle the load. If the producer can block, one can set queue.enqueueTimeout.ms in producer config to -1. This way, if the queue is full, the producer will block instead of dropping messages.
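For example, a producer configuration along these lines (values illustrative) makes an async producer block rather than drop messages when its internal queue fills up:

producer.type=async
# -1 = block indefinitely when the queue is full; in 0.8 this property is named
# queue.enqueue.timeout.ms
queue.enqueueTimeout.ms=-1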


I am using the ZK-based producer in 0.7 and I see data only produced on some of the brokers, but not all, why?

This is related to an issue in Kafka 0.7.x (see the discussion in http://apache.markmail.org/thread/c7tdalfketpusqkg). Basically, for a new topic, the producer bootstraps using all existing brokers. However, if a topic already exists on some brokers, the producer never bootstraps again when new brokers are added to the cluster. This means that the producer won’t see those new brokers. A workaround is to manually create the log directory for that topic on the new brokers.

Why are my brokers not receiving producer sent messages?

This happened when I tried to enable gzip compression by setting compression.codec to 1. With that change, not a single message was received by the brokers even though I had called producer.send() 1 million times. No error was printed by the producer and no error could be found in the broker’s kafka-request.log. By adding log4j.properties to my producer’s classpath and switching the log level to DEBUG, I captured the java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream thrown at the producer side. The error was resolved by adding the snappy jar to the producer’s classpath.

Why is data not evenly distributed among partitions when a partitioning key is not specified?

In the Kafka producer, a partition key can be specified to indicate the destination partition of the message. By default, a hashing-based partitioner is used to determine the partition id given the key, and custom partitioners can also be used.

To reduce # of open sockets, in 0.8.0 (https://issues.apache.org/jira/browse/KAFKA-1017), when the partitioning key is not specified or null, a producer will pick a random partition and stick to it for some time (default is 10 mins) before switching to another one. So, if there are fewer producers than partitions, at a given point of time, some partitions may not receive any data. To alleviate this problem, one can either reduce the metadata refresh interval or specify a message key and a customized random partitioner. For more detail see this thread http://mail-archives.apache.org/mod_mbox/kafka-dev/201310.mbox/%3CCAFbh0Q0aVh%2Bvqxfy7H-%2BMnRFBt6BnyoZk1LWBoMspwSmTqUKMg%40mail.gmail.com%3E
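As an illustration, with the 0.8 Java producer a key can be attached to each message so the hash-based partitioner derives the partition from it; a custom partitioner can be plugged in through the partitioner.class producer property. Topic and key below are made up, and producer is assumed to be an already configured Producer<String, String>:

// KeyedMessage(topic, key, message): the key is hashed to pick the partition
producer.send(new KeyedMessage<String, String>("user-events", "user-42", "payload"));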

Is it possible to delete a topic?

In the current version, 0.8.0, no. (You could clear the entire Kafka and zookeeper states to delete all topics and data.) But upcoming releases are expected to include a delete topic tool.


Consumers

Why does my consumer never get any data?

By default, when a consumer is started for the very first time, it ignores all existing data in a topic and will only consume new data coming in after the consumer is started. If this is the case, try sending some more data after the consumer is started. Alternatively, you can configure the consumer by setting auto.offset.reset to “smallest”.
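For example, in the consumer config:

# consume from the earliest available offset when no offset has been committed yet
auto.offset.reset=smallest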


Why does my consumer get an InvalidMessageSizeException?

This typically means that the “fetch size” of the consumer is too small. Each time the consumer pulls data from the broker, it reads bytes up to a configured limit. If that limit is smaller than the largest single message stored in Kafka, the consumer can’t decode the message properly and will throw an InvalidMessageSizeException. To fix this, increase the limit by setting the property “fetch.size” (0.7) / “fetch.message.max.bytes” (0.8) properly in config/consumer.properties. The default fetch.size is 300,000 bytes.
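For example, to allow single messages of up to 2 MB to be fetched (value illustrative), set in config/consumer.properties:

# 0.8 name; the 0.7 equivalent is fetch.size
fetch.message.max.bytes=2097152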


Should I choose multiple group ids or a single one for the consumers?

If all consumers use the same group id, messages in a topic are distributed among those consumers. In other words, each consumer will get a non-overlapping subset of the messages. Having more consumers in the same group increases the degree of parallelism and the overall throughput of consumption. See the next question for the choice of the number of consumer instances. On the other hand, if each consumer is in its own group, each consumer will get a full copy of all messages.


Why do some of the consumers in a consumer group never receive any messages?

Currently, a topic partition is the smallest unit that we distribute messages among consumers in the same consumer group. So, if the number of consumers is larger than the total number of partitions in a Kafka cluster (across all brokers), some consumers will never get any data. The solution is to increase the number of partitions on the broker.


Why are there many rebalances in my consumer log?

A typical reason for many rebalances is the consumer side GC. If so, you will see Zookeeper session expirations in the consumer log (grep for Expired). Occasional rebalances are fine. Too many rebalances can slow down the consumption and one will need to tune the java GC setting.


Can I predict the results of the consumer rebalance?

During the rebalance process, each consumer will execute the same deterministic algorithm to range-partition a sorted list of topic-partitions over a sorted list of consumer instances. This makes the whole rebalancing process deterministic. For example, if you only have one partition for a specific topic and are going to have two consumers consuming this topic, only one consumer will get the data from the partition of the topic; and even if the consumer named “Consumer1” is registered after the other consumer named “Consumer2”, it will replace “Consumer2”, gaining ownership of the partition in the rebalance.


Range partitioning works on a per-topic basis. For each topic, we lay out the available partitions in numeric order and the consumer threads in lexicographic order. We then divide the number of partitions by the total number of consumer streams (threads) to determine the number of partitions to allocate to each consumer. If it does not evenly divide, then the first few consumers will have one extra partition. For example, suppose there are two consumers C1 and C2 with two streams each, and there are five available partitions (p0, p1, p2, p3, p4). So each consumer thread will get at least one partition and the first consumer thread will get one extra partition. So the assignment will be: p0 -> C1-0, p1 -> C1-0, p2 -> C1-1, p3 -> C2-0, p4 -> C2-1
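A minimal sketch of the same range-assignment arithmetic (not the actual Kafka code), using the example above:

// Partitions and consumer threads are assumed to be pre-sorted.
int numPartitions = 5;                                  // p0..p4
String[] threads = {"C1-0", "C1-1", "C2-0", "C2-1"};    // sorted lexicographically
int perThread = numPartitions / threads.length;         // 1
int extra = numPartitions % threads.length;             // 1 -> first thread gets one more
int p = 0;
for (int i = 0; i < threads.length; i++) {
    int count = perThread + (i < extra ? 1 : 0);
    for (int j = 0; j < count; j++, p++) {
        System.out.println("p" + p + " -> " + threads[i]);
    }
}
// prints: p0 -> C1-0, p1 -> C1-0, p2 -> C1-1, p3 -> C2-0, p4 -> C2-1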


My consumer seems to have stopped, why?

First, try to figure out if the consumer has really stopped or is just slow. You can use our ConsumerOffsetChecker tool:

bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group consumer-group1 --zkconnect zkhost:zkport --topic topic1
consumer-group1,topic1,0-0 (Group,Topic,BrokerId-PartitionId)
Owner = consumer-group1-consumer1
Consumer offset = 70121994703
= 70,121,994,703 (65.31G)
Log size = 70122018287
= 70,122,018,287 (65.31G)
Consumer lag = 23584
= 23,584 (0.00G)

In 0.8, you can also monitor the MaxLag and the MinFetch jmx bean (see http://kafka.apache.org/documentation.html#monitoring).

If consumer offset is not moving after some time, then consumer is likely to have stopped. If consumer offset is moving, but consumer lag (difference between the end of the log and the consumer offset) is increasing, the consumer is slower than the producer. If the consumer is slow, the typical solution is to increase the degree of parallelism in the consumer. This may require increasing the number of partitions of a topic.


The high-level consumer will block if

  • there are no more messages available

    • The ConsumerOffsetChecker will show that the log offset of the partitions being consumed does not change on the broker
  • the next message available is larger than the maximum fetch size you have specified

    • One possibility of a stalled consumer is that the fetch size in the consumer is smaller than the largest message in the broker. You can use the DumpLogSegments tool to figure out the largest message size and set fetch.size in the consumer config accordingly.
  • your client code simply stops pulling messages from the iterator (the blocking queue will fill up).
    • One of the typical causes is that the application code that consumes messages somehow died and therefore killed the consumer thread. We recommend using a try/catch clause to log all Throwable in the consumer logic (a minimal sketch follows this list).
  • consumer rebalancing fails (you will see ConsumerRebalanceFailedException): This is due to conflicts when two consumers are trying to own the same topic partition. The log will show you what caused the conflict (search for “conflict in “).
  • If your consumer subscribes to many topics and your ZK server is busy, this could be caused by consumers not having enough time to see a consistent view of all consumers in the same group. If this is the case, try increasing rebalance.max.retries and rebalance.backoff.ms.
  • Another reason could be that one of the consumers is hard killed. Other consumers during rebalancing won’t realize that consumer is gone until zookeeper.session.timeout.ms has passed. In that case, make sure that rebalance.max.retries * rebalance.backoff.ms > zookeeper.session.timeout.ms.
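For the case where the client code stops pulling messages, a minimal 0.8 high-level consumer loop along these lines (topic, group and ZooKeeper address are placeholders) keeps the thread alive and logs whatever the processing logic throws:

import java.util.Collections;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class SafeConsumerLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zkhost:2181");   // placeholder
        props.put("group.id", "consumer-group1");        // placeholder
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        KafkaStream<byte[], byte[]> stream =
            connector.createMessageStreams(Collections.singletonMap("topic1", 1))
                     .get("topic1").get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> msg = it.next();
            try {
                // process(msg): application logic goes here
            } catch (Throwable t) {
                // log everything so a failing handler does not silently kill the thread
                t.printStackTrace();
            }
        }
    }
}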


Why are messages delayed in my consumer?

This could be a general throughput issue. If so, you can use more consumer streams (may need to increase # partitions) or make the consumption logic more efficient.


Another potential issue is when multiple topics are consumed by the same consumer connector. Internally, we have an in-memory queue for each topic, which feeds the consumer iterators. We have a single fetcher thread per broker that issues multi-fetch requests for all topics. The fetcher thread iterates over the fetched data and tries to put the data for different topics into its own in-memory queue. If one of the consumers is slow, eventually its corresponding in-memory queue will be full. As a result, the fetcher thread will block on putting data into that queue. Until that queue has more space, no data will be put into the queue for other topics. Therefore, those other topics, even if they have less volume, will have their consumption delayed because of that. To address this issue, either make sure that all consumers can keep up, or use separate consumer connectors for different topics, as sketched below.
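A sketch of the second suggestion, using the same 0.8 high-level consumer API as the example earlier in this document (topic and group names are placeholders): give each topic its own connector so that one slow consumer cannot stall a fetcher queue shared with other topics.

// One connector, and therefore one set of in-memory queues and fetchers, per topic.
Properties props = new Properties();
props.put("zookeeper.connect", "zkhost:2181");   // placeholder
props.put("group.id", "orders-consumers");       // placeholder
ConsumerConnector ordersConnector =
    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
ordersConnector.createMessageStreams(Collections.singletonMap("orders", 1));
// ...and an independent connector, created the same way, for each additional topic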


How to improve the throughput of a remote consumer?

If the consumer is in a different data center from the broker, you may need to tune the socket buffer size to amortize the long network latency. Specifically, for Kafka 0.7, you can increase socket.receive.buffer in the broker, and socket.buffersize and fetch.size in the consumer. For Kafka 0.8, the consumer properties are socket.receive.buffer.bytes and fetch.message.max.bytes.
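For example, for a 0.8 consumer in a remote data center one might start with values such as (illustrative only):

# larger TCP receive buffer to cover the bandwidth-delay product of the long link
socket.receive.buffer.bytes=1048576
# allow more data to be returned per fetch request
fetch.message.max.bytes=2097152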


How can I rewind the offset in the consumer?

If you are using the high level consumer, currently there is no api to reset the offsets in the consumer. The only way is to stop all consumers and reset the offsets for that consumer group in ZK manually. We do have an import/export offset tool that you can use (bin/kafka-run-class.sh kafka.tools.ImportZkOffsets and bin/kafka-run-class.sh kafka.tools.ExportZkOffsets). To get the offsets for importing, we have a GetOffsetShell tool (bin/kafka-run-class.sh kafka.tools.GetOffsetShell) that allows you to get the offsets before a given timestamp. The offsets returned there are the offsets corresponding to the first message of each log segment, so the granularity is very coarse.
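For example, to print the latest (-1) or earliest (-2) segment-boundary offsets for a topic (broker address is a placeholder; run the class without arguments to see its exact options):

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list broker1:9092 --topic topic1 --time -1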


I don’t want my consumer’s offsets to be committed automatically. Can I manually manage my consumer’s offsets?

You can turn off the autocommit behavior (which is on by default) by setting auto.commit.enable=false in your consumer’s config. There are a couple of caveats to keep in mind when doing this:

  • You will manually commit offsets using the consumer’s commitOffsets API (see the sketch after this list). Note that this will commit offsets for all partitions that the consumer currently owns. The consumer connector does not currently provide a more fine-grained commit API.

  • If a consumer rebalances for any reason it will fetch the last committed offsets for any partitions that it ends up owning. If you have not yet committed any offsets for these partitions, then it will use the latest or earliest offset depending on whether auto.offset.reset is set to largest or smallest (respectively).
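A minimal sketch of manual offset management (auto.commit.enable=false in the consumer config; connector and stream are the ConsumerConnector and KafkaStream set up as in the consumer example earlier, and process() is a placeholder):

ConsumerIterator<byte[], byte[]> it = stream.iterator();
while (it.hasNext()) {
    process(it.next());          // placeholder for application logic
    connector.commitOffsets();   // commits offsets for ALL partitions this connector owns
}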


What is the relationship between fetch.wait.max.ms and socket.timeout.ms on the consumer?

fetch.wait.max.ms controls how long a fetch request will wait on the broker in the normal case. The issue is that if there is a hard crash on the broker (host is down), the client may not realize this immediately since TCP will try very hard to maintain the socket connection. By setting socket.timeout.ms, we allow the client to break out sooner in this case. Typically, socket.timeout.ms should be set to be at least fetch.wait.max.ms or a bit larger. It’s possible to specify an indefinite long poll by setting fetch.wait.max.ms to a very large value. It’s not recommended right now due to https://issues.apache.org/jira/browse/KAFKA-1016. The consumer-config documentation states that “The actual timeout set will be max.fetch.wait + socket.timeout.ms.” - however, that has not been the case in the code for a while; https://issues.apache.org/jira/browse/KAFKA-1147 was filed to fix it.
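For example (values illustrative), keeping socket.timeout.ms comfortably above fetch.wait.max.ms in the consumer config:

fetch.wait.max.ms=500
socket.timeout.ms=30000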


How do I get exactly-once messaging from Kafka?

Exactly once semantics has two parts: avoiding duplication during data production and avoiding duplicates during data consumption.

There are two approaches to getting exactly once semantics during data production:

1. Use a single-writer per partition and every time you get a network error check the last message in that partition to see if your last write succeeded.

2. Include a primary key (UUID or something) in the message and deduplicate on the consumer.

If you do one of these things, the log that Kafka hosts will be duplicate-free. However, reading without duplicates depends on some co-operation from the consumer too. If the consumer is periodically checkpointing its position then if it fails and restarts it will restart from the checkpointed position. Thus if the data output and the checkpoint are not written atomically it will be possible to get duplicates here as well. This problem is particular to your storage system. For example, if you are using a database you could commit these together in a transaction. The HDFS loader Camus that LinkedIn wrote does something like this for Hadoop loads. The other alternative that doesn’t require a transaction is to store the offset with the data loaded and deduplicate using the topic/partition/offset combination.
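A minimal sketch of the last alternative, de-duplicating on the consumer with the topic/partition/offset combination as the key (the in-memory set is for illustration only; a real system would keep this state in the same durable store as the output, updated atomically with it; stream is the KafkaStream from the consumer example earlier and process() is a placeholder):

java.util.Set<String> seen = new java.util.HashSet<String>();
ConsumerIterator<byte[], byte[]> it = stream.iterator();
while (it.hasNext()) {
    MessageAndMetadata<byte[], byte[]> msg = it.next();
    String key = msg.topic() + "/" + msg.partition() + "/" + msg.offset();
    if (seen.add(key)) {
        process(msg);   // placeholder; runs at most once per topic/partition/offset
    }
}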

I think there are two improvements that would make this a lot easier:

1. Producer idempotence could be done automatically and much more cheaply by optionally integrating support for this on the server.

2. The existing high-level consumer doesn’t expose a lot of the more fine-grained control of offsets (e.g. to reset your position). We will be working on that soon.

Why can’t I specify per-topic stream parallelism when using a wildcard topic filter, as I can with a static topic list?

The reason we do not have per-topic parallelism specification with a wildcard is that, with the wildcard topicFilter, we will not know exactly which topics to consume at construction time, and hence there is no way to specify per-topic stream counts.

How to consume large messages?

First you need to make sure these large messages can be accepted at the Kafka brokers. The broker config message.max.bytes controls the maximum size of a message that can be accepted at the broker; any single message (including the wrapper message for a compressed message set) whose size is larger than this value will be rejected for producing.

Then you need to make sure consumers can fetch such large messages from the brokers. The consumer config fetch.message.max.bytes controls the maximum number of bytes a consumer issues in one fetch. If it is less than a message’s size, the fetch will get stuck on that message and keep retrying.
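For example, to handle messages of up to 10 MB (value illustrative):

# broker (server.properties)
message.max.bytes=10485760
# if replication is used, the broker's replica.fetch.max.bytes should be at least as large
# consumer
fetch.message.max.bytes=10485760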

How do we migrate to committing offsets to Kafka (rather than Zookeeper) in 0.8.2?

(Answer provided by Jon Bringhurst on the mailing list)

A summary of the migration procedure is:

1) Upgrade your brokers and set dual.commit.enabled=false and offsets.storage=zookeeper (Commit offsets to Zookeeper Only).

2) Set dual.commit.enabled=true and offsets.storage=kafka and restart (Commit offsets to Zookeeper and Kafka).

3) Set dual.commit.enabled=false and offsets.storage=kafka and restart (Commit offsets to Kafka only).

Brokers

How does Kafka depend on Zookeeper?

Starting from 0.9, we are removing all the Zookeeper dependency from the clients (for details one can check this page). However, the brokers will continue to depend heavily on Zookeeper for:

1. Server failure detection.

2. Data partitioning.

3. In-sync data replication.

4. Consumer membership management.

Once the Zookeeper quorum is down, brokers could end up in a bad state and be unable to serve client requests normally. Although the Kafka brokers should be able to return to a normal state automatically when the Zookeeper quorum recovers, there are still a few corner cases where they cannot, and a hard kill-and-recovery is required to bring them back to normal. Hence it is recommended to closely monitor your zookeeper cluster and provision it so that it is performant.

Also note that if Zookeeper was hard killed previously, upon restart it may not successfully load all the data and update its creation timestamps. To resolve this you can clean up the data directory of the Zookeeper before restarting (if you have critical metadata such as consumer offsets you would need to export / import them before / after you clean up the Zookeeper data and restart the server).

Why do I see error “Should not set log end offset on partition” in the broker log?

Typically, you will see errors like the following.

kafka.common.KafkaException: Should not set log end offset on partition [test,22]’s local replica 4

ERROR [ReplicaFetcherThread-0-6], Error for partition [test,22] to broker 6:class kafka.common.UnknownException(kafka.server.ReplicaFetcherThread)

A common problem is that more than one broker registered the same host/port in Zookeeper. As a result, the replica fetcher is confused when fetching data from the leader. To verify that, you can use a Zookeeper client shell to list the registration info of each broker. The Zookeeper path and the format of the broker registration is described in Kafka data structures in Zookeeper. You want to make sure that all the registered brokers have unique host/port.

Why does controlled shutdown fail?

If a controlled shutdown attempt fails, you will see error messages like the following in your broker logs

WARN [Kafka Server 0], Retrying controlled shutdown after the previous attempt failed… (kafka.server.KafkaServer)

WARN [Kafka Server 0], Proceeding to do an unclean shutdown as all the controlled shutdown attempts failed

In addition to these error messages, if you also see SocketTimeoutExceptions, it indicates that the controller could not finish moving the leaders for all partitions on the broker within controller.socket.timeout.ms. The solution is to increase controller.socket.timeout.ms as well as increase controlled.shutdown.retry.backoff.ms and controlled.shutdown.max.retries to give enough time for the controlled shutdown to complete. If you don’t see SocketTimeoutExceptions, it could indicate a problem in your cluster state or a bug as this happens when the controller is not able to move the leaders to another broker for several retries.

Why can’t my consumers/producers connect to the brokers?

When a broker starts up, it registers its ip/port in ZK. You need to make sure the registered ip is consistent with what’s listed in metadata.broker.list in the producer config. By default, the registered ip is given by InetAddress.getLocalHost.getHostAddress. Typically, this should return the real ip of the host. However, sometimes (e.g., in EC2), the returned ip is an internal one and can’t be connected to from outside. The solution is to explicitly set the host ip to be registered in ZK by setting the “hostname” property in server.properties. In another rare case where the binding host/port is different from the host/port for client connection, you can set advertised.host.name and advertised.port for client connection.
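For example, in server.properties (host name and port are placeholders):

# address registered in ZK and handed to clients in metadata responses
advertised.host.name=kafka1.example.com
advertised.port=9092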

Why do partition leaders sometimes migrate by themselves?

During a broker soft failure, e.g., a long GC, its session on ZooKeeper may time out and the broker is hence treated as failed. Upon detecting this situation, Kafka will migrate all the partition leaderships it currently hosts to other replicas. And once the broker resumes from the soft failure, it can only act as the follower replica of the partitions it originally led.

To move the leadership back to the brokers, one can use the preferred-leader-election tool here. Also, in 0.8.2 a new feature will be added which periodically triggers this functionality (details here).

To reduce Zookeeper session expiration, either tune the GC or increase zookeeper.session.timeout.ms in the broker config.
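The preferred replica leader election tool can be run like this (ZooKeeper address is a placeholder):

bin/kafka-preferred-replica-election.sh --zookeeper zkhost:2181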

How many topics can I have?

Unlike many messaging systems, Kafka topics are meant to scale up arbitrarily. Hence we encourage fewer large topics rather than many small topics. So, for example, if we were storing notifications for users we would encourage a design with a single notifications topic partitioned by user id rather than a separate topic per user.

The actual scalability is for the most part determined by the number of total partitions across all topics not the number of topics itself (see the question below for details).

How do I choose the number of partitions for a topic?

There isn’t really a right answer; we expose this as an option because it is a tradeoff. The simple answer is that the partition count determines the maximum consumer parallelism and so you should set a partition count based on the maximum consumer parallelism you would expect to need (i.e. over-provision). Clusters with up to 10k total partitions are quite workable. Beyond that we don’t aggressively test (it should work, but we can’t guarantee it).

Here is a more complete list of tradeoffs to consider:

  • A partition is basically a directory of log files.

  • Each partition must fit entirely on one machine. So if you have only one partition in your topic you cannot scale your write rate or retention beyond the capability of a single machine. If you have 1000 partitions you could potentially use 1000 machines.

  • Each partition is totally ordered. If you want a total order over all writes you probably want to have just one partition.

  • Each partition is not consumed by more than one consumer thread/process in each consumer group. This allows each process to consume in a single-threaded fashion to guarantee ordering to the consumer within the partition (if we split up a partition of ordered messages and handed them out to multiple consumers, even though the messages were stored in order they would be processed out of order at times).

  • Many partitions can be consumed by a single process, though. So you can have 1000 partitions all consumed by a single process.

  • Another way to say the above is that the partition count is a bound on the maximum consumer parallelism.

  • More partitions will mean more files and hence can lead to smaller writes if you don’t have enough memory to properly buffer the writes and coalesce them into larger writes.

  • Each partition corresponds to several znodes in zookeeper. Zookeeper keeps everything in memory so this can eventually get out of hand.

  • More partitions means longer leader fail-over time. Each partition can be handled quickly (milliseconds) but with thousands of partitions this can add up.

  • When we checkpoint the consumer position we store one offset per partition, so the more partitions the more expensive the position checkpoint is.

  • It is possible to later expand the number of partitions BUT when we do so we do not attempt to reorganize the data in the topic. So if you are depending on key-based semantic partitioning in your processing you will have to manually copy data from the old low-partition topic to a new higher-partition topic if you later need to expand.

Note that I/O and file counts are really about #partitions/#brokers, so adding brokers will fix problems there; but zookeeper handles all partitions for the whole cluster so adding machines doesn’t help.

Why do I see lots of Leader not local exceptions on the broker during controlled shutdown?

This happens when the producer clients are using num.acks=0. When the leadership for a partition is changed, the clients (producer and consumer) get an error when they try to produce or consume from the old leader while waiting for a response. The client then refreshes the partition metadata from zookeeper and gets the new leader for the partition and retries. This does not work for the producer client when ack = 0. This is because the producer does not wait for a response and hence does not know about the leadership change. The client would end up losing messages till the shutdown broker is brought back up. This issue is fixed in KAFKA-955.

How to reduce churn in ISR? When does a broker leave the ISR?

ISR is a set of replicas that are fully synced up with the leader. In other words, every replica in ISR has all messages that are committed. In an ideal system, ISR should always include all replicas unless there is a real failure. A replica will be dropped out of ISR if it diverges from the leader. This is controlled by two parameters: replica.lag.time.max.ms and replica.lag.max.messages. The former is typically set to a value that reliably detects the failure of a broker. We have a min fetch rate JMX bean in the broker. If that rate is n, set the former to a value larger than 1/n * 1000. The latter is typically set to the observed max lag (a JMX bean) in the follower. Note that if replica.lag.max.messages is too large, it can increase the time to commit a message. If latency becomes a problem, you can increase the number of partitions in a topic.

If a replica constantly drops out of and rejoins the ISR, you may need to increase replica.lag.max.messages. If a replica stays out of the ISR for a long time, it may indicate that the follower is not able to fetch data as fast as data is accumulated at the leader. You can increase the follower’s fetch throughput by setting a larger value for num.replica.fetchers.
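For example (values illustrative), in the broker config:

replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
num.replica.fetchers=2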

After bouncing a broker, why do I see LeaderNotAvailable or NotLeaderForPartition exceptions on startup?

If you don’t use controlled shutdown, some partitions that had leaders on the broker being bounced go offline immediately. The controller takes some time to elect leaders and notify the brokers to assume the new leader role. Following this, clients take some time to send metadata requests and discover the new leaders. If the broker is stopped and restarted quickly, clients that have not discovered the new leader keep sending requests to the newly restarted broker. The exceptions are thrown since the newly restarted broker is not the leader for any partition.

Can I add new brokers dynamically to a cluster?

Yes, new brokers can be added online to a cluster. Those new brokers won’t have any data initially until either some new topics are created or some replicas are moved to them using the partition reassignment tool.

How do I accurately get offsets of messages for a certain timestamp using OffsetRequest?

Kafka allows querying offsets of messages by time and it does so at segment granularity. The timestamp parameter is the unix timestamp and querying the offset by timestamp returns the latest possible offset of the message that is appended no later than the given timestamp. There are 2 special values of the timestamp - latest and earliest. For any other value of the unix timestamp, Kafka will get the starting offset of the log segment that is created no later than the given timestamp. Due to this, and since the offset request is served only at segment granularity, the offset fetch request returns less accurate results for larger segment sizes.

For more accurate results, you may configure the log segment size based on time (log.roll.ms) instead of size (log.segment.bytes). However care should be taken since doing so might increase the number of file handlers due to frequent log segment rolling.
