max.poll.interval.ms is the maximum allowed interval between calls to poll, and it affects how long a rebalance can take.
If a consumer's heartbeats are healthy but it never pulls messages, an abnormal condition could leave a partition held by that consumer forever without being consumed. This parameter exists to handle that case: if the client does not poll within the maximum interval, it is removed from the group and its partitions are reassigned.
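For example (a sketch; 300000 ms, i.e. five minutes, is the default):

    props.put("max.poll.interval.ms", "300000"); // evicted from the group if poll() is not called within this interval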
/**
* A client that consumes records from a Kafka cluster.
* <p>
* This client transparently handles the failure of Kafka brokers, and transparently adapts as topic partitions
* it fetches migrate within the cluster. This client also interacts with the broker to allow groups of
* consumers to load balance consumption using <a href="#consumergroups">consumer groups</a>.
* <p>
* The consumer maintains TCP connections to the necessary brokers to fetch data.
* Failure to close the consumer after use will leak these connections.
* The consumer is not thread-safe. See <a href="#multithreaded">Multi-threaded Processing</a> for more details.
*
* <h3>Cross-Version Compatibility</h3>
* This client can communicate with brokers that are version 0.10.0 or newer. Older or newer brokers may not support
* certain features. For example, 0.10.0 brokers do not support offsetsForTimes, because this feature was added
* in version 0.10.1. You will receive an {@link org.apache.kafka.common.errors.UnsupportedVersionException}
* when invoking an API that is not available on the running broker version.
* <p>
*
* <h3>Offsets and Consumer Position</h3>
* Kafka maintains a numerical offset for each record in a partition. This offset acts as a unique identifier of
* a record within that partition, and also denotes the position of the consumer in the partition. For example, a consumer
* which is at position 5 has consumed records with offsets 0 through 4 and will next receive the record with offset 5. There
* are actually two notions of position relevant to the user of the consumer:
* <p>
* The {@link #position(TopicPartition) position} of the consumer gives the offset of the next record that will be given
* out. It will be one larger than the highest offset the consumer has seen in that partition. It automatically advances
* every time the consumer receives messages in a call to {@link #poll(long)}.
* <p>
* The {@link #commitSync() committed position} is the last offset that has been stored securely. Should the
* process fail and restart, this is the offset that the consumer will recover to. The consumer can either automatically commit
 * offsets periodically, or it can choose to control this committed position manually by calling one of the commit APIs
* (e.g. {@link #commitSync() commitSync} and {@link #commitAsync(OffsetCommitCallback) commitAsync}).
* <p>
* This distinction gives the consumer control over when a record is considered consumed. It is discussed in further
* detail below.
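 * <p>
 * A minimal sketch of inspecting both notions of position (the topic and partition are
 * illustrative, and the partition is assumed to be currently assigned to this consumer):
 * <pre>
 * TopicPartition partition = new TopicPartition("foo", 0);
 * long position = consumer.position(partition);                // offset of the next record to be fetched
 * OffsetAndMetadata committed = consumer.committed(partition); // last committed offset, or null if none
 * </pre>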
*
* <h3><a name="consumergroups">Consumer Groups and Topic Subscriptions</a></h3>
*
* Kafka uses the concept of <i>consumer groups</i> to allow a pool of processes to divide the work of consuming and
* processing records. These processes can either be running on the same machine or they can be
* distributed over many machines to provide scalability and fault tolerance for processing. All consumer instances
* sharing the same <code>group.id</code> will be part of the same consumer group.
* <p>
* Each consumer in a group can dynamically set the list of topics it wants to subscribe to through one of the
* {@link #subscribe(Collection, ConsumerRebalanceListener) subscribe} APIs. Kafka will deliver each message in the
* subscribed topics to one process in each consumer group. This is achieved by balancing the partitions between all
* members in the consumer group so that each partition is assigned to exactly one consumer in the group. So if there
* is a topic with four partitions, and a consumer group with two processes, each process would consume from two partitions.
* <p>
* Membership in a consumer group is maintained dynamically: if a process fails, the partitions assigned to it will
* be reassigned to other consumers in the same group. Similarly, if a new consumer joins the group, partitions will be moved
* from existing consumers to the new one. This is known as <i>rebalancing</i> the group and is discussed in more
* detail <a href="#failuredetection">below</a>. Group rebalancing is also used when new partitions are added
* to one of the subscribed topics or when a new topic matching a {@link #subscribe(Pattern, ConsumerRebalanceListener) subscribed regex}
* is created. The group will automatically detect the new partitions through periodic metadata refreshes and
* assign them to members of the group.
* <p>
* Conceptually you can think of a consumer group as being a single logical subscriber that happens to be made up of
* multiple processes. As a multi-subscriber system, Kafka naturally supports having any number of consumer groups for a
* given topic without duplicating data (additional consumers are actually quite cheap).
* <p>
* This is a slight generalization of the functionality that is common in messaging systems. To get semantics similar to
* a queue in a traditional messaging system all processes would be part of a single consumer group and hence record
* delivery would be balanced over the group like with a queue. Unlike a traditional messaging system, though, you can
* have multiple such groups. To get semantics similar to pub-sub in a traditional messaging system each process would
* have its own consumer group, so each process would subscribe to all the records published to the topic.
* <p>
* In addition, when group reassignment happens automatically, consumers can be notified through a {@link ConsumerRebalanceListener},
* which allows them to finish necessary application-level logic such as state cleanup, manual offset
* commits, etc. See <a href="#rebalancecallback">Storing Offsets Outside Kafka</a> for more details.
* <p>
* It is also possible for the consumer to <a href="#manualassignment">manually assign</a> specific partitions
* (similar to the older "simple" consumer) using {@link #assign(Collection)}. In this case, dynamic partition
* assignment and consumer group coordination will be disabled.
*
* <h3><a name="failuredetection">Detecting Consumer Failures</a></h3>
*
* After subscribing to a set of topics, the consumer will automatically join the group when {@link #poll(long)} is
* invoked. The poll API is designed to ensure consumer liveness. As long as you continue to call poll, the consumer
* will stay in the group and continue to receive messages from the partitions it was assigned. Underneath the covers,
* the consumer sends periodic heartbeats to the server. If the consumer crashes or is unable to send heartbeats for
* a duration of <code>session.timeout.ms</code>, then the consumer will be considered dead and its partitions will
* be reassigned.
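 * <p>
 * For example (a sketch; the ten-second value shown is illustrative):
 * <pre>
 * props.put("session.timeout.ms", "10000"); // declared dead if no heartbeat arrives within 10 seconds
 * </pre>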
* <p>
* It is also possible that the consumer could encounter a "livelock" situation where it is continuing
* to send heartbeats, but no progress is being made. To prevent the consumer from holding onto its partitions
* indefinitely in this case, we provide a liveness detection mechanism using the <code>max.poll.interval.ms</code>
* setting. Basically if you don't call poll at least as frequently as the configured max interval,
* then the client will proactively leave the group so that another consumer can take over its partitions. When this happens,
* you may see an offset commit failure (as indicated by a {@link CommitFailedException} thrown from a call to {@link #commitSync()}).
* This is a safety mechanism which guarantees that only active members of the group are able to commit offsets.
* So to stay in the group, you must continue to call poll.
* <p>
* The consumer provides two configuration settings to control the behavior of the poll loop:
* <ol>
* <li><code>max.poll.interval.ms</code>: By increasing the interval between expected polls, you can give
* the consumer more time to handle a batch of records returned from {@link #poll(long)}. The drawback
* is that increasing this value may delay a group rebalance since the consumer will only join the rebalance
* inside the call to poll. You can use this setting to bound the time to finish a rebalance, but
* you risk slower progress if the consumer cannot actually call {@link #poll(long) poll} often enough.</li>
* <li><code>max.poll.records</code>: Use this setting to limit the total records returned from a single
* call to poll. This can make it easier to predict the maximum that must be handled within each poll
* interval. By tuning this value, you may be able to reduce the poll interval, which will reduce the
 * impact of group rebalancing (see the sketch after this list).</li>
* </ol>
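 * <p>
 * A configuration sketch combining both settings (the values shown are illustrative):
 * <pre>
 * props.put("max.poll.interval.ms", "300000"); // allow up to five minutes of processing between polls
 * props.put("max.poll.records", "500");        // cap how many records a single call to poll can return
 * </pre>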
* <p>
* For use cases where message processing time varies unpredictably, neither of these options may be sufficient.
* The recommended way to handle these cases is to move message processing to another thread, which allows
* the consumer to continue calling {@link #poll(long) poll} while the processor is still working. Some care must be taken
* to ensure that committed offsets do not get ahead of the actual position. Typically, you must disable automatic
* commits and manually commit processed offsets for records only after the thread has finished handling them
* (depending on the delivery semantics you need). Note also that you will need to {@link #pause(Collection) pause}
 * the partition so that no new records are received from poll until after the thread has finished handling those
* previously returned.
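 * <p>
 * A minimal sketch of this hand-off (here {@code executor}, {@code processAsync} and the
 * {@code finished} flag are hypothetical application code, not part of the consumer API; note
 * that all consumer methods are still invoked from the polling thread):
 * <pre>
 * ConsumerRecords<String, String> records = consumer.poll(100);
 * if (!records.isEmpty()) {
 *     consumer.pause(records.partitions());          // stop fetching until processing completes
 *     executor.submit(() -> processAsync(records));  // hand the batch to a worker thread
 * }
 * if (finished.getAndSet(false)) {                   // set by the worker when it is done
 *     consumer.commitSync();                         // commit only after processing has finished
 *     consumer.resume(consumer.paused());            // start fetching again
 * }
 * </pre>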
*
* <h3>Usage Examples</h3>
* The consumer APIs offer flexibility to cover a variety of consumption use cases. Here are some examples to
* demonstrate how to use them.
*
* <h4>Automatic Offset Committing</h4>
 * This example demonstrates a simple usage of Kafka's consumer API, relying on automatic offset committing.
* <p>
* <pre>
* Properties props = new Properties();
* props.put("bootstrap.servers", "localhost:9092");
* props.put("group.id", "test");
* props.put("enable.auto.commit", "true");
* props.put("auto.commit.interval.ms", "1000");
* props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
* props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
* KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
* consumer.subscribe(Arrays.asList("foo", "bar"));
 * while (true) {
 *     ConsumerRecords<String, String> records = consumer.poll(100);
 *     for (ConsumerRecord<String, String> record : records)
 *         System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
 * }
* </pre>
*
* The connection to the cluster is bootstrapped by specifying a list of one or more brokers to contact using the
* configuration <code>bootstrap.servers</code>. This list is just used to discover the rest of the brokers in the
* cluster and need not be an exhaustive list of servers in the cluster (though you may want to specify more than one in
* case there are servers down when the client is connecting).
* <p>
* Setting <code>enable.auto.commit</code> means that offsets are committed automatically with a frequency controlled by
* the config <code>auto.commit.interval.ms</code>.
* <p>
* In this example the consumer is subscribing to the topics <i>foo</i> and <i>bar</i> as part of a group of consumers
* called <i>test</i> as configured with <code>group.id</code>.
* <p>
* The deserializer settings specify how to turn bytes into objects. For example, by specifying string deserializers, we
* are saying that our record's key and value will just be simple strings.
*
* <h4>Manual Offset Control</h4>
*
* Instead of relying on the consumer to periodically commit consumed offsets, users can also control when records
* should be considered as consumed and hence commit their offsets. This is useful when the consumption of the messages
 * is coupled with some processing logic, and hence a message should not be considered consumed until its processing is complete.
* <p>
* <pre>
* Properties props = new Properties();
* props.put("bootstrap.servers", "localhost:9092");
* props.put("group.id", "test");
* props.put("enable.auto.commit", "false");
* props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
* props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
* KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
* consumer.subscribe(Arrays.asList("foo", "bar"));
* final int minBatchSize = 200;
* List<ConsumerRecord<String, String>> buffer = new ArrayList<>();
 * while (true) {
 *     ConsumerRecords<String, String> records = consumer.poll(100);
 *     for (ConsumerRecord<String, String> record : records) {
 *         buffer.add(record);
 *     }
 *     if (buffer.size() >= minBatchSize) {
 *         insertIntoDb(buffer);
 *         consumer.commitSync();
 *         buffer.clear();
 *     }
 * }
* </pre>
*
* In this example we will consume a batch of records and batch them up in memory. When we have enough records
* batched, we will insert them into a database. If we allowed offsets to auto commit as in the previous example, records
* would be considered consumed after they were returned to the user in {@link #poll(long) poll}. It would then be possible
* for our process to fail after batching the records, but before they had been inserted into the database.
* <p>
* To avoid this, we will manually commit the offsets only after the corresponding records have been inserted into the
* database. This gives us exact control of when a record is considered consumed. This raises the opposite possibility:
* the process could fail in the interval after the insert into the database but before the commit (even though this
* would likely just be a few milliseconds, it is a possibility). In this case the process that took over consumption
 * would consume from the last committed offset and would repeat the insert of the last batch of data. Used in this way
* Kafka provides what is often called "at-least-once" delivery guarantees, as each record will likely be delivered one
* time but in failure cases could be duplicated.
* <p>
* <b>Note: Using automatic offset commits can also give you "at-least-once" delivery, but the requirement is that
* you must consume all data returned from each call to {@link #poll(long)} before any subsequent calls, or before
* {@link #close() closing} the consumer. If you fail to do either of these, it is possible for the committed offset
* to get ahead of the consumed position, which results in missing records. The advantage of using manual offset
* control is that you have direct control over when a record is considered "consumed."</b>
* <p>
* The above example uses {@link #commitSync() commitSync} to mark all received records as committed. In some cases
* you may wish to have even finer control over which records have been committed by specifying an offset explicitly.
 * In the example below we commit the offset after we finish handling the records in each partition.
* <p>
* <pre>
 * try {
 *     while (running) {
 *         ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
 *         for (TopicPartition partition : records.partitions()) {
 *             List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
 *             for (ConsumerRecord<String, String> record : partitionRecords) {
 *                 System.out.println(record.offset() + ": " + record.value());
 *             }
 *             long lastOffset = partitionRecords.get(partitionRecords.size() - 1).offset();
 *             consumer.commitSync(Collections.singletonMap(partition, new OffsetAndMetadata(lastOffset + 1)));
 *         }
 *     }
 * } finally {
 *     consumer.close();
 * }
* </pre>
*
* <b>Note: The committed offset should always be the offset of the next message that your application will read.</b>
* Thus, when calling {@link #commitSync(Map) commitSync(offsets)} you should add one to the offset of the last message processed.
*
* <h4><a name="manualassignment">Manual Partition Assignment</a></h4>
*
* In the previous examples, we subscribed to the topics we were interested in and let Kafka dynamically assign a
* fair share of the partitions for those topics based on the active consumers in the group. However, in
* some cases you may need finer control over the specific partitions that are assigned. For example:
* <p>
* <ul>
* <li>If the process is maintaining some kind of local state associated with that partition (like a
* local on-disk key-value store), then it should only get records for the partition it is maintaining on disk.
* <li>If the process itself is highly available and will be restarted if it fails (perhaps using a
* cluster management framework like YARN, Mesos, or AWS facilities, or as part of a stream processing framework). In
* this case there is no need for Kafka to detect the failure and reassign the partition since the consuming process
* will be restarted on another machine.
* </ul>
* <p>
* To use this mode, instead of subscribing to the topic using {@link #subscribe(Collection) subscribe}, you just call
* {@link #assign(Collection)} with the full list of partitions that you want to consume.
*
* <pre>
* String topic = "foo";
* TopicPartition partition0 = new TopicPartition(topic, 0);
* TopicPartition partition1 = new TopicPartition(topic, 1);
* consumer.assign(Arrays.asList(partition0, partition1));
* </pre>
*
* Once assigned, you can call {@link #poll(long) poll} in a loop, just as in the preceding examples to consume
* records. The group that the consumer specifies is still used for committing offsets, but now the set of partitions
* will only change with another call to {@link #assign(Collection) assign}. Manual partition assignment does
* not use group coordination, so consumer failures will not cause assigned partitions to be rebalanced. Each consumer
* acts independently even if it shares a groupId with another consumer. To avoid offset commit conflicts, you should
* usually ensure that the groupId is unique for each consumer instance.
* <p>
* Note that it isn't possible to mix manual partition assignment (i.e. using {@link #assign(Collection) assign})
* with dynamic partition assignment through topic subscription (i.e. using {@link #subscribe(Collection) subscribe}).
*
 * <h4><a name="rebalancecallback">Storing Offsets Outside Kafka</a></h4>
*
 * The consumer application need not use Kafka's built-in offset storage; it can store offsets in a store of its own
 * choosing. The primary use case for this is allowing the application to store both the offset and the results of the
 * consumption in the same system, in a way that both the results and offsets are stored atomically. This is not always
 * possible, but when it is, it will make the consumption fully atomic and give "exactly once" semantics that are
 * stronger than the default "at-least-once" semantics you get with Kafka's offset commit functionality.
* <p>
* Here are a couple of examples of this type of usage:
* <ul>
* <li>If the results of the consumption are being stored in a relational database, storing the offset in the database
* as well can allow committing both the results and offset in a single transaction. Thus either the transaction will
* succeed and the offset will be updated based on what was consumed or the result will not be stored and the offset
* won't be updated.
* <li>If the results are being stored in a local store it may be possible to store the offset there as well. For
* example a search index could be built by subscribing to a particular partition and storing both the offset and the
 * indexed data together. If this is done atomically, it is often possible to ensure that even
 * if a crash occurs that causes unsync'd data to be lost, whatever is left has the corresponding offset stored as well.
 * This means that in this case the indexing process that comes back having lost recent updates just resumes indexing
 * from what it has, ensuring that no updates are lost.
* </ul>
* <p>
* Each record comes with its own offset, so to manage your own offset you just need to do the following:
*
* <ul>
* <li>Configure <code>enable.auto.commit=false</code>
* <li>Use the offset provided with each {@link ConsumerRecord} to save your position.
 * <li>On restart, restore the position of the consumer using {@link #seek(TopicPartition, long)}, as sketched below.
* </ul>
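 * <p>
 * A minimal sketch of this pattern, assuming hypothetical {@code readOffsetFromStore},
 * {@code process} and {@code saveOffsetToStore} helpers backed by the application's own storage:
 * <pre>
 * TopicPartition partition = new TopicPartition("foo", 0);
 * consumer.assign(Arrays.asList(partition));
 * consumer.seek(partition, readOffsetFromStore(partition));  // restore the position on startup
 * while (running) {
 *     for (ConsumerRecord<String, String> record : consumer.poll(100)) {
 *         process(record);                                   // store results and offset atomically
 *         saveOffsetToStore(partition, record.offset() + 1); // the next offset to read on restart
 *     }
 * }
 * </pre>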
*
* <p>
* This type of usage is simplest when the partition assignment is also done manually (this would be likely in the
* search index use case described above). If the partition assignment is done automatically special care is
* needed to handle the case where partition assignments change. This can be done by providing a
* {@link ConsumerRebalanceListener} instance in the call to {@link #subscribe(Collection, ConsumerRebalanceListener)}
 * or {@link #subscribe(Pattern, ConsumerRebalanceListener)}.
 * For example, when partitions are taken from a consumer, the consumer will want to commit its offset for those partitions by
* implementing {@link ConsumerRebalanceListener#onPartitionsRevoked(Collection)}. When partitions are assigned to a
* consumer, the consumer will want to look up the offset for those new partitions and correctly initialize the consumer
* to that position by implementing {@link ConsumerRebalanceListener#onPartitionsAssigned(Collection)}.
* <p>
* Another common use for {@link ConsumerRebalanceListener} is to flush any caches the application maintains for
* partitions that are moved elsewhere.
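 * <p>
 * A minimal sketch of such a listener, reusing the hypothetical offset-store helpers from above:
 * <pre>
 * consumer.subscribe(Arrays.asList("foo"), new ConsumerRebalanceListener() {
 *     public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
 *         for (TopicPartition partition : partitions)
 *             saveOffsetToStore(partition, consumer.position(partition)); // persist before ownership is lost
 *     }
 *     public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
 *         for (TopicPartition partition : partitions)
 *             consumer.seek(partition, readOffsetFromStore(partition));   // restore the saved position
 *     }
 * });
 * </pre>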
*
* <h4>Controlling The Consumer's Position</h4>
*
* In most use cases the consumer will simply consume records from beginning to end, periodically committing its
 * position (either automatically or manually). However, Kafka allows the consumer to manually control its position,
* moving forward or backwards in a partition at will. This means a consumer can re-consume older records, or skip to
* the most recent records without actually consuming the intermediate records.
* <p>
* There are several instances where manually controlling the consumer's position can be useful.
* <p>
 * One case is time-sensitive record processing: it may make sense for a consumer that falls far enough behind to not
 * attempt to catch up by processing all records, but rather just skip to the most recent records.
* <p>
* Another use case is for a system that maintains local state as described in the previous section. In such a system
* the consumer will want to initialize its position on start-up to whatever is contained in the local store. Likewise
* if the local state is destroyed (say because the disk is lost) the state may be recreated on a new machine by
* re-consuming all the data and recreating the state (assuming that Kafka is retaining sufficient history).
* <p>
 * Kafka allows specifying the position using {@link #seek(TopicPartition, long)}. Special
* methods for seeking to the earliest and latest offset the server maintains are also available (
* {@link #seekToBeginning(Collection)} and {@link #seekToEnd(Collection)} respectively).
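 * <p>
 * For example (a sketch; the topic, partition and offset are illustrative):
 * <pre>
 * TopicPartition partition = new TopicPartition("foo", 0);
 * consumer.seek(partition, 42L);                       // jump to an arbitrary offset
 * consumer.seekToBeginning(Arrays.asList(partition));  // re-consume from the earliest available offset
 * consumer.seekToEnd(Arrays.asList(partition));        // skip ahead to the latest offset
 * </pre>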
*
* <h4>Consumption Flow Control</h4>
*
* If a consumer is assigned multiple partitions to fetch data from, it will try to consume from all of them at the same time,
 * effectively giving these partitions the same priority for consumption. However, in some cases consumers may want to
 * first focus on fetching from some subset of the assigned partitions at full speed, and only start fetching other partitions
 * when these partitions have little or no data left to consume.
*
* <p>
 * One such case is stream processing, where a processor fetches from two topics and performs a join on the two streams.
 * When one of the topics lags far behind the other, the processor would like to pause fetching from the topic that is ahead
 * in order to let the lagging stream catch up. Another example is bootstrapping on consumer start-up when there is
 * a lot of historical data to catch up on: the application usually wants to get the latest data on some of the topics
 * before considering fetching other topics.
*
* <p>
 * Kafka supports dynamically controlling consumption flow using {@link #pause(Collection)} and {@link #resume(Collection)}
 * to pause consumption on the specified assigned partitions and resume consumption
 * on the specified paused partitions, respectively, in future {@link #poll(long)} calls.
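 * <p>
 * A minimal sketch of the join scenario (here {@code aheadPartitions} is a hypothetical set of
 * partitions that the application has determined to be ahead):
 * <pre>
 * consumer.pause(aheadPartitions);                               // stop fetching the stream that is ahead
 * ConsumerRecords<String, String> records = consumer.poll(100);  // only the lagging partitions return data
 * consumer.resume(aheadPartitions);                              // fetch everything again once caught up
 * </pre>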
*
* <h3>Reading Transactional Messages</h3>
*
* <p>
* Transactions were introduced in Kafka 0.11.0 wherein applications can write to multiple topics and partitions atomically.
* In order for this to work, consumers reading from these partitions should be configured to only read committed data.
 * This can be achieved by setting <code>isolation.level=read_committed</code> in the consumer's configuration.
* </p>
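 * <p>
 * For example:
 * <pre>
 * props.put("isolation.level", "read_committed"); // only return messages from committed transactions
 * </pre>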
*
* <p>
* In <code>read_committed</code> mode, the consumer will read only those transactional messages which have been
* successfully committed. It will continue to read non-transactional messages as before. There is no client-side
* buffering in <code>read_committed</code> mode. Instead, the end offset of a partition for a <code>read_committed</code>
* consumer would be the offset of the first message in the partition belonging to an open transaction. This offset
* is known as the 'Last Stable Offset'(LSO).</p>
*
 * <p>A <code>read_committed</code> consumer will only read up to the LSO and filter out any transactional
* messages which have been aborted. The LSO also affects the behavior of {@link #seekToEnd(Collection)} and
* {@link #endOffsets(Collection)} for <code>read_committed</code> consumers, details of which are in each method's documentation.
* Finally, the fetch lag metrics are also adjusted to be relative to the LSO for <code>read_committed</code> consumers.</p>
*
* <p>Partitions with transactional messages will include commit or abort markers which indicate the result of a transaction.
 * These markers are not returned to applications, yet have an offset in the log. As a result, applications reading from
* topics with transactional messages will see gaps in the consumed offsets. These missing messages would be the transaction
* markers, and they are filtered out for consumers in both isolation levels. Additionally, applications using
* <code>read_committed</code> consumers may also see gaps due to aborted transactions, since those messages would not
* be returned by the consumer and yet would have valid offsets.</p>
*
* <h3><a name="multithreaded">Multi-threaded Processing</a></h3>
*
* The Kafka consumer is NOT thread-safe. All network I/O happens in the thread of the application
* making the call. It is the responsibility of the user to ensure that multi-threaded access
* is properly synchronized. Un-synchronized access will result in {@link ConcurrentModificationException}.
*
* <p>
* The only exception to this rule is {@link #wakeup()}, which can safely be used from an external thread to
* interrupt an active operation. In this case, a {@link org.apache.kafka.common.errors.WakeupException} will be
* thrown from the thread blocking on the operation. This can be used to shutdown the consumer from another thread.
* The following snippet shows the typical pattern:
*
* <pre>
 * public class KafkaConsumerRunner implements Runnable {
 *     private final AtomicBoolean closed = new AtomicBoolean(false);
 *     private final KafkaConsumer consumer;
 *
 *     public void run() {
 *         try {
 *             consumer.subscribe(Arrays.asList("topic"));
 *             while (!closed.get()) {
 *                 ConsumerRecords records = consumer.poll(10000);
 *                 // Handle new records
 *             }
 *         } catch (WakeupException e) {
 *             // Ignore exception if closing
 *             if (!closed.get()) throw e;
 *         } finally {
 *             consumer.close();
 *         }
 *     }
 *
 *     // Shutdown hook which can be called from a separate thread
 *     public void shutdown() {
 *         closed.set(true);
 *         consumer.wakeup();
 *     }
 * }
* </pre>
*
 * Then in a separate thread, the consumer can be shut down by setting the closed flag and waking up the consumer.
*
* <p>
* <pre>
* closed.set(true);
* consumer.wakeup();
* </pre>
*
* <p>
* Note that while it is possible to use thread interrupts instead of {@link #wakeup()} to abort a blocking operation
* (in which case, {@link InterruptException} will be raised), we discourage their use since they may cause a clean
* shutdown of the consumer to be aborted. Interrupts are mainly supported for those cases where using {@link #wakeup()}
* is impossible, e.g. when a consumer thread is managed by code that is unaware of the Kafka client.
*
* <p>
* We have intentionally avoided implementing a particular threading model for processing. This leaves several
* options for implementing multi-threaded processing of records.
*
* <h4>1. One Consumer Per Thread</h4>
*
 * A simple option is to give each thread its own consumer instance. Here are the pros and cons of this approach (a sketch follows the list):
* <ul>
* <li><b>PRO</b>: It is the easiest to implement
* <li><b>PRO</b>: It is often the fastest as no inter-thread co-ordination is needed
* <li><b>PRO</b>: It makes in-order processing on a per-partition basis very easy to implement (each thread just
* processes messages in the order it receives them).
* <li><b>CON</b>: More consumers means more TCP connections to the cluster (one per thread). In general Kafka handles
* connections very efficiently so this is generally a small cost.
* <li><b>CON</b>: Multiple consumers means more requests being sent to the server and slightly less batching of data
* which can cause some drop in I/O throughput.
* <li><b>CON</b>: The number of total threads across all processes will be limited by the total number of partitions.
* </ul>
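 * <p>
 * A minimal sketch of this model, reusing the {@code KafkaConsumerRunner} from above (the
 * thread count is illustrative; each runner is assumed to create its own consumer instance):
 * <pre>
 * ExecutorService executor = Executors.newFixedThreadPool(3);
 * for (int i = 0; i < 3; i++)
 *     executor.submit(new KafkaConsumerRunner());
 * </pre>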
*
* <h4>2. Decouple Consumption and Processing</h4>
*
 * Another alternative is to have one or more consumer threads that do all data consumption and hand off
* {@link ConsumerRecords} instances to a blocking queue consumed by a pool of processor threads that actually handle
* the record processing.
*
* This option likewise has pros and cons:
* <ul>
* <li><b>PRO</b>: This option allows independently scaling the number of consumers and processors. This makes it
* possible to have a single consumer that feeds many processor threads, avoiding any limitation on partitions.
 * <li><b>CON</b>: Guaranteeing order across the processors requires particular care: because the threads execute
 * independently, an earlier chunk of data may actually be processed after a later chunk of data simply due to the luck of
 * thread execution timing. For processing that has no ordering requirements this is not a problem.
* <li><b>CON</b>: Manually committing the position becomes harder as it requires that all threads co-ordinate to ensure
* that processing is complete for that partition.
* </ul>
*
 * There are many possible variations on this approach. For example, each processor thread can have its own queue, and
* the consumer threads can hash into these queues using the TopicPartition to ensure in-order consumption and simplify
* commit.
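 * <p>
 * A minimal sketch of the hand-off (queue capacity is illustrative and InterruptedException
 * handling is omitted; {@code handle} is a hypothetical application-level processing method):
 * <pre>
 * BlockingQueue<ConsumerRecords<String, String>> queue = new ArrayBlockingQueue<>(100);
 *
 * // Consumer thread: fetch and hand off
 * while (running)
 *     queue.put(consumer.poll(100));   // blocks when the processors fall behind
 *
 * // Each processor thread: take and process
 * while (running)
 *     handle(queue.take());
 * </pre>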
*/