Kafka 1.0.0 Client Consumer Configuration Options (Complete Edition)

Developers are often unsure which configuration options the Kafka producer and consumer actually support, so this article walks through them.

This article is based on Kafka 1.0.0.

The main configuration options can all be found in the following artifact:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.0</version>
</dependency>

 

Producer configuration:

  org.apache.kafka.clients.producer.ProducerConfig

Consumer configuration:

  org.apache.kafka.clients.consumer.ConsumerConfig
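
Both classes expose every property name as a String constant, which avoids typos in hand-written keys. A quick sketch (broker address and group name are placeholders):

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ConfigKeys {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Each constant resolves to the literal property name, e.g. "bootstrap.servers".
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        System.out.println(props);
    }
}
```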

 

 

Each producer/consumer configuration entry below is organized as follows:

line 1: property name
line 2: priority
line 3: type
line 4: default value
line 5: valid values (for some types)
line 6: English description
line 7: translation

 

Name:
Priority:
Type:
Default:
Valid values:
Description:
Explanation:

 

Note: due to space constraints, this article only covers the consumer configuration!

 

Preview of important properties

high

  bootstrap.servers
  key.deserializer
  value.deserializer
  fetch.min.bytes
  group.id
  heartbeat.interval.ms
  session.timeout.ms

medium

  auto.offset.reset
  enable.auto.commit
  max.poll.records
  security.protocol

low

  auto.commit.interval.ms
  fetch.max.wait.ms

 

Consumer configuration:

High priority

 

Name:
  bootstrap.servers
Priority:
  high
Type:
  list
Default:
  none
Valid values:
  ---
Description:
  A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
Explanation:
  An initial list of Kafka cluster addresses. It need not contain every node in the cluster (the client discovers the full membership from it), but listing more than one node guards against the listed node happening to be down.
Example:
  127.0.0.1:9092

------------------------------

 

Name:
  key.deserializer
Priority:
  high
Type:
  class
Default:
  none
Valid values:
  ---
Description:
  Deserializer class for key that implements the org.apache.kafka.common.serialization.Deserializer interface.
Explanation:
  The class that deserializes the key of a Kafka message; it must implement the org.apache.kafka.common.serialization.Deserializer interface.
Example:
  org.apache.kafka.common.serialization.StringDeserializer

-------------------------------------------------------

 

Name:
  value.deserializer
Priority:
  high
Type:
  class
Default:
  none
Valid values:
  ---
Description:
  Deserializer class for value that implements the org.apache.kafka.common.serialization.Deserializer interface.
Explanation:
  The class that deserializes the value of a Kafka message; it must implement the org.apache.kafka.common.serialization.Deserializer interface.
Example:
  org.apache.kafka.common.serialization.StringDeserializer

 

------------------------------------------------------------

 

Name:
  fetch.min.bytes
Priority:
  high
Type:
  int
Default:
  1
Valid values:
  [0,...]
Description:
  The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency.
Explanation:
  The minimum amount of data (in bytes) the server should return for a fetch request. If less data is available, the request waits for that much data to accumulate before answering. The default of 1 byte means a fetch is answered as soon as a single byte is available. Setting this above 1 makes the server wait for more data, which can improve broker throughput a bit at the cost of some additional latency.
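
As a sketch of that trade-off (the values are illustrative only, not recommendations; fetch.max.wait.ms is covered in the low-priority section and bounds how long the broker may wait):

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class FetchTuning {
    // Illustrative values only; tune against your own latency budget.
    static Properties withFetchTuning(Properties props) {
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 64 * 1024); // broker waits for at least 64 KB...
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);     // ...or 500 ms, whichever comes first
        return props;
    }
}
```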

 

------------------------------------------------------------

 

Name:
  group.id
Priority:
  high
Type:
  string
Default:
  ""
Valid values:
  ---
Description:
  A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy.
Explanation:
  A unique string identifying the consumer group this consumer belongs to. It is required whenever the consumer uses group management via subscribe(topic) or the Kafka-based offset management strategy.
Example:
  test-group
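
A minimal sketch of when group.id matters (topic name and addresses are placeholders): subscribe() uses group management and therefore requires group.id, while assign() pins partitions manually:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class GroupIdDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        // subscribe() joins a consumer group, so group.id is required.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        KafkaConsumer<String, String> grouped = new KafkaConsumer<>(props);
        grouped.subscribe(Collections.singletonList("my-topic"));
        grouped.close();

        // assign() takes specific partitions and does not use group management.
        KafkaConsumer<String, String> standalone = new KafkaConsumer<>(props);
        standalone.assign(Collections.singleton(new TopicPartition("my-topic", 0)));
        standalone.close();
    }
}
```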

 

-------------------------------------------------------

 

Name:
  heartbeat.interval.ms
Priority:
  high
Type:
  int
Default:
  3000
Valid values:
  ---
Description:
  The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
Explanation:
  The expected interval between heartbeats to the consumer coordinator when Kafka's group management is used. Heartbeats keep the consumer's session alive and drive rebalancing when consumers join or leave the group. The value must be lower than session.timeout.ms, typically no higher than 1/3 of it, and can be lowered further to tighten the expected time of normal rebalances.

 

-------------------------------------------------

 

Name:
  max.partition.fetch.bytes
Priority:
  high
Type:
  int
Default:
  1048576
Valid values:
  [0,...]
Description:
  The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size.
Explanation:
  The maximum amount of data the server returns per partition in one fetch. Records are fetched in batches; if the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch is still returned so that the consumer can make progress. The maximum record batch size the broker accepts is defined by message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer's total request size.

 

 

--------------------------------------------

 

Name:
  session.timeout.ms
Priority:
  high
Type:
  int
Default:
  10000
Valid values:
  ---
Description:
  The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
Explanation:
  The timeout used to detect consumer failures when Kafka's group management is used. The consumer sends periodic heartbeats to signal its liveness to the broker; if the broker receives none before this timeout expires, it removes the consumer from the group and initiates a rebalance.
  Note: the value must lie within the range allowed by the broker settings group.min.session.timeout.ms and group.max.session.timeout.ms.
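
A sketch of tuning the two heartbeat-related settings together (the values shown are the 1.0.0 defaults; keep the ratio at roughly 1/3 or lower):

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class SessionTuning {
    static Properties withSessionTuning(Properties props) {
        // Must stay inside the broker's group.min/max.session.timeout.ms range.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 10000);
        // No higher than roughly 1/3 of session.timeout.ms.
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3000);
        return props;
    }
}
```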

 

Security-related properties:

-------------------------------------------------------

 

Name:
  ssl.key.password
Priority:
  high
Type:
  password
Default:
  null
Valid values:
  ---
Description:
  The password of the private key in the key store file. This is optional for client.

------------------------------------------------

Name:
  ssl.keystore.location
Priority:
  high
Type:
  string
Default:
  null
Valid values:
  ---
Description:
  The location of the key store file. This is optional for client and can be used for two-way authentication for client.

----------------------------------------------

Name:
  ssl.keystore.password
Priority:
  high
Type:
  password
Default:
  null
Valid values:
  ---
Description:
  The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured.

--------------------------------------------

Name:
  ssl.truststore.location
Priority:
  high
Type:
  string
Default:
  null
Valid values:
  ---
Description:
  The location of the trust store file.

--------------------------------------------------------

Name:
  ssl.truststore.password
Priority:
  high
Type:
  password
Default:
  null
Valid values:
  ---
Description:
  The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.

---------------------------------
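
Putting the high-priority SSL options together, a hedged sketch of a consumer secured with SSL (all paths and passwords are placeholders; security.protocol itself is listed in the medium-priority section below):

```java
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;

public class SslSettings {
    static Properties withSsl(Properties props) {
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "truststore-password");
        // Keystore entries are only needed for two-way (client) authentication.
        props.put("ssl.keystore.location", "/etc/kafka/client.keystore.jks");
        props.put("ssl.keystore.password", "keystore-password");
        props.put("ssl.key.password", "key-password");
        return props;
    }
}
```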

 

Medium priority

 

Name:
  auto.offset.reset
Priority:
  medium
Type:
  string
Default:
  latest
Valid values:
  [latest, earliest, none]
Description:
  What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted):
  earliest: automatically reset the offset to the earliest offset
  latest: automatically reset the offset to the latest offset
  none: throw exception to the consumer if no previous offset is found for the consumer's group
  anything else: throw exception to the consumer.
Explanation:
  The policy applied when Kafka has no initial offset for the group, or the current offset no longer exists on the server (e.g. because the data was deleted):
  earliest: automatically reset the offset to the earliest offset
  latest: automatically reset the offset to the latest offset
  none: throw an exception to the consumer if no previous offset is found for the group
  anything else: throw an exception to the consumer
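
For example, to read a topic from the beginning with a brand-new group (a sketch; the policy only kicks in when no committed offset exists):

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class OffsetResetPolicy {
    static Properties startFromBeginning(Properties props) {
        // With no committed offset (e.g. a brand-new group.id),
        // start from the earliest available record instead of the latest.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }
}
```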

 

-----------------------------------------------

 

Name:
  connections.max.idle.ms
Priority:
  medium
Type:
  long
Default:
  540000
Valid values:
  ---
Description:
  Close idle connections after the number of milliseconds specified by this config.
Explanation:
  Close connections that have been idle for more than the specified number of milliseconds.

 

--------------------------------

 

Name:
  enable.auto.commit
Priority:
  medium
Type:
  boolean
Default:
  true
Valid values:
  true / false
Description:
  If true the consumer's offset will be periodically committed in the background.
Explanation:
  If true, the consumer's offsets are periodically committed in the background.
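
When auto-commit is disabled you commit offsets yourself, typically after processing has succeeded. A minimal sketch (topic, group, and the process() handler are placeholders; note that in Kafka 1.0.0 poll() takes the timeout in milliseconds):

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); // take over offset management

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // hypothetical handler
                }
                consumer.commitSync(); // commit only after processing succeeded
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}
```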

 

---------------------------------------

 

Name:
  exclude.internal.topics
Priority:
  medium
Type:
  boolean
Default:
  true
Valid values:
  true / false
Description:
  Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true the only way to receive records from an internal topic is subscribing to it.
Explanation:
  Whether records from internal topics (such as the offsets topic) should be exposed to the consumer. If true, the only way to receive records from an internal topic is to subscribe to it explicitly.

 

----------------------------------------------------

 

Name:
  fetch.max.bytes
Priority:
  medium
Type:
  int
Default:
  52428800
Valid values:
  [0,...]
Description:
  The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.
Explanation:
  The maximum amount of data the server returns for one fetch request. Records are fetched in batches; if the first record batch in the first non-empty partition is larger than this value, the batch is still returned so the consumer can make progress, so this is not an absolute maximum. The maximum record batch size the broker accepts is defined by message.max.bytes (broker config) or max.message.bytes (topic config).
Note:
  The consumer performs multiple fetches in parallel.
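
A sketch of raising the fetch ceilings for larger messages (sizes are illustrative; the broker/topic must also accept batches this large via message.max.bytes / max.message.bytes):

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class FetchSizes {
    static Properties forLargeMessages(Properties props) {
        // Soft ceiling for one whole fetch response (see description above).
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 50 * 1024 * 1024);
        // Soft ceiling per partition within that response.
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 4 * 1024 * 1024);
        return props;
    }
}
```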

 

---------------------------------------------------------

 

 

Name:
  isolation.level
Priority:
  medium
Type:
  string
Default:
  read_uncommitted
Valid values:
  [read_committed, read_uncommitted]
Description:
  Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode.
  Messages will always be returned in offset order. Hence, in read_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read_committed consumers will not be able to read up to the high watermark when there are in flight transactions.
  Further, when in read_committed the seekToEnd method will return the LSO.
Explanation:
  Controls how messages written transactionally are read.
  If set to read_committed, consumer.poll() only returns transactional messages whose transaction has been committed.
  If set to read_uncommitted (the default), consumer.poll() returns all messages, even transactional messages that were aborted.
  Non-transactional messages are returned unconditionally in either mode.
  Messages are always returned in offset order. Hence, in read_committed mode, consumer.poll() only returns messages up to the last stable offset (LSO), which is one less than the offset of the first open transaction. In particular, any messages appearing after records of an ongoing transaction are withheld until that transaction completes. As a result, read_committed consumers cannot read up to the high watermark while there are in-flight transactions.
  Furthermore, in read_committed mode the seekToEnd method returns the LSO.
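
A sketch of opting in to committed-only reads for a consumer of transactional topics:

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ReadCommitted {
    static Properties withReadCommitted(Properties props) {
        // Only transactional messages whose transaction committed are returned;
        // non-transactional messages come through in either mode.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        return props;
    }
}
```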

 

 

-----------------------------------------------------------

 

Name:
  max.poll.interval.ms
Priority:
  medium
Type:
  int
Default:
  300000
Valid values:
  [1,...]
Description:
  The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.
Explanation:
  The maximum delay between calls to poll() when consumer group management is used. It puts an upper bound on how long the consumer may be idle before fetching more records. If poll() is not called within this timeout, the consumer is considered failed and the group rebalances, reassigning its partitions to another member.

 


--------------------------------------------

 

Name:
  max.poll.records
Priority:
  medium
Type:
  int
Default:
  500
Valid values:
  [1,...]
Description:
  The maximum number of records returned in a single call to poll().
Explanation:
  The maximum number of records a single call to poll() can return.
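
These two options are usually tuned together when per-record processing is slow; a sketch with illustrative values:

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class PollTuning {
    static Properties forSlowHandlers(Properties props) {
        // Fewer records per poll() keeps each processing round short...
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
        // ...and a longer interval gives slow handlers more headroom
        // before the group declares the consumer dead and rebalances.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600000);
        return props;
    }
}
```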

 

 

----------------------------------------------

 

Name:
  partition.assignment.strategy
Priority:
  medium
Type:
  class
Default:
  org.apache.kafka.clients.consumer.RangeAssignor
Valid values:
  ---
Description:
  The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used.
Explanation:
  The class name of the partition assignment strategy used to distribute partition ownership among consumer instances when group management is used.
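
For example, switching from the default RangeAssignor to the round-robin assignor that ships with the client:

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class AssignorChoice {
    static Properties withRoundRobin(Properties props) {
        // RoundRobinAssignor deals partitions out one by one across the group,
        // instead of the default RangeAssignor's per-topic ranges.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                "org.apache.kafka.clients.consumer.RoundRobinAssignor");
        return props;
    }
}
```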

 

 

-----------------------------------------------------

 

 

Name:
  receive.buffer.bytes
Priority:
  medium
Type:
  int
Default:
  65536
Valid values:
  [-1,...]
Description:
  The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.
Explanation:
  The size of the TCP receive buffer (SO_RCVBUF) used when reading data; -1 means use the OS default.

 

 

---------------------------------------------------

 

Name:
  request.timeout.ms
Priority:
  medium
Type:
  int
Default:
  305000
Valid values:
  [0,...]
Description:
  The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
Explanation:
  The maximum time the client waits for the response to a request. If no response arrives before the timeout, the client resends the request if necessary, or fails the request once retries are exhausted.

 

---------------------------------------------------------------

 

Name:
  send.buffer.bytes
Priority:
  medium
Type:
  int
Default:
  131072
Valid values:
  [-1,...]
Description:
  The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.
Explanation:
  The size of the TCP send buffer (SO_SNDBUF) used when sending data; -1 means use the OS default.

-----------------------------------------------------------------

 

Security-related properties:

 

Name:
  sasl.jaas.config
Priority:
  medium
Type:
  password
Default:
  null
Valid values:
  ---
Description:
  JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: 'loginModuleClass controlFlag (optionName=optionValue)*;'
-------------------------------------------------------------------------

Name:
  sasl.kerberos.service.name
Priority:
  medium
Type:
  string
Default:
  null
Valid values:
  ---
Description:
  The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

--------------------------------------------------

Name:
  sasl.mechanism
Priority:
  medium
Type:
  string
Default:
  GSSAPI
Valid values:
  ---
Description:
  SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

-----------------------------------------------------------

Name:
  security.protocol
Priority:
  medium
Type:
  string
Default:
  PLAINTEXT
Valid values:
  PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL
Description:
  Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

------------------------------------------------------

 

Name:
  ssl.enabled.protocols
Priority:
  medium
Type:
  list
Default:
  TLSv1.2,TLSv1.1,TLSv1
Valid values:
  ---
Description:
  The list of protocols enabled for SSL connections.

------------------------------------------------------------

Name:
  ssl.keystore.type
Priority:
  medium
Type:
  string
Default:
  JKS
Valid values:
  ---
Description:
  The file format of the key store file. This is optional for client.

---------------------------------

Name:
  ssl.protocol
Priority:
  medium
Type:
  string
Default:
  TLS
Valid values:
  ---
Description:
  The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.

---------------------------------------------

Name:
  ssl.provider
Priority:
  medium
Type:
  string
Default:
  null
Valid values:
  ---
Description:
  The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

--------------------------------------

Name:
  ssl.truststore.type
Priority:
  medium
Type:
  string
Default:
  JKS
Valid values:
  ---
Description:
  The file format of the trust store file.

-------------------------------------------------

 

 

Low priority

 

Name:
  auto.commit.interval.ms
Priority:
  low
Type:
  int
Default:
  5000
Valid values:
  [0,...]
Description:
  The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true.
Explanation:
  When enable.auto.commit is true, this sets how often, in milliseconds, the consumer's offsets are auto-committed to Kafka.

---------------------------------------------------------

 

Name:
  check.crcs
Priority:
  low
Type:
  boolean
Default:
  true
Valid values:
  true / false
Description:
  Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.
Explanation:
  Automatically verify the CRC32 of consumed records, ensuring that no on-the-wire or on-disk corruption affected the messages. The check adds some overhead, so it may be disabled when seeking extreme performance.

 

 

---------------------------------------------------------------

 

Name:
  client.id
Priority:
  low
Type:
  string
Default:
  ""
Valid values:
  ---
Description:
  An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
Explanation:
  An id string passed to the server with requests, so that server-side request logs can track the source of requests by a logical application name rather than just ip/port.

 

---------------------------------------------------------------

 

Name:
  fetch.max.wait.ms
Priority:
  low
Type:
  int
Default:
  500
Valid values:
  [0,...]
Description:
  The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.
Explanation:
  The maximum time the server will block a fetch request when there is not enough data to immediately satisfy fetch.min.bytes.

 

 

---------------------------------------------------------------

 

Name:
  interceptor.classes
Priority:
  low
Type:
  list
Default:
  null
Valid values:
  ---
Description:
  A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.
Explanation:
  A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface lets you intercept (and possibly mutate) the records the consumer receives. By default there are no interceptors.
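
A minimal pass-through interceptor sketch (the class name is hypothetical; it is registered via the interceptor.classes property):

```java
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Register with: props.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
//                          LoggingInterceptor.class.getName());
public class LoggingInterceptor implements ConsumerInterceptor<String, String> {
    @Override
    public ConsumerRecords<String, String> onConsume(ConsumerRecords<String, String> records) {
        System.out.println("fetched " + records.count() + " records");
        return records; // a mutated copy could be returned instead
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
        System.out.println("committed offsets for " + offsets.size() + " partitions");
    }

    @Override
    public void close() {
        // nothing to clean up
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // no configuration needed
    }
}
```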

 

---------------------------------------------------------------

 

Name:
  metadata.max.age.ms
Priority:
  low
Type:
  long
Default:
  300000
Valid values:
  [0,...]
Description:
  The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.
Explanation:
  The interval, in milliseconds, after which a metadata refresh is forced even if no partition leadership changes have been seen, so that new brokers or partitions are discovered proactively.

 

---------------------------------------------------------------

 

Name:
  metric.reporters
Priority:
  low
Type:
  list
Default:
  ""
Valid values:
  ---
Description:
  A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

---------------------------------------------------------------

Name:
  metrics.num.samples
Priority:
  low
Type:
  int
Default:
  2
Valid values:
  [1,...]
Description:
  The number of samples maintained to compute metrics.

---------------------------------------------------------------

Name:
  metrics.recording.level
Priority:
  low
Type:
  string
Default:
  INFO
Valid values:
  [INFO, DEBUG]
Description:
  The highest recording level for metrics.

---------------------------------------------------------------

Name:
  metrics.sample.window.ms
Priority:
  low
Type:
  long
Default:
  30000
Valid values:
  [0,...]
Description:
  The window of time a metrics sample is computed over.

 

---------------------------------------------------------------

 

Name:
  reconnect.backoff.max.ms
Priority:
  low
Type:
  long
Default:
  1000
Valid values:
  [0,...]
Description:
  The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

---------------------------------------------------------------

Name:
  reconnect.backoff.ms
Priority:
  low
Type:
  long
Default:
  50
Valid values:
  [0,...]
Description:
  The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.

---------------------------------------------------------------

Name:
  retry.backoff.ms
Priority:
  low
Type:
  long
Default:
  100
Valid values:
  [0,...]
Description:
  The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

 

Security-related properties

 

Name:
  sasl.kerberos.kinit.cmd
Priority:
  low
Type:
  string
Default:
  /usr/bin/kinit
Valid values:
  ---
Description:
  Kerberos kinit command path.

-------------------------------------------

Name:
  sasl.kerberos.min.time.before.relogin
Priority:
  low
Type:
  long
Default:
  60000
Valid values:
  ---
Description:
  Login thread sleep time between refresh attempts.

---------------------------------------

Name:
  sasl.kerberos.ticket.renew.jitter
Priority:
  low
Type:
  double
Default:
  0.05
Valid values:
  ---
Description:
  Percentage of random jitter added to the renewal time.

---------------------------------------

Name:
  sasl.kerberos.ticket.renew.window.factor
Priority:
  low
Type:
  double
Default:
  0.8
Valid values:
  ---
Description:
  Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.
 

------------------------------

 

Name:
  ssl.cipher.suites
Priority:
  low
Type:
  list
Default:
  null
Valid values:
  ---
Description:
  A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

-------------------------------------

Name:
  ssl.endpoint.identification.algorithm
Priority:
  low
Type:
  string
Default:
  null
Valid values:
  ---
Description:
  The endpoint identification algorithm to validate server hostname using server certificate.

---------------------------------------

Name:
  ssl.keymanager.algorithm
Priority:
  low
Type:
  string
Default:
  SunX509
Valid values:
  ---
Description:
  The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

--------------------------------------------------

Name:
  ssl.secure.random.implementation
Priority:
  low
Type:
  string
Default:
  null
Valid values:
  ---
Description:
  The SecureRandom PRNG implementation to use for SSL cryptography operations.

--------------------------------------

Name:
  ssl.trustmanager.algorithm
Priority:
  low
Type:
  string
Default:
  PKIX
Valid values:
  ---
Description:
  The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
 

 

Kafka ships a Java client library, `kafka-clients`, that contains the classes and methods for creating and managing consumers. The example below shows how to consume Kafka messages with the consumer class from `kafka-clients`:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Collections;
import java.util.Properties;

public class KafkaConsumerExample {
    public static void main(String[] args) {
        String bootstrapServers = "localhost:9092";
        String groupId = "my-consumer-group";
        String topic = "my-topic";

        // Configure the consumer properties
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

        // Create the consumer instance
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);

        // Subscribe to the topic
        consumer.subscribe(Collections.singletonList(topic));

        // Or assign specific partitions instead:
        // TopicPartition partition = new TopicPartition(topic, 0);
        // consumer.assign(Collections.singleton(partition));

        // Start consuming messages
        while (true) {
            // Kafka 1.0.0: poll() takes the timeout in milliseconds
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                // Process the message
                System.out.println("Received message: " + record.value());
            }
        }
    }
}
```

The example first configures the consumer properties: the Kafka cluster address, the consumer group id, and the key/value deserializers. It then creates a `KafkaConsumer`, subscribes to a topic with `subscribe` (alternatively, `assign` can pin specific partitions), and finally loops forever calling `poll` to fetch record batches and process each record in turn.

Note that the consumer must keep calling `poll` regularly to receive new records. Offsets can also be committed manually with `commitSync` or `commitAsync`, so that messages are only marked consumed once they have been processed successfully.
