Kafka usage notes

Reference: https://www.w3cschool.cn/apache_kafka/apache_kafka_basic_operations.html

Kafka versions differ a lot from one release to the next; check the official documentation: http://kafka.apachecn.org/

Pitfalls encountered:

The Kafka installed on Linux was 2.11-0.9.0.0 (Scala 2.11, Kafka 0.9.0.0). With a local client jar at version 0.8.2.2 everything worked, but switching the jar to 1.1.0 triggered this error:

ERROR Processor got uncaught exception. (kafka.network.Processor)
java.lang.ArrayIndexOutOfBoundsException: 18

The broker here (0.9.0.0) only supports request ids up to 16; id 18 is the ApiVersions request introduced in newer Kafka clients, so the broker throws this exception.

So this is a version problem. The APIs changed substantially between Kafka 0.8 and 1.0, so pay close attention to client/broker version compatibility.

 

Errors when starting the producer and consumer:

1) Check the Kafka config file: cat config/server.properties

zookeeper.connect=localhost:2181

The ZooKeeper address is localhost, so the producer and consumer processes must be started with localhost as well.

Edit the listeners setting with vi config/server.properties:

listeners=PLAINTEXT://localhost:9092

When using Kafka from a Java program (see the sketch below), I changed localhost to the machine's IP (the localhost in the producer command later in these notes must then be changed to the IP too); I have not tested this with localhost.
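For reference, a minimal Java producer sketch against this setup, using the new-producer API that ships in client jars 0.8.2 and later. The IP and topic name are the values from these notes; the class name and record contents are placeholders:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // must match the host in listeners=PLAINTEXT://192.168.75.128:9092
        props.put("bootstrap.servers", "192.168.75.128:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // send one record to the Hello-Kafka topic created below
        producer.send(new ProducerRecord<String, String>("Hello-Kafka", "key1", "hello from java"));
        producer.close(); // flushes any buffered records before exiting
    }
}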

Start ZooKeeper: bin/zookeeper-server-start.sh config/zookeeper.properties

Start Kafka: bin/kafka-server-start.sh config/server.properties

Started this way, Ctrl+C will kill the broker. To run it in the background instead, start it from the bin directory with:

./kafka-server-start.sh -daemon ../config/server.properties

which launches the broker as a daemon.

In a new terminal window:

Create a Kafka topic:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Hello-Kafka

This creates a topic named Hello-Kafka with one partition and a replication factor of 1.

List topics:

bin/kafka-topics.sh --list --zookeeper localhost:2181

Delete a topic:

bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic topic_name

Start the console producer: bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Hello-Kafka

Note: if listeners=PLAINTEXT://192.168.75.128:9092 uses an IP, the localhost in the producer command must also be replaced with that IP, otherwise the consumer will not receive any messages.

Starting the consumer does not seem to require this change.

Start the console consumer: bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic Hello-Kafka --from-beginning
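On newer Kafka releases (0.10.1+), the console consumer can also connect straight to the broker instead of going through ZooKeeper; assuming the broker listens on localhost:9092:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic Hello-Kafka --from-beginning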

Common Kafka commands:

  • Check whether ports 2181 and 9092 are listening

netstat -tunlp|egrep "(2181|9092)"
tcp        0      0 :::2181                     :::*                        LISTEN      19787/java          
tcp        0      0 :::9092                     :::*                        LISTEN      28094/java 

Explanation:

The Kafka process has PID 28094 and is listening on port 9092.

Kafka fails to start with: kafka.common.KafkaException: Failed to acquire lock on file .lock (usually because an old Kafka process is still running and holding the lock on the data directory)

Run netstat -lnp|grep 9092
Find the process id in the output
Run kill -9 <pid>
Then try starting Kafka again

Consumer configuration

Each entry below gives the name, type, default, valid values, and importance, followed by the description from the official docs.

bootstrap.servers (type: list, importance: high)
  A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

key.deserializer (type: class, importance: high)
  Deserializer class for key that implements the Deserializer interface.

value.deserializer (type: class, importance: high)
  Deserializer class for value that implements the Deserializer interface.

fetch.min.bytes (type: int, default: 1, valid values: [0,...], importance: high)
  The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate, which can improve server throughput a bit at the cost of some additional latency.

group.id (type: string, default: "", importance: high)
  A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy.

heartbeat.interval.ms (type: int, default: 3000, importance: high)
  The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.

max.partition.fetch.bytes (type: int, default: 1048576, valid values: [0,...], importance: high)
  The maximum amount of data per-partition the server will return. If the first message in the first non-empty partition of the fetch is larger than this limit, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size.

session.timeout.ms (type: int, default: 10000, importance: high)
  The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.

ssl.key.password (type: password, default: null, importance: high)
  The password of the private key in the key store file. This is optional for client.

ssl.keystore.location (type: string, default: null, importance: high)
  The location of the key store file. This is optional for client and can be used for two-way authentication for client.

ssl.keystore.password (type: password, default: null, importance: high)
  The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured.

ssl.truststore.location (type: string, default: null, importance: high)
  The location of the trust store file.

ssl.truststore.password (type: password, default: null, importance: high)
  The password for the trust store file.

auto.offset.reset (type: string, default: latest, valid values: [latest, earliest, none], importance: medium)
  What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted):
  • earliest: automatically reset the offset to the earliest offset
  • latest: automatically reset the offset to the latest offset
  • none: throw exception to the consumer if no previous offset is found for the consumer's group
  • anything else: throw exception to the consumer

connections.max.idle.ms (type: long, default: 540000, importance: medium)
  Close idle connections after the number of milliseconds specified by this config.

enable.auto.commit (type: boolean, default: true, importance: medium)
  If true the consumer's offset will be periodically committed in the background.

exclude.internal.topics (type: boolean, default: true, importance: medium)
  Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true the only way to receive records from an internal topic is subscribing to it.

fetch.max.bytes (type: int, default: 52428800, valid values: [0,...], importance: medium)
  The maximum amount of data the server should return for a fetch request. This is not an absolute maximum; if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.

max.poll.interval.ms (type: int, default: 300000, valid values: [1,...], importance: medium)
  The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.

max.poll.records (type: int, default: 500, valid values: [1,...], importance: medium)
  The maximum number of records returned in a single call to poll().

partition.assignment.strategy (type: list, default: [class org.apache.kafka.clients.consumer.RangeAssignor], importance: medium)
  The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used.

receive.buffer.bytes (type: int, default: 65536, valid values: [-1,...], importance: medium)
  The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

request.timeout.ms (type: int, default: 305000, valid values: [0,...], importance: medium)
  The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted.

sasl.kerberos.service.name (type: string, default: null, importance: medium)
  The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

sasl.mechanism (type: string, default: GSSAPI, importance: medium)
  SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

security.protocol (type: string, default: PLAINTEXT, importance: medium)
  Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

send.buffer.bytes (type: int, default: 131072, valid values: [-1,...], importance: medium)
  The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

ssl.enabled.protocols (type: list, default: [TLSv1.2, TLSv1.1, TLSv1], importance: medium)
  The list of protocols enabled for SSL connections.

ssl.keystore.type (type: string, default: JKS, importance: medium)
  The file format of the key store file. This is optional for client.

ssl.protocol (type: string, default: TLS, importance: medium)
  The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.

ssl.provider (type: string, default: null, importance: medium)
  The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

ssl.truststore.type (type: string, default: JKS, importance: medium)
  The file format of the trust store file.

auto.commit.interval.ms (type: int, default: 5000, valid values: [0,...], importance: low)
  The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true.

check.crcs (type: boolean, default: true, importance: low)
  Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.

client.id (type: string, default: "", importance: low)
  An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

fetch.max.wait.ms (type: int, default: 500, valid values: [0,...], importance: low)
  The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.

interceptor.classes (type: list, default: null, importance: low)
  A list of classes to use as interceptors. Implementing the ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.

metadata.max.age.ms (type: long, default: 300000, valid values: [0,...], importance: low)
  The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes, to proactively discover any new brokers or partitions.

metric.reporters (type: list, default: [], importance: low)
  A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

metrics.num.samples (type: int, default: 2, valid values: [1,...], importance: low)
  The number of samples maintained to compute metrics.

metrics.sample.window.ms (type: long, default: 30000, valid values: [0,...], importance: low)
  The window of time a metrics sample is computed over.

reconnect.backoff.ms (type: long, default: 50, valid values: [0,...], importance: low)
  The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.

retry.backoff.ms (type: long, default: 100, valid values: [0,...], importance: low)
  The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

sasl.kerberos.kinit.cmd (type: string, default: /usr/bin/kinit, importance: low)
  Kerberos kinit command path.

sasl.kerberos.min.time.before.relogin (type: long, default: 60000, importance: low)
  Login thread sleep time between refresh attempts.

sasl.kerberos.ticket.renew.jitter (type: double, default: 0.05, importance: low)
  Percentage of random jitter added to the renewal time.

sasl.kerberos.ticket.renew.window.factor (type: double, default: 0.8, importance: low)
  Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

ssl.cipher.suites (type: list, default: null, importance: low)
  A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

ssl.endpoint.identification.algorithm (type: string, default: null, importance: low)
  The endpoint identification algorithm to validate server hostname using server certificate.

ssl.keymanager.algorithm (type: string, default: SunX509, importance: low)
  The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

ssl.secure.random.implementation (type: string, default: null, importance: low)
  The SecureRandom PRNG implementation to use for SSL cryptography operations.

ssl.trustmanager.algorithm (type: string, default: PKIX, importance: low)
  The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
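To see how these options fit together, here is a minimal sketch of a consumer built on the new (0.9+) consumer API; the group id demo-group is a placeholder, and the topic is the Hello-Kafka topic from these notes:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // initial broker list
        props.put("group.id", "demo-group");              // consumer group this consumer joins
        props.put("enable.auto.commit", "true");          // commit offsets in the background
        props.put("auto.commit.interval.ms", "5000");     // auto-commit frequency (ms)
        props.put("auto.offset.reset", "earliest");       // start from the beginning if no committed offset
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("Hello-Kafka"));
        while (true) {
            // poll must be called regularly, or the group coordinator will consider this consumer dead
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}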


 
