Kafka Fundamentals

I. Overview

  1. Kafka is an open-source messaging system written in Scala.
  2. Its goal is to provide a unified, high-throughput, low-latency platform for handling real-time data.
  3. Kafka is a distributed message queue.
  4. Both the Kafka cluster and the consumers rely on a ZooKeeper ensemble to store metadata and keep the system available.

II. Architecture

  1. Producer: the message producer, i.e. the client that publishes messages to Kafka brokers.
  2. Consumer: the message consumer, i.e. the client that fetches messages from Kafka brokers.
  3. Topic: can be thought of as a queue.
  4. Consumer Group (CG): Kafka's mechanism for both broadcasting a topic's messages (delivering them to every consumer) and unicasting them (delivering each message to exactly one consumer); see the sketch after this list.
  5. Broker: a single Kafka server is a broker; a cluster consists of several brokers, and one broker can host several topics.
  6. Partition: for scalability, a very large topic can be spread across several brokers (servers); a topic is divided into one or more partitions, each of which is an ordered queue.
  7. Offset: Kafka names its storage files after the offset they start at; naming files by offset makes messages easy to locate.
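
  To make item 4 concrete, here is a minimal sketch (not from the original article) of how group.id controls unicast versus broadcast; the broker address hadoop101:9092 and the topic "first" are placeholders reused from the examples later in this document.

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class GroupDemo {

        // Helper that builds a consumer subscribed to "first" for the given group
        private static KafkaConsumer<String, String> newConsumer(String groupId) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "hadoop101:9092");
            props.put("group.id", groupId);
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("first"));
            return consumer;
        }

        public static void main(String[] args) {
            // Same group: the topic's partitions are split between c1 and c2,
            // so each message is delivered to only one of them (unicast).
            KafkaConsumer<String, String> c1 = newConsumer("groupA");
            KafkaConsumer<String, String> c2 = newConsumer("groupA");
            // Different group: c3 additionally receives every message (broadcast).
            KafkaConsumer<String, String> c3 = newConsumer("groupB");
            // Each consumer would then call poll() in its own thread to read records.
        }
    }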

III. Kafka Command-Line Operations

  1. List all topics on the current server

    kafka-topics.sh --zookeeper hadoop101:2181 --list
    
  2. Create a topic

    kafka-topics.sh --zookeeper hadoop101:2181 --create --replication-factor 3 --partitions 1 --topic first
    
  3. Delete a topic (only takes effect if delete.topic.enable=true is set in server.properties; see the broker configuration table in section XI)

    kafka-topics.sh --zookeeper hadoop101:2181 --delete --topic first
    
  4. Send messages

    kafka-console-producer.sh --broker-list hadoop101:9092 --topic first
    
  5. Consume messages

    kafka-console-consumer.sh --zookeeper hadoop101:2181 --from-beginning --topic first
    
  6. Describe a topic

    kafka-topics.sh --zookeeper hadoop101:2181 --describe --topic first
    

IV. How Kafka Producers Write Data

  1. Write path: the producer pushes messages to the broker; each message is appended to a partition, which makes it a sequential disk write (sequential disk writes are far faster than random writes, and this is key to Kafka's throughput).
  2. Partitioning: every message is sent to a topic, which is essentially a directory made up of partition logs.
    1. Why partition: it makes a topic easy to scale across the cluster and increases parallelism.
    2. Partitioning rules (see the sketch after this list):
      1. If a partition is specified explicitly, it is used as-is.
      2. If no partition is specified but a key is, the key is hashed to pick a partition.
      3. If neither a partition nor a key is specified, a partition is chosen round-robin.
  3. Replication: a partition may have several replicas (controlled by default.replication.factor=N in server.properties). Without replication, once a broker goes down, none of the partitions on it can be consumed, and producers can no longer write to them. With replication, a leader is elected among the replicas of each partition; producers and consumers talk only to that leader, while the remaining replicas act as followers that copy data from it.
  4. Write flow:
    1. The producer looks up the partition's leader from the "/brokers/…/state" node in ZooKeeper.
    2. The producer sends the message to that leader.
    3. The leader writes the message to its local log.
    4. The followers pull the message from the leader, write it to their local logs, and send an ACK back to the leader.
    5. Once the leader has received ACKs from every replica in the ISR, it advances the HW (high watermark, the offset of the last committed message) and sends an ACK to the producer.
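
  The partitioning rules above can be summarized in a short Java sketch. This is a simplified illustration only, not the actual code of Kafka's DefaultPartitioner (which hashes keys with murmur2 rather than hashCode):

    import java.util.concurrent.atomic.AtomicInteger;

    public class PartitionChooser {

        private final AtomicInteger roundRobin = new AtomicInteger(0);

        // partition: explicitly requested partition, or null
        // key:       record key, or null
        public int choose(Integer partition, Object key, int numPartitions) {
            if (partition != null) {
                return partition;                                            // rule 1: explicit partition wins
            }
            if (key != null) {
                return (key.hashCode() & Integer.MAX_VALUE) % numPartitions; // rule 2: hash the key
            }
            return (roundRobin.getAndIncrement() & Integer.MAX_VALUE) % numPartitions; // rule 3: round-robin
        }
    }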

V. How Brokers Store Messages

  1. Storage layout: physically, a topic is split into one or more partitions (controlled by num.partitions=3 in server.properties); each partition maps to a directory on disk that holds all of that partition's messages and index files.
  2. Retention policy: Kafka keeps every message, whether or not it has been consumed.
    1. Two policies can delete old data (see the server.properties excerpt after this list):
      1. By time: log.retention.hours=168
      2. By size: log.retention.bytes=1073741824
    2. Note that because looking up a specific message in Kafka takes O(1) time, i.e. independent of file size, deleting expired segments is about reclaiming disk space and has nothing to do with improving Kafka's performance.
  3. Note: producers do not register themselves in ZooKeeper; consumers do.
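
  For reference, a minimal server.properties excerpt combining the retention settings mentioned above (the first two values are the ones quoted in this section; the check-interval default of 5 minutes comes from the broker configuration table in section XI):

    # Delete log segments older than 7 days (168 hours)
    log.retention.hours=168
    # Or delete the oldest segments once a partition's log exceeds 1 GiB
    log.retention.bytes=1073741824
    # How often the retention policies are checked (5 minutes)
    log.retention.check.interval.ms=300000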

VI. How Kafka Consumers Read Data

  1. Consumption model: the consumer pulls data from the broker; the pull model lets each consumer fetch messages at a rate matched to its own processing capacity.

  2. Consumer groups: consumers work as consumer groups, in which one or more consumers together consume a topic. Each partition can be read by only one consumer within a group at any given time, but several groups can consume the same partition simultaneously.

  3. Consumer group walkthrough

    1. Goal: verify that within one consumer group only a single consumer receives any given message.

    2. On hadoop101 and hadoop102, set the group.id property in /opt/module/kafka/config/consumer.properties to any group name:

      group.id=kgg
      
    3. Start a console consumer on hadoop101 and on hadoop102:

      kafka-console-consumer.sh --zookeeper hadoop101:2181 --topic first --consumer.config config/consumer.properties
      
    4. Start a console producer on hadoop103:

      kafka-console-producer.sh --broker-list hadoop101:9092 --topic first
      
    5. Watch the consumers on hadoop101 and hadoop102: at any given moment only one of them receives each message.

VII. Kafka API in Practice

  1. Add the Maven dependencies

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.11.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.12</artifactId>
            <version>0.11.0.0</version>
        </dependency>
    </dependencies>
    
  2. Kafka producer (new API)

    package com.kgg.kafka;

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class NewProducer {

        public static void main(String[] args) {

            Properties props = new Properties();
            // Kafka broker host name and port
            props.put("bootstrap.servers", "hadoop102:9092");
            // Wait for acknowledgement from all replicas
            props.put("acks", "all");
            // Maximum number of retries when a send fails
            props.put("retries", 0);
            // Batch size in bytes
            props.put("batch.size", 16384);
            // How long to linger before sending a batch (ms)
            props.put("linger.ms", 1);
            // Total memory available for buffering records to be sent
            props.put("buffer.memory", 33554432);
            // Key serializer
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // Value serializer
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            Producer<String, String> producer = new KafkaProducer<>(props);
            for (int i = 0; i < 50; i++) {
                producer.send(new ProducerRecord<String, String>("first", Integer.toString(i), "hello world-" + i));
            }

            producer.close();
        }
    }
    
  3. Producer with a callback (new API)

    package com.kgg.kafka;

    import java.util.Properties;
    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class CallBackProducer {

        public static void main(String[] args) {

            Properties props = new Properties();
            // Kafka broker host name and port
            props.put("bootstrap.servers", "hadoop102:9092");
            // Wait for acknowledgement from all replicas
            props.put("acks", "all");
            // Maximum number of retries when a send fails
            props.put("retries", 0);
            // Batch size in bytes
            props.put("batch.size", 16384);
            // How long to linger before sending a batch (ms)
            props.put("linger.ms", 1);
            // Total memory available for buffering records to be sent
            props.put("buffer.memory", 33554432);
            // Key serializer
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // Value serializer
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> kafkaProducer = new KafkaProducer<String, String>(props);

            for (int i = 0; i < 50; i++) {

                kafkaProducer.send(new ProducerRecord<String, String>("first", "hello" + i), new Callback() {

                    // Invoked once the record has been acknowledged or the send has failed
                    @Override
                    public void onCompletion(RecordMetadata metadata, Exception exception) {

                        if (metadata != null) {
                            System.err.println(metadata.partition() + "---" + metadata.offset());
                        }
                    }
                });
            }

            kafkaProducer.close();
        }
    }
    
  4. Producer with a custom partitioner

    1. Goal: store every record in partition 0 of the topic

    2. Define a class that implements the Partitioner interface and override its method (deprecated API)

      package com.kgg.kafka;

      import kafka.producer.Partitioner;

      public class CustomPartitioner implements Partitioner {

          public CustomPartitioner() {
              super();
          }

          @Override
          public int partition(Object key, int numPartitions) {
              // Always route to partition 0
              return 0;
          }
      }
      
    3. Custom partitioner (new API)

      package com.kgg.kafka;

      import java.util.Map;
      import org.apache.kafka.clients.producer.Partitioner;
      import org.apache.kafka.common.Cluster;

      public class CustomPartitioner implements Partitioner {

          @Override
          public void configure(Map<String, ?> configs) {

          }

          @Override
          public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
              // Always route to partition 0
              return 0;
          }

          @Override
          public void close() {

          }
      }
      
    4. Use it from the producer

      package com.kgg.kafka;

      import java.util.Properties;
      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.Producer;
      import org.apache.kafka.clients.producer.ProducerRecord;

      public class PartitionerProducer {

          public static void main(String[] args) {

              Properties props = new Properties();
              // Kafka broker host name and port
              props.put("bootstrap.servers", "hadoop102:9092");
              // Wait for acknowledgement from all replicas
              props.put("acks", "all");
              // Maximum number of retries when a send fails
              props.put("retries", 0);
              // Batch size in bytes
              props.put("batch.size", 16384);
              // How long to linger before sending a batch (ms)
              props.put("linger.ms", 1);
              // Total memory available for buffering records to be sent
              props.put("buffer.memory", 33554432);
              // Key serializer
              props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
              // Value serializer
              props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
              // Custom partitioner (must match the fully qualified name of the class defined above)
              props.put("partitioner.class", "com.kgg.kafka.CustomPartitioner");

              Producer<String, String> producer = new KafkaProducer<String, String>(props);
              producer.send(new ProducerRecord<String, String>("first", "1", "kgg"));

              producer.close();
          }
      }
      
  5. Kafka consumer (new API)

    package com.kgg.kafka.consume;

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class CustomNewConsumer {

        public static void main(String[] args) {

            Properties props = new Properties();
            // Kafka broker addresses; there is no need to list every broker
            props.put("bootstrap.servers", "hadoop101:9092");
            // Consumer group this consumer belongs to
            props.put("group.id", "test");
            // Whether offsets are committed automatically
            props.put("enable.auto.commit", "true");
            // Interval between automatic offset commits
            props.put("auto.commit.interval.ms", "1000");
            // Key deserializer class
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            // Value deserializer class
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            // Create the consumer
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

            // Topics to subscribe to; several may be subscribed at once
            consumer.subscribe(Arrays.asList("first", "second", "third"));

            while (true) {
                // Poll for data, waiting at most 100 ms
                ConsumerRecords<String, String> records = consumer.poll(100);

                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
        }
    }
    

VIII. Kafka Producer Interceptors

  1. Interceptors implement the org.apache.kafka.clients.producer.ProducerInterceptor interface.

  2. The interface defines the following methods:

    1. configure(configs): called when the interceptor reads its configuration and initializes.
    2. onSend(ProducerRecord): the producer guarantees this is called before the record is serialized and before its partition is computed.
    3. onAcknowledgement(RecordMetadata, Exception): called once the record has been acknowledged or the send has failed.
    4. close: shuts the interceptor down, mainly used for resource cleanup.
  3. Case study

    1. Goal: build a simple chain of two interceptors. The first prepends a timestamp to the record value before the record is sent; the second counts successful and failed sends after each record has been acknowledged.

    2. Timestamp interceptor

      package com.kgg.kafka.interceptor;

      import java.util.Map;
      import org.apache.kafka.clients.producer.ProducerInterceptor;
      import org.apache.kafka.clients.producer.ProducerRecord;
      import org.apache.kafka.clients.producer.RecordMetadata;

      public class TimeInterceptor implements ProducerInterceptor<String, String> {

          @Override
          public void configure(Map<String, ?> configs) {

          }

          @Override
          public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
              // Build a new record with the current timestamp prepended to the value
              return new ProducerRecord<String, String>(record.topic(), record.partition(), record.timestamp(), record.key(),
                      System.currentTimeMillis() + "," + record.value());
          }

          @Override
          public void onAcknowledgement(RecordMetadata metadata, Exception exception) {

          }

          @Override
          public void close() {

          }
      }
      
    3. Counter interceptor: counts successful and failed sends, and prints both counters when the producer is closed

      package com.kgg.kafka.interceptor;

      import java.util.Map;
      import org.apache.kafka.clients.producer.ProducerInterceptor;
      import org.apache.kafka.clients.producer.ProducerRecord;
      import org.apache.kafka.clients.producer.RecordMetadata;

      public class CounterInterceptor implements ProducerInterceptor<String, String> {

          private int errorCounter = 0;
          private int successCounter = 0;

          @Override
          public void configure(Map<String, ?> configs) {

          }

          @Override
          public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
              // Pass the record through unchanged
              return record;
          }

          @Override
          public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
              // Count successes and failures
              if (exception == null) {
                  successCounter++;
              } else {
                  errorCounter++;
              }
          }

          @Override
          public void close() {
              // Print the final counts
              System.out.println("Successful sent: " + successCounter);
              System.out.println("Failed sent: " + errorCounter);
          }
      }
      
    4. Producer main program

      package com.kgg.kafka.interceptor;

      import java.util.ArrayList;
      import java.util.List;
      import java.util.Properties;
      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.Producer;
      import org.apache.kafka.clients.producer.ProducerConfig;
      import org.apache.kafka.clients.producer.ProducerRecord;

      public class InterceptorProducer {

          public static void main(String[] args) throws Exception {
              // 1. Producer configuration
              Properties props = new Properties();
              props.put("bootstrap.servers", "hadoop101:9092");
              props.put("acks", "all");
              props.put("retries", 0);
              props.put("batch.size", 16384);
              props.put("linger.ms", 1);
              props.put("buffer.memory", 33554432);
              props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
              props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

              // 2. Build the interceptor chain (order matters: TimeInterceptor runs first)
              List<String> interceptors = new ArrayList<String>();
              interceptors.add("com.kgg.kafka.interceptor.TimeInterceptor");
              interceptors.add("com.kgg.kafka.interceptor.CounterInterceptor");
              props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, interceptors);

              String topic = "first";
              Producer<String, String> producer = new KafkaProducer<String, String>(props);

              // 3. Send messages
              for (int i = 0; i < 10; i++) {
                  ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, "message" + i);
                  producer.send(record);
              }

              // 4. Be sure to close the producer; that is what triggers the interceptors' close() methods
              producer.close();
          }
      }
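
    5. Verify: run the console consumer from section III on the same topic (command copied from that section); each value should arrive with a timestamp prefix, and the producer prints the success/failure counters when it exits.

      kafka-console-consumer.sh --zookeeper hadoop101:2181 --from-beginning --topic first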
      

IX. Kafka vs. Flume (important)

  1. Flume (developed by Cloudera):

    1. suits scenarios with many producers;
    2. suits scenarios with few downstream data consumers;
    3. suits data with modest durability requirements;
    4. suits feeding data into the Hadoop ecosystem.
  2. Kafka (developed by LinkedIn):

    1. suits scenarios with many downstream data consumers;
    2. suits data with higher durability requirements, since it supports replication.
  3. A commonly used pipeline (the second Flume hop is added or removed as the scenario requires):

    online data --> Flume --> Kafka --> Flume (optional) --> HDFS

X. Integrating Flume with Kafka (important)

# define
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F -c +0 /opt/module/datas/flume.log
a1.sources.r1.shell = /bin/bash -c

# sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = hadoop101:9092,hadoop102:9092,hadoop103:9092
a1.sinks.k1.kafka.topic = first
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1

# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# bind
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
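
The configuration above tails /opt/module/datas/flume.log with an exec source and writes the events to the Kafka topic first. Assuming it is saved as jobs/flume-kafka.conf (a hypothetical path), the agent can be started and the result checked with the console consumer already used in section III:

bin/flume-ng agent --conf conf/ --name a1 --conf-file jobs/flume-kafka.conf
echo "hello kafka" >> /opt/module/datas/flume.log
kafka-console-consumer.sh --zookeeper hadoop101:2181 --from-beginning --topic first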

XI. Kafka Configuration Reference (for reference)

  1. Broker configuration (each entry is listed as property, default value, description)

    broker.id (no default): required; the unique identifier of the broker.
    log.dirs (/tmp/kafka-logs): directory where Kafka data is stored. Several comma-separated directories may be given; a newly created partition is placed in the directory that currently holds the fewest partitions.
    port (9092): port on which the broker accepts client connections.
    zookeeper.connect (null): ZooKeeper connection string, in the form hostname1:port1,hostname2:port2,hostname3:port3. One or more hosts may be listed; listing them all improves reliability. A chroot path may be appended so this cluster's data is kept separate from other applications, e.g. hostname1:port1,hostname2:port2,hostname3:port3/chroot/path; note that consumers must use the same value.
    message.max.bytes (1000000): largest message size the server accepts. Keep it consistent with the consumers' maximum fetch/message size, otherwise producers may write messages that consumers cannot fetch.
    num.io.threads (8): number of I/O threads the server uses to serve read and write requests; it should be at least the number of disks on the server.
    queued.max.requests (500): size of the request queue for the I/O threads; once the number of outstanding requests exceeds it, the network threads stop accepting new requests.
    socket.send.buffer.bytes (100 * 1024): the SO_SNDBUF buffer the server prefers for socket connections.
    socket.receive.buffer.bytes (100 * 1024): the SO_RCVBUF buffer the server prefers for socket connections.
    socket.request.max.bytes (100 * 1024 * 1024): maximum request size the server accepts, to guard against out-of-memory errors; it should be smaller than the Java heap size.
    num.partitions (1): default number of partitions for a topic created without an explicit partition count; the author suggests raising it to 5.
    log.segment.bytes (1024 * 1024 * 1024): segment file size; a new segment is rolled once this size is exceeded. Can be overridden per topic.
    log.roll.{ms,hours} (24 * 7 hours): time after which a new segment file is rolled. Can be overridden per topic.
    log.retention.{ms,minutes,hours} (7 days): retention period for Kafka segment logs; logs older than this are deleted. Can be overridden per topic. With large data volumes, consider lowering it.
    log.retention.bytes (-1): maximum size of each partition; once exceeded, the oldest data is deleted. Note that this applies per partition, not per topic. Can be overridden at the log level.
    log.retention.check.interval.ms (5 minutes): how often the deletion policies are checked.
    auto.create.topics.enable (true): whether topics are created automatically; the author suggests false, so topic management stays strict and producers cannot write to misspelled topics.
    default.replication.factor (1): default replication factor; the author suggests raising it to 2.
    replica.lag.time.max.ms (10000): if the leader receives no fetch request from a follower within this window, the follower is removed from the ISR (in-sync replicas).
    replica.lag.max.messages (4000): if a replica falls behind the leader by more than this many messages, the leader removes it from the ISR.
    replica.socket.timeout.ms (30 * 1000): timeout for requests a replica sends to the leader.
    replica.socket.receive.buffer.bytes (64 * 1024): the socket receive buffer for network requests to the leader for replicating data.
    replica.fetch.max.bytes (1024 * 1024): the number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader.
    replica.fetch.wait.max.ms (500): the maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas.
    num.replica.fetchers (1): number of threads used to replicate messages from leaders. Increasing this value increases the degree of I/O parallelism in the follower broker.
    fetch.purgatory.purge.interval.requests (1000): the purge interval (in number of requests) of the fetch request purgatory.
    zookeeper.session.timeout.ms (6000): ZooKeeper session timeout. If the broker fails to heartbeat to ZooKeeper within this time, ZooKeeper considers it dead. Too low and brokers are easily marked dead; too high and real failures are detected late.
    zookeeper.connection.timeout.ms (6000): timeout for the client's connection to ZooKeeper.
    zookeeper.sync.time.ms (2000): how far a ZooKeeper follower may lag behind the ZooKeeper leader.
    controlled.shutdown.enable (true): enables controlled shutdown; before shutting down, the broker moves all the leaders it hosts to other brokers. Recommended, as it improves cluster stability.
    auto.leader.rebalance.enable (true): if enabled, the controller automatically balances partition leadership among the brokers by periodically returning leadership to the "preferred" replica of each partition when it is available.
    leader.imbalance.per.broker.percentage (10): the percentage of leader imbalance allowed per broker; the controller rebalances leadership if this ratio is exceeded.
    leader.imbalance.check.interval.seconds (300): the frequency with which to check for leader imbalance.
    offset.metadata.max.bytes (4096): the maximum amount of metadata clients may save with their offsets.
    connections.max.idle.ms (600000): idle connection timeout; the server socket processor threads close connections that idle longer than this.
    num.recovery.threads.per.data.dir (1): the number of threads per data directory used for log recovery at startup and flushing at shutdown.
    unclean.leader.election.enable (true): whether replicas not in the ISR may be elected leader as a last resort, even though doing so may result in data loss.
    delete.topic.enable (false): whether topics can be deleted; the author suggests setting it to true.
    offsets.topic.num.partitions (50): the number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, a higher setting (e.g. 100-200) is recommended for production.
    offsets.topic.retention.minutes (1440): offsets older than this are marked for deletion. The actual purge happens when the log cleaner compacts the offsets topic.
    offsets.retention.check.interval.ms (600000): the frequency at which the offset manager checks for stale offsets.
    offsets.topic.replication.factor (3): the replication factor for the offset commit topic. A higher setting (e.g. three or four) is recommended for availability. If the offsets topic is created while fewer brokers than the replication factor are available, it is created with fewer replicas.
    offsets.topic.segment.bytes (104857600): segment size for the offsets topic. Since it is a compacted topic, this should be kept relatively low to allow faster log compaction and loading.
    offsets.load.buffer.size (5242880): batch size (in bytes) used when reading from the offsets segments while loading offsets into the offset manager's cache (this load happens when a broker becomes the offset manager for a set of consumer groups, i.e. the leader of an offsets topic partition).
    offsets.commit.required.acks (-1): the number of acknowledgements required before an offset commit is accepted, similar to the producer's acknowledgement setting. The default should normally not be overridden.
    offsets.commit.timeout.ms (5000): the offset commit is delayed until this timeout expires or the required number of replicas have received it, similar to the producer request timeout.
  2. Producer configuration (each entry is listed as property, default value, description)

    metadata.broker.list (no default): list of brokers the producer queries at startup; it may be a subset of the cluster. Note that it is only used to fetch topic metadata; the producer then picks suitable brokers from that metadata and opens socket connections to them. Format: host1:port1,host2:port2.
    request.required.acks (0): how many acknowledgements the broker must provide before a produce request is considered complete.
    request.timeout.ms (10000): how long the broker waits for acks; if exceeded, an error is returned to the client.
    producer.type (sync): synchronous or asynchronous mode (sync or async). In async mode the producer pushes data in batches, which greatly improves broker throughput; async is recommended.
    serializer.class (kafka.serializer.DefaultEncoder): serializer class; the default serializes values to byte[].
    key.serializer.class (no default): serializer class for keys; defaults to the same as serializer.class.
    partitioner.class (kafka.producer.DefaultPartitioner): partitioner class; the default hashes the key.
    compression.codec (none): compression codec for producer messages; one of "none", "gzip" and "snappy".
    compressed.topics (null): topics for which compression is enabled. If a codec is chosen above, compression applies only to the topics listed here; if this is empty, it applies to all topics.
    message.send.max.retries (3): number of retries when a send fails. Network problems may lead to repeated retries.
    retry.backoff.ms (100): before each retry, the producer refreshes the metadata of the relevant topics to see whether a new leader has been elected. Since leader election takes a bit of time, this property specifies how long the producer waits before refreshing the metadata.
    topic.metadata.refresh.interval.ms (600 * 1000): the producer generally refreshes topic metadata from the brokers on failure (partition missing, leader not available, ...). It also polls regularly (by default every 10 minutes, i.e. 600000 ms). A negative value means metadata is refreshed only on failure; zero means it is refreshed after each message sent (not recommended). Important note: the refresh happens only AFTER a message is sent, so if the producer never sends a message the metadata is never refreshed.
    queue.buffering.max.ms (5000): in async mode, how long the producer buffers messages. Set to 1000, for example, it buffers one second of data before sending, which greatly increases broker throughput at the cost of latency.
    queue.buffering.max.messages (10000): in async mode, the maximum number of messages buffered in the producer queue; beyond this, the producer either blocks or drops messages.
    queue.enqueue.timeout.ms (-1): how long the producer blocks when the limit above is reached. 0 means it does not block and drops messages as soon as the buffer is full; -1 means it blocks indefinitely and never drops messages.
    batch.num.messages (200): in async mode, the number of messages buffered per batch; the producer sends only once this count is reached.
    send.buffer.bytes (100 * 1024): socket write buffer size.
    client.id (""): a user-supplied string sent with each request to help trace calls; it should logically identify the application making the request.
  3. Consumer configuration (each entry is listed as property, default value, description)

    group.id (no default): consumer group ID; consumers with the same group.id belong to the same group.
    zookeeper.connect (no default): the consumer's ZooKeeper connection string; it must match the brokers' configuration.
    consumer.id (null): generated automatically if not set.
    socket.timeout.ms (30 * 1000): socket timeout for network requests. The effective timeout is max.fetch.wait + socket.timeout.ms.
    socket.receive.buffer.bytes (64 * 1024): the socket receive buffer for network requests.
    fetch.message.max.bytes (1024 * 1024): maximum message size fetched per topic-partition request. The consumer caches this much data per partition in memory, so the setting bounds the consumer's memory use. It must be at least as large as the broker's maximum message size, otherwise the producer may write messages the consumer cannot fetch.
    num.consumer.fetchers (1): the number of fetcher threads used to fetch data.
    auto.commit.enable (true): if true, the consumer periodically commits the offset it has consumed up to into ZooKeeper; after a failure and restart, consumption resumes from that committed offset.
    auto.commit.interval.ms (60 * 1000): interval at which the consumer commits offsets to ZooKeeper.
    queued.max.message.chunks (2): number of message chunks buffered for consumption; each chunk can hold up to fetch.message.max.bytes of data.
    fetch.min.bytes (1): the minimum amount of data the server should return for a fetch request; if insufficient data is available, the request waits for that much data to accumulate before answering.
    fetch.wait.max.ms (100): the maximum amount of time the server blocks before answering a fetch request when there is not enough data to satisfy fetch.min.bytes immediately.
    rebalance.backoff.ms (2000): backoff time between retries during a rebalance.
    refresh.leader.backoff.ms (200): backoff time before trying to determine the leader of a partition that has just lost its leader.
    auto.offset.reset (largest): what to do when there is no initial offset in ZooKeeper or the offset is out of range; smallest resets to the smallest offset, largest resets to the largest offset, anything else throws an exception to the consumer.
    consumer.timeout.ms (-1): the consumer throws an exception if no message is consumed within this time.
    exclude.internal.topics (true): whether messages from internal topics (such as offsets) should be exposed to the consumer.
    zookeeper.session.timeout.ms (6000): ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper for this period, it is considered dead and a rebalance occurs.
    zookeeper.connection.timeout.ms (6000): the maximum time the client waits while establishing a connection to ZooKeeper.
    zookeeper.sync.time.ms (2000): how far a ZooKeeper follower may lag behind the ZooKeeper leader.