kafka

1. Introduction to Kafka:

Kafka was originally developed at LinkedIn. It is a distributed, partitioned, multi-replica, multi-subscriber distributed log system coordinated through ZooKeeper (it can also serve as an MQ system), commonly used for web/nginx logs, access logs, messaging services, and so on. LinkedIn contributed it to the Apache Software Foundation in 2010, and it went on to become a top-level open-source project.

Its main application scenarios are log collection systems and messaging systems.

Kafka's main design goals are as follows:

  • Provide message persistence with O(1) time complexity, guaranteeing constant-time access performance even for terabytes of data and beyond.
  • High throughput: a single node can handle the transmission of 100K messages per second, even on very cheap commodity hardware.
  • Support message partitioning across Kafka servers and distributed consumption, while guaranteeing in-order delivery of messages within each partition.
  • Support both offline and real-time data processing.
  • Scale out: support online horizontal scaling.

2. Installing Kafka: a. Unpack the archive  b. Edit the configuration file server.properties

1) Set broker.id on hdp-1, hdp-2 and hdp-3 to 1, 2 and 3 respectively (any values work as long as they do not conflict).

2) Set the log directory (log.dirs) to /root/kafkadata/kafka-logs.

3) Set the ZooKeeper connection addresses, separating multiple addresses with commas.

4) In the Socket Server Settings section, set listeners=PLAINTEXT://hdp-1:9092 (using each host's own hostname).

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=3

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://hdp-3:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/root/kafkadata/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=hdp-1:2181,hdp-2:2181,hdp-3:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

3. Producer and consumer

     The producer produces data; the consumer receives (consumes) data.

4. Start Kafka

a. Start ZooKeeper     b. Start Kafka

a. Start ZooKeeper
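
A minimal sketch, assuming a standard ZooKeeper installation on every node, is to run the following from ZooKeeper's bin directory on each machine:

./zkServer.sh start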

 

b. Start Kafka

./kafka-server-start.sh -daemon ../config/server.properties
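
After starting the broker on each node, one simple way to check that it is running (assuming a JDK is installed) is to look for the Kafka process with jps:

jps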

5. Start a producer on one CentOS machine to send data

 

If the topic has not been created yet, create it first:

bin/kafka-topics.sh --create --zookeeper hdp-1:2181 --replication-factor 1 --partitions 1 --topic animal

Parameter description: --create  create the topic

--zookeeper: the ZooKeeper host(s) to connect to

--replication-factor: the number of replicas

--partitions: the number of partitions

--topic: the topic name

bin/kafka-console-producer.sh --broker-list hdp-2:9092 --topic animal

 

Start a consumer on another CentOS machine to receive the data

bin/kafka-console-consumer.sh --bootstrap-server hdp-2:9092 --topic animal --from-beginning

6. Use Flume to collect data and sink it into Kafka  -------- configuration file tail-kafka

a1.sources = source1
a1.sinks = k1
a1.channels = c1

a1.sources.source1.type = exec
a1.sources.source1.command = tail -F /root/log/access.log

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = animal
a1.sinks.k1.brokerList = hdp-2:9092,hdp-3:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.source1.channels = c1
a1.sinks.k1.channel = c1
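
To run the agent with this configuration (assuming the file is saved as conf/tail-kafka.conf under the Flume home directory; adjust the path to your layout), a command along the following lines can be used:

bin/flume-ng agent --conf conf --conf-file conf/tail-kafka.conf --name a1 -Dflume.root.logger=INFO,console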

7. Java consumer code

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.zpark.kafkatest</groupId>
    <artifactId>kafkatest</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.12</artifactId>
            <!-- Version added so the POM resolves; assumed to match kafka-clients below -->
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.2.0</version>
        </dependency>
    </dependencies>

</project>
ConsumerDemo
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerDemo {
    private static KafkaConsumer<String, String> consumer;
    private static Properties props;

    static {
        props = new Properties();
        // Kafka broker address for the consumer
        props.put("bootstrap.servers", "hdp-3:9092");
        // Key deserializer
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Value deserializer
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Consumer group
        props.put("group.id", "yangk");
    }

    /**
     * Fetch data from Kafka (Spring Boot also ships a Kafka integration).
     */
    private static void ConsumerMessage() {
        // Allow automatic offset commits
        props.put("enable.auto.commit", true);
        consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Collections.singleton("animal"));

        // Poll for data in a loop. Kafka removes messages after the configured retention time;
        // a message that has already been consumed can be read again via its offset.
        try {
            while (true) {
                // Records read from Kafka are returned in `records`
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("topic = %s, offset = %s, key = %s, value = %s%n", r.topic(), r.offset(),
                            r.key(), r.value());
                }
            }
        } finally {
            consumer.close();
        }
    }

    public static void main(String[] args) {
        ConsumerMessage();
    }
}

Scala producer:

package com.km.sparkdemo.shixun

/**
  * @Author Lucas
  * @Date 2020/6/23 14:05
  * @Version 1.0
  */
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord, RecordMetadata}

/**
  * Producer implementation
  */
object KafkaProducerDemo {
  def main(args: Array[String]): Unit = {
    val prop = new Properties
    // Kafka broker list to connect to
    prop.put("bootstrap.servers", "hdp-2:9092")
    // Acknowledgement mode
    //prop.put("acks", "0")
    prop.put("acks", "all")
    // Number of retries on request failure
    //prop.put("retries", "3")
    // Key serializer
    prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    // Value serializer
    prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    // Request timeout
    prop.put("request.timeout.ms", "60000")
    //prop.put("batch.size", "16384")
    //prop.put("linger.ms", "1")
    //prop.put("buffer.memory", "33554432")

    // Create the producer instance
    val producer = new KafkaProducer[String, String](prop)

    // Generate some test data and send it to Kafka
    for (i <- 1 to 100) {
      val msg = s"${i}: this is an animal ${i} kafka data"
      println("send -->" + msg)
      // Block on the returned Future to get the record metadata
      val rmd: RecordMetadata = producer.send(new ProducerRecord[String, String]("animal", msg)).get()
      println(rmd.toString)
      Thread.sleep(500)
    }

    producer.close()
  }
}

Scala consumer:

package com.km.sparkdemo.shixun

/**
  * @Author Lucas
  * @Date 2020/6/23 14:01
  * @Version 1.0
  */
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.{ConsumerRecords, KafkaConsumer}
/**
  * Consumer implementation
  */
object KafkaConsumerDemo {
  def main(args: Array[String]): Unit = {
    // Configuration
    val prop = new Properties
    prop.put("bootstrap.servers", "hdp-2:9092")
    // Consumer group
    prop.put("group.id", "group01")
    // Where to start when there is no committed offset: earliest/latest/none
    prop.put("auto.offset.reset", "earliest")
    // Key deserializer
    prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    // Value deserializer
    prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    prop.put("enable.auto.commit", "true")
    prop.put("session.timeout.ms", "30000")
    // Create the consumer instance
    val kafkaConsumer = new KafkaConsumer[String, String](prop)
    // Subscribe to the topic first
    kafkaConsumer.subscribe(Collections.singletonList("animal"))
    // Start consuming
    while (true) {
      // If Kafka has no messages, poll() waits up to the timeout (2 seconds here) before returning;
      // if there are unconsumed messages, it returns immediately without waiting.
      val msgs: ConsumerRecords[String, String] = kafkaConsumer.poll(2000)
      // println(msgs.count())
      val it = msgs.iterator()
      while (it.hasNext) {
        val msg = it.next()
        println(s"partition: ${msg.partition()}, offset: ${msg.offset()}, key: ${msg.key()}, value: ${msg.value()}")
      }
    }
  }
}

Explanation of producer configuration parameters:

bootstrap.servers: the addresses of the Kafka cluster brokers
key.serializer: serializer used for the message key
value.serializer: serializer used for the message value
acks: how many partition replicas must receive the message before the server sends a response to the producer. Valid values are 0, 1 and all (-1). With 0, the producer fires and forgets, without waiting for any acknowledgement from the Kafka server; with all, the response is returned only once every in-sync replica of the partition has confirmed receipt.
buffer.memory: the size of the producer's memory buffer. If the producer produces messages faster than they can be sent to Kafka, messages pile up in the buffer until it runs out of space; at that point send() either blocks or throws an exception.
max.block.ms: how long send() may block before throwing an exception; the default is 60 s.
compression.type: messages can be compressed before being sent to Kafka to reduce storage and network bandwidth. The default is none (no compression); the available algorithms include snappy, gzip and lz4.
retries: sending a message to Kafka can fail with transient errors (for example, a brief network outage) or permanent ones (for example, "message too large"); transient errors can be handled by retrying the send.
retry.backoff.ms: the interval between retries, i.e. how long to wait after a failure before trying again.
batch.size: the producer groups messages headed for the same partition into a batch and sends the batch as a whole, which saves network bandwidth and improves performance. This parameter caps the size of one batch. The producer does not wait for a batch to fill up before sending it; a half-full batch, or even a single message, may be sent. A large value therefore does not delay sends, but it does use more memory; a value that is too small increases the number of sends and adds extra I/O overhead.
linger.ms: how long the producer waits before sending a batch so that more messages can join it. This adds some latency, but reduces I/O overhead and increases throughput.
client.id: an arbitrary string the server uses to identify the source of messages.
max.in.flight.requests.per.connection: how many additional messages the producer may send on a connection before receiving a response from the server. Combined with retries, this parameter can be used to preserve message ordering.
request.timeout.ms: how long the producer waits for a response from the server after sending a message.
max.request.size: the maximum size of a producer request; this may be a single message or an entire batch of messages.
receive.buffer.bytes and send.buffer.bytes: the sizes of the TCP socket receive and send buffers (SO_RCVBUF / SO_SNDBUF).
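
To make these parameters concrete, here is a minimal sketch of a producer that sets several of them explicitly (the object name TunedProducerDemo and the specific values are illustrative assumptions; the broker address and topic are the ones used earlier in this post):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object TunedProducerDemo {
  def main(args: Array[String]): Unit = {
    val prop = new Properties
    prop.put("bootstrap.servers", "hdp-2:9092")
    prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    // Wait until all in-sync replicas have acknowledged each record
    prop.put("acks", "all")
    // Retry transient failures up to 3 times, waiting 500 ms between attempts
    prop.put("retries", "3")
    prop.put("retry.backoff.ms", "500")
    // Batch up to 16 KB per partition and wait up to 5 ms for a batch to fill
    prop.put("batch.size", "16384")
    prop.put("linger.ms", "5")
    // Compress batches to save bandwidth and storage
    prop.put("compression.type", "gzip")
    // Allow only one in-flight request so retries cannot reorder messages
    prop.put("max.in.flight.requests.per.connection", "1")

    val producer = new KafkaProducer[String, String](prop)
    producer.send(new ProducerRecord[String, String]("animal", "key-1", "hello kafka"))
    producer.flush()
    producer.close()
  }
}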

Explanation of consumer configuration parameters:

group.id: a string that identifies the consumer group a consumer belongs to. The same topic can be consumed independently by different groups.

auto.offset.reset: three possible values

earliest --- if a partition has a committed offset, consume from that offset; if not, consume from the beginning of the partition.
latest --- if a partition has a committed offset, consume from that offset; if not, consume only data newly produced to that partition.
none --- if every partition of the topic has a committed offset, consume from those offsets; if any partition lacks a committed offset, throw an exception.

socket.timeout.ms: default 3000, the socket timeout.

socket.buffersize: default 64 * 1024, the socket receive buffer size.

fetch.size: default 300 * 1024, controls the number of message bytes fetched per request. In 0.8.x this parameter was replaced by fetch.message.max.bytes and fetch.min.bytes.

backoff.increment.ms: default 1000, avoids repeated, frequent polling when there is no new data; if a fetch returns nothing, the next fetch is delayed by this amount.

queued.max.message.chunks: default 2, the consumer buffers fetched messages in an internal queue; this value controls the size of that queue.

auto.commit.enable: default true; if true, the consumer periodically writes each partition's offset to ZooKeeper.

auto.commit.interval.ms: default 10000, how often offsets are written to ZooKeeper.

auto.offset.reset (old consumer): default largest. When an offset is out of range, smallest automatically resets to the smallest available offset and largest resets to the largest available offset; other values are not allowed and throw an exception.

consumer.timeout.ms: default -1, meaning the consumer blocks indefinitely when there are no new messages; if set to a positive value, a timeout exception is thrown instead.

rebalance.retries.max: default 4, the maximum number of attempts during a rebalance.

max.poll.interval.ms: the maximum interval between polls. If each poll fetches a lot of records, increase this value; if poll() is not called for longer than this interval, the consumer is considered failed.

max.poll.records: default 500, the maximum number of records returned by a single call to poll().
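
Likewise, here is a sketch of a consumer that sets a few of the newer parameters above (the object name TunedConsumerDemo, the group name group02 and the concrete values are illustrative assumptions; broker and topic come from the earlier examples):

import java.time.Duration
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer

object TunedConsumerDemo {
  def main(args: Array[String]): Unit = {
    val prop = new Properties
    prop.put("bootstrap.servers", "hdp-2:9092")
    prop.put("group.id", "group02")
    prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    // Start from the earliest offset when the group has no committed offset yet
    prop.put("auto.offset.reset", "earliest")
    // Commit offsets manually after processing instead of auto-committing
    prop.put("enable.auto.commit", "false")
    // Return at most 200 records per poll and allow up to 5 minutes between polls
    prop.put("max.poll.records", "200")
    prop.put("max.poll.interval.ms", "300000")

    val consumer = new KafkaConsumer[String, String](prop)
    consumer.subscribe(Collections.singletonList("animal"))
    try {
      // Bounded loop so the example terminates; a real consumer would loop forever
      for (_ <- 1 to 10) {
        val records = consumer.poll(Duration.ofMillis(1000))
        val it = records.iterator()
        while (it.hasNext) {
          val r = it.next()
          println(s"offset=${r.offset()}, value=${r.value()}")
        }
        // Commit the offsets of the records just processed
        consumer.commitSync()
      }
    } finally {
      consumer.close()
    }
  }
}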
