一、Prerequisites
1. Install ZooKeeper
2. Configure ZooKeeper
vi conf/zoo.cfg
Set the data directory:
dataDir=/opt/modules/zookeeper
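For reference, a minimal standalone zoo.cfg might look like the following sketch; tickTime and clientPort are ZooKeeper's usual defaults, and only dataDir is specific to this setup:

```properties
# Minimal standalone ZooKeeper config (sketch)
tickTime=2000
clientPort=2181
dataDir=/opt/modules/zookeeper
```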
3. Start ZooKeeper
bin/zkServer.sh start
二、Kafka configuration and startup
1. Broker configuration
Properties file: $KAFKA_HOME/config/server.properties
broker.id=0
# The port the socket server listens on
port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=bigdata.ibeifeng.com
# A comma separated list of directories under which to store log files
log.dirs=/opt/modules/kafka_2.11-0.10.2.1/data/0
# root directory for all kafka znodes.
zookeeper.connect=hadoop:2181/kafka
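After editing, it can be handy to view only the active (non-comment, non-blank) lines of the properties file. A quick sketch, demonstrated on a throwaway sample file; in practice point grep at $KAFKA_HOME/config/server.properties:

```shell
# Create a small sample properties file to demonstrate on:
cat > /tmp/sample-server.properties <<'EOF'
# The id of the broker.
broker.id=0

log.dirs=/opt/modules/kafka_2.11-0.10.2.1/data/0
EOF

# Show only the effective settings (drop comment and blank lines):
grep -vE '^(#|$)' /tmp/sample-server.properties
```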
Notes:
(1) For kafka_2.11-0.9.0.0 and later versions, you must also configure:
listeners=PLAINTEXT://hadoop:9092
For example:
broker.id=0
############################# Socket Server Settings #############################
listeners=PLAINTEXT://hadoop:9092
# The port the socket server listens on
port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=bigdata.ibeifeng.com
# A comma separated list of directories under which to store log files
log.dirs=/opt/modules/kafka_2.11-0.10.2.1/data/0
# root directory for all kafka znodes.
zookeeper.connect=hadoop:2181/kafka
(2) For kafka_2.11-0.10.1.1, configure the following four items:
broker.id=0
## The broker id; this value must be unique within a Kafka cluster
log.dirs=/opt/modules/kafka_2.11-0.10.1.1/data/0
## Directories where Kafka stores its data on disk; multiple paths may be given, separated by commas. If the server has several disks mounted, Kafka's data can be spread across them (one path per disk), which improves read/write throughput under high concurrency and large data volumes
zookeeper.connect=hadoop01:2181/kafka10
## ZooKeeper connection settings for Kafka: the connection URL plus the root znode under which Kafka stores its metadata. Here it means: use the ZooKeeper instance on host hadoop01, port 2181, for Kafka's metadata, with /kafka10 as the root znode for that metadata. By default Kafka uses ZooKeeper's top-level directory ("/") as its root
listeners=PLAINTEXT://hadoop01:9092
## The listener address and port
(3) For kafka_2.11-2.4.1, configure the following four items:
broker.id=0
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://hadoop:9093
# A comma separated list of directories under which to store log files
log.dirs=/data/../kafka_2.11-2.4.1/data
# root directory for all kafka znodes.
zookeeper.connect=hadoop:2181/kafka241
2. Start Kafka and create a topic
(1) Start Kafka
bin/kafka-server-start.sh config/server.properties
Start in the background (verified to work):
bin/kafka-server-start.sh config/server.properties 1>/dev/null 2>&1 &
or (the -daemon flag already detaches the process, so no trailing & is needed):
bin/kafka-server-start.sh -daemon config/server.properties
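A note on the redirection above: `1>/dev/null` sends stdout to /dev/null, `2>&1` then points stderr at the same place, and the trailing `&` backgrounds the process so the shell is freed. A generic sketch of just the redirection part (the `noisy` function is a stand-in for any chatty command, not part of Kafka):

```shell
# Stand-in for a command that writes to both streams:
noisy() { echo "to stdout"; echo "to stderr" 1>&2; }

noisy 1>/dev/null        # only "to stderr" still reaches the terminal
noisy 1>/dev/null 2>&1   # both streams discarded: prints nothing
```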
(2) Create a topic
bin/kafka-topics.sh --create --zookeeper hadoop:2181/kafka08 --replication-factor 1 --partitions 1 --topic hello_topic
Note: for kafka_2.11-0.10.1.1:
[root@hadoop01 kafka_2.11-0.10.1.1]# bin/kafka-topics.sh --zookeeper hadoop01:2181/kafka10 --create --replication-factor 1 --partitions 1 --topic merchants-template
Created topic "merchants-template".
(3) List all topics
bin/kafka-topics.sh --list --zookeeper hadoop:2181/kafka08
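With `--partitions 1` as above, every message lands in the topic's single partition. More generally, a keyed producer picks a partition as hash(key) mod partition-count, so the same key always goes to the same partition. A simplified sketch of that idea (the real Kafka client hashes with murmur2; `cksum` here is only a deterministic stand-in):

```shell
# Simplified keyed partitioning: partition = hash(key) % num_partitions
num_partitions=3
key="order-42"
h=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
echo "key '$key' -> partition $(( h % num_partitions ))"
```

Because the mapping is deterministic, per-key ordering is preserved within a partition.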
三、Testing
1. Start a console producer
bin/kafka-console-producer.sh --broker-list hadoop:9092 --topic hello_topic
2. Start a console consumer
bin/kafka-console-consumer.sh --zookeeper hadoop:2181/kafka08 --topic hello_topic --from-beginning
3. For kafka_2.11-0.10.1.1, start a producer and consumer
Create the topic:
bin/kafka-topics.sh --create --topic dayu --zookeeper hadoop:2181/kafka10_01 --partitions 1 --replication-factor 1
(1) Start the producer
bin/kafka-console-producer.sh --broker-list hadoop01:9092 --topic merchants-template
(2) Start the consumer
bin/kafka-console-consumer.sh --bootstrap-server hadoop01:9092 --topic merchants-template