Kafka Cluster Setup
1. Copy the Kafka distribution to /usr/local and enter the directory:
cp -r kafka_2.11-2.1.0 /usr/local
cd /usr/local
cd kafka_2.11-2.1.0/
2. Create the Kafka log data directory:
mkdir logs
3. Enter the config directory:
cd config/
4. Edit the server.properties configuration file:
vim server.properties
broker.id=0
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://wyh:9092
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/usr/local/kafka_2.11-2.1.0/logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
zookeeper.connect=wyh:2181,ai-lab:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
auto.create.topics.enable = false
delete.topic.enable=true
Modify the parameters as follows:
Parameter | Value | Notes
broker.id | 0 | Must be different on each of the three nodes: 0, 1, 2
advertised.host.name | kafka1.sd.cn | Map the kafka1 domain in the hosts file; the other two nodes use kafka2.sd.cn and kafka3.sd.cn
advertised.port | 9092 | Default port; no change needed
log.dirs | /opt/kafka_2.11-0.10.0.1/kafka-logs-1 | Kafka log data directory
num.partitions | 40 | Number of partitions; adjust as needed
log.retention.hours | 24 | Log retention period
zookeeper.connect | kafka1.sd.cn:3181,kafka2.sd.cn:3181,kafka3.sd.cn:3181 | ZooKeeper connection addresses, comma-separated
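The presence of the key parameters above can be sanity-checked with a small script before starting the broker; a minimal sketch, assuming a POSIX shell (the example path in the comment is the install directory used earlier in this guide):

```shell
#!/bin/sh
# check_kafka_conf FILE - verify that each required parameter is set
# (uncommented) in the given server.properties file.
check_kafka_conf() {
    conf="$1"
    missing=0
    for key in broker.id listeners log.dirs num.partitions zookeeper.connect; do
        if grep -q "^${key}=" "$conf"; then
            echo "ok: $key"
        else
            echo "missing: $key"
            missing=1
        fi
    done
    return $missing
}

# Example (path is an assumption; adjust to your install):
# check_kafka_conf /usr/local/kafka_2.11-2.1.0/config/server.properties
```

Running it against each node's config catches a forgotten key before the broker refuses to start.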
Copy the kafka_2.11-2.1.0 folder to the other two nodes,
then update broker.id and advertised.listeners in each node's server.properties:
# node 2
broker.id=1
advertised.listeners=PLAINTEXT://192.168.150.35:9097
# node 3
broker.id=2
advertised.listeners=PLAINTEXT://192.168.150.34:9097
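Rather than hand-editing each copy, the per-node changes can be scripted; a sketch, assuming a Linux sed and the IDs/addresses shown above:

```shell
#!/bin/sh
# set_broker FILE ID ADDRESS - rewrite broker.id and advertised.listeners
# in a copied server.properties. Deletes any existing (uncommented)
# settings, then appends the new values, so it also works when the
# original lines are commented out or absent.
set_broker() {
    file="$1"; id="$2"; addr="$3"
    sed -i -e '/^broker.id=/d' -e '/^advertised.listeners=/d' "$file"
    printf 'broker.id=%s\nadvertised.listeners=PLAINTEXT://%s\n' "$id" "$addr" >> "$file"
}

# Example for the two extra nodes (values from the steps above):
# set_broker server.properties 1 192.168.150.35:9097
# set_broker server.properties 2 192.168.150.34:9097
```

Note that `sed -i` without a suffix argument is GNU sed syntax; on BSD/macOS use `sed -i ''`.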
5. Start the Kafka cluster and test:
====================================== Kafka commands ============================
Start: ./kafka-server-start.sh ../config/server.properties &
Producer and consumer:
./kafka-topics.sh --zookeeper 127.0.0.1:2181 --list   # list topics on this host
Create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
List topics
bin/kafka-topics.sh --list --zookeeper localhost:2181
Producer: send messages
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
./kafka-console-producer.sh --broker-list weekend01:9092 --topic test   # test is the topic name
>This is a message
>This is another message
Start a client (consumer)
./kafka-console-consumer.sh --bootstrap-server weekend01:9092 --topic test
# With the commands above, the consumer receives the messages sent by the producer.
# To consume from the beginning of the topic, add the --from-beginning flag:
kafka-console-consumer.sh --bootstrap-server node01:9092 --from-beginning --topic my-kafka-topic
./kafka-console-consumer.sh --bootstrap-server weekend01:9092 --from-beginning --topic mygirls
## Consumes messages from all partitions of the given topic, starting at the earliest valid offset.
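The producer/consumer round trip above can be wrapped in a small smoke-test helper. A sketch with the producer and consumer commands injected as parameters, so the helper itself contains no cluster-specific assumptions; the commented example commands (host weekend01, topic test, and the `timeout`/`--max-messages` flags) are assumptions to adapt to your setup:

```shell
#!/bin/sh
# kafka_smoke_test PRODUCE_CMD CONSUME_CMD
# Sends one unique message via PRODUCE_CMD (which reads stdin) and
# checks that CONSUME_CMD (which prints messages) reads it back.
kafka_smoke_test() {
    produce_cmd="$1"
    consume_cmd="$2"
    msg="smoke-$$"

    printf '%s\n' "$msg" | $produce_cmd
    got=$($consume_cmd)

    if [ "$got" = "$msg" ]; then
        echo "OK: round trip succeeded"
        return 0
    else
        echo "FAIL: expected '$msg', got '$got'"
        return 1
    fi
}

# Example against a real cluster (commands are assumptions):
# kafka_smoke_test \
#   "./kafka-console-producer.sh --broker-list weekend01:9092 --topic test" \
#   "timeout 10 ./kafka-console-consumer.sh --bootstrap-server weekend01:9092 --topic test --max-messages 1"
```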
6. Other commands
a. Create a topic
kafka-topics.sh --create --zookeeper weekend01:2181 --replication-factor 1 --partitions 1 --topic my-kafka-topic
b. List topics
./kafka-topics.sh --list --zookeeper weekend01:2181
c. To view detailed information about a topic, use the describe command
kafka-topics.sh --describe --zookeeper node1:2181 --topic test-topic
d. If no topic is specified, information for all topics is shown
kafka-topics.sh --describe --zookeeper node1:2181
e. Delete a topic
kafka-topics.sh --delete --zookeeper node1:2181 --topic my-kafka-topic
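The create/list commands above can be combined into an idempotent helper that only creates a topic when it does not already exist. A sketch with the list and create commands injected as parameters; the commented example uses the weekend01 host from earlier and is an assumption:

```shell
#!/bin/sh
# ensure_topic NAME LIST_CMD CREATE_CMD
# Runs CREATE_CMD only if LIST_CMD does not already report NAME.
ensure_topic() {
    name="$1"; list_cmd="$2"; create_cmd="$3"
    if $list_cmd | grep -qx "$name"; then
        echo "exists: $name"
    else
        $create_cmd
        echo "created: $name"
    fi
}

# Example against a real cluster (commands are assumptions):
# ensure_topic my-kafka-topic \
#   "./kafka-topics.sh --list --zookeeper weekend01:2181" \
#   "./kafka-topics.sh --create --zookeeper weekend01:2181 --replication-factor 1 --partitions 1 --topic my-kafka-topic"
```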