1. Download Kafka
The downloaded archive is: apache-storm-0.9.2-incubating.tar.gz
2. Move the archive to /home/hdfs, extract it, and rename the directory
Run the following commands:
tar -zxvf apache-storm-0.9.2-incubating.tar.gz
mv apache-storm-0.9.2-incubating kafka
cd kafka
Deploy it to the slave1, slave2 and slave3 nodes respectively.
3. Add environment variables for the kafka user
Run the following command:
vim .profile
Add the following lines to .profile:
# KAFKA
export KAFKA_HOME=/home/hdfs/kafka
export PATH=$PATH:$KAFKA_HOME/bin
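The edited profile must be re-read before the variables take effect in the current shell. A minimal check, assuming the two lines above (after editing the file you would normally just run `source ~/.profile`):

```shell
# The same two lines from .profile, applied to the current shell.
export KAFKA_HOME=/home/hdfs/kafka
export PATH=$PATH:$KAFKA_HOME/bin

# Verify that the variables took effect.
echo "KAFKA_HOME=$KAFKA_HOME"
case ":$PATH:" in
  *":$KAFKA_HOME/bin:"*) echo "kafka bin is on PATH" ;;
  *)                     echo "kafka bin is NOT on PATH" ;;
esac
```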
4. Edit the Kafka configuration files
4.1 Configure server.properties
On the slave1 node, configure server.properties and consumer.properties as follows:
config/server.properties:
zookeeper.connect=slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka
broker.id=0
port=9092
host.name=slave1.hadoop
log.dirs=/home/hdfs/kafka/logs/kafka-logs
config/consumer.properties:
zookeeper.connect=slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka
On the slave2 node, configure server.properties and consumer.properties as follows:
config/server.properties:
zookeeper.connect=slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka
broker.id=1
port=9093
host.name=slave2.hadoop
log.dirs=/home/hdfs/kafka/logs/kafka-logs
config/consumer.properties:
zookeeper.connect=slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka
On the slave3 node, configure server.properties and consumer.properties as follows:
config/server.properties:
zookeeper.connect=slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka
broker.id=2
port=9094
host.name=slave3.hadoop
log.dirs=/home/hdfs/kafka/logs/kafka-logs
config/consumer.properties:
zookeeper.connect=slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka
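The three server.properties files differ only in broker.id, port and host.name, so they can be stamped out from one loop instead of edited by hand three times. A sketch (the output directory and file naming are illustrative):

```shell
# Generate one server.properties per broker; only broker.id, port and
# host.name vary between nodes. Output paths are illustrative.
zk='slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka'
outdir=$(mktemp -d)

for spec in slave1.hadoop:0:9092 slave2.hadoop:1:9093 slave3.hadoop:2:9094; do
  host=${spec%%:*}; rest=${spec#*:}          # split "host:id:port"
  id=${rest%%:*};   port=${rest#*:}
  cat > "$outdir/server.properties.$host" <<EOF
zookeeper.connect=$zk
broker.id=$id
port=$port
host.name=$host
log.dirs=/home/hdfs/kafka/logs/kafka-logs
EOF
done

grep broker.id "$outdir/server.properties.slave2.hadoop"
```

Each generated file then replaces config/server.properties on the matching node; consumer.properties is identical on every node, so it can simply be copied as-is.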
Note:
The /kafka chroot path must be created in ZooKeeper in advance (e.g. in zkCli.sh: create /kafka ""), otherwise the brokers will fail to start with an error.
5. Start the Kafka service
With ZooKeeper already running,
start the Kafka service on slave1, slave2 and slave3.
The command is:
bin/kafka-server-start.sh config/server.properties
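The command above has to be run once on each of the three nodes. A hedged helper for doing that from one machine, shown here in dry-run form (it only prints what would be executed; the SSH variant is an assumption that requires passwordless SSH between the nodes):

```shell
# Dry run: print the start command for every broker node.
# Swap `echo "$node:"` for `ssh "$node"` to actually launch the brokers
# (assumes passwordless SSH and the same /home/hdfs/kafka layout everywhere).
for node in slave1.hadoop slave2.hadoop slave3.hadoop; do
  echo "$node: cd /home/hdfs/kafka && nohup bin/kafka-server-start.sh config/server.properties > logs/server.out 2>&1 &"
done
```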
6. Testing Kafka
6.1 Create a topic
Create a topic with the following command:
bin/kafka-topics.sh --create --zookeeper slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka --replication-factor 3 --partitions 1 --topic my-replicated-topic
List topics:
bin/kafka-topics.sh --list --zookeeper slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka --topic my-replicated-topic
Check the topic's status:
bin/kafka-topics.sh --describe --zookeeper slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka --topic my-replicated-topic
The output looks like:
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
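Reading this output: Leader is the broker currently serving reads and writes for the partition, Replicas is the full replica assignment, and Isr ("in-sync replicas") is the subset that is alive and caught up. The leader id can also be pulled out of such a line mechanically; a small sketch over the sample line above (with a live cluster you would pipe the --describe output in instead):

```shell
# Extract the partition leader id from a kafka-topics.sh --describe line.
# The sample line is copied from the output above.
line='Topic: my-replicated-topic Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0'
leader=$(printf '%s\n' "$line" |
  awk '{ for (i = 1; i <= NF; i++) if ($i == "Leader:") print $(i + 1) }')
echo "leader broker id: $leader"
```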
6.2 Send & receive messages
Send a message from the slave1 node:
bin/kafka-console-producer.sh --broker-list slave1.hadoop:9092 --topic my-replicated-topic
Type a message, for example:
This is a message
Consume it on slave2:
bin/kafka-console-consumer.sh --zookeeper slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka --topic my-replicated-topic --from-beginning
The previously sent message is printed: This is a message
6.3 Kill a follower broker
The topic status above shows that the leader's broker id is 1.
Kill a non-leader broker, broker 2.
Check the topic again:
bin/kafka-topics.sh --describe --zookeeper slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka --topic my-replicated-topic
The output looks like:
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,0
Now only brokers 1 and 0 are alive.
Test: messages sent by the producer are still received normally by the consumer.
6.4 Now kill the leader broker
Kill the leader, broker 1.
Check the topic again:
bin/kafka-topics.sh --describe --zookeeper slave1.hadoop:2181,slave2.hadoop:2181,slave3.hadoop:2181/kafka --topic my-replicated-topic
The output looks like:
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 0 Replicas: 1,2,0 Isr: 0
A moment after the leader broker is killed, broker 0 becomes the new leader.
Test: messages sent by the producer are still received normally by the consumer.
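The two kill tests boil down to comparing Replicas against Isr: any replica listed in Replicas but missing from Isr is dead or lagging. A small sketch using the values from the final output above (Replicas: 1,2,0 and Isr: 0):

```shell
# Report replicas that have dropped out of the ISR. Sample values taken
# from the describe output after brokers 2 and 1 were killed.
replicas='1,2,0'
isr='0'
for r in $(printf '%s' "$replicas" | tr ',' ' '); do
  case ",$isr," in
    *",$r,"*) ;;                              # still in sync
    *) echo "broker $r is out of the ISR" ;;
  esac
done
```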