Original post; please credit the source when reposting: http://agilestyle.iteye.com/blog/2292260
Preparation
Kafka Configuration
Create the Kafka log directory
mkdir kafkaLogs
Edit server.properties in the config directory
vi server.properties
Set broker.id (0 on hadoop-0000, 1 on hadoop-0001, 2 on hadoop-0002)
Set delete.topic.enable to true so that topic deletion actually takes effect
Set log.dirs
log.dirs=/home/hadoop/app/kafka_2.11-0.9.0.1/kafkaLogs
Set zookeeper.connect to the custom ZooKeeper ensemble
zookeeper.connect=hadoop-0000:2181,hadoop-0001:2181,hadoop-0002:2181
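Taken together, the edited portion of server.properties on hadoop-0000 would look roughly like this (the other two brokers differ only in broker.id):

```properties
broker.id=0
delete.topic.enable=true
log.dirs=/home/hadoop/app/kafka_2.11-0.9.0.1/kafkaLogs
zookeeper.connect=hadoop-0000:2181,hadoop-0001:2181,hadoop-0002:2181
```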
Save and exit, then scp the install directory to the other two servers (hadoop-0001 and hadoop-0002), remembering to change broker.id on each
scp -r kafka_2.11-0.9.0.1/ hadoop-0001:/home/hadoop/app/
scp -r kafka_2.11-0.9.0.1/ hadoop-0002:/home/hadoop/app/
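The per-host broker.id fix can also be scripted with sed instead of opening vi on each server. A minimal sketch of the substitution itself, run here against a throwaway sample file so the command can be verified; on the real servers you would point it at config/server.properties (for example over ssh):

```shell
# Sample file standing in for config/server.properties on a remote host
printf 'broker.id=0\ndelete.topic.enable=true\n' > /tmp/server.properties.sample

# Rewrite broker.id in place -- on hadoop-0001 the new id would be 1
sed -i 's/^broker.id=.*/broker.id=1/' /tmp/server.properties.sample

# Confirm the change
grep '^broker.id=' /tmp/server.properties.sample
```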
Start the Kafka Cluster
First start the ZooKeeper cluster by running the following on all three servers:
zkServer.sh start
Then start Kafka by running the following on all three servers:
./kafka-server-start.sh -daemon ../config/server.properties
or alternatively:
./kafka-server-start.sh ../config/server.properties 1>/dev/null 2>&1 &
Once the brokers are up, create a topic on hadoop-0000:
./kafka-topics.sh --create --zookeeper hadoop-0000:2181 --replication-factor 3 --partitions 1 --topic shuguo
List topics to verify the creation:
./kafka-topics.sh --list --zookeeper hadoop-0000:2181
./kafka-topics.sh --list --zookeeper hadoop-0001:2181
./kafka-topics.sh --list --zookeeper hadoop-0002:2181
Describe the topic:
./kafka-topics.sh --describe --zookeeper hadoop-0002:2181 --topic shuguo
Start a producer on hadoop-0000:
./kafka-console-producer.sh --broker-list hadoop-0000:9092 --topic shuguo
Start a consumer on hadoop-0001:
./kafka-console-consumer.sh --zookeeper hadoop-0001:2181 --topic shuguo --from-beginning
Type messages into the producer on hadoop-0000
They appear in the consumer on hadoop-0001