1. Environment (CentOS 7):
zookeeper-3.4.9 + kafka_2.11-0.10.1.0 + apache-flume-1.7.0
2. ZooKeeper cluster
Download and install:
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz
tar zxvf zookeeper-3.4.9.tar.gz
mv zookeeper-3.4.9 /opt/zookeeper
Edit the configuration file:
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/zookeeper
dataLogDir=/var/log/zookeeper
clientPort=2181
server.1=192.168.35.129:2888:3888
server.2=192.168.35.130:2888:3888
server.3=192.168.35.131:2888:3888
Create the myid file (content: 1):
cd /var/zookeeper
echo 1 > myid
Copy the configured files to the other two machines:
scp -r /opt/zookeeper root@192.168.35.130:/opt/zookeeper (root is the user name)
scp /var/zookeeper/myid root@192.168.35.130:/var/zookeeper/myid (then change the myid content to 2)
scp -r /opt/zookeeper root@192.168.35.131:/opt/zookeeper (root is the user name)
scp /var/zookeeper/myid root@192.168.35.131:/var/zookeeper/myid (then change the myid content to 3)
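Rather than editing myid by hand on every machine, the id can be derived from zoo.cfg itself, since each server.N line already pairs an id with an address. A minimal sketch, run here against a throwaway copy of the ensemble lines (MY_IP stands in for the node's real address):

```shell
# Scratch copy of the ensemble lines from zoo.cfg, so the sketch runs anywhere.
WORK=$(mktemp -d)
cat > "$WORK/zoo.cfg" <<'EOF'
server.1=192.168.35.129:2888:3888
server.2=192.168.35.130:2888:3888
server.3=192.168.35.131:2888:3888
EOF

MY_IP=192.168.35.130   # stand-in for this node's own address

# Keep the server.N line that names this host, then strip it down to N.
ID=$(grep "=${MY_IP}:" "$WORK/zoo.cfg" | cut -d. -f2 | cut -d= -f1)

echo "$ID" > "$WORK/myid"   # in practice the target is /var/zookeeper/myid
cat "$WORK/myid"            # prints 2
```

On a real node MY_IP would come from the host itself (e.g. hostname -I), and the write would target /var/zookeeper/myid.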
Stop the firewall, then start ZooKeeper on each node:
systemctl stop firewalld
/opt/zookeeper/bin/zkServer.sh start
Verify:
telnet 192.168.35.96 2181
Trying 192.168.35.96...
Connected to 192.168.35.96.
Escape character is '^]'.
stat
Zookeeper version: 3.4.9-1757313, built on 08/23/2016 06:50 GMT
Clients:
/192.168.35.130:43953[1](queued=0,recved=838,sent=838)
/192.168.35.129:38408[0](queued=0,recved=1,sent=0)
/192.168.35.130:43961[1](queued=0,recved=8901,sent=8901)
/192.168.35.130:43954[1](queued=0,recved=153,sent=153)
Latency min/avg/max: 0/1/99
Received: 9894
Sent: 9893
Connections: 4
Outstanding: 0
Zxid: 0x500001010
Mode: follower
Node count: 158
Connection closed by foreign host.
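In the stat reply above, the Mode line is the quick health signal: exactly one node in the ensemble should report leader and the rest follower. A small awk filter pulls it out; the sample reply is inlined below so the sketch runs anywhere, but against a live node the same pipeline can be fed with `echo stat | nc 192.168.35.129 2181` (assuming nc is installed):

```shell
# A captured fragment of the stat reply; awk keeps only the Mode value.
STAT_REPLY='Zookeeper version: 3.4.9-1757313, built on 08/23/2016 06:50 GMT
Latency min/avg/max: 0/1/99
Mode: follower
Node count: 158'

printf '%s\n' "$STAT_REPLY" | awk -F': ' '$1 == "Mode" {print $2}'   # prints follower
```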
3. Kafka cluster
wget http://mirrors.hust.edu.cn/apache/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
Extract it to /opt/kafka, following the same steps as above.
The configuration file is as follows:
server.properties
broker.id=0
num.network.threads=3
num.io.threads=8
host.name=192.168.35.129
port=9092
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=2
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.35.129:2181,192.168.35.130:2181,192.168.35.131:2181
zookeeper.connection.timeout.ms=6000
Copy the same configuration to the other machines, changing broker.id and host.name so each broker is unique.
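Note that broker.id (and host.name here) must differ on every broker, or the second broker will fail to register with the cluster. A sed sketch of that per-node stamping, run against a throwaway copy of the two keys that change:

```shell
# Scratch copy of the two per-node keys from server.properties.
WORK=$(mktemp -d)
cat > "$WORK/server.properties" <<'EOF'
broker.id=0
host.name=192.168.35.129
EOF

# On the second machine (192.168.35.130), stamp in its own id and address.
sed -i 's/^broker\.id=.*/broker.id=1/; s/^host\.name=.*/host.name=192.168.35.130/' \
    "$WORK/server.properties"

cat "$WORK/server.properties"
# broker.id=1
# host.name=192.168.35.130
```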
Start Kafka:
bin/kafka-server-start.sh -daemon config/server.properties
Create a topic:
bin/kafka-topics.sh --create --zookeeper 192.168.35.129:2181 --replication-factor 2 --partitions 2 --topic test
Send messages:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Consume messages:
bin/kafka-console-consumer.sh --zookeeper 192.168.35.129:2181 --topic test --from-beginning
4. Flume
wget http://mirrors.hust.edu.cn/apache/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz
Extract it to /opt/flume.
Configuration files:
producer.conf
producer.sources = s1
producer.channels = c1
producer.sinks = sk1
producer.sources.s1.type = netcat
producer.sources.s1.bind = 192.168.35.96
producer.sources.s1.port = 9527
producer.sources.s1.channels = c1
producer.sinks.sk1.type = org.apache.flume.sink.kafka.KafkaSink
producer.sinks.sk1.topic = test
producer.sinks.sk1.brokerList = 192.168.35.96:9092,192.168.35.130:9092
producer.sinks.sk1.requiredAcks = 1
producer.sinks.sk1.batchSize = 20
producer.sinks.sk1.channel = c1
producer.channels.c1.type = memory
producer.channels.c1.capacity = 1000
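One property worth pinning in the memory channel above is transactionCapacity, the upper bound on how many events a single source put or sink take may move in one transaction. Flume 1.7's default of 100 already covers the sink's batchSize of 20, but stating it next to capacity keeps the relationship visible (a suggested addition, not part of the original config):

```
producer.channels.c1.transactionCapacity = 100
```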
consumer.conf
consumer.sources = s
consumer.channels = c
consumer.sinks = r
consumer.sources.s.type = org.apache.flume.source.kafka.KafkaSource
consumer.sources.s.zookeeperConnect = 192.168.35.96:2181,192.168.35.130:2181,192.168.16.219:2181
consumer.sources.s.topic = test
consumer.sources.s.groupId = flume
consumer.sources.s.channels = c
consumer.sinks.r.type = logger
consumer.sinks.r.channel = c
consumer.channels.c.type = memory
consumer.channels.c.capacity = 100
Copy zookeeper-3.4.9.jar from the ZooKeeper directory into flume/lib:
cp /opt/zookeeper/zookeeper-3.4.9.jar /opt/flume/lib
Start the two agents:
bin/flume-ng agent --conf conf --conf-file conf/producer.conf --name producer -Dflume.root.logger=INFO,console
bin/flume-ng agent --conf conf --conf-file conf/consumer.conf --name consumer -Dflume.root.logger=INFO,console
Verify (connect to the address and port the netcat source binds):
telnet 192.168.35.96 9527