Testing Flink Real-Time Streaming, Part 1: Setting Up a ZK + Kafka Cluster

I. Cluster host list:

10.110.169.104     Kafka + ZooKeeper  (internal IP: 1.17.1.45)
10.110.169.75      Kafka + ZooKeeper  (internal IP: 1.17.1.115)
10.110.169.76      Kafka + ZooKeeper  (no internal IP configured yet)

II. Download the Kafka/ZooKeeper packages:

#wget http://archive.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
#wget https://archive.apache.org/dist/kafka/0.8.2.2/kafka_2.11-0.8.2.2.tgz
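After downloading, unpack both archives; the rest of this walkthrough assumes they are extracted under your home directory:

$ tar -xzf zookeeper-3.4.6.tar.gz
$ tar -xzf kafka_2.11-0.8.2.2.tgz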

III. Configure the ZooKeeper cluster:

1. Log in to one of the ZooKeeper cluster nodes (call it node 1), go into the zookeeper-3.4.6/conf/ directory, and copy zoo_sample.cfg to zoo.cfg. In that file, add the list of all ZK cluster nodes as shown below; the entry for the local node itself may be written as 0.0.0.0. Note that the 1 in server.1 is the ZK node's id and must match the id stored in the myid file, which is covered below.

# Add the cluster server list below
server.1=0.0.0.0:2888:3888
server.2=10.110.169.75:2888:3888
server.3=10.110.169.104:2888:3888
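For reference, a minimal complete zoo.cfg on node 1 might look like this; apart from the server list, the values are the defaults shipped in zoo_sample.cfg:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=0.0.0.0:2888:3888
server.2=10.110.169.75:2888:3888
server.3=10.110.169.104:2888:3888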

2. Create the dataDir directory for ZK; zoo.cfg defaults it to /tmp/zookeeper.

# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper

3. Create the myid file for ZK node 1; the id 1 below must match the server.1 entry in zoo.cfg.

$ mkdir -p /tmp/zookeeper
$ touch /tmp/zookeeper/myid
$ echo 1 > /tmp/zookeeper/myid

4. Copy the zookeeper directory to the other two nodes, node 2 and node 3:

$ scp -r zookeeper-3.4.6/ 10.110.169.75:~/
$ scp -r zookeeper-3.4.6/ 10.110.169.104:~/

On node 2 and node 3, create the /tmp/zookeeper directory and the myid file:

mkdir /tmp/zookeeper
touch /tmp/zookeeper/myid
# on node 2
echo 2 > /tmp/zookeeper/myid
# on node 3
echo 3 > /tmp/zookeeper/myid

5. Disable the firewall and iptables rules on every node in the ZK cluster; on Ubuntu:

# Enable/disable the firewall: sudo ufw enable|disable
# Here we disable it:
sudo ufw disable
Firewall stopped and disabled on system startup

iptables:

# Ubuntu has no single command to turn iptables off,
# so set the default policies to ACCEPT instead:
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT

# Alternatively, remove iptables entirely:
# apt-get remove iptables
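You can confirm the firewall state afterwards:

$ sudo ufw status
Status: inactive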

6. Start the ZK server on each node of the cluster; ZooKeeper automatically elects one of the nodes as leader.

$ ./bin/zkServer.sh start

You can query a node's status with the following command (it reports leader or follower):

$ ./bin/zkServer.sh status
JMX enabled by default
Using config: /home/flink/zookeeper/bin/../conf/zoo.cfg
Mode: leader
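As an extra sanity check, you can connect with the zkCli.sh client that ships with ZooKeeper and list the root znodes; on a fresh cluster only the built-in zookeeper znode should be present:

$ ./bin/zkCli.sh -server 10.110.169.75:2181
[zk: 10.110.169.75:2181(CONNECTED) 0] ls /
[zookeeper]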

At this point, your ZooKeeper cluster is installed and running.

IV. Configure the Kafka cluster:

1. On ZK node 1, go into the extracted kafka directory and edit config/server.properties:

# The id of this broker; it must be unique within the cluster
broker.id=0

# The port the socket server listens on
port=9092

advertised.host.name=10.110.169.76
listeners=PLAINTEXT://10.110.169.76:9092
advertised.listeners=PLAINTEXT://10.110.169.76:9092

# The number of threads handling network requests
num.network.threads=192

# The number of threads doing disk I/O
num.io.threads=24

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

# A comma separated list of directories under which to store log files
#log.dirs=/tmp/kafka-logs
log.dirs=/mnt/hdd2/kafka,/mnt/hdd3/kafka,/mnt/hdd4/kafka,/mnt/hdd5/kafka,/mnt/hdd6/kafka,/mnt/hdd7/kafka,/mnt/hdd8/kafka,/mnt/hdd9/kafka,/mnt/hdd10/kafka,/mnt/ssd1/kafka,/mnt/ssd2/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=192
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false


# root directory for all kafka znodes.
zookeeper.connect=10.110.169.76:2181,10.110.169.75:2181,10.110.169.104:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=300000
zookeeper.session.timeout.ms=300000
auto.create.topics.enable=true
delete.topic.enable=true


Pay particular attention to the broker.id, listeners, and zookeeper.connect settings.

2. Copy the kafka directory to the other two nodes of the Kafka cluster.

3. On the other two nodes, edit config/server.properties and adjust broker.id and the listener addresses accordingly, as sketched below.
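Assuming node 2 is 10.110.169.75 and node 3 is 10.110.169.104, the per-node settings might look like this (the broker ids are an assumed numbering; they only need to be unique within the cluster):

# on node 2 (10.110.169.75)
broker.id=1
advertised.host.name=10.110.169.75
listeners=PLAINTEXT://10.110.169.75:9092
advertised.listeners=PLAINTEXT://10.110.169.75:9092

# on node 3 (10.110.169.104)
broker.id=2
advertised.host.name=10.110.169.104
listeners=PLAINTEXT://10.110.169.104:9092
advertised.listeners=PLAINTEXT://10.110.169.104:9092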

4. On each node, go into the kafka directory and start the Kafka service:

bin/kafka-server-start.sh config/server.properties &
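Once all three brokers are up, each registers itself in ZooKeeper, which you can confirm from the zookeeper directory on any node (the ids shown assume the broker.id numbering above):

$ ./bin/zkCli.sh -server 10.110.169.75:2181
[zk: 10.110.169.75:2181(CONNECTED) 0] ls /brokers/ids
[0, 1, 2]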

V. Testing the ZooKeeper + Kafka cluster

Create a topic:

$ bin/kafka-topics.sh --create --zookeeper 10.110.169.75:2181,10.110.169.76:2181,10.110.169.104:2181 --replication-factor 3 --partitions 3 --topic test

List the topics:

$ bin/kafka-topics.sh --list --zookeeper 10.110.169.75:2181,10.110.169.76:2181,10.110.169.104:2181
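To verify that messages actually flow end to end, you can use the console producer and consumer bundled with Kafka 0.8.2.2, in two terminals; note that the 0.8.x console consumer connects through ZooKeeper rather than the brokers:

# terminal 1: each line you type is sent as one message
$ bin/kafka-console-producer.sh --broker-list 10.110.169.75:9092 --topic test

# terminal 2: read the topic from the beginning
$ bin/kafka-console-consumer.sh --zookeeper 10.110.169.75:2181 --topic test --from-beginning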

Reference: ZooKeeper + Kafka cluster deployment

 
