Setting Up a ZooKeeper Cluster and a Kafka Cluster

I. ZooKeeper cluster

1. Software environment

Three servers are used for this test. (A ZooKeeper ensemble can serve clients only while more than half of its members are alive, so clusters are normally built with an odd number of servers: three nodes tolerate one failure, while four nodes still tolerate only one.)
11.11.11.11  server1
22.22.22.22  server2
33.33.33.33  server3

2. Installation and configuration

(1) After installing Java and the other prerequisites, install ZooKeeper itself. The installation is straightforward and is not covered here.
(2) Configuration:
Edit the configuration file zoo.cfg:
tickTime=2000

initLimit=10

syncLimit=5

dataDir=/apps/dat/zookeeper

dataLogDir=/apps/logs/zookeeper

#maxClientCnxns=60
maxClientCnxns=2000

#minSessionTimeout=4000
#maxSessionTimeout=40000
#zookeeper cluster
# the port clients connect to (2181, as used throughout this article)
clientPort=2181

#zookeeper cluster
# server.1 — the "1" is this server's ID (any number will do); it marks which
# server this is within the ensemble and must match the contents of the myid
# file in the snapshot directory.
# 11.11.11.11 is the server's IP within the cluster. The first port carries
# leader/follower traffic; the second is used for leader election, both when the
# cluster first starts and whenever the current leader dies. Make sure these
# ports do not conflict with ZooKeeper's client port.
server.1=11.11.11.11:12888:5181
server.2=22.22.22.22:12888:5181
server.3=33.33.33.33:12888:5181

Create the myid file:

#server1
echo "1" > /apps/dat/zookeeper/myid
#server2
echo "2" > /apps/dat/zookeeper/myid
#server3
echo "3" > /apps/dat/zookeeper/myid
(3) Notes on the configuration files:

a. myid (and server.myid) — kept in the snapshot directory, this file identifies the local server; it is how the members of the ZooKeeper ensemble recognize one another.

b. zoo.cfg — the ZooKeeper configuration file, located in the conf directory.

c. log4j.properties — ZooKeeper's log configuration, also in the conf directory. Like most Java programs, ZooKeeper manages its logging with log4j.

d. zkServer.sh — the main management script.
e. zkEnv.sh — the main environment setup; it configures the environment variables used when the ZooKeeper service starts.
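
For example, zkEnv.sh in the 3.4.x line reads ZOO_LOG_DIR to decide where logs go, so it can be overridden before starting (variable names may differ in other versions; check your zkEnv.sh):

# send ZooKeeper's logs to the same tree as the transaction logs (assumed path)
export ZOO_LOG_DIR=/apps/logs/zookeeper
./zkServer.sh start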

3. Starting the ZooKeeper service

(1) In the installation's bin directory, run (on all three servers):
./zkServer.sh start

(2) Check the service status:
# check the server status
./zkServer.sh status

(Watch out for port conflicts; if a port is already in use, the status check will report an error.)
./zkServer.sh status
JMX enabled by default
Using config: /apps/svr/zookeeper/zookeeper-3.4.6/bin/../conf/zoo.cfg  # the config file in use
Mode: follower  # whether this node is the leader or a follower
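
The ensemble can also be probed remotely with ZooKeeper's four-letter commands. A quick sketch, assuming the client port 2181 above and that nc (netcat) is installed:

echo ruok | nc 11.11.11.11 2181   # a healthy server replies "imok"
echo stat | nc 11.11.11.11 2181   # prints version, client connections, and Mode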

II. Kafka cluster

1. Software environment

Kafka version: kafka_2.11-0.9.0.1.tgz
Choose the Binary download (the Source download must be compiled before it can be used).
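
A download-and-unpack sketch, assuming the standard Apache archive URL for this release and /apps/svr as the install root (adjust both to your environment):

cd /apps/svr
wget https://archive.apache.org/dist/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz
tar -xzf kafka_2.11-0.9.0.1.tgz
cd kafka_2.11-0.9.0.1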

2. Configuration changes

(1) Go to the config directory:
-rw-r--r-- 1 apps apps 1199 Sep  3  2015 consumer.properties
-rw-r--r-- 1 apps apps 3846 Sep  3  2015 log4j.properties
-rw-r--r-- 1 apps apps 2228 Sep  3  2015 producer.properties
-rw-r--r-- 1 apps apps 5725 Jan 24 15:34 server.properties
-rw-r--r-- 1 apps apps 3325 Sep  3  2015 test-log4j.properties
-rw-r--r-- 1 apps apps  993 Sep  3  2015 tools-log4j.properties
-rw-r--r-- 1 apps apps 1023 Sep  3  2015 zookeeper.properties
(2) The file to edit is server.properties:
broker.id=1

############################# Socket Server Settings #############################
# The port the socket server listens on
port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=11.11.11.11

# The number of threads handling network requests
num.network.threads=3
 
# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
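# NOTE: /tmp is typically wiped on reboot; in production point log.dirs at persistent storage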

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Log Flush Policy #############################
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The minimum age of a log file to be eligible for deletion
log.retention.hours=168
message.max.bytes=5242880
default.replication.factor=2
replica.fetch.max.bytes=5242880

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according 
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

auto.leader.rebalance.enable=true
############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=11.11.11.11:2181,22.22.22.22:2181,33.33.33.33:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
A few points to note:
a. broker.id plays the same role as ZooKeeper's myid: it uniquely identifies this machine within the cluster, and every server must use a different value (one way to set it per machine is sketched after this list).
b. host.name must be set; otherwise creating a topic and producing to it fails with: WARN Failed to send producer request with correlation id 5 to broker 2 with data for partitions [heshan,0] (kafka.producer.async.DefaultEventHandler)
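
Since broker.id and host.name are the only per-machine lines, one workable approach is to copy the same server.properties to every broker and patch just those two lines in place. A minimal sketch with GNU sed, run from the Kafka root on server2 (22.22.22.22) as an example:

sed -i 's/^broker.id=.*/broker.id=2/' config/server.properties
sed -i 's/^host.name=.*/host.name=22.22.22.22/' config/server.properties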

3. Starting the Kafka cluster
(1) cd into the installation's bin directory and run:
# -daemon already puts the broker in the background, so no trailing & is needed
./kafka-server-start.sh -daemon ../config/server.properties
(2) Check whether the services are running:
# run the jps command
20348 Jps
4233 QuorumPeerMain
18991 Kafka
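
jps only shows the local JVMs (QuorumPeerMain is ZooKeeper, Kafka is the broker). To confirm that every broker has registered with the ensemble, Kafka's bin directory also ships zookeeper-shell.sh, which can run a single command and exit; a sketch:

./zookeeper-shell.sh 11.11.11.11:2181 ls /brokers/ids
# with all three brokers up, the last line should list their ids, e.g. [1, 2, 3]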
(3) Create a topic and test the producer and consumer:
# create a topic
./kafka-topics.sh --create --zookeeper 11.11.11.11:2181 --replication-factor 2 --partitions 1 --topic heshan
# flag meanings
--replication-factor 2   # keep two replicas of the data
--partitions 1           # create one partition
--topic heshan           # the topic is named heshan

Create a producer on one server:
# start a console producer (the publisher)
./kafka-console-producer.sh --broker-list 11.11.11.11:9092 --topic heshan

Create a consumer on another server:
./kafka-console-consumer.sh --zookeeper 22.22.22.22:2181 --topic heshan --from-beginning
(4) Test results:
Producer:
./kafka-console-producer.sh --broker-list 11.11.11.11:9092 --topic heshan
[2017-01-24 16:22:57,969] WARN Property topic is not valid (kafka.utils.VerifiableProperties)

from producter
Consumer:
./kafka-console-consumer.sh --zookeeper 22.22.22.22:2181 --topic heshan --from-beginning

from producter
The line typed at the producer ("from producter") arrives at the consumer, confirming that the cluster passes messages end to end.
(5) View topic status:
./kafka-topics.sh --list --zookeeper localhost:2181
# lists every topic that has been created
# describe one topic
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic heshan
Topic:heshan    PartitionCount:1        ReplicationFactor:2     Configs:
        Topic: heshan   Partition: 0    Leader: 2       Replicas: 2,3   Isr: 2,3
Here Leader is the broker id currently serving the partition, Replicas lists the brokers assigned copies of it, and Isr ("in-sync replicas") is the subset of Replicas currently caught up with the leader.

III. The Kafka cluster and the ZooKeeper cluster

After connecting to ZooKeeper with the command-line client, the ls command shows the following nodes:

[zk: 127.0.0.1:2181(CONNECTED) 0] ls /
[consumers, controller_epoch, zookeeper, admin, config, controller, brokers]
This is the znode tree Kafka maintains in ZooKeeper; for a detailed description of each node, see: http://blog.csdn.net/lizhitao/article/details/23744675
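
A few zkCli.sh commands for poking at the nodes Kafka creates (paths as listed above; the exact contents vary by broker and version):

./zkCli.sh -server 127.0.0.1:2181
ls /brokers/ids       # one ephemeral znode per live broker, e.g. [1, 2, 3]
ls /brokers/topics    # one znode per topic, e.g. [heshan]
get /brokers/ids/1    # JSON describing that broker's host and port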




