Installing the latest ZooKeeper cluster on multiple servers at once with Xshell 7

1. Environment preparation:

      Hostname    Host IP          Node      Ports (intra-cluster sync | leader election | client)
      docker0     192.168.1.100    node-0    2888 | 3888 | 2181
      docker1     192.168.1.101    node-1    2888 | 3888 | 2181
      docker2     192.168.1.102    node-2    2888 | 3888 | 2181
1.1 Download the ZooKeeper package

A cluster should have an odd number of members (3, 5, 7, ...), at least 3, and not too many: with many machines, leader election and data synchronization take longer and the cluster becomes less stable. In practice, three voting members plus N observers works well.

Download the latest release from the Apache archive: Index of /dist/zookeeper/zookeeper-3.9.0

[root@www tools]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.9.0/apache-zookeeper-3.9.0-bin.tar.gz
Check that the download completed:
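As a quick sanity check, you can confirm the archive exists and is non-empty before distributing it. The helper below is hypothetical (not from the original article) and is demonstrated on a throwaway file; on the real host, point it at /usr/local/tools/apache-zookeeper-3.9.0-bin.tar.gz:

```shell
#!/bin/bash
# Hypothetical helper: report whether a downloaded package is present and non-empty.
check_pkg() {
    if [ -s "$1" ]; then echo "OK: $1"; else echo "MISSING: $1"; fi
}

# Demonstrated on a throwaway fixture instead of the real tarball:
tmp=$(mktemp)
echo data > "$tmp"
check_pkg "$tmp"            # OK: <tmpfile>
check_pkg /no/such.tar.gz   # MISSING: /no/such.tar.gz
rm -f "$tmp"
```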

1.2 Copy the package to the other servers

On node docker0, distribute the package to the /usr/local/tools directory on every server, then create /usr/local/zookeeper. We will drive the ZK installation on the other two servers from docker0 at the same time:

[root@www tools]# mkdir -p /usr/local/zookeeper

First, put the other two servers (docker1 and docker2) under docker0's control, as shown below:

(See also: Uninstalling/installing JDK 1.8 on CentOS 7 | Controlling multiple terminals at once with Xshell 7)

[root@www tools]# sudo yum install rsync -y
The difference between rsync and scp: rsync is faster for this job, because it only transfers files that have changed; scp copies everything every time.

Create a bin directory under the home directory:

[root@www tools]# mkdir ~/bin

Create the distribution script xsync with the following content:

[root@www tools]#  cat ~/bin/xsync 

#!/bin/bash
#1. Check the argument count
if [ $# -lt 1 ]
then
    echo "Not enough arguments!"
    exit 1
fi

#2. Loop over every machine in the cluster
for host in docker0 docker1 docker2
do
    echo ====================  $host  ====================

    #3. Send each file or directory in turn
    for file in "$@"
    do
        #4. Check that the file exists
        if [ -e "$file" ]
            then
                #5. Get the parent directory (resolving symlinks)
                pdir=$(cd -P "$(dirname "$file")"; pwd)
                #6. Get the file name
                fname=$(basename "$file")
                ssh "$host" "mkdir -p $pdir"
                rsync -av "$pdir/$fname" "$host:$pdir"
            else
                echo "$file does not exist!"
        fi
    done
done

Run the extraction on node docker0. Because docker1 and docker2 are under simultaneous Xshell control together with docker0, any command issued on one node is replayed on the others.

Here is ~/bin/xsync on node docker0:

[root@www bin]# ll
total 4
-rwxr-xr-x 1 root root 736 Aug 21 20:53 xsync

1.3 Distribute the JDK to docker1 and docker2

[root@www bin]# ./xsync  /usr/local/tools/jdk-8u371-linux-x64.tar.gz  
==================== docker0 ====================
sending incremental file list

sent 66 bytes  received 12 bytes  156.00 bytes/sec
total size is 139,219,380  speedup is 1,784,863.85
==================== docker1 ====================
sending incremental file list
jdk-8u371-linux-x64.tar.gz

sent 139,253,473 bytes  received 35 bytes  39,786,716.57 bytes/sec
total size is 139,219,380  speedup is 1.00
==================== docker2 ====================
sending incremental file list
jdk-8u371-linux-x64.tar.gz

sent 139,253,473 bytes  received 35 bytes  39,786,716.57 bytes/sec
total size is 139,219,380  speedup is 1.00

1.4 Verify the file arrived in /usr/local/tools on docker1 and docker2

 The script works:


1.5 Extract the ZK package on node docker0

[root@www zookeeper]# tar -zxvf /usr/local/tools/apache-zookeeper-3.9.0-bin.tar.gz -C /usr/local/zookeeper/

Copy the extracted zookeeper tree to docker1 and docker2:

[root@www zookeeper]# cd ~/bin/
[root@www bin]# ./xsync /usr/local/zookeeper/
==================== docker0 ====================
sending incremental file list

sent 61,605 bytes  received 227 bytes  123,664.00 bytes/sec
total size is 390,744,347  speedup is 6,319.45
==================== docker1 ====================
sending incremental file list

sent 61,613 bytes  received 235 bytes  123,696.00 bytes/sec
total size is 390,744,347  speedup is 6,317.82
==================== docker2 ====================
sending incremental file list

sent 61,613 bytes  received 235 bytes  123,696.00 bytes/sec
total size is 390,744,347  speedup is 6,317.82
 


1.6 Verify the copy reached the other nodes:

 


2. Configure zoo.cfg on node docker0

2.1 zoo.cfg contents

[root@www conf]# cat zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# data directory
dataDir=/usr/local/zookeeper/data/
# transaction log directory
dataLogDir=/usr/local/zookeeper/dataLog/

# the port at which the clients will connect
clientPort=2181

# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
#server.NUM=IP:port1:port2  NUM is this server's id in the ensemble; IP is its address;
#port1 is the leader/follower data port; port2 is the leader-election port
server.1=192.168.1.100:2888:3888
server.2=192.168.1.101:2888:3888
server.3=192.168.1.102:2888:3888

 

[root@www conf]# pwd
/usr/local/zookeeper/apache-zookeeper-3.9.0-bin/conf

2.2 Copy zoo.cfg from docker0 to the other machines
 

[root@www conf]# scp zoo.cfg docker1:/usr/local/zookeeper/apache-zookeeper-3.9.0-bin/conf
zoo.cfg                                                                      100% 1573   783.0KB/s   00:00    
[root@www conf]# scp zoo.cfg docker2:/usr/local/zookeeper/apache-zookeeper-3.9.0-bin/conf
zoo.cfg           


2.3 Create the /usr/local/zookeeper/data and /usr/local/zookeeper/dataLog directories

Put all three machines under the same Xshell send-to-all session,

then run the following on node docker0:

[root@www conf]# mkdir -p /usr/local/zookeeper/data

[root@www conf]# mkdir -p /usr/local/zookeeper/dataLog

2.4 Create the myid file in the data directory

The myid content on each node:

Node docker0

[root@www data]# cat myid
1
 


Node docker1

[root@www data]# cat myid
2
 


Node docker2

[root@www data]# cat myid
3
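The three myid files can also be written in one loop from the controlling node. This is a sketch, not from the original article: the local directories below stand in for each node's /usr/local/zookeeper/data, and the commented ssh line shows the assumed real-cluster form.

```shell
#!/bin/bash
# Sketch: write myid 1..3 for docker0..docker2. Local stand-in directories are
# used so the id-to-host mapping can be verified without a cluster.
datadir=./zk-myid-demo            # stand-in for /usr/local/zookeeper/data
for id in 1 2 3; do
    host="docker$((id - 1))"
    mkdir -p "$datadir/$host"
    echo "$id" > "$datadir/$host/myid"
    # Real cluster, run from docker0:
    # ssh "$host" "echo $id > /usr/local/zookeeper/data/myid"
done
cat "$datadir/docker2/myid"       # prints 3
```

Whatever way you write it, the id in myid must match that host's server.N line in zoo.cfg.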
 


3. How ZK leader election works

The election mechanism
A ZK cluster can only serve while more than half of its members are up, which is why clusters are sized with an odd number of machines.

Majority rule

Deciding the winner
By default, a candidate wins once it holds more than half of the votes.

Election states
LOOKING: campaigning.
FOLLOWING: follower; syncs state from the leader and votes.
OBSERVING: observer; syncs state from the leader but does not vote.
LEADING: leader.

Election flow in brief
Suppose three servers, all without data, with server IDs (stored in the myid file) 1, 2 and 3, started one after another:

Server 1 starts and votes for itself, then broadcasts its vote. No other machine is up yet, so it gets no replies and stays in LOOKING.
Server 2 starts, votes for itself, and exchanges votes with server 1. Server 2 has the larger ID, so it wins the exchange, but the vote count has not yet exceeded half, so both servers remain LOOKING.
Server 3 starts, votes for itself, and exchanges votes with servers 1 and 2. Server 3 has the largest ID, so it wins; now the vote count exceeds half, server 3 becomes the leader, and servers 1 and 2 become its followers.
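The "more than half" rule in the walkthrough above is plain integer arithmetic; a small sketch:

```shell
#!/bin/bash
# Sketch: a candidate wins once its votes form a strict majority of the ensemble.
has_quorum() {  # usage: has_quorum VOTES ENSEMBLE_SIZE
    [ "$1" -gt $(( $2 / 2 )) ]
}

has_quorum 1 3 || echo "1 of 3: no majority, still LOOKING"
has_quorum 2 3 && echo "2 of 3: majority, election settles"
has_quorum 2 4 || echo "2 of 4: no majority (why even sizes buy nothing)"
```

This is also why a 3-node ensemble tolerates one failure and a 4-node ensemble still only tolerates one.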
 

4. Create a boot-time service script

 4.1 With all three machines in the same Xshell send-to-all session, create the unit file on any one of them

Unit file contents:

[root@www data]# cat /usr/lib/systemd/system/zookeeper.service
[Unit]
Description=Zookeeper Server
After=network.target

[Service]
Type=forking
Environment=ZOO_LOG_DIR=/usr/local/zookeeper/dataLog
Environment=JAVA_HOME=/usr/local/java/jdk1.8.0_371
ExecStart=/usr/local/zookeeper/apache-zookeeper-3.9.0-bin/bin/zkServer.sh start
ExecStop=/usr/local/zookeeper/apache-zookeeper-3.9.0-bin/bin/zkServer.sh stop
Restart=always
#User=zookeeper
#Group=zookeeper

[Install]
WantedBy=multi-user.target

4.2 Start ZK

Reload systemd so it picks up the new unit:

systemctl daemon-reload

Enable start at boot:          systemctl enable zookeeper.service

Start zookeeper:               systemctl start zookeeper.service

Stop zookeeper:                systemctl stop zookeeper.service

Check status and recent log:   systemctl status zookeeper.service

Disable start at boot:         systemctl disable zookeeper.service

[root@www conf]# systemctl status zookeeper
zookeeper.service - Zookeeper Server
   Loaded: loaded (/usr/lib/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-08-26 23:02:24 CST; 19s ago
  Process: 3263 ExecStop=/usr/local/zookeeper/apache-zookeeper-3.9.0-bin/bin/zkServer.sh stop (code=exited, status=0/SUCCESS)
  Process: 3286 ExecStart=/usr/local/zookeeper/apache-zookeeper-3.9.0-bin/bin/zkServer.sh start (code=exited, status=0/SUCCESS)
 Main PID: 3302 (java)
    Tasks: 49
   CGroup: /system.slice/zookeeper.service
           └─3302 /usr/local/java/jdk1.8.0_371/bin/java -Dzookeeper.log.dir=/usr/local/zookeeper/dataLog -Dzookeeper.log.file=zookeeper--server-www.yhchange.com.log -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError=kill -9 %p -cp /usr/lo...

Aug 26 23:02:23 www.yhchange.com systemd[1]: Starting Zookeeper Server...
Aug 26 23:02:23 www.yhchange.com zkServer.sh[3286]: ZooKeeper JMX enabled by default
Aug 26 23:02:23 www.yhchange.com zkServer.sh[3286]: Using config: /usr/local/zookeeper/apache-zookeeper-3.9.0-bin/bin/../conf/zoo.cfg
Aug 26 23:02:24 www.yhchange.com zkServer.sh[3286]: Starting zookeeper ... STARTED
8月 26 23:02:24 www.yhchange.com systemd[1]: Started Zookeeper Server.
 

4.3 Check the cluster status
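One way to check each node's role is the srvr four-letter command (the startup logs below show srvr is the one enabled four-letter command on this cluster). This is a sketch: the live query would be `echo srvr | nc docker0 2181` (assuming nc is installed), so the parsing is demonstrated here on a canned sample reply; `zkServer.sh status` prints the same Mode field locally.

```shell
#!/bin/bash
# Sketch: extract the Mode line from srvr output.
# On the cluster: echo srvr | nc <host> 2181 | parse_mode
parse_mode() { awk -F': ' '/^Mode:/ {print $2}'; }

# Canned sample reply, for demonstration only:
sample="Zookeeper version: 3.9.0
Latency min/avg/max: 0/0.0/0
Mode: follower
Node count: 5"
printf '%s\n' "$sample" | parse_mode   # prints follower
```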

  


4.4 Startup logs (the leader, docker2 with myid=3, first; then docker1 with myid=2; then docker0 with myid=1):

[root@www dataLog]# tail -f zookeeper--server-www.yhchange.com.out 
2023-08-27 09:23:15,240 [myid:] - INFO  [ListenerHandler-/192.168.1.102:3888:o.a.z.s.q.QuorumCnxManager$Listener$ListenerHandler@1076] - Received connection request from /192.168.1.101:59700
2023-08-27 09:23:15,276 [myid:] - INFO  [WorkerReceiver[myid=3]:o.a.z.s.q.FastLeaderElection$Messenger$WorkerReceiver@391] - Notification: my state:LEADING; n.sid:2, n.state:LOOKING, n.leader:3, n.round:0x1, n.peerEpoch:0x2, n.zxid:0x0, message format version:0x2, n.config version:0x0
2023-08-27 09:23:15,335 [myid:] - INFO  [LearnerHandler-/192.168.1.101:41160:o.a.z.s.q.LearnerHandler@511] - Follower sid: 2 : info : 192.168.1.101:2888:3888:participant
2023-08-27 09:23:15,342 [myid:] - INFO  [LearnerHandler-/192.168.1.101:41160:o.a.z.s.ZKDatabase@347] - On disk txn sync enabled with snapshotSizeFactor 0.33
2023-08-27 09:23:15,342 [myid:] - INFO  [LearnerHandler-/192.168.1.101:41160:o.a.z.s.q.LearnerHandler@806] - Synchronizing with Learner sid: 2 maxCommittedLog=0x0 minCommittedLog=0x0 lastProcessedZxid=0x300000000 peerLastZxid=0x0
2023-08-27 09:23:15,346 [myid:] - INFO  [LearnerHandler-/192.168.1.101:41160:o.a.z.s.q.LearnerHandler@573] - Sending snapshot last zxid of peer is 0x0, zxid of leader is 0x300000000, send zxid of db as 0x300000000, 1 concurrent snapshot sync, snapshot sync was exempt from throttle
2023-08-27 09:57:16,166 [myid:] - INFO  [NIOWorkerThread-1:o.a.z.s.c.FourLetterCommands@223] - The list of known four letter word commands is : [{1936881266=srvr, 1937006964=stat, 2003003491=wchc, 1685417328=dump, 1668445044=crst, 1936880500=srst, 1701738089=envi, 1668247142=conf, -720899=telnet close, 1751217000=hash, 2003003507=wchs, 2003003504=wchp, 1684632179=dirs, 1668247155=cons, 1835955314=mntr, 1769173615=isro, 1920298859=ruok, 1735683435=gtmk, 1937010027=stmk}]
2023-08-27 09:57:16,166 [myid:] - INFO  [NIOWorkerThread-1:o.a.z.s.c.FourLetterCommands@224] - The list of enabled four letter word commands is : [[srvr]]
2023-08-27 09:57:16,167 [myid:] - INFO  [NIOWorkerThread-1:o.a.z.s.NIOServerCnxn@529] - Processing srvr command from /127.0.0.1:48150
2023-08-27 09:59:23,779 [myid:] - INFO  [NIOWorkerThread-2:o.a.z.s.NIOServerCnxn@529] - Processing srvr command from /127.0.0.1:48152
 


[root@www bin]# cd ../../dataLog/
[root@www dataLog]# ll
total 32
-rw-r--r--. 1 root root 28713 Aug 27 09:59 zookeeper--server-www.yhchange.com.out
[root@www dataLog]# tail -f zookeeper--server-www.yhchange.com.out 
2023-08-27 09:23:14,958 [myid:] - INFO  [QuorumPeer[myid=2](plain=[0:0:0:0:0:0:0:0]:2181)(secure=disabled):o.a.z.s.q.CommitProcessor@490] - Configuring CommitProcessor with readBatchSize -1 commitBatchSize 1
2023-08-27 09:23:14,959 [myid:] - INFO  [QuorumPeer[myid=2](plain=[0:0:0:0:0:0:0:0]:2181)(secure=disabled):o.a.z.s.q.CommitProcessor@451] - Configuring CommitProcessor with 8 worker threads.
2023-08-27 09:23:14,961 [myid:] - INFO  [QuorumPeer[myid=2](plain=[0:0:0:0:0:0:0:0]:2181)(secure=disabled):o.a.z.s.q.FollowerRequestProcessor@59] - Initialized FollowerRequestProcessor with zookeeper.follower.skipLearnerRequestToNextProcessor as false
2023-08-27 09:23:14,965 [myid:] - INFO  [QuorumPeer[myid=2](plain=[0:0:0:0:0:0:0:0]:2181)(secure=disabled):o.a.z.s.RequestThrottler@75] - zookeeper.request_throttler.shutdownTimeout = 10000 ms
2023-08-27 09:23:14,989 [myid:] - INFO  [QuorumPeer[myid=2](plain=[0:0:0:0:0:0:0:0]:2181)(secure=disabled):o.a.z.s.q.Learner@717] - Learner received UPTODATE message
2023-08-27 09:23:14,989 [myid:] - INFO  [QuorumPeer[myid=2](plain=[0:0:0:0:0:0:0:0]:2181)(secure=disabled):o.a.z.s.q.QuorumPeer@920] - Peer state changed: following - broadcast
2023-08-27 09:57:03,908 [myid:] - INFO  [NIOWorkerThread-1:o.a.z.s.c.FourLetterCommands@223] - The list of known four letter word commands is : [{1936881266=srvr, 1937006964=stat, 2003003491=wchc, 1685417328=dump, 1668445044=crst, 1936880500=srst, 1701738089=envi, 1668247142=conf, -720899=telnet close, 1751217000=hash, 2003003507=wchs, 2003003504=wchp, 1684632179=dirs, 1668247155=cons, 1835955314=mntr, 1769173615=isro, 1920298859=ruok, 1735683435=gtmk, 1937010027=stmk}]
2023-08-27 09:57:03,908 [myid:] - INFO  [NIOWorkerThread-1:o.a.z.s.c.FourLetterCommands@224] - The list of enabled four letter word commands is : [[srvr]]
2023-08-27 09:57:03,909 [myid:] - INFO  [NIOWorkerThread-1:o.a.z.s.NIOServerCnxn@529] - Processing srvr command from /127.0.0.1:54672
2023-08-27 09:59:23,780 [myid:] - INFO  [NIOWorkerThread-2:o.a.z.s.NIOServerCnxn@529] - Processing srvr command from /127.0.0.1:54674
 


[root@www bin]# cd ../../dataLog/
[root@www dataLog]# ll
total 48
-rw-r--r-- 1 root root 48192 Aug 27 09:59 zookeeper--server-www.yhchange.com.out
[root@www dataLog]# tail -f zookeeper--server-www.yhchange.com.out 
2023-08-27 09:22:17,282 [myid:] - INFO  [QuorumPeer[myid=1](plain=[0:0:0:0:0:0:0:0]:2181)(secure=disabled):o.a.z.s.RequestThrottler@75] - zookeeper.request_throttler.shutdownTimeout = 10000 ms
2023-08-27 09:22:17,360 [myid:] - INFO  [QuorumPeer[myid=1](plain=[0:0:0:0:0:0:0:0]:2181)(secure=disabled):o.a.z.s.q.Learner@717] - Learner received UPTODATE message
2023-08-27 09:22:17,361 [myid:] - INFO  [QuorumPeer[myid=1](plain=[0:0:0:0:0:0:0:0]:2181)(secure=disabled):o.a.z.s.q.QuorumPeer@920] - Peer state changed: following - broadcast
2023-08-27 09:23:15,171 [myid:] - INFO  [ListenerHandler-/192.168.1.100:3888:o.a.z.s.q.QuorumCnxManager$Listener$ListenerHandler@1076] - Received connection request from /192.168.1.101:53108
2023-08-27 09:23:15,178 [myid:] - INFO  [WorkerReceiver[myid=1]:o.a.z.s.q.FastLeaderElection$Messenger$WorkerReceiver@391] - Notification: my state:FOLLOWING; n.sid:2, n.state:LOOKING, n.leader:2, n.round:0x1, n.peerEpoch:0x2, n.zxid:0x0, message format version:0x2, n.config version:0x0
2023-08-27 09:23:15,182 [myid:] - INFO  [WorkerReceiver[myid=1]:o.a.z.s.q.FastLeaderElection$Messenger$WorkerReceiver@391] - Notification: my state:FOLLOWING; n.sid:2, n.state:LOOKING, n.leader:3, n.round:0x1, n.peerEpoch:0x2, n.zxid:0x0, message format version:0x2, n.config version:0x0
2023-08-27 09:56:45,578 [myid:] - INFO  [NIOWorkerThread-1:o.a.z.s.c.FourLetterCommands@223] - The list of known four letter word commands is : [{1936881266=srvr, 1937006964=stat, 2003003491=wchc, 1685417328=dump, 1668445044=crst, 1936880500=srst, 1701738089=envi, 1668247142=conf, -720899=telnet close, 1751217000=hash, 2003003507=wchs, 2003003504=wchp, 1684632179=dirs, 1668247155=cons, 1835955314=mntr, 1769173615=isro, 1920298859=ruok, 1735683435=gtmk, 1937010027=stmk}]
2023-08-27 09:56:45,578 [myid:] - INFO  [NIOWorkerThread-1:o.a.z.s.c.FourLetterCommands@224] - The list of enabled four letter word commands is : [[srvr]]
2023-08-27 09:56:45,578 [myid:] - INFO  [NIOWorkerThread-1:o.a.z.s.NIOServerCnxn@529] - Processing srvr command from /127.0.0.1:44382
2023-08-27 09:59:23,731 [myid:] - INFO  [NIOWorkerThread-2:o.a.z.s.NIOServerCnxn@529] - Processing srvr command from /127.0.0.1:44384
 


5. Appendix: ZooKeeper configuration parameters explained

5.1 Log configuration

1) Snapshot directory:

dataDir=/usr/local/zookeeper/data

The directory where snapshot files are stored. By default the transaction log is also written here,

so we recommend setting dataLogDir separately.

2) Transaction log directory

dataLogDir=/usr/local/zookeeper/dataLog

The transaction log output directory; don't put it in the same directory as the snapshots.

3) Log cleanup

By default ZooKeeper never cleans up old transaction logs, so sooner or later the disk fills up.

You can enable automatic purging:

autopurge.snapRetainCount=300
autopurge.purgeInterval=72

#autopurge.purgeInterval=1

These enable automatic purging of transaction logs and snapshot files.

purgeInterval is the purge frequency, in hours.

Its default is 0, meaning auto-purge is disabled.

#autopurge.snapRetainCount=3

snapRetainCount is how many recent files to keep when purging; the default is 3.

4) Manual cleanup

If the cluster is under heavy load, purging can hurt performance, so you can clean up manually instead.

Manual cleanup example:

# -n 3 keeps 3 files

[root@www bin]# /usr/local/zookeeper/apache-zookeeper-3.9.0-bin/bin/zkCleanup.sh -n 3

You can also schedule it with crond.
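For example, a hypothetical crontab entry (via crontab -e) that runs the cleanup nightly; the path is the one used in this article and the log file location is an assumption:

```shell
# Run zkCleanup nightly at 03:00, keeping the 3 most recent snapshots/logs
0 3 * * * /usr/local/zookeeper/apache-zookeeper-3.9.0-bin/bin/zkCleanup.sh -n 3 >> /var/log/zk-cleanup.log 2>&1
```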

5.2 Client-related settings

1) Port clients use to connect to the ZooKeeper server

clientPort=2181

The port ZooKeeper listens on for client connections; open it in the firewall for client access.

2) Concurrent client connection limit

maxClientCnxns=300

From the official documentation:

maxClientCnxns: (No Java system property) Limits the number of concurrent connections (at the socket level) that a single client, identified by IP address, may make to a single member of the ZooKeeper ensemble. This is used to prevent certain classes of DoS attacks, including file descriptor exhaustion. The default is 60. Setting this to 0 entirely removes the limit on concurrent connections.

The per-client connection limit; the default is 60.

Setting it to 0 removes the limit entirely.

If it is too low, the log will show: too many connections from host - max is 60

Tune it to match your actual connection counts.

3) Disable the built-in admin server

admin.enableServer=false

This skips the embedded admin server and frees up port 8080.

5.3 Cluster settings

1) tickTime

tickTime=2000

The heartbeat interval between ZooKeeper servers, and between clients and servers: one heartbeat is sent every tickTime.

tickTime is in milliseconds.

Default: 2000

The default is fine; there is no need to change it.

2) initLimit: initial leader/follower sync limit

initLimit=10

The maximum number of heartbeats (tickTime intervals) tolerated while a follower (F) makes its initial connection to the leader (L).

It is the time allowed for a follower to connect and finish its initial sync with the leader, expressed as a multiple of tickTime.

If more than that many tickTime intervals pass, the connection attempt fails.

If more than half of the followers cannot finish syncing within initLimit, the leader gives up leadership and a new election is held.

In a large cluster, syncing takes longer, so this value may need to be raised.

Default: 10

3) syncLimit: leader/follower heartbeat limit

syncLimit=5

The maximum number of heartbeats (tickTime intervals) tolerated between a request and its acknowledgement between follower and leader.

It bounds the request/response round trip between leader and follower.

A follower that cannot reach the leader within this window is dropped from the ensemble.

Default: 5
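Concretely, with the values used in this article's zoo.cfg, these limits translate into wall-clock windows:

```shell
#!/bin/bash
# initLimit and syncLimit are multiples of tickTime (milliseconds).
tickTime=2000; initLimit=10; syncLimit=5
echo "initial sync window: $(( tickTime * initLimit / 1000 ))s"   # 20s
echo "request/ack window:  $(( tickTime * syncLimit / 1000 ))s"   # 10s
```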

4) Cluster member list

#cluster

server.1=192.168.1.100:2888:3888

server.2=192.168.1.101:2888:3888

server.3=192.168.1.102:2888:3888

server.A = B:C:D

A: the server's number in the ensemble.

It must match that server's myid.

B: the server's IP address

C: the port this follower uses to exchange data with the Leader

D: if the Leader goes down, the remaining servers need a port over which to elect a new Leader.

This is that leader-election port.

5.4 JVM options for running ZooKeeper

vi zkEnv.sh

Change the SERVER_JVMFLAGS line to:

export SERVER_JVMFLAGS="-Xmx2048m -Xms2048m"

What these two options do:

-Xmx: the maximum heap the process may use while running.

If the program needs more memory than this, it throws OutOfMemoryError.

-Xms: the heap size at startup.

It can be set equal to -Xmx

to avoid the JVM resizing the heap after every garbage collection.

How big should it be?

-Xmx defaults to 1/4 of physical memory,

and going above 3/4 of physical memory is not recommended.

So if nothing else runs on the machine,

start with 1/2 of physical memory and adjust from observation.
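A sketch of that sizing rule on Linux (reads MemTotal from /proc/meminfo; the 1/2 factor is just the starting point suggested above, not a ZooKeeper requirement):

```shell
#!/bin/bash
# Sketch: suggest -Xmx/-Xms at half of physical memory (Linux only).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_mb=$(( mem_kb / 2 / 1024 ))
echo "suggested: export SERVER_JVMFLAGS=\"-Xmx${half_mb}m -Xms${half_mb}m\""
```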

5.5 Monitoring the cluster with JMX

1) Create java.env in zookeeper's conf directory

[root@zk1 conf]# vi java.env

Contents:

JMXHOSTNAME="172.18.1.1"
JMXPORT=8899

Note: JMXHOSTNAME is the current server's IP.

2) Edit zkServer.sh

Below the line

echo "ZooKeeper remote JMX log4j set to $JMXLOG4J" >&2

find

ZOOMAIN="-Dcom.sun.management.jmxremote

and after jmxremote append:

-Djava.rmi.server.hostname=$JMXHOSTNAME

3) After the change, restart the service:

[root@zk1 conf]# systemctl stop zookeeper

[root@zk1 conf]# systemctl start zookeeper

4) Start jconsole

and in the Remote Process field enter:

172.18.1.1:8899

(that is, the remote process's IP address and port),

then click Connect.

Once connected, browse to:

MBeans -> org.apache.ZooKeeperService
