How to install and configure ZooKeeper

1: Configure the Java environment

Edit the /etc/bashrc file and add JAVA_HOME:

cat /etc/bashrc

export JAVA_HOME=/root/jdk-11.0.16.1

export PATH=$PATH:$JAVA_HOME/bin:.
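To pick up the change in the current shell and confirm that Java is visible, you can run these standard commands (nothing ZooKeeper-specific):

source /etc/bashrc

java -version

The version printed should match the JDK 11 build installed under /root/jdk-11.0.16.1.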

2: Download ZooKeeper

https://dlcdn.apache.org/zookeeper/zookeeper-3.7.1/apache-zookeeper-3.7.1-bin.tar.gz

This is the download URL for version 3.7.1.
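If the host has direct internet access, the archive can be fetched on the command line, for example with wget:

wget https://dlcdn.apache.org/zookeeper/zookeeper-3.7.1/apache-zookeeper-3.7.1-bin.tar.gz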

3: Extract ZooKeeper

tar zxvf apache-zookeeper-3.7.1-bin.tar.gz

4: Edit the configuration file zoo.cfg

zoo.cfg is located in the apache-zookeeper-3.7.1-bin/conf directory.

Go into the ZooKeeper directory, copy zoo_sample.cfg in the conf directory to zoo.cfg, and edit the configuration:

cd apache-zookeeper-3.7.1-bin/conf

cp zoo_sample.cfg zoo.cfg

vi zoo.cfg

Add the following configuration:

[kfk@bigdata-pro01 conf]$ cat zoo.cfg

# The number of milliseconds of each tick

tickTime=2000

# The number of ticks that the initial

# synchronization phase can take

initLimit=10

# The number of ticks that can pass between

# sending a request and getting an acknowledgement

syncLimit=5

# the directory where the snapshot is stored.

# do not use /tmp for storage, /tmp here is just

# example sakes.

dataDir=/home/kfk/apache-zookeeper-3.7.1-bin/data

# the port at which the clients will connect

clientPort=2181

# the maximum number of client connections.

# increase this if you need to handle more clients

#maxClientCnxns=0

#

# Be sure to read the maintenance section of the

# administrator guide before turning on autopurge.

#

#http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance

#

# The number of snapshots to retain in dataDir

#autopurge.snapRetainCount=20

# Purge task interval in hours

# Set to "0" to disable autopurge feature

#autopurge.purgeInterval=48

## Metrics Providers

#

# https://prometheus.io Metrics Exporter

#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider

#metricsProvider.httpPort=7000

#metricsProvider.exportJvmInfo=true

server.1=bigdata-pro01:2888:3888

server.2=bigdata-pro02:2888:3888

server.3=bigdata-pro03:2888:3888

server.4=bigdata-pro04:2888:3888

server.5=bigdata-pro05:2888:3888

zookeeper.session.timeout.ms=40000000

The directory that dataDir points to should have as much disk space as possible.

maxClientCnxns=0 # a value of 0 (or leaving the setting commented out) removes the limit on concurrent connections from a single client

autopurge.snapRetainCount=20 # retain the 20 most recent snapshots (and their transaction logs) in dataDir

autopurge.purgeInterval=48 # run the purge task every 48 hours; 0 disables autopurge
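A freshly extracted tarball does not contain this data directory, so create it before starting the server (path taken from the dataDir setting above):

mkdir -p /home/kfk/apache-zookeeper-3.7.1-bin/data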

5: Configure the myid file

/home/kfk/apache-zookeeper-3.7.1-bin/data/myid

Each server must use a different myid.

I number them starting from 1, incrementing by one per server, as in the example below.
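On this first host (bigdata-pro01, i.e. server.1 in zoo.cfg) that means writing 1 into the file, for example:

echo 1 > /home/kfk/apache-zookeeper-3.7.1-bin/data/myid

cat /home/kfk/apache-zookeeper-3.7.1-bin/data/myid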

6: Do a trial start

bin/zkServer.sh start
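If the process does not come up, the startup output is written under the logs directory of the installation (the exact file name contains your user and hostname, as the ps output in the next step shows), for example:

tail -n 50 /home/kfk/apache-zookeeper-3.7.1-bin/logs/zookeeper-kfk-server-*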

7: Check whether the process has started

ps -ef|grep zookeeper

If the process shows up, congratulations, the startup succeeded.

kfk 31127 1 0 Jan04 ? 00:01:27 /root/jdk-11.0.16.1/bin/java -Dzookeeper.log.dir=/home/kfk/apache-zookeeper-3.7.1-bin/bin/../logs -Dzookeeper.log.file=zookeeper-kfk-server-bigdata-pro01.kfk.com.log -Dzookeeper.root.logger=INFO,CONSOLE -XX:+HeapDumpOnOutOfMemoryError -XX:OnOutOfMemoryError=kill -9 %p -cp /home/kfk/apache-zookeeper-3.7.1-bin/bin/../zookeeper-server/target/classes:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../build/classes:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../zookeeper-server/target/lib/*.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../build/lib/*.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/zookeeper-prometheus-metrics-3.7.1.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/zookeeper-jute-3.7.1.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/zookeeper-3.7.1.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/snappy-java-1.1.7.7.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/slf4j-reload4j-1.7.35.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/slf4j-api-1.7.35.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/simpleclient_servlet-0.9.0.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/simpleclient_hotspot-0.9.0.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/simpleclient_common-0.9.0.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/simpleclient-0.9.0.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/reload4j-1.2.19.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/netty-transport-native-unix-common-4.1.76.Final.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/netty-transport-native-epoll-4.1.76.Final.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/netty-transport-classes-epoll-4.1.76.Final.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/netty-transport-4.1.76.Final.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/netty-resolver-4.1.76.Final.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/netty-handler-4.1.76.Final.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/netty-common-4.1.76.Final.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/netty-codec-4.1.76.Final.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/netty-buffer-4.1.76.Final.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/metrics-core-4.1.12.1.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jline-2.14.6.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jetty-util-ajax-9.4.43.v20210629.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jetty-util-9.4.43.v20210629.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jetty-servlet-9.4.43.v20210629.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jetty-server-9.4.43.v20210629.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jetty-security-9.4.43.v20210629.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jetty-io-9.4.43.v20210629.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jetty-http-9.4.43.v20210629.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/javax.servlet-api-3.1.0.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jackson-databind-2.13.2.1.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jackson-core-2.13.2.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/jackson-annotations-2.13.2.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/commons-cli-1.4.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../lib/audience-annotations-0.12.0.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../zookeeper-*.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../zookeeper-server/src/main/resources/lib/*.jar:/home/kfk/apache-zookeeper-3.7.1-bin/bin/../conf: -Xmx1000m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /home/kfk/apache-zookeeper-3.7.1-bin/bin/../conf/zoo.cfg

[kfk@bigdata-pro01 ~]$
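As an alternative to grepping ps, jps from the JDK lists Java processes by their main class; the ZooKeeper server runs as QuorumPeerMain, which is visible at the end of the ps output above:

jps -l | grep QuorumPeerMain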

8: Package the configured ZooKeeper installation

tar cvf apache-zookeeper-3.7.1-bin.tar apache-zookeeper-3.7.1-bin

9: Distribute the archive to the other hosts in the cluster

scp apache-zookeeper-3.7.1-bin.tar kfk@bigdata-pro02:/home/kfk/

scp apache-zookeeper-3.7.1-bin.tar kfk@bigdata-pro03:/home/kfk/

scp apache-zookeeper-3.7.1-bin.tar kfk@bigdata-pro04:/home/kfk

scp apache-zookeeper-3.7.1-bin.tar kfk@bigdata-pro05:/home/kfk/

This time I configured 5 hosts.
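The four scp commands can also be written as a single loop; this sketch assumes the kfk user can ssh to the other hosts (ideally with key-based login already configured):

for host in bigdata-pro02 bigdata-pro03 bigdata-pro04 bigdata-pro05; do scp apache-zookeeper-3.7.1-bin.tar kfk@${host}:/home/kfk/; done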

10: Extract the archive on the other hosts

tar xvf apache-zookeeper-3.7.1-bin.tar

11: Modify myid on the other hosts

Edit the myid file on each host so that its value matches that host's server.N line in zoo.cfg, as shown below.
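For example, run the following on the corresponding host (same path as in step 5):

echo 2 > /home/kfk/apache-zookeeper-3.7.1-bin/data/myid    # on bigdata-pro02

echo 3 > /home/kfk/apache-zookeeper-3.7.1-bin/data/myid    # on bigdata-pro03

and so on, up to 5 on bigdata-pro05.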

12: Start ZooKeeper on the other hosts

Start each of them with bin/zkServer.sh start
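Once a majority of the five servers (at least three) is running, the ensemble can elect a leader. Each node's role can be checked with the bundled status command; one server should report Mode: leader and the others Mode: follower:

bin/zkServer.sh status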

13: Connect with the client

bin/zkCli.sh -server 127.0.0.1:2181

This lets you perform simple, file-like operations.

Once connected, you should see something like the following:

Connecting to localhost:2181
...
Welcome to ZooKeeper!
JLine support is enabled
[zkshell: 0]

From the shell, type help to get a list of the commands that can be executed from the client, as in:

ZooKeeper -server host:port -client-configuration properties-file cmd args

addWatch [-m mode] path # optional mode is one of [PERSISTENT,PERSISTENT_RECURSIVE] - default is PERSISTENT_RECURSIVE

addauth scheme auth

close

config [-c] [-w] [-s]

connect host:port

create [-s] [-e] [-c] [-t ttl] path [data] [acl]

delete [-v version] path

deleteall path [-b batch size]

delquota [-n|-b|-N|-B] path

get [-s] [-w] path

getAcl [-s] path

getAllChildrenNumber path

getEphemerals path

history

listquota path

ls [-s] [-w] [-R] path

printwatches on|off

quit

reconfig [-s] [-v version] [[-file path] | [-members serverID=host:port1:port2;port3[,...]*]] | [-add serverId=host:port1:port2;port3[,...]]* [-remove serverId[,...]*]

redo cmdno

removewatches path [-c|-d|-a] [-l]

set [-s] [-v version] path data

setAcl [-s] [-v version] [-R] path acl

setquota -n|-b|-N|-B val path

stat [-w] path

sync path

version

whoami

Command not found: Command not found help

From here, you can try a few simple commands to get a feel for this simple command line interface. First, issue the list command, as in ls, yielding:

[zk: 127.0.0.1:2181(CONNECTED) 1] ls

ls [-s] [-w] [-R] path

[zk: 127.0.0.1:2181(CONNECTED) 2] ls /

[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]

Next, create a new znode by running create /zk_test my_data. This creates a new znode and associates the string "my_data" with the node. You should see:

[zk: 127.0.0.1:2181(CONNECTED) 3] create /zk_test my_data

Created /zk_test

[zk: 127.0.0.1:2181(CONNECTED) 4]

Issue another ls / command to see what the directory looks like:

[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zk_test, zookeeper]

Notice that the zk_test directory has now been created.

Next, verify that the data was associated with the znode by running the get command, as in:

[zk: 127.0.0.1:2181(CONNECTED) 0] get /zk_test

my_data

[zk: 127.0.0.1:2181(CONNECTED) 1]

You can also change the data associated with zk_test by issuing the set command and reading it back with get:

[zk: 127.0.0.1:2181(CONNECTED) 3] set /zk_test jkjunk

[zk: 127.0.0.1:2181(CONNECTED) 4] get /zk_test

jkjunk

[zk: 127.0.0.1:2181(CONNECTED) 5]

(Notice that we did a get after setting the data, and it did indeed change.)

Finally, delete the node with delete /zk_test and list the root again to confirm that it is gone:

[zk: 127.0.0.1:2181(CONNECTED) 6] ls /

[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
