Hadoop cluster

1. Installing and configuring Hadoop

Hadoop runs on a Java runtime, so a JDK has to be installed first.

Create the user that will run Hadoop:
[root@server1 ~]# useradd -u 800 hadoop   
[root@server1 ~]# id hadoop
uid=800(hadoop) gid=800(hadoop) groups=800(hadoop)
[root@server1 ~]# ls
hadoop-2.7.3.tar.gz                       jdk-7u79-linux-x64.tar.gz
[root@server1 ~]# mv hadoop-2.7.3.tar.gz jdk-7u79-linux-x64.tar.gz /home/hadoop/
[root@server1 ~]# su - hadoop
Extract the archives and create symlinks:
[hadoop@server1 ~]$ tar -zxf hadoop-2.7.3.tar.gz 
[hadoop@server1 ~]$ tar -zxf jdk-7u79-linux-x64.tar.gz 
[hadoop@server1 ~]$ ls
hadoop-2.7.3  hadoop-2.7.3.tar.gz  jdk1.7.0_79  jdk-7u79-linux-x64.tar.gz
[hadoop@server1 ~]$ ln -s jdk1.7.0_79/ java
[hadoop@server1 ~]$ ln -s hadoop-2.7.3 hadoop
[hadoop@server1 ~]$ ll
total 359004
drwxr-xr-x 9 hadoop hadoop      4096 Aug 18  2016 hadoop-2.7.3
-rw-r--r-- 1 root   root   214092195 Mar  2  2017 hadoop-2.7.3.tar.gz
lrwxrwxrwx 1 hadoop hadoop        12 Jul 21 09:59 java -> jdk1.7.0_79/
drwxr-xr-x 8 hadoop hadoop      4096 Apr 11  2015 jdk1.7.0_79
-rw-r--r-- 1 root   root   153512879 Jun 23  2016 jdk-7u79-linux-x64.tar.gz
Configure the Java environment; when the JDK is upgraded, only the symlink needs updating:
[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ cd etc/hadoop/
[hadoop@server1 hadoop]$ ls
capacity-scheduler.xml      httpfs-env.sh            mapred-env.sh
configuration.xsl           httpfs-log4j.properties  mapred-queues.xml.template
container-executor.cfg      httpfs-signature.secret  mapred-site.xml.template
core-site.xml               httpfs-site.xml          slaves
hadoop-env.cmd              kms-acls.xml             ssl-client.xml.example
hadoop-env.sh               kms-env.sh               ssl-server.xml.example
hadoop-metrics2.properties  kms-log4j.properties     yarn-env.cmd
hadoop-metrics.properties   kms-site.xml             yarn-env.sh
hadoop-policy.xml           log4j.properties         yarn-site.xml
hdfs-site.xml               mapred-env.cmd
[hadoop@server1 hadoop]$ vim hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/home/hadoop/java   ### point at the Java symlink
Test that Hadoop was installed correctly:
[hadoop@server1 hadoop]$ bin/hadoop ## seeing the usage text below means the install works
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
.................................
[hadoop@server1 hadoop]$ mkdir input   ### create a directory for the job's input files
[hadoop@server1 hadoop]$ cd input/
[hadoop@server1 input]$ cp ../etc/hadoop/* .
[hadoop@server1 input]$ ls
capacity-scheduler.xml      httpfs-env.sh            mapred-env.sh
configuration.xsl           httpfs-log4j.properties  mapred-queues.xml.template
container-executor.cfg      httpfs-signature.secret  mapred-site.xml.template
core-site.xml               httpfs-site.xml          slaves
hadoop-env.cmd              kms-acls.xml             ssl-client.xml.example
hadoop-env.sh               kms-env.sh               ssl-server.xml.example
hadoop-metrics2.properties  kms-log4j.properties     yarn-env.cmd
hadoop-metrics.properties   kms-site.xml             yarn-env.sh
hadoop-policy.xml           log4j.properties         yarn-site.xml
hdfs-site.xml               mapred-env.cmd
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar  grep input output 'dfs[a-z.]+'
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
[hadoop@server1 output]$ cat *
6   dfs.audit.logger
4   dfs.class
3   dfs.server.namenode.
2   dfs.period
2   dfs.audit.log.maxfilesize
2   dfs.audit.log.maxbackupindex
1   dfsmetrics.log
1   dfsadmin
1   dfs.servers
1   dfs.file
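The example job above is essentially a distributed grep-and-count. A rough single-machine sketch of what it computes, using plain shell and sample lines rather than Hadoop (the input strings here are made up for illustration):

```shell
# map: extract tokens matching 'dfs[a-z.]+'; shuffle: sort; reduce: count duplicates
printf 'dfs.replication\ndfs.replication\ndfsadmin\n' \
  | grep -oE 'dfs[a-z.]+' \
  | sort | uniq -c | sort -rn
```

The first column is the count, matching the layout of the part-r-00000 output above.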

2. Working with data

Configure Hadoop

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop

[hadoop@server1 hadoop]$ vim core-site.xml ### set the default filesystem (the namenode)
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://172.25.60.1:9000</value>
  </property>
</configuration>

[hadoop@server1 hadoop]$ vim slaves     ### list the datanode hosts
172.25.60.1

[hadoop@server1 hadoop]$ vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

Configure ssh
Set up passwordless access: generate a key pair, then copy the public key id_rsa.pub to authorized_keys.

[hadoop@server1 hadoop]$ ssh-keygen  ### press Enter at every prompt to accept the defaults
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
e8:8d:6a:5a:b6:50:56:4f:c6:07:d1:3b:fc:0a:6b:75 hadoop@server1
The key's randomart image is:
+--[ RSA 2048]----+
|        oo       |
|       . ..      |
|      . +...     |
|     . = .+      |
|    o . S  o     |
|   o . o. . E    |
|  . o o .+ o     |
|   +.o  o .      |
|  .oo  .         |
+-----------------+
[hadoop@server1 ~]$ cd .ssh/
[hadoop@server1 .ssh]$ ls
id_rsa  id_rsa.pub
[hadoop@server1 .ssh]$ cp id_rsa.pub authorized_keys
[hadoop@server1 .ssh]$ ssh localhost   ### log in once under each name to add it to known_hosts
[hadoop@server1 .ssh]$ logout
[hadoop@server1 .ssh]$ ssh 172.25.60.1
[hadoop@server1 .ssh]$ logout
[hadoop@server1 .ssh]$ ssh server1
[hadoop@server1 .ssh]$ logout
[hadoop@server1 .ssh]$ ssh 0.0.0.0
[hadoop@server1 .ssh]$ logout
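The key setup above can also be scripted non-interactively. A side-effect-free sketch, using a temporary directory as a stand-in for the real ~/.ssh (paths here are illustrative only):

```shell
# generate an RSA key pair with an empty passphrase, then authorize it
keydir=$(mktemp -d)                     # stand-in for ~/.ssh
ssh-keygen -q -t rsa -N '' -f "$keydir/id_rsa"
cp "$keydir/id_rsa.pub" "$keydir/authorized_keys"
chmod 600 "$keydir/authorized_keys"     # sshd rejects group/world-writable key files
```

With StrictModes enabled, sshd additionally requires that the home directory and ~/.ssh are not writable by other users.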

Start DFS

Format the namenode (the metadata is stored under /tmp/ by default):
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format

[hadoop@server1 hadoop]$ ls /tmp/
hadoop-hadoop      yum.log
hsperfdata_hadoop  yum_save_tx-2018-07-20-21-44UB94wi.yumtx

Configure environment variables:
[hadoop@server1 ~]$ vim .bash_profile   ### put java on the PATH, otherwise the daemons fail to start
PATH=$PATH:$HOME/bin:~/java/bin
[hadoop@server1 ~]$ source .bash_profile
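What that .bash_profile line does, reproduced in the current shell (the ~/java symlink comes from the setup earlier):

```shell
# append the JDK's bin directory so the Hadoop scripts can find the java binary
export PATH="$PATH:$HOME/java/bin"
# show the entry that was just added
echo "$PATH" | tr ':' '\n' | tail -n 1
```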

Start DFS:
[hadoop@server1 hadoop]$ sbin/start-dfs.sh 
Starting namenodes on [server1]
server1: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-server1.out
172.25.60.1: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-server1.out

[hadoop@server1 ~]$ jps
6906 SecondaryNameNode
6586 NameNode
6690 DataNode
7250 Jps

Check in a browser that the NameNode web UI (port 50070) is up.
Working with the filesystem

[hadoop@server1 hadoop]$ bin/hdfs dfs -usage   ## show usage
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input/
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2018-07-21 10:56 input
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar  wordcount input output 
[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*  ## view the generated output

The files generated under the new output directory can also be browsed in the web UI.
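Conceptually, the wordcount job is the classic split/sort/count pipeline. A local approximation in plain shell, with made-up sample input:

```shell
# map: emit one word per line; shuffle: sort; reduce: count duplicates
printf 'hello world\nhello hadoop\n' \
  | tr -s ' \t' '\n' \
  | sort | uniq -c | sort -rn
```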

3. Distributed file storage

namenode (172.25.60.1)

Stop DFS on server1 first:
[hadoop@server1 hadoop]$ sbin/stop-dfs.sh
Stopping namenodes on [server1]
server1: stopping namenode
172.25.60.1: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode

[root@server1 ~]# yum install -y nfs-utils.x86_64
[root@server1 ~]# /etc/init.d/rpcbind start
[root@server1 ~]# vim /etc/exports 
/home/hadoop   *(rw,anonuid=800,anongid=800)
[root@server1 ~]# /etc/init.d/nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

[root@server1 ~]# exportfs -v
/home/hadoop    <world>(rw,wdelay,root_squash,no_subtree_check,anonuid=800,anongid=800)
[root@server1 ~]# exportfs -rv
exporting *:/home/hadoop

datanode (the steps on 172.25.60.2 and 172.25.60.3 are identical)

The hadoop user must have the same uid and gid on every node; install nfs-utils:
[root@server2 ~]# useradd -u 800 hadoop
[root@server2 ~]# id hadoop
uid=800(hadoop) gid=800(hadoop) groups=800(hadoop)
[root@server2 ~]# yum install -y nfs-utils.x86_64
[root@server2 ~]# /etc/init.d/rpcbind start
Starting rpcbind:                                          [  OK  ]
[root@server2 ~]# showmount -e 172.25.60.1
Export list for 172.25.60.1:
/home/hadoop *
[root@server2 ~]# mount 172.25.60.1:/home/hadoop/ /home/hadoop/
[root@server2 ~]# ll -d /home/hadoop/
drwx------ 5 hadoop hadoop 4096 Jul 21 11:28 /home/hadoop/
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1041524  17120828   6% /
tmpfs                           510188      16    510172   1% /dev/shm
/dev/vda1                       495844   33457    436787   8% /boot
172.25.60.1:/home/hadoop/     19134336 2226816  15935616  13% /home/hadoop

Edit the configuration files

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc
[hadoop@server1 etc]$ vim hadoop/slaves
172.25.60.2
172.25.60.3
[hadoop@server1 etc]$ vim hadoop/hdfs-site.xml
Change the replication <value>1</value> to 2, one replica on each of the two datanodes.
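The replication factor is the dfs.replication property in hdfs-site.xml (shown in section 1); with two datanodes the block would read:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```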

Test the ssh service

[hadoop@server1 tmp]$ ssh server2
The authenticity of host 'server2 (172.25.60.2)' can't be established.
RSA key fingerprint is f6:e9:06:37:67:3c:42:3b:94:c1:b7:a7:31:1b:c4:2d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server2,172.25.60.2' (RSA) to the list of known hosts.
[hadoop@server2 ~]$ logout
Connection to server2 closed.
[hadoop@server1 tmp]$ ssh server3
The authenticity of host 'server3 (172.25.60.3)' can't be established.
RSA key fingerprint is f6:e9:06:37:67:3c:42:3b:94:c1:b7:a7:31:1b:c4:2d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server3,172.25.60.3' (RSA) to the list of known hosts.
[hadoop@server3 ~]$ logout
Connection to server3 closed.
[hadoop@server1 tmp]$ ssh 172.25.60.2
Last login: Sat Jul 21 11:44:01 2018 from server1
[hadoop@server2 ~]$ logout
Connection to 172.25.60.2 closed.
[hadoop@server1 tmp]$ ssh 172.25.60.3
Last login: Sat Jul 21 11:44:07 2018 from server1
[hadoop@server3 ~]$ logout
Connection to 172.25.60.3 closed.

Format the namenode

[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ ls /tmp/
hadoop-hadoop      yum.log
hsperfdata_hadoop  yum_save_tx-2018-07-20-21-44UB94wi.yumtx

Start DFS: the namenode and the datanodes now run on separate hosts

[hadoop@server1 hadoop]$ sbin/start-dfs.sh   ### start the DFS daemons
Starting namenodes on [server1]
server1: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-server1.out
172.25.60.2: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server2.out
172.25.60.3: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-server1.out

On the namenode:
[hadoop@server1 hadoop]$ jps
11609 Jps
11312 NameNode
11500 SecondaryNameNode
On a datanode:
[hadoop@server2 ~]$ jps
1261 Jps
1167 DataNode

Working with files

[hadoop@server1 hadoop]$ cd /tmp/
[hadoop@server1 tmp]$ ls
hadoop-hadoop
hsperfdata_hadoop
Jetty_0_0_0_0_50070_hdfs____w2cu08
Jetty_0_0_0_0_50090_secondary____y6aanv
Jetty_localhost_48651_datanode____.mi2vgr
yum.log
yum_save_tx-2018-07-20-21-44UB94wi.yumtx
Delete everything under /tmp (the root-owned yum files survive because /tmp has the sticky bit set):
[hadoop@server1 tmp]$ rm -fr *
rm: cannot remove `yum.log': Operation not permitted
rm: cannot remove `yum_save_tx-2018-07-20-21-44UB94wi.yumtx': Operation not permitted
Format again:
[hadoop@server1 hadoop]$ bin/hdfs namenode -format   
[hadoop@server1 hadoop]$ ls /tmp/
hadoop-hadoop      yum.log
hsperfdata_hadoop  yum_save_tx-2018-07-20-21-44UB94wi.yumtx
[hadoop@server1 hadoop]$ ls
bin  include  libexec      logs        README.txt  share
etc  lib      LICENSE.txt  NOTICE.txt  sbin
Start DFS and create the user directories:
[hadoop@server1 hadoop]$ sbin/start-dfs.sh 
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
Upload the contents of etc/hadoop/ as the input directory:
[hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/ input
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls input

Deleting a file and then re-uploading it can both be followed in the web UI.

4. Adding and removing nodes

Adding server4 online (live scale-out)

[root@server4 ~]# yum install -y nfs-utils
[root@server4 ~]# /etc/init.d/rpcbind start
Starting rpcbind:                                          [  OK  ]
[root@server4 ~]# useradd -u 800 hadoop
[root@server4 ~]# id hadoop
uid=800(hadoop) gid=800(hadoop) groups=800(hadoop)
[root@server4 sbin]# pwd
/home/hadoop/hadoop/sbin
[root@server4 sbin]# ./hadoop-daemon.sh start datanode
starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-root-datanode-server4.out
[root@server4 sbin]# su - hadoop 
[root@server4 ~]# vim hadoop/etc/hadoop/slaves
172.25.60.2
172.25.60.3
172.25.60.4

Check in the browser that server4 now shows up as a datanode.
Test the ssh service and start the datanode

[hadoop@server4 ~]$ ssh server4
[hadoop@server4 ~]$ logout
[hadoop@server4 ~]$ ssh 172.25.60.4
[hadoop@server4 ~]$ logout
[hadoop@server4 ~]$ cd hadoop
[hadoop@server4 ~]$ sbin/hadoop-daemon.sh start datanode
[hadoop@server4 ~]$ jps
1250 Jps
1177 DataNode

Removing server2 (decommissioning)

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim hdfs-site.xml 
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/home/hadoop/hadoop/etc/hadoop/hosts-exclude</value>
    </property>

[hadoop@server1 hadoop]$ vim hosts-exclude
172.25.60.2    ## IP of the node to remove

[hadoop@server1 hadoop]$ dd if=/dev/zero of=bigfile bs=1M count=200
[hadoop@server1 hadoop]$ ../../bin/hdfs dfs -put bigfile 
[hadoop@server1 hadoop]$ vim slaves
172.25.60.3
172.25.60.4

[hadoop@server1 hadoop]$ ../../bin/hdfs dfsadmin -refreshNodes ## refresh the node list
Refresh nodes successful
[hadoop@server1 hadoop]$ ../../bin/hdfs dfsadmin -report
Name: 172.25.60.2:50010 (server2)
Hostname: server2
Decommission Status : Decommission in progress
Configured Capacity: 19593555968 (18.25 GB)
DFS Used: 135581696 (129.30 MB)
Non DFS Used: 1954566144 (1.82 GB)
DFS Remaining: 17503408128 (16.30 GB)
DFS Used%: 0.69%
DFS Remaining%: 89.33%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Jul 21 14:23:43 CST 2018

Once the blocks have been copied to the remaining datanodes, a later report shows:
Name: 172.25.60.2:50010 (server2)
Hostname: server2
Decommission Status : Decommissioned
[hadoop@server2 hadoop]$ sbin/hadoop-daemon.sh stop datanode
stopping datanode
[hadoop@server2 hadoop]$ jps
2039 Jps

YARN mode

[hadoop@server1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@server1 hadoop]$ vim mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
[hadoop@server1 hadoop]$ vim yarn-site.xml
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
[hadoop@server1 hadoop]$ cd ..
[hadoop@server1 etc]$ cd ..
[hadoop@server1 hadoop]$ sbin/start-yarn.sh 
On the namenode:
[hadoop@server1 hadoop]$ jps
22720 Jps
11312 NameNode
11500 SecondaryNameNode
22461 ResourceManager
On the datanodes:
[hadoop@server3 ~]$ jps
14481 Jps
4153 DataNode
14318 NodeManager
[hadoop@server4 hadoop]$ jps
2355 Jps
2119 DataNode
2257 NodeManager

5. Setting up a ZooKeeper cluster

Clear the /tmp/ directory on every node first.
server5

[root@server5 ~]# yum install -y nfs-utils
[root@server5 ~]# /etc/init.d/rpcbind start
[root@server5 ~]# useradd -u 800 hadoop
[root@server5 ~]# id hadoop
uid=800(hadoop) gid=800(hadoop) groups=800(hadoop)
[root@server5 ~]# mount 172.25.60.1:/home/hadoop/ /home/hadoop/
[root@server5 ~]# df  
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1184068  16978284   7% /
tmpfs                           510188       0    510188   0% /dev/shm
/dev/vda1                       495844   33457    436787   8% /boot
172.25.60.1:/home/hadoop/     19134336 2297472  15864832  13% /home/hadoop

server1

Stop all services on server1:
[hadoop@server1 hadoop]$ sbin/stop-all.sh 
Verify passwordless ssh from server1 to server5:
[hadoop@server1 hadoop]$ ssh server5
[hadoop@server1 hadoop]$ logout
[hadoop@server1 hadoop]$ ssh 172.25.60.5
[hadoop@server1 hadoop]$ logout

Configure ZooKeeper

[hadoop@server1 ~]$ tar zxf zookeeper-3.4.9.tar.gz 
[hadoop@server1 ~]$ cd zookeeper-3.4.9
[hadoop@server1 zookeeper-3.4.9]$ ls
bin          dist-maven       LICENSE.txt           src
build.xml    docs             NOTICE.txt            zookeeper-3.4.9.jar
CHANGES.txt  ivysettings.xml  README_packaging.txt  zookeeper-3.4.9.jar.asc
conf         ivy.xml          README.txt            zookeeper-3.4.9.jar.md5
contrib      lib              recipes               zookeeper-3.4.9.jar.sha1
[hadoop@server1 zookeeper-3.4.9]$ cd conf/
[hadoop@server1 conf]$ ls
configuration.xsl  log4j.properties  zoo_sample.cfg
[hadoop@server1 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@server1 conf]$ vim zoo.cfg
server.1=172.25.60.2:2888:3888
server.2=172.25.60.3:2888:3888
server.3=172.25.60.4:2888:3888
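For context, zoo_sample.cfg ships with stock defaults, so after the edit the relevant part of zoo.cfg should look roughly like this (the values above the server.N lines are the sample defaults, not verified against this install):

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=172.25.60.2:2888:3888
server.2=172.25.60.3:2888:3888
server.3=172.25.60.4:2888:3888
```

The dataDir=/tmp/zookeeper default is why the myid files in the next step are created under /tmp/zookeeper.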

Configuration on server2, server3 and server4
Each myid must be different and must match the server.N entries in the configuration above.

[hadoop@server2 zookeeper-3.4.9]$ cd /tmp/
[hadoop@server2 tmp]$ mkdir zookeeper
[hadoop@server2 tmp]$ cd zookeeper/
[hadoop@server2 zookeeper]$ vim myid
1

[hadoop@server3 tmp]$ mkdir zookeeper
[hadoop@server3 tmp]$ cd zookeeper/
[hadoop@server3 zookeeper]$ echo 2 > myid
[hadoop@server3 zookeeper]$ cat myid 
2

[hadoop@server4 tmp]$ mkdir zookeeper
[hadoop@server4 tmp]$ cd zookeeper/
[hadoop@server4 zookeeper]$ echo 3 > myid
[hadoop@server4 zookeeper]$ cat myid 
3
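Each node reads its identity from dataDir/myid. A side-effect-free demo of the file each node needs, using a temporary directory instead of the real /tmp/zookeeper:

```shell
datadir=$(mktemp -d)          # stand-in for /tmp/zookeeper (the dataDir)
echo 1 > "$datadir/myid"      # would be 1 on server2, 2 on server3, 3 on server4
cat "$datadir/myid"
```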

Start the ZooKeeper cluster on the three datanode hosts

[hadoop@server2 ~]$ cd zookeeper-3.4.9
[hadoop@server2 zookeeper-3.4.9]$ cd bin/
[hadoop@server2 bin]$ ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[hadoop@server3 tmp]$ cd /home/hadoop/zookeeper-3.4.9/bin/
[hadoop@server3 bin]$ ./zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[hadoop@server4 zookeeper]$ cd /home/hadoop/zookeeper-3.4.9/bin/
[hadoop@server4 bin]$ ./zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Check the status of every node

[hadoop@server2 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower

[hadoop@server3 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower

[hadoop@server4 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: leader

Testing on the leader (server4)
[hadoop@server4 zookeeper-3.4.9]$ bin/zkCli.sh
Connecting to localhost:2181
WATCHER::

WatchedEvent state:SyncConnected type:None path:null

[zk: localhost:2181(CONNECTED) 0] ls
[zk: localhost:2181(CONNECTED) 1] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 2] ls /zookeeper
[quota]
[zk: localhost:2181(CONNECTED) 3] ls /zookeeper/quota
[]
[zk: localhost:2181(CONNECTED) 5] quit 
Quitting...