ZooKeeper Installation:
Prerequisites: Java, JDK 1.8 installed
ZooKeeper version: 3.4.10
Number of nodes: an odd number, at most 255, usually 3 to 11 in practice
This install uses 3 nodes: hadoop01, hadoop02, hadoop03
Steps:
1) Upload the tarball
2) Extract it
tar -xvzf zookeeper-3.4.10.tar.gz
3) Configure the environment variables
vi /etc/profile
export JAVA_HOME=/home/hadoop/apps/jdk1.8.0_73
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.7.6
export ZOOKEEPER_HOME=/home/hadoop/apps/zookeeper-3.4.10
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin
source /etc/profile
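As a quick check (a sketch, not part of the original steps), you can confirm the exports took effect in the current shell after sourcing /etc/profile:

```shell
# Print each exported variable; an empty value means the export
# did not take effect in this shell.
for v in JAVA_HOME HADOOP_HOME ZOOKEEPER_HOME; do
  eval "printf '%s=%s\n' \"$v\" \"\$$v\""
done
```

If ZOOKEEPER_HOME prints correctly, `which zkServer.sh` should also resolve to its bin directory.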
4) Edit the zk configuration file
/home/hadoop/apps/zookeeper-3.4.10/conf
mv zoo_sample.cfg zoo.cfg
# The number of milliseconds of each tick (the heartbeat interval)
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take (ticks allowed for the initial sync)
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# where zk stores its data files (its core data): be sure to change this
dataDir=/home/hadoop/data/zookeeperdata
# the port at which the clients will connect
# (the default is 2181)
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# Append at the end of zoo.cfg: list every node in the ensemble.
# Every node in the zk cluster has its own unique id; ids must not
# repeat and are assigned by hand. The id is what the nodes use when
# communicating and electing a leader; the valid range is 1-255.
# Format: server.id=hostname:2888:3888
#   2888: quorum (heartbeat) port
#   3888: leader-election port
# One line per node; the mapping used here:
#   hadoop01 -> 1
#   hadoop02 -> 2
#   hadoop03 -> 3
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
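The three server lines above follow a regular pattern, so they can also be generated with a small loop (a sketch; the hostname pattern hadoop0&lt;id&gt; matches the mapping listed above):

```shell
# Emit one server.N line per node id; redirect the output into
# zoo.cfg on a real node instead of typing the lines by hand.
for i in 1 2 3; do
  echo "server.$i=hadoop0$i:2888:3888"
done
```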
5) Create the id file on each node
Path: the dataDir configured above, /home/hadoop/data/zookeeperdata
File name: myid
File content: that node's own id, with no extra spaces or blank lines
mkdir -p /home/hadoop/data/zookeeperdata
cd /home/hadoop/data/zookeeperdata
vi myid
1        (on hadoop01; use 2 on hadoop02 and 3 on hadoop03)
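Since a stray space or blank line in myid is a common cause of startup failure, here is an optional checker (a sketch, not in the original steps; the `check_myid` helper name is made up):

```shell
# Validate a myid file: it must hold a single id in the 1-255 range
# with no surrounding whitespace.
check_myid() {
  raw=$(cat "$1" 2>/dev/null)   # command substitution drops the trailing newline
  id=$(printf '%s' "$raw" | tr -d '[:space:]')
  [ "$raw" != "$id" ] && echo "warning: extra whitespace in $1"
  if [ -n "$id" ] && [ "$id" -ge 1 ] 2>/dev/null && [ "$id" -le 255 ] 2>/dev/null; then
    echo "myid ok: $id"
  else
    echo "bad myid: '$id'"
  fi
}

check_myid /home/hadoop/data/zookeeperdata/myid
```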
6) Send the installation to the other nodes (from /home/hadoop/apps on hadoop01):
scp -r zookeeper-3.4.10 hadoop02:/home/hadoop/apps/
scp -r zookeeper-3.4.10 hadoop03:/home/hadoop/apps/
sudo scp /etc/profile hadoop02:/etc/
sudo scp /etc/profile hadoop03:/etc/
Then run source /etc/profile on hadoop02 and hadoop03 as well.
7) Start
First stop the firewall: sudo service iptables stop
Start ZooKeeper on each of the three nodes in turn:
hadoop01: zkServer.sh start
jps
6332 QuorumPeerMain
The QuorumPeerMain process showing up in jps alone does not mean the start succeeded.
Check ZooKeeper's state:
zkServer.sh status
The following output is not a healthy state (it is what you see while only one node is up and no quorum exists yet):
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
hadoop02: zkServer.sh start
Healthy state (zkServer.sh status now reports a role):
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
hadoop03: zkServer.sh start
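To avoid logging in to each machine by hand, the start and status commands can be driven from one node over ssh. This is a sketch that assumes passwordless ssh between the nodes; it defaults to a dry run that only prints the commands, so set ZK_DRY_RUN=0 on the real cluster (the variable and helper names are made up):

```shell
# Run a command, or just print it when in dry-run mode (the default).
run() {
  if [ "${ZK_DRY_RUN:-1}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

for host in hadoop01 hadoop02 hadoop03; do
  run ssh "$host" zkServer.sh start      # start each quorum member
done
for host in hadoop01 hadoop02 hadoop03; do
  run ssh "$host" zkServer.sh status     # then check each node's role
done
```

Once the quorum is up you can also connect with the bundled CLI, e.g. `zkCli.sh -server hadoop01:2181`.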
How a fresh ZooKeeper install elects its leader:
Start hadoop01 first -> id 1 -> follower
Then start hadoop02  -> id 2 -> leader
Finally hadoop03     -> id 3 -> follower
Election process in a brand-new cluster, start order 1 -> 2 -> 3:
1) hadoop01 (id=1) starts and looks for the cluster's leader. Finding
none, it calls an election and, by default, votes for itself; in a
fresh cluster with no data, the server id is the basis for the vote.
2) hadoop02 (id=2) starts, also looks for a leader, finds none, and
votes for itself. hadoop01 re-votes: by rule the smaller id yields and
switches its vote to the larger id, so hadoop01's vote goes to hadoop02.
   hadoop01: 0 votes
   hadoop02: 2 votes, a majority of the 3-node cluster, so the election
   is decided.
hadoop02 is now the leader; hadoop01 switches its state to follower.
3) hadoop03 starts, looks for the leader, finds one already elected,
and simply switches its own state to follower.
Other start orders: 1 -> 3 -> 2, or 3 -> 1 -> 2. In both cases the first
two nodes up (ids 1 and 3) form the majority, and the larger id wins,
so hadoop03 becomes the leader.
If the start commands are sent to all nodes at once, the leader ends up
being hadoop02 or hadoop03.
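The rule described above, where every running node votes for the largest id it has seen and the first id with more than half the votes wins, can be simulated for any start order (a sketch of the fresh-cluster case only; real elections also compare transaction ids):

```shell
# Simulate fresh-cluster leader election for a given start order.
elect() {
  total=3
  started=0
  max=0
  for id in "$@"; do
    started=$((started + 1))
    [ "$id" -gt "$max" ] && max=$id
    # all started nodes now vote for $max; check for a majority
    if [ $((started * 2)) -gt "$total" ]; then
      echo "leader: server.$max (elected when node $id started)"
      return 0
    fi
  done
}

elect 1 2 3   # prints "leader: server.2 (elected when node 2 started)"
elect 1 3 2   # prints "leader: server.3 (elected when node 3 started)"
```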