NSD ARCHITECTURE DAY05
- Case 1: Build a ZooKeeper cluster
- Case 2: Test remote management and high availability of the cluster
- Case 3: Set up 3 Kafka brokers on the node hosts
- Case 4: Prepare the lab environment
- Case 5: Configure NameNode and ResourceManager high availability
- Case 6: Start the services and verify high availability
1 Case 1: Build a ZooKeeper cluster
1.1 Problem
This case requires:
- Build a ZooKeeper cluster with
- 1 leader
- 2 followers
- 1 observer
1.2 Steps
Follow the steps below to implement this case.
Step 1: Install ZooKeeper
1) Edit /etc/hosts so that all cluster hosts can ping each other by name (configure on hadoop1, then sync to node-0001, node-0002, node-0003)
[root@hadoop1 hadoop]# vim /etc/hosts
192.168.1.50    hadoop1
192.168.1.51    node-0001
192.168.1.52    node-0002
192.168.1.53    node-0003
192.168.1.56    newnode
[root@hadoop1 hadoop]# for i in {51..53}
do
scp /etc/hosts 192.168.1.$i:/etc/
done                # sync /etc/hosts to the node hosts
hosts 100% 253 639.2KB/s 00:00
hosts 100% 253 497.7KB/s 00:00
hosts 100% 253 662.2KB/s 00:00
2) Install java-1.8.0-openjdk-devel. It is already installed on the existing Hadoop hosts, so it is skipped here; install it on any new machine.
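On a fresh machine the JDK can be installed in one loop (a sketch, assuming the hosts use yum, have a package repository configured, and are reachable over passwordless ssh as set up earlier in the course):

```shell
# Install the JDK on every cluster host that still needs it
# (host names are the ones defined in /etc/hosts above)
for h in hadoop1 node-0001 node-0002 node-0003; do
    ssh "$h" yum -y install java-1.8.0-openjdk-devel
done
```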
3) Unpack ZooKeeper and move it to /usr/local/zookeeper
[root@hadoop1 ~]# tar -xf zookeeper-3.4.13.tar.gz
[root@hadoop1 ~]# mv zookeeper-3.4.13 /usr/local/zookeeper
4) Rename the sample configuration file and append the server list at the end
[root@hadoop1 ~]# cd /usr/local/zookeeper/conf/
[root@hadoop1 conf]# ls
configuration.xsl  log4j.properties  zoo_sample.cfg
[root@hadoop1 conf]# mv zoo_sample.cfg zoo.cfg
[root@hadoop1 conf]# chown root.root zoo.cfg
[root@hadoop1 conf]# vim zoo.cfg
server.1=node-0001:2888:3888
server.2=node-0002:2888:3888
server.3=node-0003:2888:3888
server.4=hadoop1:2888:3888:observer
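After the edit, the effective zoo.cfg looks roughly like this. The first five settings are the zoo_sample.cfg defaults and are left untouched; only the server.N lines are added by hand. Note that dataDir is where the myid file in step 7 must be created:

```ini
tickTime=2000           ; base time unit in milliseconds
initLimit=10            ; ticks a follower may take to connect and sync
syncLimit=5             ; ticks a follower may lag before being dropped
dataDir=/tmp/zookeeper  ; data directory; the myid file lives here
clientPort=2181         ; port clients (and the stat command) connect to
server.1=node-0001:2888:3888
server.2=node-0002:2888:3888
server.3=node-0003:2888:3888
server.4=hadoop1:2888:3888:observer
```

Port 2888 is used for follower-to-leader data traffic and 3888 for leader election; the :observer suffix marks hadoop1 as a non-voting member.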
5) Copy /usr/local/zookeeper to the other cluster hosts
[root@hadoop1 conf]# for i in {51..53}; do
rsync -aSH --delete /usr/local/zookeeper/ 192.168.1.$i:/usr/local/zookeeper -e 'ssh' &
done
[4] 4956
[5] 4957
[6] 4958
6) Create the directory /tmp/zookeeper; this is required on every host
[root@hadoop1 conf]# mkdir /tmp/zookeeper
[root@hadoop1 conf]# ssh node-0001 mkdir /tmp/zookeeper
[root@hadoop1 conf]# ssh node-0002 mkdir /tmp/zookeeper
[root@hadoop1 conf]# ssh node-0003 mkdir /tmp/zookeeper
7) Create the myid file on each host; the id must match the server.(id) number assigned to that host in the configuration file
[root@hadoop1 conf]# echo 4 >/tmp/zookeeper/myid
[root@hadoop1 conf]# ssh node-0001 'echo 1 >/tmp/zookeeper/myid'
[root@hadoop1 conf]# ssh node-0002 'echo 2 >/tmp/zookeeper/myid'
[root@hadoop1 conf]# ssh node-0003 'echo 3 >/tmp/zookeeper/myid'
8) Start the service. With only one server up, the status cannot be queried; start the whole cluster first, then check. The service must be started by hand on every host (hadoop1 shown as the example).
[root@hadoop1 conf]# /usr/local/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Note: querying the status right after starting the first server reports an error. The cluster only forms once more than half of the servers are running; check again after that and the query succeeds.
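The "more than half" rule can be checked with a one-line calculation (a sketch; the observer does not vote, so this ensemble has 3 voting members):

```shell
# ZooKeeper needs a strict majority of the voting members (observers excluded)
voters=3                        # node-0001, node-0002, node-0003
quorum=$(( voters / 2 + 1 ))    # integer division: floor(3/2) + 1 = 2
echo "servers required for a working cluster: ${quorum}"
```

So this cluster answers status queries once any 2 of the 3 voting servers are up; losing 2 of them makes the cluster unavailable.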
9) Check the status
[root@hadoop1 conf]# /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: observer
[root@hadoop1 conf]# /usr/local/zookeeper/bin/zkServer.sh stop    # stop one server, then check the roles of the other servers
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
2 Case 2: Test remote management and high availability of the cluster
2.1 Problem
This case requires:
- Test remote management and high availability of the cluster
2.2 Steps
Follow the steps below to implement this case.
Step 1: Test remote management and high availability of the cluster
[root@hadoop1 conf]# socat - TCP:node-0001:2181
stat
... ...
Outstanding: 0
Zxid: 0x0
Mode: follower
Node count: 4
[root@hadoop1 conf]# vim api.sh
#!/bin/bash
function getstatus(){
    exec 9<>/dev/tcp/$1/2181 2>/dev/null
    echo stat >&9
    MODE=$(cat <&9 |grep -Po "(?<=Mode:).*")
    exec 9<&-
    echo ${MODE:-NULL}
}
for i in node-000{1..3} hadoop1; do
    echo -ne "${i}\t"
    getstatus ${i}
done
[root@hadoop1 conf]# chmod 755 api.sh
[root@hadoop1 conf]# ./api.sh
node-0001    follower
node-0002    leader
node
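The same role check can be done without a script file by looping socat over the hosts (a sketch; it assumes the stat four-letter command is enabled on port 2181, which is the default in ZooKeeper 3.4):

```shell
# Ask every server for its role via the "stat" four-letter command on port 2181
for h in node-0001 node-0002 node-0003 hadoop1; do
    echo -ne "${h}\t"
    echo stat | socat - TCP:${h}:2181 | awk '/^Mode:/{print $2}'
done
```

Stopping the current leader and re-running either this loop or api.sh shows a new leader elected among the remaining followers, which is the high-availability behavior this case verifies.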