HDFS and YARN HA Deployment

1. QJM analysis

At any given moment only one NameNode (the active one) may write; the standby NameNode only reads.
JournalNodes are deployed in odd numbers, 2n+1. The active NameNode writes the edit log to the JournalNodes, and a write succeeds only if a majority of them, i.e. at least n+1 of the 2n+1, are healthy and acknowledge it.
With 3 JournalNodes: if all three are healthy, the edits are written to all three; if one is down, the majority requirement (2 out of 3) is still met, so the edits are written to the remaining two.
Don't deploy too many JournalNodes or ZooKeeper nodes: every write has to reach a majority, so writes take longer and voting/election takes longer and longer, which slows down state switching (failover).
The standby NameNode reads the edits back from the JournalNodes, picking one of them to read from.

A question worth thinking about: the first machine is active and the second is standby. If the first machine goes down, the second should take over immediately. What if it doesn't? What could the reason be, and what do you do? A likely cause is ZooKeeper: if ZooKeeper is overloaded ("too heavy"), the failover cannot complete and both NameNodes end up in standby. That is why, in production, ZooKeeper is usually deployed on its own dedicated machines.
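
To troubleshoot a situation like this, you can query each NameNode's HA state with hdfs haadmin. A minimal sketch, assuming the nameservice and NameNode IDs configured later in this post (ruozeclusterg6, nn1/nn2):

# Check which NameNode is active and which is standby
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# If automatic failover did not happen, check the ZKFC logs first; a manual
# failover can be forced as a last resort (with automatic failover enabled
# this normally requires --forcemanual):
hdfs haadmin -failover nn1 nn2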

2. SSH mutual trust and the hosts file

Buy three cloud hosts on Alibaba Cloud (2 vCPU, 4 GiB, I/O optimized, 1 Mbps bandwidth), billed pay-as-you-go.
Upload the three installation packages to hadoop001: hadoop, jdk, zookeeper.

Create a hadoop user on every machine: useradd hadoop
Switch to the hadoop user on all three machines: su - hadoop
Create the app directory: mkdir app
On the first machine, switch back to root and move the uploaded packages into the hadoop user's app directory:
[root@hadoop001 ~]# mv * /home/hadoop/app/

Configure SSH mutual trust between the hadoop users on the machines, for password-less access.
Delete the current user's default .ssh directory:
[hadoop@hadoop001 ~]$ rm -rf .ssh
Then run [hadoop@hadoop001 ~]$ ssh-keygen and press Enter three times (do this on all three machines):

[hadoop@hadoop001 ~]$ rm -rf .ssh
[hadoop@hadoop001 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
25:d2:7b:db:11:54:c9:76:79:22:8c:30:b1:de:65:8e hadoop@hadoop001
The key's randomart image is:
+--[ RSA 2048]----+
|        +o o.o...|
|       . o..o =.o|
|      . + . +o o.|
|       o = = .   |
|        S E o    |
|         . o .   |
|          . .    |
|                 |
|                 |
+-----------------+
# Everything above is done on all three machines; what follows is done on hadoop001 only
[hadoop@hadoop001 ~]$ ls -a
.  ..  app  .bash_history  .bash_logout  .bash_profile  .bashrc  .ssh
[hadoop@hadoop001 ~]$ cd .ssh
[hadoop@hadoop001 .ssh]$ ll
total 8
-rw------- 1 hadoop hadoop 1675 Apr  8 20:17 id_rsa
-rw-r--r-- 1 hadoop hadoop  398 Apr  8 20:17 id_rsa.pub
[hadoop@hadoop001 .ssh]$ cat id_rsa.pub >> authorized_keys

# This puts hadoop001's own public key into the trusted file; next, send the public key files of hadoop002 and hadoop003 over to hadoop001

[hadoop@hadoop002 ~]$ cd .ssh
[hadoop@hadoop002 .ssh]$ ls
id_rsa  id_rsa.pub  known_hosts
[hadoop@hadoop002 .ssh]$ scp id_rsa.pub root@172.19.12.134:/home/hadoop/.ssh/id_rsa2
root@172.19.12.134's password: 
Permission denied, please try again.
root@172.19.12.134's password: 
id_rsa.pub                                                                                           100%  398     0.4KB/s   00:00    
[hadoop@hadoop002 .ssh]$ 

scp id_rsa.pub root@172.19.12.134:/home/hadoop/.ssh/id_rsa2
172.19.12.134 is hadoop001's internal IP.
Do the same on hadoop003 (copying its key over as id_rsa3).

[hadoop@hadoop001 .ssh]$ ll
total 20
-rw-rw-r-- 1 hadoop hadoop  398 Apr  8 20:56 authorized_keys
-rw------- 1 hadoop hadoop 1675 Apr  8 20:17 id_rsa
-rw-r--r-- 1 root   root    398 Apr  8 20:55 id_rsa2
-rw-r--r-- 1 root   root    398 Apr  8 20:55 id_rsa3
-rw-r--r-- 1 hadoop hadoop  398 Apr  8 20:17 id_rsa.pub
[hadoop@hadoop001 .ssh]$ cat id_rsa2 >> authorized_keys
[hadoop@hadoop001 .ssh]$ cat id_rsa3 >> authorized_keys
[hadoop@hadoop001 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAxauTWIstggjZ5tlJ6u9xu5h7ih7v1j/Z6geSapmu/47XFbXmUvAxY0XpPOzFWH0fWM0YCIHZxGNJGomM019diCWpgvo7Pt9/19m6VWR+Ih+Bjnm3LU7GssnQjjcX5Rm6tR2hha0XN+ejfJiKsKP4F+mDdpCVVbjOr74633veDhHrQY2Czx0AwKdQD1vYsfjaua88kv6G57RKlK/zQCmh+d2hMt8md5muz3kna7s6chxKf0xZxULJKJJvQierFQhpV0/a/DZ6by/cbAgRAQG7eLQ/417DTLh7qyt5+/EFs3I0HKr+JTUevc+yuvLShLafo5yruiWroNd8wO+DPqN2zw== hadoop@hadoop001
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAuG7Fvbn8U/uQhrK9GhNUgeLkxtl3JSHS6BCHFCc/1rD3D0v2/uqEzGOhU6+rL5GFldbkYn+fH9eA4c+jp+kZdhmL5GH4vVNhO/7NgJpqHjvSbsWBAWOI4c0e9t/zA1T3+yv9RFd6mEi9MeR0qfQtN3QiqXj26UBGQLnamLstJ3faTGEev14eCbCakHS7i96W9Qsj8HK2fcuL3+3sNP2lsz6P5NacJ5p3XyHL4WNK49J0FHgJT3Tvh3JouQ/ncRdLIcQafeouEGzYe/Jc7Si8ezCRt5imE8xMjUH0xIVmGBFjy/ATElumZjhoHO6pW9wuhq65W9RmIRhzk/6gbM8icQ== hadoop@hadoop002
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAuqufGcqM2ZHYRsWgTsgjxzXO/GJplFIYI2s5nOoO4EPnAmFRVc0WnbagQjn7ITFhNItpdsoFe/nsoffyfZahVgaY/HrKhreCgCAKzQ3xBUDi9lV2rxjU0Lpq8R7D8Diz3d8CbXdgDS+OEW0g1rtPIyhWGJcx7he/x/qLSBlDFUGccqwFGIqGs8n9ZiVg8HDiPR/8jE+3ACptQSduFgWAqcA/Uv9l4Vmn6GtM6QMyqX8ccNQbm5APJG/kNBhwPkxtnxHwnHp3cP9wxwp/RTWATVP1PyaeV6xEUGgLG7VlC/uogEislt7aDzPm7i56RWLWMlpYDM0yE51aHiKrYObNpQ== hadoop@hadoop003
[hadoop@hadoop001 .ssh]$ ping hadoop002
ping: unknown host hadoop002
[root@hadoop001 ~]# vi /etc/hosts
127.0.0.1       localhost       localhost.localdomain   localhost4      localhost4.localdomain4
::1     localhost       localhost.localdomain   localhost6      localhost6.localdomain6
172.19.12.134   hadoop001       hadoop001
172.19.12.133   hadoop002       hadoop002
172.19.12.135   hadoop003       hadoop003

Configure the hosts file on the other two machines in the same way.

[root@hadoop001 .ssh]# su - hadoop
[hadoop@hadoop001 .ssh]$ ll
total 20
-rw-rw-r-- 1 hadoop hadoop 1194 Apr  8 20:57 authorized_keys
-rw------- 1 hadoop hadoop 1675 Apr  8 20:17 id_rsa
-rw-r--r-- 1 root   root    398 Apr  8 20:55 id_rsa2
-rw-r--r-- 1 root   root    398 Apr  8 20:55 id_rsa3
-rw-r--r-- 1 hadoop hadoop  398 Apr  8 20:17 id_rsa.pub
[hadoop@hadoop001 .ssh]$ scp authorized_keys root@hadoop002:/home/hadoop/.ssh/
The authenticity of host 'hadoop002 (172.19.12.133)' can't be established.
RSA key fingerprint is 80:de:c4:fd:99:fa:f5:d5:98:c6:cb:98:f0:d1:77:5c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop002,172.19.12.133' (RSA) to the list of known hosts.
root@hadoop002's password: 
authorized_keys                                                                                      100% 1194     1.2KB/s   00:00    
[hadoop@hadoop001 .ssh]$ scp authorized_keys root@hadoop003:/home/hadoop/.ssh/
The authenticity of host 'hadoop003 (172.19.12.135)' can't be established.
RSA key fingerprint is 1b:bc:e6:0e:32:c7:f4:3e:7a:60:53:9c:8b:9b:74:69.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop003,172.19.12.135' (RSA) to the list of known hosts.
root@hadoop003's password: 
Permission denied, please try again.
root@hadoop003's password: 
authorized_keys                                                                                      100% 1194     1.2KB/s   00:00    
[hadoop@hadoop001 .ssh]$ 
[hadoop@hadoop001 .ssh]$ ssh hadoop001 date
The authenticity of host 'hadoop001 (172.19.12.134)' can't be established.
RSA key fingerprint is 9b:07:17:ac:1a:50:b9:22:76:75:2a:6d:ea:3e:6d:da.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop001,172.19.12.134' (RSA) to the list of known hosts.
hadoop@hadoop001's password: 
Permission denied, please try again.
hadoop@hadoop001's password: 

[hadoop@hadoop001 .ssh]$ chmod 600 authorized_keys 
[hadoop@hadoop001 .ssh]$ ssh hadoop001 date
Mon Apr  8 21:15:22 CST 2019
[hadoop@hadoop001 .ssh]$ ssh hadoop002 date
Mon Apr  8 21:16:39 CST 2019
[hadoop@hadoop001 .ssh]$ ssh hadoop003 date
Mon Apr  8 21:16:44 CST 2019
[hadoop@hadoop001 .ssh]$ 

Now on hadoop002:

[root@hadoop002 ~]# su - hadoop
[hadoop@hadoop002 ~]$ ssh hadoop001 date
The authenticity of host 'hadoop001 (172.19.12.134)' can't be established.
RSA key fingerprint is 9b:07:17:ac:1a:50:b9:22:76:75:2a:6d:ea:3e:6d:da.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop001' (RSA) to the list of known hosts.
Mon Apr  8 21:18:44 CST 2019
[hadoop@hadoop002 ~]$ ssh hadoop002 date
The authenticity of host 'hadoop002 (172.19.12.133)' can't be established.
RSA key fingerprint is 80:de:c4:fd:99:fa:f5:d5:98:c6:cb:98:f0:d1:77:5c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop002' (RSA) to the list of known hosts.
Mon Apr  8 21:18:53 CST 2019
[hadoop@hadoop002 ~]$ ssh hadoop003 date
The authenticity of host 'hadoop003 (172.19.12.135)' can't be established.
RSA key fingerprint is 1b:bc:e6:0e:32:c7:f4:3e:7a:60:53:9c:8b:9b:74:69.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop003,172.19.12.135' (RSA) to the list of known hosts.
Mon Apr  8 21:18:59 CST 2019
[hadoop@hadoop002 ~]$ ssh hadoop003 date
Mon Apr  8 21:19:02 CST 2019
[hadoop@hadoop002 ~]$ 

For root this doesn't matter, but for non-root users authorized_keys must have 600 permissions.

[root@hadoop001 .ssh]# ll
total 24
-rw------- 1 hadoop hadoop 1194 Apr  8 20:57 authorized_keys
-rw------- 1 hadoop hadoop 1675 Apr  8 20:17 id_rsa
-rw-r--r-- 1 root   root    398 Apr  8 20:55 id_rsa2
-rw-r--r-- 1 root   root    398 Apr  8 20:55 id_rsa3
-rw-r--r-- 1 hadoop hadoop  398 Apr  8 20:17 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop 1215 Apr  8 21:15 known_hosts
[root@hadoop001 .ssh]# 


[root@hadoop002 .ssh]# ll
total 16
-rw-r--r-- 1 root   root   1194 Apr  8 21:10 authorized_keys
-rw------- 1 hadoop hadoop 1675 Apr  8 20:17 id_rsa
-rw-r--r-- 1 hadoop hadoop  398 Apr  8 20:17 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop 1977 Apr  8 21:18 known_hosts
[root@hadoop002 .ssh]# 

Summary:
The purpose of the steps above is to set up SSH mutual trust between the hadoop users on the machines, so you can reach any of them remotely without typing a password. Pick the first machine as the hub, copy the other machines' public key files to it, append all three public keys into the trust file authorized_keys, and then push that file back out to the remaining machines. Then run something like ssh hadoop003 date to print a date and trigger the first trusted connection; the first time you have to type yes, after which a record is written to .ssh/known_hosts. There is a pitfall here: if a host's key changes, reconnecting to it will fail. In that case, delete only that host's line from known_hosts and connect again. Remove only that one line.
In production, never empty the whole known_hosts file; doing so breaks the trusted connections to the other machines as well.
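
If a host key does change, you don't have to edit known_hosts by hand; a small sketch of removing just that one entry, using the hostnames from above:

# Remove only hadoop003's entry from ~/.ssh/known_hosts
ssh-keygen -R hadoop003
# Reconnect to record the new key (answer yes once)
ssh hadoop003 date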

3. JDK deployment
[hadoop@hadoop001 app]$ ls
hadoop-2.6.0-cdh5.7.0.tar.gz  jdk-8u45-linux-x64.gz  zookeeper-3.4.6.tar.gz
[hadoop@hadoop001 app]$ scp * hadoop002:/home/hadoop/app/
hadoop-2.6.0-cdh5.7.0.tar.gz                                                                         100%  297MB  49.5MB/s   00:06    
jdk-8u45-linux-x64.gz                                                                                100%  165MB  55.1MB/s   00:03    
zookeeper-3.4.6.tar.gz                                                                               100%   17MB  16.9MB/s   00:01    
[hadoop@hadoop001 app]$ scp * hadoop003:/home/hadoop/app/
hadoop-2.6.0-cdh5.7.0.tar.gz                                                                         100%  297MB  49.5MB/s   00:06    
jdk-8u45-linux-x64.gz                                                                                100%  165MB  55.1MB/s   00:03    
zookeeper-3.4.6.tar.gz                                                                               100%   17MB  16.9MB/s   00:00    
[hadoop@hadoop001 app]$ 

For big-data work, the JDK is conventionally installed under /usr/java/ (mkdir /usr/java/).

[root@hadoop001 ~]# mkdir /usr/java/
[root@hadoop001 ~]# tar -xzvf /home/hadoop/app/jdk-8u45-linux-x64.gz -C /usr/java/
[root@hadoop001 ~]# cd /usr/java/
[root@hadoop001 java]# ll
total 4
drwxr-xr-x 8 uucp 143 4096 Apr 11  2015 jdk1.8.0_45
[root@hadoop001 java]# chown -R root:root /usr/java/
[root@hadoop001 java]# ll
total 4
drwxr-xr-x 8 root root 4096 Apr 11  2015 jdk1.8.0_45
[root@hadoop001 java]# 

Do the same on the other two machines, and note that this is done as root.
After extraction the JDK directory is owned by uucp, so change the ownership.

Configure the environment variables.
Edit the global profile: vi /etc/profile
Append the following lines at the end:

#env
export JAVA_HOME=/usr/java/jdk1.8.0_45
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Save the file.

Run source /etc/profile to apply it, then check with which (see the quick check below).
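
A quick verification that the JDK is picked up from the new PATH (paths as configured above):

source /etc/profile
which java        # should print /usr/java/jdk1.8.0_45/bin/java
java -version     # should report java version "1.8.0_45"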

4. Disable the firewall

Turn off the firewall.
Run: service iptables stop
Verify: service iptables status
Check the current state first:

[root@hadoop001 java]# service iptables status
iptables: Firewall is not running.
[root@hadoop001 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[root@hadoop001 ~]# iptables -F
[root@hadoop001 ~]#
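
These hosts still use the iptables service; if yours run a systemd-based release (e.g. CentOS 7+), the equivalent would roughly be:

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
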
5. ZooKeeper deployment and troubleshooting

Do this on every machine.
Extract the tarball:

[hadoop@hadoop001 app]$ tar -xzvf zookeeper-3.4.6.tar.gz
[hadoop@hadoop001 app]$ ll
total 490792
-rw-r--r--  1 root   root   311585484 Mar 30 09:37 hadoop-2.6.0-cdh5.7.0.tar.gz
-rw-r--r--  1 root   root   173271626 Mar 28 23:32 jdk-8u45-linux-x64.gz
drwxr-xr-x 10 hadoop hadoop      4096 Feb 20  2014 zookeeper-3.4.6
-rw-r--r--  1 root   root    17699306 Mar 28 22:51 zookeeper-3.4.6.tar.gz
[hadoop@hadoop001 app]$ 

Create a symlink:

[hadoop@hadoop001 app]$ ln -s /home/hadoop/app/zookeeper-3.4.6 /home/hadoop/app/zookeeper
[hadoop@hadoop001 app]$ cd zookeeper
[hadoop@hadoop001 zookeeper]$ ll
total 1552
drwxr-xr-x  2 hadoop hadoop    4096 Feb 20  2014 bin
-rw-rw-r--  1 hadoop hadoop   82446 Feb 20  2014 build.xml
-rw-rw-r--  1 hadoop hadoop   80776 Feb 20  2014 CHANGES.txt
drwxr-xr-x  2 hadoop hadoop    4096 Feb 20  2014 conf
drwxr-xr-x 10 hadoop hadoop    4096 Feb 20  2014 contrib
drwxr-xr-x  2 hadoop hadoop    4096 Feb 20  2014 dist-maven
drwxr-xr-x  6 hadoop hadoop    4096 Feb 20  2014 docs
-rw-rw-r--  1 hadoop hadoop    1953 Feb 20  2014 ivysettings.xml
-rw-rw-r--  1 hadoop hadoop    3375 Feb 20  2014 ivy.xml
drwxr-xr-x  4 hadoop hadoop    4096 Feb 20  2014 lib
-rw-rw-r--  1 hadoop hadoop   11358 Feb 20  2014 LICENSE.txt
-rw-rw-r--  1 hadoop hadoop     170 Feb 20  2014 NOTICE.txt
-rw-rw-r--  1 hadoop hadoop    1770 Feb 20  2014 README_packaging.txt
-rw-rw-r--  1 hadoop hadoop    1585 Feb 20  2014 README.txt
drwxr-xr-x  5 hadoop hadoop    4096 Feb 20  2014 recipes
drwxr-xr-x  8 hadoop hadoop    4096 Feb 20  2014 src
-rw-rw-r--  1 hadoop hadoop 1340305 Feb 20  2014 zookeeper-3.4.6.jar
-rw-rw-r--  1 hadoop hadoop     836 Feb 20  2014 zookeeper-3.4.6.jar.asc
-rw-rw-r--  1 hadoop hadoop      33 Feb 20  2014 zookeeper-3.4.6.jar.md5
-rw-rw-r--  1 hadoop hadoop      41 Feb 20  2014 zookeeper-3.4.6.jar.sha1
[hadoop@hadoop001 zookeeper]$ cd conf/
[hadoop@hadoop001 conf]$ ll
total 12
-rw-rw-r-- 1 hadoop hadoop  535 Feb 20  2014 configuration.xsl
-rw-rw-r-- 1 hadoop hadoop 2161 Feb 20  2014 log4j.properties
-rw-rw-r-- 1 hadoop hadoop  922 Feb 20  2014 zoo_sample.cfg
[hadoop@hadoop001 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@hadoop001 conf]$ vi zoo.cfg

Edit the zoo.cfg configuration file. It ships with dataDir=/tmp/zookeeper; in production the data directory must not live under /tmp, because the server periodically cleans /tmp. Set it to something like dataDir=/home/hadoop/app/zookeeper/data.
Then append these three lines at the end of the file:
server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888
In a cluster, each node's ID must be unique, e.g. hadoop001 gets 1; the numbers after the hostname are the ports the ZooKeeper servers use to talk to each other. (A sketch of the full file follows.)
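
For reference, the resulting zoo.cfg should look roughly like this (the remaining settings keep the zoo_sample.cfg defaults):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/app/zookeeper/data
clientPort=2181
server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888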

Then:

[hadoop@hadoop001 conf]$ cd ..
[hadoop@hadoop001 zookeeper]$ mkdir data
[hadoop@hadoop001 zookeeper]$ cd data
[hadoop@hadoop001 data]$ touch myid
[hadoop@hadoop001 data]$ echo 1 > myid   # note the spaces around the 1 in the command
[hadoop@hadoop001 data]$ cd ..
[hadoop@hadoop001 zookeeper]$ scp conf/zoo.cfg hadoop002:/home/hadoop/app/zookeeper/conf/
zoo.cfg                                                                                              100% 1029     1.0KB/s   00:00    
[hadoop@hadoop001 zookeeper]$ scp conf/zoo.cfg hadoop003:/home/hadoop/app/zookeeper/conf/
zoo.cfg                                                                                              100% 1029     1.0KB/s   00:00    
[hadoop@hadoop001 zookeeper]$ scp -r data hadoop002:/home/hadoop/app/zookeeper/
myid                                                                                                 100%    2     0.0KB/s   00:00    
[hadoop@hadoop001 zookeeper]$ scp -r data hadoop003:/home/hadoop/app/zookeeper/
myid                                                                                                 100%    2     0.0KB/s   00:00    
[hadoop@hadoop001 zookeeper]$ 

Then on hadoop002:

[hadoop@hadoop002 zookeeper]$ ls data
myid
[hadoop@hadoop002 zookeeper]$ echo 2 > data/myid 
[hadoop@hadoop002 zookeeper]$ 

Do the same on hadoop003, writing 3 into its myid.

The ZooKeeper distribution has only a bin directory, which holds its commands; some other components have both bin and sbin.

[hadoop@hadoop001 zookeeper]$ cd bin
[hadoop@hadoop001 bin]$ ll
total 36
-rwxr-xr-x 1 hadoop hadoop  238 Feb 20  2014 README.txt
-rwxr-xr-x 1 hadoop hadoop 1937 Feb 20  2014 zkCleanup.sh
-rwxr-xr-x 1 hadoop hadoop 1049 Feb 20  2014 zkCli.cmd
-rwxr-xr-x 1 hadoop hadoop 1534 Feb 20  2014 zkCli.sh
-rwxr-xr-x 1 hadoop hadoop 1333 Feb 20  2014 zkEnv.cmd
-rwxr-xr-x 1 hadoop hadoop 2696 Feb 20  2014 zkEnv.sh
-rwxr-xr-x 1 hadoop hadoop 1084 Feb 20  2014 zkServer.cmd
-rwxr-xr-x 1 hadoop hadoop 5742 Feb 20  2014 zkServer.sh
[hadoop@hadoop001 bin]$ rm -f *.cmd
[hadoop@hadoop001 bin]$ ll
total 24
-rwxr-xr-x 1 hadoop hadoop  238 Feb 20  2014 README.txt
-rwxr-xr-x 1 hadoop hadoop 1937 Feb 20  2014 zkCleanup.sh
-rwxr-xr-x 1 hadoop hadoop 1534 Feb 20  2014 zkCli.sh
-rwxr-xr-x 1 hadoop hadoop 2696 Feb 20  2014 zkEnv.sh
-rwxr-xr-x 1 hadoop hadoop 5742 Feb 20  2014 zkServer.sh
[hadoop@hadoop001 bin]$ ./zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop001 bin]$  ./zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
[hadoop@hadoop001 bin]$ 

It turns out ZooKeeper has not actually started properly.
The environment variables need to be configured: add these three lines and apply them:
export JAVA_HOME=/usr/java/jdk1.8.0_45
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
export PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH

Then start it again with ./zkServer.sh start and check the status with ./zkServer.sh status:

[hadoop@hadoop001 bin]$ ./zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Mode: follower

From the output, hadoop001 is a follower. Of the three nodes, two are followers and one is the leader at any given time.
Check with jps:

[hadoop@hadoop001 app]$ jps
1532 Jps
1503 QuorumPeerMain

Get into a good habit: use the shell script debug mode (add -x at the end of the first line).
Most big-data components are started through shell scripts. If a component fails to start (check its logs too), you can open the corresponding script and add -x to the first line to enter debug mode: #!/usr/bin/env bash -x. Then rerun the script and work out where it goes wrong. This situation is very common in big-data work (see the sketch below).
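
A sketch of the same idea without editing the script: run it through bash -x so that every command is echoed as it executes.

# Trace zkServer.sh without modifying it
bash -x ./zkServer.sh start 2>&1 | tee /tmp/zk-start-trace.log
# Inside a script you can also switch tracing on and off temporarily
# with "set -x" ... "set +x".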

6. HDFS & YARN HA deployment

Go to /home/hadoop/app and extract hadoop-2.6.0-cdh5.7.0.tar.gz:
tar -xzvf hadoop-2.6.0-cdh5.7.0.tar.gz
Create a symlink:

[hadoop@hadoop001 app]$ ln -s /home/hadoop/app/hadoop-2.6.0-cdh5.7.0 /home/hadoop/app/hadoop

Create the directories referenced by the configuration files below (they would probably be created automatically anyway, but create them explicitly):
mkdir -p /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/tmp
mkdir -p /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/name
mkdir -p /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/data
mkdir -p /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/jn

Prepare the five configuration files (you can edit them in a text editor ahead of time):
①core-site.xml
②hdfs-site.xml
③mapred-site.xml
④yarn-site.xml
⑤slaves
core-site.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!--Yarn 需要使用 fs.defaultFS 指定NameNode URI -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://ruozeclusterg6</value>
        </property>
        <!--==============================Trash机制======================================= -->
        <property>
                <!--多长时间创建CheckPoint NameNode截点上运行的CheckPointer 从Current文件夹创建CheckPoint;默认:0 由fs.trash.interval项指定 -->
                <name>fs.trash.checkpoint.interval</name>
                <value>0</value>
        </property>
        <property>
                <!--多少分钟.Trash下的CheckPoint目录会被删除,该配置服务器设置优先级大于客户端,默认:0 不删除 -->
                <name>fs.trash.interval</name>
                <value>1440</value>
        </property>

         <!--指定hadoop临时目录, hadoop.tmp.dir 是hadoop文件系统依赖的基础配置,很多路径都依赖它。如果hdfs-site.xml中不配 置namenode和datanode的存放位置,默认就放在这>个路径中 -->
        <property>   
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/tmp</value>
        </property>

         <!-- 指定zookeeper地址 -->
        <property>
                <name>ha.zookeeper.quorum</name>
                <value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
        </property>
         <!--指定ZooKeeper超时间隔,单位毫秒 -->
        <property>
                <name>ha.zookeeper.session-timeout.ms</name>
                <value>2000</value>
        </property>

        <property>
           <name>hadoop.proxyuser.hadoop.hosts</name>
           <value>*</value> 
        </property> 
        <property> 
            <name>hadoop.proxyuser.hadoop.groups</name> 
            <value>*</value> 
       </property> 


      <property>
		  <name>io.compression.codecs</name>
		  <value>org.apache.hadoop.io.compress.GzipCodec,
			org.apache.hadoop.io.compress.DefaultCodec,
			org.apache.hadoop.io.compress.BZip2Codec,
			org.apache.hadoop.io.compress.SnappyCodec
		  </value>
      </property>
</configuration>

hdfs-site.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!--HDFS超级用户 -->
	<property>
		<name>dfs.permissions.superusergroup</name>
		<value>hadoop</value>
	</property>

	<!--开启web hdfs -->
	<property>
		<name>dfs.webhdfs.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/name</value>
		<description> namenode 存放name table(fsimage)本地目录(需要修改)</description>
	</property>
	<property>
		<name>dfs.namenode.edits.dir</name>
		<value>${dfs.namenode.name.dir}</value>
		<description>namenode存放 transaction file(edits)本地目录(需要修改)</description>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/data</value>
		<description>datanode存放block本地目录(需要修改)</description>
	</property>
	<property>
		<name>dfs.replication</name>
		<value>3</value>
	</property>
	<!-- 块大小256M (默认128M) -->
	<property>
		<name>dfs.blocksize</name>
		<value>268435456</value>
	</property>
	<!--======================================================================= -->
	<!--HDFS高可用配置 -->
	<!--指定hdfs的nameservice为ruozeclusterg6,需要和core-site.xml中的保持一致 -->
	<property>
		<name>dfs.nameservices</name>
		<value>ruozeclusterg6</value>
	</property>
	<property>
		<!--设置NameNode IDs 此版本最大只支持两个NameNode -->
		<name>dfs.ha.namenodes.ruozeclusterg6</name>
		<value>nn1,nn2</value>
	</property>

	<!-- Hdfs HA: dfs.namenode.rpc-address.[nameservice ID] rpc 通信地址 -->
	<property>
		<name>dfs.namenode.rpc-address.ruozeclusterg6.nn1</name>
		<value>hadoop001:8020</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.ruozeclusterg6.nn2</name>
		<value>hadoop002:8020</value>
	</property>

	<!-- Hdfs HA: dfs.namenode.http-address.[nameservice ID] http 通信地址 -->
	<property>
		<name>dfs.namenode.http-address.ruozeclusterg6.nn1</name>
		<value>hadoop001:50070</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.ruozeclusterg6.nn2</name>
		<value>hadoop002:50070</value>
	</property>

	<!--==================Namenode editlog同步 ============================================ -->
	<!--保证数据恢复 -->
	<property>
		<name>dfs.journalnode.http-address</name>
		<value>0.0.0.0:8480</value>
	</property>
	<property>
		<name>dfs.journalnode.rpc-address</name>
		<value>0.0.0.0:8485</value>
	</property>
	<property>
		<!--设置JournalNode服务器地址,QuorumJournalManager 用于存储editlog -->
		<!--格式:qjournal://<host1:port1>;<host2:port2>;<host3:port3>/<journalId> 端口同journalnode.rpc-address -->
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://hadoop001:8485;hadoop002:8485;hadoop003:8485/ruozeclusterg6</value>
	</property>

	<property>
		<!--JournalNode存放数据地址 -->
		<name>dfs.journalnode.edits.dir</name>
		<value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/jn</value>
	</property>
	<!--==================DataNode editlog同步 ============================================ -->
	<property>
		<!--DataNode,Client连接Namenode识别选择Active NameNode策略 -->
                             <!-- 配置失败自动切换实现方式 -->
		<name>dfs.client.failover.proxy.provider.ruozeclusterg6</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<!--==================Namenode fencing:=============================================== -->
	<!--Failover后防止停掉的Namenode启动,造成两个服务 -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/hadoop/.ssh/id_rsa</value>
	</property>
	<property>
		<!--多少milliseconds 认为fencing失败 -->
		<name>dfs.ha.fencing.ssh.connect-timeout</name>
		<value>30000</value>
	</property>

	<!--==================NameNode auto failover base ZKFC and Zookeeper====================== -->
	<!--开启基于Zookeeper  -->
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<!--动态许可datanode连接namenode列表 -->
	 <property>
	   <name>dfs.hosts</name>
	   <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/slaves</value>
	 </property>
</configuration>

mapred-site.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!-- 配置 MapReduce Applications -->
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<!-- JobHistory Server ============================================================== -->
	<!-- 配置 MapReduce JobHistory Server 地址 ,默认端口10020 -->
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>hadoop001:10020</value>
	</property>
	<!-- 配置 MapReduce JobHistory Server web ui 地址, 默认端口19888 -->
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>hadoop001:19888</value>
	</property>

<!-- 配置 Map段输出的压缩,snappy-->
  <property>
      <name>mapreduce.map.output.compress</name> 
      <value>true</value>
  </property>
              
  <property>
      <name>mapreduce.map.output.compress.codec</name> 
      <value>org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

</configuration>

yarn-site.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!-- nodemanager 配置 ================================================= -->
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
		<value>org.apache.hadoop.mapred.ShuffleHandler</value>
	</property>
	<property>
		<name>yarn.nodemanager.localizer.address</name>
		<value>0.0.0.0:23344</value>
		<description>Address where the localizer IPC is.</description>
	</property>
	<property>
		<name>yarn.nodemanager.webapp.address</name>
		<value>0.0.0.0:23999</value>
		<description>NM Webapp address.</description>
	</property>

	<!-- HA 配置 =============================================================== -->
	<!-- Resource Manager Configs -->
	<property>
		<name>yarn.resourcemanager.connect.retry-interval.ms</name>
		<value>2000</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<!-- 使嵌入式自动故障转移。HA环境启动,与 ZKRMStateStore 配合 处理fencing -->
	<property>
		<name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
		<value>true</value>
	</property>
	<!-- 集群名称,确保HA选举时对应的集群 -->
	<property>
		<name>yarn.resourcemanager.cluster-id</name>
		<value>yarn-cluster</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.rm-ids</name>
		<value>rm1,rm2</value>
	</property>


    <!--这里RM主备结点需要单独指定,(可选)
	<property>
		 <name>yarn.resourcemanager.ha.id</name>
		 <value>rm2</value>
	 </property>
	 -->

	<property>
		<name>yarn.resourcemanager.scheduler.class</name>
		<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
	</property>
	<property>
		<name>yarn.resourcemanager.recovery.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
		<value>5000</value>
	</property>
	<!-- ZKRMStateStore 配置 -->
	<property>
		<name>yarn.resourcemanager.store.class</name>
		<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
	</property>
	<property>
		<name>yarn.resourcemanager.zk-address</name>
		<value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
	</property>
	<property>
		<name>yarn.resourcemanager.zk.state-store.address</name>
		<value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
	</property>
	<!-- Client访问RM的RPC地址 (applications manager interface) -->
	<property>
		<name>yarn.resourcemanager.address.rm1</name>
		<value>hadoop001:23140</value>
	</property>
	<property>
		<name>yarn.resourcemanager.address.rm2</name>
		<value>hadoop002:23140</value>
	</property>
	<!-- AM访问RM的RPC地址(scheduler interface) -->
	<property>
		<name>yarn.resourcemanager.scheduler.address.rm1</name>
		<value>hadoop001:23130</value>
	</property>
	<property>
		<name>yarn.resourcemanager.scheduler.address.rm2</name>
		<value>hadoop002:23130</value>
	</property>
	<!-- RM admin interface -->
	<property>
		<name>yarn.resourcemanager.admin.address.rm1</name>
		<value>hadoop001:23141</value>
	</property>
	<property>
		<name>yarn.resourcemanager.admin.address.rm2</name>
		<value>hadoop002:23141</value>
	</property>
	<!--NM访问RM的RPC端口 -->
	<property>
		<name>yarn.resourcemanager.resource-tracker.address.rm1</name>
		<value>hadoop001:23125</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address.rm2</name>
		<value>hadoop002:23125</value>
	</property>
	<!-- RM web application 地址 -->
	<property>
		<name>yarn.resourcemanager.webapp.address.rm1</name>
		<value>hadoop001:8088</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.address.rm2</name>
		<value>hadoop002:8088</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.https.address.rm1</name>
		<value>hadoop001:23189</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.https.address.rm2</name>
		<value>hadoop002:23189</value>
	</property>

	<property>
	   <name>yarn.log-aggregation-enable</name>
	   <value>true</value>
	</property>
	<property>
		 <name>yarn.log.server.url</name>
		 <value>http://hadoop001:19888/jobhistory/logs</value>
	</property>


	<property>
		<name>yarn.nodemanager.resource.memory-mb</name>
		<value>2048</value>
	</property>
	<property>
		<name>yarn.scheduler.minimum-allocation-mb</name>
		<value>1024</value>
		<discription>单个任务可申请最少内存,默认1024MB</discription>
	 </property>

  
  <property>
	<name>yarn.scheduler.maximum-allocation-mb</name>
	<value>2048</value>
	<discription>单个任务可申请最大内存,默认8192MB</discription>
  </property>

   <property>
       <name>yarn.nodemanager.resource.cpu-vcores</name>
       <value>2</value>
    </property>

</configuration>

slaves is as follows:

hadoop001
hadoop002
hadoop003

Then go into /home/hadoop/app/hadoop/etc/hadoop, delete the existing versions of these files, and upload the files prepared above in their place. The other two machines need the same configuration (see the sketch below).
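
One way to push the same configuration files to the other nodes, using the password-less SSH set up earlier (a sketch):

cd /home/hadoop/app/hadoop/etc/hadoop
for host in hadoop002 hadoop003; do
  scp core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml slaves \
      ${host}:/home/hadoop/app/hadoop/etc/hadoop/
done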

Then configure the environment variables:
vi ~/.bash_profile

export JAVA_HOME=/usr/java/jdk1.8.0_45
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
export HADOOP_HOME=/home/hadoop/app/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH

Then source it to make it take effect, and do the same on the other two machines (see the quick check below).
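
A quick check that the new PATH is in effect (a sketch):

source ~/.bash_profile
which hadoop     # should point into /home/hadoop/app/hadoop/bin
hadoop version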

With all of that in place, pick the first machine, hadoop001, and format the NameNode.

[hadoop@hadoop001 hadoop]$ hadoop namenode -format
..........
..........
19/04/09 23:05:59 WARN namenode.NameNode: Encountered exception during format: 
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 2 exceptions thrown:
172.19.12.134:8485: Call From hadoop001/172.19.12.134 to hadoop001:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
172.19.12.133:8485: Call From hadoop001/172.19.12.134 to hadoop002:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
        at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:900)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1037)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1606)
19/04/09 23:05:59 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 2 exceptions thrown:
172.19.12.134:8485: Call From hadoop001/172.19.12.134 to hadoop001:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
172.19.12.133:8485: Call From hadoop001/172.19.12.134 to hadoop002:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
        at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:900)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1037)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1606)
19/04/09 23:05:59 INFO util.ExitUtil: Exiting with status 1
19/04/09 23:05:59 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop001/172.19.12.134
************************************************************/
[hadoop@hadoop001 hadoop]$ 

The format fails at the end.
You can check the logs to find the cause,
or use the shell-script debug mode described above,
or simply read the output above carefully (paste it into a text editor if that helps), for example this line: Unable to check if JNs are ready for formatting.
The cause: the JournalNodes must be started before the NameNode can be formatted.
Start the JournalNodes:

[hadoop@hadoop001 hadoop]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop001.out
[hadoop@hadoop001 hadoop]$ pwd
/home/hadoop/app/hadoop/etc/hadoop
[hadoop@hadoop001 hadoop]$ jps
2194 Jps
2143 JournalNode
1503 QuorumPeerMain
[hadoop@hadoop001 hadoop]$ 

Start the JournalNode on the other two machines as well.
Then run the format again on hadoop001; this time it succeeds:

[hadoop@hadoop001 hadoop]$ hadoop namenode -format
.......
19/04/09 23:24:41 INFO common.Storage: Storage directory /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/name has been successfully formatted.
19/04/09 23:24:41 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/04/09 23:24:41 INFO util.ExitUtil: Exiting with status 0
19/04/09 23:24:41 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop001/172.19.12.134
************************************************************/
[hadoop@hadoop001 hadoop]$ 

In an HDFS HA architecture, the active and standby NameNodes must hold the same metadata. The first machine has just been formatted and now has metadata; the second is still empty.
The first NameNode's metadata:

[hadoop@hadoop001 current]$ pwd
/home/hadoop/app/hadoop/data/dfs/name/current
[hadoop@hadoop001 current]$ ls
fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION
[hadoop@hadoop001 current]$ 

At this point, copy hadoop001's NameNode metadata over to hadoop002:

[hadoop@hadoop001 hadoop]$ scp -r data/ hadoop002:/home/hadoop/app/hadoop/
in_use.lock                                                                                          100%   14     0.0KB/s   00:00    
VERSION                                                                                              100%  154     0.2KB/s   00:00    
seen_txid                                                                                            100%    2     0.0KB/s   00:00    
fsimage_0000000000000000000.md5                                                                      100%   62     0.1KB/s   00:00    
fsimage_0000000000000000000                                                                          100%  338     0.3KB/s   00:00    
VERSION                                                                                              100%  205     0.2KB/s   00:00    
[hadoop@hadoop001 hadoop]$ 
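
For reference, the standard alternative to copying the metadata directory by hand is to run hdfs namenode -bootstrapStandby on hadoop002 (with the NameNode on hadoop001 already running), which makes the standby fetch the current fsimage itself. A sketch:

# On hadoop002, after the hadoop001 NameNode is up:
hdfs namenode -bootstrapStandby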

Then initialize the ZKFC state in ZooKeeper:

[hadoop@hadoop001 hadoop]$ hdfs zkfc -formatZK
。。。。。
19/04/09 23:46:37 INFO ha.ActiveStandbyElector: Session connected.
19/04/09 23:46:37 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ruozeclusterg6 in ZK.
19/04/09 23:46:37 INFO zookeeper.ClientCnxn: EventThread shut down
19/04/09 23:46:37 INFO zookeeper.ZooKeeper: Session: 0x16a01fcd8460000 closed
[hadoop@hadoop001 hadoop]$ 

The ZKFC initialization succeeded.

(Only in case you run into trouble.) If something goes wrong and you want to wipe everything and redo the steps from the core-site.xml stage onward, do the following.
First delete the HA metadata from ZooKeeper:

[hadoop@hadoop001 bin]$ pwd
/home/hadoop/app/zookeeper/bin  
[hadoop@hadoop001 bin]$ ./zkCli.sh
....
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, hadoop-ha]
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha
[ruozeclusterg6]
[zk: localhost:2181(CONNECTED) 2] rmr /hadoop-ha

Then kill the related processes on the relevant machines, e.g. the JournalNodes and the Hadoop daemons (the ZooKeeper processes do not need to be killed).
Then clear out the data directory under the hadoop installation, and redo whatever steps you need (a rough sketch follows).
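
A rough sketch of that cleanup, using the paths from this post (double-check the paths before deleting anything):

# Stop the HDFS daemons and JournalNodes (ZooKeeper stays up)
stop-dfs.sh
hadoop-daemon.sh stop journalnode
# On each machine, clear the NameNode/DataNode/JournalNode data and tmp dirs
rm -rf /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/name/*
rm -rf /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/data/*
rm -rf /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/jn/*
rm -rf /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/tmp/*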

Then start the cluster:

[hadoop@hadoop001 hadoop]$ start-dfs.sh 
19/04/10 00:31:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop001 hadoop002]
hadoop002: Error: JAVA_HOME is not set and could not be found.
hadoop001: Error: JAVA_HOME is not set and could not be found.
hadoop003: Error: JAVA_HOME is not set and could not be found.
: Name or service not knownstname hadoop001
: Name or service not knownstname hadoop002
Starting journal nodes [hadoop001 hadoop002 hadoop003]
hadoop001: Error: JAVA_HOME is not set and could not be found.
hadoop002: Error: JAVA_HOME is not set and could not be found.
hadoop003: Error: JAVA_HOME is not set and could not be found.
19/04/10 00:31:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop001: Error: JAVA_HOME is not set and could not be found.
hadoop002: Error: JAVA_HOME is not set and could not be found.
[hadoop@hadoop001 hadoop]$ 

It fails.
Go into /home/hadoop/app/hadoop/etc/hadoop
and edit the hadoop-env.sh configuration file, setting JAVA_HOME explicitly:
export JAVA_HOME=/usr/java/jdk1.8.0_45
Do this on all three machines.
Then start again and it works.

[hadoop@hadoop001 sbin]$ ./start-dfs.sh 
19/04/17 08:58:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop001 hadoop002]
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop002: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop002.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
hadoop003: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop003.out
hadoop002: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop002.out
Starting journal nodes [hadoop001 hadoop002 hadoop003]
hadoop001: journalnode running as process 1731. Stop it first.
hadoop003: journalnode running as process 1563. Stop it first.
hadoop002: journalnode running as process 1568. Stop it first.
19/04/17 08:58:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop001: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop001.out
hadoop002: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop002.out
[hadoop@hadoop001 sbin]$ jps
2322 DFSZKFailoverController
1731 JournalNode
1540 QuorumPeerMain
2372 Jps
2006 DataNode
1897 NameNode
[hadoop@hadoop001 sbin]$ 

Then run jps on the other two machines.

[hadoop@hadoop002 sbin]$ jps
1568 JournalNode
1472 QuorumPeerMain
1969 Jps
1898 DFSZKFailoverController
1659 NameNode
1739 DataNode
[hadoop@hadoop002 sbin]$ 
[hadoop@hadoop003 sbin]$ jps
1473 QuorumPeerMain
1563 JournalNode
1614 Jps
[hadoop@hadoop003 sbin]$ jps
1473 QuorumPeerMain
1654 DataNode
1563 JournalNode
1757 Jps
[hadoop@hadoop003 sbin]$ 

Then start the YARN cluster:

[hadoop@hadoop001 hadoop]$ start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop001.out
: Name or service not knownstname hadoop002
hadoop003: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop003.out
: Name or service not knownstname hadoop001
[hadoop@hadoop001 hadoop]$

It fails again. The slaves file has probably been corrupted; the ": Name or service not knownstname hadoop001" lines are the classic symptom of stray Windows carriage returns, e.g. from editing the file in a Windows editor. Delete the slaves file, recreate it cleanly with the three hostnames, and you are done (see the sketch below).
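
A sketch of checking for and stripping stray carriage returns (consistent with the error above, but verify on your own file):

cd /home/hadoop/app/hadoop/etc/hadoop
cat -A slaves          # a trailing ^M means a Windows carriage return
printf 'hadoop001\nhadoop002\nhadoop003\n' > slaves   # recreate it cleanly
# or strip the CRs in place:  sed -i 's/\r$//' slaves
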
Then start it once more and it works:

[hadoop@hadoop001 sbin]$ ./start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop001: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop001.out
hadoop002: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop002.out
hadoop003: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop003.out
[hadoop@hadoop001 sbin]$ jps
2944 Jps
2322 DFSZKFailoverController
1731 JournalNode
1540 QuorumPeerMain
2006 DataNode
1897 NameNode
2606 NodeManager
2495 ResourceManager
[hadoop@hadoop001 sbin]$ 

Note that start-yarn.sh only started the ResourceManager on the first machine; the RM on the second machine has to be started manually.

[hadoop@hadoop002 sbin]$ jps
2048 NodeManager
1568 JournalNode
1472 QuorumPeerMain
2195 Jps
1898 DFSZKFailoverController
1659 NameNode
1739 DataNode
[hadoop@hadoop002 sbin]$ ./yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop002.out
[hadoop@hadoop002 sbin]$ jps
2048 NodeManager
1568 JournalNode
1472 QuorumPeerMain
2230 ResourceManager
1898 DFSZKFailoverController
1659 NameNode
1739 DataNode
2287 Jps
[hadoop@hadoop002 sbin]$ 
[hadoop@hadoop003 hadoop]$ jps
2530 NodeManager
1506 QuorumPeerMain
2114 DataNode
1910 JournalNode
2684 Jps
[hadoop@hadoop003 hadoop]$ 

Then check that the commands work:

[hadoop@hadoop001 hadoop]$ hdfs dfs -ls /
19/04/17 09:09:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwx------   - hadoop hadoop          0 2019-04-17 09:09 /user
[hadoop@hadoop001 hadoop]$ hdfs dfs -ls hdfs://ruozeclusterg6/
19/04/17 09:09:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwx------   - hadoop hadoop          0 2019-04-17 09:09 hdfs://ruozeclusterg6/user
[hadoop@hadoop001 hadoop]$ hdfs dfs -put ./README.txt hdfs://ruozeclusterg6/
19/04/17 09:12:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop001 hadoop]$ hdfs dfs -ls hdfs://ruozeclusterg6/
19/04/17 09:12:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   3 hadoop hadoop       1366 2019-04-17 09:12 hdfs://ruozeclusterg6/README.txt
drwx------   - hadoop hadoop          0 2019-04-17 09:09 hdfs://ruozeclusterg6/user
[hadoop@hadoop001 hadoop]$ 

They do.

Then open the corresponding ports in the Alibaba Cloud security group. You can allow the port range 1/65535 from any source IP, 0.0.0.0/0.

7. Web UI access

①HDFS
Open the web UI: http://47.102.145.183:50070/
You can see that this NameNode is a standby node; the page also shows information such as NameNode Journal Status and the datanodes.
Then browse to http://47.102.154.10:50070
This one is the active node.

②YARN:
Then open http://47.102.145.183:8088/cluster
You can see this ResourceManager is the active one.
Then open http://47.102.154.10:8088
Instead of the cluster page it only shows a notice. But if you complete the path, you can view it:
http://47.102.154.10:8088/cluster/cluster
③jobhistory
This is a service that collects the history of jobs that have run; a job only shows up here after it has finished.
It is configured in mapred-site.xml.
Start it:

[hadoop@hadoop001 sbin]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/mapred-hadoop-historyserver-hadoop001.out
[hadoop@hadoop001 sbin]$ jps
2322 DFSZKFailoverController
1731 JournalNode
1540 QuorumPeerMain
2006 DataNode
3704 JobHistoryServer          # the jobhistory server is this process
1897 NameNode
2606 NodeManager
3743 Jps
2495 ResourceManager
[hadoop@hadoop001 sbin]$ netstat -nlp|grep 3704
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 172.19.12.134:10020         0.0.0.0:*                   LISTEN      3704/java           
tcp        0      0 172.19.12.134:19888         0.0.0.0:*                   LISTEN      3704/java           
tcp        0      0 0.0.0.0:10033               0.0.0.0:*                   LISTEN      3704/java           
[hadoop@hadoop001 sbin]$ 

From the listening ports you can see its web UI port is 19888.
Visit:
http://47.102.145.183:19888/jobhistory
No job has been run yet, so there are no records.

Run a job and take a look:

[hadoop@hadoop001 hadoop]$ find ./ -name *example*.jar
./share/hadoop/mapreduce2/sources/hadoop-mapreduce-examples-2.6.0-cdh5.7.0-sources.jar
./share/hadoop/mapreduce2/sources/hadoop-mapreduce-examples-2.6.0-cdh5.7.0-test-sources.jar
./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar
./share/hadoop/mapreduce1/hadoop-examples-2.6.0-mr1-cdh5.7.0.jar
[hadoop@hadoop001 hadoop]$ hadoop jar ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar pi 5 10
Number of Maps  = 5
Samples per Map = 10
....(a large amount of output omitted)
19/04/17 09:53:12 INFO mapreduce.Job:  map 0% reduce 0%
19/04/17 09:53:18 INFO mapreduce.Job: Task Id : attempt_1555463078469_0001_m_000003_0, Status : FAILED
Error: java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
        at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
        at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:165)
        at org.apache.hadoop.mapred.IFile$Writer.<init>(IFile.java:114)
        at org.apache.hadoop.mapred.IFile$Writer.<init>(IFile.java:97)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1606)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1486)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:723)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
......(a large amount of output omitted)

After it finishes, check the jobhistory UI again.
From the UI you can see the job failed, and the reason is that this Hadoop build does not support snappy compression.
(You would need to compile Hadoop from source to enable snappy support, as well as other native compression codecs.)
The compression settings were configured in core-site.xml and mapred-site.xml:
core-site.xml:

      <property>
                  <name>io.compression.codecs</name>
                  <value>org.apache.hadoop.io.compress.GzipCodec,
                        org.apache.hadoop.io.compress.DefaultCodec,
                        org.apache.hadoop.io.compress.BZip2Codec,
                        org.apache.hadoop.io.compress.SnappyCodec
                  </value>
      </property>

mapred-site.xml:

<!-- 配置 Map段输出的压缩,snappy-->
  <property>
      <name>mapreduce.map.output.compress</name> 
      <value>true</value>
  </property>
              
  <property>
      <name>mapreduce.map.output.compress.codec</name> 
      <value>org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

Compiling the source to support these compression formats is covered in a later post.

To check whether this Hadoop build supports the various compression formats:

[hadoop@hadoop001 hadoop]$ hadoop checknative
19/04/17 10:13:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop:  false 
zlib:    false 
snappy:  false 
lz4:     false 
bzip2:   false 
openssl: false 
19/04/17 10:13:04 INFO util.ExitUtil: Exiting with status 1
[hadoop@hadoop001 hadoop]$ 

As you can see, this build supports none of the compression formats above.

Now remove the compression-related configuration from core-site.xml and mapred-site.xml on all three machines.
After removing it, restart:
run stop-all.sh first, then ./start-all.sh.
Then run the same job again:

[hadoop@hadoop001 hadoop]$ hadoop jar ./share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar pi 5 10
Number of Maps  = 5
Samples per Map = 10
19/04/17 10:50:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
19/04/17 10:50:33 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
19/04/17 10:50:33 INFO input.FileInputFormat: Total input paths to process : 5
19/04/17 10:50:34 INFO mapreduce.JobSubmitter: number of splits:5    # splits is 5 here; what determines the number of splits?
....(a large amount of output omitted)
Job Finished in 21.445 seconds
Estimated value of Pi is 3.28000000000000000000
[hadoop@hadoop001 hadoop]$ 

As you can see, this time it succeeds.

Compiling the source to support compression formats will be covered in a later post (to be updated).
