1. Cluster planning:
| Hostname | IP | Installed software | Running processes |
| --- | --- | --- | --- |
| drguo1 | 192.168.80.149 | jdk, hadoop | NameNode, DFSZKFailoverController (zkfc), ResourceManager |
| drguo2 | 192.168.80.150 | jdk, hadoop | NameNode, DFSZKFailoverController (zkfc), ResourceManager |
| drguo3 | 192.168.80.151 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| drguo4 | 192.168.80.152 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| drguo5 | 192.168.80.153 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
2. Preliminary preparation:
Prepare five machines: set static IPs, hostnames, and the hostname-to-IP mappings; turn off the firewall; install the JDK and configure the environment variables (see http://blog.csdn.net/dr_guo/article/details/50886667 if you don't know how); create the user:group; and set up passwordless SSH login (see http://blog.csdn.net/dr_guo/article/details/50967442 if you get errors).
Note: comment out the 127.0.1.1 line in /etc/hosts, otherwise jps will show a DataNode running while the web UI reports 0 live nodes;
After commenting it out everything was normal. Some people apparently don't comment it out and are still fine — I have no idea why 0.0
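For reference, /etc/hosts on each node would look roughly like the sketch below (the original post doesn't show the file; the mappings follow the cluster plan above, and the 127.0.1.1 line is the one to comment out):
- 127.0.0.1       localhost
- #127.0.1.1      drguo1          # comment this line out (on each node it carries that node's own hostname)
- 192.168.80.149  drguo1
- 192.168.80.150  drguo2
- 192.168.80.151  drguo3
- 192.168.80.152  drguo4
- 192.168.80.153  drguo5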
3. Set up the ZooKeeper cluster (drguo3/drguo4/drguo5)
See: ZooKeeper Fully Distributed Cluster Setup
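The details are in that post; just for quick reference, a minimal zoo.cfg on drguo3/4/5 might look like the sketch below (the dataDir is an assumption; clientPort 2181 matches the ha.zookeeper.quorum value used later, and each node also needs a myid file containing 1/2/3 under dataDir):
- tickTime=2000
- initLimit=10
- syncLimit=5
- dataDir=/opt/zookeeper-3.4.8/data
- clientPort=2181
- server.1=drguo3:2888:3888
- server.2=drguo4:2888:3888
- server.3=drguo5:2888:3888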
4. Now let's build the Hadoop HA cluster
Download the latest Hadoop from the official site (http://apache.opencas.org/hadoop/common/stable/); the latest right now is 2.7.2. Once it's downloaded, put it under /opt/Hadoop.
- guo@guo:~/下载$ mv ./hadoop-2.7.2.tar.gz /opt/Hadoop/
- mv: cannot create regular file '/opt/Hadoop/hadoop-2.7.2.tar.gz': Permission denied
- guo@guo:~/下载$ su root
- Password:
- root@guo:/home/guo/下载# mv ./hadoop-2.7.2.tar.gz /opt/Hadoop/
Extract it
- guo@guo:/opt/Hadoop$ sudo tar -zxf hadoop-2.7.2.tar.gz
- [sudo] password for guo:
When I extracted the JDK I used tar -zxvf; the v just prints the extraction progress, and you can leave it out if you don't care to watch it.
Change the owner of the opt directory (user:group) — I simply changed the owner/group of the whole opt directory to guo. The details were covered in ZooKeeper Fully Distributed Cluster Setup.
- root@guo:/opt/Hadoop# chown -R guo:guo /opt
Set the environment variables
- guo@guo:/opt/Hadoop$ sudo gedit /etc/profile
Add the following at the end (this way you can run the scripts in the bin/sbin directories without having to cd into them first):
- #hadoop
- export HADOOP_HOME=/opt/Hadoop/hadoop-2.7.2
- export PATH=$PATH:$HADOOP_HOME/sbin
- export PATH=$PATH:$HADOOP_HOME/bin
Then reload the configuration
- guo@guo:/opt/Hadoop$ source /etc/profile
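To confirm the PATH took effect, a quick sanity check (not part of the original steps):
- guo@guo:/opt/Hadoop$ hadoop version    # should report Hadoop 2.7.2 if the variables are set correctly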
Edit hadoop-env.sh under /opt/Hadoop/hadoop-2.7.2/etc/hadoop
- guo@guo:/opt/Hadoop$ cd hadoop-2.7.2
- guo@guo:/opt/Hadoop/hadoop-2.7.2$ cd etc/hadoop/
- guo@guo:/opt/Hadoop/hadoop-2.7.2/etc/hadoop$ sudo gedit ./hadoop-env.sh
Inside the file:
- export JAVA_HOME=${JAVA_HOME}    # change this to your JDK path, as below
- export JAVA_HOME=/opt/Java/jdk1.8.0_73
Then reload the file
- guo@guo:/opt/Hadoop/hadoop-2.7.2/etc/hadoop$ source ./hadoop-env.sh
The configuration up to this point is the same as in standalone mode, so I just copied it over.
Note: the Chinese comments are only there for you to read — delete them all when you copy and paste!!!
Modify core-site.xml
- <configuration>
-
- <property>
- <name>fs.defaultFS</name>
- <value>hdfs://ns1/</value>
- </property>
-
- <property>
- <name>hadoop.tmp.dir</name>
- <value>/opt/Hadoop/hadoop-2.7.2/tmp</value>
- </property>
-
- <property>
- <name>ha.zookeeper.quorum</name>
- <value>drguo3:2181,drguo4:2181,drguo5:2181</value>
- </property>
- </configuration>
Modify hdfs-site.xml
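The original post doesn't show the contents here, so below is a minimal sketch of an HA hdfs-site.xml, assuming the nameservice ns1 from core-site.xml, NameNodes nn1/nn2 on drguo1/drguo2, JournalNodes on drguo3/4/5, and sshfence with guo's key; the ports, the journal edits directory, and the key path are assumptions — adjust them to your cluster:
- <configuration>
- <!-- sketch only: ports, journal dir, and ssh key path are assumptions -->
- <property>
- <name>dfs.nameservices</name>
- <value>ns1</value>
- </property>
- <property>
- <name>dfs.ha.namenodes.ns1</name>
- <value>nn1,nn2</value>
- </property>
- <property>
- <name>dfs.namenode.rpc-address.ns1.nn1</name>
- <value>drguo1:9000</value>
- </property>
- <property>
- <name>dfs.namenode.http-address.ns1.nn1</name>
- <value>drguo1:50070</value>
- </property>
- <property>
- <name>dfs.namenode.rpc-address.ns1.nn2</name>
- <value>drguo2:9000</value>
- </property>
- <property>
- <name>dfs.namenode.http-address.ns1.nn2</name>
- <value>drguo2:50070</value>
- </property>
- <property>
- <name>dfs.namenode.shared.edits.dir</name>
- <value>qjournal://drguo3:8485;drguo4:8485;drguo5:8485/ns1</value>
- </property>
- <property>
- <name>dfs.journalnode.edits.dir</name>
- <value>/opt/Hadoop/hadoop-2.7.2/journaldata</value>
- </property>
- <property>
- <name>dfs.ha.automatic-failover.enabled</name>
- <value>true</value>
- </property>
- <property>
- <name>dfs.client.failover.proxy.provider.ns1</name>
- <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
- </property>
- <property>
- <name>dfs.ha.fencing.methods</name>
- <value>sshfence</value>
- </property>
- <property>
- <name>dfs.ha.fencing.ssh.private-key-files</name>
- <value>/home/guo/.ssh/id_rsa</value>
- </property>
- </configuration>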
Copy the whole Hadoop directory to drguo2/3/4/5. Delete share/doc before copying (no point copying the documentation); it goes faster that way. One possible way to do it is sketched below.
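This loop is a sketch, not from the original post; it assumes the same /opt/Hadoop path and the guo user on every node:
- guo@drguo1:/opt/Hadoop$ rm -rf hadoop-2.7.2/share/doc
- guo@drguo1:/opt/Hadoop$ for host in drguo2 drguo3 drguo4 drguo5; do scp -r hadoop-2.7.2 $host:/opt/Hadoop/; done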
5. Start the ZooKeeper cluster (start zookeeper on drguo3, drguo4, and drguo5 respectively)
- guo@drguo3:~$ zkServer.sh start
- ZooKeeper JMX enabled by default
- Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
- guo@drguo3:~$ jps
- 2005 Jps
- 1994 QuorumPeerMain
- guo@drguo3:~$ ssh drguo4
- Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
-
- Last login: Fri Mar 25 14:04:43 2016 from 192.168.80.151
- guo@drguo4:~$ zkServer.sh start
- ZooKeeper JMX enabled by default
- Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
- guo@drguo4:~$ jps
- 1977 Jps
- 1966 QuorumPeerMain
- guo@drguo4:~$ exit
- logout
- Connection to drguo4 closed.
- guo@drguo3:~$ ssh drguo5
- Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
-
- Last login: Fri Mar 25 14:04:56 2016 from 192.168.80.151
- guo@drguo5:~$ zkServer.sh start
- ZooKeeper JMX enabled by default
- Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
- Starting zookeeper ... STARTED
- guo@drguo5:~$ jps
- 2041 Jps
- 2030 QuorumPeerMain
- guo@drguo5:~$ exit
- logout
- Connection to drguo5 closed.
- guo@drguo3:~$ zkServer.sh status
- ZooKeeper JMX enabled by default
- Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
- Mode: leader
6. Start the journalnodes (start journalnode on drguo3, drguo4, and drguo5 respectively). Note: you only need to start them this way the first time; after that, starting HDFS will bring up the journalnodes too.
- guo@drguo3:~$ hadoop-daemon.sh start journalnode
- starting journalnode, logging to /opt/Hadoop/hadoop-2.7.2/logs/hadoop-guo-journalnode-drguo3.out
- guo@drguo3:~$ jps
- 2052 Jps
- 2020 JournalNode
- 1963 QuorumPeerMain
- guo@drguo3:~$ ssh drguo4
- Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
-
- Last login: Fri Mar 25 00:09:08 2016 from 192.168.80.149
- guo@drguo4:~$ hadoop-daemon.sh start journalnode
- starting journalnode, logging to /opt/Hadoop/hadoop-2.7.2/logs/hadoop-guo-journalnode-drguo4.out
- guo@drguo4:~$ jps
- 2103 Jps
- 2071 JournalNode
- 1928 QuorumPeerMain
- guo@drguo4:~$ exit
- logout
- Connection to drguo4 closed.
- guo@drguo3:~$ ssh drguo5
- Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-16-generic x86_64)
-
- * Documentation: https://help.ubuntu.com/
-
- Last login: Thu Mar 24 23:52:17 2016 from 192.168.80.152
- guo@drguo5:~$ hadoop-daemon.sh start journalnode
- starting journalnode, logging to /opt/Hadoop/hadoop-2.7.2/logs/hadoop-guo-journalnode-drguo5.out
- guo@drguo5:~$ jps
- 2276 JournalNode
- 2308 Jps
- 1959 QuorumPeerMain
- guo@drguo5:~$ exit
- logout
- Connection to drguo5 closed.
A problem showed up when starting drguo4/5: there was no JournalNode process. The logs showed it was caused by the Chinese comments, and after deleting them all on drguo4/5 the problem was solved. The Chinese input method on drguo4/5 doesn't work either, which is really annoying... the VM images were all cloned from the same one, so how did they mutate?
7. Format HDFS (run on drguo1)
- guo@drguo1:/opt$ hdfs namenode -format
It broke again, and once more it was the Chinese comments; after deleting them all on drguo1/2/3 as well, the problem was solved.
Note: after formatting, you need to copy the tmp directory over to drguo2 (otherwise the NameNode on drguo2 won't start)
- guo@drguo1:/opt/Hadoop/hadoop-2.7.2$ scp -r tmp/ drguo2:/opt/Hadoop/hadoop-2.7.2/
8. Format ZKFC (run on drguo1)
- guo@drguo1:/opt$ hdfs zkfc -formatZK
9. Start HDFS (run on drguo1)
- guo@drguo1:/opt$ start-dfs.sh
10. Start YARN (run on drguo1)
- guo@drguo1:/opt$ start-yarn.sh
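You can also check things in a browser (these are the Hadoop 2.x default ports; adjust if you changed them in your configs):
- http://drguo1:50070   # NameNode web UI — this is where the live nodes count mentioned earlier appears
- http://drguo1:8088    # ResourceManager web UI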
PS:
1. The ResourceManager on drguo2 has to be started manually and separately:
yarn-daemon.sh start resourcemanager
2. The NameNode and DataNode can also be started individually:
hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
3. Transition a NN from standby to active:
hdfs haadmin -transitionToActive nn1 --forcemanual
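To check which NameNode is currently active or standby (nn1/nn2 are the NameNode IDs assumed in hdfs-site.xml):
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2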
All done!!!
Just like the plan at the beginning, right? 0.0
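A quick way to confirm is to run jps on each node and compare against the planning table, e.g. (the process IDs will of course differ):
- guo@drguo1:~$ jps    # expect NameNode, DFSZKFailoverController, ResourceManager
- guo@drguo3:~$ jps    # expect DataNode, NodeManager, JournalNode, QuorumPeerMain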