1 Install and configure ZooKeeper
1. Download: https://archive.apache.org/dist/zookeeper/
2. Upload the tarball to the cluster (drag and drop works)
3. Extract: tar -zxvf xxx.tar.gz -C /path
4. Edit the configuration
Enter the conf directory: cd $ZKHOME/conf (ZKHOME being the extracted ZooKeeper directory)
```
> mv zoo_sample.cfg zoo.cfg  # rename
> vim zoo.cfg
dataDir=/apps/zkdata
server.1=kk-01:2888:3888
server.2=kk-02:2888:3888
server.3=kk-03:2888:3888
> mkdir -p /apps/zkdata
> echo 1 > /apps/zkdata/myid ## on kk-01
> scp -r zookeeper-3.4.11/ kk-{01,02,03}:$PWD
> echo 2 > /apps/zkdata/myid ## on kk-02
> echo 3 > /apps/zkdata/myid ## on kk-03
```
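Each node's myid must match its `server.N` line in zoo.cfg. As a sketch, the id can be derived from the config instead of typed by hand; the `/tmp/zkdata-demo` path and the hard-coded host `kk-02` are illustration only (on a real node use the configured dataDir and `$(hostname)`):

```shell
# Derive a node's myid from its server.N entry in zoo.cfg.
# Demo paths under /tmp; in production use dataDir=/apps/zkdata and HOST=$(hostname).
ZKDATA=/tmp/zkdata-demo
mkdir -p "$ZKDATA"
cat > "$ZKDATA/zoo.cfg" <<'EOF'
dataDir=/apps/zkdata
server.1=kk-01:2888:3888
server.2=kk-02:2888:3888
server.3=kk-03:2888:3888
EOF
HOST=kk-02                                   # stand-in for $(hostname)
# "server.2=kk-02:..." -> take the key "server.2" -> take the "2"
ID=$(grep "=${HOST}:" "$ZKDATA/zoo.cfg" | cut -d= -f1 | cut -d. -f2)
echo "$ID" > "$ZKDATA/myid"                  # writes 2 for kk-02
cat "$ZKDATA/myid"
```

This avoids the easy mistake of scp-ing the install directory and forgetting to change myid on each host.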
5. Set environment variables
export ZOOKEEPER_HOME=/apps/zookeeper-3.4.11
export PATH=$PATH:$ZOOKEEPER_HOME/bin
source /etc/profile
6. Start ZooKeeper
zkServer.sh {start|stop|restart|status}
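Running the same action on every node can be wrapped in a small helper. A sketch, assuming passwordless ssh to kk-01..kk-03 and zkServer.sh on each host's PATH (the function is only defined here, not run):

```shell
# Run one zkServer.sh action on every ensemble node over ssh.
# Assumes passwordless ssh and $ZOOKEEPER_HOME/bin on PATH remotely.
zk_all() {                     # usage: zk_all start|stop|restart|status
  for h in kk-01 kk-02 kk-03; do
    echo "== $h =="
    ssh "$h" "zkServer.sh $1"
  done
}
```

For example `zk_all status` shows which node is the leader and which are followers.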
2 Hadoop configuration
core-site.xml
<configuration>
<!-- in a non-HA setup fs.defaultFS would be hdfs://bigdata01:9000;
     with HA enabled it must point to the nameservice, set below -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/components/data/hadoop2.7.3_data/tmp</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<!-- HA -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/components/data/hadoop2.7.3_data/journaldata</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>bigdata01:2181,bigdata02:2181,bigdata03:2181</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<!-- only meaningful without HA; with HA the standby NameNode does the checkpointing -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>bigdata02:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<!-- HA -->
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>bigdata01:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>bigdata02:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>bigdata01:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>bigdata02:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://bigdata01:8485;bigdata02:8485;bigdata03:8485/mycluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>bigdata01:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>bigdata01:19888</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>bigdata01:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>bigdata01:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>bigdata01:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>bigdata01:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>bigdata01:8088</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1536</value>
</property>
<!-- HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>cluster1</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>bigdata01</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>bigdata02</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>bigdata01:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>bigdata02:8088</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>bigdata01:2181,bigdata02:2181,bigdata03:2181</value>
</property>
</configuration>
3 HA setup
Background reading:
This article explains it in great detail:
http://www.tuicool.com/articles/jameeqm
The following is more advanced and explains how QJM works:
http://www.tuicool.com/articles/eIBB3a
Installing and configuring HA:
http://makaidong.com/tototuzuoquan/1/1002_10219241_2.htm
Startup order
ZooKeeper -> JournalNode -> format NameNode -> initialize JournalNode
-> create the ZooKeeper namespace (zkfc) -> NameNode -> DataNode -> ResourceManager -> NodeManager.
First-time HA cluster startup:
hdfs zkfc -formatZK (I had missed this step, and it matters: if HDFS is not registered in ZooKeeper, HDFS and ZooKeeper have no relationship at all)
1. Start the JournalNodes
sbin/hadoop-daemon.sh start journalnode
on every JournalNode machine
2. Start the NameNodes
1) Format: bin/hdfs namenode -format
2) Start this NameNode: sbin/hadoop-daemon.sh start namenode
3) Bootstrap the other NameNode: bin/hdfs namenode -bootstrapStandby
Note the order of steps 2 and 3: I once reversed them, and the second NameNode's tmp.dir directory never received any files.
4) Start the second NameNode: sbin/hadoop-daemon.sh start namenode
3. Here is a trap for newcomers. We learn that of the two NameNodes one is active and one is standby, yet at this point both are standby.
I thought something had gone wrong, until I found that a manual transition is required here:
bin/hdfs haadmin -transitionToActive nn1
The cluster can now be reached at the HTTP address configured earlier:
http://master:50070
Tip: disable the firewall: sudo ufw disable
4. Start the DataNodes
On each node: sbin/hadoop-daemon.sh start datanode
--------- done
Remember to sync ZooKeeper with Hadoop first. Command: hdfs zkfc -formatZK.
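The first-time startup steps above can be collected into one shell function. This is a sketch only (the function is defined, not executed); it assumes $HADOOP_HOME is set on every node and passwordless ssh to bigdata01..03, the hostnames used in the configs:

```shell
# First-time HA startup, one step per comment; ZooKeeper must already be running.
ha_first_start() {
  ssh bigdata01 "$HADOOP_HOME/bin/hdfs zkfc -formatZK"   # register HDFS in ZooKeeper
  for h in bigdata01 bigdata02 bigdata03; do             # 1. JournalNodes
    ssh "$h" "$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode"
  done
  # 2. format and start the first NameNode (format BEFORE bootstrapping the other)
  ssh bigdata01 "$HADOOP_HOME/bin/hdfs namenode -format"
  ssh bigdata01 "$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode"
  # 3./4. bootstrap the second NameNode from the first, then start it
  ssh bigdata02 "$HADOOP_HOME/bin/hdfs namenode -bootstrapStandby"
  ssh bigdata02 "$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode"
  for h in bigdata01 bigdata02 bigdata03; do             # start the DataNodes
    ssh "$h" "$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode"
  done
}
```

Keeping the sequence in one place makes the format-before-bootstrap ordering hard to get wrong.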
Converting a non-HA cluster to HA (compared with the first-time startup above, only step 2 changes from formatting to initializing):
1. Start all JournalNodes
sbin/hadoop-daemon.sh start journalnode
2. On one NameNode, initialize the JournalNodes' shared data
bin/hdfs namenode -initializeSharedEdits
3. Start that NameNode
sbin/hadoop-daemon.sh start namenode
4. Sync on the second NameNode:
bin/hdfs namenode -bootstrapStandby
5. Start the second NameNode
6. Start all the DataNodes
------------ done
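The conversion sequence above, sketched the same way (function defined only, not run; same $HADOOP_HOME and passwordless-ssh assumptions as before):

```shell
# Non-HA -> HA conversion; same shape as the first start, but step 2 initializes
# the shared edits from the existing NameNode instead of formatting.
ha_convert() {
  for h in bigdata01 bigdata02 bigdata03; do             # 1. JournalNodes
    ssh "$h" "$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode"
  done
  # 2./3. initialize the shared edits, then start that NameNode
  ssh bigdata01 "$HADOOP_HOME/bin/hdfs namenode -initializeSharedEdits"
  ssh bigdata01 "$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode"
  # 4./5. sync and start the second NameNode
  ssh bigdata02 "$HADOOP_HOME/bin/hdfs namenode -bootstrapStandby"
  ssh bigdata02 "$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode"
  for h in bigdata01 bigdata02 bigdata03; do             # 6. DataNodes
    ssh "$h" "$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode"
  done
}
```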
Some common cluster-management commands:
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -failover nn1 nn2
bin/hdfs haadmin -transitionToActive nn1 (rarely used: it does not run the fencing method, so the previous NameNode cannot be shut down, risking split-brain)
bin/hdfs haadmin -transitionToStandby nn2 (rarely used, for the same reason)
bin/hdfs haadmin -checkHealth nn2