Hadoop 2.0 HDFS fully distributed installation (HA)
        NN-1  NN-2  DN  ZK  ZKFC  JNN
node08   *                    *    *
node09          *   *   *     *    *
node10              *   *          *
node11              *   *
【1】Build on top of the Hadoop 1.0 HDFS fully distributed cluster (see: "Hadoop 1.0 HDFS fully distributed cluster setup"):
【2】Configure hdfs-site.xml:
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>node08:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>node09:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>node08:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>node09:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://node08:8485;node09:8485;node10:8485/mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/var/sxt/hadoop/ha/jn</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_dsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
【3】Configure core-site.xml:
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>node09:2181,node10:2181,node11:2181</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/sxt/hadoop/ha</value>
</property>
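The edited config files must reach every node before starting anything. A minimal sketch for distributing them, assuming the configs live under $HADOOP_HOME/etc/hadoop and SSH as root is already set up between the nodes (both are assumptions; adjust the path to your install):

```shell
# Push hdfs-site.xml and core-site.xml to the other three nodes
# ($HADOOP_HOME/etc/hadoop is a hypothetical path; adjust for your layout)
for node in node09 node10 node11; do
  scp "$HADOOP_HOME/etc/hadoop/hdfs-site.xml" \
      "$HADOOP_HOME/etc/hadoop/core-site.xml" \
      root@$node:"$HADOOP_HOME/etc/hadoop/"
done
```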
【4】Start the JournalNodes on node08, node09 and node10:
Command: hadoop-daemon.sh start journalnode
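Before formatting the NameNode it is worth confirming that each JournalNode process actually came up, since the format step writes edits to the quorum:

```shell
# Run on each of node08/node09/node10; a JournalNode line should appear
jps | grep JournalNode
```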
【5】Format the NameNode on node08:
hdfs namenode -format
【6】Start the NameNode on node08:
hadoop-daemon.sh start namenode
【7】On node09, synchronize the NameNode metadata from node08 (bootstrap the standby):
hdfs namenode -bootstrapStandby
【8】Format the HA state in ZooKeeper from node08:
hdfs zkfc -formatZK
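To verify that formatZK created the HA znode, you can query ZooKeeper with its own CLI (zkCli.sh ships with ZooKeeper; the server address comes from ha.zookeeper.quorum above):

```shell
# The /hadoop-ha znode should now exist and contain an entry for "mycluster"
zkCli.sh -server node09:2181 ls /hadoop-ha
```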
【9】Start the cluster:
start-dfs.sh
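After start-dfs.sh, the state of the two NameNodes can be checked with hdfs haadmin; one should report active and the other standby (nn1/nn2 are the IDs defined in dfs.ha.namenodes.mycluster):

```shell
# Exactly one of the two should be "active"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```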
<----------------- Installation complete ------------------------->
Test:
http://192.168.88.18:50070
Create a directory: hdfs dfs -mkdir -p /user/root
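A minimal read/write smoke test against the new cluster (the file name test.txt is only an example):

```shell
# Upload a local file and read it back through HDFS
echo "hello ha" > test.txt
hdfs dfs -put test.txt /user/root
hdfs dfs -ls /user/root
hdfs dfs -cat /user/root/test.txt
```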
<---------------------------------------------------->
Problem:
With HA enabled, when the active NameNode goes down, the cluster does not automatically fail over to the standby NameNode.
Fix: sshfence fails when the active node is unreachable over SSH (e.g. the whole machine is down), which blocks the failover. Add shell(/bin/true) as a fallback fencing method (the methods are listed one per line in the value):
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence
shell(/bin/true)
</value>
</property>
Start ZKFC (on each NameNode host, node08 and node09):
hadoop-daemon.sh start zkfc
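To confirm that automatic failover now works, kill the active NameNode and watch the standby take over, using the same daemon commands as earlier in this guide:

```shell
# On the current active node (say node08): stop the NameNode
hadoop-daemon.sh stop namenode

# From any node: the other NameNode should report "active" within a few seconds
hdfs haadmin -getServiceState nn2

# Bring the stopped NameNode back; it rejoins as standby
hadoop-daemon.sh start namenode
```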