HDFS HA + Federation Configuration

I. Dual HA with HDFS HA + Federation

II. Cluster Plan

HOSTNAME | IP              | HDFS node     | ZooKeeper node | JournalNode node
CDH1     | 192.168.123.101 | NS1-namenode1 | -              | -
CDH2     | 192.168.123.102 | NS2-namenode1 | QuorumPeerMain | journalnode
CDH3     | 192.168.123.103 | NS2-namenode2 | QuorumPeerMain | journalnode
CDH4     | 192.168.123.104 | NS1-namenode2 | QuorumPeerMain | journalnode
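Every host must be able to resolve the others by name. A minimal /etc/hosts sketch matching the plan above (an assumption; skip it if DNS already resolves these names):

192.168.123.101 cdh1
192.168.123.102 cdh2
192.168.123.103 cdh3
192.168.123.104 cdh4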

III. Configuration Steps

1. core-site.xml
Combine the Federation and HA settings.
2. hdfs-site.xml
Add the settings for the newly added nodes.
3. Start the services:
zookeeper
journalnode
datanode
namenode
zkfc

IV. Configuration

1. core-site.xml

<!-- Put site-specific property overrides in this file. -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:include href="cmt.xml"/>
<property>
<name>fs.defaultFS</name>
<value>viewfs://nsX</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop2/hd260/tmp</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/hadoop2/hd260/journalnode/data</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>cdh2:2181,cdh3:2181,cdh4:2181</value>
</property>
</configuration>
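The xi:include at the top pulls in the ViewFS mount table that maps client paths onto the two nameservices. That file is not shown above; a minimal sketch of what cmt.xml might contain, assuming mount points /ns1 and /ns2 under the nsX cluster name from fs.defaultFS:

<configuration>
<property>
<name>fs.viewfs.mounttable.nsX.link./ns1</name>
<value>hdfs://ns1/</value>
</property>
<property>
<name>fs.viewfs.mounttable.nsX.link./ns2</name>
<value>hdfs://ns2/</value>
</property>
</configuration>

With this in place, a client path such as viewfs://nsX/ns1/foo is routed to nameservice ns1.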

2. hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/hadoop2/hd260/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/hadoop2/hd260/dfs/data</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>ns1,ns2</value>
</property>
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.ha.namenodes.ns2</name>
<value>nn3,nn4</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>cdh1:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns2.nn3</name>
<value>cdh2:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>cdh4:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns2.nn4</name>
<value>cdh3:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.ns1.nn1</name>
<value>cdh1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.ns1.nn2</name>
<value>cdh4:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.ns2.nn3</name>
<value>cdh2:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.ns2.nn4</name>
<value>cdh3:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://cdh2:8485;cdh3:8485;cdh4:8485/ns1</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.ns2</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
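A quick sanity check after editing, using the stock getconf tool (run from any host where this config is active):

hdfs getconf -confKey dfs.nameservices

It should print: ns1,ns2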

3. Sync the configuration to the other servers

scp *.xml cdh2:/etc/hadoop/conf

scp *.xml cdh3:/etc/hadoop/conf

scp *.xml cdh4:/etc/hadoop/conf

4. Modify hdfs-site.xml on CDH2 and CDH3
These two hosts serve nameservice ns2, so point dfs.namenode.shared.edits.dir at the ns2 journal URI:

<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://cdh2:8485;cdh3:8485;cdh4:8485/ns2</value>
</property>
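One way to apply the change in place on both hosts (a sketch, assuming GNU sed and the config path used above):

sed -i 's|8485/ns1|8485/ns2|' /etc/hadoop/conf/hdfs-site.xml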

5. Start the ZooKeeper and JournalNode services
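A sketch of the start commands, assuming the stock scripts are on the PATH; per the cluster plan these run on cdh2, cdh3, and cdh4:

zkServer.sh start
hadoop-daemon.sh start journalnode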

6. Format the NameNodes: run the command below on CDH1 and CDH2

hdfs namenode -format -clusterid ha260

Both nameservices are formatted with the same cluster ID (ha260); a shared cluster ID is what ties ns1 and ns2 together into a single federated cluster.

7. On CDH3 and CDH4, run the command below to sync the metadata from CDH2 and CDH1 respectively (each standby bootstraps from the NameNode of its own nameservice: CDH3 pairs with CDH2 in ns2, CDH4 with CDH1 in ns1); these two NameNodes do not need to be formatted

hdfs namenode -bootstrapStandby

The bootstrap copies the metadata from the other NameNode, so if the command cannot connect, bring up the freshly formatted NameNodes on CDH1 and CDH2 first.

8. Start the namenode processes

Confirm with jps that they came up.
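A sketch, assuming the stock daemon script; run on each NameNode host (cdh1 through cdh4):

hadoop-daemon.sh start namenode
jps

jps should list a NameNode process on all four hosts.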

9. Start the zkfc processes, which drive the automatic Active/Standby failover of the NameNodes
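Before the first start, each nameservice needs its failover znode created in ZooKeeper. A sketch, assuming it has not been initialized yet:

# once per nameservice, on one of its NameNodes (e.g. cdh1 for ns1, cdh2 for ns2):
hdfs zkfc -formatZK
# then on every NameNode host:
hadoop-daemon.sh start zkfc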

10. Check the status on the web monitoring pages

http://cdh1:50070

http://cdh4:50070

The ns2 pair is at http://cdh2:50070 and http://cdh3:50070. One NameNode of each pair should show as active and the other as standby.
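The same states can be read from the command line; a sketch using the haadmin tool (the -ns flag selects the nameservice in a federated setup):

hdfs haadmin -ns ns1 -getServiceState nn1
hdfs haadmin -ns ns2 -getServiceState nn3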

V. Configuration complete.
