HDFS HA Setup (Automatic Failover with ZKFC)

For the basic cluster setup, see this earlier post: Hadoop cluster setup notes.
On top of that basic cluster, add the configuration below.

HA configuration files:
hdfs-site.xml

<configuration>
  <!-- Replication factor; should not exceed the number of DataNodes -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Logical name of the nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- The two NameNodes that make up the nameservice -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC and HTTP addresses of nn1 and nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>chdp11:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>chdp12:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>chdp11:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>chdp12:50070</value>
  </property>
  <!-- JournalNode quorum that stores the shared edit log -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://chdp11:8485;chdp12:8485;chdp13:8485/mycluster</value>
  </property>
  <!-- Proxy provider that clients use to locate the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fence the old active NameNode over SSH during failover -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/root/.ssh/id_rsa</value>
  </property>
  <!-- Enable automatic failover via ZKFC -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
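Note that sshfence only works if the ZKFC on each NameNode host can SSH into the other NameNode without a password, using the private key configured above. A minimal sketch, assuming the root account and that chdp11 and chdp12 are the two NameNode hosts:

# On each of chdp11 and chdp12, generate a key pair if one does not exist yet
ssh-keygen -t rsa
# Push the public key to both NameNode hosts (including the local one)
ssh-copy-id chdp11
ssh-copy-id chdp12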

core-site.xml (trash is configured here as well; remove those properties if you do not need them)

<configuration>
  <!-- The default filesystem now points at the nameservice, not a single NameNode -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <!-- Local directory where each JournalNode stores its edits
       (conventionally set in hdfs-site.xml; it also works here
       because every daemon loads core-site.xml) -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/SFT/HA/hadoop-2.7.2/data/jn</value>
  </property>
  <!-- ZooKeeper quorum used by ZKFC for automatic failover -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>chdp11:2181,chdp12:2181,chdp13:2181</value>
  </property>
  <!-- Directory for files Hadoop produces at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/SFT/HA/hadoop-2.7.2/data/tmp</value>
  </property>
  <!-- Trash configuration -->
  <property>
    <name>fs.trash.interval</name>
    <value>60</value>
    <description>Number of minutes after which the checkpoint
    gets deleted. If zero, the trash feature is disabled.
    This option may be configured both on the server and the
    client. If trash is disabled server side then the client
    side configuration is checked. If trash is enabled on the
    server side then the value configured on the server is
    used and the client configuration value is ignored.
    </description>
  </property>
  <property>
    <name>fs.trash.checkpoint.interval</name>
    <value>0</value>
    <description>Number of minutes between trash checkpoints.
    Should be smaller or equal to fs.trash.interval. If zero,
    the value is set to the value of fs.trash.interval.
    Every time the checkpointer runs it creates a new checkpoint
    out of current and removes checkpoints created more than
    fs.trash.interval minutes ago.
    </description>
  </property>
</configuration>
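With the trash settings above, files removed through the shell are kept in the current user's trash for up to 60 minutes instead of being deleted immediately. A quick way to see this in action, assuming you run as root (the file name is just an example):

hadoop fs -rm /tmp/somefile.txt
# The file is moved to the trash rather than deleted:
hadoop fs -ls /user/root/.Trash/Current/tmp/
# Force the checkpointer to run and purge expired checkpoints:
hadoop fs -expunge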
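Both files must be identical on every node. A minimal sketch for pushing them out from chdp11, assuming the default etc/hadoop configuration directory:

# Run on chdp11; copies both configs to the other nodes
for host in chdp12 chdp13; do
  scp /usr/SFT/HA/hadoop-2.7.2/etc/hadoop/core-site.xml \
      /usr/SFT/HA/hadoop-2.7.2/etc/hadoop/hdfs-site.xml \
      ${host}:/usr/SFT/HA/hadoop-2.7.2/etc/hadoop/
done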

Follow-up steps (I use full paths throughout; verification commands are sketched after this list):
(1) Stop all HDFS services:
/usr/SFT/HA/hadoop-2.7.2/sbin/stop-dfs.sh
(2) Start the ZooKeeper cluster on every node. Note that zkServer.sh ships with ZooKeeper, not Hadoop, so run it from your ZooKeeper installation:
$ZOOKEEPER_HOME/bin/zkServer.sh start
(3) Initialize the HA state in ZooKeeper:
/usr/SFT/HA/hadoop-2.7.2/bin/hdfs zkfc -formatZK
(4) Start HDFS services:
/usr/SFT/HA/hadoop-2.7.2/sbin/start-dfs.sh
(5) Start the NameNode on the standby machine (if this standby has never run before, sync it from the active first with hdfs namenode -bootstrapStandby):
/usr/SFT/HA/hadoop-2.7.2/sbin/hadoop-daemon.sh start namenode
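Once everything is up, it is worth confirming that automatic failover actually works. A minimal sketch, assuming nn1 (on chdp11) is currently active and that zkCli.sh is run from your ZooKeeper installation:

# Each NameNode host should show NameNode and DFSZKFailoverController
jps
# ZKFC should have registered the cluster under /hadoop-ha in ZooKeeper
$ZOOKEEPER_HOME/bin/zkCli.sh -server chdp11:2181 ls /hadoop-ha
# Query the HA state of both NameNodes
/usr/SFT/HA/hadoop-2.7.2/bin/hdfs haadmin -getServiceState nn1
/usr/SFT/HA/hadoop-2.7.2/bin/hdfs haadmin -getServiceState nn2
# Simulate a failure: kill the active NameNode on chdp11,
# then check that nn2 was promoted to active
kill -9 $(jps | awk '$2 == "NameNode" {print $1}')
/usr/SFT/HA/hadoop-2.7.2/bin/hdfs haadmin -getServiceState nn2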
