Hadoop 2.x HA setup and troubleshooting

  1. Cluster node layout

| Role | bigdata-pro01.kfk.com | bigdata-pro02.kfk.com | bigdata-pro03.kfk.com |
| --- | --- | --- | --- |
| NameNode | ✓ | ✓ | |
| DataNode | ✓ | ✓ | ✓ |
  2. Configure hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>



<property>
  <name>dfs.nameservices</name>
  <value>ns</value>
</property>

<property>
  <name>dfs.ha.namenodes.ns</name>
  <value>nn1,nn2</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.ns.nn1</name>
  <value>bigdata-pro01.kfk.com:9000</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.ns.nn2</name>
  <value>bigdata-pro02.kfk.com:9000</value>
</property>


<property>
  <name>dfs.namenode.http-address.ns.nn1</name>
  <value>bigdata-pro01.kfk.com:50070</value>
</property>

<property>
  <name>dfs.namenode.http-address.ns.nn2</name>
  <value>bigdata-pro02.kfk.com:50070</value>
</property>

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://bigdata-pro01.kfk.com:8485;bigdata-pro02.kfk.com:8485;bigdata-pro03.kfk.com:8485/ns</value>
</property>

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/modules/hadoop-2.5.0/data/jn</value>
</property>





<property>
  <name>dfs.client.failover.proxy.provider.ns</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>

<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/kfk/.ssh/id_rsa</value>
</property>

<property>    
   <name>dfs.namenode.name.dir</name>    
   <value>file:///opt/modules/hadoop-2.5.0/hdfs/name</value>    
</property>
<property>    
    <name>dfs.datanode.data.dir</name>    
    <value>file:///opt/modules/hadoop-2.5.0/hdfs/data</value>    
</property> 
 <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>  

</configuration>

  3. Configure core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
	  <name>fs.defaultFS</name>
	  <value>hdfs://ns</value>
</property>
<property>
	<name>hadoop.http.staticuser.user</name>
	<value>kfk</value>
</property>

 <property>
         <name>hadoop.tmp.dir</name>
         <value>/opt/modules/hadoop-2.5.0/data/tmp</value>
</property>


<property>
	<name>ha.zookeeper.quorum</name>
	<value>bigdata-pro01.kfk.com:2181,bigdata-pro02.kfk.com:2181,bigdata-pro03.kfk.com:2181</value>
</property>


</configuration>
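Once both files are in place, you can confirm Hadoop resolves the logical nameservice with `hdfs getconf`, a standard Hadoop CLI. A dry-run sketch that only echoes the commands (drop the `echo` wrapper to execute them on a real node; paths match this tutorial's layout):

```shell
# Dry run: print the config checks to perform on a node.
HADOOP_HOME=/opt/modules/hadoop-2.5.0
N=0
run() { echo "$@"; N=$((N+1)); }
run "$HADOOP_HOME/bin/hdfs" getconf -confKey fs.defaultFS          # expect: hdfs://ns
run "$HADOOP_HOME/bin/hdfs" getconf -confKey dfs.ha.namenodes.ns   # expect: nn1,nn2
```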

  4. Distribute the edited configs to the other nodes
scp hdfs-site.xml bigdata-pro02.kfk.com:/opt/modules/hadoop-2.5.0/etc/hadoop/
scp hdfs-site.xml bigdata-pro03.kfk.com:/opt/modules/hadoop-2.5.0/etc/hadoop/
scp core-site.xml bigdata-pro02.kfk.com:/opt/modules/hadoop-2.5.0/etc/hadoop/
scp core-site.xml bigdata-pro03.kfk.com:/opt/modules/hadoop-2.5.0/etc/hadoop/
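The four `scp` lines above can also be written as a loop over hosts and files. A minimal sketch using the tutorial's hostnames and config path; the `echo` prefix makes it a dry run, remove it to actually copy:

```shell
# Dry run: build and print the scp commands for distributing both configs.
HADOOP_CONF=/opt/modules/hadoop-2.5.0/etc/hadoop
COPIED=0
for host in bigdata-pro02.kfk.com bigdata-pro03.kfk.com; do
  for f in hdfs-site.xml core-site.xml; do
    echo scp "$HADOOP_CONF/$f" "$host:$HADOOP_CONF/"
    COPIED=$((COPIED+1))
  done
done
```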

  5. Starting the HDFS-HA services and testing automatic failover
    1) Start the ZooKeeper process on every node:
    zkServer.sh start
    2) Start the JournalNode process on every node:
    sbin/hadoop-daemon.sh start journalnode
    3) On [bigdata-pro01.kfk.com], format the NameNode and start it:
    # format the namenode
    bin/hdfs namenode -format
    # format the HA state in ZooKeeper
    bin/hdfs zkfc -formatZK
    # start the namenode
    bin/hdfs namenode
    4) On [bigdata-pro02.kfk.com], sync the metadata from nn1:
    bin/hdfs namenode -bootstrapStandby
    5) Once bigdata-pro02.kfk.com has finished syncing, press Ctrl+C on bigdata-pro01.kfk.com to stop the NameNode process, then stop the JournalNode process on every node:
    sbin/hadoop-daemon.sh stop journalnode
    6) Start all HDFS processes with a single command:
    sbin/start-dfs.sh
    After HDFS is up, kill the Active NameNode and check whether the other NameNode automatically switches to Active. Also upload a file to HDFS from the command line to confirm HDFS is still usable.
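The failover test above can be scripted with the standard `hdfs haadmin -getServiceState` subcommand. A dry-run sketch (paths and NameNode IDs from this tutorial; the `run` helper only echoes, so it works without a live cluster — drop the `echo` to run for real):

```shell
# Dry run of the HA failover check.
HADOOP_HOME=/opt/modules/hadoop-2.5.0
STEPS=0
run() { echo "$@"; STEPS=$((STEPS+1)); }
run "$HADOOP_HOME/bin/hdfs" haadmin -getServiceState nn1   # expect: active
run "$HADOOP_HOME/bin/hdfs" haadmin -getServiceState nn2   # expect: standby
# ...kill the Active NameNode's JVM on bigdata-pro01.kfk.com, then:
run "$HADOOP_HOME/bin/hdfs" haadmin -getServiceState nn2   # expect: active
# confirm HDFS still accepts writes:
run "$HADOOP_HOME/bin/hdfs" dfs -put /etc/hosts /tmp/ha-test
```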