Setting Up Hadoop 2.6 HA

I. Software

hadoop-2.6.0-cdh5.16.2.tar.gz
jdk-8u45-linux-x64.gz
zookeeper-3.4.5-cdh5.16.2.tar.gz

II. Cluster Plan

Host      Installed software    Processes
hadoop01  hadoop, zookeeper     NameNode, DFSZKFailoverController, JournalNode, DataNode, ResourceManager, JobHistoryServer, NodeManager, QuorumPeerMain
hadoop02  hadoop, zookeeper     NameNode, DFSZKFailoverController, JournalNode, DataNode, ResourceManager, NodeManager, QuorumPeerMain
hadoop03  hadoop, zookeeper     JournalNode, DataNode, NodeManager, QuorumPeerMain

III. Environment Preparation

  1. Passwordless SSH (mutual trust)
# Generate a key pair on every host
[hadoop@hadoop01 ~]$ ssh-keygen -t rsa
[hadoop@hadoop02 ~]$ ssh-keygen -t rsa
[hadoop@hadoop03 ~]$ ssh-keygen -t rsa

# Run all three commands on each of the three hosts
[hadoop@hadoop01 ~]$ ssh-copy-id hadoop01
[hadoop@hadoop01 ~]$ ssh-copy-id hadoop02
[hadoop@hadoop01 ~]$ ssh-copy-id hadoop03

[hadoop@hadoop02 ~]$ ssh-copy-id hadoop01
[hadoop@hadoop02 ~]$ ssh-copy-id hadoop02
[hadoop@hadoop02 ~]$ ssh-copy-id hadoop03

[hadoop@hadoop03 ~]$ ssh-copy-id hadoop01
[hadoop@hadoop03 ~]$ ssh-copy-id hadoop02
[hadoop@hadoop03 ~]$ ssh-copy-id hadoop03
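
A quick check that the trust works in every direction (a minimal sketch; hostname is just an arbitrary remote command):

[hadoop@hadoop01 ~]$ for h in hadoop01 hadoop02 hadoop03; do ssh $h hostname; done
# Repeat from hadoop02 and hadoop03; none of the logins should prompt for a password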

2. Install the JDK

# Extract
[hadoop@hadoop01 soft]$ tar -zxvf jdk-8u45-linux-x64.gz -C /usr/local/java/
[hadoop@hadoop02 soft]$ tar -zxvf jdk-8u45-linux-x64.gz -C /usr/local/java/
[hadoop@hadoop03 soft]$ tar -zxvf jdk-8u45-linux-x64.gz -C /usr/local/java/

# Configure environment variables
[hadoop@hadoop01 ~]$ vim .bashrc
export JAVA_HOME=/usr/local/java/jdk1.8.0_45
export PATH=$JAVA_HOME/bin:$PATH

# Apply the environment variables
[hadoop@hadoop01 ~]$ source .bashrc
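
A quick sanity check that the JDK is picked up from the updated PATH:

[hadoop@hadoop01 ~]$ java -version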

IV. ZooKeeper Installation
1. Extract

[hadoop@hadoop01 soft]$ tar -zxvf zookeeper-3.4.5-cdh5.16.2.tar.gz -C ~/app/
[hadoop@hadoop01 app]$ ln -s zookeeper-3.4.5-cdh5.16.2 zookeeper

2. Configure zookeeper/conf

 [hadoop@hadoop01 conf]$ cp zoo_sample.cfg zoo.cfg 

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/home/hadoop/app/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
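
The same ZooKeeper directory and zoo.cfg must also exist on hadoop02 and hadoop03. A minimal sketch of distributing it, assuming the same /home/hadoop/app layout on every host:

[hadoop@hadoop01 app]$ scp -r zookeeper-3.4.5-cdh5.16.2 hadoop@hadoop02:~/app/
[hadoop@hadoop01 app]$ scp -r zookeeper-3.4.5-cdh5.16.2 hadoop@hadoop03:~/app/
# Recreate the symlink on the other two hosts
[hadoop@hadoop02 app]$ ln -s zookeeper-3.4.5-cdh5.16.2 zookeeper
[hadoop@hadoop03 app]$ ln -s zookeeper-3.4.5-cdh5.16.2 zookeeper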

3. Configure server IDs (myid)

[hadoop@hadoop01 zookeeper]$ mkdir -p data && echo 1 > data/myid
[hadoop@hadoop02 zookeeper]$ mkdir -p data && echo 2 > data/myid
[hadoop@hadoop03 zookeeper]$ mkdir -p data && echo 3 > data/myid

V. Hadoop HA
1. Extract

[hadoop@hadoop01 soft]$ tar -zxvf hadoop-2.6.0-cdh5.16.2.tar.gz -C ../app/
[hadoop@hadoop01 app]$ ln -s hadoop-2.6.0-cdh5.16.2 hadoop

2. core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->


<configuration>

<!-- fs.defaultFS: the default filesystem URI, here the HA nameservice; YARN also relies on it to locate the NameNode -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop</value>
        </property>
        <!--============================== Trash ======================================= -->
        <property>
                <!-- How often (minutes) the trash checkpointer on the NameNode creates a checkpoint from the Current folder; default 0 means it uses the value of fs.trash.interval -->
                <name>fs.trash.checkpoint.interval</name>
                <value>0</value>
        </property>
        <property>
                <!-- How many minutes a checkpoint under .Trash is kept before deletion; the server-side setting takes precedence over the client; default 0 disables trash -->
                <name>fs.trash.interval</name>
                <value>10080</value>
        </property>

        <!-- Hadoop temporary directory. hadoop.tmp.dir is the base path that many other settings default to; if hdfs-site.xml does not configure the namenode/datanode directories, they end up under this path -->
        <property>   
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/tmp/hadoop</value>
        </property>

         <!-- ZooKeeper quorum used for HA -->
        <property>
                <name>ha.zookeeper.quorum</name>
                <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
        </property>
         <!-- ZooKeeper session timeout, in milliseconds -->
        <property>
                <name>ha.zookeeper.session-timeout.ms</name>
                <value>2000</value>
        </property>

        <property>
           <name>hadoop.proxyuser.hadoop.hosts</name>
           <value>*</value> 
        </property> 
        <property> 
            <name>hadoop.proxyuser.hadoop.groups</name> 
            <value>*</value> 
       </property> 


      <property>
		  <name>io.compression.codecs</name>
		  <value>org.apache.hadoop.io.compress.GzipCodec,
			org.apache.hadoop.io.compress.DefaultCodec,
			org.apache.hadoop.io.compress.BZip2Codec,
			org.apache.hadoop.io.compress.SnappyCodec
		  </value>
      </property>


</configuration>
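
With fs.trash.interval set to 10080 (7 days), files removed through the HDFS client are moved to the user's .Trash directory instead of being deleted immediately. A quick illustration once the cluster is running (the /tmp/demo.txt path is just an example):

[hadoop@hadoop01 ~]$ hdfs dfs -rm /tmp/demo.txt
# The file now sits under the trash directory:
[hadoop@hadoop01 ~]$ hdfs dfs -ls /user/hadoop/.Trash/Current/tmp
# -skipTrash bypasses the trash and deletes immediately
[hadoop@hadoop01 ~]$ hdfs dfs -rm -skipTrash /tmp/demo.txt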

3. hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

	<!-- HDFS superuser group -->
	<property>
		<name>dfs.permissions.superusergroup</name>
		<value>hadoop</value>
	</property>

	<!-- Enable WebHDFS -->
	<property>
		<name>dfs.webhdfs.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>/home/hadoop/data/dfs/name</value>
		<description>Local directory where the namenode stores the name table (fsimage); adjust for your environment</description>
	</property>
	<property>
		<name>dfs.namenode.edits.dir</name>
		<value>${dfs.namenode.name.dir}</value>
		<description>Local directory where the namenode stores the transaction file (edits); adjust for your environment</description>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>/home/hadoop/data/dfs/data</value>
		<description>Local directory where the datanode stores blocks; adjust for your environment</description>
	</property>
	<property>
		<name>dfs.replication</name>
		<value>3</value>
	</property>
	<!-- Block size: 128 MB (the default) -->
	<property>
		<name>dfs.blocksize</name>
		<value>134217728</value>
	</property>
	<!--======================================================================= -->
	<!-- HDFS HA configuration -->
	<!-- The HDFS nameservice, here "hadoop"; it must match fs.defaultFS in core-site.xml -->
	<property>
		<name>dfs.nameservices</name>
		<value>hadoop</value>
	</property>
	<property>
		<!-- NameNode IDs; this version supports at most two NameNodes -->
		<name>dfs.ha.namenodes.hadoop</name>
		<value>nn1,nn2</value>
	</property>

	<!-- HDFS HA: dfs.namenode.rpc-address.[nameservice ID].[nn ID], the RPC address of each NameNode -->
	<property>
		<name>dfs.namenode.rpc-address.hadoop.nn1</name>
		<value>hadoop01:8020</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.hadoop.nn2</name>
		<value>hadoop02:8020</value>
	</property>

	<!-- HDFS HA: dfs.namenode.http-address.[nameservice ID].[nn ID], the HTTP address of each NameNode -->
	<property>
		<name>dfs.namenode.http-address.hadoop.nn1</name>
		<value>hadoop01:50070</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.hadoop.nn2</name>
		<value>hadoop02:50070</value>
	</property>

	<!--================== NameNode edit log sync ============================================ -->
	<!-- JournalNodes guarantee the edit log can be recovered -->
	<property>
		<name>dfs.journalnode.http-address</name>
		<value>0.0.0.0:8480</value>
	</property>
	<property>
		<name>dfs.journalnode.rpc-address</name>
		<value>0.0.0.0:8485</value>
	</property>
	<property>
		<!-- JournalNode addresses; the QuorumJournalManager stores the shared edit log here -->
		<!-- Format: qjournal://<host1:port1>;<host2:port2>;<host3:port3>/<journalId>; the port matches dfs.journalnode.rpc-address -->
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/hadoop</value>
	</property>

	<property>
		<!-- Local directory where the JournalNodes store their data -->
		<name>dfs.journalnode.edits.dir</name>
		<value>/home/hadoop/data/dfs/jn</value>
	</property>
	<!--================== Client failover ============================================ -->
	<property>
		<!-- Strategy DataNodes and clients use to identify the Active NameNode -->
		<!-- Implementation class for automatic client-side failover -->
		<name>dfs.client.failover.proxy.provider.hadoop</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<!--================== NameNode fencing =============================================== -->
	<!-- After a failover, keep the old Active NameNode from serving again and causing split-brain -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/hadoop/.ssh/id_rsa</value>
	</property>
	<property>
		<!-- After how many milliseconds fencing is considered to have failed -->
		<name>dfs.ha.fencing.ssh.connect-timeout</name>
		<value>30000</value>
	</property>

	<!--================== NameNode automatic failover via ZKFC and ZooKeeper ====================== -->
	<!-- Enable ZooKeeper-based automatic failover -->
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<!-- File listing the DataNodes allowed to connect to the NameNode -->
	 <property>
	   <name>dfs.hosts</name>
	   <value>/home/hadoop/app/hadoop/etc/hadoop/slaves</value>
	 </property>

</configuration>
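
Once the config is in place, a quick way to confirm the HA wiring is readable is the standard hdfs getconf queries (run on any node with HADOOP_HOME on the PATH):

# Lists the NameNode hosts resolved from the nameservice
[hadoop@hadoop01 hadoop]$ hdfs getconf -namenodes
# Shows the NameNode IDs configured for the "hadoop" nameservice
[hadoop@hadoop01 hadoop]$ hdfs getconf -confKey dfs.ha.namenodes.hadoop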

4. mapred-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
	<!-- MapReduce applications run on YARN -->
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<!-- JobHistory Server ============================================================== -->
	<!-- MapReduce JobHistory Server address; default port 10020 -->
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>hadoop01:10020</value>
	</property>
	<!-- MapReduce JobHistory Server web UI address; default port 19888 -->
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>hadoop01:19888</value>
	</property>

<!-- Compress map-side output with Snappy -->
  <property>
      <name>mapreduce.map.output.compress</name> 
      <value>true</value>
  </property>
              
  <property>
      <name>mapreduce.map.output.compress.codec</name> 
      <value>org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

</configuration>
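
Snappy compression requires the native Hadoop libraries; one way to verify they are available (the exact output depends on how the CDH tarball was built):

[hadoop@hadoop01 hadoop]$ hadoop checknative
# The "snappy" line should report "true"; otherwise the native libraries need to be installed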

5. yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->

	<!-- NodeManager configuration ================================================= -->
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
		<value>org.apache.hadoop.mapred.ShuffleHandler</value>
	</property>
	<property>
		<name>yarn.nodemanager.localizer.address</name>
		<value>0.0.0.0:23344</value>
		<description>Address where the localizer IPC is.</description>
	</property>
	<property>
		<name>yarn.nodemanager.webapp.address</name>
		<value>0.0.0.0:23999</value>
		<description>NM Webapp address.</description>
	</property>

	<!-- HA configuration =============================================================== -->
	<!-- Resource Manager Configs -->
	<property>
		<name>yarn.resourcemanager.connect.retry-interval.ms</name>
		<value>2000</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<!-- Use the embedded elector for automatic failover; works with ZKRMStateStore to handle fencing -->
	<property>
		<name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
		<value>true</value>
	</property>
	<!-- Cluster ID, so that HA election is scoped to this cluster -->
	<property>
		<name>yarn.resourcemanager.cluster-id</name>
		<value>yarn-cluster</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.rm-ids</name>
		<value>rm1,rm2</value>
	</property>


    <!-- Optionally, pin the RM id explicitly on each ResourceManager host (rm1 on hadoop01, rm2 on hadoop02):
         	<property>
		 <name>yarn.resourcemanager.ha.id</name>
		 <value>rm2</value>
	 </property>
	 -->

	<property>
		<name>yarn.resourcemanager.scheduler.class</name>
		<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
	</property>
	<property>
		<name>yarn.resourcemanager.recovery.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
		<value>5000</value>
	</property>
	<!-- ZKRMStateStore configuration -->
	<property>
		<name>yarn.resourcemanager.store.class</name>
		<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
	</property>
	<property>
		<name>yarn.resourcemanager.zk-address</name>
		<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
	</property>
	<property>
		<name>yarn.resourcemanager.zk.state-store.address</name>
		<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
	</property>
	<!-- RPC address clients use to reach the RM (applications manager interface) -->
	<property>
		<name>yarn.resourcemanager.address.rm1</name>
		<value>hadoop01:23140</value>
	</property>
	<property>
		<name>yarn.resourcemanager.address.rm2</name>
		<value>hadoop02:23140</value>
	</property>
	<!-- RPC address ApplicationMasters use to reach the RM (scheduler interface) -->
	<property>
		<name>yarn.resourcemanager.scheduler.address.rm1</name>
		<value>hadoop01:23130</value>
	</property>
	<property>
		<name>yarn.resourcemanager.scheduler.address.rm2</name>
		<value>hadoop02:23130</value>
	</property>
	<!-- RM admin interface -->
	<property>
		<name>yarn.resourcemanager.admin.address.rm1</name>
		<value>hadoop01:23141</value>
	</property>
	<property>
		<name>yarn.resourcemanager.admin.address.rm2</name>
		<value>hadoop02:23141</value>
	</property>
	<!-- RPC address NodeManagers use to reach the RM (resource tracker) -->
	<property>
		<name>yarn.resourcemanager.resource-tracker.address.rm1</name>
		<value>hadoop01:23125</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address.rm2</name>
		<value>hadoop02:23125</value>
	</property>
	<!-- RM web application addresses -->
	<property>
		<name>yarn.resourcemanager.webapp.address.rm1</name>
		<value>hadoop01:8088</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.address.rm2</name>
		<value>hadoop02:8088</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.https.address.rm1</name>
		<value>hadoop01:23189</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.https.address.rm2</name>
		<value>hadoop02:23189</value>
	</property>



	<property>
	   <name>yarn.log-aggregation-enable</name>
	   <value>true</value>
	</property>
	<property>
		 <name>yarn.log.server.url</name>
		 <value>http://hadoop01:19888/jobhistory/logs</value>
	</property>


	<property>
		<name>yarn.nodemanager.resource.memory-mb</name>
		<value>2048</value>
	</property>
	<property>
		<name>yarn.scheduler.minimum-allocation-mb</name>
		<value>1024</value>
		<description>Minimum memory a single container can request; default 1024 MB</description>
	 </property>

  
  <property>
	<name>yarn.scheduler.maximum-allocation-mb</name>
	<value>2048</value>
	<description>Maximum memory a single container can request; default 8192 MB</description>
  </property>

   <property>
       <name>yarn.nodemanager.resource.cpu-vcores</name>
       <value>2</value>
    </property>


</configuration>
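
A quick sanity check on these numbers: each NodeManager advertises 2048 MB and 2 vcores, requests below 1024 MB are raised to 1024 MB, and nothing above 2048 MB can be granted, so a node runs at most two 1 GB containers (or a single 2 GB container) at a time. Raise yarn.nodemanager.resource.memory-mb on machines with more RAM.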

6. slaves

hadoop01
hadoop02
hadoop03

7. Add JAVA_HOME to hadoop-env.sh, mapred-env.sh, and yarn-env.sh

export JAVA_HOME=/usr/local/java/jdk1.8.0_45

8. Hadoop and ZooKeeper environment variables

[hadoop@hadoop01 ~]$ cat .bashrc 
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions
export JAVA_HOME=/usr/local/java/jdk1.8.0_45
export HADOOP_HOME=/home/hadoop/app/hadoop
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper

export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH

9. Distribute the Hadoop installation to hadoop02 and hadoop03, as sketched below.
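
A minimal sketch of the distribution, assuming the same /home/hadoop/app layout and the .bashrc from step 8 on every host:

[hadoop@hadoop01 app]$ scp -r hadoop-2.6.0-cdh5.16.2 hadoop@hadoop02:~/app/
[hadoop@hadoop01 app]$ scp -r hadoop-2.6.0-cdh5.16.2 hadoop@hadoop03:~/app/
[hadoop@hadoop01 ~]$ scp ~/.bashrc hadoop@hadoop02:~/ && scp ~/.bashrc hadoop@hadoop03:~/
# Recreate the hadoop symlink on the other two hosts
[hadoop@hadoop02 app]$ ln -s hadoop-2.6.0-cdh5.16.2 hadoop
[hadoop@hadoop03 app]$ ln -s hadoop-2.6.0-cdh5.16.2 hadoop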

VI. Initialization

  1. Start ZooKeeper
[hadoop@hadoop01 ~]$ zkServer.sh start
[hadoop@hadoop02 ~]$ zkServer.sh start
[hadoop@hadoop03 ~]$ zkServer.sh start
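
Each node's role can be checked afterwards; in a healthy three-node ensemble one server reports "leader" and the other two "follower":

[hadoop@hadoop01 ~]$ zkServer.sh status
[hadoop@hadoop02 ~]$ zkServer.sh status
[hadoop@hadoop03 ~]$ zkServer.sh status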

2. Start the JournalNodes

[hadoop@hadoop01 sbin]$ hadoop-daemon.sh start journalnode 
[hadoop@hadoop02 sbin]$ hadoop-daemon.sh start journalnode 
[hadoop@hadoop03 sbin]$ hadoop-daemon.sh start journalnode 
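
jps on each host should now show a JournalNode process alongside QuorumPeerMain from the previous step:

[hadoop@hadoop01 ~]$ jps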

3. Format the NameNode (on hadoop01 only)

[hadoop@hadoop01 hadoop]$  hadoop namenode -format 

4. Synchronize the NameNode metadata
Copy the metadata from hadoop01 to hadoop02, mainly the contents of dfs.namenode.name.dir and dfs.namenode.edits.dir.
Also make sure the shared edits directory (dfs.namenode.shared.edits.dir) contains all of the NameNode's edit log.

[hadoop@hadoop01 app]$ ssh hadoop02 "mkdir -p /home/hadoop/data/dfs"
[hadoop@hadoop01 app]$ scp -r /home/hadoop/data/dfs/* hadoop@hadoop02:/home/hadoop/data/dfs/
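
Alternatively, the standby NameNode can be initialized with the built-in bootstrap command instead of copying directories by hand; this assumes the NameNode on hadoop01 has been started first (hadoop-daemon.sh start namenode). Pick one approach, not both:

[hadoop@hadoop02 ~]$ hdfs namenode -bootstrapStandby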

5. Initialize the ZKFC state in ZooKeeper

[hadoop@hadoop01 bin]$ hdfs zkfc -formatZK 
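
Formatting ZKFC creates a znode for the nameservice in ZooKeeper, which can be inspected with the ZooKeeper CLI (the /hadoop-ha path is the usual layout):

[hadoop@hadoop01 ~]$ zkCli.sh -server hadoop01:2181
# Inside the CLI, "ls /hadoop-ha" should list the "hadoop" nameservice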

6. Start HDFS

[hadoop@hadoop01 bin]$  start-dfs.sh 
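
After start-dfs.sh, the HA state of the two NameNodes can be checked; one should report active and the other standby:

[hadoop@hadoop01 ~]$ hdfs haadmin -getServiceState nn1
[hadoop@hadoop01 ~]$ hdfs haadmin -getServiceState nn2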

VII. Summary
1. Start the cluster

[hadoop@hadoop01 hadoop]$ zkServer.sh start  
[hadoop@hadoop02 hadoop]$ zkServer.sh start 
[hadoop@hadoop03 hadoop]$ zkServer.sh start 
 
[hadoop@hadoop01 hadoop]$ start-all.sh
[hadoop@hadoop02 hadoop]$ yarn-daemon.sh start resourcemanager 
[hadoop@hadoop01 hadoop]$ mr-jobhistory-daemon.sh start historyserver 
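
Once everything is up, the ResourceManager HA state can be checked the same way, and jps on each host should match the process list in the cluster plan:

[hadoop@hadoop01 hadoop]$ yarn rmadmin -getServiceState rm1
[hadoop@hadoop01 hadoop]$ yarn rmadmin -getServiceState rm2
[hadoop@hadoop01 hadoop]$ jps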

2. Stop the cluster

[hadoop@hadoop01 hadoop]$ mr-jobhistory-daemon.sh stop historyserver 
[hadoop@hadoop02 hadoop]$ yarn-daemon.sh stop resourcemanager 
[hadoop@hadoop01 hadoop]$ stop-all.sh
[hadoop@hadoop01 hadoop]$ zkServer.sh stop  
[hadoop@hadoop02 hadoop]$ zkServer.sh stop 
[hadoop@hadoop03 hadoop]$ zkServer.sh stop 

3. Monitor the cluster

[hadoop@hadoop01 hadoop]$ hdfs dfsadmin -report 

4. Web UIs

HDFS:http://hadoop01:50070/ 
HDFS:http://hadoop02:50070/ 
 
ResourceManager (Active): http://hadoop01:8088
ResourceManager (Standby): http://hadoop02:8088/cluster/cluster
JobHistory:http://hadoop01:19888/jobhistory 

5. Start/stop individual daemons

[hadoop@hadoop01 hadoop]$ hadoop-daemon.sh start|stop namenode|datanode|journalnode|zkfc
[hadoop@hadoop01 hadoop]$ yarn-daemon.sh start|stop resourcemanager|nodemanager