Building a Highly Available Hadoop 2.6.0 (CDH) Cluster

I. Preparing the Installation Packages

hadoop-2.6.0-cdh5.16.2.tar.gz
jdk-8u141-linux-x64.tar.gz
zookeeper-3.4.5-cdh5.16.2.tar.gz

II. Cluster Plan

hadoop01
hadoop02
hadoop03

III. Setup Steps

1. Configure the three VMs (hadoop01 shown as the example)

1.1 IP configuration

[root@hadoop01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=01377317-1d24-4232-8cc7-43dc176622dd
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.42.41
GATEWAY=192.168.42.2
NETMASK=255.255.255.0
DNS1=114.114.114.114
DNS2=8.8.8.8

Restart the network service and verify connectivity:

[root@hadoop01 /]# service network restart
[root@hadoop01 /]# ping www.baidu.com

1.2 Set the hostname

[root@hadoop01 /]# vi /etc/hostname
hadoop01

1.3 Map all hostnames in /etc/hosts

[root@hadoop01 /]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.42.41 hadoop01
192.168.42.42 hadoop02
192.168.42.43 hadoop03 
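The three host entries above can also be appended in one step. A minimal sketch, using a scratch file named hosts-demo for illustration; on the real node the target would be /etc/hosts and requires root:

```shell
# Scratch stand-in for /etc/hosts (on the node: HOSTS_FILE=/etc/hosts, as root).
HOSTS_FILE=./hosts-demo
: > "$HOSTS_FILE"                 # start fresh in the demo
# Append the cluster's address-to-hostname mappings in one go.
cat >> "$HOSTS_FILE" <<'EOF'
192.168.42.41 hadoop01
192.168.42.42 hadoop02
192.168.42.43 hadoop03
EOF
```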

1.4 Disable the firewall

[root@hadoop01 /]# systemctl stop firewalld
[root@hadoop01 /]# systemctl disable firewalld
[root@hadoop01 /]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@hadoop01 /]#

1.5 Create the hadoop user

[root@hadoop01 /]# useradd hadoop
[root@hadoop01 /]# passwd hadoop

Give the hadoop user sudo privileges (editing with visudo instead is safer, since it syntax-checks the file on save):

[root@hadoop01 /]# vim /etc/sudoers

## Allow root to run any commands anywhere 
root	ALL=(ALL) 	ALL
hadoop  ALL=(ALL)       ALL

1.6 Set up passwordless SSH login

[hadoop@hadoop02 ~]$ ssh-keygen -t rsa

Press Enter three times to accept the defaults:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:mk/7sugGlqa8uP+SHaXGWv1p215P16kv2RkDNf5Sf6s hadoop@hadoop02
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|               o |
|              o .|
|      .      . ..|
|   . =  S     ..o|
|    @ .o      .o*|
| . X oo...  . +oO|
| .* . .+=o . =.= |
|ooo+.oo.=*+  E=. |
+----[SHA256]-----+
[hadoop@hadoop02 ~]$ 

Copy the public key to all three machines:

[hadoop@hadoop01 ~]$ ssh-copy-id hadoop01
[hadoop@hadoop01 ~]$ ssh-copy-id hadoop02
[hadoop@hadoop01 ~]$ ssh-copy-id hadoop03

Repeat this on every machine. Then, from hadoop01, test logging in to hadoop02:

[hadoop@hadoop01 ~]$  ssh hadoop02
Last login: Wed Jan 20 10:09:33 2021 from hadoop01
[hadoop@hadoop02 ~]$ 
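Since each machine runs the same three ssh-copy-id commands, a small loop saves typing. A sketch, assuming the hostnames from /etc/hosts above; the ssh-copy-id call is commented out so the snippet can be reviewed before use (it prompts for each host's password when enabled):

```shell
# Hosts that need a copy of this machine's public key.
HOSTS="hadoop01 hadoop02 hadoop03"
for h in $HOSTS; do
    echo "would run: ssh-copy-id $h"
    # ssh-copy-id "$h"    # uncomment to actually push the key
done
```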

2. Java environment

2.1 As root, create a java directory under /usr, upload the JDK archive into it, and extract it.

[root@hadoop01 ~]# mkdir /usr/java/
[root@hadoop01 java]# rz -e
[root@hadoop01 java]# tar -zxvf jdk-8u141-linux-x64.tar.gz

2.2 Add environment variables (as the hadoop user)

[hadoop@hadoop01 ~]$ vim .bashrc

Append the following:

export JAVA_HOME=/usr/java/jdk1.8.0_141
export PATH=$PATH:$JAVA_HOME/bin

Re-open the shell (or run source ~/.bashrc) and verify that it took effect:

[hadoop@hadoop01 ~]$ which java
/usr/java/jdk1.8.0_141/bin/java
[hadoop@hadoop01 ~]$ 

3. ZooKeeper installation

3.1 Create app and soft directories under /home/hadoop/

[hadoop@hadoop01 ~]$ mkdir app
[hadoop@hadoop01 ~]$ mkdir soft

3.2 In the soft directory, upload the ZooKeeper archive, extract it into app, then create a zookeeper symlink inside app

[hadoop@hadoop01 soft]$ rz -e
[hadoop@hadoop01 soft]$ tar -zxvf zookeeper-3.4.5-cdh5.16.2.tar.gz -C ../app/
[hadoop@hadoop01 app]$ cd ../app/
[hadoop@hadoop01 app]$ ln -s zookeeper-3.4.5-cdh5.16.2 zookeeper

3.3 Configure ZooKeeper environment variables

[hadoop@hadoop01 ~]$ vi .bashrc
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin

3.4 In zookeeper/conf, rename zoo_sample.cfg to zoo.cfg, then edit it as follows.

Change:

dataDir=/home/hadoop/app/zookeeper/data

Add:

server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
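The same edits can be scripted. A sketch that works on a local stand-in file so it is self-contained; a real run would target /home/hadoop/app/zookeeper/conf/zoo.cfg after copying zoo_sample.cfg:

```shell
# Minimal stand-in for zoo_sample.cfg (real file: conf/zoo_sample.cfg).
printf 'tickTime=2000\ninitLimit=10\nsyncLimit=5\ndataDir=/tmp/zookeeper\nclientPort=2181\n' > ./zoo_sample.cfg
# Drop the sample dataDir, then write the value from 3.5 and the quorum members.
grep -v '^dataDir=' ./zoo_sample.cfg > ./zoo.cfg
cat >> ./zoo.cfg <<'EOF'
dataDir=/home/hadoop/app/zookeeper/data
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
EOF
```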

3.5 Under the zookeeper directory, create a data directory containing a myid file with the content 1

[hadoop@hadoop01 zookeeper]$ mkdir data
[hadoop@hadoop01 zookeeper]$ vim data/myid

3.6 Copy the app directory to hadoop02 and hadoop03, then change their myid contents to 2 and 3 respectively

[hadoop@hadoop01 ~]$ scp -r /home/hadoop/app hadoop02:/home/hadoop/app
[hadoop@hadoop01 ~]$ scp -r /home/hadoop/app hadoop03:/home/hadoop/app
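Each node's myid must match its server.N line in zoo.cfg: 1 on hadoop01, 2 on hadoop02, 3 on hadoop03. A sketch of that mapping, writing to scratch files in a hypothetical myid-demo directory so it is runnable locally; the commented ssh line shows the real per-node write:

```shell
# Scratch directory; on each node the real path is /home/hadoop/app/zookeeper/data.
MYID_DIR=./myid-demo
mkdir -p "$MYID_DIR"
i=1
for h in hadoop01 hadoop02 hadoop03; do
    echo "$i" > "$MYID_DIR/$h.myid"
    # ssh "$h" "echo $i > /home/hadoop/app/zookeeper/data/myid"   # real cluster
    i=$((i + 1))
done
```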

4. Hadoop installation

4.1 Put the downloaded Hadoop tarball in the soft directory, extract it into app, and create a symlink

[hadoop@hadoop01 soft]$ tar -zxvf hadoop-2.6.0-cdh5.16.2.tar.gz -C ../app/
[hadoop@hadoop01 app]$ ln -s hadoop-2.6.0-cdh5.16.2/ hadoop

4.2 Configure environment variables

[hadoop@hadoop01 ~]$ vim .bashrc

# User specific aliases and functions
export JAVA_HOME=/usr/java/jdk1.8.0_141
export HADOOP_HOME=/home/hadoop/app/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

4.3 Configuration files
4.3.1 core-site.xml

<configuration>
<!-- fs.defaultFS: the NameNode URI; its authority must match dfs.nameservices in hdfs-site.xml -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoopnn12</value>
        </property>
        <!-- ============================== Trash ============================== -->
        <property>
                <name>fs.trash.checkpoint.interval</name>
                <value>0</value>
        </property>
        <property>
                <!-- Minutes after which a .Trash checkpoint is deleted; the server-side value overrides the client's. Default 0 = never delete -->
                <name>fs.trash.interval</name>
                <value>10080</value>
        </property>

         <!-- Hadoop's base temporary directory; many other paths default to locations under it. If hdfs-site.xml does not set the NameNode and DataNode directories, they default here -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/tmp/hadoop</value>
        </property>

         <!-- ZooKeeper quorum -->
        <property>
                <name>ha.zookeeper.quorum</name>
                <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
        </property>
         <!-- ZooKeeper session timeout, in milliseconds -->
        <property>
                <name>ha.zookeeper.session-timeout.ms</name>
                <value>2000</value>
        </property>

        <property>
           <name>hadoop.proxyuser.hadoop.hosts</name>
           <value>*</value>
        </property>
        <property>
            <name>hadoop.proxyuser.hadoop.groups</name>
            <value>*</value>
       </property>
       <property>
                  <name>io.compression.codecs</name>
                  <value>org.apache.hadoop.io.compress.GzipCodec,
                        org.apache.hadoop.io.compress.DefaultCodec,
                        org.apache.hadoop.io.compress.BZip2Codec,
                        org.apache.hadoop.io.compress.SnappyCodec
                  </value>
       </property>
</configuration>

4.3.2 hdfs-site.xml

<configuration>
<!-- HDFS supergroup -->
        <property>
                <name>dfs.permissions.superusergroup</name>
                <value>hadoop</value>
        </property>

        <!-- Enable WebHDFS -->
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/hadoop/data/dfs/name</value>
                <description>Local directory where the NameNode stores the name table (fsimage); adjust as needed</description>
        </property>
        <property>
                <name>dfs.namenode.edits.dir</name>
                <value>${dfs.namenode.name.dir}</value>
                <description>Local directory where the NameNode stores the transaction file (edits); adjust as needed</description>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/home/hadoop/data/dfs/data</value>
                <description>Local directory where the DataNode stores blocks; adjust as needed</description>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <!-- Block size: 128 MB (134217728 bytes; the default) -->
        <property>
                <name>dfs.blocksize</name>
                <value>134217728</value>
        </property>
        <!--======================================================================= -->
        <property>
                <name>dfs.nameservices</name>
                <value>hadoopnn12</value>
        </property>
        <property>
                <!-- NameNode IDs; this version supports at most two NameNodes -->
                <name>dfs.ha.namenodes.hadoopnn12</name>
                <value>nn1,nn2</value>
        </property>

        <!-- HDFS HA: dfs.namenode.rpc-address.[nameservice ID].[nn ID], the RPC address -->
        <property>
                <name>dfs.namenode.rpc-address.hadoopnn12.nn1</name>
                <value>hadoop01:8020</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.hadoopnn12.nn2</name>
                <value>hadoop02:8020</value>
        </property>

        <!-- HDFS HA: dfs.namenode.http-address.[nameservice ID].[nn ID], the HTTP address -->
        <property>
                <name>dfs.namenode.http-address.hadoopnn12.nn1</name>
                <value>hadoop01:50070</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.hadoopnn12.nn2</name>
                <value>hadoop02:50070</value>
        </property>

        <!-- ================== NameNode edit-log synchronization ================== -->
        <!-- Guarantees that metadata can be recovered -->
        <property>
                <name>dfs.journalnode.http-address</name>
                <value>0.0.0.0:8480</value>
        </property>
        <property>
                <name>dfs.journalnode.rpc-address</name>
                <value>0.0.0.0:8485</value>
        </property>
        <property>
                <!-- JournalNode quorum used by the QuorumJournalManager to store the edit log -->
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/hadoopnn12</value>
        </property>

        <property>
                <!-- Local directory where JournalNodes store their data -->
                <name>dfs.journalnode.edits.dir</name>
                <value>/home/hadoop/data/dfs/jn</value>
        </property>
        <!-- ================== Client failover ================== -->
        <property>
                <!-- Strategy DataNodes and clients use to identify the active NameNode -->
                <!-- Class implementing automatic failover on connection failure -->
                <name>dfs.client.failover.proxy.provider.hadoopnn12</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <!-- ================== NameNode fencing ================== -->
        <!-- After a failover, fences the stopped NameNode so two active NameNodes never run at once -->
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>sshfence</value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/home/hadoop/.ssh/id_rsa</value>
        </property>
        <property>
                <!-- Milliseconds before a fencing attempt is considered failed -->
                <name>dfs.ha.fencing.ssh.connect-timeout</name>
                <value>30000</value>
        </property>

        <!-- ================== Automatic NameNode failover via ZKFC and ZooKeeper ================== -->
        <!-- Enable ZooKeeper-based automatic failover -->
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
        <!-- File listing the DataNodes permitted to connect to the NameNode -->
         <property>
                <name>dfs.hosts</name>
                <value>/home/hadoop/app/hadoop/etc/hadoop/slaves</value>
        </property>
</configuration>

4.3.3 mapred-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <!-- Run MapReduce applications on YARN -->
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <!-- ================== JobHistory Server ================== -->
        <!-- MapReduce JobHistory Server RPC address; default port 10020 -->
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>hadoop01:10020</value>
        </property>
        <!-- MapReduce JobHistory Server web UI address; default port 19888 -->
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>hadoop01:19888</value>
        </property>

<!-- Compress map-side output with Snappy -->
  <property>
      <name>mapreduce.map.output.compress</name>
      <value>true</value>
  </property>
              
  <property>
      <name>mapreduce.map.output.compress.codec</name> 
      <value>org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

</configuration>

4.3.4 yarn-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
<!-- ================== NodeManager ================== -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
                <name>yarn.nodemanager.localizer.address</name>
                <value>0.0.0.0:23344</value>
                <description>Address where the localizer IPC is.</description>
        </property>
        <property>
                <name>yarn.nodemanager.webapp.address</name>
                <value>0.0.0.0:23999</value>
                <description>NM Webapp address.</description>
        </property>

        <!-- ================== ResourceManager HA ================== -->
        <!-- Resource Manager Configs -->
        <property>
                <name>yarn.resourcemanager.connect.retry-interval.ms</name>
                <value>2000</value>
        </property>
        <property>
                <name>yarn.resourcemanager.ha.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
        <!-- Use the embedded elector for automatic failover; works with ZKRMStateStore to handle fencing -->
        <property>
                <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
                <value>true</value>
        </property>
        <!-- Cluster ID, so HA election operates on the right cluster -->
        <property>
                <name>yarn.resourcemanager.cluster-id</name>
                <value>yarn-cluster</value>
        </property>
        <property>
                <name>yarn.resourcemanager.ha.rm-ids</name>
                <value>rm1,rm2</value>
        </property>


    <!-- This node's own RM id can be set explicitly (optional; differs per node):
                <property>
                 <name>yarn.resourcemanager.ha.id</name>
                 <value>rm2</value>
         </property>
         -->

        <property>
                <name>yarn.resourcemanager.scheduler.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
        </property>
        <property>
                <name>yarn.resourcemanager.recovery.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
                <value>5000</value>
        </property>
        <!-- ZKRMStateStore -->
        <property>
                <name>yarn.resourcemanager.store.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
        <property>
                <name>yarn.resourcemanager.zk-address</name>
                <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
        </property>
        <property>
                <name>yarn.resourcemanager.zk.state-store.address</name>
                <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
        </property>
        <!-- RPC address clients use to reach the RM (applications manager interface) -->
        <property>
                <name>yarn.resourcemanager.address.rm1</name>
                <value>hadoop01:23140</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address.rm2</name>
                <value>hadoop02:23140</value>
        </property>
        <!-- RPC address ApplicationMasters use to reach the RM (scheduler interface) -->
        <property>
                <name>yarn.resourcemanager.scheduler.address.rm1</name>
                <value>hadoop01:23130</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address.rm2</name>
                <value>hadoop02:23130</value>
        </property>
        <!-- RM admin interface -->
        <property>
                <name>yarn.resourcemanager.admin.address.rm1</name>
                <value>hadoop01:23141</value>
        </property>
        <property>
                <name>yarn.resourcemanager.admin.address.rm2</name>
                <value>hadoop02:23141</value>
        </property>
        <!-- RPC address NodeManagers use to reach the RM (resource tracker) -->
        <property>
                <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
                <value>hadoop01:23125</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
                <value>hadoop02:23125</value>
        </property>
        <!-- RM web application addresses -->
        <property>
                <name>yarn.resourcemanager.webapp.address.rm1</name>
                <value>hadoop01:8088</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address.rm2</name>
                <value>hadoop02:8088</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.https.address.rm1</name>
                <value>hadoop01:23189</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.https.address.rm2</name>
                <value>hadoop02:23189</value>
        </property>
        <property>
           <name>yarn.log-aggregation-enable</name>
           <value>true</value>
        </property>
        <property>
                 <name>yarn.log.server.url</name>
                 <value>http://hadoop01:19888/jobhistory/logs</value>
        </property>
        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>2048</value>
        </property>
        <property>
                <name>yarn.scheduler.minimum-allocation-mb</name>
                <value>1024</value>
                <description>Minimum memory a single container can request; default 1024 MB</description>
         </property>
  <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
        <description>Maximum memory a single container can request; default 8192 MB</description>
  </property>

   <property>
       <name>yarn.nodemanager.resource.cpu-vcores</name>
       <value>2</value>
    </property>
</configuration>

4.3.5 slaves

hadoop01
hadoop02
hadoop03

4.3.6 hadoop-env.sh

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_141/

4.3.7 mapred-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_141/

4.3.8 yarn-env.sh

# some Java parameters
export JAVA_HOME=/usr/java/jdk1.8.0_141/

4.4 Create a tmp directory under /home/hadoop/ on every node (hadoop.tmp.dir from core-site.xml lives beneath it)

[hadoop@hadoop02 hadoop]$ mkdir tmp

4.5 Copy the Hadoop directory to hadoop02 and hadoop03

[hadoop@hadoop01 app]$ scp -r hadoop-2.6.0-cdh5.16.2 hadoop02:/home/hadoop/app/
[hadoop@hadoop01 app]$ scp -r hadoop-2.6.0-cdh5.16.2 hadoop03:/home/hadoop/app/
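Note that scp transfers only the versioned directory, not the hadoop symlink created in 4.1, while HADOOP_HOME in .bashrc points at the symlink. It therefore has to be recreated on hadoop02 and hadoop03. Demonstrated here in a scratch directory named app-demo (hypothetical); on the nodes the path is /home/hadoop/app:

```shell
# Scratch stand-in for /home/hadoop/app on hadoop02/hadoop03.
APP=./app-demo
mkdir -p "$APP/hadoop-2.6.0-cdh5.16.2"
# Recreate the symlink that scp does not transfer (-sfn replaces any stale link).
ln -sfn hadoop-2.6.0-cdh5.16.2 "$APP/hadoop"
```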

5. Starting the cluster

5.1 Start ZooKeeper

[hadoop@hadoop01 ~]$ zkServer.sh start
[hadoop@hadoop02 ~]$ zkServer.sh start
[hadoop@hadoop03 ~]$ zkServer.sh start

[hadoop@hadoop01 ~]$ zkServer.sh status
Mode: follower
[hadoop@hadoop02 ~]$ zkServer.sh status
Mode: leader
[hadoop@hadoop03 ~]$ zkServer.sh status
Mode: follower
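Logging in to each node for start/stop/status gets tedious; a small wrapper can drive zkServer.sh on every node over ssh. A sketch (the ssh line is commented out so it is safe to review first; it assumes zkServer.sh is on each node's PATH via the .bashrc from 3.3):

```shell
# Usage: zk-all.sh [start|stop|status]; defaults to status.
action="${1:-status}"
for h in hadoop01 hadoop02 hadoop03; do
    echo "== $h: zkServer.sh $action =="
    # ssh "$h" "zkServer.sh $action"   # uncomment on the real cluster
done
```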

5.2 Format the NameNode

Start the JournalNode process on every JournalNode machine first:

[hadoop@hadoop01 hadoop]$ hadoop-daemon.sh start journalnode
[hadoop@hadoop02 hadoop]$ hadoop-daemon.sh start journalnode
[hadoop@hadoop03 hadoop]$ hadoop-daemon.sh start journalnode
[hadoop@hadoop01 hadoop]$ jps
12000 Jps
11953 JournalNode
11827 QuorumPeerMain

[hadoop@hadoop01 ~]$ hdfs namenode -format

Synchronize the NameNode metadata to hadoop02 (equivalently, run hdfs namenode -bootstrapStandby on hadoop02):

[hadoop@hadoop01 ~]$ scp -r data hadoop02:/home/hadoop/

5.3 Initialize ZKFC

[hadoop@hadoop01 ~]$ hdfs zkfc -formatZK

5.4 Start HDFS

[hadoop@hadoop01 ~]$ start-dfs.sh

Open in a browser:
http://hadoop01:50070
http://hadoop02:50070
5.5 Start YARN

[hadoop@hadoop01 ~]$ start-yarn.sh

5.6 Start the standby ResourceManager on hadoop02

[hadoop@hadoop02 ~]$ yarn-daemon.sh start resourcemanager

6. Shutting down the cluster

6.1 Stop Hadoop

stop-yarn.sh
yarn-daemon.sh stop resourcemanager
stop-dfs.sh

6.2 Stop ZooKeeper

[hadoop@hadoop01 ~]$ zkServer.sh stop
[hadoop@hadoop02 ~]$ zkServer.sh stop
[hadoop@hadoop03 ~]$ zkServer.sh stop