Setting Up a High-Availability Hadoop Cluster on 4 Virtual Machines

Contents

I. Cluster Installation

1. Choosing software versions

2. Machine configuration

    1) Allocating the 4 machines

    2) Editing hosts

    3) Passwordless login

3. Installing the software

      1) Install the JDK

      2) Install ZooKeeper

      3) Install Hadoop

      4) Summary

II. Starting the Cluster

1. Start ZooKeeper

2. Start Hadoop

1) Start the journalnode processes and initialize the namenodes

2) Start the file system

3) Start the YARN cluster

4) Start the MapReduce job history server

III. Verifying the Cluster

IV. One-Click Start Scripts


I. Cluster Installation

1. Choosing software versions

Choosing compatible versions of Hadoop, ZooKeeper, the JDK and the operating system up front saves a lot of effort and makes it easier to add components such as Hive, Storm or Spark later. In particular, avoid mixing architectures (32-bit Hadoop on a 64-bit OS, 64-bit Hadoop on a 32-bit OS, and so on) and then hunting for a matching libhadoop.so.1.0.0 to swap in; that path leads to a series of obscure, deeply hidden problems. The software and OS versions I used are:

OS: 64-bit Red Hat 6.5  link: https://pan.baidu.com/s/12Mr6RHzaYac4F-xaAzm59Q  extraction code: op9z
Software: JDK 1.7, Hadoop 2.6, ZooKeeper 3.4  link: https://pan.baidu.com/s/13phkiYe2zlw9NhaEvbDRzQ  extraction code: t59h

2. Machine configuration

    1) Allocating the 4 machines

Hostname    IP                 Roles
redhat01    192.168.202.121    zookeeper, datanode, namenode
redhat02    192.168.202.122    zookeeper, datanode, namenode
redhat03    192.168.202.123    zookeeper, datanode, resourcemanager
redhat04    192.168.202.124    zookeeper (observer), datanode, resourcemanager

    2) Editing hosts

Add the following entries to the hosts file on all 4 machines:

vim /etc/hosts

192.168.202.121 redhat01
192.168.202.122 redhat02
192.168.202.123 redhat03
192.168.202.124 redhat04
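
An optional sanity check, using nothing beyond plain ping, to confirm that every hostname now resolves on every machine:

# run on each machine; every host should answer
for h in redhat01 redhat02 redhat03 redhat04; do
    ping -c 1 $h
done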

    3) Passwordless login

(It is recommended to create the same ordinary, non-root user on all 4 machines and perform everything below as that user.) Run the following on each of the 4 machines:

ssh-keygen -t rsa

Press Enter through all the prompts, then on redhat01 run:

touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

Append the contents of ~/.ssh/id_rsa.pub from all 4 machines to the authorized_keys file created above, then distribute authorized_keys to the other 3 machines:

scp ~/.ssh/authorized_keys redhat02:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys redhat03:~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys redhat04:~/.ssh/authorized_keys
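
As an alternative to hand-editing authorized_keys, the ssh-copy-id tool that usually ships with the OpenSSH client package can push the local public key to each host. A rough sketch, run once on every machine:

# prompts for the remote password on each hop, appends ~/.ssh/id_rsa.pub
# to the remote ~/.ssh/authorized_keys and fixes the permissions
for h in redhat01 redhat02 redhat03 redhat04; do
    ssh-copy-id $h
done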

After that, ssh from every machine to all 4 machines (including itself). The first connection asks you to type yes; subsequent connections will not. For example, on redhat01:

ssh redhat01
# type yes when prompted

ssh redhat02
# type yes when prompted

ssh redhat03
# type yes when prompted

ssh redhat04
# type yes when prompted

3. Installing the software

      1) Install the JDK

After extracting the JDK, configure the environment variables.

Note: be sure to put these in ~/.bashrc; then you do not need to set JAVA_HOME in hadoop-env.sh. Otherwise the Hadoop daemons will complain that JAVA_HOME is not set, because they do not read ~/.bash_profile but do read ~/.bashrc. If you want the details, look up the difference between .bashrc and .bash_profile.

vim ~/.bashrc

# append at the end
export JAVA_HOME=/home/hadoop/bigdata/jdk1.7.0_80
export PATH=.:$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
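
To confirm the JDK is picked up, reload the file and check the version (both commands are part of any JDK install):

source ~/.bashrc
java -version    # should report 1.7.0_80
echo $JAVA_HOME  # should print the path configured above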

      2) Install ZooKeeper

Extract ZooKeeper and configure the environment:

vim ~/.bash_profile

##  append at the end
export ZOOKEEPER_HOME=/home/hadoop/bigdata/zookeeper-3.4.11
PATH=$PATH:$HOME/bin:$ZOOKEEPER_HOME/bin

export PATH

Go to the ZooKeeper install directory; its configuration files live in the conf directory:

cp zoo_sample.cfg zoo.cfg

Copy the following configuration into zoo.cfg:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/home/hadoop/bigdata/zookeeper-3.4.11/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## added manually
# server.id=hostname:peer-communication-port:leader-election-port
# Note: each node is assigned an id here, and these ids go into the config file.
# An id is any unique number between 1 and 255 -- keep track of which id belongs to which node.
# ":observer" marks that machine as an observer
server.1=redhat01:2888:3888
server.2=redhat02:2888:3888
server.3=redhat03:2888:3888
server.4=redhat04:2888:3888:observer

The three ZooKeeper roles:

           leader: accepts and handles all read and write requests; every write in the cluster is processed by the leader

           follower: accepts all read and write requests, handles reads itself, and forwards writes to the leader

           observer: identical to a follower except that it can neither vote nor be elected

Pay attention to the line dataDir=/home/hadoop/bigdata/zookeeper-3.4.11/data: change it to your own directory and create that directory by hand. Inside it, also create a myid file whose content is different on each machine and must match the number after "server." at the end of zoo.cfg:

# on redhat01
echo 1 > /home/hadoop/bigdata/zookeeper-3.4.11/data/myid

# on redhat02
echo 2 > /home/hadoop/bigdata/zookeeper-3.4.11/data/myid

# on redhat03
echo 3 > /home/hadoop/bigdata/zookeeper-3.4.11/data/myid

# on redhat04
echo 4 > /home/hadoop/bigdata/zookeeper-3.4.11/data/myid
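
If passwordless SSH is already working, the same thing can be done from redhat01 in one loop; a sketch, assuming the paths above:

# create the data directory and write the matching myid on every node
i=1
for h in redhat01 redhat02 redhat03 redhat04; do
    ssh $h "mkdir -p /home/hadoop/bigdata/zookeeper-3.4.11/data && echo $i > /home/hadoop/bigdata/zookeeper-3.4.11/data/myid"
    i=$((i + 1))
done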

      3) Install Hadoop

Extract Hadoop and configure the environment:

vim ~/.bash_profile


##  append at the end
export ZOOKEEPER_HOME=/home/hadoop/bigdata/zookeeper-3.4.11
export HADOOP_HOME=/home/hadoop/bigdata/hadoop-2.6.0
PATH=$PATH:$HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export PATH

Edit the configuration files under etc/hadoop in the Hadoop install directory.

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Set the hdfs nameservice to ns (the name is up to you) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns/</value>
    </property>
    
    <!-- Hadoop working directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/bigdata/hadoop-2.6.0/data/hadoopdata</value>
    </property>

    <!-- ZooKeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>redhat01:2181,redhat02:2181,redhat03:2181,redhat04:2181</value>
    </property>
    <!-- Raise the ipc retry parameters to avoid a ConnectException against the
         journalnode service; without them, start-dfs.sh may fail with
         "connection refused" when bringing the cluster up.
    -->
    <property>
        <name>ipc.client.connect.max.retries</name>
        <value>50</value>
        <description>Indicates the number of retries a client will make to establish a server connection.</description>
    </property>
    <property>
        <name>ipc.client.connect.retry.interval</name>
        <value>10000</value>
        <description>Indicates the number of milliseconds a client will wait for before retrying to establish a server connection.</description>
    </property>
</configuration>

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Replication factor: do not exceed the number of datanodes -->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>

    <!-- hdfs nameservice is ns; must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
    </property>

    <!-- The ns nameservice has two NameNodes: nn01 and nn02 -->
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn01,nn02</value>
    </property>

    <!-- RPC address of nn01 -->
    <property>
        <name>dfs.namenode.rpc-address.ns.nn01</name>
        <value>redhat01:9000</value>
    </property>
    <!-- HTTP address of nn01 -->
    <property>
        <name>dfs.namenode.http-address.ns.nn01</name>
        <value>redhat01:50070</value>
    </property>
    <!-- RPC address of nn02 -->
    <property>
        <name>dfs.namenode.rpc-address.ns.nn02</name>
        <value>redhat02:9000</value>
    </property>
    <!-- HTTP address of nn02 -->
    <property>
        <name>dfs.namenode.http-address.ns.nn02</name>
        <value>redhat02:50070</value>
    </property>

    <!-- Where the NameNode edits metadata is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://redhat01:8485;redhat02:8485;redhat03:8485/ns</value>
    </property>

    <!-- Local directory where the JournalNodes keep their data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/bigdata/hadoop-2.6.0/data/journaldata</value>
    </property>

    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <!-- Failover proxy provider -->
    <!-- This value is long: make sure it stays on a single line -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <!-- Fencing methods; separate multiple methods with newlines, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>

    <!-- The sshfence method needs passwordless ssh; change this to your own private-key path -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    
    <!-- sshfence connection timeout (30 s) -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>

mapred-site.xml 

Note: the distribution does not ship this file; copy mapred-site.xml.template and rename the copy to mapred-site.xml.
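
For example, assuming the install path used throughout this guide:

cd /home/hadoop/bigdata/hadoop-2.6.0/etc/hadoop
cp mapred-site.xml.template mapred-site.xml

Then give it the following content: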

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <!-- Address and port of the MapReduce job history server -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>redhat01:10020</value>
    </property>

    <!-- Web address of the MapReduce job history server -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>redhat01:19888</value>
    </property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->

    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>

    <!-- Cluster id of the RMs; the name is up to you -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>jyarn</value>
    </property>

    <!-- Logical ids of the RMs; the names are up to you -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>

    <!-- Hosts of the two RMs -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>redhat03</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>redhat04</value>
    </property>

    <!-- ZooKeeper cluster addresses -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>redhat01:2181,redhat02:2181,redhat03:2181,redhat04:2181</value>
    </property>

    <!-- Auxiliary service required to run MapReduce jobs -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <!-- Enable log aggregation for the YARN cluster -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <!-- Maximum retention time of aggregated YARN logs -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <!-- 1 day -->
        <value>86400</value>
    </property>

    <!-- Enable automatic recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>

    <!-- Store the resourcemanager state in the ZooKeeper cluster -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>


</configuration>

Finally, edit the slaves file to list the nodes that run datanodes:

vim slaves

redhat01
redhat02
redhat03
redhat04

      4) Summary

All of the above must be configured on all 4 machines. It is a bit tedious, so be careful; small mistakes here cause problems later. You can also configure one machine first and then copy the result straight to the others:

scp -r /home/hadoop/bigdata/hadoop-2.6.0 redhat02:/home/hadoop/bigdata/hadoop-2.6.0
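
A sketch of the same copy done in one loop for the three remaining nodes (same paths as above; adjust them if your layout differs):

# assumes /home/hadoop/bigdata already exists on each target
for h in redhat02 redhat03 redhat04; do
    scp -r /home/hadoop/bigdata/hadoop-2.6.0 $h:/home/hadoop/bigdata/
done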

Once passwordless login is in place, this kind of scp distribution is very convenient.

After all the preparation is done, reload the environment variables:

# run on every machine
source ~/.bash_profile

II. Starting the Cluster

1. Start ZooKeeper

Run the following on each of the four servers:

zkServer.sh start

After it starts, you can check each node's state; redhat04 should indeed be in observer mode.

Running jps should show the ZooKeeper process.
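
For example, both checks use tools that were installed above:

zkServer.sh status   # redhat04 should report "Mode: observer"
jps                  # should list a QuorumPeerMain process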

2. Start Hadoop

1) Start the journalnode processes and initialize the namenodes

Start the journalnode process on the three journalnode hosts redhat01, redhat02 and redhat03:

hadoop-daemon.sh start journalnode

Format the file system on redhat01:

hadoop namenode -format

Start the namenode process on redhat01:

hadoop-daemon.sh start namenode

Sync the namenode metadata on redhat02:

hadoop namenode -bootstrapStandby

Format zkfc on redhat01 or redhat02:

hdfs zkfc -formatZK

2) Start the file system

First stop all the processes, then bring everything up together with start-dfs.sh.

# on redhat01, redhat02 and redhat03: stop the journalnode process
hadoop-daemon.sh stop journalnode

# on redhat01: stop the namenode process
hadoop-daemon.sh stop namenode

# run jps on every machine: all of the above processes should now be stopped,
# leaving only jps itself and the ZooKeeper process
jps

Run start-dfs.sh on redhat01.

Once it finishes, the namenode, datanode, journalnode and zkfc processes should be up.

Access the web UI at http://ip:50070, where ip is the static IP of redhat01 or redhat02.
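
You can also query the active/standby state of the two namenodes from the command line; hdfs haadmin takes the namenode ids defined in hdfs-site.xml above:

hdfs haadmin -getServiceState nn01   # e.g. active
hdfs haadmin -getServiceState nn02   # e.g. standby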

 

3) Start the YARN cluster

Run start-yarn.sh on redhat03.

On redhat04, start the resourcemanager process:

yarn-daemon.sh start resourcemanager

View the YARN web UI at http://ip:8088, where ip is the static IP of redhat03 or redhat04.

Because the resourcemanager on redhat03 is active and the one on redhat04 is standby, opening redhat04's IP redirects you to redhat03. If your local machine's hosts file has no entry for redhat03, the browser reports that the host cannot be found; this does not mean the cluster is broken.
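
The same kind of check exists for the resourcemanagers, using the rm-ids from yarn-site.xml:

yarn rmadmin -getServiceState rm1   # e.g. active
yarn rmadmin -getServiceState rm2   # e.g. standby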

4) Start the MapReduce job history server

On redhat01, run:

mr-jobhistory-daemon.sh start historyserver

Access the job history web UI at http://ip:19888, where ip is redhat01's static IP.

 

III. Verifying the Cluster

The processes on the four machines are as follows:

[hadoop@redhat01 ~]$ jps
2443 JobHistoryServer
2286 NodeManager
2071 DFSZKFailoverController
1950 JournalNode
1787 DataNode
2516 Jps
1363 QuorumPeerMain
1688 NameNode


[hadoop@redhat02 ~]$ jps
1667 DFSZKFailoverController
1249 QuorumPeerMain
1873 NodeManager
1417 NameNode
1559 JournalNode
2058 Jps
1484 DataNode


[hadoop@redhat03 ~]$ jps
1494 JournalNode
1624 ResourceManager
2067 Jps
1245 QuorumPeerMain
1725 NodeManager
1420 DataNode


[hadoop@redhat04 ~]$ jps
1701 Jps
1476 NodeManager
1355 DataNode
1247 QuorumPeerMain
1622 ResourceManager

Check the cluster status:

[hadoop@redhat01 ~]$ hdfs dfsadmin -report
Configured Capacity: 50189762560 (46.74 GB)
Present Capacity: 21985267712 (20.48 GB)
DFS Remaining: 21984055296 (20.47 GB)
DFS Used: 1212416 (1.16 MB)
DFS Used%: 0.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (4):

Name: 192.168.202.124:50010 (redhat04)
Hostname: redhat04
Decommission Status : Normal
Configured Capacity: 12547440640 (11.69 GB)
DFS Used: 544768 (532 KB)
......
......(omitted)

[hadoop@redhat01 ~]$ 

Run the built-in Hadoop wordcount example.

# create the input directory
hadoop fs -mkdir -p /user/hadoop/input

# upload LICENSE.txt from the Hadoop install directory
hadoop fs -put /home/hadoop/bigdata/hadoop-2.6.0/LICENSE.txt /user/hadoop/input/

# run the built-in wordcount example that ships under share/hadoop/mapreduce/ in the install directory
hadoop jar /home/hadoop/bigdata/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/hadoop/input /user/hadoop/output

After the job finishes successfully, the results are under /user/hadoop/output on the cluster.
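
For example, the output can be listed and read with the usual HDFS commands (part-r-00000 is the default name of the reducer output file):

hadoop fs -ls /user/hadoop/output
hadoop fs -cat /user/hadoop/output/part-r-00000 | head -n 20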

IV. One-Click Start Scripts

Starting everything by hand each time is tedious, so here is a pair of one-click start and stop scripts for the cluster. They are meant for subsequent starts; the first-time initialization should still follow the steps above.

#!/bin/bash

zp_home_dir='/home/hadoop/bigdata/zookeeper-3.4.11/bin'
hd_home_dir='/home/hadoop/bigdata/hadoop-2.6.0/sbin'
nodeArr=(redhat01 redhat02 redhat03 redhat04)
echo '---- start zookeeper ----'
for node in ${nodeArr[@]}; do
    echo $node '-> zookeeper started'
    ssh ${node} "${zp_home_dir}/zkServer.sh start"
done

sleep 3s
echo '---- start hdfs ----'
start-dfs.sh

sleep 3s
echo '----redhat03 start yarn ----'
ssh redhat03 "${hd_home_dir}/start-yarn.sh"

sleep 3s
echo '----redhat04 start resourcemanager ----'
ssh redhat04 "${hd_home_dir}/yarn-daemon.sh start resourcemanager"

echo '----redhat01 start mapreduce jobhistory tracker ----'
mr-jobhistory-daemon.sh start historyserver 

The stop script:

#!/bin/bash

zp_home_dir='/home/hadoop/bigdata/zookeeper-3.4.11/bin'
hd_home_dir='/home/hadoop/bigdata/hadoop-2.6.0/sbin'
nodeArr=(redhat01 redhat02 redhat03 redhat04)

echo '----redhat01 stop mapreduce jobhistory tracker ----'
mr-jobhistory-daemon.sh stop historyserver 

sleep 3s
echo '----redhat04 stop resourcemanager ----'
ssh redhat04 "${hd_home_dir}/yarn-daemon.sh stop resourcemanager"

sleep 3s
echo '----redhat03 stop yarn ----'
ssh redhat03 "${hd_home_dir}/stop-yarn.sh"

sleep 3s
echo '---- stop hdfs ----'
stop-dfs.sh

echo '---- stop zookeeper ----'
for node in ${nodeArr[@]}; do
    echo $node '-> zookeeper stopping'
    ssh ${node} "${zp_home_dir}/zkServer.sh stop"
done
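
Assuming the two scripts are saved on redhat01 as start-cluster.sh and stop-cluster.sh (names chosen here just for illustration), usage is simply:

chmod +x start-cluster.sh stop-cluster.sh
./start-cluster.sh   # bring ZooKeeper, HDFS, YARN and the history server up
./stop-cluster.sh    # shut everything down in the reverse order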

 
