Hadoop HA Environment Setup

 

Node plan:

Clone four virtual machine instances. The roles per node are:

Role (JPS process)            Node01   Node02   Node03   Node04
NameNode (nn1)                  *
NameNode (nn2)                           *
DataNode                                 *        *        *
JournalNode                     *        *        *
DFSZKFailoverController         *        *
QuorumPeerMain (ZooKeeper)               *        *        *
ResourceManager                                   *        *
NodeManager                              *        *        *

Node01-Node04 correspond to the hostnames zyc01-zyc04 used in the commands below.

Directory plan:

Package upload directory: /opt/tools
Software install (extract) directory: /opt/sxt/
Data directory: /var/sxt/
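These directories do not exist by default; a one-time setup sketch, to be run as root on every node:

#mkdir -p /opt/tools /opt/sxt /var/sxt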

 

Packages to prepare:

hadoop-2.6.5.tar.gz
jdk-7u79-linux-x64.rpm
zookeeper-3.4.6.tar.gz

 

1. Install the JDK on each node and configure environment variables

 

#rpm -ivh jdk-7u79-linux-x64.rpm

#vi /etc/profile

Append at the end of the file:

export JAVA_HOME=/usr/java/jdk1.7.0_79
export HADOOP_HOME=/opt/sxt/hadoop-2.6.5
export ZOOKEEPER_HOME=/opt/sxt/zookeeper-3.4.6
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin

#source /etc/profile
#java -version
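A quick sanity check that the variables took effect (assuming the rpm's default install path):

#echo $JAVA_HOME
(should print /usr/java/jdk1.7.0_79)
#$JAVA_HOME/bin/java -version
(confirms the exact JDK the Hadoop daemons will use)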

 

2. Set up passwordless SSH login

Generate a key pair on each node:

#ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

 

Copy each node's public key to a uniquely named file and collect them all on one node:

#cd ~/.ssh

zyc01#cp id_dsa.pub id_dsa_1.pub

 

zyc02#cp id_dsa.pub id_dsa_2.pub

zyc02#scp id_dsa_2.pub zyc01:`pwd`

 

zyc03#cp id_dsa.pub id_dsa_3.pub

zyc03#scp id_dsa_3.pub zyc01:`pwd`

 

zyc04#cp id_dsa.pub id_dsa_4.pub

zyc04#scp id_dsa_4.pub zyc01:`pwd`

 

zyc01#cat ~/.ssh/id_dsa_1.pub >> ~/.ssh/authorized_keys

zyc01#cat ~/.ssh/id_dsa_2.pub >> ~/.ssh/authorized_keys

zyc01#cat ~/.ssh/id_dsa_3.pub >> ~/.ssh/authorized_keys

zyc01#cat ~/.ssh/id_dsa_4.pub >> ~/.ssh/authorized_keys

 

Distribute the combined authorized_keys file back to every node:

zyc01#scp authorized_keys zyc02:`pwd`

zyc01#scp authorized_keys zyc03:`pwd`

zyc01#scp authorized_keys zyc04:`pwd`

 

 

From each node, ssh to itself and to the other nodes once, to accept host keys and verify passwordless login:

zyc01#ssh localhost

zyc01#ssh zyc02
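Passwordless login from every node to every other node can be verified in one pass; a small sketch, assuming the four hostnames resolve on each machine:

#for h in zyc01 zyc02 zyc03 zyc04; do ssh $h hostname; done

Each iteration should print the remote hostname without asking for a password.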

 

3. Extract and install hadoop-2.6.5 and edit its configuration

#cd /opt/tools
# tar xvf hadoop-2.6.5.tar.gz
# mv hadoop-2.6.5 /opt/sxt/
# cd /opt/sxt/hadoop-2.6.5/etc/hadoop

Processes launched over passwordless ssh do not read the login-shell environment, so JAVA_HOME must be set explicitly in Hadoop's env scripts:

#vi hadoop-env.sh
#vi mapred-env.sh
#vi yarn-env.sh

In each of the three files, set:

export JAVA_HOME=/usr/java/jdk1.7.0_79

 

Edit the configuration files:

#vi hdfs-site.xml

 

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>zyc01:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>zyc02:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>zyc01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>zyc02:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://zyc01:8485;zyc02:8485;zyc03:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
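Once the file is saved, the effective values can be spot-checked from the command line (an optional sanity check using the standard hdfs getconf tool):

# hdfs getconf -confKey dfs.nameservices
(should print: mycluster)
# hdfs getconf -namenodes
(should list: zyc01 zyc02)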

 

 

 

#vi core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>

  <!-- Replace the placeholder below with a real local directory on the
       JournalNode hosts, e.g. somewhere under /var/sxt per the directory
       plan above. (This property is usually kept in hdfs-site.xml, but
       HDFS daemons read it from core-site.xml as well.) -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/path/to/journal/node/local/data</value>
  </property>

  <!-- Runtime data directory; kept under /var/sxt to match the
       directory plan above. -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/sxt/hadoop-2.6.5/ha</value>
  </property>

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>zyc02:2181,zyc03:2181,zyc04:2181</value>
  </property>
</configuration>

 

 

#cp mapred-site.xml.template mapred-site.xml
(in 2.6.5 the file ships only as a template, so create it first if it is missing)
#vi mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

 

 

#vi yarn-site.xml

 

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>zyc03</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>zyc04</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>zyc02:2181,zyc03:2181,zyc04:2181</value>
  </property>
</configuration>

 

 

The configuration must be identical on every node! (A distribution sketch follows the slaves file below.)

 

#vi slaves

zyc02
zyc03
zyc04
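One way to keep all nodes in sync is to copy the finished install from zyc01; a sketch assuming the layout above (adjust paths if yours differ):

zyc01#for h in zyc02 zyc03 zyc04; do scp -r /opt/sxt/hadoop-2.6.5 $h:/opt/sxt/; done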

 

 

4. Extract and install zookeeper-3.4.6 and edit its configuration

Work on the ZooKeeper nodes; the steps below prepare zyc02 first, then copy the install to zyc03 and zyc04 (see the sketch after zoo.cfg).

zyc02#cd /opt/tools
zyc02#tar xvf zookeeper-3.4.6.tar.gz
zyc02#mv zookeeper-3.4.6 /opt/sxt/
zyc02#cd /opt/sxt/zookeeper-3.4.6/conf
zyc02# cp zoo_sample.cfg zoo.cfg
zyc02# vi zoo.cfg

 

dataDir=/var/sxt/zk

server.1=zyc02:2888:3888

server.2=zyc03:2888:3888

server.3=zyc04:2888:3888 
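The same tree must exist on zyc03 and zyc04. A minimal way to copy it over, assuming the same /opt/sxt layout on all nodes:

zyc02#scp -r /opt/sxt/zookeeper-3.4.6 zyc03:/opt/sxt/
zyc02#scp -r /opt/sxt/zookeeper-3.4.6 zyc04:/opt/sxt/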

Create the dataDir on each ZooKeeper node and write its id (the number must match the server.N entries above):

zyc02#mkdir -p /var/sxt/zk && echo 1 > /var/sxt/zk/myid
zyc03#mkdir -p /var/sxt/zk && echo 2 > /var/sxt/zk/myid
zyc04#mkdir -p /var/sxt/zk && echo 3 > /var/sxt/zk/myid

 

 

5. Start HDFS

First start:

- Start the ZooKeeper cluster:

[zyc02|zyc03|zyc04]#zkServer.sh start

Check the Java processes with jps (zkServer.sh status also reports each node's leader/follower role):

#jps

- Start the JournalNodes:

[zyc01|zyc02|zyc03]# hadoop-daemon.sh start journalnode

- Format the first NameNode, bootstrap the second from it, and format the failover state in ZooKeeper:

zyc01# hdfs namenode -format
zyc01#hadoop-daemon.sh start namenode
zyc02#hdfs namenode -bootstrapStandby
zyc01#hdfs zkfc -formatZK

- Restart HDFS as a whole and bring up YARN:

zyc01#stop-dfs.sh
zyc01#start-dfs.sh
zyc03#start-yarn.sh
zyc04#yarn-daemon.sh start resourcemanager
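Once everything is up, the HA roles can be queried directly (nn1/nn2 and rm1/rm2 are the ids configured earlier):

#hdfs haadmin -getServiceState nn1
#hdfs haadmin -getServiceState nn2
#yarn rmadmin -getServiceState rm1
#yarn rmadmin -getServiceState rm2

Exactly one NameNode and one ResourceManager should report active, the others standby.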

 

 

 

 

 

Subsequent starts:

[zyc02|zyc03|zyc04]#zkServer.sh start
zyc01#start-dfs.sh
zyc03#start-yarn.sh
zyc04#yarn-daemon.sh start resourcemanager

 

 

Web UIs for verification:

http://zyc01:50070/

http://zyc02:50070/

http://zyc03:8088/

http://zyc04:8088/
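An optional failover test (a sketch): stop the active NameNode and confirm the standby takes over.

zyc01#hadoop-daemon.sh stop namenode
(reload http://zyc02:50070/ - it should now report active)
zyc01#hadoop-daemon.sh start namenode
(nn1 rejoins the cluster as standby)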

 

 

 

 

 
