HDFS High-Availability Cluster

Keep all software versions consistent across every node.

Overall Deployment Plan

Node     NN   DN   JN   ZK   ZKFC   RM
node21   ×              ×    ×
node22   ×    ×    ×    ×    ×
node23        ×    ×    ×           ×
node24        ×    ×                ×

(× marks the daemons that run on each node; RM = ResourceManager.)

Prepare the Virtual Machines

Prepare four virtual machines with JDK, NTP, etc. already installed (see the earlier environment-setup posts):
Set the IP addresses to 192.168.13.{21,22,23,24}
Set the hostnames to node{21,22,23,24}
Edit the hosts file:

192.168.13.21 node21
192.168.13.22 node22
192.168.13.23 node23
192.168.13.24 node24
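The four entries above can also be appended with a small helper; `add_hosts` here is a hypothetical function (not part of the original steps), written so duplicates are skipped if it is run more than once:

```shell
# add_hosts FILE: append the cluster entries to FILE, skipping lines
# that are already present (safe to re-run).
add_hosts() {
  local f=$1 i entry
  for i in 21 22 23 24; do
    entry="192.168.13.$i node$i"
    grep -qxF "$entry" "$f" || echo "$entry" >> "$f"
  done
}
# On each node one would run: add_hosts /etc/hosts
```

Trying it on a scratch file first is a cheap way to confirm the entries before touching /etc/hosts.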

Set Up Passwordless SSH
Idea: node21→node2{2,3,4} and node22→node2{1,3,4}

On all nodes:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

On node21:
scp ~/.ssh/id_dsa.pub root@node2{2,3,4}:/tmp/

On node2{2,3,4}:
cat /tmp/id_dsa.pub >>/root/.ssh/authorized_keys

On node22:
scp ~/.ssh/id_dsa.pub root@node2{2,3,4}:/tmp/

On node2{1,3,4}:
cat /tmp/id_dsa.pub >>/root/.ssh/authorized_keys
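The copies above all follow one pattern: two hub nodes push their key to every other node. This sketch (a hypothetical `print_key_plan` helper) only prints the full plan, using ssh-copy-id, which is equivalent to the scp + `cat >> authorized_keys` pair:

```shell
# print_key_plan: print (do not run) the key-distribution plan for the
# two hub nodes, node21 and node22, to every other node.
print_key_plan() {
  local src dst
  for src in node21 node22; do
    for dst in node21 node22 node23 node24; do
      [ "$src" = "$dst" ] && continue
      echo "on $src: ssh-copy-id root@$dst"
    done
  done
}
```

Six copies in total: three from node21 and three from node22, matching the idea stated above.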

Install Hadoop

mkdir /opt/soft
cd /opt/soft
Upload hadoop-2.5.1_x64.tar.gz to this directory
tar zxf hadoop-2.5.1_x64.tar.gz
vi /etc/profile

export JAVA_HOME=/usr/java/latest
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/opt/soft/hadoop-2.5.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export ZOOKEEPER_HOME=/opt/soft/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin

. /etc/profile
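After sourcing /etc/profile, a quick sanity check confirms the three variables are visible in the current shell; `check_env` is a throwaway helper for this post, not part of the setup itself:

```shell
# check_env: report any of the three expected variables that are unset
# or empty; returns non-zero if anything is missing.
check_env() {
  local v missing=0
  for v in JAVA_HOME HADOOP_HOME ZOOKEEPER_HOME; do
    if [ -z "${!v:-}" ]; then
      echo "$v is not set"
      missing=1
    fi
  done
  return $missing
}
```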

Configure Hadoop

cd hadoop-2.5.1/etc/hadoop
vi hadoop-env.sh

export JAVA_HOME=/usr/java/latest

vi core-site.xml

<configuration>
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://hdfsserver</value>
    </property>
<property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop</value>
    </property>
<property>
        <name>ha.zookeeper.quorum</name>
        <value>node21:2181,node22:2181,node23:2181</value>
 </property>
</configuration>

vi hdfs-site.xml

Add the following properties inside the <configuration> element:

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

<property>
  <name>dfs.nameservices</name>
  <value>hdfsserver</value>
</property>

<property>
  <name>dfs.ha.namenodes.hdfsserver</name>
  <value>nn1,nn2</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.hdfsserver.nn1</name>
  <value>node21:8020</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.hdfsserver.nn2</name>
  <value>node22:8020</value>
</property>

<property>
  <name>dfs.namenode.http-address.hdfsserver.nn1</name>
  <value>node21:50070</value>
</property>

<property>
  <name>dfs.namenode.http-address.hdfsserver.nn2</name>
  <value>node22:50070</value>
</property>

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node22:8485;node23:8485;node24:8485/hdfsserver</value>
</property>

<property>
  <name>dfs.client.failover.proxy.provider.hdfsserver</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>

<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_dsa</value>
</property>

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/journal/data</value>
</property>

<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

Delete the bundled documentation to shrink the install before it is copied to the other nodes:

rm -rf /opt/soft/hadoop-2.5.1/share/doc
vi slaves

node22
node23
node24

Install ZooKeeper

Upload zookeeper-3.4.6.tar.gz to /opt/soft and extract it:

tar zxf zookeeper-3.4.6.tar.gz
cd /opt/soft/zookeeper-3.4.6/conf
mv zoo_sample.cfg zoo.cfg
vi zoo.cfg

dataDir=/opt/zookeeper
server.1=node21:2888:3888
server.2=node22:2888:3888
server.3=node23:2888:3888

mkdir /opt/zookeeper
vi /opt/zookeeper/myid

1
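The myid value must match the server.N index in zoo.cfg (server.1 = node21, and so on). A small helper, `write_myid` (hypothetical, shown only to make the mapping explicit), captures the rule:

```shell
# write_myid HOST DATADIR: write the myid that matches the server.N
# line for HOST in zoo.cfg; fails for hosts not in the ensemble.
write_myid() {
  local host=$1 datadir=$2 id
  case "$host" in
    node21) id=1 ;;
    node22) id=2 ;;
    node23) id=3 ;;
    *) echo "no myid for $host" >&2; return 1 ;;
  esac
  mkdir -p "$datadir"
  echo "$id" > "$datadir/myid"
}
# On each ZooKeeper node: write_myid "$(hostname -s)" /opt/zookeeper
```

A mismatched myid is a classic source of "not currently serving requests" errors, so keeping the mapping in one place is worth the extra lines.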

Back in Hadoop's configuration directory (/opt/soft/hadoop-2.5.1/etc/hadoop):

mv mapred-site.xml.template mapred-site.xml
vi mapred-site.xml

<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

vi yarn-site.xml

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yarnserver</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>node23</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>node24</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>node21:2181,node22:2181,node23:2181</value>
</property>

Sync the Configuration to the Other Nodes

scp /etc/profile root@node22:/etc/profile
scp /etc/profile root@node23:/etc/profile
scp /etc/profile root@node24:/etc/profile

scp -r /opt/soft/hadoop-2.5.1 root@node22:/opt/soft/hadoop-2.5.1
scp -r /opt/soft/hadoop-2.5.1 root@node23:/opt/soft/hadoop-2.5.1
scp -r /opt/soft/hadoop-2.5.1 root@node24:/opt/soft/hadoop-2.5.1

scp -r /opt/soft/zookeeper-3.4.6 root@node22:/opt/soft/
scp -r /opt/soft/zookeeper-3.4.6 root@node23:/opt/soft/
On node22 and node23, first create the data directory with mkdir /opt/zookeeper. Then, from node21, copy the myid file over:

scp /opt/zookeeper/myid root@node22:/opt/zookeeper/myid
scp /opt/zookeeper/myid root@node23:/opt/zookeeper/myid

Then edit /opt/zookeeper/myid so that it contains 2 on node22 and 3 on node23.

Initialization

Shut down with init 0, take a VM snapshot, then continue with the steps below.

Node(s)                  Command
node22, node23, node24   hadoop-daemon.sh start journalnode
node21                   hdfs namenode -format
node21                   hadoop-daemon.sh start namenode
node22                   hdfs namenode -bootstrapStandby
node21, node22, node23   zkServer.sh start
node21                   hdfs zkfc -formatZK
node21                   start-dfs.sh
node23, node24           yarn-daemon.sh start resourcemanager
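Each row of the table applies one command to a set of nodes. A tiny helper, `run_on` (hypothetical, relying on the passwordless SSH configured earlier), makes that loop reusable, with a DRY_RUN switch to preview what would be executed:

```shell
# run_on CMD NODE...: run CMD on each node over ssh.
# With DRY_RUN=1 the ssh invocations are printed instead of executed.
run_on() {
  local cmd=$1 h; shift
  for h in "$@"; do
    if [ "${DRY_RUN:-0}" = 1 ]; then
      echo "ssh root@$h '$cmd'"
    else
      ssh "root@$h" "$cmd"
    fi
  done
}
# First row of the table, previewed:
# DRY_RUN=1 run_on 'hadoop-daemon.sh start journalnode' node22 node23 node24
```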

To start the cluster again after a shutdown:

Node(s)                  Command
node21, node22, node23   zkServer.sh start
node21                   start-all.sh
node23, node24           yarn-daemon.sh start resourcemanager

If a daemon is reported as already running during startup, kill it (kill -9 <pid>) and re-run the command.
