Hadoop Fundamentals Tutorial - Chapter 9 High Availability (HA) (9.2 HDFS High Availability Configuration) (Draft)

Chapter 9 High Availability (HA)

9.2 HDFS High Availability Configuration


9.2.1 Preparation

Since we configured and started the regular (non-HA) Hadoop services in earlier sections, we first need to stop those services and clear their data.
(1) Stop the Hadoop services
First, stop YARN:

[root@node1 ~]# stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
node2: stopping nodemanager
node3: stopping nodemanager
node1: stopping nodemanager
node2: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
node3: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
node1: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop

Then stop the HDFS services:

[root@node1 ~]# stop-dfs.sh
Stopping namenodes on [node1]
node1: stopping namenode
node3: stopping datanode
node1: stopping datanode
node2: stopping datanode
Stopping secondary namenodes [node2]
node2: stopping secondarynamenode
[root@node1 ~]# 

(2) Delete the HDFS data

[root@node1 ~]# rm -rf /var/data/hadoop
[root@node2 ~]# rm -rf /var/data/hadoop
[root@node3 ~]# rm -rf /var/data/hadoop
[root@node1 ~]# rm -rf /tmp/*
[root@node2 ~]# rm -rf /tmp/*
[root@node3 ~]# rm -rf /tmp/*
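The six deletions above can be collapsed into one loop. This is a sketch shown as a dry run: the leading `echo` only prints each command; remove it to actually execute (it assumes passwordless root ssh from node1 to node1-node3, as set up earlier in this tutorial).

```shell
# Dry run: print the cleanup command for each node; drop "echo" to execute.
for h in node1 node2 node3; do
  echo ssh "$h" "rm -rf /var/data/hadoop /tmp/*"
done
```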

(3) Prepare ZooKeeper
Start the ZooKeeper cluster: start the ZooKeeper service on each node and confirm that its status is correct.
node1

[root@node1 ~]# jps
6117 Jps
2623 QuorumPeerMain
[root@node1 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@node1 ~]# 

node2

[root@node2 ~]# jps
3632 Jps
2354 QuorumPeerMain
[root@node2 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
[root@node2 ~]# 

node3

[root@node3 ~]# jps
2278 QuorumPeerMain
3277 Jps
[root@node3 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@node3 ~]# 

In practice, the commands above were executed as follows:
(1) Run zkServer.sh start on each node via XShell
(2) Wait one to two seconds, then run zkServer.sh status via XShell
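The two manual steps can also be sketched as a single script. Again a dry run: each command is only printed via `echo`; remove it to execute (assumes passwordless ssh and zkServer.sh on each node's PATH).

```shell
# Dry run: print the start/status commands for each node; drop "echo" to run.
for h in node1 node2 node3; do
  echo ssh "$h" "zkServer.sh start"
done
sleep 2    # give the ensemble a moment to elect a leader
for h in node1 node2 node3; do
  echo ssh "$h" "zkServer.sh status"
done
```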

9.2.2 core-site.xml Configuration

[root@node1 ~]# cd /opt/hadoop-2.7.3/etc/hadoop/
[root@node1 hadoop]# vi core-site.xml 

Edit the file so that its content is as follows:

[root@node1 hadoop]# cat core-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cetc</value>
        <description>Default filesystem URI prefix</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop</value>
        <description>Base for Hadoop temporary directories</description>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>65536</value>
        <description>Buffer size for sequence files</description>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
        <description>Fencing methods; multiple methods are separated by newlines</description>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
        <description>SSH private key for the passwordless login required by sshfence</description>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
        <description>List of ZooKeeper servers used by the ZKFC for automatic failover</description>
    </property>
</configuration>

Note: some tutorials place the dfs.ha.fencing.methods and dfs.ha.fencing.ssh.private-key-files properties in hdfs-site.xml. Checking the official documentation at http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/core-default.xml and http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml shows that these two properties belong in core-site.xml, not hdfs-site.xml.
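A duplicated or misspelled host in ha.zookeeper.quorum is easy to miss by eye. Here is a small sketch of an automated check; the quorum string is inlined for demonstration, and on a real cluster you would extract it from core-site.xml first.

```shell
# Check an ha.zookeeper.quorum value for duplicate host:port entries.
quorum="node1:2181,node2:2181,node3:2181"
dups=$(echo "$quorum" | tr ',' '\n' | sort | uniq -d)
if [ -z "$dups" ]; then
  echo "quorum OK"
else
  echo "duplicate entries: $dups"
fi
```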

9.2.3 hdfs-site.xml Configuration

[root@node1 hadoop]# vi hdfs-site.xml 
[root@node1 hadoop]# cat hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>cetc</value>
        <description>Logical name of the nameservice</description>
    </property>
    <property>
        <name>dfs.ha.namenodes.cetc</name>
        <value>nn1,nn2</value>
        <description>Unique identifiers for each NameNode in the nameservice</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.cetc.nn1</name>
        <value>node1:8020</value>
        <description>Fully qualified RPC address for each NameNode to listen on</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.cetc.nn2</name>
        <value>node2:8020</value>
        <description>Fully qualified RPC address for each NameNode to listen on</description>
    </property>
    <property>
        <name>dfs.namenode.http-address.cetc.nn1</name>
        <value>node1:50070</value>
        <description>Fully qualified HTTP address for each NameNode to listen on</description>
    </property>
    <property>
        <name>dfs.namenode.http-address.cetc.nn2</name>
        <value>node2:50070</value>
        <description>Fully qualified HTTP address for each NameNode to listen on</description>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node1:8485;node2:8485;node3:8485/abc</value>
        <description>Shared storage directory used by the nodes in the HA cluster</description>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/hadoop/journalnode</value>
        <description>Local disk path where the JournalNode stores its data</description>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
        <description>Enable automatic NameNode failover</description>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.cetc</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        <description>Implementation class that clients use to locate the active NameNode</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
        <description>Number of replicas</description>
    </property>
</configuration>
[root@node1 hadoop]# 

Reminder:
Double-check every parameter carefully. A misspelled word, for example, is very hard to spot later.
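Following that reminder, one way to catch a mistyped nameservice suffix is to check that every per-nameservice property name contains the ID from dfs.nameservices. This sketch uses inline sample values; on a real cluster you would replace them with grep output from your hdfs-site.xml.

```shell
# Verify that per-nameservice property names carry the nameservice ID suffix.
ns="cetc"    # value of dfs.nameservices
props="dfs.ha.namenodes.cetc dfs.namenode.rpc-address.cetc.nn1 dfs.client.failover.proxy.provider.cetc"
ok=1
for p in $props; do
  case "$p" in
    *".$ns"|*".$ns".*) echo "$p: OK" ;;
    *) echo "$p: does not contain .$ns"; ok=0 ;;
  esac
done
```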

9.2.4 Configure hadoop-env.sh

[root@node1 hadoop]# vi hadoop-env.sh

Make three changes:
(1) JAVA_HOME

export JAVA_HOME=/opt/jdk1.8.0_112

(2)HADOOP_PID_DIR

export HADOOP_PID_DIR=/var/run/hadoop

Note: if you set export HADOOP_PID_DIR=/var/run/hadoop, you also need to create the /var/run/hadoop directory.

[hadoop@node1 hadoop]$ sudo mkdir /var/run/hadoop
[hadoop@node1 hadoop]$ sudo chown hadoop:hadoop /var/run/hadoop

(3)HADOOP_LOG_DIR

# Where log files are stored.  $HADOOP_HOME/logs by default.
export HADOOP_LOG_DIR=/var/log/hadoop
[hadoop@node1 hadoop]$ sudo mkdir /var/log/hadoop
[hadoop@node1 hadoop]$ sudo chown hadoop:hadoop /var/log/hadoop
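The PID and log directories must exist on every node, not just node1. The mkdir/chown pairs can be repeated over ssh with a loop like this dry-run sketch (drop the `echo` to execute; assumes the hadoop user exists on each node and passwordless ssh is configured):

```shell
# Dry run: print the directory-creation commands for each node.
for h in node1 node2 node3; do
  echo ssh "$h" "sudo mkdir -p /var/run/hadoop /var/log/hadoop"
  echo ssh "$h" "sudo chown hadoop:hadoop /var/run/hadoop /var/log/hadoop"
done
```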

9.2.5 Distribute the Configuration Files

Since the Hadoop package is already installed on node2 and node3, we only need to distribute the configuration files to these two nodes.

[root@node1 hadoop]# scp core-site.xml node2:/opt/hadoop-2.7.3/etc/hadoop/
core-site.xml                                                                                                                                              100% 1003     1.0KB/s   00:00    
[root@node1 hadoop]# scp hdfs-site.xml node2:/opt/hadoop-2.7.3/etc/hadoop/
hdfs-site.xml                                                                                                                                              100% 2434     2.4KB/s   00:00    
[root@node1 hadoop]# scp core-site.xml node3:/opt/hadoop-2.7.3/etc/hadoop/
core-site.xml                                                                                                                                              100% 1003     1.0KB/s   00:00    
[root@node1 hadoop]# scp hdfs-site.xml node3:/opt/hadoop-2.7.3/etc/hadoop/
hdfs-site.xml                                                                                                                                              100% 2434     2.4KB/s   00:00    
[root@node1 hadoop]# scp hadoop-env.sh node2:/opt/hadoop-2.7.3/etc/hadoop/
hadoop-env.sh                                                                                                                                              100% 4227     4.1KB/s   00:00    
[root@node1 hadoop]# scp hadoop-env.sh node3:/opt/hadoop-2.7.3/etc/hadoop/
hadoop-env.sh                                                                                                                                              100% 4227     4.1KB/s   00:00    
[root@node1 hadoop]#
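The six scp commands above are equivalent to a double loop over the target nodes and the changed files; a dry-run sketch (drop the `echo` to copy for real):

```shell
# Dry run: print one scp command per file per target node.
conf=/opt/hadoop-2.7.3/etc/hadoop
for h in node2 node3; do
  for f in core-site.xml hdfs-site.xml hadoop-env.sh; do
    echo scp "$conf/$f" "$h:$conf/"
  done
done
```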

 
