0524-6.1-How to Enable HDFS HA Using Cloudera Manager

1 Purpose of This Document

In an HDFS cluster the NameNode is a single point of failure (SPOF): if a cluster has only one NameNode and that machine fails, the entire cluster becomes unusable. To solve this, Hadoop provides an HDFS high availability (HA) architecture in which the cluster runs two NameNodes, one in Active state and one in Standby state. The Active NameNode serves client requests, while the Standby NameNode serves no client traffic and only synchronizes the Active NameNode's state, so that it can take over quickly if the Active NameNode fails. In this article, Fayson walks through enabling HDFS HA using Cloudera Manager.

  • Overview

1. Enabling HDFS HA

2. Updating the Hive Metastore NameNode

3. Testing HDFS HA availability

4. Testing Hive and Impala

  • Test environment

1. CM and CDH version 6.1

2. Redhat 7.4

3. Kerberos is enabled on the cluster

2 Enabling HDFS HA

1. Log in to the Cloudera Manager web UI as an administrator and go to the HDFS service.

2. Click "Enable High Availability" and set the NameService name.

3. Click "Continue" and select the NameNode hosts and the JournalNode hosts.

For the JournalNode hosts, it is generally fine to use the same nodes as ZooKeeper (at least 3, and an odd number); a command-line check of the generated configuration is shown after step 7.

4. Click "Continue" and set the NameNode data directories and the JournalNode edits directory.

The NameNode data directories default to those of the existing NameNode.

5. Click "Continue" to enable HDFS High Availability. If the cluster already holds data, the "Format NameNode" step will report an error because the data directory is not empty; the error can be safely ignored.

6. Click "Continue" to finish enabling HDFS High Availability.

7. Review the HDFS instances.

The instance list shows that enabling HDFS HA added a second NameNode, Failover Controller, and JournalNode roles, and that all of them started normally. HDFS HA is now enabled; next we verify that it actually works.
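With the roles up, the configuration the wizard generated can be spot-checked from any cluster host (a sketch; nameservice1 is the NameService name set earlier, and the JournalNode hostnames shown are placeholders for your own):

[root@ip-172-31-6-83 ~]# hdfs getconf -confKey dfs.namenode.shared.edits.dir
qjournal://jn1:8485;jn2:8485;jn3:8485/nameservice1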

With HA enabled in CM, a failover can also be triggered manually from the web UI.

Click "Federation and High Availability" to open the HA page.

A manual failover can be performed from there.

The failover succeeds.

Note: the failover may take a few minutes to complete; the roles do not switch over instantly.
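The HA state can also be inspected, and a graceful failover triggered, from the command line with the standard hdfs haadmin tool (a sketch; the NameNode IDs namenode137 and namenode142 are placeholders, yours are listed on the HDFS instances page):

[root@ip-172-31-6-83 ~]# hdfs haadmin -getServiceState namenode137
[root@ip-172-31-6-83 ~]# hdfs haadmin -failover namenode137 namenode142

With the Failover Controllers running, the -failover command performs a graceful failover through ZooKeeper rather than a forced transition.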

3 Updating the Hive Metastore NameNode

1. Go to the Hive service and stop all of its roles.

2. Once Hive is confirmed stopped, click "Update Hive Metastore NameNode".

3. Update the Hive Metastore NameNode.

4. The update succeeds.

5. Start the Hive service.

This completes the Hive Metastore NameNode update.
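To confirm the update took effect, Hive's metatool can list the filesystem root recorded in the metastore; after a successful update it should print the NameService URI rather than a single NameNode address (a sketch, output abbreviated):

[root@ip-172-31-6-83 ~]# hive --service metatool -listFSRoot
hdfs://nameservice1/user/hive/warehouse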

4 Testing HDFS HA Availability

1. Put a data file into a cluster directory.

[root@ip-172-31-6-83 generatedata]# ll hbase_data.csv 
-rw-r--r--. 1 root root 6353332720 Jan 23 22:03 hbase_data.csv
[root@ip-172-31-6-83 generatedata]# hadoop fs -ls /fayson_ha_test
[root@ip-172-31-6-83 generatedata]# hadoop fs -put hbase_data.csv /fayson_ha_csv

2. While the file is being put, stop the Active NameNode. The put command logs errors, but the put task itself is not terminated.

[root@ip-172-31-6-83 generatedata]# hadoop fs -put hbase_data.csv /fayson_ha_csv
19/01/23 22:04:25 INFO retry.RetryInvocationHandler: java.io.EOFException: End of File Exception between local host is: "ip-172-31-6-83.ap-southeast-1.compute.internal/172.31.6.83"; destination host is: "ip-172-31-9-113.ap-southeast-1.compute.internal":8020; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException, while invoking ClientNamenodeProtocolTranslatorPB.addBlock over ip-172-31-9-113.ap-southeast-1.compute.internal/172.31.9.113:8020. Trying to failover immediately.
19/01/23 22:04:25 INFO retry.RetryInvocationHandler: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1419)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2655)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:872)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:550)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
, while invoking ClientNamenodeProtocolTranslatorPB.addBlock over ip-172-31-6-83.ap-southeast-1.compute.internal/172.31.6.83:8020 after 1 failover attempts. Trying to failover after sleeping for 1323ms.
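The "Trying to failover" retries in the log above come from the HDFS client's HA machinery: once HA is enabled, CM deploys client configuration that routes requests through a failover proxy provider, which retries the call against the other NameNode. A sketch of inspecting those settings (nameservice1 is the NameService name chosen in the wizard):

[root@ip-172-31-6-83 generatedata]# hdfs getconf -confKey dfs.ha.namenodes.nameservice1
[root@ip-172-31-6-83 generatedata]# hdfs getconf -confKey dfs.client.failover.proxy.provider.nameservice1
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider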

3. Check the NameNode status.

4. Check whether the data made it into HDFS.

The hbase_data.csv file was successfully put into the /fayson_ha_test directory in HDFS, which shows that when the Active NameNode was stopped mid-put, the Standby NameNode was automatically promoted to Active and the HDFS task was not terminated. A command-line sketch of the verification follows.
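A sketch of confirming both outcomes from the shell (the first command confirms the file is present; the second shows the former Standby now reporting active):

[root@ip-172-31-6-83 generatedata]# hadoop fs -ls /fayson_ha_test
[root@ip-172-31-6-83 generatedata]# hdfs haadmin -getAllServiceState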

5 Hive Testing

1. Log in with the hive command and view the CREATE TABLE statement for test_table.

The Hive table's LOCATION now uses the HDFS NameService name.

2. Run a SELECT query.

3. Run a COUNT query. All three checks are sketched below.
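A minimal sketch of the three checks from the hive shell (output abbreviated and illustrative; the line to look at is LOCATION):

hive> show create table test_table;
...
LOCATION
  'hdfs://nameservice1/fayson/hive-test'
...
hive> select * from test_table limit 1;
hive> select count(*) from test_table;

Both queries should now return normally, reading through the NameService; the Impala session below runs the same two queries with full output.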

6 Impala Testing

1. Run the queries from the impala-shell command line.

[root@ip-172-31-6-83 generatedata]# impala-shell -i ip-172-31-12-142.ap-southeast-1.compute.internal   
Starting Impala Shell without Kerberos authentication
Opened TCP connection to ip-172-31-12-142.ap-southeast-1.compute.internal:21000
Error connecting: TTransportException, TSocket read 0 bytes
Kerberos ticket found in the credentials cache, retrying the connection with a secure transport.
Opened TCP connection to ip-172-31-12-142.ap-southeast-1.compute.internal:21000
Connected to ip-172-31-12-142.ap-southeast-1.compute.internal:21000
Server version: impalad version 3.1.0-cdh6.1.0 RELEASE (build 5efe077603091e405fbae2e5d68b851d6e790f59)
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v3.1.0-cdh6.1.0 (5efe077) built on Thu Dec  6 17:40:23 PST 2018)

The HISTORY command lists all shell commands in chronological order.
***********************************************************************************
[ip-172-31-12-142.ap-southeast-1.compute.internal:21000] default> select * from test_table limit 1;
Query: select * from test_table limit 1
Query submitted at: 2019-01-23 22:24:52 (Coordinator: http://ip-172-31-12-142.ap-southeast-1.compute.internal:25000)
Query progress can be monitored at: http://ip-172-31-12-142.ap-southeast-1.compute.internal:25000/query_plan?query_id=cf4e30d5008a7c07:f0c3cb4e00000000
+--------------------+--------+----+------+----------+-------------+-------------+-----------+-------------------+-----+-----+
| s1                 | s2     | s3 | s4   | s5       | s6          | s7          | s8        | s9                | s10 | s11 |
+--------------------+--------+----+------+----------+-------------+-------------+-----------+-------------------+-----+-----+
| 340111200507061443 | 鱼言思 | 0  | 遂宁 | 国家机关 | 13004386766 | 15900042793 | 广州银行1 | 市场三街65-10-8 | 0   | 1   |
+--------------------+--------+----+------+----------+-------------+-------------+-----------+-------------------+-----+-----+
Fetched 1 row(s) in 3.85s
[ip-172-31-12-142.ap-southeast-1.compute.internal:21000] default> select count(*) from test_table;
Query: select count(*) from test_table
Query submitted at: 2019-01-23 22:25:18 (Coordinator: http://ip-172-31-12-142.ap-southeast-1.compute.internal:25000)
Query progress can be monitored at: http://ip-172-31-12-142.ap-southeast-1.compute.internal:25000/query_plan?query_id=c64bd3495de83e21:bcacd67900000000
+----------+
| count(*) |
+----------+
| 821091   |
+----------+
Fetched 1 row(s) in 0.27s

7 Common Issues

1. Querying a Hive table fails with "SemanticException Unable to determine…"

hive> select * from test_table;
FAILED: SemanticException Unable to determine if hdfs://ip-172-31-6-83.ap-southeast-1.compute.internal:8020/fayson/hive-test is encrypted: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1419)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getServerDefaults(FSNamesystem.java:1838)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getServerDefaults(NameNodeRpcServer.java:742)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getServerDefaults(ClientNamenodeProtocolServerSideTranslatorPB.java:433)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)    

Cause: once HDFS HA is enabled, a Hive table's LOCATION must use the NameService name, e.g. hdfs://nameservice1/user/hive/warehouse/xxxx; the query fails because this table's does not.

Inspecting the CREATE TABLE statement shows that the table's LOCATION still uses the pre-HA HDFS address (a single NameNode host).

Solution: see Section 3, "Updating the Hive Metastore NameNode".

2. If the "Update Hive Metastore NameNode" action fails to fix a Hive table's LOCATION path, the Hive metastore database can be updated directly:

mysql -uroot -p
mysql> use metastore;
mysql> -- rewrite database locations to point at the NameService
mysql> update DBS set DB_LOCATION_URI = replace(DB_LOCATION_URI, "ip-172-31-6-83.ap-southeast-1.compute.internal", "nameservice1");
mysql> -- rewrite table/partition storage locations as well
mysql> update SDS set LOCATION = replace(LOCATION, "ip-172-31-6-83.ap-southeast-1.compute.internal", "nameservice1");
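Alternatively, Hive ships a metatool that performs the same rewrite without hand-editing the metastore database (a sketch; the old URI must match exactly what -listFSRoot reports, port included, and -dryRun previews the change before applying it):

[root@ip-172-31-6-83 ~]# hive --service metatool -updateLocation hdfs://nameservice1 hdfs://ip-172-31-6-83.ap-southeast-1.compute.internal:8020 -dryRun
[root@ip-172-31-6-83 ~]# hive --service metatool -updateLocation hdfs://nameservice1 hdfs://ip-172-31-6-83.ap-southeast-1.compute.internal:8020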

