Problems encountered during Hive installation and startup, and their solutions

Problem 1:

hive> select 1;
FAILED: SemanticException org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/root/2ea6fd2b-c1f9-4a2b-8fac-5e6ed9674bac/hive_2022-03-20_21-31-37_510_3468482020810973734-1/dummy_path/dummy_file could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1625)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3127)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3051)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)

Cause:

When hadoop namenode -format is run to format the NameNode, a current/VERSION file is written to the NameNode data directory (the path configured as dfs.name.dir), recording a new clusterID. The clusterID in the DataNode's current/VERSION file, however, is still the one saved by the previous format, so the IDs on the DataNode and NameNode no longer match and the DataNode fails to start.
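You can confirm the mismatch by comparing the two VERSION files. A minimal check, assuming hadoop.tmp.dir points at /opt/hadoop-2.7.5/tmp (adjust the paths to your own dfs.name.dir / dfs.data.dir settings):

grep clusterID /opt/hadoop-2.7.5/tmp/dfs/name/current/VERSION

grep clusterID /opt/hadoop-2.7.5/tmp/dfs/data/current/VERSION

If the two clusterID values differ, the DataNode will refuse to register with the NameNode.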

Solution:

If the dfs directory contains no important data, delete the dfs directory on every node (the dfs directory is under ${HADOOP_HOME}/tmp/), then rerun the following commands:

cd /opt/hadoop-2.7.5

./bin/hdfs namenode -format

./sbin/start-dfs.sh
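Once start-dfs.sh has finished, it is worth checking that the DataNodes actually registered before retrying the Hive query; a quick verification, assuming the same /opt/hadoop-2.7.5 installation:

jps

./bin/hdfs dfsadmin -report

jps should show a running DataNode process on each worker node, and dfsadmin -report should list at least one live datanode; otherwise the "0 datanode(s) running" error above will reappear.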

Summary:

Format the NameNode only before the very first start of the Hadoop cluster. On every subsequent start, do not run hadoop namenode -format again; just run sbin/start-dfs.sh.

Problem 2:

hive> show databases;
OK
Failed with exception java.io.IOException:java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:user.name%7D
Time taken: 0.099 seconds

Solution:

vim /opt/apache-hive-2.1.1-bin/conf/hive-site.xml

Then locate the following property:

<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/lch/software/Hive/apache-hive-2.1.1-bin/tmp/${system:user.name}</value>
    <description>Local scratch space for Hive jobs</description>
</property>

Change it to the following:

<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/lch/software/Hive/apache-hive-2.1.1-bin/tmp/${user.name}</value>
    <description>Local scratch space for Hive jobs</description>
</property>
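In the Hive 2.x default template, several other directory properties (for example hive.querylog.location and hive.server2.logging.operation.log.location) also reference ${system:...} variables, so the same URISyntaxException can come back from a different property. A quick way to find any remaining references, assuming the same install path:

grep -n 'system:' /opt/apache-hive-2.1.1-bin/conf/hive-site.xml

Replace each ${system:user.name} with ${user.name}, and each ${system:java.io.tmpdir} with an absolute temporary directory, in the same way.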

Problem 3:

beeline
which: no hbase in (/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/hadoop-2.7.5/bin:/root/bin:/opt/hadoop-2.7.5/bin:/usr/local/hive/bin:/opt/hadoop-2.7.5/bin:/opt/apache-hive-2.1.1-bin/bin)
Beeline version 2.1.1 by Apache Hive
beeline> !connect jdbc:hive2://master:10000
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://master:10000
Enter username for jdbc:hive2://master:10000: root
Enter password for jdbc:hive2://master:10000: ****
22/03/20 22:54:13 [main]: WARN jdbc.HiveConnection: Failed to connect to master:10000
Error: Could not open client transport with JDBC Uri: jdbc:hive2://master:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate root (state=08S01,code=0)
 

Solution:

Modify Hadoop's core-site.xml configuration file:

vim /opt/hadoop-2.7.5/etc/hadoop/core-site.xml

<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>

Note: the proxyuser in the configuration must match the username entered when logging in to Hive, e.g. root in both places.
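For example, if you intend to connect to HiveServer2 as a user named hadoop instead (a hypothetical username used only for illustration), the property names change accordingly:

<property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
</property>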

After making the change, restart Hadoop:

./sbin/stop-dfs.sh

./sbin/start-dfs.sh
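If a full restart is inconvenient, the proxy-user settings can usually be reloaded in place instead; a sketch, assuming it is run as the HDFS superuser on the NameNode host:

./bin/hdfs dfsadmin -refreshSuperUserGroupsConfiguration

Afterwards, reconnecting from beeline (for example beeline -u jdbc:hive2://master:10000 -n root) should no longer report the "is not allowed to impersonate" error.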
