Collection of Common Errors (continuously updated)


1. After running the hdfs namenode -format command more than once and then starting HDFS with start-dfs.sh, jps shows that the DataNode (or another daemon, such as the NameNode) has not started. Checking the DataNode log file hadoop-hadoop-datanode.log (or the corresponding NameNode log) under the logs directory shows the following error:
2019-07-03 12:39:54,640 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-hadoop/dfs/data: namenode clusterID = CID-b8177bab-6539-47dd-aa98-c85618e1b236; datanode clusterID = CID-2f5d2ee8-3d73-4b13-9938-66dc3bdd1931
2019-07-03 12:39:54,648 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1394)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1355)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:228)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:829)
        at java.lang.Thread.run(Thread.java:745)
2019-07-03 12:39:54,650 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2019-07-03 12:39:54,660 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
2019-07-03 12:39:56,662 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode

The fix: the error message names a storage directory, /tmp/hadoop-hadoop/dfs/data (next to the data directory there should also be name, namesecondary, and so on). Inside it there is a VERSION file; change the clusterID in that file so it matches the clusterID recorded in the other directories, then restart. Alternatively, delete everything under those storage directories (they may contain name, data, ...), re-run hdfs namenode -format, and start the cluster again with start-dfs.sh.
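
A minimal shell sketch of both fixes, assuming the default /tmp/hadoop-hadoop/dfs layout shown in the log above (adjust the paths to whatever your dfs.namenode.name.dir / dfs.datanode.data.dir settings point at); the VERSION file normally sits under a current/ subdirectory:

# Option 1: copy the NameNode's clusterID into the DataNode's VERSION file
cat /tmp/hadoop-hadoop/dfs/name/current/VERSION   # note the clusterID line
vi /tmp/hadoop-hadoop/dfs/data/current/VERSION    # set clusterID to the same value
start-dfs.sh

# Option 2: wipe the storage directories and reformat (this erases all HDFS data)
stop-dfs.sh
rm -rf /tmp/hadoop-hadoop/dfs/*
hdfs namenode -format
start-dfs.sh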

2. Dropping a table in Hive fails with the following error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:For direct MetaStore DB connections, we don't support retries at the client level.)
The detailed Hive log reveals the underlying error:
ERROR [main]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(203)) - Retrying HMSHandler after 2000 ms (attempt 9 of 10) with error: javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1

The cause is an outdated MySQL JDBC driver under hive/lib; download a newer MySQL Connector/J jar from the MySQL website and place it in hive/lib to resolve the problem.
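
A rough sketch of the jar swap, assuming Hive is installed under $HIVE_HOME and the new Connector/J jar has already been downloaded (the jar version numbers below are only illustrative):

ls $HIVE_HOME/lib/mysql-connector-java-*.jar        # see which driver jar is currently installed
rm $HIVE_HOME/lib/mysql-connector-java-5.1.17.jar   # remove the old jar reported by ls
cp ~/Downloads/mysql-connector-java-5.1.47.jar $HIVE_HOME/lib/
# restart the Hive CLI / metastore service so the new driver is picked up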
