Errors when executing SQL statements in Hive

Error 1

This is Hive's dynamic-partition strict mode: when inserting into a partitioned table with dynamic partitions, strict mode requires at least one static partition column.

Solution: enable non-strict mode

set hive.exec.dynamic.partition.mode=nonstrict;
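As a minimal sketch (the table and column names here are hypothetical), a statement in which every partition column is dynamic is exactly what strict mode rejects; with the setting above it runs:

```sql
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

-- Both dt and region are dynamic partition columns (their values come
-- from the SELECT), so strict mode would reject this statement.
insert overwrite table ods_orders partition (dt, region)
select order_id, amount, dt, region
from staging_orders;
```

If you prefer to keep strict mode, supplying one static partition value also satisfies the check, e.g. `partition (dt='2024-09-21', region)`.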

Error 2

2024-09-21 16:31:19,598 Stage-10 map = 0%,  reduce = 0%
Ended Job = job_local984897282_0045 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-7:  HDFS Read: 238956966 HDFS Write: 179575159 SUCCESS
Stage-Stage-9:  HDFS Read: 239294207 HDFS Write: 179657548 SUCCESS
Stage-Stage-14:  HDFS Read: 119708734 HDFS Write: 89979172 SUCCESS
Stage-Stage-15:  HDFS Read: 119715714 HDFS Write: 89986056 SUCCESS
Stage-Stage-12:  HDFS Read: 127246885 HDFS Write: 95923573 SUCCESS
Stage-Stage-10:  HDFS Read: 132110166 HDFS Write: 103037312 FAIL
Total MapReduce CPU Time Spent: 0 msec

Solution: the job created more dynamic partitions than the configured limits allow; raise them:

set hive.exec.max.dynamic.partitions=100000;
set hive.exec.max.dynamic.partitions.pernode=10000;
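The two settings cap different things: `hive.exec.max.dynamic.partitions` limits the total number of partitions one statement may create across the whole job, while `hive.exec.max.dynamic.partitions.pernode` limits each individual mapper or reducer. A session-level sketch, set just before rerunning the failed insert:

```sql
-- Raise the limits for this session only, then rerun the failing insert
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.max.dynamic.partitions=100000;        -- total across the whole job
set hive.exec.max.dynamic.partitions.pernode=10000; -- per mapper/reducer task
```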

Error 3

2024-09-23T14:31:23,619  WARN [HiveServer2-Background-Pool: Thread-1938] metastore.RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect (1 of 1) after 1s. setPartitionColumnStatistics
org.apache.thrift.transport.TTransportException: null


This generally occurs on the second insert into the same table.
If the metastore and HiveServer2 run on the same server, you get the error above; if they run on different servers, you get this error instead:
java.lang.ClassCastException: org.apache.hadoop.hive.metastore.api.StringColumnStatsData cannot be cast to org.apache.hadoop.hive.metastore.columnstats.cache.StringColumnStatsDataInspector

Solution

Use insert overwrite table instead of insert into,
or
run set hive.stats.autogather=false; before creating the table, then create the table and insert the data.
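The two workarounds can be sketched as follows (table and column names are hypothetical; disabling autogather stops the insert from calling setPartitionColumnStatistics on the metastore, which is the call that fails above):

```sql
-- Option 1: rewrite the partition instead of appending to it
insert overwrite table dws_user_stats partition (dt='2024-09-23')
select user_id, cnt from tmp_user_stats;

-- Option 2: disable automatic column statistics BEFORE creating the table,
-- then create it and insert as usual
set hive.stats.autogather=false;
create table dws_user_stats (user_id string, cnt bigint)
partitioned by (dt string);
insert into table dws_user_stats partition (dt='2024-09-23')
select user_id, cnt from tmp_user_stats;
```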

Error 4

org.apache.hadoop.hive.ql.parse.SemanticException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/root/f39a320d-d50d-4627-a685-5eb5fff0e648/hive_2024-09-23_09-50-49_151_6473486879651462761-1/dummy_path/dummy_file could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
	at org.apache.hadoop.hdfs.protocolPB.

For the fix, see the CSDN post "Hive插入数据时报错There are 1 datanode(s) running and 1 node(s) are excluded in this operation".
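The message means the NameNode could not place the block on any healthy DataNode: the only running DataNode was excluded, typically because it is out of disk space, unreachable from the client, or the cluster is in safe mode. Assuming shell access to the cluster, these standard HDFS commands help narrow it down:

```shell
hdfs dfsadmin -report        # DataNode liveness and remaining disk space
hdfs dfsadmin -safemode get  # confirm the NameNode is not stuck in safe mode
hdfs dfs -df -h /            # overall HDFS capacity and usage
```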
