Resolving Derby errors when using Hive for the first time

Below are the bugs I ran into using the embedded Derby metastore with hive-3.1.2.

1. After installation, initializing the schema failed with an exception in thread "main":

bin/schematool -initSchema -dbType derby

The fix is to replace the guava jar in hive/lib with guava-27.0-jre.jar from Hadoop's share/hadoop/common/lib.
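The jar swap can be sketched as shell commands. To keep this safe to run as-is, the snippet builds a throwaway sandbox that mimics the Hadoop/Hive layout; on a real install, point HADOOP_HOME and HIVE_HOME at your actual directories instead (guava-19.0.jar is the version hive-3.1.2 ships with, used here as an example):

```shell
# Sandbox stand-ins for the real install paths (assumptions for illustration)
HADOOP_HOME="$(mktemp -d)/hadoop-3.1.4"
HIVE_HOME="$(mktemp -d)/hive-3.1.2"
mkdir -p "$HADOOP_HOME/share/hadoop/common/lib" "$HIVE_HOME/lib"
touch "$HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar"
touch "$HIVE_HOME/lib/guava-19.0.jar"   # the old guava bundled with Hive

# 1. Remove the old guava that ships with Hive
rm "$HIVE_HOME"/lib/guava-*.jar
# 2. Copy Hadoop's newer guava into Hive's lib
cp "$HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar" "$HIVE_HOME/lib/"

ls "$HIVE_HOME/lib"   # → guava-27.0-jre.jar
```

After the swap, rerun bin/schematool -initSchema -dbType derby.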

2. Inside the hive CLI, running show databases; failed with:

FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

A fix I found from a Bilibili user: the hive directory contains a metastore_db folder (auto-generated the first time Hive started Derby). Rename metastore_db to metastore_db.tmp, then re-initialize the schema.
Steps (run from the hive directory):
1.> mv metastore_db metastore_db.tmp
2.> bin/schematool -initSchema -dbType derby
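The rename step can be tried out safely in a sandbox before touching a real install (the directory below is a stand-in for the Derby metastore that Hive generates; note the old metadata is only kept as a backup, not migrated):

```shell
# Sandbox stand-in for $HIVE_HOME (assumption for illustration)
HIVE_HOME="$(mktemp -d)"
mkdir "$HIVE_HOME/metastore_db"      # stands in for the auto-generated Derby dir
cd "$HIVE_HOME"

# Move the broken metastore aside, keeping it as a backup
mv metastore_db metastore_db.tmp

ls   # → metastore_db.tmp
# In the real install, finish with: bin/schematool -initSchema -dbType derby
```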

3. After creating a table, insert into failed in the MapReduce job. I spent two or three days on this without resolving it; in the end another Bilibili user's method worked. The failing run looked like this:

hive> insert into test values(1);
Query ID = sun_20220803161158_db759634-9959-4eae-8716-a8bd424356de
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1659513147782_0001, Tracking URL = http://hadoop102:8088/proxy/application_1659513147782_0001/
Kill Command = /opt/module/hadoop-3.1.4/bin/mapred job  -kill job_1659513147782_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2022-08-03 16:12:13,078 Stage-1 map = 0%,  reduce = 0%
Ended Job = job_1659513147782_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

Enabling automatic local mode finally made it succeed:

hive> set hive.exec.mode.local.auto=true;
hive> insert into test values(1);
Automatically selecting local only mode for query
Query ID = sun_20220803161326_da011a6c-a83c-424e-9f48-75bcfdcd19c2
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2022-08-03 16:13:28,792 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_local1462336920_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://hadoop103:8020/user/hive/warehouse/test/.hive-staging_hive_2022-08-03_16-13-26_675_5481797862656407633-1/-ext-10000
Loading data to table default.test
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 81934083 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 2.625 seconds
hive> select * from test;
OK
1
Time taken: 0.215 seconds, Fetched: 1 row(s)
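Note that set hive.exec.mode.local.auto=true; only lasts for the current CLI session. To make local mode the default, the same property can go into hive-site.xml (a sketch; the file typically lives under $HIVE_HOME/conf):

```xml
<!-- hive-site.xml fragment: let Hive pick local mode for small queries -->
<property>
  <name>hive.exec.mode.local.auto</name>
  <value>true</value>
</property>
```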
