Project background:

After initializing Hive, inserting data fails.

Environment: Hadoop 3.1.3, Hive 3.1.2, metastore tested with a direct database connection. Creating a table succeeds, but inserting data fails:
```
hive> insert into test values(1,1);
Query ID = root_20240901203520_8bbcfb12-ba9f-4532-87b0-ce022587a055
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1725193171848_0004, Tracking URL = http://node4:8088/proxy/application_1725193171848_0004/
Kill Command = /opt/hadoop-3.1.3/bin/mapred job -kill job_1725193171848_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2024-09-01 20:35:27,642 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1725193171848_0004 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
```
![insert image description here](https://i-blog.csdnimg.cn/direct/895ce4dc9b4942e49b50d9bfaa0b8a4d.png)
```
hive> show tables;
OK
tb_test
test
tttt
```
Cause analysis:

"Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask" is Hive's generic signal that the underlying MapReduce job failed; it does not identify the root cause by itself. The log above shows 0 mappers and 0 reducers, and the job dies at "map = 0%, reduce = 0%", meaning the job failed before any task ever started. That points to a YARN-side problem (for example, container memory limits or a MapReduce application classpath misconfiguration) rather than a problem with the query. The real error message has to be read from the YARN application logs for the failed job.
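A minimal sketch of pulling the YARN logs for this failed job (requires the running cluster; the application ID below is taken from the log output above):

```shell
# Fetch the aggregated logs for the failed job. The YARN application ID is
# the Hive job ID with the "job_" prefix replaced by "application_".
yarn logs -applicationId application_1725193171848_0004 | less

# Alternatively, open the Tracking URL printed by Hive:
#   http://node4:8088/proxy/application_1725193171848_0004/
# and inspect the ApplicationMaster / container logs for the real exception.
```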
Solution:

Fix the specific error that the YARN application logs report for the failed job. On small single-node test clusters, this class of failure is most often resolved by adjusting YARN/MapReduce memory settings, correcting the MapReduce application classpath, or letting Hive run small statements in local mode.
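As a hedged sketch (these are common workarounds for this class of failure, not the author's confirmed fix, and paths must be adapted to the local install):

```shell
# Option 1: let Hive run small queries as local MapReduce jobs,
# bypassing YARN container allocation entirely (session-level setting).
hive -e "set hive.exec.mode.local.auto=true; insert into test values(1,1);"

# Option 2: make sure MapReduce jobs submitted to YARN can find the
# Hadoop classpath. Add to mapred-site.xml on every node, then restart YARN:
#   <property>
#     <name>mapreduce.application.classpath</name>
#     <value>$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*</value>
#   </property>
```

Local mode only masks YARN-side problems for small jobs, so the classpath/memory configuration should still be fixed for production-sized inserts.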