[Repost] [HDFS] Hive job fails with HDFS error: last block does not have enough number of replicas

When running a query script, Hive fails with the error "last block does not have enough number of replicas":

2018-10-15 2018-07-17
2018-10-15 10:00:01
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0

Logging initialized using configuration in jar:file:/data/cloudera/parcels/CDH-5.11.0-1.cdh5.11.0.p0.34/jars/hive-common-1.1.0-cdh5.11.0.jar!/hive-log4j.properties
Query ID = work_20181015100000_e24dc755-be3e-4d26-b088-f7195d4a9f6d
Total jobs = 1
Stage-1 is selected by condition resolver.
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1099
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
java.io.IOException: Unable to close file because the last block BP-1541923511-10.28.4.4-1501148646603:blk_1906958696_833801584 does not have enough number of replicas.
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2705)
    at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2667)
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2621)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
    at org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:203)
    at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:128)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
    at java.security.AccessController.doPrivileged(Native Method)
    ...
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:578)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:573)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:573)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:564)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:418)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:142)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1979)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1692)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1424)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1208)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1198)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:220)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:172)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:383)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:318)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:720)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:693)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Job Submission failed with exception 'java.io.IOException(Unable to close file because the last block BP-1541923511-10.28.4.4-1501148646603:blk_1906958696_833801584 does not have enough number of replicas.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Reference: [HDFS] Hive job fails with HDFS error: last block does not have enough number of replicas. Per that post, the error is caused by excessive load on the Hadoop servers, and simply re-running the Hive SQL script is enough to get past it. To solve the problem for good, however, you need to reduce job concurrency or cap CPU usage, so that network transmission eases off and the DataNodes can report their blocks to the NameNode in time (a sketch of the relevant session settings follows).
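As an illustration of "reduce job concurrency", the three knobs that the job log itself prints can be used to cap reducer parallelism for a heavy query. This is only a sketch: the values below are hypothetical, chosen against the 1099 reducers the failing job estimated, not settings from the original post.

    -- Hive session settings, issued before running the heavy query.
    -- The failing job estimated 1099 reducers from input size; cap it far lower.
    set hive.exec.reducers.max=200;                       -- hard upper bound on reducer count
    set hive.exec.reducers.bytes.per.reducer=1073741824;  -- 1 GB per reducer => fewer, larger reducers
    -- Or pin an exact count instead of letting Hive estimate one:
    -- set mapreduce.job.reduces=100;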

Conclusion:

Reduce the system load. When the failure occurred the cluster was under very heavy load: all 32 CPU cores were at 100%, fully allocated to running MapReduce tasks. At least 20% of the CPU should be left free.
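Beyond keeping CPU headroom, a client-side mitigation commonly applied to this exact exception is to let the HDFS client retry completeFile() more times before giving up, giving the DataNodes more time to report the last block. On Hadoop 2.x / CDH 5.x clients this is governed by dfs.client.block.write.locateFollowingBlock.retries (default 5). A minimal sketch for the client-side hdfs-site.xml, assuming a CDH 5.11 gateway like the one in the log above; the value 10 is an arbitrary example, not a value from the original post:

    <!-- hdfs-site.xml on the client (e.g., the Hive gateway host).         -->
    <!-- The value 10 is a hypothetical example; the shipped default is 5.  -->
    <property>
      <name>dfs.client.block.write.locateFollowingBlock.retries</name>
      <value>10</value>
      <description>Number of times the DFS client retries addBlock()/completeFile()
        while waiting for the NameNode to see enough replicas of the last block;
        the wait between attempts grows with each retry.</description>
    </property>

Note that raising the retry count only widens the window the client is willing to wait; the root cause remains the overloaded cluster described above.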
