Running wordcount on a pseudo-distributed Hadoop system

When testing the wordcount example, the job failed after running: the output directory (I named it /out) was created, but it contained no files.

The error output is below.

Because I forgot to save my own error output, the messages below are copied from another blogger's post.
16/09/01 09:32:29 INFO mapreduce.Job: Running job: job_1472644198158_0001
16/09/01 09:32:46 INFO mapreduce.Job: Job job_1472644198158_0001 running in uber mode : false
16/09/01 09:32:46 INFO mapreduce.Job: map 0% reduce 0%
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_0, Status : FAILED
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_0, Status : FAILED
16/09/01 09:33:25 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_1, Status : FAILED
16/09/01 09:33:29 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_1, Status : FAILED
16/09/01 09:33:41 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_2, Status : FAILED
16/09/01 09:33:45 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_2, Status : FAILED
16/09/01 09:33:58 INFO mapreduce.Job: map 100% reduce 100%
16/09/01 09:33:58 INFO mapreduce.Job: Job job_1472644198158_0001 failed with state FAILED due to: Task failed task_1472644198158_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0

16/09/01 09:33:58 INFO mapreduce.Job: Counters: 17
Job Counters
Failed map tasks=7
Killed map tasks=1
Killed reduce tasks=1
Launched map tasks=8
Other local map tasks=6
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=123536
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=123536
Total time spent by all reduce tasks (ms)=0
Total vcore-milliseconds taken by all map tasks=123536
Total vcore-milliseconds taken by all reduce tasks=0
Total megabyte-milliseconds taken by all map tasks=126500864
Total megabyte-milliseconds taken by all reduce tasks=0
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
[root@slave1 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
16/09/01 10:16:30 INFO client.RMProxy: Connecting to ResourceManager at /114.XXX.XXX.XXX:8032
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://master:9000/output already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
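
The FileAlreadyExistsException at the end of the log is a separate, well-known issue: MapReduce refuses to write into an output directory that already exists. Clearing it is just a matter of deleting the directory before rerunning, for example (paths as in the log above):

hdfs dfs -rm -r /output
hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output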

Solution

Hadoop version: 2.10.0
Looking back over the Hadoop configuration and the wordcount run, the error probably came from one of the following:

  1. A mistake in the yarn-site.xml configuration
  2. Accidentally formatting the NameNode twice, and not cleaning up completely when undoing the second format

(I haven't figured out the exact cause yet; leaving this as an open question to come back and answer once I do.)

Fix the yarn-site.xml configuration

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
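
After changing yarn-site.xml, YARN needs a restart for the corrected aux-service setting to take effect. A quick sketch, assuming the Hadoop sbin scripts are on the PATH:

stop-yarn.sh     # stop the ResourceManager and NodeManager
start-yarn.sh    # start them again with the updated yarn-site.xml
jps              # ResourceManager and NodeManager should both show up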

I won't go over the other files here: hadoop-env.sh, core-site.xml, hdfs-site.xml, and mapred-site.xml.

Fully undo the format operation:
When configuring the files above, I had manually created the storage directories /tmp/dfs/name and /tmp/dfs/data (that is my own naming). When undoing the second format, those directories should have been deleted as well, so I used rm -r <directory> to delete the tmp directory and everything under it, then formatted the NameNode again.
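In shell terms the cleanup is roughly the following (the tmp path is my own naming; substitute whatever directory your hadoop.tmp.dir and dfs.namenode.name.dir actually point to):

stop-all.sh              # stop HDFS and YARN before touching the storage directories
rm -r tmp                # remove the tmp directory that holds dfs/name and dfs/data
hdfs namenode -format    # format the NameNode again from a clean state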
Start everything with start-all.sh, then check jps, localhost:50070, and localhost:8088; all of them look normal.
Running wordcount now also succeeds, and the generated output directory contains two files.
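
For reference, a sketch of the re-run and of checking the output (the jar name assumes a Hadoop 2.10.0 install, /out is the output directory name I used, and words.txt is just a placeholder input file):

hdfs dfs -mkdir -p /input
hdfs dfs -put words.txt /input                                        # placeholder input file
hadoop jar hadoop-mapreduce-examples-2.10.0.jar wordcount /input /out
hdfs dfs -ls /out                                                     # the two files: _SUCCESS and part-r-00000
hdfs dfs -cat /out/part-r-00000                                       # the word counts themselves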
