Hadoop runtime error: java.io.FileNotFoundException (QuasiMonteCarlo)

[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.2.jar pi 1 1
Number of Maps  = 1
Samples per Map = 1
Wrote input for Map #0
Starting Job
16/08/10 03:27:13 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.150.30:8032
16/08/10 03:27:14 INFO input.FileInputFormat: Total input paths to process : 1
16/08/10 03:27:14 INFO mapreduce.JobSubmitter: number of splits:1
16/08/10 03:27:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1470824551451_0002
16/08/10 03:27:15 INFO impl.YarnClientImpl: Submitted application application_1470824551451_0002
16/08/10 03:27:15 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1470824551451_0002/
16/08/10 03:27:15 INFO mapreduce.Job: Running job: job_1470824551451_0002
16/08/10 03:27:27 INFO mapreduce.Job: Job job_1470824551451_0002 running in uber mode : false
16/08/10 03:27:27 INFO mapreduce.Job:  map 0% reduce 0%
16/08/10 03:27:27 INFO mapreduce.Job: Job job_1470824551451_0002 failed with state FAILED due to: Application application_1470824551451_0002 failed 2 times due to AM Container for appattempt_1470824551451_0002_000002 exited with  exitCode: -103
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1470824551451_0002Then, click on links to logs of each attempt.
Diagnostics: Container [pid=4927,containerID=container_1470824551451_0002_02_000001] is running beyond virtual memory limits. Current usage: 86.9 MB of 600 MB physical memory used; 1.6 GB of 1.2 GB virtual memory used. Killing container.
Dump of the process-tree for container_1470824551451_0002_02_000001 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 4927 4925 4927 4927 (bash) 0 0 108650496 297 /bin/bash -c /usr/java/latest/bin/java -Djava.io.tmpdir=/hadoop2/tmp/nm-local-dir/usercache/hadoop/appcache/application_1470824551451_0002/container_1470824551451_0002_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop/hadoop-2.7.2/logs/userlogs/application_1470824551451_0002/container_1470824551451_0002_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog  -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/home/hadoop/hadoop-2.7.2/logs/userlogs/application_1470824551451_0002/container_1470824551451_0002_02_000001/stdout 2>/home/hadoop/hadoop-2.7.2/logs/userlogs/application_1470824551451_0002/container_1470824551451_0002_02_000001/stderr  
        |- 4935 4927 4927 4927 (java) 400 11 1589190656 21946 /usr/java/latest/bin/java -Djava.io.tmpdir=/hadoop2/tmp/nm-local-dir/usercache/hadoop/appcache/application_1470824551451_0002/container_1470824551451_0002_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop/hadoop-2.7.2/logs/userlogs/application_1470824551451_0002/container_1470824551451_0002_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
16/08/10 03:27:27 INFO mapreduce.Job: Counters: 0
Job Finished in 14.143 seconds
java.io.FileNotFoundException: File does not exist: hdfs://master:9000/user/hadoop/QuasiMonteCarlo_1470824830863_559961706/out/reduce-out
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1819)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1843)
        at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
        at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
[hadoop@master mapreduce]$
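The FileNotFoundException at the end is only a symptom: the pi example reads its reduce output (reduce-out) after the job finishes, and because the job failed, that file was never written. The real cause is in the diagnostics above: the ApplicationMaster container was killed for exceeding its virtual memory limit (1.6 GB used against a 1.2 GB cap, which is the 600 MB physical allocation multiplied by the default yarn.nodemanager.vmem-pmem-ratio of 2.1). If more detail is needed, the full container logs can be pulled with the yarn CLI; the command below is a sketch and assumes log aggregation is enabled (otherwise the same logs sit under the NodeManager userlogs directory shown in the process-tree dump).

# Dump the aggregated container logs for the failed application
# (assumes yarn.log-aggregation-enable is true on the cluster)
yarn logs -applicationId application_1470824551451_0002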
[hadoop@master mapreduce]$ vi /home/hadoop/hadoop2/etc/hadoop/yarn-site.xml 
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>

    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>

    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>

    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <!-- Setting the memory allocation too low is what causes errors like the one above -->
        <value>2000</value>
    </property>

    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <!-- Setting the memory allocation too low is what causes errors like the one above -->
        <value>3000</value>
    </property>

</configuration>

"~/hadoop-2.7.2/etc/hadoop/yarn-site.xml" 60L, 1676C written
[hadoop@master mapreduce]$ 
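The change above raises yarn.scheduler.minimum-allocation-mb and yarn.scheduler.maximum-allocation-mb so each container receives a larger memory allocation, which in turn raises the virtual memory cap derived from it. An alternative that is often used for this specific error is to loosen or disable YARN's virtual memory check on the NodeManagers; the snippet below is a sketch of that approach with illustrative values, and it requires a NodeManager restart to take effect.

    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <!-- Allow 4x virtual memory per MB of physical memory (the default is 2.1) -->
        <value>4</value>
    </property>

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <!-- Or skip the virtual memory check entirely -->
        <value>false</value>
    </property>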
[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.2.jar pi 1 1
Number of Maps  = 1
Samples per Map = 1
Wrote input for Map #0
Starting Job
16/08/10 03:37:35 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.150.30:8032
16/08/10 03:37:36 INFO input.FileInputFormat: Total input paths to process : 1
16/08/10 03:37:36 INFO mapreduce.JobSubmitter: number of splits:1
16/08/10 03:37:36 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1470825334740_0001
16/08/10 03:37:37 INFO impl.YarnClientImpl: Submitted application application_1470825334740_0001
16/08/10 03:37:37 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1470825334740_0001/
16/08/10 03:37:37 INFO mapreduce.Job: Running job: job_1470825334740_0001
16/08/10 03:37:52 INFO mapreduce.Job: Job job_1470825334740_0001 running in uber mode : false
16/08/10 03:37:52 INFO mapreduce.Job:  map 0% reduce 0%
16/08/10 03:38:04 INFO mapreduce.Job:  map 100% reduce 0%
16/08/10 03:38:15 INFO mapreduce.Job:  map 100% reduce 100%
16/08/10 03:38:15 INFO mapreduce.Job: Job job_1470825334740_0001 completed successfully
16/08/10 03:38:15 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=28
                FILE: Number of bytes written=235587
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=263
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=7
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=9118
                Total time spent by all reduces in occupied slots (ms)=7967
                Total time spent by all map tasks (ms)=9118
                Total time spent by all reduce tasks (ms)=7967
                Total vcore-milliseconds taken by all map tasks=9118
                Total vcore-milliseconds taken by all reduce tasks=7967
                Total megabyte-milliseconds taken by all map tasks=4668416
                Total megabyte-milliseconds taken by all reduce tasks=4079104
        Map-Reduce Framework
                Map input records=1
                Map output records=2
                Map output bytes=18
                Map output materialized bytes=28
                Input split bytes=145
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=28
                Reduce input records=2
                Reduce output records=0
                Spilled Records=4
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=221
                CPU time spent (ms)=2510
                Physical memory (bytes) snapshot=311136256
                Virtual memory (bytes) snapshot=1682472960
                Total committed heap usage (bytes)=164040704
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=118
        File Output Format Counters 
                Bytes Written=97
Job Finished in 40.215 seconds
Estimated value of Pi is 4.00000000000000000000
[hadoop@master mapreduce]$ 
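The job now completes, and an estimate of 4.0 is expected here: with a single map and a single sample, the one sample point either falls inside the inscribed circle (estimate 4) or outside it (estimate 0). For a meaningful estimate, rerun the example with more maps and samples, for instance:

hadoop jar hadoop-mapreduce-examples-2.7.2.jar pi 10 100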