[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.2.jar pi 1 1
Number of Maps = 1
Samples per Map = 1
Wrote input for Map #0
Starting Job
16/08/10 03:27:13 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.150.30:8032
16/08/10 03:27:14 INFO input.FileInputFormat: Total input paths to process : 1
16/08/10 03:27:14 INFO mapreduce.JobSubmitter: number of splits:1
16/08/10 03:27:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1470824551451_0002
16/08/10 03:27:15 INFO impl.YarnClientImpl: Submitted application application_1470824551451_0002
16/08/10 03:27:15 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1470824551451_0002/
16/08/10 03:27:15 INFO mapreduce.Job: Running job: job_1470824551451_0002
16/08/10 03:27:27 INFO mapreduce.Job: Job job_1470824551451_0002 running in uber mode : false
16/08/10 03:27:27 INFO mapreduce.Job: map 0% reduce 0%
16/08/10 03:27:27 INFO mapreduce.Job: Job job_1470824551451_0002 failed with state FAILED due to: Application application_1470824551451_0002 failed 2 times due to AM Container for appattempt_1470824551451_0002_000002 exited with exitCode: -103
For more detailed output, check application tracking page: http://master:8088/cluster/app/application_1470824551451_0002 Then, click on links to logs of each attempt.
Diagnostics: Container [pid=4927,containerID=container_1470824551451_0002_02_000001] is running beyond virtual memory limits. Current usage: 86.9 MB of 600 MB physical memory used; 1.6 GB of 1.2 GB virtual memory used. Killing container.
Dump of the process-tree for container_1470824551451_0002_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 4927 4925 4927 4927 (bash) 0 0 108650496 297 /bin/bash -c /usr/java/latest/bin/java -Djava.io.tmpdir=/hadoop2/tmp/nm-local-dir/usercache/hadoop/appcache/application_1470824551451_0002/container_1470824551451_0002_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop/hadoop-2.7.2/logs/userlogs/application_1470824551451_0002/container_1470824551451_0002_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/home/hadoop/hadoop-2.7.2/logs/userlogs/application_1470824551451_0002/container_1470824551451_0002_02_000001/stdout 2>/home/hadoop/hadoop-2.7.2/logs/userlogs/application_1470824551451_0002/container_1470824551451_0002_02_000001/stderr
|- 4935 4927 4927 4927 (java) 400 11 1589190656 21946 /usr/java/latest/bin/java -Djava.io.tmpdir=/hadoop2/tmp/nm-local-dir/usercache/hadoop/appcache/application_1470824551451_0002/container_1470824551451_0002_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop/hadoop-2.7.2/logs/userlogs/application_1470824551451_0002/container_1470824551451_0002_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
16/08/10 03:27:27 INFO mapreduce.Job: Counters: 0
Job Finished in 14.143 seconds
java.io.FileNotFoundException: File does not exist: hdfs://master:9000/user/hadoop/QuasiMonteCarlo_1470824830863_559961706/out/reduce-out
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1819)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1843)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
[hadoop@master mapreduce]$
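The failure above is YARN's virtual-memory check at work: the NodeManager caps each container's virtual memory at its physical allocation times `yarn.nodemanager.vmem-pmem-ratio` (2.1 by default), so the 600 MB allocation yields the 1.2 GB ceiling shown in the diagnostic, and the AM JVM's 1.6 GB of virtual memory exceeds it. Besides enlarging the allocations as done next, a commonly used alternative is to relax or disable the check in yarn-site.xml (a sketch; the values here are illustrative):

```xml
<!-- Alternative sketch: relax YARN's virtual-memory enforcement instead of
     enlarging the container allocations. The ratio value 4 is illustrative. -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
<!-- Or turn the virtual-memory check off entirely: -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```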
[hadoop@master mapreduce]$ vi /home/hadoop/hadoop-2.7.2/etc/hadoop/yarn-site.xml
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
  <property>
    <!-- Setting this too low is what caused the error above -->
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2000</value>
  </property>
  <property>
    <!-- Setting this too low is what caused the error above -->
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>3000</value>
  </property>
</configuration>
"~/hadoop-2.7.2/etc/hadoop/yarn-site.xml" 60L, 1676C written
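Raising `yarn.scheduler.minimum-allocation-mb` to 2000 means every container, including the AM's, is granted at least 2000 MB of physical memory; with the default 2.1 `yarn.nodemanager.vmem-pmem-ratio` that implies a virtual-memory ceiling of about 4.2 GB, comfortably above the ~1.6 GB the AM JVM was actually using. A quick sketch of the arithmetic:

```python
# Sketch of how the NodeManager derives the virtual-memory cap reported in
# the diagnostic above ("1.6 GB of 1.2 GB virtual memory used").
def vmem_cap_mb(pmem_alloc_mb, vmem_pmem_ratio=2.1):
    """Virtual-memory ceiling for a container, in MB."""
    return pmem_alloc_mb * vmem_pmem_ratio

# Before the fix: 600 MB allocation -> about 1260 MB (~1.2 GB), exceeded by 1.6 GB.
before = vmem_cap_mb(600)
# After the fix: 2000 MB minimum allocation -> about 4200 MB (~4.1 GB).
after = vmem_cap_mb(2000)
```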
[hadoop@master mapreduce]$
[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.2.jar pi 1 1
Number of Maps = 1
Samples per Map = 1
Wrote input for Map #0
Starting Job
16/08/10 03:37:35 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.150.30:8032
16/08/10 03:37:36 INFO input.FileInputFormat: Total input paths to process : 1
16/08/10 03:37:36 INFO mapreduce.JobSubmitter: number of splits:1
16/08/10 03:37:36 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1470825334740_0001
16/08/10 03:37:37 INFO impl.YarnClientImpl: Submitted application application_1470825334740_0001
16/08/10 03:37:37 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1470825334740_0001/
16/08/10 03:37:37 INFO mapreduce.Job: Running job: job_1470825334740_0001
16/08/10 03:37:52 INFO mapreduce.Job: Job job_1470825334740_0001 running in uber mode : false
16/08/10 03:37:52 INFO mapreduce.Job: map 0% reduce 0%
16/08/10 03:38:04 INFO mapreduce.Job: map 100% reduce 0%
16/08/10 03:38:15 INFO mapreduce.Job: map 100% reduce 100%
16/08/10 03:38:15 INFO mapreduce.Job: Job job_1470825334740_0001 completed successfully
16/08/10 03:38:15 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=28
FILE: Number of bytes written=235587
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=263
HDFS: Number of bytes written=215
HDFS: Number of read operations=7
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=9118
Total time spent by all reduces in occupied slots (ms)=7967
Total time spent by all map tasks (ms)=9118
Total time spent by all reduce tasks (ms)=7967
Total vcore-milliseconds taken by all map tasks=9118
Total vcore-milliseconds taken by all reduce tasks=7967
Total megabyte-milliseconds taken by all map tasks=4668416
Total megabyte-milliseconds taken by all reduce tasks=4079104
Map-Reduce Framework
Map input records=1
Map output records=2
Map output bytes=18
Map output materialized bytes=28
Input split bytes=145
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=28
Reduce input records=2
Reduce output records=0
Spilled Records=4
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=221
CPU time spent (ms)=2510
Physical memory (bytes) snapshot=311136256
Virtual memory (bytes) snapshot=1682472960
Total committed heap usage (bytes)=164040704
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=118
File Output Format Counters
Bytes Written=97
Job Finished in 40.215 seconds
Estimated value of Pi is 4.00000000000000000000
[hadoop@master mapreduce]$
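An estimate of exactly 4.0 is the expected result of `pi 1 1`: the example estimates pi as 4 × (fraction of sample points that land inside a circle inscribed in the unit square), so with a single sample the only possible answers are 0.0 and 4.0. A minimal sketch of the idea (plain pseudo-random sampling, not the Halton quasi-random sequence the Hadoop example actually uses):

```python
import random

def estimate_pi(num_samples, seed=42):
    """Estimate pi as 4 * (points inside the inscribed circle / total points)."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        # Sample a point in the square [-0.5, 0.5) x [-0.5, 0.5).
        x, y = rng.random() - 0.5, rng.random() - 0.5
        if x * x + y * y <= 0.25:  # inside the circle of radius 0.5
            inside += 1
    return 4.0 * inside / num_samples

# With one sample the estimate can only be 0.0 or 4.0 -- hence the 4.0 above.
# Larger sample counts converge toward 3.14159...
```

Passing larger arguments to the Hadoop job, e.g. `hadoop jar hadoop-mapreduce-examples-2.7.2.jar pi 16 100000`, narrows the estimate in the same way.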