First run:
[root@sjfx jar]# hadoop jar /home/tangzw/jar/GameLoginLogAnalyzeA.jar /tangzw/input /tangzw/output
08/01/25 07:13:43 INFO client.RMProxy: Connecting to ResourceManager at sjfx/192.168.57.127:18040
08/01/25 07:13:43 INFO client.RMProxy: Connecting to ResourceManager at sjfx/192.168.57.127:18040
08/01/25 07:13:43 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
08/01/25 07:13:44 INFO mapred.FileInputFormat: Total input paths to process : 13
08/01/25 07:13:44 INFO mapreduce.JobSubmitter: number of splits:103
08/01/25 07:13:44 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
08/01/25 07:13:44 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
08/01/25 07:13:44 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
08/01/25 07:13:44 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
08/01/25 07:13:44 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
08/01/25 07:13:44 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
08/01/25 07:13:44 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
08/01/25 07:13:44 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
08/01/25 07:13:44 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
08/01/25 07:13:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1201209004442_0001
08/01/25 07:13:46 INFO impl.YarnClientImpl: Submitted application application_1201209004442_0001 to ResourceManager at sjfx/192.168.57.127:18040
08/01/25 07:13:46 INFO mapreduce.Job: The url to track the job: http://sjfx:18088/proxy/application_1201209004442_0001/
08/01/25 07:13:46 INFO mapreduce.Job: Running job: job_1201209004442_0001
08/01/25 07:13:57 INFO mapreduce.Job: Job job_1201209004442_0001 running in uber mode : false
08/01/25 07:13:57 INFO mapreduce.Job: map 0% reduce 0%
08/01/25 07:14:36 INFO mapreduce.Job: map 2% reduce 0%
08/01/25 07:14:46 INFO mapreduce.Job: map 3% reduce 0%
08/01/25 07:14:51 INFO mapreduce.Job: map 4% reduce 0%
08/01/25 07:14:54 INFO mapreduce.Job: map 5% reduce 0%
08/01/25 07:14:57 INFO mapreduce.Job: map 6% reduce 0%
08/01/25 07:15:04 INFO mapreduce.Job: map 7% reduce 0%
08/01/25 07:15:16 INFO mapreduce.Job: map 8% reduce 0%
08/01/25 07:15:22 INFO mapreduce.Job: map 9% reduce 0%
08/01/25 07:17:01 INFO mapreduce.Job: map 10% reduce 0%
08/01/25 07:17:56 INFO mapreduce.Job: map 11% reduce 0%
08/01/25 07:18:38 INFO mapreduce.Job: map 12% reduce 0%
08/01/25 07:19:11 INFO mapreduce.Job: map 13% reduce 0%
08/01/25 07:19:20 INFO mapreduce.Job: map 14% reduce 0%
08/01/25 07:19:34 INFO mapreduce.Job: map 15% reduce 0%
08/01/25 07:20:37 INFO mapreduce.Job: map 16% reduce 0%
08/01/25 07:20:52 INFO mapreduce.Job: map 17% reduce 0%
08/01/25 07:21:39 INFO mapreduce.Job: map 18% reduce 0%
08/01/25 07:22:06 INFO mapreduce.Job: map 19% reduce 0%
08/01/25 07:23:18 INFO mapreduce.Job: map 20% reduce 0%
08/01/25 07:23:38 INFO mapreduce.Job: map 21% reduce 0%
08/01/25 07:24:24 INFO mapreduce.Job: map 24% reduce 0%
08/01/25 07:24:30 INFO mapreduce.Job: map 25% reduce 0%
08/01/25 07:25:07 INFO mapreduce.Job: map 26% reduce 0%
08/01/25 07:25:35 INFO mapreduce.Job: map 27% reduce 0%
08/01/25 07:26:10 INFO mapreduce.Job: map 28% reduce 0%
08/01/25 07:26:19 INFO mapreduce.Job: map 29% reduce 0%
08/01/25 07:26:35 INFO mapreduce.Job: map 29% reduce 1%
08/01/25 07:27:13 INFO mapreduce.Job: map 30% reduce 1%
08/01/25 07:27:24 INFO mapreduce.Job: map 31% reduce 1%
08/01/25 07:28:42 INFO mapreduce.Job: map 32% reduce 1%
08/01/25 07:28:50 INFO mapreduce.Job: map 33% reduce 1%
08/01/25 07:28:56 INFO mapreduce.Job: map 34% reduce 1%
08/01/25 07:29:02 INFO mapreduce.Job: map 35% reduce 1%
08/01/25 07:29:05 INFO mapreduce.Job: map 36% reduce 1%
08/01/25 07:30:03 INFO mapreduce.Job: map 37% reduce 1%
08/01/25 07:30:31 INFO mapreduce.Job: map 38% reduce 1%
08/01/25 07:30:56 INFO mapreduce.Job: map 39% reduce 1%
08/01/25 07:32:16 INFO mapreduce.Job: map 40% reduce 1%
08/01/25 07:33:15 INFO mapreduce.Job: map 41% reduce 1%
08/01/25 07:33:27 INFO mapreduce.Job: map 42% reduce 1%
08/01/25 07:33:31 INFO mapreduce.Job: map 43% reduce 1%
08/01/25 07:33:36 INFO mapreduce.Job: map 44% reduce 1%
08/01/25 07:33:40 INFO mapreduce.Job: map 45% reduce 1%
08/01/25 07:34:26 INFO mapreduce.Job: map 46% reduce 1%
08/01/25 07:35:30 INFO mapreduce.Job: map 46% reduce 2%
08/01/25 07:35:43 INFO mapreduce.Job: map 47% reduce 2%
08/01/25 07:35:50 INFO mapreduce.Job: map 48% reduce 2%
08/01/25 07:36:04 INFO mapreduce.Job: map 49% reduce 2%
08/01/25 07:37:31 INFO mapreduce.Job: map 50% reduce 2%
08/01/25 07:38:33 INFO mapreduce.Job: map 51% reduce 2%
08/01/25 07:39:26 INFO mapreduce.Job: map 52% reduce 2%
08/01/25 07:39:29 INFO mapreduce.Job: map 53% reduce 2%
08/01/25 07:39:35 INFO mapreduce.Job: map 54% reduce 2%
08/01/25 07:39:38 INFO mapreduce.Job: map 55% reduce 2%
08/01/25 07:40:25 INFO mapreduce.Job: map 56% reduce 2%
08/01/25 07:40:58 INFO mapreduce.Job: map 56% reduce 3%
08/01/25 07:41:25 INFO mapreduce.Job: map 57% reduce 3%
08/01/25 07:42:17 INFO mapreduce.Job: map 58% reduce 3%
08/01/25 07:43:12 INFO mapreduce.Job: map 59% reduce 3%
08/01/25 07:43:18 INFO mapreduce.Job: map 60% reduce 3%
08/01/25 07:43:42 INFO mapreduce.Job: map 61% reduce 3%
08/01/25 07:44:19 INFO mapreduce.Job: map 62% reduce 3%
08/01/25 07:45:12 INFO mapreduce.Job: map 63% reduce 3%
08/01/25 07:46:01 INFO mapreduce.Job: map 64% reduce 3%
08/01/25 07:46:33 INFO mapreduce.Job: map 64% reduce 4%
08/01/25 07:46:36 INFO mapreduce.Job: map 65% reduce 4%
08/01/25 07:47:02 INFO mapreduce.Job: map 66% reduce 4%
08/01/25 07:47:08 INFO mapreduce.Job: map 67% reduce 4%
08/01/25 07:47:30 INFO mapreduce.Job: map 68% reduce 4%
08/01/25 07:47:36 INFO mapreduce.Job: map 69% reduce 4%
08/01/25 07:47:42 INFO mapreduce.Job: map 70% reduce 4%
08/01/25 07:47:51 INFO mapreduce.Job: map 71% reduce 4%
08/01/25 07:48:13 INFO mapreduce.Job: map 72% reduce 4%
08/01/25 07:48:19 INFO mapreduce.Job: map 73% reduce 4%
08/01/25 07:48:40 INFO mapreduce.Job: map 74% reduce 4%
08/01/25 07:48:53 INFO mapreduce.Job: map 75% reduce 4%
08/01/25 07:49:15 INFO mapreduce.Job: map 76% reduce 4%
08/01/25 07:49:55 INFO mapreduce.Job: map 77% reduce 4%
08/01/25 07:51:06 INFO mapreduce.Job: map 78% reduce 4%
08/01/25 07:51:48 INFO mapreduce.Job: map 78% reduce 5%
08/01/25 07:51:54 INFO mapreduce.Job: map 79% reduce 5%
08/01/25 07:52:06 INFO mapreduce.Job: map 80% reduce 5%
08/01/25 07:52:10 INFO mapreduce.Job: map 81% reduce 5%
08/01/25 07:52:19 INFO mapreduce.Job: map 82% reduce 5%
08/01/25 07:52:25 INFO mapreduce.Job: map 83% reduce 5%
08/01/25 07:53:10 INFO mapreduce.Job: map 84% reduce 5%
08/01/25 07:54:10 INFO mapreduce.Job: map 85% reduce 6%
08/01/25 07:54:27 INFO mapreduce.Job: map 86% reduce 6%
08/01/25 07:54:31 INFO mapreduce.Job: map 87% reduce 6%
08/01/25 07:54:34 INFO mapreduce.Job: map 88% reduce 6%
08/01/25 07:54:37 INFO mapreduce.Job: map 89% reduce 6%
08/01/25 07:54:48 INFO mapreduce.Job: map 90% reduce 6%
08/01/25 07:55:17 INFO mapreduce.Job: map 91% reduce 6%
08/01/25 07:55:33 INFO mapreduce.Job: map 93% reduce 6%
08/01/25 07:55:36 INFO mapreduce.Job: map 94% reduce 6%
08/01/25 07:55:40 INFO mapreduce.Job: map 95% reduce 6%
08/01/25 07:55:49 INFO mapreduce.Job: map 96% reduce 6%
08/01/25 07:55:56 INFO mapreduce.Job: map 97% reduce 6%
08/01/25 07:57:25 INFO mapreduce.Job: map 98% reduce 6%
08/01/25 07:57:28 INFO mapreduce.Job: map 99% reduce 6%
08/01/25 07:57:45 INFO mapreduce.Job: map 100% reduce 6%
08/01/25 07:58:50 INFO mapreduce.Job: map 100% reduce 7%
08/01/25 08:00:01 INFO mapreduce.Job: map 100% reduce 8%
08/01/25 08:01:10 INFO mapreduce.Job: map 100% reduce 9%
08/01/25 08:02:44 INFO mapreduce.Job: map 100% reduce 10%
08/01/25 08:03:45 INFO mapreduce.Job: map 100% reduce 11%
08/01/25 08:04:07 INFO mapreduce.Job: Task Id : attempt_1201209004442_0001_r_000000_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#1
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:121)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:380)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/attempt_1201209004442_0001_r_000000_0/map_41.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:398)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getInputFileForWrite(YarnOutputFiles.java:213)
at org.apache.hadoop.mapreduce.task.reduce.OnDiskMapOutput.<init>(OnDiskMapOutput.java:61)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:257)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:411)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:341)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)
08/01/25 08:04:08 INFO mapreduce.Job: map 100% reduce 0%
08/01/25 08:05:01 INFO mapreduce.Job: Task Id : attempt_1201209004442_0001_r_000000_1, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#5
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:121)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:380)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/attempt_1201209004442_0001_r_000000_1/map_11.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:398)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getInputFileForWrite(YarnOutputFiles.java:213)
at org.apache.hadoop.mapreduce.task.reduce.OnDiskMapOutput.<init>(OnDiskMapOutput.java:61)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:257)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:411)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:341)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)
08/01/25 08:05:17 INFO mapreduce.Job: map 100% reduce 1%
08/01/25 08:05:42 INFO mapreduce.Job: map 100% reduce 2%
08/01/25 08:05:58 INFO mapreduce.Job: map 100% reduce 3%
08/01/25 08:06:26 INFO mapreduce.Job: map 100% reduce 4%
08/01/25 08:07:15 INFO mapreduce.Job: map 100% reduce 5%
08/01/25 08:07:48 INFO mapreduce.Job: map 100% reduce 6%
08/01/25 08:08:21 INFO mapreduce.Job: Task Id : attempt_1201209004442_0001_r_000000_2, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:121)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:380)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/attempt_1201209004442_0001_r_000000_2/map_20.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:398)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getInputFileForWrite(YarnOutputFiles.java:213)
at org.apache.hadoop.mapreduce.task.reduce.OnDiskMapOutput.<init>(OnDiskMapOutput.java:61)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:257)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:411)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:341)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)
08/01/25 08:08:22 INFO mapreduce.Job: map 100% reduce 0%
Second run:
[root@sjfx jar]# hadoop jar GameLoginLogAnalyzeA.jar /tangzw/input /tangzw/output
14/10/04 11:30:36 INFO client.RMProxy: Connecting to ResourceManager at sjfx/192.168.57.127:18040
14/10/04 11:30:36 INFO client.RMProxy: Connecting to ResourceManager at sjfx/192.168.57.127:18040
14/10/04 11:30:37 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
14/10/04 11:30:38 INFO mapred.FileInputFormat: Total input paths to process : 13
14/10/04 11:30:38 INFO mapreduce.JobSubmitter: number of splits:103
14/10/04 11:30:38 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/10/04 11:30:38 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/10/04 11:30:38 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/10/04 11:30:38 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/10/04 11:30:38 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/10/04 11:30:38 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/10/04 11:30:38 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/10/04 11:30:38 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/10/04 11:30:38 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/10/04 11:30:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1412391426441_0002
14/10/04 11:30:39 INFO impl.YarnClientImpl: Submitted application application_1412391426441_0002 to ResourceManager at sjfx/192.168.57.127:18040
14/10/04 11:30:39 INFO mapreduce.Job: The url to track the job: http://sjfx:18088/proxy/application_1412391426441_0002/
14/10/04 11:30:39 INFO mapreduce.Job: Running job: job_1412391426441_0002
14/10/04 11:30:48 INFO mapreduce.Job: Job job_1412391426441_0002 running in uber mode : false
14/10/04 11:30:48 INFO mapreduce.Job: map 0% reduce 0%
14/10/04 11:31:27 INFO mapreduce.Job: map 1% reduce 0%
14/10/04 11:31:31 INFO mapreduce.Job: map 4% reduce 0%
14/10/04 11:31:46 INFO mapreduce.Job: map 5% reduce 0%
14/10/04 11:31:51 INFO mapreduce.Job: map 7% reduce 0%
14/10/04 11:32:08 INFO mapreduce.Job: map 9% reduce 0%
14/10/04 11:32:11 INFO mapreduce.Job: map 10% reduce 0%
14/10/04 11:32:35 INFO mapreduce.Job: map 11% reduce 0%
14/10/04 11:33:16 INFO mapreduce.Job: map 12% reduce 0%
14/10/04 11:33:19 INFO mapreduce.Job: map 13% reduce 0%
14/10/04 11:33:31 INFO mapreduce.Job: map 14% reduce 0%
14/10/04 11:33:35 INFO mapreduce.Job: map 15% reduce 0%
14/10/04 11:33:42 INFO mapreduce.Job: map 16% reduce 0%
14/10/04 11:34:00 INFO mapreduce.Job: map 17% reduce 0%
14/10/04 11:34:25 INFO mapreduce.Job: map 18% reduce 0%
14/10/04 11:34:38 INFO mapreduce.Job: map 19% reduce 0%
14/10/04 11:35:05 INFO mapreduce.Job: map 20% reduce 0%
14/10/04 11:35:16 INFO mapreduce.Job: map 21% reduce 0%
14/10/04 11:35:21 INFO mapreduce.Job: map 22% reduce 0%
14/10/04 11:35:32 INFO mapreduce.Job: map 23% reduce 0%
14/10/04 11:36:04 INFO mapreduce.Job: map 24% reduce 0%
14/10/04 11:36:23 INFO mapreduce.Job: map 24% reduce 1%
14/10/04 11:36:48 INFO mapreduce.Job: map 25% reduce 1%
14/10/04 11:36:51 INFO mapreduce.Job: map 26% reduce 1%
14/10/04 11:36:54 INFO mapreduce.Job: map 27% reduce 1%
14/10/04 11:36:57 INFO mapreduce.Job: map 28% reduce 1%
14/10/04 11:37:06 INFO mapreduce.Job: map 29% reduce 1%
14/10/04 11:37:22 INFO mapreduce.Job: map 31% reduce 1%
14/10/04 11:37:32 INFO mapreduce.Job: map 32% reduce 1%
14/10/04 11:37:43 INFO mapreduce.Job: map 33% reduce 2%
14/10/04 11:38:05 INFO mapreduce.Job: map 34% reduce 2%
14/10/04 11:38:27 INFO mapreduce.Job: map 35% reduce 2%
14/10/04 11:38:53 INFO mapreduce.Job: map 36% reduce 2%
14/10/04 11:39:02 INFO mapreduce.Job: map 37% reduce 2%
14/10/04 11:39:08 INFO mapreduce.Job: map 38% reduce 2%
14/10/04 11:39:34 INFO mapreduce.Job: map 39% reduce 2%
14/10/04 11:39:40 INFO mapreduce.Job: map 39% reduce 3%
14/10/04 11:39:46 INFO mapreduce.Job: map 40% reduce 3%
14/10/04 11:39:49 INFO mapreduce.Job: map 41% reduce 3%
14/10/04 11:40:19 INFO mapreduce.Job: map 43% reduce 3%
14/10/04 11:40:49 INFO mapreduce.Job: map 44% reduce 3%
14/10/04 11:41:11 INFO mapreduce.Job: map 45% reduce 3%
14/10/04 11:41:13 INFO mapreduce.Job: map 45% reduce 4%
14/10/04 11:41:33 INFO mapreduce.Job: map 46% reduce 4%
14/10/04 11:41:44 INFO mapreduce.Job: map 47% reduce 4%
14/10/04 11:41:50 INFO mapreduce.Job: map 48% reduce 4%
14/10/04 11:41:58 INFO mapreduce.Job: map 49% reduce 4%
14/10/04 11:42:14 INFO mapreduce.Job: map 50% reduce 4%
14/10/04 11:42:27 INFO mapreduce.Job: map 51% reduce 4%
14/10/04 11:42:40 INFO mapreduce.Job: map 52% reduce 4%
14/10/04 11:43:02 INFO mapreduce.Job: map 53% reduce 4%
14/10/04 11:43:17 INFO mapreduce.Job: map 54% reduce 4%
14/10/04 11:43:41 INFO mapreduce.Job: map 55% reduce 4%
14/10/04 11:43:55 INFO mapreduce.Job: map 55% reduce 5%
14/10/04 11:44:05 INFO mapreduce.Job: map 56% reduce 5%
14/10/04 11:44:16 INFO mapreduce.Job: map 57% reduce 5%
14/10/04 11:44:32 INFO mapreduce.Job: map 58% reduce 5%
14/10/04 11:44:47 INFO mapreduce.Job: map 59% reduce 5%
14/10/04 11:44:51 INFO mapreduce.Job: map 60% reduce 5%
14/10/04 11:45:09 INFO mapreduce.Job: map 61% reduce 5%
14/10/04 11:45:25 INFO mapreduce.Job: map 62% reduce 5%
14/10/04 11:45:48 INFO mapreduce.Job: map 63% reduce 5%
14/10/04 11:46:06 INFO mapreduce.Job: map 64% reduce 5%
14/10/04 11:46:15 INFO mapreduce.Job: map 64% reduce 6%
14/10/04 11:46:36 INFO mapreduce.Job: map 65% reduce 6%
14/10/04 11:47:10 INFO mapreduce.Job: map 66% reduce 6%
14/10/04 11:47:24 INFO mapreduce.Job: map 67% reduce 6%
14/10/04 11:47:38 INFO mapreduce.Job: map 68% reduce 6%
14/10/04 11:47:42 INFO mapreduce.Job: map 69% reduce 6%
14/10/04 11:47:50 INFO mapreduce.Job: map 70% reduce 6%
14/10/04 11:47:57 INFO mapreduce.Job: map 71% reduce 6%
14/10/04 11:48:03 INFO mapreduce.Job: map 72% reduce 6%
14/10/04 11:48:07 INFO mapreduce.Job: map 73% reduce 6%
14/10/04 11:48:24 INFO mapreduce.Job: map 74% reduce 6%
14/10/04 11:48:52 INFO mapreduce.Job: map 75% reduce 6%
14/10/04 11:48:58 INFO mapreduce.Job: map 75% reduce 7%
14/10/04 11:49:17 INFO mapreduce.Job: map 76% reduce 7%
14/10/04 11:49:43 INFO mapreduce.Job: map 77% reduce 7%
14/10/04 11:50:06 INFO mapreduce.Job: map 78% reduce 7%
14/10/04 11:50:18 INFO mapreduce.Job: map 79% reduce 7%
14/10/04 11:50:25 INFO mapreduce.Job: map 79% reduce 8%
14/10/04 11:50:55 INFO mapreduce.Job: map 80% reduce 8%
14/10/04 11:50:58 INFO mapreduce.Job: map 81% reduce 8%
14/10/04 11:51:18 INFO mapreduce.Job: map 82% reduce 8%
14/10/04 11:51:24 INFO mapreduce.Job: map 83% reduce 8%
14/10/04 11:51:45 INFO mapreduce.Job: map 84% reduce 8%
14/10/04 11:52:05 INFO mapreduce.Job: map 85% reduce 8%
14/10/04 11:52:17 INFO mapreduce.Job: map 85% reduce 9%
14/10/04 11:52:28 INFO mapreduce.Job: map 86% reduce 9%
14/10/04 11:52:31 INFO mapreduce.Job: map 87% reduce 9%
14/10/04 11:52:34 INFO mapreduce.Job: map 88% reduce 9%
14/10/04 11:52:38 INFO mapreduce.Job: map 89% reduce 9%
14/10/04 11:52:42 INFO mapreduce.Job: map 90% reduce 9%
14/10/04 11:52:59 INFO mapreduce.Job: map 91% reduce 9%
14/10/04 11:53:28 INFO mapreduce.Job: map 92% reduce 9%
14/10/04 11:53:55 INFO mapreduce.Job: map 93% reduce 9%
14/10/04 11:53:58 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000097_0, Status : FAILED
14/10/04 11:54:00 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000099_0, Status : FAILED
14/10/04 11:54:00 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000098_0, Status : FAILED
14/10/04 11:54:00 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000100_0, Status : FAILED
14/10/04 11:54:00 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000101_0, Status : FAILED
14/10/04 11:54:05 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000097_1, Status : FAILED
14/10/04 11:54:05 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_r_000000_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#5
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:121)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:380)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/attempt_1412391426441_0002_r_000000_0/map_20.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:398)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getInputFileForWrite(YarnOutputFiles.java:213)
at org.apache.hadoop.mapreduce.task.reduce.OnDiskMapOutput.<init>(OnDiskMapOutput.java:61)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:257)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:411)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:341)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)
14/10/04 11:54:06 INFO mapreduce.Job: map 93% reduce 0%
14/10/04 11:54:09 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000099_1, Status : FAILED
14/10/04 11:54:09 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000098_1, Status : FAILED
14/10/04 11:54:18 INFO mapreduce.Job: map 94% reduce 0%
14/10/04 11:54:41 INFO mapreduce.Job: map 96% reduce 0%
14/10/04 11:54:41 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000101_1, Status : FAILED
Error: java.io.IOException: Spill failed
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException(MapTask.java:1540)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1447)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1412391426441_0002_m_000101_1_spill_0.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:398)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getSpillFileForWrite(YarnOutputFiles.java:159)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1573)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$900(MapTask.java:852)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1510)
14/10/04 11:54:41 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000100_1, Status : FAILED
Error: java.io.IOException: Spill failed
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException(MapTask.java:1540)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1447)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1412391426441_0002_m_000100_1_spill_0.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:398)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getSpillFileForWrite(YarnOutputFiles.java:159)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1573)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$900(MapTask.java:852)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1510)
14/10/04 11:54:41 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000099_2, Status : FAILED
Error: java.io.IOException: Spill failed
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException(MapTask.java:1540)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1447)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1412391426441_0002_m_000099_2_spill_0.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:398)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getSpillFileForWrite(YarnOutputFiles.java:159)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1573)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$900(MapTask.java:852)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1510)
14/10/04 11:54:41 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_m_000097_2, Status : FAILED
Error: java.io.IOException: Spill failed
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException(MapTask.java:1540)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1447)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1412391426441_0002_m_000097_2_spill_0.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:398)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getSpillFileForWrite(YarnOutputFiles.java:159)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1573)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$900(MapTask.java:852)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1510)
14/10/04 11:54:42 INFO mapreduce.Job: map 94% reduce 0%
14/10/04 11:54:42 INFO mapreduce.Job: Task Id : attempt_1412391426441_0002_r_000000_1, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#3
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:121)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:380)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/attempt_1412391426441_0002_r_000000_1/map_4.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:398)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.YarnOutputFiles.getInputFileForWrite(YarnOutputFiles.java:213)
at org.apache.hadoop.mapreduce.task.reduce.OnDiskMapOutput.<init>(OnDiskMapOutput.java:61)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:257)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:411)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:341)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)
14/10/04 11:54:44 INFO mapreduce.Job: map 100% reduce 100%
14/10/04 11:54:51 INFO mapreduce.Job: Job job_1412391426441_0002 failed with state FAILED due to: Task failed task_1412391426441_0002_m_000097
Job failed as tasks failed. failedMaps:1 failedReduces:0
14/10/04 11:54:52 INFO mapreduce.Job: Counters: 34
File System Counters
FILE: Number of bytes read=13102192174
FILE: Number of bytes written=26212068108
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=12608837345
HDFS: Number of bytes written=0
HDFS: Number of read operations=288
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Failed map tasks=14
Failed reduce tasks=2
Killed map tasks=5
Launched map tasks=115
Launched reduce tasks=2
Other local map tasks=12
Data-local map tasks=103
Total time spent by all maps in occupied slots (ms)=18567852
Total time spent by all reduces in occupied slots (ms)=1205016
Map-Reduce Framework
Map input records=246865790
Map output records=246865790
Map output bytes=12608458902
Map output materialized bytes=13102191058
Input split bytes=9984
Combine input records=0
Spilled Records=493731580
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=180426
CPU time spent (ms)=3391620
Physical memory (bytes) snapshot=25874554880
Virtual memory (bytes) snapshot=86517563392
Total committed heap usage (bytes)=19113443328
File Input Format Counters
Bytes Read=12608827361
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at com.fiveone.game.GameLoginLog.main(GameLoginLog.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Solution:
Add the following property to mapred-site.xml:
<property>
  <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
  <value>0.20</value>
  <description>Expert: Maximum percentage of the in-memory limit that a single shuffle can consume</description>
</property>
The default value of mapreduce.reduce.shuffle.memory.limit.percent is 0.25; lower it as appropriate for your workload.
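Why lowering this value helps: the amount of memory one fetcher may hold is derived from the reducer heap. A rough sketch of the arithmetic, modelled on Hadoop 2.x behaviour (the 0.70 default for mapreduce.reduce.shuffle.input.buffer.percent and the exact formula are assumptions, not taken from this log):

```python
# Approximate per-fetch in-memory shuffle cap for a reducer (Hadoop 2.x model).
# cap = heap * input.buffer.percent * memory.limit.percent -- an approximation.

def single_shuffle_limit_mb(heap_mb,
                            input_buffer_percent=0.70,
                            memory_limit_percent=0.25):
    """Largest map output (in MB) a single fetcher may hold in memory."""
    return heap_mb * input_buffer_percent * memory_limit_percent

# With a 1024 MB reducer heap:
print(single_shuffle_limit_mb(1024, memory_limit_percent=0.25))  # about 179 MB
print(single_shuffle_limit_mb(1024, memory_limit_percent=0.20))  # about 143 MB
```

A smaller percentage shrinks the cap, so oversized map outputs are sent to disk instead of exhausting the in-memory shuffle budget.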
This error can have many causes, and the right fix depends on the situation. Below are some solutions collected from the web:
2. Hadoop can map but not reduce (shuffle-phase error, fetcher#1)
The log reports: attempt_201207232344_0002_r_000000_0, Status : FAILED
org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#1
Error reading task output: Connection refused
No job jar file set. User classes may not be found. See Job or Job#setJar(String).
12/07/23 23:51:42 INFO mapreduce.JobSubmitter: Cleaning up the staging area hdfs://127.0.0.1:9000/tmp/hadoop-root/mapred/staging/root/.staging/job_201207232344_0003
Searching online turned up several possible causes:
1. Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out: related to system settings; the files are large and a large enough memory allocation fails
2. The machine being 64-bit causes the problem
3. The master's hostname is not configured in /etc/hosts
4. The firewall was not turned off
vim /etc/pam.d/login
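Causes 3 and 4 can be checked quickly from any node. A minimal sketch; the master hostname and the shuffle port (13562 is the usual Hadoop 2.x ShuffleHandler default, but treat both as assumptions to substitute for your cluster):

```python
# Quick checks for hostname resolution (/etc/hosts) and firewall reachability.
import socket

def resolves(hostname):
    """True if the hostname resolves via /etc/hosts or DNS."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

def port_open(host, port, timeout=2.0):
    """True if a TCP connection succeeds, i.e. no firewall is refusing it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(resolves("localhost"))         # should hold on any configured machine
# print(port_open("master", 13562))  # hostname and shuffle port are assumptions
```

If resolution fails, add the hostname to /etc/hosts; if the port check fails with the service running, inspect iptables/SELinux.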
Cause 3: since my setup is pseudo-distributed, only the local machine needs to be configured, and a check showed the hostname was indeed configured.
Cause 4: I looked at the firewall settings under setup and found there are two pieces: the firewall itself (iptables) and SELinux; the first was off, the second was not.
After rebooting the virtual machine everything finally ran normally. But was it really step 4 that fixed it? After re-enabling SELinux and running again, the job still succeeded, so leaving SELinux on was probably not the cause. As for the real reason, it was more likely cause 1.
5. Hadoop can map but not reduce (shuffle-phase error, fetcher#1)
Too many fetch-failures
13/02/25 21:20:48 INFO mapreduce.Job: Task Id : attempt_201302230043_0022_r_000000_0, Status : FAILED
org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:124)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:362)
at org.apache.hadoop.mapred.Child$4.run(Child.java:223)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
at org.apache.hadoop.mapred.Child.main(Child.java:217)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler.checkReducerHealth(ShuffleScheduler.java:253)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler.copyFailed(ShuffleScheduler.java:187)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:347)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:251)
at org.apache.hado
Workarounds suggested online (set in the job driver):
// run map tasks with the multithreaded map runner
job.getConfiguration().setClass("mapred.map.runner.class", MultithreadedMapRunner.class, MapRunnable.class);
// compress map output so reducers fetch less data
job.getConfiguration().setBoolean("mapreduce.map.output.compress", true);
6. A Hadoop job reported the following error in the reduce phase:
Solution:
Adjust the mapreduce.reduce.shuffle.memory.limit.percent parameter in mapred-site.xml.
This value defaults to 0.25; reduce it as needed.
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:121)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:380)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:165)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:160)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:56)
at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:46)
at org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.<init>(InMemoryMapOutput.java:63)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.unconditionalReserve(MergeManagerImpl.java:297)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:287)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:411)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:341)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)
Modify the mapreduce.reduce.shuffle.memory.limit.percent setting in mapred-site.xml:
<property>
  <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
  <value>0.25</value>
  <description>Expert: Maximum percentage of the in-memory limit that a single shuffle can consume</description>
</property>
This value defaults to 0.25; reduce it as needed.
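The heap-space OOM in this trace occurs when a fetcher reserves an in-memory buffer (InMemoryMapOutput over a BoundedByteArrayOutputStream) for a fetched map output. Whether a segment stays in memory or is spilled to disk can be modelled roughly as follows (thresholds patterned on Hadoop 2.x MergeManagerImpl; the exact formula is an assumption):

```python
# Rough model of where a reducer places a fetched map output (Hadoop 2.x style).

def single_shuffle_limit(heap_bytes,
                         input_buffer_percent=0.70,
                         memory_limit_percent=0.25):
    # Total in-memory shuffle budget, then the cap for one fetch.
    memory_limit = heap_bytes * input_buffer_percent
    return memory_limit * memory_limit_percent

def placement(segment_bytes, heap_bytes, memory_limit_percent=0.25):
    # Segments above the single-shuffle cap are written to disk instead of
    # into an in-memory buffer, which is what avoids the OOM above.
    cap = single_shuffle_limit(heap_bytes,
                               memory_limit_percent=memory_limit_percent)
    return "disk" if segment_bytes > cap else "memory"

heap = 1 << 30                         # 1 GiB reducer heap
segment = 150 * (1 << 20)              # a 150 MiB map output
print(placement(segment, heap, 0.25))  # memory -- competes for heap space
print(placement(segment, heap, 0.10))  # disk -- after lowering the limit
```

Lowering mapreduce.reduce.shuffle.memory.limit.percent therefore trades some shuffle speed (more on-disk merging) for heap headroom in the reducer.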