A Detailed Walkthrough of Running WordCount on Linux
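The log below comes from a standard WordCount job. The driver class bigDate.LinuxWordCount is not shown in the original post, so the following is only a minimal sketch of what such a class typically looks like, following the canonical Hadoop WordCount pattern; the mapper/reducer names (TokenizerMapper, IntSumReducer) are illustrative, and the hard-coded input/output paths are taken from the split and commit paths that appear further down in the log. Note that the job runs in the LocalJobRunner (see the job_local... ID below) while still reading from and writing to HDFS, which typically happens when fs.defaultFS points at hdfs://hadoop1:9000 but mapreduce.framework.name is left at its default value of local.

package bigDate;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical reconstruction of the driver run below; names and details are assumptions.
public class LinuxWordCount {

    // Mapper: split each line into words and emit (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "linux word count");
        job.setJarByClass(LinuxWordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        // No combiner is set: the counters at the end of the log show Combine input records=0.
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Paths are hard-coded because the command below passes no arguments;
        // they match the split and commit paths that appear in the log.
        FileInputFormat.addInputPath(job, new Path("hdfs://hadoop1:9000/wordcount"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://hadoop1:9000/wordcount/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}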

[root@hadoop1 ~]# hadoop jar wordcount.jar bigDate.LinuxWordCount    // the command that launches the job (main class: bigDate.LinuxWordCount)
16/10/05 01:40:03 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
16/10/05 01:40:03 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
16/10/05 01:40:04 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/10/05 01:40:04 INFO input.FileInputFormat: Total input paths to process : 8     // number of input files to process
16/10/05 01:40:04 INFO mapreduce.JobSubmitter: number of splits:8     // number of input splits; here each file gets its own split
16/10/05 01:40:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1040123772_0001   // the job ID (the "local" prefix shows it runs in the LocalJobRunner)
16/10/05 01:40:06 INFO mapreduce.Job: The url to track the job: http://localhost:8080/       // URL for tracking the job (only a placeholder when running in the LocalJobRunner)
16/10/05 01:40:06 INFO mapreduce.Job: Running job: job_local1040123772_0001
16/10/05 01:40:06 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/10/05 01:40:06 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
16/10/05 01:40:07 INFO mapred.LocalJobRunner: Waiting for map tasks
16/10/05 01:40:07 INFO mapred.LocalJobRunner: Starting task: attempt_local1040123772_0001_m_000000_0
16/10/05 01:40:07 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/10/05 01:40:07 INFO mapred.MapTask: Processing split: hdfs://hadoop1:9000/wordcount/hadoop-policy.xml:0+9683
16/10/05 01:40:07 INFO mapreduce.Job: Job job_local1040123772_0001 running in uber mode : false
16/10/05 01:40:07 INFO mapreduce.Job:  map 0% reduce 0%             // map and reduce progress
16/10/05 01:40:08 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/10/05 01:40:08 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/10/05 01:40:08 INFO mapred.MapTask: soft limit at 83886080
16/10/05 01:40:08 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/10/05 01:40:08 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/10/05 01:40:08 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/05 01:40:08 INFO mapred.LocalJobRunner: 
16/10/05 01:40:08 INFO mapred.MapTask: Starting flush of map output
16/10/05 01:40:08 INFO mapred.MapTask: Spilling map output
16/10/05 01:40:08 INFO mapred.MapTask: bufstart = 0; bufend = 16910; bufvoid = 104857600
16/10/05 01:40:08 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26207164(104828656); length = 7233/6553600
16/10/05 01:40:08 INFO mapred.MapTask: Finished spill 0
16/10/05 01:40:08 INFO mapred.Task: Task:attempt_local1040123772_0001_m_000000_0 is done. And is in the process of committing
16/10/05 01:40:08 INFO mapred.LocalJobRunner: map
16/10/05 01:40:08 INFO mapred.Task: Task 'attempt_local1040123772_0001_m_000000_0' done.
16/10/05 01:40:08 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040123772_0001_m_000000_0
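The buffer figures printed at the start of each map task follow directly from mapreduce.task.io.sort.mb = 100, assuming the default spill threshold (mapreduce.map.sort.spill.percent = 0.80):

bufvoid    = 100 * 1024 * 1024    = 104857600 bytes   (total size of the circular sort buffer)
soft limit = 104857600 * 0.80     = 83886080 bytes    (a spill is triggered once this much is buffered)
length     = 104857600 / 16       = 6553600           (each record's metadata takes 16 bytes, so at most this many records fit)
kvi        = (104857600 - 16) / 4 = 26214396          (the metadata index starts one 16-byte record below the top of the buffer and grows downward, while raw key/value bytes grow upward from bufstart = 0)

For the first split (hadoop-policy.xml, 9683 bytes) the map emitted only 16910 bytes of serialized key/value data (bufend = 16910), far below the soft limit, so the single spill happens during the final flush rather than mid-task.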
16/10/05 01:40:08 INFO mapred.LocalJobRunner: Starting task: attempt_local1040123772_0001_m_000001_0
16/10/05 01:40:08 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/10/05 01:40:08 INFO mapred.MapTask: Processing split: hdfs://hadoop1:9000/wordcount/kms-site.xml:0+5511
16/10/05 01:40:08 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/10/05 01:40:08 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/10/05 01:40:08 INFO mapred.MapTask: soft limit at 83886080
16/10/05 01:40:08 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/10/05 01:40:08 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/10/05 01:40:08 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/05 01:40:08 INFO mapred.LocalJobRunner: 
16/10/05 01:40:08 INFO mapred.MapTask: Starting flush of map output
16/10/05 01:40:08 INFO mapred.MapTask: Spilling map output
16/10/05 01:40:08 INFO mapred.MapTask: bufstart = 0; bufend = 9727; bufvoid = 104857600
16/10/05 01:40:08 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26210184(104840736); length = 4213/6553600
16/10/05 01:40:08 INFO mapred.MapTask: Finished spill 0
16/10/05 01:40:08 INFO mapreduce.Job:  map 13% reduce 0%            // map and reduce progress
16/10/05 01:40:08 INFO mapred.Task: Task:attempt_local1040123772_0001_m_000001_0 is done. And is in the process of committing
16/10/05 01:40:08 INFO mapred.LocalJobRunner: map
16/10/05 01:40:08 INFO mapred.Task: Task 'attempt_local1040123772_0001_m_000001_0' done.
16/10/05 01:40:08 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040123772_0001_m_000001_0
16/10/05 01:40:08 INFO mapred.LocalJobRunner: Starting task: attempt_local1040123772_0001_m_000002_0
16/10/05 01:40:08 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/10/05 01:40:08 INFO mapred.MapTask: Processing split: hdfs://hadoop1:9000/wordcount/capacity-scheduler.xml:0+4436
16/10/05 01:40:09 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/10/05 01:40:09 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/10/05 01:40:09 INFO mapred.MapTask: soft limit at 83886080
16/10/05 01:40:09 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/10/05 01:40:09 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/10/05 01:40:09 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/05 01:40:09 INFO mapred.LocalJobRunner: 
16/10/05 01:40:09 INFO mapred.MapTask: Starting flush of map output
16/10/05 01:40:09 INFO mapred.MapTask: Spilling map output
16/10/05 01:40:09 INFO mapred.MapTask: bufstart = 0; bufend = 7953; bufvoid = 104857600
16/10/05 01:40:09 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26210876(104843504); length = 3521/6553600
16/10/05 01:40:09 INFO mapred.MapTask: Finished spill 0
16/10/05 01:40:09 INFO mapred.Task: Task:attempt_local1040123772_0001_m_000002_0 is done. And is in the process of committing
16/10/05 01:40:09 INFO mapred.LocalJobRunner: map
16/10/05 01:40:09 INFO mapred.Task: Task 'attempt_local1040123772_0001_m_000002_0' done.
16/10/05 01:40:09 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040123772_0001_m_000002_0
16/10/05 01:40:09 INFO mapred.LocalJobRunner: Starting task: attempt_local1040123772_0001_m_000003_0
16/10/05 01:40:09 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/10/05 01:40:09 INFO mapred.MapTask: Processing split: hdfs://hadoop1:9000/wordcount/kms-acls.xml:0+3523
16/10/05 01:40:09 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/10/05 01:40:09 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/10/05 01:40:09 INFO mapred.MapTask: soft limit at 83886080
16/10/05 01:40:09 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/10/05 01:40:09 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/10/05 01:40:09 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/05 01:40:09 INFO mapred.LocalJobRunner: 
16/10/05 01:40:09 INFO mapred.MapTask: Starting flush of map output
16/10/05 01:40:09 INFO mapred.MapTask: Spilling map output
16/10/05 01:40:09 INFO mapred.MapTask: bufstart = 0; bufend = 6587; bufvoid = 104857600
16/10/05 01:40:09 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26211336(104845344); length = 3061/6553600
16/10/05 01:40:09 INFO mapred.MapTask: Finished spill 0
16/10/05 01:40:09 INFO mapred.Task: Task:attempt_local1040123772_0001_m_000003_0 is done. And is in the process of committing
16/10/05 01:40:09 INFO mapred.LocalJobRunner: map
16/10/05 01:40:09 INFO mapred.Task: Task 'attempt_local1040123772_0001_m_000003_0' done.
16/10/05 01:40:09 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040123772_0001_m_000003_0
16/10/05 01:40:09 INFO mapred.LocalJobRunner: Starting task: attempt_local1040123772_0001_m_000004_0
16/10/05 01:40:09 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/10/05 01:40:09 INFO mapred.MapTask: Processing split: hdfs://hadoop1:9000/wordcount/hdfs-site.xml:0+1149
16/10/05 01:40:09 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/10/05 01:40:09 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/10/05 01:40:09 INFO mapred.MapTask: soft limit at 83886080
16/10/05 01:40:09 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/10/05 01:40:09 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/10/05 01:40:09 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/05 01:40:09 INFO mapred.LocalJobRunner: 
16/10/05 01:40:09 INFO mapred.MapTask: Starting flush of map output
16/10/05 01:40:09 INFO mapred.MapTask: Spilling map output
16/10/05 01:40:09 INFO mapred.MapTask: bufstart = 0; bufend = 1713; bufvoid = 104857600
16/10/05 01:40:09 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26213836(104855344); length = 561/6553600
16/10/05 01:40:09 INFO mapred.MapTask: Finished spill 0
16/10/05 01:40:09 INFO mapred.Task: Task:attempt_local1040123772_0001_m_000004_0 is done. And is in the process of committing
16/10/05 01:40:09 INFO mapred.LocalJobRunner: map
16/10/05 01:40:09 INFO mapred.Task: Task 'attempt_local1040123772_0001_m_000004_0' done.
16/10/05 01:40:09 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040123772_0001_m_000004_0
16/10/05 01:40:09 INFO mapred.LocalJobRunner: Starting task: attempt_local1040123772_0001_m_000005_0
16/10/05 01:40:09 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/10/05 01:40:09 INFO mapred.MapTask: Processing split: hdfs://hadoop1:9000/wordcount/core-site.xml:0+952
16/10/05 01:40:09 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/10/05 01:40:09 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/10/05 01:40:09 INFO mapred.MapTask: soft limit at 83886080
16/10/05 01:40:09 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/10/05 01:40:09 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/10/05 01:40:09 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/05 01:40:09 INFO mapred.LocalJobRunner: 
16/10/05 01:40:09 INFO mapred.MapTask: Starting flush of map output
16/10/05 01:40:09 INFO mapred.MapTask: Spilling map output
16/10/05 01:40:09 INFO mapred.MapTask: bufstart = 0; bufend = 1484; bufvoid = 104857600
16/10/05 01:40:09 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26213868(104855472); length = 529/6553600
16/10/05 01:40:09 INFO mapred.MapTask: Finished spill 0
16/10/05 01:40:09 INFO mapred.Task: Task:attempt_local1040123772_0001_m_000005_0 is done. And is in the process of committing
16/10/05 01:40:09 INFO mapred.LocalJobRunner: map
16/10/05 01:40:09 INFO mapred.Task: Task 'attempt_local1040123772_0001_m_000005_0' done.
16/10/05 01:40:09 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040123772_0001_m_000005_0
16/10/05 01:40:09 INFO mapred.LocalJobRunner: Starting task: attempt_local1040123772_0001_m_000006_0
16/10/05 01:40:09 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/10/05 01:40:09 INFO mapred.MapTask: Processing split: hdfs://hadoop1:9000/wordcount/yarn-site.xml:0+823
16/10/05 01:40:09 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/10/05 01:40:09 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/10/05 01:40:09 INFO mapred.MapTask: soft limit at 83886080
16/10/05 01:40:09 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/10/05 01:40:09 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/10/05 01:40:09 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/05 01:40:09 INFO mapreduce.Job:  map 100% reduce 0%      // map and reduce progress
16/10/05 01:40:09 INFO mapred.LocalJobRunner: 
16/10/05 01:40:09 INFO mapred.MapTask: Starting flush of map output
16/10/05 01:40:09 INFO mapred.MapTask: Spilling map output
16/10/05 01:40:09 INFO mapred.MapTask: bufstart = 0; bufend = 1295; bufvoid = 104857600
16/10/05 01:40:09 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26213928(104855712); length = 469/6553600
16/10/05 01:40:09 INFO mapred.MapTask: Finished spill 0
16/10/05 01:40:09 INFO mapred.Task: Task:attempt_local1040123772_0001_m_000006_0 is done. And is in the process of committing
16/10/05 01:40:10 INFO mapred.LocalJobRunner: map
16/10/05 01:40:10 INFO mapred.Task: Task 'attempt_local1040123772_0001_m_000006_0' done.
16/10/05 01:40:10 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040123772_0001_m_000006_0
16/10/05 01:40:10 INFO mapred.LocalJobRunner: Starting task: attempt_local1040123772_0001_m_000007_0
16/10/05 01:40:10 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/10/05 01:40:10 INFO mapred.MapTask: Processing split: hdfs://hadoop1:9000/wordcount/httpfs-site.xml:0+620
16/10/05 01:40:10 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/10/05 01:40:10 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/10/05 01:40:10 INFO mapred.MapTask: soft limit at 83886080
16/10/05 01:40:10 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/10/05 01:40:10 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/10/05 01:40:10 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/05 01:40:10 INFO mapred.LocalJobRunner: 
16/10/05 01:40:10 INFO mapred.MapTask: Starting flush of map output
16/10/05 01:40:10 INFO mapred.MapTask: Spilling map output
16/10/05 01:40:10 INFO mapred.MapTask: bufstart = 0; bufend = 1044; bufvoid = 104857600
16/10/05 01:40:10 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26213976(104855904); length = 421/6553600
16/10/05 01:40:10 INFO mapred.MapTask: Finished spill 0
16/10/05 01:40:10 INFO mapred.Task: Task:attempt_local1040123772_0001_m_000007_0 is done. And is in the process of committing
16/10/05 01:40:10 INFO mapred.LocalJobRunner: map
16/10/05 01:40:10 INFO mapred.Task: Task 'attempt_local1040123772_0001_m_000007_0' done.
16/10/05 01:40:10 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040123772_0001_m_000007_0
16/10/05 01:40:10 INFO mapred.LocalJobRunner: map task executor complete.
16/10/05 01:40:10 INFO mapred.LocalJobRunner: Waiting for reduce tasks
16/10/05 01:40:10 INFO mapred.LocalJobRunner: Starting task: attempt_local1040123772_0001_r_000000_0
16/10/05 01:40:10 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
16/10/05 01:40:10 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@43e3a075
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=363285696, maxSingleShuffleLimit=90821424, mergeThreshold=239768576, ioSortFactor=10, memToMemMergeOutputsThreshold=10
16/10/05 01:40:10 INFO reduce.EventFetcher: attempt_local1040123772_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
16/10/05 01:40:10 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1040123772_0001_m_000003_0 decomp: 8121 len: 8125 to MEMORY
16/10/05 01:40:10 INFO reduce.InMemoryMapOutput: Read 8121 bytes from map-output for attempt_local1040123772_0001_m_000003_0
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 8121, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->8121
16/10/05 01:40:10 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1040123772_0001_m_000000_0 decomp: 20530 len: 20534 to MEMORY
16/10/05 01:40:10 INFO reduce.InMemoryMapOutput: Read 20530 bytes from map-output for attempt_local1040123772_0001_m_000000_0
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 20530, inMemoryMapOutputs.size() -> 2, commitMemory -> 8121, usedMemory ->28651
16/10/05 01:40:10 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1040123772_0001_m_000004_0 decomp: 1997 len: 2001 to MEMORY
16/10/05 01:40:10 INFO reduce.InMemoryMapOutput: Read 1997 bytes from map-output for attempt_local1040123772_0001_m_000004_0
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 1997, inMemoryMapOutputs.size() -> 4, commitMemory -> 40488, usedMemory ->42485
16/10/05 01:40:10 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1040123772_0001_m_000006_0 decomp: 1533 len: 1537 to MEMORY
16/10/05 01:40:10 WARN io.ReadaheadPool: Failed readahead on ifile
16/10/05 01:40:10 INFO reduce.InMemoryMapOutput: Read 1533 bytes from map-output for attempt_local1040123772_0001_m_000006_0
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 1533, inMemoryMapOutputs.size() -> 5, commitMemory -> 42485, usedMemory ->44018
16/10/05 01:40:10 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1040123772_0001_m_000005_0 decomp: 1752 len: 1756 to MEMORY
16/10/05 01:40:10 INFO reduce.InMemoryMapOutput: Read 1752 bytes from map-output for attempt_local1040123772_0001_m_000005_0
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 1752, inMemoryMapOutputs.size() -> 6, commitMemory -> 44018, usedMemory ->45770
16/10/05 01:40:10 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1040123772_0001_m_000002_0 decomp: 9717 len: 9721 to MEMORY
16/10/05 01:40:10 INFO reduce.InMemoryMapOutput: Read 9717 bytes from map-output for attempt_local1040123772_0001_m_000002_0
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 9717, inMemoryMapOutputs.size() -> 7, commitMemory -> 45770, usedMemory ->55487
16/10/05 01:40:10 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1040123772_0001_m_000007_0 decomp: 1258 len: 1262 to MEMORY
16/10/05 01:40:10 INFO reduce.InMemoryMapOutput: Read 1258 bytes from map-output for attempt_local1040123772_0001_m_000007_0
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 1258, inMemoryMapOutputs.size() -> 8, commitMemory -> 55487, usedMemory ->56745
16/10/05 01:40:10 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
16/10/05 01:40:10 INFO mapred.LocalJobRunner: 8 / 8 copied.
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: finalMerge called with 8 in-memory map-outputs and 0 on-disk map-outputs
16/10/05 01:40:10 INFO mapred.Merger: Merging 8 sorted segments
16/10/05 01:40:10 INFO mapred.Merger: Down to the last merge-pass, with 8 segments left of total size: 56721 bytes
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: Merged 8 segments, 56745 bytes to disk to satisfy reduce memory limit
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: Merging 1 files, 56735 bytes from disk
16/10/05 01:40:10 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
16/10/05 01:40:10 INFO mapred.Merger: Merging 1 sorted segments
16/10/05 01:40:10 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 56728 bytes
16/10/05 01:40:10 INFO mapred.LocalJobRunner: 8 / 8 copied.
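At this point the reduce-side shuffle is complete: all 8 map outputs were fetched into memory by the local fetcher (the running usedMemory values add up segment by segment, e.g. 8121 + 20530 = 28651 after the second fetch, ending at 56745 bytes for all eight), merged into a single sorted stream, and written once to local disk before the reduce() function starts consuming it. The slightly different byte totals reported around the merge (56721, 56735, 56728) presumably reflect the small per-segment headers and checksums of the intermediate IFile format.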
16/10/05 01:40:10 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
16/10/05 01:40:12 INFO mapred.Task: Task:attempt_local1040123772_0001_r_000000_0 is done. And is in the process of committing
16/10/05 01:40:12 INFO mapred.LocalJobRunner: 8 / 8 copied.
16/10/05 01:40:12 INFO mapred.Task: Task attempt_local1040123772_0001_r_000000_0 is allowed to commit now
16/10/05 01:40:12 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1040123772_0001_r_000000_0' to hdfs://hadoop1:9000/wordcount/output/_temporary/0/task_local1040123772_0001_r_000000
16/10/05 01:40:12 INFO mapred.LocalJobRunner: reduce > reduce
16/10/05 01:40:12 INFO mapred.Task: Task 'attempt_local1040123772_0001_r_000000_0' done.
16/10/05 01:40:12 INFO mapred.LocalJobRunner: Finishing task: attempt_local1040123772_0001_r_000000_0
16/10/05 01:40:12 INFO mapred.LocalJobRunner: reduce task executor complete.
16/10/05 01:40:12 INFO mapreduce.Job:  map 100% reduce 100%     // map and reduce progress
16/10/05 01:40:13 INFO mapreduce.Job: Job job_local1040123772_0001 completed successfully
16/10/05 01:40:14 INFO mapreduce.Job: Counters: 38
File System Counters
FILE: Number of bytes read=241244      // total bytes read from the local file system
FILE: Number of bytes written=2935620    // total bytes written to the local file system
FILE: Number of read operations=0    // local read operations
FILE: Number of large read operations=0   // local large read operations
FILE: Number of write operations=0
HDFS: Number of bytes read=196687    // bytes read from HDFS
HDFS: Number of bytes written=10550   // bytes written to HDFS
HDFS: Number of read operations=127
HDFS: Number of large read operations=0
HDFS: Number of write operations=11
Map-Reduce Framework
Map input records=773      // map input records (lines read)
Map output records=5008    // map output records (one (word, 1) pair per word)
Map output bytes=46713  
Map output materialized bytes=56777
Input split bytes=877
Combine input records=0
Combine output records=0
Reduce input groups=603
Reduce shuffle bytes=56777     // bytes shuffled to the reducer
Reduce input records=5008      // reduce input records (equals the map output records)
Reduce output records=603     // reduce output records (one per distinct word)
Spilled Records=10016
Shuffled Maps =8                        // number of map outputs shuffled to the reducer
Failed Shuffles=0
Merged Map outputs=8
GC time elapsed (ms)=540
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=1370157056
Shuffle Errors          // shuffle error counters
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters     // input format counter
Bytes Read=26697
File Output Format Counters   // output format counter
Bytes Written=10550
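A few sanity checks on the counters: Map output records (5008) equals Reduce input records (5008), since every emitted (word, 1) pair reaches the single reducer unchanged (Combine input/output records = 0 shows no combiner was configured). Spilled Records (10016) is exactly 2 × 5008, because each record is spilled once on the map side and once more when the reduce-side merge writes the 8 in-memory segments to disk. Reduce input groups (603) equals Reduce output records (603): one output line per distinct word. The result itself, 10550 bytes written to hdfs://hadoop1:9000/wordcount/output, can then be viewed with hadoop fs -cat /wordcount/output/part-r-00000 (part-r-00000 being the conventional name of the single reducer's output file).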