Hadoop standalone mode: a local multi-input MapReduce job

Straight to the code:
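
The mapper and reducer classes are not reproduced in this post, but the run log below makes the job's shape clear: it is a word count over two kinds of input, and the reducer is named WCReducer and prints the current thread name plus each key and its summed count. Below is a minimal sketch of what the two mappers and the reducer could look like; WCReducer and its print format come from the log, while the names WCTextMapper and WCSeqMapper and the remaining details are assumptions.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical mapper for the plain-text input: split each line into words, emit (word, 1).
class WCTextMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
        for (String w : value.toString().split("\\s+")) {
            if (!w.isEmpty()) {
                ctx.write(new Text(w), new IntWritable(1));
            }
        }
    }
}

// Hypothetical mapper for the SequenceFile input: the Text value ("tom0".."tom9") is the word.
class WCSeqMapper extends Mapper<IntWritable, Text, Text, IntWritable> {
    @Override
    protected void map(IntWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
        ctx.write(new Text(value.toString()), new IntWritable(1));
    }
}

// Reducer: sum the counts and log them, matching the "pool-3-thread-1 : WCReducer :key=count" lines below.
class WCReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        System.out.println(Thread.currentThread().getName() + " : WCReducer :" + key + "=" + sum);
        ctx.write(key, new IntWritable(sum));
    }
}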

 

Set up the test environment:

Create the SequenceFile (.seq) input:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.GzipCodec;
import org.junit.Test;

    /**
     * Write operation: create a block-compressed (gzip) SequenceFile
     * with 20 IntWritable/Text records.
     */
    @Test
    public void zipGzip() throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "file:///");   // work against the local file system
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("d:/seq/1.seq");
        SequenceFile.Writer writer = SequenceFile.createWriter(fs,
                conf,
                p,
                IntWritable.class,
                Text.class,
                SequenceFile.CompressionType.BLOCK,
                new GzipCodec());
        // first 10 records: add a sync point after every record
        for (int i = 0; i < 10; i++) {
            writer.append(new IntWritable(i), new Text("tom" + i));
            writer.sync();
        }
        // 10 more records: add a sync point after every other record
        for (int i = 0; i < 10; i++) {
            writer.append(new IntWritable(i), new Text("tom" + i));
            if (i % 2 == 0) {
                writer.sync();
            }
        }
        writer.close();
    }
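
As a quick sanity check before running the job, the file can be read back with the matching SequenceFile.Reader API. A minimal sketch, assuming the same local path as the writer above and placed in the same test class:

    /**
     * Read the SequenceFile back and print every key/value pair.
     */
    @Test
    public void readSeq() throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "file:///");
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, new Path("d:/seq/1.seq"), conf);
        IntWritable key = new IntWritable();
        Text value = new Text();
        // next() transparently decompresses the gzip-compressed blocks written above
        while (reader.next(key, value)) {
            System.out.println(key.get() + " -> " + value);
        }
        reader.close();
    }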

Write the text files:

 

Create 1.txt and 2.txt under the txt directory (judging from the reducer output below, they contain a few whitespace-separated names such as zhang, Lisi, zhangsan and Li).
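
The driver is also not shown, so here is a sketch of a plausible one, reconstructed from the log: local file system, two MultipleInputs roots (d:/mr/seq handled by the SequenceFile mapper, d:/mr/txt by the text mapper), three reduce tasks and output under d:/mr/out. The class name WCApp is an assumption; MultipleInputs, SequenceFileInputFormat and TextInputFormat are the standard org.apache.hadoop.mapreduce.lib classes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver: one job, two input directories, each with its own InputFormat and Mapper.
public class WCApp {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "file:///");        // standalone/local mode

        Job job = Job.getInstance(conf);
        job.setJobName("multi-input word count");

        // Multiple inputs: the SequenceFile goes through WCSeqMapper,
        // the plain-text files go through WCTextMapper.
        MultipleInputs.addInputPath(job, new Path("d:/mr/seq"),
                SequenceFileInputFormat.class, WCSeqMapper.class);
        MultipleInputs.addInputPath(job, new Path("d:/mr/txt"),
                TextInputFormat.class, WCTextMapper.class);

        FileOutputFormat.setOutputPath(job, new Path("d:/mr/out"));

        job.setReducerClass(WCReducer.class);
        job.setNumReduceTasks(3);                    // matches the three reduce attempts in the log

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Run from the IDE without a jar, this goes through the LocalJobRunner and produces the three map splits (1.seq, 1.txt, 2.txt) and three reduce tasks seen in the log below.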

Run the job:

19/01/16 10:25:52 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
19/01/16 10:25:52 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
19/01/16 10:25:54 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
19/01/16 10:25:54 WARN mapreduce.JobResourceUploader: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
19/01/16 10:25:54 INFO input.FileInputFormat: Total input paths to process : 1
19/01/16 10:25:54 INFO input.FileInputFormat: Total input paths to process : 2
19/01/16 10:25:54 INFO mapreduce.JobSubmitter: number of splits:3
19/01/16 10:25:55 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1578362493_0001
19/01/16 10:25:55 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
19/01/16 10:25:55 INFO mapreduce.Job: Running job: job_local1578362493_0001
19/01/16 10:25:55 INFO mapred.LocalJobRunner: OutputCommitter set in config null
19/01/16 10:25:55 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/01/16 10:25:55 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/01/16 10:25:55 INFO mapred.LocalJobRunner: Waiting for map tasks
19/01/16 10:25:55 INFO mapred.LocalJobRunner: Starting task: attempt_local1578362493_0001_m_000000_0
19/01/16 10:25:55 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/01/16 10:25:55 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
19/01/16 10:25:55 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@68701d3d
19/01/16 10:25:55 INFO mapred.MapTask: Processing split: file:/d:/mr/seq/1.seq:0+928
19/01/16 10:25:55 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/01/16 10:25:55 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/01/16 10:25:55 INFO mapred.MapTask: soft limit at 83886080
19/01/16 10:25:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/01/16 10:25:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/01/16 10:25:55 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/01/16 10:25:55 WARN zlib.ZlibFactory: Failed to load/initialize native-zlib library
19/01/16 10:25:55 INFO compress.CodecPool: Got brand-new decompressor [.deflate]
19/01/16 10:25:55 INFO mapred.LocalJobRunner: 
19/01/16 10:25:55 INFO mapred.MapTask: Starting flush of map output
19/01/16 10:25:55 INFO mapred.MapTask: Spilling map output
19/01/16 10:25:55 INFO mapred.MapTask: bufstart = 0; bufend = 180; bufvoid = 104857600
19/01/16 10:25:55 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214320(104857280); length = 77/6553600
19/01/16 10:25:55 INFO mapred.MapTask: Finished spill 0
19/01/16 10:25:55 INFO mapred.Task: Task:attempt_local1578362493_0001_m_000000_0 is done. And is in the process of committing
19/01/16 10:25:55 INFO mapred.LocalJobRunner: map
19/01/16 10:25:55 INFO mapred.Task: Task 'attempt_local1578362493_0001_m_000000_0' done.
19/01/16 10:25:55 INFO mapred.LocalJobRunner: Finishing task: attempt_local1578362493_0001_m_000000_0
19/01/16 10:25:55 INFO mapred.LocalJobRunner: Starting task: attempt_local1578362493_0001_m_000001_0
19/01/16 10:25:55 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/01/16 10:25:55 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
19/01/16 10:25:55 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@16424340
19/01/16 10:25:55 INFO mapred.MapTask: Processing split: file:/d:/mr/txt/1.txt:0+19
19/01/16 10:25:55 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/01/16 10:25:55 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/01/16 10:25:55 INFO mapred.MapTask: soft limit at 83886080
19/01/16 10:25:55 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/01/16 10:25:55 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/01/16 10:25:55 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/01/16 10:25:55 INFO mapred.LocalJobRunner: 
19/01/16 10:25:55 INFO mapred.MapTask: Starting flush of map output
19/01/16 10:25:55 INFO mapred.MapTask: Spilling map output
19/01/16 10:25:55 INFO mapred.MapTask: bufstart = 0; bufend = 23; bufvoid = 104857600
19/01/16 10:25:55 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
19/01/16 10:25:55 INFO mapred.MapTask: Finished spill 0
19/01/16 10:25:55 INFO mapred.Task: Task:attempt_local1578362493_0001_m_000001_0 is done. And is in the process of committing
19/01/16 10:25:55 INFO mapred.LocalJobRunner: map
19/01/16 10:25:55 INFO mapred.Task: Task 'attempt_local1578362493_0001_m_000001_0' done.
19/01/16 10:25:55 INFO mapred.LocalJobRunner: Finishing task: attempt_local1578362493_0001_m_000001_0
19/01/16 10:25:55 INFO mapred.LocalJobRunner: Starting task: attempt_local1578362493_0001_m_000002_0
19/01/16 10:25:55 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/01/16 10:25:55 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
19/01/16 10:25:56 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@3609fbe9
19/01/16 10:25:56 INFO mapred.MapTask: Processing split: file:/d:/mr/txt/2.txt:0+10
19/01/16 10:25:56 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/01/16 10:25:56 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/01/16 10:25:56 INFO mapred.MapTask: soft limit at 83886080
19/01/16 10:25:56 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/01/16 10:25:56 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/01/16 10:25:56 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/01/16 10:25:56 INFO mapred.LocalJobRunner: 
19/01/16 10:25:56 INFO mapred.MapTask: Starting flush of map output
19/01/16 10:25:56 INFO mapred.MapTask: Spilling map output
19/01/16 10:25:56 INFO mapred.MapTask: bufstart = 0; bufend = 16; bufvoid = 104857600
19/01/16 10:25:56 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
19/01/16 10:25:56 INFO mapreduce.Job: Job job_local1578362493_0001 running in uber mode : false
19/01/16 10:25:56 INFO mapreduce.Job:  map 67% reduce 0%
19/01/16 10:25:56 INFO mapred.MapTask: Finished spill 0
19/01/16 10:25:56 INFO mapred.Task: Task:attempt_local1578362493_0001_m_000002_0 is done. And is in the process of committing
19/01/16 10:25:56 INFO mapred.LocalJobRunner: map
19/01/16 10:25:56 INFO mapred.Task: Task 'attempt_local1578362493_0001_m_000002_0' done.
19/01/16 10:25:56 INFO mapred.LocalJobRunner: Finishing task: attempt_local1578362493_0001_m_000002_0
19/01/16 10:25:56 INFO mapred.LocalJobRunner: map task executor complete.
19/01/16 10:25:56 INFO mapred.LocalJobRunner: Waiting for reduce tasks
19/01/16 10:25:56 INFO mapred.LocalJobRunner: Starting task: attempt_local1578362493_0001_r_000000_0
19/01/16 10:25:56 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/01/16 10:25:56 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
19/01/16 10:25:56 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@505b9fa2
19/01/16 10:25:56 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@44be8833
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=1320838784, maxSingleShuffleLimit=330209696, mergeThreshold=871753664, ioSortFactor=10, memToMemMergeOutputsThreshold=10
19/01/16 10:25:56 INFO reduce.EventFetcher: attempt_local1578362493_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
19/01/16 10:25:56 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1578362493_0001_m_000001_0 decomp: 14 len: 18 to MEMORY
19/01/16 10:25:56 INFO reduce.InMemoryMapOutput: Read 14 bytes from map-output for attempt_local1578362493_0001_m_000001_0
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 14, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->14
19/01/16 10:25:56 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1578362493_0001_m_000002_0 decomp: 13 len: 17 to MEMORY
19/01/16 10:25:56 INFO reduce.InMemoryMapOutput: Read 13 bytes from map-output for attempt_local1578362493_0001_m_000002_0
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 13, inMemoryMapOutputs.size() -> 2, commitMemory -> 14, usedMemory ->27
19/01/16 10:25:56 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1578362493_0001_m_000000_0 decomp: 68 len: 72 to MEMORY
19/01/16 10:25:56 INFO reduce.InMemoryMapOutput: Read 68 bytes from map-output for attempt_local1578362493_0001_m_000000_0
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 68, inMemoryMapOutputs.size() -> 3, commitMemory -> 27, usedMemory ->95
19/01/16 10:25:56 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
19/01/16 10:25:56 INFO mapred.LocalJobRunner: 3 / 3 copied.
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: finalMerge called with 3 in-memory map-outputs and 0 on-disk map-outputs
19/01/16 10:25:56 INFO mapred.Merger: Merging 3 sorted segments
19/01/16 10:25:56 INFO mapred.Merger: Down to the last merge-pass, with 3 segments left of total size: 73 bytes
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: Merged 3 segments, 95 bytes to disk to satisfy reduce memory limit
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: Merging 1 files, 95 bytes from disk
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
19/01/16 10:25:56 INFO mapred.Merger: Merging 1 sorted segments
19/01/16 10:25:56 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 84 bytes
19/01/16 10:25:56 INFO mapred.LocalJobRunner: 3 / 3 copied.
19/01/16 10:25:56 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
pool-3-thread-1 : WCReducer :Lisi=1
pool-3-thread-1 : WCReducer :tom2=2
pool-3-thread-1 : WCReducer :tom5=2
pool-3-thread-1 : WCReducer :tom8=2
pool-3-thread-1 : WCReducer :zhang=1
19/01/16 10:25:56 INFO mapred.Task: Task:attempt_local1578362493_0001_r_000000_0 is done. And is in the process of committing
19/01/16 10:25:56 INFO mapred.LocalJobRunner: 3 / 3 copied.
19/01/16 10:25:56 INFO mapred.Task: Task attempt_local1578362493_0001_r_000000_0 is allowed to commit now
19/01/16 10:25:56 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1578362493_0001_r_000000_0' to file:/d:/mr/out/_temporary/0/task_local1578362493_0001_r_000000
19/01/16 10:25:56 INFO mapred.LocalJobRunner: reduce > reduce
19/01/16 10:25:56 INFO mapred.Task: Task 'attempt_local1578362493_0001_r_000000_0' done.
19/01/16 10:25:56 INFO mapred.LocalJobRunner: Finishing task: attempt_local1578362493_0001_r_000000_0
19/01/16 10:25:56 INFO mapred.LocalJobRunner: Starting task: attempt_local1578362493_0001_r_000001_0
19/01/16 10:25:56 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/01/16 10:25:56 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
19/01/16 10:25:56 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@6110dc83
19/01/16 10:25:56 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@3a2033df
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=1320838784, maxSingleShuffleLimit=330209696, mergeThreshold=871753664, ioSortFactor=10, memToMemMergeOutputsThreshold=10
19/01/16 10:25:56 INFO reduce.EventFetcher: attempt_local1578362493_0001_r_000001_0 Thread started: EventFetcher for fetching Map Completion Events
19/01/16 10:25:56 INFO reduce.LocalFetcher: localfetcher#2 about to shuffle output of map attempt_local1578362493_0001_m_000001_0 decomp: 17 len: 21 to MEMORY
19/01/16 10:25:56 INFO reduce.InMemoryMapOutput: Read 17 bytes from map-output for attempt_local1578362493_0001_m_000001_0
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 17, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->17
19/01/16 10:25:56 INFO reduce.LocalFetcher: localfetcher#2 about to shuffle output of map attempt_local1578362493_0001_m_000002_0 decomp: 2 len: 6 to MEMORY
19/01/16 10:25:56 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1578362493_0001_m_000002_0
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 2, commitMemory -> 17, usedMemory ->19
19/01/16 10:25:56 INFO reduce.LocalFetcher: localfetcher#2 about to shuffle output of map attempt_local1578362493_0001_m_000000_0 decomp: 90 len: 94 to MEMORY
19/01/16 10:25:56 INFO reduce.InMemoryMapOutput: Read 90 bytes from map-output for attempt_local1578362493_0001_m_000000_0
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 90, inMemoryMapOutputs.size() -> 3, commitMemory -> 19, usedMemory ->109
19/01/16 10:25:56 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
19/01/16 10:25:56 INFO mapred.LocalJobRunner: 3 / 3 copied.
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: finalMerge called with 3 in-memory map-outputs and 0 on-disk map-outputs
19/01/16 10:25:56 INFO mapred.Merger: Merging 3 sorted segments
19/01/16 10:25:56 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 89 bytes
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: Merged 3 segments, 109 bytes to disk to satisfy reduce memory limit
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: Merging 1 files, 109 bytes from disk
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
19/01/16 10:25:56 INFO mapred.Merger: Merging 1 sorted segments
19/01/16 10:25:56 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 98 bytes
19/01/16 10:25:56 INFO mapred.LocalJobRunner: 3 / 3 copied.
pool-3-thread-1 : WCReducer :tom0=2
pool-3-thread-1 : WCReducer :tom3=2
pool-3-thread-1 : WCReducer :tom6=2
pool-3-thread-1 : WCReducer :tom9=2
19/01/16 10:25:56 INFO mapred.Task: Task:attempt_local1578362493_0001_r_000001_0 is done. And is in the process of committing
19/01/16 10:25:56 INFO mapred.LocalJobRunner: 3 / 3 copied.
pool-3-thread-1 : WCReducer :zhangsan=1
19/01/16 10:25:56 INFO mapred.Task: Task attempt_local1578362493_0001_r_000001_0 is allowed to commit now
19/01/16 10:25:56 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1578362493_0001_r_000001_0' to file:/d:/mr/out/_temporary/0/task_local1578362493_0001_r_000001
19/01/16 10:25:56 INFO mapred.LocalJobRunner: reduce > reduce
19/01/16 10:25:56 INFO mapred.Task: Task 'attempt_local1578362493_0001_r_000001_0' done.
19/01/16 10:25:56 INFO mapred.LocalJobRunner: Finishing task: attempt_local1578362493_0001_r_000001_0
19/01/16 10:25:56 INFO mapred.LocalJobRunner: Starting task: attempt_local1578362493_0001_r_000002_0
19/01/16 10:25:56 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/01/16 10:25:56 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
19/01/16 10:25:56 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@16604c62
19/01/16 10:25:56 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@71f37ee9
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=1320838784, maxSingleShuffleLimit=330209696, mergeThreshold=871753664, ioSortFactor=10, memToMemMergeOutputsThreshold=10
19/01/16 10:25:56 INFO reduce.EventFetcher: attempt_local1578362493_0001_r_000002_0 Thread started: EventFetcher for fetching Map Completion Events
19/01/16 10:25:56 INFO reduce.LocalFetcher: localfetcher#3 about to shuffle output of map attempt_local1578362493_0001_m_000001_0 decomp: 2 len: 6 to MEMORY
19/01/16 10:25:56 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1578362493_0001_m_000001_0
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->2
19/01/16 10:25:56 INFO reduce.LocalFetcher: localfetcher#3 about to shuffle output of map attempt_local1578362493_0001_m_000002_0 decomp: 11 len: 15 to MEMORY
19/01/16 10:25:56 INFO reduce.InMemoryMapOutput: Read 11 bytes from map-output for attempt_local1578362493_0001_m_000002_0
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 11, inMemoryMapOutputs.size() -> 2, commitMemory -> 2, usedMemory ->13
19/01/16 10:25:56 INFO reduce.LocalFetcher: localfetcher#3 about to shuffle output of map attempt_local1578362493_0001_m_000000_0 decomp: 68 len: 72 to MEMORY
19/01/16 10:25:56 INFO reduce.InMemoryMapOutput: Read 68 bytes from map-output for attempt_local1578362493_0001_m_000000_0
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 68, inMemoryMapOutputs.size() -> 3, commitMemory -> 13, usedMemory ->81
19/01/16 10:25:56 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
19/01/16 10:25:56 INFO mapred.LocalJobRunner: 3 / 3 copied.
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: finalMerge called with 3 in-memory map-outputs and 0 on-disk map-outputs
19/01/16 10:25:56 INFO mapred.Merger: Merging 3 sorted segments
19/01/16 10:25:56 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 67 bytes
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: Merged 3 segments, 81 bytes to disk to satisfy reduce memory limit
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: Merging 1 files, 81 bytes from disk
19/01/16 10:25:56 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
19/01/16 10:25:56 INFO mapred.Merger: Merging 1 sorted segments
19/01/16 10:25:56 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 72 bytes
19/01/16 10:25:56 INFO mapred.LocalJobRunner: 3 / 3 copied.
pool-3-thread-1 : WCReducer :Li=1
pool-3-thread-1 : WCReducer :tom1=2
pool-3-thread-1 : WCReducer :tom4=2
pool-3-thread-1 : WCReducer :tom7=2
19/01/16 10:25:56 INFO mapred.Task: Task:attempt_local1578362493_0001_r_000002_0 is done. And is in the process of committing
19/01/16 10:25:56 INFO mapred.LocalJobRunner: 3 / 3 copied.
19/01/16 10:25:56 INFO mapred.Task: Task attempt_local1578362493_0001_r_000002_0 is allowed to commit now
19/01/16 10:25:56 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1578362493_0001_r_000002_0' to file:/d:/mr/out/_temporary/0/task_local1578362493_0001_r_000002
19/01/16 10:25:56 INFO mapred.LocalJobRunner: reduce > reduce
19/01/16 10:25:56 INFO mapred.Task: Task 'attempt_local1578362493_0001_r_000002_0' done.
19/01/16 10:25:56 INFO mapred.LocalJobRunner: Finishing task: attempt_local1578362493_0001_r_000002_0
19/01/16 10:25:56 INFO mapred.LocalJobRunner: reduce task executor complete.
19/01/16 10:25:57 INFO mapreduce.Job:  map 100% reduce 100%
19/01/16 10:25:57 INFO mapreduce.Job: Job job_local1578362493_0001 completed successfully
19/01/16 10:25:57 INFO mapreduce.Job: Counters: 30
    File System Counters
        FILE: Number of bytes read=20902
        FILE: Number of bytes written=1719987
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
    Map-Reduce Framework
        Map input records=23
        Map output records=24
        Map output bytes=219
        Map output materialized bytes=321
        Input split bytes=730
        Combine input records=0
        Combine output records=0
        Reduce input groups=14
        Reduce shuffle bytes=321
        Reduce input records=24
        Reduce output records=14
        Spilled Records=48
        Shuffled Maps =9
        Failed Shuffles=0
        Merged Map outputs=9
        GC time elapsed (ms)=0
        Total committed heap usage (bytes)=2354577408
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=0
    File Output Format Counters 
        Bytes Written=137
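
The job writes one part file per reduce task under d:/mr/out. Matching the WCReducer lines interleaved in the log above, the expected counts per output file are roughly:

part-r-00000: Lisi 1, tom2 2, tom5 2, tom8 2, zhang 1
part-r-00001: tom0 2, tom3 2, tom6 2, tom9 2, zhangsan 1
part-r-00002: Li 1, tom1 2, tom4 2, tom7 2

(in the actual files each key and count appear tab-separated, one pair per line).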
