WordCount Source Code Walkthrough

MapReduce

Map means "to map" and reduce means "to simplify/fold"; the underlying idea is divide and conquer. Rather than restating the general theory, this post walks through the WordCount source code. Throughout the program, data always flows as key/value (K,V) pairs, and the two functions that matter are map() and reduce(). The main method handles job configuration and submission.
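To orient the walkthrough, here is the key/value flow for WordCount in schematic form (the shuffle/sort step between map and reduce is performed by the framework, grouping identical keys together):

```
(line offset, line text)  --map()-->    (word, 1), (word, 1), ...
(word, 1) pairs           --shuffle-->  (word, [1, 1, ...])   grouped and sorted by key
(word, [1, 1, ...])       --reduce()->  (word, count)
```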

main

Configuration configuration = new Configuration();
        Job job = new Job(configuration, WordCount.class.getSimpleName());
        job.setJarByClass(WordCount.class);
        // Identify the jar containing this class so it can be shipped to the cluster

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        // Set the input/output formats on the job
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Set the input/output paths
        job.setMapperClass(WordMap.class);
        job.setReducerClass(WordReduce.class);
        // Set the classes that handle the map/reduce phases
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Set the types of the final output key/value
        job.waitForCompletion(true);
        // Submit the job and wait for it to finish
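A side note: on Hadoop 2.x the `new Job(...)` constructor is deprecated in favor of a factory method, and it is conventional to turn the job's success flag into the process exit code instead of discarding it. A minimal sketch of that form, assuming Hadoop 2.x:

```
Job job = Job.getInstance(configuration, WordCount.class.getSimpleName());
// ... same configuration calls as above ...
System.exit(job.waitForCompletion(true) ? 0 : 1);
```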

The input data is two lines of text:

    i am Malik Cheng
    i am hadoop
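Given this input, the finished job writes a single result file (part-r-00000; TextOutputFormat separates key and value with a tab), whose expected contents, with keys in sorted order, are:

    Cheng	1
    Malik	1
    am	2
    hadoop	1
    i	2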

The complete WordCount.java:


import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static void main(String[] args) throws IOException,
            ClassNotFoundException, InterruptedException {
        if (args == null || args.length != 2) {
            System.out.println("Usage: WordCount <input path> <output path>");
            System.exit(1); // non-zero exit code signals incorrect usage
        }
        Configuration configuration = new Configuration();
        Job job = new Job(configuration, WordCount.class.getSimpleName());
        job.setJarByClass(WordCount.class);
        // Identify the jar containing this class so it can be shipped to the cluster

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        // Set the input/output formats on the job
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Set the input/output paths
        job.setMapperClass(WordMap.class);
        job.setReducerClass(WordReduce.class);
        // Set the classes that handle the map/reduce phases
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Set the types of the final output key/value
        job.waitForCompletion(true);
        // Submit the job and wait for it to finish
    }

    /*
     * keyin: the byte offset of each input line, of type LongWritable
     * valuein: the content of each line, of type Text
     * keyout: the key of the intermediate output; type chosen to fit the problem
     * valueout: the value of the intermediate output; type chosen to fit the problem
     */

    static class WordMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            System.out.println("WordMap.map...");
            System.out.println("Map key:" + key.toString() + " ,Map value:"
                    + value.toString());
            String[] words = value.toString().split(" ");
            for (String word : words) {
                context.write(new Text(word), new IntWritable(1));
                // Emit each word with a count of 1 as an intermediate result
                System.out.println("word:" + word + ",one:"
                        + new IntWritable(1).toString());
            }
            System.out.println("context:" + context.toString());
        }
    }

    /*
     * keyin: the input key; same type as the mapper's keyout
     * valuein: the intermediate values for that key; same type as the mapper's valueout
     * keyout: the key of the final result
     * valueout: the value of the final result
     */

    static class WordReduce extends
            Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            System.out.println("WordReduce reduce...");
            int sum = 0;
            System.out.println("---------------------values:");
            for (IntWritable count : values) {
                sum = sum + count.get();
                System.out.println("count:" + count + ", sum:" + sum);
            }
            context.write(key, new IntWritable(sum)); // Emit the final result
            System.out.println("Reduce key:" + key.toString() + ", sum :" + sum);
            System.out.println("Reduce context:" + context.toString() + ", sum :" + sum);
        }
    }
}
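A small refinement that is often applied to this pattern, though not required for correctness: context.write() serializes the key and value immediately, so the same Writable objects can safely be reused across calls instead of allocating a new Text and IntWritable per word. A minimal sketch of such a mapper, under that assumption (it drops into WordCount.java and uses the same imports):

```
    static class WordMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final Text outKey = new Text();
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String word : value.toString().split(" ")) {
                outKey.set(word);           // reuse one Text instance for every word
                context.write(outKey, ONE); // the count is always 1 at this stage
            }
        }
    }
```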

Interpreting the printed output:

Map phase:

```
WordMap.map...
Map key:0 ,Map value:i am Malik Cheng
word:i,one:1
word:am,one:1
word:Malik,one:1
word:Cheng,one:1
context:org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context@2ceec589
WordMap.map...
Map key:17 ,Map value:i am hadoop
word:i,one:1
word:am,one:1
word:hadoop,one:1
context:org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context@2ceec589
```
The map function's parameters are map(LongWritable key, Text value, Context context). key and value carry the input data: value is the actual line content ("i am Malik…"), and key is the byte offset of that line within the file. context is the context object; it acts as a bridge between the functions involved in map and reduce execution, a design much like the session and application objects in Java web development.

The output above contains two occurrences of WordMap.map…, which means map() was called twice. Why twice? Because the input format is TextInputFormat, which processes input line by line: each line's content arrives in the value parameter, and each line's key is the byte offset of the start of that line within the file. So the first line's key is 0, and the second line's key is 17 ("i am Malik Cheng" is 16 characters plus a newline). Each word is recorded in the context as a pair like word:Malik,one:1, to be passed on to reduce.
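A quick sanity check of that second offset in plain Java (a hypothetical standalone snippet, not part of the job):

```
public class OffsetCheck {
    public static void main(String[] args) {
        String line1 = "i am Malik Cheng";
        // 16 characters plus the trailing '\n' put the second line at byte offset 17
        System.out.println(line1.length() + 1); // prints 17
    }
}
```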

Reduce phase:

```
WordReduce reduce...
---------------------values:
count:1, sum:1
Reduce key:Cheng, sum :1
Reduce context:org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context@5e76ee18, sum :1
WordReduce reduce...
---------------------values:
count:1, sum:1
Reduce key:Malik, sum :1
Reduce context:org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context@5e76ee18, sum :1
WordReduce reduce...
---------------------values:
count:1, sum:1
count:1, sum:2
Reduce key:am, sum :2
Reduce context:org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context@5e76ee18, sum :2
WordReduce reduce...
---------------------values:
count:1, sum:1
Reduce key:hadoop, sum :1
Reduce context:org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context@5e76ee18, sum :1
WordReduce reduce...
---------------------values:
count:1, sum:1
count:1, sum:2
Reduce key:i, sum :2
Reduce context:org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context@5e76ee18, sum :2
```

Analysis: reduce means to simplify or fold. First question: what is being simplified here? In WordCount, reducing means iterating over all the values (each a 1) that the map phase emitted for the same key and summing them. Second question: why are there five WordReduce reduce… lines, i.e., five calls to reduce()? Knowing the answer to the first question answers this one too: the data contains five distinct words (five distinct keys), so reduce() runs five times, once per key. Notice also that the keys arrive in sorted order (Cheng, Malik, am, hadoop, i, by byte order, so uppercase before lowercase), because the framework sorts by key during the shuffle. To summarize: map() is called once per line, and reduce() once per key.
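One optimization worth noting: the counters in the console output below show Combine input records=0, so all seven (word, 1) pairs cross the shuffle unaggregated. Because summing is associative and commutative, the same reducer class can also be registered as a combiner to pre-aggregate on the map side; a hedged one-line addition to main():

```
job.setCombinerClass(WordReduce.class); // pre-aggregate (word, 1) pairs on the map side
```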


Full console output

```
2017-07-20 18:07:02,175 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-07-20 18:07:02,820 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1019)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2017-07-20 18:07:02,821 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2017-07-20 18:07:03,090 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(150)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2017-07-20 18:07:03,093 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(259)) - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2017-07-20 18:07:03,175 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(281)) - Total input paths to process : 1
2017-07-20 18:07:03,213 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1
2017-07-20 18:07:03,394 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_local130204698_0001
2017-07-20 18:07:03,479 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-zkpk/mapred/staging/zkpk130204698/.staging/job_local130204698_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2017-07-20 18:07:03,489 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-zkpk/mapred/staging/zkpk130204698/.staging/job_local130204698_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2017-07-20 18:07:03,713 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-zkpk/mapred/local/localRunner/zkpk/job_local130204698_0001/job_local130204698_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2017-07-20 18:07:03,733 WARN  [main] conf.Configuration (Configuration.java:loadProperty(2368)) - file:/tmp/hadoop-zkpk/mapred/local/localRunner/zkpk/job_local130204698_0001/job_local130204698_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2017-07-20 18:07:03,761 INFO  [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://localhost:8080/
2017-07-20 18:07:03,762 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_local130204698_0001
2017-07-20 18:07:03,763 INFO  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
2017-07-20 18:07:03,779 INFO  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2017-07-20 18:07:03,936 INFO  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
2017-07-20 18:07:03,937 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local130204698_0001_m_000000_0
2017-07-20 18:07:03,995 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : [ ]
2017-07-20 18:07:04,001 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(733)) - Processing split: hdfs://master:9000/user/wordcount/input1/h:0+29
2017-07-20 18:07:04,017 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(388)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2017-07-20 18:07:04,083 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1182)) - (EQUATOR) 0 kvi 26214396(104857584)
2017-07-20 18:07:04,083 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(975)) - mapreduce.task.io.sort.mb: 100
2017-07-20 18:07:04,083 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(976)) - soft limit at 83886080
2017-07-20 18:07:04,084 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(977)) - bufstart = 0; bufvoid = 104857600
2017-07-20 18:07:04,084 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(978)) - kvstart = 26214396; length = 6553600
WordMap.map...
Map key:0 ,Map value:i am Malik Cheng
word:i,one:1
word:am,one:1
word:Malik,one:1
word:Cheng,one:1
context:org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context@2ceec589
WordMap.map...
Map key:17 ,Map value:i am hadoop
word:i,one:1
word:am,one:1
word:hadoop,one:1
context:org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context@2ceec589
2017-07-20 18:07:04,420 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 
2017-07-20 18:07:04,423 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1437)) - Starting flush of map output
2017-07-20 18:07:04,424 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1455)) - Spilling map output
2017-07-20 18:07:04,424 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1456)) - bufstart = 0; bufend = 57; bufvoid = 104857600
2017-07-20 18:07:04,424 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1458)) - kvstart = 26214396(104857584); kvend = 26214372(104857488); length = 25/6553600
2017-07-20 18:07:04,437 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1641)) - Finished spill 0
2017-07-20 18:07:04,441 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1001)) - Task:attempt_local130204698_0001_m_000000_0 is done. And is in the process of committing
2017-07-20 18:07:04,453 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
2017-07-20 18:07:04,453 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local130204698_0001_m_000000_0' done.
2017-07-20 18:07:04,453 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local130204698_0001_m_000000_0
2017-07-20 18:07:04,453 INFO  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
2017-07-20 18:07:04,456 INFO  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for reduce tasks
2017-07-20 18:07:04,457 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) - Starting task: attempt_local130204698_0001_r_000000_0
2017-07-20 18:07:04,463 INFO  [pool-6-thread-1] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : [ ]
2017-07-20 18:07:04,467 INFO  [pool-6-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@5031c1e1
2017-07-20 18:07:04,479 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(193)) - MergerManager: memoryLimit=304244320, maxSingleShuffleLimit=76061080, mergeThreshold=200801264, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2017-07-20 18:07:04,484 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local130204698_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2017-07-20 18:07:04,517 INFO  [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(140)) - localfetcher#1 about to shuffle output of map attempt_local130204698_0001_m_000000_0 decomp: 73 len: 77 to MEMORY
2017-07-20 18:07:04,521 INFO  [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 73 bytes from map-output for attempt_local130204698_0001_m_000000_0
2017-07-20 18:07:04,523 INFO  [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(307)) - closeInMemoryFile -> map-output of size: 73, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->73
2017-07-20 18:07:04,524 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
2017-07-20 18:07:04,525 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2017-07-20 18:07:04,525 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(667)) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2017-07-20 18:07:04,534 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(591)) - Merging 1 sorted segments
2017-07-20 18:07:04,534 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(690)) - Down to the last merge-pass, with 1 segments left of total size: 65 bytes
2017-07-20 18:07:04,536 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(742)) - Merged 1 segments, 73 bytes to disk to satisfy reduce memory limit
2017-07-20 18:07:04,536 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(772)) - Merging 1 files, 77 bytes from disk
2017-07-20 18:07:04,537 INFO  [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(787)) - Merging 0 segments, 0 bytes from memory into reduce
2017-07-20 18:07:04,537 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(591)) - Merging 1 sorted segments
2017-07-20 18:07:04,537 INFO  [pool-6-thread-1] mapred.Merger (Merger.java:merge(690)) - Down to the last merge-pass, with 1 segments left of total size: 65 bytes
2017-07-20 18:07:04,538 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2017-07-20 18:07:04,572 INFO  [pool-6-thread-1] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1019)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
WordReduce reduce...
---------------------values:
count:1, sum:1
Reduce key:Cheng, sum :1
Reduce context:org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context@5e76ee18, sum :1
WordReduce reduce...
---------------------values:
count:1, sum:1
Reduce key:Malik, sum :1
Reduce context:org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context@5e76ee18, sum :1
WordReduce reduce...
---------------------values:
count:1, sum:1
count:1, sum:2
Reduce key:am, sum :2
Reduce context:org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context@5e76ee18, sum :2
WordReduce reduce...
---------------------values:
count:1, sum:1
Reduce key:hadoop, sum :1
Reduce context:org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context@5e76ee18, sum :1
WordReduce reduce...
---------------------values:
count:1, sum:1
count:1, sum:2
Reduce key:i, sum :2
Reduce context:org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context@5e76ee18, sum :2
2017-07-20 18:07:04,700 INFO  [pool-6-thread-1] mapred.Task (Task.java:done(1001)) - Task:attempt_local130204698_0001_r_000000_0 is done. And is in the process of committing
2017-07-20 18:07:04,703 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2017-07-20 18:07:04,703 INFO  [pool-6-thread-1] mapred.Task (Task.java:commit(1162)) - Task attempt_local130204698_0001_r_000000_0 is allowed to commit now
2017-07-20 18:07:04,713 INFO  [pool-6-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439)) - Saved output of task 'attempt_local130204698_0001_r_000000_0' to hdfs://master:9000/user/wordcount/output1/_temporary/0/task_local130204698_0001_r_000000
2017-07-20 18:07:04,715 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - reduce > reduce
2017-07-20 18:07:04,715 INFO  [pool-6-thread-1] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local130204698_0001_r_000000_0' done.
2017-07-20 18:07:04,716 INFO  [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(325)) - Finishing task: attempt_local130204698_0001_r_000000_0
2017-07-20 18:07:04,716 INFO  [Thread-12] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - reduce task executor complete.
2017-07-20 18:07:04,765 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_local130204698_0001 running in uber mode : false
2017-07-20 18:07:04,766 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) -  map 100% reduce 100%
2017-07-20 18:07:04,767 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1373)) - Job job_local130204698_0001 completed successfully
2017-07-20 18:07:04,793 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 38
    File System Counters
        FILE: Number of bytes read=502
        FILE: Number of bytes written=457295
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=58
        HDFS: Number of bytes written=34
        HDFS: Number of read operations=15
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=4
    Map-Reduce Framework
        Map input records=2
        Map output records=7
        Map output bytes=57
        Map output materialized bytes=77
        Input split bytes=107
        Combine input records=0
        Combine output records=0
        Reduce input groups=5
        Reduce shuffle bytes=77
        Reduce input records=7
        Reduce output records=5
        Spilled Records=14
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=0
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=396361728
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=29
    File Output Format Counters 
        Bytes Written=34
```
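The counters corroborate the walkthrough: Map input records=2 (two lines), Map output records=7 (seven words emitted), Reduce input groups=5 (five distinct keys), and Reduce output records=5. Even HDFS: Number of bytes written=34 checks out against the expected result file shown earlier: counting each line as key + tab + digit + newline gives 8 (Cheng) + 8 (Malik) + 5 (am) + 9 (hadoop) + 4 (i) = 34 bytes.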