Hadoop First Notes: A Trial Run

Hadoop Learning

My Demo
Statistic.java
1. Initialize the configuration, a temporary directory for intermediate files, and the Job itself.

        Configuration defaults = new Configuration();
        File tempDir = new File("tmp/stat-temp-"+Integer.toString(
                new Random().nextInt(Integer.MAX_VALUE)));
        JobConf statJob = new JobConf(defaults, Statistic.class);

2. Set the Job's parameters

        statJob.setJobName("StatTestJob");
        statJob.setInputDir(new File("tmp/input/"));
        statJob.setMapperClass(StatMapper.class);
        statJob.setReducerClass(StatReducer.class);
        statJob.setOutputDir(tempDir);

3. Run the Job, then clean up the temporary files

        JobClient.runJob(statJob);
        new JobClient(defaults).getFs().delete(tempDir);

4. Running this demo prints:
060328 151414 parsing jar:file:/E:/workground/opensource/hadoop-nightly/hadoop-nightly.jar!/hadoop-default.xml
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/mapred-default.xml
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/hadoop-site.xml
060328 151414 parsing jar:file:/E:/workground/opensource/hadoop-nightly/hadoop-nightly.jar!/hadoop-default.xml
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/mapred-default.xml
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/mapred-default.xml
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/hadoop-site.xml
060328 151414 parsing jar:file:/E:/workground/opensource/hadoop-nightly/hadoop-nightly.jar!/hadoop-default.xml
Key: 0, Value: For the latest information about Hadoop, please visit our website at:
Key: 70, Value:
Key: 71, Value:   
http://lucene.apache.org/hadoop/
Key: 107, Value:
Key: 108, Value: and our wiki, at:
Key: 126, Value:
Key: 127, Value:   
http://wiki.apache.org/hadoop/
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/mapred-default.xml
060328 151414 parsing build/test/mapred/local/job_lck1iq.xml/localRunner
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/hadoop-site.xml
060328 151414 Running job: job_lck1iq
060328 151414 parsing jar:file:/E:/workground/opensource/hadoop-nightly/hadoop-nightly.jar!/hadoop-default.xml
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/mapred-default.xml
060328 151414 parsing build/test/mapred/local/job_lck1iq.xml/localRunner
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/mapred-default.xml
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/hadoop-site.xml
060328 151414 E:/workground/opensource/hadoop-nightly/tmp/input/README.txt:0+161
060328 151414 parsing jar:file:/E:/workground/opensource/hadoop-nightly/hadoop-nightly.jar!/hadoop-default.xml
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/mapred-default.xml
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/mapred-default.xml
060328 151414 parsing file:/E:/workground/opensource/hadoop-nightly/bin/hadoop-site.xml
060328 151414 parsing jar:file:/E:/workground/opensource/hadoop-nightly/hadoop-nightly.jar!/hadoop-default.xml
060328 151415 parsing file:/E:/workground/opensource/hadoop-nightly/bin/mapred-default.xml
060328 151415 parsing build/test/mapred/local/job_lck1iq.xml/localRunner
060328 151415 parsing file:/E:/workground/opensource/hadoop-nightly/bin/mapred-default.xml
060328 151415 parsing file:/E:/workground/opensource/hadoop-nightly/bin/hadoop-site.xml
060328 151415 reduce > reduce
060328 151415  map 100%  reduce 100%
060328 151415 Job complete: job_lck1iq
060328 151415 parsing jar:file:/E:/workground/opensource/hadoop-nightly/hadoop-nightly.jar!/hadoop-default.xml
060328 151415 parsing file:/E:/workground/opensource/hadoop-nightly/bin/hadoop-site.xml

5. Analyzing the output.
Hadoop starts by parsing a pile of configuration files; ignore those for now. It then parses README.txt under tmp/input/, calling my StatMapper.map(WritableComparable key, Writable value, OutputCollector output, Reporter reporter), which prints each key and value. Evidently the key is the current byte offset in the file and the value is the content of the current line. More xml-parsing log lines follow, presumably because the framework starts several threads to speed things up.
Because my StatMapper only prints the key-value pairs and does nothing else, the reduce step was effectively skipped.
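The key/value pairing observed above (key = byte offset, value = line content) can be sketched in plain Java. This is a hypothetical illustration of what a line-based record reader produces, not actual Hadoop code; the class and method names are made up.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OffsetKeyDemo {
    // Pair each line with the byte offset at which it starts,
    // mimicking the (key, value) records fed to StatMapper.map().
    static Map<Long, String> recordsWithOffsets(String fileContent) {
        Map<Long, String> records = new LinkedHashMap<>();
        long offset = 0;
        for (String line : fileContent.split("\n", -1)) {
            records.put(offset, line);
            offset += line.getBytes().length + 1; // +1 for the '\n' delimiter
        }
        return records;
    }

    public static void main(String[] args) {
        String text = "For the latest information\n\nhttp://lucene.apache.org/hadoop/";
        recordsWithOffsets(text).forEach(
                (k, v) -> System.out.println("Key: " + k + ", Value: " + v));
    }
}
```

This reproduces the pattern seen in the demo output: empty lines get their own key, and each key advances by the byte length of the previous line plus its delimiter.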

6. Let's get Reduce to actually run; that means tweaking StatMapper.
StatMapper.java:
    public void map(WritableComparable key, Writable value, OutputCollector output, Reporter reporter)
            throws IOException
    {
        // emit (number of words on this line, 1)
        String tokenLength = String.valueOf(value.toString().split(" ").length);
        output.collect(new UTF8(tokenLength), new LongWritable(1));
    }
The number of words on each line becomes the key and 1 the value passed to output.collect(); that should give us the frequency of each line length (in words) across the file.
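Stripped of the Hadoop classes, the statistic boils down to a few lines of plain Java. This sketch (with made-up names) folds the map step (key = token count, value = 1) and the reduce step (summing the 1s) into one pass:

```java
import java.util.HashMap;
import java.util.Map;

public class LineLengthStat {
    // map: key = number of space-separated tokens on the line, value = 1
    // reduce: sum the 1s per key to get each line length's frequency
    static Map<Integer, Long> lengthFrequencies(String[] lines) {
        Map<Integer, Long> freq = new HashMap<>();
        for (String line : lines) {
            int tokens = line.split(" ").length;
            freq.merge(tokens, 1L, Long::sum);
        }
        return freq;
    }

    public static void main(String[] args) {
        Map<Integer, Long> f = lengthFrequencies(new String[] {"a b", "c d", "e"});
        f.forEach((k, v) -> System.out.println("Length: " + k + ", Count: " + v));
    }
}
```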
7. Statistic.java needs changes too:
        statJob.setOutputDir(tempDir);
        statJob.setOutputFormat(SequenceFileOutputFormat.class);
        statJob.setOutputKeyClass(UTF8.class);
        statJob.setOutputValueClass(LongWritable.class);

8. As does StatReducer.java:
    public void reduce(WritableComparable key, Iterator values, OutputCollector output, Reporter reporter)
            throws IOException
    {
        long sum = 0;
        while (values.hasNext())
        {
            sum += ((LongWritable)values.next()).get();
        }
        System.out.println("Length: " + key + ", Count: " + sum);

        output.collect(key, new LongWritable(sum));
    }

9. Drop a pile of java files into the input directory and run it again.
(countless xml-parsing log lines omitted)
Length: 0, Count: 359
Length: 1, Count: 3474
Length: 10, Count: 1113
Length: 11, Count: 1686
Length: 12, Count: 1070
Length: 13, Count: 1725
Length: 14, Count: 773
Length: 15, Count: 707
Length: 16, Count: 490
Length: 17, Count: 787
Length: 18, Count: 348
Length: 19, Count: 303
Length: 2, Count: 1543
Length: 20, Count: 227
Length: 21, Count: 421
Length: 22, Count: 155
Length: 23, Count: 143
Length: 24, Count: 109
Length: 25, Count: 219
Length: 26, Count: 83
Length: 27, Count: 70
Length: 28, Count: 55
Length: 29, Count: 107
Length: 3, Count: 681
Length: 30, Count: 53
Length: 31, Count: 43
Length: 32, Count: 38
Length: 33, Count: 66
Length: 34, Count: 36
Length: 35, Count: 26
Length: 36, Count: 42
Length: 37, Count: 52
Length: 38, Count: 32
Length: 39, Count: 33
Length: 4, Count: 236
Length: 40, Count: 17
Length: 41, Count: 40
Length: 42, Count: 15
Length: 43, Count: 23
Length: 44, Count: 14
Length: 45, Count: 27
Length: 46, Count: 15
Length: 47, Count: 30
Length: 48, Count: 2
Length: 49, Count: 18
Length: 5, Count: 1940
Length: 50, Count: 8
Length: 51, Count: 11
Length: 52, Count: 2
Length: 53, Count: 5
Length: 54, Count: 2
Length: 55, Count: 1
Length: 57, Count: 4
Length: 58, Count: 1
Length: 59, Count: 3
Length: 6, Count: 1192
Length: 60, Count: 1
Length: 61, Count: 4
Length: 62, Count: 1
Length: 63, Count: 3
Length: 66, Count: 1
Length: 7, Count: 1382
Length: 8, Count: 1088
Length: 9, Count: 2151
060328 154741 reduce > reduce
060328 154741  map 100%  reduce 100%
060328 154741 Job complete: job_q618hy

Cool, the statistics are out. But what do these numbers actually mean... Looks like it's time for a part (2) that does something more useful.


The previous demo was too boring, so I decided to rework it.
1. Input format.
In the previous program, StatMapper was mysteriously handed a stream of key-value pairs, so there had to be a default input format somewhere. A little digging turned it up: org.apache.hadoop.mapred.InputFormatBase, which implements the InputFormat interface. The interface declares
  FileSplit[] getSplits(FileSystem fs, JobConf job, int numSplits)
    throws IOException;
So all input and output is organized as files, much like in Lucene. Since input data is usually line-delimited, this InputFormatBase should cover the common case.
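Conceptually, getSplits() just carves the input's byte range into chunks, one per map task. Here is a rough plain-Java sketch of that idea (hypothetical names, assuming the file holds at least numSplits bytes; a real implementation also has to respect record boundaries):

```java
public class SplitDemo {
    // Divide a file of fileLength bytes into numSplits byte ranges,
    // returned as {startOffset, length} pairs - the idea behind FileSplit.
    static long[][] splits(long fileLength, int numSplits) {
        long[][] result = new long[numSplits][2];
        long chunk = (fileLength + numSplits - 1) / numSplits; // ceiling division
        for (int i = 0; i < numSplits; i++) {
            long start = (long) i * chunk;
            result[i][0] = start;
            result[i][1] = Math.min(chunk, fileLength - start);
        }
        return result;
    }

    public static void main(String[] args) {
        // prints ranges in the "start+length" form seen in the job log
        for (long[] s : splits(21357898, 3)) System.out.println(s[0] + "+" + s[1]);
    }
}
```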
2. Input data.
This thing is built to crunch huge volumes of data efficiently, which reminded me of iHome's ActionLog... several hundred MB in total, so it qualifies. Let's count how many times each action was invoked over the past few days.
3. Modify the program.
StatMapper.java:
    public void map(WritableComparable key, Writable value, OutputCollector output, Reporter reporter)
            throws IOException
    {
        String[] token = value.toString().split(" ");
        String id = token[6];   // user id, unused in this statistic
        String act = token[7];
        output.collect(new UTF8(act), new LongWritable(1));
    }
StatReducer.java:
    public void reduce(WritableComparable key, Iterator values, OutputCollector output, Reporter reporter)
            throws IOException
    {
        long sum = 0;
        while (values.hasNext())
        {
            sum += ((LongWritable)values.next()).get();
        }
        System.out.println("Action: " + key + ", Count: " + sum);
        output.collect(key, new LongWritable(sum));
    }
4. Run it.
This time the log reads much more clearly:
...
060328 162626 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162626  map 8%  reduce 0%
060328 162627 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162627  map 22%  reduce 0%
060328 162628 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162628  map 37%  reduce 0%
060328 162629 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162629  map 52%  reduce 0%
060328 162630 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162631 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162631  map 80%  reduce 0%
060328 162632 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162632  map 92%  reduce 0%
060328 162632 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
...
060328 162813  map 100%  reduce 0%
...
060328 162816 reduce > append > build/test/mapred/local/map_hjcxj9.out/reduce_z97f6i
060328 162816  map 100%  reduce 29%
060328 162817 reduce > append > build/test/mapred/local/map_8qdis7.out/reduce_z97f6i
060328 162817  map 100%  reduce 35%
060328 162818 reduce > append > build/test/mapred/local/map_m19cmw.out/reduce_z97f6i
060328 162818  map 100%  reduce 40%
060328 162819 reduce > append > build/test/mapred/local/map_kx1fnb.out/reduce_z97f6i
060328 162819  map 100%  reduce 44%
060328 162820 reduce > append > build/test/mapred/local/map_87oxwt.out/reduce_z97f6i
060328 162820  map 100%  reduce 49%
060328 162821 reduce > sort
060328 162822 reduce > sort
060328 162822  map 100%  reduce 50%
060328 162823 reduce > sort
060328 162824 reduce > sort
060328 162826 reduce > sort
060328 162827 reduce > sort
060328 162828 reduce > sort
060328 162830 reduce > sort
060328 162832 reduce > sort
060328 162833 reduce > sort
060328 162835 reduce > sort
060328 162837 reduce > sort
060328 162838 reduce > reduce
060328 162839  map 100%  reduce 75%
060328 162839 reduce > reduce
060328 162840  map 100%  reduce 78%
060328 162840 reduce > reduce
060328 162841  map 100%  reduce 82%
Action: ACTION, Count: 1354644
060328 162841 reduce > reduce
060328 162842  map 100%  reduce 85%
060328 162842 reduce > reduce
060328 162843  map 100%  reduce 89%
060328 162843 reduce > reduce
060328 162844  map 100%  reduce 93%
060328 162844 reduce > reduce
060328 162845  map 100%  reduce 96%
060328 162845 reduce > reduce
060328 162846 reduce > reduce
...
060328 162846  map 100%  reduce 100%
060328 162846 Job complete: job_2pn9y8

A quick analysis: the framework processes the input concurrently with several threads and saves the first pass's output under a temporary directory. It then reads that output back, performs the aggregation step by routing records with the same key to the Reducer for sorting and processing, and the Reducer writes the final output to the output directory.
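That flow (map, group by key, reduce) can be mimicked in memory with plain Java. The log lines here are made up, but follow the layout the mapper assumes, with the action in field 7:

```java
import java.util.Map;
import java.util.TreeMap;

public class MapReduceFlow {
    // map: emit (action, 1) per line; shuffle+reduce: sum the 1s per action
    static Map<String, Long> countActions(String[] lines) {
        Map<String, Long> reduced = new TreeMap<>(); // TreeMap: keys come out sorted
        for (String line : lines) {
            String act = line.split(" ")[7];
            reduced.merge(act, 1L, Long::sum);
        }
        return reduced;
    }

    public static void main(String[] args) {
        String[] log = {
            "d t h u s p id1 LOGIN",
            "d t h u s p id2 SEARCH",
            "d t h u s p id3 LOGIN"
        };
        countActions(log).forEach(
                (k, v) -> System.out.println("Action: " + k + ", Count: " + v));
    }
}
```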


The example above is still incomplete: the statistics aren't sorted, and the output file is in a binary format. Let's fix that.
Statistic.java:
    public static void main(String[] args) throws IOException
    {
        Configuration defaults = new Configuration();
        new JobClient(defaults).getFs().delete(new File("tmp/output/"));
        
        File tempDir = new File("tmp/stat-temp-"+Integer.toString(
                new Random().nextInt(Integer.MAX_VALUE)));
        JobConf statJob = new JobConf(defaults, Statistic.class);
        
        statJob.setJobName("StatTestJob");
        statJob.setInputDir(new File("tmp/input/"));
        
        statJob.setMapperClass(StatMapper.class);
        statJob.setReducerClass(StatReducer.class);
        
        statJob.setOutputDir(tempDir);
        statJob.setOutputFormat(SequenceFileOutputFormat.class);
        statJob.setOutputKeyClass(UTF8.class);
        statJob.setOutputValueClass(LongWritable.class);
        
        JobClient.runJob(statJob);
        
        JobConf sortJob = new JobConf(defaults, Statistic.class);
        sortJob.setInputDir(tempDir);
        sortJob.setInputFormat(SequenceFileInputFormat.class);
        sortJob.setInputKeyClass(UTF8.class);
        sortJob.setInputValueClass(LongWritable.class);

        sortJob.setMapperClass(InverseMapper.class);

        sortJob.setNumReduceTasks(1);   // write a single file
        sortJob.setOutputDir(new File("tmp/output/"));
        sortJob.setOutputKeyComparatorClass(LongWritable.DecreasingComparator.class);// sort by decreasing freq

        JobClient.runJob(sortJob);
        
        new JobClient(defaults).getFs().delete(tempDir);
    }

Here the key and value (action, count) are swapped, re-sorted, and written out as a single text file in (count, action) form.
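The inversion-plus-sort pass can likewise be sketched in plain Java (hypothetical names; a reversed TreeMap stands in for LongWritable.DecreasingComparator, and for simplicity actions with equal counts overwrite each other, which the real job handles properly):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class InvertAndSort {
    // Invert (action, count) to (count, action) and sort counts descending.
    static Map<Long, String> invertDescending(Map<String, Long> counts) {
        Map<Long, String> inverted = new TreeMap<>(Collections.reverseOrder());
        counts.forEach((action, count) -> inverted.put(count, action));
        return inverted;
    }

    public static void main(String[] args) {
        Map<String, Long> counts = new LinkedHashMap<>();
        counts.put("LOGIN", 120L);
        counts.put("SEARCH", 345L);
        counts.put("ACTION", 210L);
        invertDescending(counts).forEach((c, a) -> System.out.println(c + "\t" + a));
    }
}
```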
It feels like every processing step requires a full Map and Reduce pass, which "might" cost some efficiency, but Google's MapReduce paper states plainly:

The performance of the MapReduce library is good enough that we can keep conceptually unrelated computations separate, instead of mixing them together to avoid extra passes over the data.

In other words: let each Map and Reduce step do one simple job, rather than lumping unrelated operations together in pursuit of efficiency.

That covers Hadoop's surface. To dig into how it actually runs, and how to execute distributed computations across multiple machines, the source code is the next stop.

From Beidou
 