For a quick test of Hadoop I used two machines: one serving as both namenode and datanode, the other as a datanode only.

Task:

Count how many times each IP address appears in an access log.

The log entries look like this:

61.135.189.75 - - [18/Nov/2012:04:00:15 +0800] "GET /robots.txt HTTP/1.1" 200 556 "-" "-"
61.135.189.75 - - [18/Nov/2012:04:00:26 +0800] "GET /portal.php HTTP/1.1" 200 13929 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
121.14.98.63 - - [18/Nov/2012:04:00:43 +0800] "POST /bbs/api/manyou/my.php HTTP/1.0" 200 150 "http://cless.bnu.edu.cn/bbs/api/manyou/my.php" "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.1.9) Gecko/20100315 Firefox/3.5.9"
66.249.66.73 - - [18/Nov/2012:04:01:27 +0800] "GET /syshtml/?action-tag-tagname-%B3%CC%D0%F2 HTTP/1.1" 200 797 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
157.55.33.19 - - [18/Nov/2012:04:01:31 +0800] "GET /dp3.php HTTP/1.1" 200 12219 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
The log is about 84 MB, so admittedly a fairly small dataset...

1. First attempt: a mapper and reducer written in Perl, shown below.

  mapper.pl

  #!/usr/bin/perl -w
  use strict;

  # Count occurrences of each client IP (the first whitespace-separated field).
  my %hash;
  while (<>) {
      chomp;
      my $ip = (split /\s+/)[0];
      $hash{$ip}++;
  }

  # Emit "ip<TAB>count" pairs, one per line, for the reducer to aggregate.
  while (my ($ip, $count) = each %hash) {
      print "$ip\t$count\n";
  }
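
A streaming mapper just reads stdin and writes stdout, so its intermediate "ip<TAB>count" output can be eyeballed locally before involving Hadoop at all (access.log here stands in for the real log file):

  head -1000 access.log | ./mapper.pl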
 

 reducer.pl

  #!/usr/bin/perl -w
  use strict;

  # Sum the partial counts emitted by the mappers for each IP.
  my %last;
  while (<>) {
      chomp;
      my ($ip, $num) = split /\s+/;
      $last{$ip} += $num;
  }

  # Print the totals sorted by access count, descending.
  # (Avoid $a/$b as ordinary variable names: they are reserved for sort.)
  foreach my $ip (sort { $last{$b} <=> $last{$a} } keys %last) {
      print "$ip\t$last{$ip}\n";
  }
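
Since both scripts only use stdin/stdout, the whole pair can be simulated locally first, with sort standing in for Hadoop's shuffle phase:

  cat access.log | ./mapper.pl | sort | ./reducer.pl | head

Submitting it to the cluster goes through the streaming jar. The command below is only a sketch: the jar location varies between Hadoop versions, and the HDFS input/output paths are placeholders rather than the ones I actually used:

  hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
      -input /logs/access.log \
      -output /logs/ip_counts \
      -mapper mapper.pl \
      -reducer reducer.pl \
      -file mapper.pl \
      -file reducer.pl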
 

The job took about 2 min 8 s to run.
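
Once the job finishes, the totals land in the output directory as part-* files; with the placeholder path from the sketch above, they could be inspected like this:

  hadoop fs -cat /logs/ip_counts/part-* | head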

 

2. Second attempt: a mapper.sh written in shell (awk), keeping reducer.pl from above as the reducer.

  mapper.sh

#!/bin/sh
# Count occurrences of each client IP; print "ip count" pairs.
awk '{a[$1]++} END{for(i in a) print i, a[i]}'

  This took about 55 s.

 

3. Third attempt: both mapper.sh and reducer.sh in shell.

  Note that this reducer.sh does not sort by access count; a sorted variant is sketched after the scripts.

   mapper.sh

   #!/bin/sh
# Same mapper as above: count per-IP occurrences, print "ip count".
awk '{a[$1]++} END{for(i in a) print i, a[i]}'

  reducer.sh

   #!/bin/sh
# Sum the per-mapper partial counts; print "ip total" (unsorted).
awk '{a[$1]+=$2} END{for(i in a) print i, a[i]}'
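
If sorted output is wanted from the pure-shell version too, one extra pipe would do it. A sketch: sort -k2,2nr orders the rows by the second column (the count), numerically and descending:

  #!/bin/sh
  # Sum partial counts, then sort by total, largest first.
  awk '{a[$1]+=$2} END{for(i in a) print i, a[i]}' | sort -k2,2nr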

 This run took 54 s, essentially the same as before.

 

 So for this job Perl seems to lose on efficiency: the awk versions finished in well under half the time (55 s vs. 2 min 8 s).
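
To separate script speed from Hadoop's job overhead, a fair follow-up would be to time both mappers locally against the raw log (again assuming it is saved as access.log):

  time ./mapper.pl < access.log > /dev/null
  time ./mapper.sh < access.log > /dev/null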