Today's 58.com written test included a design question. The problem: a server has recorded a large number of access logs, and we want to find the ten most frequently occurring IP addresses. Propose a solution.
My approach: first, use Hadoop to count the number of accesses per IP, producing a sequence of <ip, iCount> pairs. Then build a min-heap of capacity 10 and use heap-based selection to pick out the top 10 IPs (a sketch of this step follows below).
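Here is a minimal in-memory sketch of that selection step; the class and method names (TopTenIps, topTen) are mine, purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

public class TopTenIps {
    // Returns the 10 entries with the highest counts, in descending order.
    public static List<Map.Entry<String, Long>> topTen(Map<String, Long> counts) {
        // Min-heap ordered by count: the root is always the smallest of the
        // current candidates, so it is the one evicted when an 11th arrives.
        PriorityQueue<Map.Entry<String, Long>> heap =
                new PriorityQueue<>(Map.Entry.<String, Long>comparingByValue());
        for (Map.Entry<String, Long> e : counts.entrySet()) {
            heap.offer(e);
            if (heap.size() > 10) {
                heap.poll(); // drop the entry with the lowest count
            }
        }
        List<Map.Entry<String, Long>> top = new ArrayList<>(heap);
        top.sort(Map.Entry.<String, Long>comparingByValue().reversed());
        return top;
    }
}
```

Each pair is offered to the heap, and once the heap holds more than 10 entries the smallest count is evicted, so only the 10 largest remain at the end. This keeps memory at O(10) regardless of how many distinct IPs there are.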
Generating the <ip, iCount> pairs uses essentially the same algorithm as WordCount. So I first set up Hadoop, compiled WordCount, and ran it on Hadoop.
1. Setting up Hadoop
Download hadoop 2.6.5 from the official site and work through the getting-started guide (http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/SingleCluster.html) step by step.
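For reference, the setup and upload steps in that guide look roughly like this (the guide uses the Hadoop config files as sample input, and <username> is the guide's own placeholder):
bin/hdfs namenode -format
sbin/start-dfs.sh
bin/hdfs dfs -mkdir /user
bin/hdfs dfs -mkdir /user/<username>
bin/hdfs dfs -put etc/hadoop input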
Once the input files are on HDFS, run:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount input output
The output looks like this:
![](https://img-blog.csdn.net/20170917211022215?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvZ2FpeG0=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center)
This shows that wordcount ran successfully.
Then list the output directory: bin/hadoop fs -ls output
![](https://img-blog.csdn.net/20170917211217066?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvZ2FpeG0=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center)
Next, view the result file: bin/hadoop fs -cat output/part-r-00000
![](https://img-blog.csdn.net/20170917211345378?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvZ2FpeG0=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center)
This shows the word-count results.
2. Writing WordCount in Eclipse
Create a new Java project.
Create a lib folder at the same level as src, and copy the jars from the lib directories under hadoop/share's common, hdfs, and mapreduce folders into it. In Eclipse, add the jars under lib to the build path.
![](https://img-blog.csdn.net/20170917211701225?watermark/2/text/aHR0cDovL2Jsb2cuY3Nkbi5uZXQvZ2FpeG0=/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70/gravity/Center)
Create a WordCount class and paste in the following code:
```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    // Mapper: emit <word, 1> for every token in the line.
    public static class WordCountMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer token = new StringTokenizer(value.toString());
            while (token.hasMoreTokens()) {
                word.set(token.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sum the 1s emitted for each word.
    public static class WordCountReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf);
        job.setJarByClass(WordCount.class);
        job.setJobName("wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(WordCountMap.class);
        job.setReducerClass(WordCountReduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}
```
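To adapt this to the original problem, only the mapper needs to change: instead of emitting every token of a line, emit just the IP field. A minimal sketch, assuming the client IP is the first whitespace-separated field of each log line (IpCountMap is my name for it; drop it into the WordCount class above in place of WordCountMap):

```java
public static class IpCountMap extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final IntWritable one = new IntWritable(1);
    private final Text ip = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Assumption: the client IP is the first whitespace-separated
        // field of each access-log line (as in common log formats).
        String[] fields = value.toString().split("\\s+");
        if (fields.length > 0 && !fields[0].isEmpty()) {
            ip.set(fields[0]);
            context.write(ip, one);
        }
    }
}
```

The reducer and driver stay exactly as above; only job.setMapperClass(IpCountMap.class) would change. The walkthrough below continues with the plain WordCount jar.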
Then right-click the project, choose Export -> Java -> JAR file, and export it as WordCount.jar.
Copy the exported jar into Hadoop's share/hadoop/mapreduce directory.
Then run:
bin/hadoop jar share/hadoop/mapreduce/WordCount.jar WordCount input output
and it runs correctly. The last missing piece is combining the counting output with the heap; a sketch follows.
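The reducer output can be read back and fed into the top-10 selection. A small sketch, assuming the job output has been copied to a local file and uses TextOutputFormat's default key<TAB>value layout (the file path comes in as a command-line argument; TopTenIps is the helper sketched at the top of the post):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class TopTenFromOutput {
    public static void main(String[] args) throws IOException {
        // Each part-r-00000 line looks like "1.2.3.4\t42"
        // (TextOutputFormat's default key<TAB>value layout).
        Map<String, Long> counts = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] kv = line.split("\t");
                if (kv.length == 2) {
                    counts.put(kv[0], Long.parseLong(kv[1]));
                }
            }
        }
        // Reuse the size-10 min-heap selection sketched earlier.
        TopTenIps.topTen(counts).forEach(e ->
                System.out.println(e.getKey() + "\t" + e.getValue()));
    }
}
```

Together, the MapReduce job and this selection step implement the two-phase design described at the start: distributed counting, then constant-memory top-10 extraction.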