Hadoop WordCount

This article introduces the WordCount program that ships with Hadoop, explains the core idea of MapReduce, its core functions, and its workflow, and uses a sandwich-making analogy to aid understanding. MapReduce processes data in parallel through the map and reduce functions, splitting a large-scale dataset, processing the pieces, and aggregating the results. WordCount, a canonical MapReduce application, counts how often each word appears in a text.

1. The WordCount that Ships with Hadoop

1.1 Location

```
[root@hadoop001 ~]# ls $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar
/export/servers/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar
```

1.2 Usage

```
[root@hadoop001 ~]# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar
An example program must be given as the first argument.
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
[root@hadoop001 ~]# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount
Usage: wordcount <in> [<in>...] <out>
```
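The usage line shows that wordcount needs at least one input path and one output path on HDFS. A complete run could look like the following sketch (words.txt and the /input and /output paths are illustrative, and the output directory must not exist before the job runs):

```
[root@hadoop001 ~]# hadoop fs -mkdir /input
[root@hadoop001 ~]# hadoop fs -put words.txt /input
[root@hadoop001 ~]# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
[root@hadoop001 ~]# hadoop fs -cat /output/part-r-00000
```

The part-r-00000 file is where the (single) reducer writes its result: one word and its count per line.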

2. MapReduce

2.1 The Core Idea of MapReduce

MapReduce is a parallel programming model and one of the core components of the Hadoop ecosystem. Its core idea is "divide and conquer": a large-scale dataset is split into many small, independent datasets, which are then processed simultaneously on multiple machines.

We will use an easy-to-understand example to illustrate this "divide and conquer" idea.

2.2 The Core Functions of MapReduce

MapReduce abstracts the entire parallel computation into two functions: map and reduce. The map function is the "divide" in divide-and-conquer, and the reduce function is the "conquer".

MapReduce splits a large-scale dataset stored in the distributed file system into many small, independent datasets and assigns them to multiple map tasks. The output of the map tasks is then further processed into the input of the reduce tasks; finally, the reduce tasks aggregate the results and write them back to the distributed file system.

Map function: the map function converts each small dataset into <key, value> pairs suitable as input, then processes them into a series of intermediate <key, value> pairs as output; the output can be viewed as list(<key, value>).

Reduce function: the reduce function takes the map output as its input, gathers the elements that share the same key, performs the aggregation on them, and outputs the result, again as <key, value> pairs, merged into a single file.
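To make the two functions concrete before looking at the full workflow, here is a minimal single-machine sketch in plain Java (no Hadoop involved; the class name and the input line are made up for illustration) that mimics the map, shuffle, and reduce steps on one line of text:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WordCountTrace {
    public static void main(String[] args) {
        String line = "hello world hello";

        // "map" step: emit a <word, 1> pair for every token on the line
        List<Map.Entry<String, Integer>> mapped = new ArrayList<>();
        for (String w : line.split("\\s+")) {
            mapped.add(new AbstractMap.SimpleEntry<>(w, 1));
        }
        // mapped is now [<hello,1>, <world,1>, <hello,1>]

        // "shuffle" step: group the values of pairs that share a key
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> e : mapped) {
            grouped.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        // grouped is now {hello=[1, 1], world=[1]}

        // "reduce" step: sum each key's value list to get the word count
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) {
                sum += v;
            }
            System.out.println(e.getKey() + "\t" + sum);
        }
    }
}
```

Running it prints hello 2 and world 1, which is exactly the group-then-aggregate pattern that Hadoop carries out in parallel across many machines.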

2.3 The Workflow of MapReduce

The figure below shows the MapReduce workflow, which consists of six stages: input, split, map, shuffle, reduce, and output. Let's walk through the workflow using a sandwich-making analogy:

[Figure: the MapReduce workflow: input → split → map → shuffle → reduce → output]

√ The input stage is like preparing the ingredients;

Hadoop WordCount is a classic MapReduce program for computing word frequencies over text data. It works as follows:

1. Map phase: split the text on the delimiter and, for each word, emit the word as the key and a count of 1 as the value, which is passed on to the reduce phase.
2. Reduce phase: merge the key-value pairs coming from the map phase, sum how many times each word occurs in the text, and output the final word-frequency result.

Below is the Java code for Hadoop WordCount:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: tokenize each input line and emit <word, 1> for every token
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the counts collected for each word and emit <word, total>
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    // the combiner runs the reducer logic locally on each map's output
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

In the code above, the TokenizerMapper class implements the map-phase logic, the IntSumReducer class implements the reduce-phase logic, and the main function configures and submits the MapReduce job.
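If you would rather compile and run this class yourself than use the bundled examples jar, one possible sequence is the following (the wc.jar name is arbitrary, and /input and /output are the same illustrative HDFS paths as above):

```
[root@hadoop001 ~]# javac -classpath $(hadoop classpath) WordCount.java
[root@hadoop001 ~]# jar cf wc.jar WordCount*.class
[root@hadoop001 ~]# hadoop jar wc.jar WordCount /input /output
```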