Hadoop MapReduce: The Classic Word Count Example

Diagram of word count with MapReduce

[Figure: data flow of the MapReduce word count job — input splits → map → shuffle/sort → reduce → output]
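To make the diagram concrete, here is what each stage produces for a hypothetical two-line input file:

```
input file:     hello world
                hello hadoop

map output:     <hello,1> <world,1> <hello,1> <hadoop,1>
shuffle/sort:   hadoop -> [1]   hello -> [1,1]   world -> [1]
reduce output:  hadoop  1
                hello   2
                world   1
```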

Example code for word count with MapReduce (Java)

①Mapper

import java.io.IOException;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WM extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {

        // Each call receives one line of input; the key is the byte offset
        // of that line within the file and is not needed here.
        String line = value.toString();
        // commons-lang StringUtils.split collapses consecutive separators,
        // so no empty tokens are produced.
        String[] words = StringUtils.split(line, " ");
        // Emit <word, 1> for every token; the framework groups these pairs
        // by key before they reach the reducer.
        for (String word : words) {
            context.write(new Text(word), new LongWritable(1));
        }

    }
}
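As a small refinement (not in the original listing), the Writable objects can be allocated once per task and reused across map() calls instead of being created for every word; Hadoop serializes the pair immediately inside context.write, so reuse is safe. WMReusing below is a hypothetical variant sketching the pattern:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical variant of WM that reuses Writable instances.
public class WMReusing extends Mapper<LongWritable, Text, Text, LongWritable> {
    private final Text outKey = new Text();                      // reused for every word
    private static final LongWritable ONE = new LongWritable(1); // value is always 1

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split on runs of whitespace; skip empty tokens from leading spaces.
        for (String word : value.toString().split("\\s+")) {
            if (!word.isEmpty()) {
                outKey.set(word);
                context.write(outKey, ONE); // serialized immediately, so reuse is safe
            }
        }
    }
}
```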

②Reducer

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WR extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text word, Iterable<LongWritable> counts,
            Context context) throws IOException, InterruptedException {
        // Use a long accumulator: the values are LongWritable, and an int
        // would silently narrow the sum and could overflow on large inputs.
        long counter = 0;

        // The framework has already grouped all <word, 1> pairs for this key;
        // summing them yields the word's total frequency.
        for (LongWritable count : counts) {
            counter += count.get();
        }

        context.write(word, new LongWritable(counter));

    }
}
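Because the aggregation here is a plain sum (associative and commutative), the same reducer class can also be registered as a combiner, so partial sums are computed on the map side and less data crosses the network during the shuffle. As a sketch, one extra line in the driver before submission is enough:

```java
// Optional: pre-aggregate map output locally using the reducer logic.
job.setCombinerClass(WR.class);
```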

③Driver (main program)

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.Job;

public class Launcher {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();

        Job job = Job.getInstance(conf);

        // Locate the jar to ship by the class it contains.
        job.setJarByClass(Launcher.class);

        // Wire up the map and reduce phases.
        job.setMapperClass(WM.class);
        job.setReducerClass(WR.class);

        // Declare the key/value types of the map output...
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // ...and of the final (reducer) output.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // Windows local-filesystem paths, suitable for local-mode testing.
        // The output directory must not exist before the job runs.
        FileInputFormat.setInputPaths(job, "e://hello//words.txt");
        FileOutputFormat.setOutputPath(job, new Path("e://hello//output"));

        // Block until the job finishes; exit nonzero on failure.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
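To run on a cluster (or a single-node pseudo-distributed setup), the three classes can be packaged into a jar and submitted with the standard `hadoop jar` command; the jar name below is a hypothetical placeholder. With the hard-coded local paths above, the job runs in local mode and writes its result to `e://hello//output/part-r-00000`, one tab-separated `word  count` pair per line.

```
hadoop jar wordcount.jar Launcher
```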