Count the number of occurrences of each word in a file. The exercise is shown below.
1. Create the input file tast (the word list to be read), with the following content:
hadoop core map reduce hiv hbase Hbase
pig hadoop mapreduce MapReduce Hadoop Hbase
spark
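Before moving on, the file has to be somewhere the job can read it. A minimal sketch of the upload, assuming an HDFS cluster with the Hadoop 1.x fs shell and the same input path that JobRun uses later (adjust the path if yours differs):
hadoop fs -mkdir /opt/hadoop-1.2/mapred/xiaoming
hadoop fs -put tast /opt/hadoop-1.2/mapred/xiaoming/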
2. The processing flow is shown in the following diagram:
As the diagram shows, Mapping and Reducing are the stages we have to implement ourselves; everything else is handled by the Map/Reduce framework.
So let's start by writing the code for the Mapping step.
Create WcMapper.java:
package com.all58.mr;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WcMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    /**
     * map() is invoked once for each line in the input split:
     * key:   the position (byte offset) of the line within the file
     * value: the text of the line
     */
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one); // emit (word, 1) as the map output
        }
    }
}
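To make the mapper's behavior concrete: for the first line of tast, map() tokenizes the line on whitespace and emits one (word, 1) pair per token:
(hadoop, 1) (core, 1) (map, 1) (reduce, 1) (hiv, 1) (hbase, 1) (Hbase, 1)
Note that hbase and Hbase are distinct keys, since the tokenizer does no case folding. The framework then sorts and groups these pairs by key before handing them to the reducer.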
Create WcReducer.java:
package com.all58.mr;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WcReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    /**
     * reduce() is called once per distinct key; iter holds all the
     * 1s emitted by the mappers for that word.
     */
    @Override
    protected void reduce(Text key, Iterable<IntWritable> iter,
            Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : iter) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result); // emit (word, total count)
    }
}
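Continuing the example: after the shuffle, reduce() is called once per distinct word with all of its 1s. For Hbase, which appears twice in the input, the call is effectively reduce("Hbase", [1, 1]), so sum becomes 2 and the reducer writes "Hbase 2"; a word such as spark yields reduce("spark", [1]) and writes "spark 1".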
At this point the computation code is complete. Next we write the driver program that submits the job.
Create JobRun.java:
package com.all58.mr;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JobRun {

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "node1:9001");
        try {
            Job job = new Job(conf);
            job.setJarByClass(JobRun.class);
            job.setMapperClass(WcMapper.class);
            job.setReducerClass(WcReducer.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);
            //job.setNumReduceTasks(1); // set the number of reduce tasks

            // directory (or file) holding the MapReduce input
            FileInputFormat.addInputPath(job, new Path("/opt/hadoop-1.2/mapred/xiaoming"));
            // directory for the MapReduce output; it must not exist before the job runs
            FileOutputFormat.setOutputPath(job, new Path("/opt/hadoop-1.2/mapred/xiaoming/output"));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
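One optional addition, not in the original driver: since WcReducer merely sums its values, it can double as a combiner, pre-aggregating counts on the map side to shrink the shuffle. A hedged one-line sketch to place inside main() alongside the other job.set* calls:
job.setCombinerClass(WcReducer.class); // optional: pre-sum counts on the map side before the shuffle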
Running the job
1. Export the project from Eclipse as wc.jar and copy it to the node1 server with scp.
2. On node1, change into ~/hadoop-1.2.1/bin and run:
./hadoop jar ~/wc.jar com.all58.mr.JobRun
When the job finishes, the console output looks like the figure below.
Open Eclipse and view the result.
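If you prefer the command line to the Eclipse plugin (an alternative, not part of the original steps), the same result can be read from node1's shell:
./hadoop fs -cat /opt/hadoop-1.2/mapred/xiaoming/output/part-r-00000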
Contents of part-r-00000:
Hadoop 1
Hbase 2
MapReduce 1
core 1
hadoop 2
hbase 1
hiv 1
map 1
mapreduce 1
pig 1
reduce 1
spark 1