Map
package com.test.dx;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/* How to pass arguments to main in Eclipse:
 * Run As --> Run Configurations --> Arguments
 */
public class WordCount extends Configured implements Tool {
    static int mapnum = 0;
    public static int redunum = 0;

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            mapnum++;
            System.out.println("..........key" + key);
            System.out.println("..........value" + line);
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                System.out.println("+++" + word.toString());
                context.write(word, one); // emit a <word, 1> pair
            }
        }
    }
File contents:
input/test1.text
Hello world bye world
Hello world bye world
input/test2.text
Hello Hadoop bye Hadoop
Map task overview:
Map first reads the input, splits out each word, and tags it with a count of 1, forming <word, 1> key/value pairs.
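The pairs the map step emits for the sample input can be sketched in plain Java, without the Hadoop types (a simple list of strings stands in for the emitted <key, value> pairs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class MapSketch {
    public static void main(String[] args) {
        // the three input lines from the sample files
        String[] lines = {
            "Hello world bye world",
            "Hello world bye world",
            "Hello Hadoop bye Hadoop"
        };
        List<String> pairs = new ArrayList<>();
        for (String line : lines) {
            // same tokenizer as in map(): default delimiter is whitespace
            StringTokenizer tok = new StringTokenizer(line);
            while (tok.hasMoreTokens()) {
                pairs.add("<" + tok.nextToken() + ", 1>"); // each word tagged with count 1
            }
        }
        System.out.println(pairs.size() + " pairs emitted");  // 12 pairs emitted
        System.out.println(pairs.subList(0, 4));              // [<Hello, 1>, <world, 1>, <bye, 1>, <world, 1>]
    }
}
```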
Map implementation notes:
Mapper generic parameters:
Mapper<LongWritable, Text, Text, IntWritable> — the input key (byte offset of the line), input value (the line's text), output key (the word), and output value (the count).
Function breakdown:
StringTokenizer notes: the default delimiter is ' ', but a specific delimiter can also be supplied.
When the text in the file is formatted as:
Hello,world,bye,world
you need to specify ',' as the delimiter: new StringTokenizer(line, ",");
StringTokenizer st = new StringTokenizer("this is a test");
while (st.hasMoreTokens()) {
    System.out.println(st.nextToken());
}
The output is:
this
is
a
test
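For the comma-separated case mentioned above, the delimiter overload works the same way; a minimal sketch:

```java
import java.util.StringTokenizer;

public class CommaTokens {
    public static void main(String[] args) {
        String line = "Hello,world,bye,world";
        // pass "," as the second argument so commas, not spaces, split the line
        StringTokenizer st = new StringTokenizer(line, ",");
        while (st.hasMoreTokens()) {
            System.out.println(st.nextToken());
        }
        // prints Hello, world, bye, world — one token per line
    }
}
```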
Map finally writes its results to the local disk, not to HDFS.
Reducer
    // key/value pairs with the same key are sent to the same Reducer
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            // System.out.println(key.toString());
            for (IntWritable val : values) {
                // System.out.println("----" + val);
                sum += val.get();
            }
            redunum = redunum + 1;
            // reduce writes its result to the output file through this call
            context.write(key, new IntWritable(sum));
        }
    }
Reducer task overview:
Reduce collects the values that share the same key and sums them, forming <word, count> pairs.
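What the shuffle hands to reduce, and the sums reduce then computes for the sample input, can be sketched in plain Java (a TreeMap stands in for the shuffle's grouped-and-sorted output; no Hadoop types):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ReduceSketch {
    public static void main(String[] args) {
        // after the shuffle, each key carries the list of its 1-values
        Map<String, List<Integer>> grouped = new TreeMap<>();
        String text = "Hello world bye world Hello world bye world Hello Hadoop bye Hadoop";
        for (String word : text.split(" ")) {
            grouped.computeIfAbsent(word, k -> new ArrayList<>()).add(1);
        }
        // reduce step: sum the values for each key, as in the reduce() method above
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            System.out.println(e.getKey() + "\t" + sum);
        }
        // prints (keys sorted, uppercase before lowercase):
        // Hadoop 2, Hello 3, bye 3, world 4
    }
}
```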
Main function
    @Override
    public int run(String[] args) throws Exception {
        Job job = new Job(getConf());
        job.setJarByClass(WordCount.class);
        job.setJobName("wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int ret = ToolRunner.run(new WordCount(), args);
        System.out.println(WordCount.mapnum);
        System.out.println(WordCount.redunum);
        System.exit(ret);
    }
}
Function details:
InputFormat's job is to cut the data set into smaller splits, each of which is handled by one Mapper task; the map() method is then called once per record, which is why map() runs 3 times here (one call per input line). TextInputFormat targets text files: it cuts the text into InputSplits by line, and LineRecordReader parses each InputSplit into <key, value> pairs, where the key is the byte offset of the line in the file and the value is the line's content.
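The byte-offset keys that LineRecordReader hands to map() can be sketched for one of the sample files; this assumes '\n' line endings, and the key is the offset of each line's first byte:

```java
public class OffsetSketch {
    public static void main(String[] args) {
        // the contents of input/test1.text, with '\n' line endings
        String file = "Hello world bye world\nHello world bye world\n";
        long offset = 0;
        for (String line : file.split("\n")) {
            // the <key, value> pair that map() receives for this line
            System.out.println("<" + offset + ", " + line + ">");
            offset += line.length() + 1; // +1 for the '\n' the reader consumes
        }
        // prints:
        // <0, Hello world bye world>
        // <22, Hello world bye world>
    }
}
```

This matches the "..........key" lines the map() method above prints: the second line's key is 22 because the first line is 21 characters plus its newline.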