I've been around Hadoop for a while now, but because of my machine environment I never really got to dig in. Hadoop processes data with MapReduce and stores it on HDFS; everything else can wait for now.
The first program to try is naturally WordCount, which is right there in the source examples:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class WordCountMapper extends
            Mapper<Object, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            System.out.println("map() is being called");
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, ONE);
            }
        }
    }

    public static class WordCountReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            System.out.println("reduce() is being called");
            int wordcount = 0;
            for (IntWritable value : values) {
                wordcount += value.get();
            }
            context.write(key, new IntWritable(wordcount));
        }
    }

    public static void main(String[] args) throws Exception {
        //Configuration conf = new Configuration();
        Job job = new Job();
        job.setNumReduceTasks(0);
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCountMapper.class);
        job.setCombinerClass(WordCountReducer.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("input"));
        FileOutputFormat.setOutputPath(job, new Path("output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
A few points worth noting:
1. If you run it inside Eclipse you can just hit run, exactly as with an ordinary Java program; to run it on the cluster it has to be packaged into a jar first. I'm not yet sure why.
2. To see the so-called intermediate output, use Job's setNumReduceTasks() to set the number of reducers to 0. Then there is no shuffle and sort at all, and what lands in the output directory is exactly the intermediate output.
3. Since the mappers' results have to be copied to the reducers, how many copy operations can there be at most? With m mappers and r reducers there are at most m*r copies, because one mapper's intermediate output may get copied to all r different reducers, for example when every mapper's intermediate output happens to contain r keys that land in r different partitions.
4. The reducers' computation can only start once all of the intermediate output has been shuffled and sorted.
5. Even though I called setCombinerClass(), the result shows no "local aggregation" of the intermediate output, which comes down to how Hadoop is implemented: the combiner may be invoked 0, 1, or many times, and that decision belongs entirely to Hadoop. In this particular job it presumably never runs at all, because with 0 reduce tasks there is no shuffle. When reducers do exist, the combiner is typically applied on the map side while intermediate output is spilled to disk, and it may also be applied on the reduce side while the copied intermediate key-value pairs are being merged, i.e. after they have been copied to the reducer but before the reducer's own code starts running. See the sketch right below.
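A minimal sketch of what the driver would look like if the combiner is actually given a chance to run, i.e. the same job as above but with one reduce task (combinerJob is just an illustrative name; everything else is the same Job API already used in main()):

    Job combinerJob = new Job();
    combinerJob.setJarByClass(WordCount.class);
    combinerJob.setMapperClass(WordCountMapper.class);
    // WordCountReducer can double as a combiner because summing counts is
    // associative and commutative, and its input/output types match the map output types.
    combinerJob.setCombinerClass(WordCountReducer.class);
    combinerJob.setReducerClass(WordCountReducer.class);
    combinerJob.setNumReduceTasks(1); // with 0 reduce tasks the combiner never runs at all
    combinerJob.setOutputKeyClass(Text.class);
    combinerJob.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(combinerJob, new Path("input"));
    FileOutputFormat.setOutputPath(combinerJob, new Path("output"));
    System.exit(combinerJob.waitForCompletion(true) ? 0 : 1);

Even then the combiner is only an optimization hint: Hadoop is free to skip it entirely, so the job's output must be correct whether it runs zero times or many times.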
That's roughly what I know so far. But where does the split actually happen? Also, it looks like I'm only passing in a directory (input), yet the map method works on key-in/value-in pairs where the value is of type Text, so where and how does Hadoop actually read the files? Experimenting shows the input is read line by line: each line read produces one key-value pair, where the key is declared as Object and the value is that line of text. I still don't know where and when these details take place.
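As far as I can tell so far, the line-by-line behaviour comes from the default input format, TextInputFormat: its record reader breaks each file under the input directory into lines and hands the mapper one key-value pair per line, where the key is the line's starting byte offset (a LongWritable, which is why declaring it as Object in the example still works) and the value is the line itself as a Text. A sketch of a mapper written against those concrete types (LineMapper is just an illustrative name; it needs an extra import of org.apache.hadoop.io.LongWritable):

    public static class LineMapper extends
            Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // key is the byte offset of this line within the split; value is the line itself
            System.out.println("offset=" + key.get() + " line=" + value);
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, ONE);
            }
        }
    }

How the input gets chopped into splits in the first place, and how a split turns into one map task, is still on my list to figure out.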