Hadoop Data Types
To let key/value pairs move between machines in the cluster, the MapReduce framework provides a mechanism for serializing them.
MapReduce does not, however, accept arbitrary classes as keys: only classes implementing the WritableComparable interface can serve as keys (WritableComparable extends both Writable and java.lang.Comparable; value classes only need to implement Writable). Keys must be comparable because the framework sorts by key before the reduce phase and groups together the values that share the same key.
So if you want a class of your own to serve as a key, it must implement WritableComparable.
Below we write such a class, Edge, representing a flight route between two cities:
package csdn.jtlyuan;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class Edge implements WritableComparable<Edge> {

    private String startNode;
    private String endNode;

    // Accessors, used for example by the custom partitioner sketched later.
    public String getStartNode() { return startNode; }
    public String getEndNode() { return endNode; }

    // Deserialize the fields from the stream, in the order they were written.
    @Override
    public void readFields(DataInput in) throws IOException {
        this.startNode = in.readUTF();
        this.endNode = in.readUTF();
    }

    // Serialize the fields to the stream.
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(this.startNode);
        out.writeUTF(this.endNode);
    }

    // Compare by start node first, then by end node; the framework uses
    // this ordering to sort keys before the reduce phase.
    @Override
    public int compareTo(Edge o) {
        return (this.startNode.compareTo(o.startNode) != 0)
                ? this.startNode.compareTo(o.startNode)
                : this.endNode.compareTo(o.endNode);
    }
}
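One caveat worth adding: if Edge keys are spread across reducers with the default hash partitioning, the class should also override hashCode() (and, by convention, equals()). A minimal sketch of the two methods, assuming they are added to the Edge class above:

    // Combine both fields; the multiplier is an arbitrary odd prime.
    @Override
    public int hashCode() {
        return startNode.hashCode() * 163 + endNode.hashCode();
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Edge)) {
            return false;
        }
        Edge other = (Edge) o;
        return startNode.equals(other.startNode) && endNode.equals(other.endNode);
    }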
The stages of a MapReduce program:
Mapper: maps the input into key/value pairs.
Reducer: receives the output of every mapper; the framework sorts the key/value pairs by key and delivers all values that share the same key to the same reduce call.
Partitioner: redirects mapper output to the reducers. The default strategy, implemented by the HashPartitioner class, chooses the reducer by hashing the key; you can change it by supplying your own Partitioner class (see the sketch after this list).
Combiner: a local reduce, run on the map side to shrink the intermediate output before it is shuffled (the WordCount program below registers its reducer as a combiner).
Shuffle: once the first map task completes, a node may go on to run further map tasks, but it also begins moving the map tasks' intermediate output to the reducers that need it; this movement of map output to the reducers is called the shuffle.
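As a concrete illustration of the Partitioner stage, here is a minimal sketch of a custom partitioner for the Edge key above. The class EdgePartitioner is our own example, not part of any original program; it routes every edge by its start node, so all flights leaving the same city arrive at the same reducer:

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Partitioner;

public class EdgePartitioner extends Partitioner<Edge, Writable> {
    @Override
    public int getPartition(Edge key, Writable value, int numPartitions) {
        // Hash only the start node; mask the sign bit so the index is non-negative.
        return (key.getStartNode().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

It would be registered on the job with job.setPartitionerClass(EdgePartitioner.class).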
Example: the WordCount program:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // Emit (word, 1) for every token in the input line.
        public void map(Object key, Text value, Context context
                        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        // Sum all the counts collected for one word.
        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
                           ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result); // the Context collects the output of the reduce phase
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");      // instantiate a Job from the configuration and give it a name
        job.setJarByClass(WordCount.class);         // set the jar by finding where a given class came from
        job.setMapperClass(TokenizerMapper.class);  // set the Mapper for the job
        job.setCombinerClass(IntSumReducer.class);  // set the combiner class for the job
        job.setReducerClass(IntSumReducer.class);   // set the Reducer for the job
        job.setOutputKeyClass(Text.class);          // set the key class for the job output data
        job.setOutputValueClass(IntWritable.class); // set the value class for job outputs
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));   // set the input path
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1])); // set the output path (the directory must not already exist, or the job refuses to run with an error)
        System.exit(job.waitForCompletion(true) ? 0 : 1); // submit the job to the cluster and wait for it to finish
    }
}
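To try the example, the class is typically packaged into a jar and submitted with the hadoop command; the jar name and HDFS paths below are only placeholders:

hadoop jar wordcount.jar WordCount /user/hadoop/input /user/hadoop/output

Because waitForCompletion is called with true, the job prints its progress while running and the program exits with 0 on success.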
The figure below shows clearly how the words in two input files are counted.