These examples come from Sections 5.4 and 5.5 of 《Hadoop实战》 (Hadoop in Action), and also appear in 《Hadoop集群(第9期)_MapReduce初级案例》.
The first example is a single-table join: given a child-parent table, produce the corresponding grandchild-grandparent table.
In a relational database this is an ordinary join. Doing it with MapReduce is, in my view, less efficient, but it can handle very large data sets.
I made a few small improvements to the original version; a brief description follows.
The map phase splits each input line into two records: a line such as A B becomes A <B and B >A. The former, keyed on A, marks B as A's parent (A is B's child); the latter, keyed on B, marks A as B's child (B is A's parent).
The reduce phase then handles each person in turn: it collects that person's parents into one array and that person's children into another, and takes the Cartesian product of the two arrays to emit the grandchild-grandparent pairs.
Fields within each input line must be separated by a single space.
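For example, with a hypothetical input file (the header line plus two records; the names are made up for illustration only):

child parent
Tom Lucy
Lucy Mary

the mapper emits (Tom, <Lucy), (Lucy, >Tom), (Lucy, <Mary) and (Mary, >Lucy). The reducer call for key Lucy therefore sees one child (Tom) and one parent (Mary), and their Cartesian product yields the single output line Tom Mary under the grandchild grandparent header.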
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class joint
{
    public static int once = 0;    // header flag; only reliable when a single reducer is used

    public static class MyMapper
            extends Mapper<Object, Text, Text, Text>
    {
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException
        {
            // Each input line is "child parent", separated by a single space.
            int i = 0;
            String line = value.toString();
            while (line.charAt(i) != ' ') ++i;
            String[] values = {line.substring(0, i), line.substring(i + 1)};
            // Skip the header line "child parent"; for every data line emit two records:
            // (child, "<parent") marks a parent of the key,
            // (parent, ">child") marks a child of the key.
            if (values[0].compareTo("child") != 0)
            {
                context.write(new Text(values[0]), new Text("<" + values[1]));
                context.write(new Text(values[1]), new Text(">" + values[0]));
            }
        }
    }

    public static class MyReducer
            extends Reducer<Text, Text, Text, Text>
    {
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException
        {
            // Write the table header exactly once (assumes a single reducer).
            if (once == 0)
            {
                context.write(new Text("grandchild"), new Text("grandparent"));
                ++once;
            }
            int grandchildnum = 0;
            int grandparentnum = 0;
            // Assumes at most 10 parents and 10 children per person.
            String[] grandchild = new String[10];
            String[] grandparent = new String[10];
            // Sort each value into either the parent list or the child list of this key.
            Iterator<Text> ite = values.iterator();
            while (ite.hasNext())
            {
                String record = ite.next().toString();
                if (record.charAt(0) == '<')
                {
                    grandparent[grandparentnum++] = record.substring(1);
                }
                else if (record.charAt(0) == '>')
                {
                    grandchild[grandchildnum++] = record.substring(1);
                }
            }
            // Cartesian product: every child of this key is a grandchild of every parent of this key.
            if (grandchildnum != 0 && grandparentnum != 0)
            {
                for (int m = 0; m < grandchildnum; ++m)
                    for (int n = 0; n < grandparentnum; ++n)
                        context.write(new Text(grandchild[m]), new Text(grandparent[n]));
            }
        }
    }

    public static void main(String[] args) throws Exception
    {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "single table join");
        job.setJarByClass(joint.class);
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
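A minimal sketch of how the job might be compiled and run; the jar name joint.jar and the HDFS paths input and output are placeholders, not part of the original post:

javac -classpath $(hadoop classpath) joint.java
jar cf joint.jar joint*.class
hadoop jar joint.jar joint input output
hadoop fs -cat output/part-r-00000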
The second example is much the same, so I won't go into it.