Requirement: merge a raw data file and a category data file into a single file, joined on user id.
Raw data file: user id, detail info
Category data file: user id, category
The two Mappers are OriDataMapper and IdKindDataMapper; both emit Text for key and value.
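The tab-separated record shape that both Mappers assume can be illustrated in plain Java, outside Hadoop (the sample lines below are hypothetical):

```java
public class RecordShapeDemo {
    // Both Mappers split a line on TAB: cols[0] is the user id,
    // cols[1] is the payload (detail info or category).
    public static String[] split(String line) {
        return line.split("\t");
    }

    public static void main(String[] args) {
        // Hypothetical sample lines from the two input files
        String[] ori = split("u001\tname=alice;age=30"); // raw data file
        String[] kind = split("u001\tpremium");          // category data file
        System.out.println(ori[0] + " -> " + ori[1] + " / " + kind[1]);
    }
}
```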
private void job1(Configuration config, Path outputdata, String idkinddata,
        String outkindData) throws Exception {
    Job job1 = Job.getInstance(config);
    job1.setJobName("PostProcessor");
    job1.setJarByClass(Postprocessor.class);
    // A MapReduce job with multiple input paths, one Mapper per input
    MultipleInputs.addInputPath(job1, outputdata, TextInputFormat.class,
            OriDataMapper.class);
    MultipleInputs.addInputPath(job1, new Path(idkinddata),
            TextInputFormat.class, IdKindDataMapper.class);
    // Configure the Reducer
    job1.setReducerClass(DataReducer.class);
    job1.setNumReduceTasks(1);
    job1.setOutputKeyClass(Text.class);
    job1.setOutputValueClass(Text.class);
    // Remove any stale output directory before the job runs
    Path outputPath = new Path(outkindData);
    FileSystem.get(config).delete(outputPath, true);
    FileOutputFormat.setOutputPath(job1, outputPath);
    if (!job1.waitForCompletion(true)) {
        throw new Exception("job1 failed");
    }
}
public class IdKindDataMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Text outputKey = new Text();
    private final Text outputValue = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each input line is "userId<TAB>category"
        String[] cols = value.toString().split("\t");
        if (cols.length < 2) {
            return; // skip malformed lines
        }
        outputKey.set(cols[0]);
        outputValue.set(cols[1]);
        context.write(outputKey, outputValue);
    }
}
public class OriDataMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Text outputKey = new Text();
    private final Text outputValue = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each input line is "userId<TAB>detail"
        String[] cols = value.toString().split("\t");
        if (cols.length < 2) {
            return; // skip malformed lines
        }
        outputKey.set(cols[0]);
        outputValue.set(cols[1]);
        context.write(outputKey, outputValue);
    }
}
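Note that the two Mappers above emit untagged values, so the reducer cannot tell from the value alone whether it is a detail record or a category record, and MapReduce does not guarantee value order within a key. A common variant (not in the original code) prefixes each value with a source tag; a plain-Java sketch of consuming such tagged values, assuming a hypothetical "D:"/"K:" convention:

```java
import java.util.Arrays;
import java.util.List;

public class TaggedJoinDemo {
    // Split tagged reduce-side values back into "detail<TAB>kind".
    // Tags "D:" (detail) and "K:" (kind) are a hypothetical convention.
    public static String merge(List<String> values) {
        String detail = "", kind = "";
        for (String v : values) {
            if (v.startsWith("D:")) detail = v.substring(2);
            else if (v.startsWith("K:")) kind = v.substring(2);
        }
        return detail + "\t" + kind;
    }

    public static void main(String[] args) {
        // Arrival order is arbitrary; the tags make it irrelevant
        System.out.println(merge(Arrays.asList("K:premium", "D:name=alice")));
    }
}
```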
DataReducer consumes the records produced by the two Mappers. For each key, the reducer receives input of the form:
<user id, {detail info, category}>
and processes the grouped values according to the task's requirements.
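The grouping the shuffle phase performs before the reducer runs can be sketched in plain Java (the mapper-output pairs below are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ReduceSideJoinDemo {
    // Simulate the shuffle phase: group (userId, payload) pairs by userId,
    // producing the <userId, {detail, category}> view the reducer receives.
    public static Map<String, List<String>> group(String[][] pairs) {
        Map<String, List<String>> grouped = new TreeMap<>();
        for (String[] p : pairs) {
            grouped.computeIfAbsent(p[0], k -> new ArrayList<>()).add(p[1]);
        }
        return grouped;
    }

    public static void main(String[] args) {
        // Hypothetical mapper output from both input files
        String[][] pairs = {
            {"u001", "name=alice;age=30"},
            {"u002", "name=bob;age=25"},
            {"u001", "premium"},
            {"u002", "basic"},
        };
        for (Map.Entry<String, List<String>> e : group(pairs).entrySet()) {
            System.out.println(e.getKey() + "\t" + String.join(",", e.getValue()));
        }
    }
}
```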
Key code:
// A MapReduce job with multiple input paths
MultipleInputs.addInputPath(job1, outputdata, TextInputFormat.class,
        OriDataMapper.class);
MultipleInputs.addInputPath(job1, new Path(idkinddata),
        TextInputFormat.class, IdKindDataMapper.class);