In MapReduce, specifying a partition sends every record in that partition to the same reduce task. For example, when computing statistics, a batch of similar records can be routed to a single reducer and aggregated there, which is how grouping and counting of like data is implemented.
Put simply, records of the same type are sent to the same place for processing. By default a job has only one reduce task, and therefore only one partition.
The Partitioner class hierarchy in MapReduce:
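For reference, Hadoop's default partitioner is HashPartitioner, which routes each record by the hash of its key; its implementation is essentially the following:

import org.apache.hadoop.mapreduce.Partitioner;

public class HashPartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // mask the sign bit so the partition index is always non-negative
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}

A custom partitioner, like the one built below, replaces this behavior by returning its own partition index.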
Requirement: split the following data into separate outputs.
The sixth field of each record holds the lottery draw result. Using 15 as the cut-off, save results of 15 and above to one file and results below 15 to another.
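For illustration only (the actual partition.csv dataset is not reproduced here, and these values are hypothetical), each record is a tab-separated line whose sixth column is the draw result:

1	0001	2024-05-17	A	B	17
2	0002	2024-05-17	C	D	9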
Note: this partitioning example can only be run by packaging it into a jar and submitting it to the cluster; it no longer runs correctly in local mode (although with the Hadoop 2.7 pom dependencies below, a local run does work).
The pom file:
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>2.7.4</version>
    </dependency>
</dependencies>
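One practical note: all four Hadoop artifacts should stay on the same version (2.7.4 here); mixing versions across hadoop-common, hadoop-hdfs, hadoop-client, and hadoop-mapreduce-client-core is a common source of classpath conflicts.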
Step 1: define the mapper
The mapper applies no logic and does not modify the key or value; it simply receives each record and passes it downstream. The whole line is emitted as the output key (with NullWritable as a placeholder value) so that the partitioner can inspect it later.
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class MyMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // pass the line through unchanged, using it as the output key
        context.write(value, NullWritable.get());
    }
}
Step 2: define the reducer logic
The reducer does no processing either; it writes each incoming key out unchanged.
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class MyReducer extends Reducer<Text, NullWritable, Text, NullWritable> {
    @Override
    protected void reduce(Text key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
        // emit the line (the key) exactly as received
        context.write(key, NullWritable.get());
    }
}
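One subtlety: because whole lines are used as keys, identical input lines are grouped together in the shuffle, and this reducer writes each distinct line only once, effectively deduplicating. If duplicates had to be preserved, the reduce method could instead write the key once per grouped value, as a sketch:

protected void reduce(Text key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
    // write the line once for each occurrence, preserving duplicates
    for (NullWritable ignored : values) {
        context.write(key, NullWritable.get());
    }
}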
Step 3: define a custom partitioner
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class MyPartitioner extends Partitioner<Text, NullWritable> {
    @Override
    public int getPartition(Text text, NullWritable nullWritable, int numReduceTasks) {
        // the sixth tab-separated field holds the draw result
        int result = Integer.parseInt(text.toString().split("\t")[5]);
        if (result < 15) {
            return 0;
        } else {
            return 1;
        }
    }
}
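The index returned by getPartition must fall in the range [0, numReduceTasks). With two reduce tasks, records for which it returns 0 land in the output file part-r-00000 and records for which it returns 1 land in part-r-00001, which is exactly what splits the results into two files.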
Step 4: the program's main entry point
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class PartitionMain extends Configured implements Tool {
    public static void main(String[] args) throws Exception {
        int run = ToolRunner.run(new PartitionMain(), args);
        System.exit(run);
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(super.getConf(), PartitionMain.class.getSimpleName());
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);
        // required for a cluster run, so the job jar is shipped with the job
        job.setJarByClass(PartitionMain.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(NullWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        TextInputFormat.addInputPath(job, new Path("M:\\第二学年资料\\韩老师课上视频资料\\第二阶段(hdfs_MapReduce)\\day22\\4\\自定义分区\\input\\partition.csv"));
        TextOutputFormat.setOutputPath(job, new Path("E:/output/partition"));
        // plug in the custom partitioner and match the reducer count to the number of partitions
        job.setPartitionerClass(MyPartitioner.class);
        job.setNumReduceTasks(2);
        return job.waitForCompletion(true) ? 0 : 1;
    }
}
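After packaging the project (for example with mvn package), the job would be submitted to the cluster along these lines; the jar name here is hypothetical, and the hard-coded Windows paths above would need to be replaced with HDFS paths for a genuine cluster run:

hadoop jar partition-example.jar PartitionMain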
The result of running on the cluster is: