Custom InputFormat

Computing the averages of the odd-numbered and even-numbered lines in a file with a custom InputFormat

0. Sample data

22
18
34
46
19
24
56
55
33
41
49

1. Approach

Turn the default <k1,v1> of (byte offset, line text) into a <k1,v1> of (line number, line text).
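
For the sample data above, and assuming single-byte \n line endings (so each two-digit line occupies 3 bytes), the first few records change roughly like this:

default TextInputFormat:    <0, "22">  <3, "18">  <6, "34">  ...
custom LineNumInputFormat:  <1, "22">  <2, "18">  <3, "34">  ...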

1.1 Replace the default TextInputFormat by subclassing FileInputFormat
1.1.1 Build LineNumInputFormat: create the line-number record reader and mark the file as unsplittable
1.1.1.1 createRecordReader() [create the line-number record reader]

return new LineNumRecordReader()

1.1.1.2 isSplitable() [decide whether the file may be split]

return false

1.1.1.3 Implementation
package num;

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class LineNumInputFormat extends FileInputFormat<LongWritable, Text> {
	// Hand out the custom reader that keys each record by its line number
	@Override
	public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context)
			throws IOException, InterruptedException {
		return new LineNumRecordReader();
	}

	// Keep each file in a single split so the line numbering stays global
	@Override
	protected boolean isSplitable(JobContext context, Path filename) {
		return false;
	}
}
  1. Extend FileInputFormat with a LineNumInputFormat class, overriding the parent's createRecordReader() and isSplitable() methods
  2. createRecordReader() returns an instance of the custom LineNumRecordReader class
  3. isSplitable() returns false, so each file is read as a single split; if the file were split, every split's reader would restart its line count at 1, and the line numbers (and hence their odd/even parity) would be wrong
1.2 Rewrite LineRecordReader [extend RecordReader and implement all of its methods, using LineRecordReader as a reference]
  • LineNumRecordReader: turns each KV pair into (line number, line text)
    • initialize()
      • assign start/end/pos/in
    • nextKeyValue()
      • read one line, assigning the line number to key and the line text to value
    • getCurrentKey()
    • getCurrentValue()
    • getProgress()
    • close()
package num;

import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.util.LineReader;

public class LineNumRecordReader extends RecordReader<LongWritable, Text> {
	private long start;              // byte offset where the split begins
	private long pos;                // current 1-based line number
	private long end;                // byte offset where the split ends
	private LineReader in;
	private FSDataInputStream fileIn;
	private LongWritable key;        // the line number
	private Text value;              // the line text
	
	@Override
	public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {
		FileSplit _split = (FileSplit) split;
		Path file = _split.getPath();
		FileSystem fs = file.getFileSystem(context.getConfiguration());
		// Assign the split boundaries before seeking (the file is unsplittable,
		// so start is always 0, but seeking with an unassigned start is fragile)
		start = _split.getStart();
		end = start + _split.getLength();
		fileIn = fs.open(file);
		fileIn.seek(start);
		in = new LineReader(fileIn);
		pos = 1;
	}
	
	@Override
	public boolean nextKeyValue() throws IOException, InterruptedException {
		if (key == null) {
			key = new LongWritable();
		}
		if (value == null) {
			value = new Text();
		}
		// readLine() returns the number of bytes consumed; 0 means end of input
		if (in.readLine(value) == 0) {
			return false;
		}
		key.set(pos); // the key is the 1-based line number
		pos++;
		return true;
	}
	
	@Override
	public LongWritable getCurrentKey() throws IOException, InterruptedException {
		return key;
	}
	
	@Override
	public Text getCurrentValue() throws IOException, InterruptedException {
		return value;
	}
	
	@Override
	public float getProgress() throws IOException, InterruptedException {
		// Progress reporting is omitted in this example; a fuller implementation
		// would report bytes consumed against the split length
		return 0;
	}
	
	@Override
	public void close() throws IOException {
		in.close();
	}
}
  1. Model the class on LineRecordReader (consult that class's implementation): extend RecordReader and implement LineNumRecordReader
  2. A subclass of RecordReader must implement all six of its methods (initialize(), nextKeyValue(), getCurrentKey(), getCurrentValue(), getProgress(), close())
  3. Incrementing pos on every successful read is what advances the line number; a quick way to verify this is to drive the reader by hand, as sketched below
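
As a sanity check, the reader can be exercised outside a full job. A minimal sketch, assuming Hadoop 2.x (where TaskAttemptContextImpl and the no-argument TaskAttemptID constructor are publicly available) and the same file:///D:/age input used by the driver below:

package num;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.TaskAttemptID;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl;

public class LineNumRecordReaderTest {
	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		Path file = new Path("file:///D:/age");
		FileSystem fs = file.getFileSystem(conf);
		// One split covering the whole file, matching isSplitable() == false
		FileSplit split = new FileSplit(file, 0, fs.getFileStatus(file).getLen(), null);
		LineNumRecordReader reader = new LineNumRecordReader();
		reader.initialize(split, new TaskAttemptContextImpl(conf, new TaskAttemptID()));
		while (reader.nextKeyValue()) {
			// Expected: 1 -> 22, 2 -> 18, 3 -> 34, ...
			System.out.println(reader.getCurrentKey() + " -> " + reader.getCurrentValue());
		}
		reader.close();
	}
}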
1.3 Mapper class
package num;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class NumMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
	Text _ji = new Text("奇数");   // "odd"
	Text _ou = new Text("偶数");   // "even"
	IntWritable _age = new IntWritable();
	@Override
	protected void map(LongWritable key, Text value,
			Mapper<LongWritable, Text, Text, IntWritable>.Context context)
			throws IOException, InterruptedException {
		_age.set(Integer.valueOf(value.toString()));
		// The key is the 1-based line number supplied by LineNumRecordReader
		if (key.get() % 2 == 1) {
			context.write(_ji, _age);
		} else {
			context.write(_ou, _age);
		}
	}
}
  1. map() checks the parity of the incoming line number: values from odd-numbered lines are emitted under the key "奇数" (odd), values from even-numbered lines under "偶数" (even); the first few emissions are sketched below
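
For the sample data, the map output (before the shuffle groups it by key) begins:

(奇数, 22)  (偶数, 18)  (奇数, 34)  (偶数, 46)  (奇数, 19)  (偶数, 24)  ...
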
1.4 Reducer class
package num;

import java.io.IOException;

import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class NumReducer extends Reducer<Text, IntWritable, Text, FloatWritable> {
	FloatWritable _avg = new FloatWritable(0);
	@Override
	protected void reduce(Text key, Iterable<IntWritable> values,
			Reducer<Text, IntWritable, Text, FloatWritable>.Context context) throws IOException, InterruptedException {
		int sum = 0;
		int n = 0;
		// Sum the group's values and count them to compute the mean
		for (IntWritable value : values) {
			sum += value.get();
			n++;
		}
		_avg.set((float) sum / n);
		context.write(key, _avg);
	}
}
  1. reduce() is invoked once per key: for the odd-line group it iterates over the values, summing them and counting them, then takes the average
  2. The even-line group is summed, counted, and averaged the same way
  3. Each key's average is written to the output; the arithmetic for the sample data is worked out below
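
Concretely, the shuffle hands the reducer two groups for the sample data, and the averages work out as:

奇数 (odd lines 1,3,5,7,9,11): 22+34+19+56+33+49 = 213, and 213/6 = 35.5
偶数 (even lines 2,4,6,8,10):  18+46+24+55+41 = 184, and 184/5 = 36.8
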
1.5 Driver class
package num;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class NumDriver {
	public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
		Configuration conf = new Configuration();
		conf.set("mapreduce.framework.name", "local");
		Path outPut = new Path("file:///D:/out");
		FileSystem fs = outPut.getFileSystem(conf);
		if(fs.exists(outPut)) {
			fs.delete(outPut, true);
		}
		Job job = Job.getInstance(conf);
		job.setJobName("age");
		job.setJarByClass(NumDriver.class);
		job.setMapperClass(NumMapper.class);
		job.setReducerClass(NumReducer.class);
		// Map output types differ from the final output types, so declare both pairs
		job.setMapOutputKeyClass(Text.class);
		job.setMapOutputValueClass(IntWritable.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(FloatWritable.class);
		// Swap in the custom input format so map keys are line numbers, not byte offsets
		job.setInputFormatClass(LineNumInputFormat.class);
		FileInputFormat.addInputPath(job, new Path("file:///D:/age"));
		FileOutputFormat.setOutputPath(job, outPut);
		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}
}
  1. The Mapper emits <Text,IntWritable> and the Reducer consumes <Text,IntWritable> and emits <Text,FloatWritable>; because the map output types differ from the job's final output types, they must be declared explicitly with setMapOutputKeyClass()/setMapOutputValueClass()
  2. setInputFormatClass() points the job at the custom LineNumInputFormat class

1.6 Result

偶数 36.8
奇数 35.5
