MapReduce: The Small Files Problem
In a MapReduce/Hadoop environment, a small file is one whose size is far below the HDFS block size (64 MB by default), so files of 2 MB or 4 MB both count as small files. Hadoop generally handles large files well, but when the inputs are small, each small file is handed to its own map() task, and the large number of mappers this spawns makes the job inefficient. To solve the problem, the small files need to be combined into larger units before they are processed; in other words, the solution comes down to merging small files into bigger inputs so that Hadoop runs faster. Here the problem is solved with CombineFileInputFormat<K,V>; other approaches will be covered another day.
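As a rough illustration of the "merge first" idea mentioned above, the following is a minimal sketch that concatenates a directory of small HDFS files into one large file and lets the job read that file instead. The paths smallFilesDir and merged.txt are hypothetical placeholders, not names used elsewhere in this post.
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class MergeSmallFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path inputDir = new Path("smallFilesDir");  // hypothetical directory of small files
        Path mergedFile = new Path("merged.txt");   // hypothetical merged target file
        try (OutputStream out = fs.create(mergedFile)) {
            for (FileStatus status : fs.listStatus(inputDir)) {
                if (status.isFile()) {
                    try (InputStream in = fs.open(status.getPath())) {
                        // append the contents of this small file to the merged file
                        IOUtils.copyBytes(in, out, 4096, false);
                    }
                }
            }
        }
    }
}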
To use the abstract class CombineFileInputFormat, three custom classes have to be provided/implemented:
- CustomCFIF extends CombineFileInputFormat
- PairOfStringLong is a Writable class that stores a small file's name and its offset; its compareTo() method is overridden to compare file names first and offsets second.
- CustomRecordReader is a custom RecordReader
The CustomCFIF class
MAX_SPLIT_SIZE is chosen from the HDFS block size so that a combined split never exceeds one block.
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

public class CustomCFIF extends CombineFileInputFormat<PairOfStringLong, Text> {

    final static long MAX_SPLIT_SIZE_64MB = 67108864; // 64 MB = 64*1024*1024

    public CustomCFIF() {
        super();
        setMaxSplitSize(MAX_SPLIT_SIZE_64MB); // cap the size of a combined split
    }

    // Plug in the custom record reader, CustomRecordReader, which reads the
    // small files packed into one large split
    @Override
    public RecordReader<PairOfStringLong, Text> createRecordReader(InputSplit split,
            TaskAttemptContext context) throws IOException {
        return new CombineFileRecordReader<PairOfStringLong, Text>((CombineFileSplit) split,
                context, CustomRecordReader.class);
    }

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // the combined files should not be split any further
        return false;
    }
}
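If you would rather not hard-code the 64 MB constant, CombineFileInputFormat can also pick up its maximum split size from the job configuration when setMaxSplitSize() is not called; as far as I know the relevant Hadoop 2.x property is mapreduce.input.fileinputformat.split.maxsize (treat the property name as an assumption and verify it against your Hadoop version), e.g. in the driver:
// assumption: Hadoop 2.x property name; sets the same 64 MB cap from the driver
conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 64L * 1024 * 1024);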
The CustomRecordReader class
Reads the small files that have been packed into a large combined split.
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;
import org.apache.hadoop.util.LineReader;

public class CustomRecordReader extends RecordReader<PairOfStringLong, Text> {

    // (K, V) = (file name + offset, line of text)
    private PairOfStringLong key;
    private Text value;

    // position and offsets within the combined split
    private long startOffset;
    private long endOffset;
    private long pos;

    private FileSystem fs;
    private Path path;
    private FSDataInputStream fileIn;
    private LineReader reader;

    // CombineFileRecordReader instantiates one CustomRecordReader per small file
    // through this constructor, passing the file's index within the combined split
    public CustomRecordReader(CombineFileSplit split, TaskAttemptContext context, Integer index)
            throws IOException {
        path = split.getPath(index);
        fs = path.getFileSystem(context.getConfiguration());
        startOffset = split.getOffset(index);
        endOffset = startOffset + split.getLength(index);
        fileIn = fs.open(path);
        reader = new LineReader(fileIn);
        pos = startOffset;
    }

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        // not used; all setup happens in the custom constructor above
    }

    @Override
    public void close() throws IOException {
        if (fileIn != null) {
            fileIn.close();
        }
    }

    @Override
    public float getProgress() throws IOException {
        if (startOffset == endOffset) {
            return 0;
        }
        return Math.min(1.0f, (pos - startOffset) / (float) (endOffset - startOffset));
    }

    @Override
    public PairOfStringLong getCurrentKey() throws IOException, InterruptedException {
        return key;
    }

    @Override
    public Text getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        if (key == null) {
            // key = (file name, byte offset of the current record)
            key = new PairOfStringLong(path.getName(), pos);
        } else {
            key.setRightElement(pos);
        }
        if (value == null) {
            value = new Text();
        }
        int newSize = 0;
        if (pos < endOffset) {
            newSize = reader.readLine(value);
            pos += newSize;
        }
        if (newSize == 0) {
            key = null;
            value = null;
            return false;
        } else {
            return true;
        }
    }
}
The PairOfStringLong class
Defines the key of the key-value pairs fed to the mapper: a String for the file name and a long for the offset within the file.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;

public class PairOfStringLong implements Writable, WritableComparable<PairOfStringLong> {

    private String leftElement;  // file name
    private long rightElement;   // offset within the file

    public PairOfStringLong() {
    }

    public PairOfStringLong(String left, long right) {
        set(left, right);
    }

    private void set(String left, long right) {
        this.leftElement = left;
        this.rightElement = right;
    }

    public String getLeftElement() {
        return leftElement;
    }

    public void setLeftElement(String leftElement) {
        this.leftElement = leftElement;
    }

    public long getRightElement() {
        return rightElement;
    }

    public void setRightElement(long rightElement) {
        this.rightElement = rightElement;
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeUTF(leftElement);
        dataOutput.writeLong(rightElement);
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        leftElement = dataInput.readUTF();
        rightElement = dataInput.readLong();
    }

    @Override
    public int compareTo(PairOfStringLong o) {
        // compare file names first, then offsets
        int cmp = leftElement.compareTo(o.getLeftElement());
        if (cmp != 0) {
            return cmp;
        }
        return Long.compare(rightElement, o.getRightElement());
    }
}
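As a quick sanity check of the ordering described earlier (file names compared first, offsets second), the following hedged snippet uses only the constructor and compareTo() defined above:
public class PairOfStringLongDemo {
    public static void main(String[] args) {
        PairOfStringLong a = new PairOfStringLong("a.txt", 0L);
        PairOfStringLong b = new PairOfStringLong("a.txt", 128L);
        PairOfStringLong c = new PairOfStringLong("b.txt", 0L);
        System.out.println(a.compareTo(b)); // negative: same file, smaller offset first
        System.out.println(a.compareTo(c)); // negative: "a.txt" sorts before "b.txt"
    }
}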
Mapper and reducer code
This stage uses word count as the demonstration; the code is as follows.
Mapper code
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<PairOfStringLong, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(PairOfStringLong key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}
Reducer code
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
Driver code
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Driver {
    public static void main(String[] args) throws Exception {
        String[] otherArgs = new String[]{"input/neg/cv000_29416.txt", "output2"};
        if (otherArgs.length != 2) {
            System.err.println("Usage: Driver <input path> <output path>");
            System.exit(2);
        }
        // delete the previous output directory so the job can be rerun
        FileUtil.fullyDelete(new File(otherArgs[1]));

        long beginTime = System.currentTimeMillis();
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "combinerFile");
        job.setInputFormatClass(CustomCFIF.class); // use the combining input format
        job.setJarByClass(Driver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        boolean success = job.waitForCompletion(true);
        long elapsedTime = System.currentTimeMillis() - beginTime;
        System.out.println("elapsed time (millis): " + elapsedTime);
        System.exit(success ? 0 : 1);
    }
}
Execution results
Probably because the amount of data was small, the improvement is not particularly noticeable.
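A more direct way to see the effect than wall-clock time is to count how many combined splits CustomCFIF builds for the input. The fragment below is a sketch meant to be placed in Driver.main after FileInputFormat.addInputPath(); with many small files the split count should be far smaller than the number of input files.
// sketch: print the number of combined splits (place after addInputPath in Driver.main)
java.util.List<org.apache.hadoop.mapreduce.InputSplit> splits = new CustomCFIF().getSplits(job);
System.out.println("number of combined splits: " + splits.size());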