A custom Hadoop MapReduce InputFormat for splitting input files

In the previous post we implemented secondary sorting by cookieId and time. Now a new requirement comes up: what if the analysis has to run both per cookieId and per cookieId & time combination? The cleanest approach is a custom InputFormat that lets MapReduce read all records of one cookieId at a time, and then split that block into sessions by time inside the job. The logic in pseudocode:

for OneSplit in MyInputFormat.getSplit() // OneSplit holds all records of one cookieId

    for session in OneSplit // a session is a time-based sub-division of OneSplit

        for line in session // a line is one record inside the session, i.e. one raw log line
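
To make the inner loops concrete, here is a minimal sketch of a mapper that performs the session split. It assumes the value handed over by the custom InputFormat is one cookieId block of tab-separated "cookieId, time, url" lines with a numeric (epoch-second) time field, and it uses a made-up 30-minute gap rule; the field layout, the threshold and the emitSession helper are illustrative assumptions, not part of the original code:

package MyInputFormat;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch only: splits one cookieId block (the whole map value) into sessions
// using a hypothetical 30-minute gap rule. Assumes tab-separated
// "cookieId \t time \t url" lines with an epoch-second time field; the real
// log layout and threshold may differ.
public class SessionSplitMapper extends Mapper<LongWritable, Text, Text, Text> {

    private static final long SESSION_GAP = 30 * 60; // assumed gap, in seconds

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        List<String> session = new ArrayList<String>();
        long lastTime = -1;
        for (String line : value.toString().split("\n")) { // every line of this cookieId
            String[] fields = line.split("\t");
            long t = Long.parseLong(fields[1]);            // assumed numeric timestamp
            if (lastTime >= 0 && t - lastTime > SESSION_GAP) {
                emitSession(session, context);             // close the previous session
                session.clear();
            }
            session.add(line);
            lastTime = t;
        }
        if (!session.isEmpty()) {
            emitSession(session, context);
        }
    }

    // Emit one session as cookieId -> all of its lines joined together.
    private void emitSession(List<String> session, Context context)
            throws IOException, InterruptedException {
        String cookieId = session.get(0).split("\t")[0];
        StringBuilder sb = new StringBuilder();
        for (String line : session) {
            sb.append(line).append('\n');
        }
        context.write(new Text(cookieId), new Text(sb.toString()));
    }
}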

1. How it works:

InputFormat is one of the most frequently used concepts in MapReduce. What exactly does it do while a job runs?
In the old mapred API, InputFormat is an interface with two methods:
public interface InputFormat<K, V> {
  InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
  RecordReader<K, V> getRecordReader(InputSplit split,
                                     JobConf job,
                                     Reporter reporter) throws IOException;
}
These two methods do the following work:

    getSplits cuts the input data into splits; the number of splits determines the number of map tasks, and a split defaults to the HDFS block size (64 MB on older Hadoop releases).
    getRecordReader parses each split into records and turns each record into a <K,V> pair.

In other words, InputFormat performs the chain:

    InputFile --> splits --> <K,V>

Which InputFormats does the framework already provide? Common ones include TextInputFormat, KeyValueTextInputFormat, SequenceFileInputFormat and NLineInputFormat.

TextInputFormat is the most frequently used: its <K,V> pair is <byte offset of the line, contents of the line>. For example, a file containing the two lines "hello" and "world" yields <0, hello> and <6, world>.

However, these built-in ways of turning an InputFile into <K,V> pairs sometimes do not match our needs. In that case we write our own InputFormat, so that the Hadoop framework parses the InputFile into <K,V> pairs exactly the way we want.
Before writing a custom InputFormat, it helps to understand the following abstract classes and interfaces and how they relate to each other:

InputFormat (interface), FileInputFormat (abstract class), TextInputFormat (class), RecordReader (interface) and LineRecordReader (class):

       FileInputFormat implements InputFormat
       TextInputFormat extends FileInputFormat
       TextInputFormat.getRecordReader returns a LineRecordReader
       LineRecordReader implements RecordReader

The InputFormat interface was described in detail above.
Now look at FileInputFormat: it implements the getSplits method of InputFormat and leaves getRecordReader and isSplitable to concrete classes such as TextInputFormat. isSplitable usually needs no change, so a custom InputFormat only has to implement getRecordReader. The heart of that method is a LineRecordReader-like class: LineRecordReader is what actually "parses each split into records and turns each record into a <K,V> pair", and it implements the RecordReader interface:

public interface RecordReader<K, V> {
  boolean next(K key, V value) throws IOException;
  K createKey();
  V createValue();
  long getPos() throws IOException;
  void close() throws IOException;
  float getProgress() throws IOException;
}

So the core of a custom InputFormat is a custom RecordReader modeled on LineRecordReader, and the core of that class is overriding the RecordReader methods. One caveat: the code below targets the newer org.apache.hadoop.mapreduce API, in which RecordReader is an abstract class whose key methods are initialize, nextKeyValue, getCurrentKey, getCurrentValue, getProgress and close, rather than the old-API interface listed above.


2. Code:

Three pieces are needed: the InputFormat itself, the RecordReader it creates, and a small driver job for testing.

package MyInputFormat;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class TrackInputFormat extends FileInputFormat<LongWritable, Text> {

    @SuppressWarnings("deprecation")
    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new TrackRecordReader();
    }

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // Only uncompressed files are splittable.
        CompressionCodec codec = new CompressionCodecFactory(
                context.getConfiguration()).getCodec(file);
        return codec == null;
    }

}
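
Returning false from isSplitable whenever a compression codec matches the file keeps compressed input (a gzip file, for example) in a single split, because such streams cannot be decompressed starting from an arbitrary offset; uncompressed files still go through FileInputFormat's normal split calculation.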

package MyInputFormat;

import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

/**
 * Treats keys as byte offsets in the file and values as whole records,
 * where one record is everything up to the delimiter "END\n".
 * Adapted from org.apache.hadoop.mapreduce.lib.input.LineRecordReader.
 */
public class TrackRecordReader extends RecordReader<LongWritable, Text> {
    private static final Log LOG = LogFactory.getLog(TrackRecordReader.class);

    private CompressionCodecFactory compressionCodecs = null;
    private long start;
    private long pos;
    private long end;
    private NewLineReader in;
    private int maxLineLength;
    private LongWritable key = null;
    private Text value = null;
    // Record separator: one record ends with this byte sequence.
    private byte[] separator = "END\n".getBytes();

    public void initialize(InputSplit genericSplit, TaskAttemptContext context)
            throws IOException {
        FileSplit split = (FileSplit) genericSplit;
        Configuration job = context.getConfiguration();
        this.maxLineLength = job.getInt("mapred.linerecordreader.maxlength",
                Integer.MAX_VALUE);
        start = split.getStart();
        end = start + split.getLength();
        final Path file = split.getPath();
        compressionCodecs = new CompressionCodecFactory(job);
        final CompressionCodec codec = compressionCodecs.getCodec(file);

        FileSystem fs = file.getFileSystem(job);
        FSDataInputStream fileIn = fs.open(split.getPath());
        boolean skipFirstLine = false;
        if (codec != null) {
            in = new NewLineReader(codec.createInputStream(fileIn), job);
            end = Long.MAX_VALUE;
        } else {
            if (start != 0) {
                skipFirstLine = true;
                this.start -= separator.length;
                fileIn.seek(start);
            }
            in = new NewLineReader(fileIn, job);
        }
        if (skipFirstLine) { // skip the first record and re-establish "start".
            start += in.readLine(new Text(), 0,
                    (int) Math.min((long) Integer.MAX_VALUE, end - start));
        }
        this.pos = start;
    }

    public boolean nextKeyValue() throws IOException {
        if (key == null) {
            key = new LongWritable();
        }
        key.set(pos);
        if (value == null) {
            value = new Text();
        }
        int newSize = 0;
        while (pos < end) {
            newSize = in.readLine(value, maxLineLength,
                    Math.max((int) Math.min(Integer.MAX_VALUE, end - pos),
                            maxLineLength));
            if (newSize == 0) {
                break;
            }
            pos += newSize;
            if (newSize < maxLineLength) {
                break;
            }

            LOG.info("Skipped line of size " + newSize + " at pos "
                    + (pos - newSize));
        }
        if (newSize == 0) {
            key = null;
            value = null;
            return false;
        } else {
            return true;
        }
    }

    @Override
    public LongWritable getCurrentKey() {
        return key;
    }

    @Override
    public Text getCurrentValue() {
        return value;
    }

    /**
     * Get the progress within the split.
     */
    public float getProgress() {
        if (start == end) {
            return 0.0f;
        } else {
            return Math.min(1.0f, (pos - start) / (float) (end - start));
        }
    }

    public synchronized void close() throws IOException {
        if (in != null) {
            in.close();
        }
    }

    public class NewLineReader {
        private static final int DEFAULT_BUFFER_SIZE = 64 * 1024;
        private int bufferSize = DEFAULT_BUFFER_SIZE;
        private InputStream in;
        private byte[] buffer;
        private int bufferLength = 0;
        private int bufferPosn = 0;

        public NewLineReader(InputStream in) {
            this(in, DEFAULT_BUFFER_SIZE);
        }

        public NewLineReader(InputStream in, int bufferSize) {
            this.in = in;
            this.bufferSize = bufferSize;
            this.buffer = new byte[this.bufferSize];
        }

        public NewLineReader(InputStream in, Configuration conf)
                throws IOException {
            this(in, conf.getInt("io.file.buffer.size", DEFAULT_BUFFER_SIZE));
        }

        public void close() throws IOException {
            in.close();
        }

        public int readLine(Text str, int maxLineLength, int maxBytesToConsume)
                throws IOException {
            str.clear();
            Text record = new Text();
            int txtLength = 0;
            long bytesConsumed = 0L;
            boolean newline = false;
            int sepPosn = 0;
            do {
                // Reached the end of the buffer: refill it.
                if (this.bufferPosn >= this.bufferLength) {
                    bufferPosn = 0;
                    bufferLength = in.read(buffer);
                    // End of the stream: stop reading.
                    if (bufferLength <= 0) {
                        break;
                    }
                }
                int startPosn = this.bufferPosn;
                for (; bufferPosn < bufferLength; bufferPosn++) {
                    // Handle a separator that was cut in half at the end of the
                    // previous buffer (this can misbehave if the separator has
                    // many repeated characters).
                    if (sepPosn > 0 && buffer[bufferPosn] != separator[sepPosn]) {
                        sepPosn = 0;
                    }
                    // Found the first character of the record separator.
                    if (buffer[bufferPosn] == separator[sepPosn]) {
                        bufferPosn++;
                        int i = 0;
                        // Check whether the following bytes are also part of the separator.
                        for (++sepPosn; sepPosn < separator.length; i++, sepPosn++) {
                            // The buffer ends exactly inside the separator:
                            // the separator has been cut in half.
                            if (bufferPosn + i >= bufferLength) {
                                bufferPosn += i - 1;
                                break;
                            }
                            // Any mismatching byte means this was not the separator.
                            if (this.buffer[this.bufferPosn + i] != separator[sepPosn]) {
                                sepPosn = 0;
                                break;
                            }
                        }
                        // A complete record separator was found.
                        if (sepPosn == separator.length) {
                            bufferPosn += i;
                            newline = true;
                            sepPosn = 0;
                            break;
                        }
                    }
                }
                int readLength = this.bufferPosn - startPosn;
                bytesConsumed += readLength;
                // Truncate to maxLineLength; the separator itself is not kept in the value.
                if (readLength > maxLineLength - txtLength) {
                    readLength = maxLineLength - txtLength;
                }
                if (readLength > 0) {
                    record.append(this.buffer, startPosn, readLength);
                    txtLength += readLength;
                    // Strip the trailing separator from the record.
                    if (newline) {
                        str.set(record.getBytes(), 0, record.getLength()
                                - separator.length);
                    }
                }
            } while (!newline && (bytesConsumed < maxBytesToConsume));
            if (bytesConsumed > (long) Integer.MAX_VALUE) {
                throw new IOException("Too many bytes before newline: "
                        + bytesConsumed);
            }

            return (int) bytesConsumed;
        }

        public int readLine(Text str, int maxLineLength) throws IOException {
            return readLine(str, maxLineLength, Integer.MAX_VALUE);
        }

        public int readLine(Text str) throws IOException {
            return readLine(str, Integer.MAX_VALUE, Integer.MAX_VALUE);
        }
    }
}
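
One possible refinement, not present in the original code: instead of hard-coding "END\n", the separator could be read from the job configuration at the start of initialize(). A sketch, where the property name track.record.separator is a made-up example:

    // Sketch only: read the record separator from the configuration
    // ("track.record.separator" is a hypothetical property name).
    Configuration job = context.getConfiguration();
    this.separator = job.get("track.record.separator", "END\n").getBytes();

The driver would then call conf.set("track.record.separator", "END\n") (or any other delimiter) before submitting the job.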

package MyInputFormat;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class TestMyInputFormat {

    public static class MapperClass extends Mapper<LongWritable, Text, Text, Text> {

        public void map(LongWritable key, Text value, Context context) throws IOException,
                InterruptedException {
            System.out.println("key:\t " + key);
            System.out.println("value:\t " + value);
            System.out.println("-------------------------");
        }
    }

    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = new Configuration();
        Path outPath = new Path("/hive/11");
        FileSystem.get(conf).delete(outPath, true); // clear the output directory before each run
        Job job = new Job(conf, "TestMyInputFormat");
        job.setInputFormatClass(TrackInputFormat.class);
        job.setJarByClass(TestMyInputFormat.class);
        job.setMapperClass(TestMyInputFormat.MapperClass.class);
        job.setNumReduceTasks(0);                   // map-only job
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, outPath);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
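
To run the test, package the three classes into a jar and pass the input directory as the only command-line argument. Note that the output path /hive/11 is hard-coded and deleted at the start of every run, and since the job is map-only the System.out lines appear in the task's stdout log (or directly on the console when running in local mode) rather than in the output files.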

3. Test data:

Each cookieId block ends with the marker END plus a newline, which is exactly the "END\n" separator TrackRecordReader looks for; the cookieOverFlag column only appears on that closing line.

  cookieId    time    url          cookieOverFlag

  1           a       1_hao123
  1           a       1_baidu
  1           b       1_google     2END
  2           c       2_google
  2           c       2_hao123
  2           c       2_google     1END
  3           a       3_baidu
  3           a       3_sougou
  3           b       3_soso       2END

4. Results:

key:     0
value:   1  a   1_hao123
1   a    1_baidu
1   b    1_google   2
-------------------------
key:     47
value:   2  c    2_google
2   c    2_hao123
2   c    2_google   1
-------------------------
key:     96
value:   3  a    3_baidu
3   a    3_sougou
3   b    3_soso 2
-------------------------
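
The key of each pair is the byte offset at which that cookieId block starts in the input file (0, 47 and 96 here), and the value is the whole multi-line block with the trailing "END\n" separator already stripped off by TrackRecordReader.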

REF:

Custom InputFormat for splitting Hadoop map/reduce input files
http://hi.baidu.com/lzpsky/item/0d9d84c05afb43ba0c0a7b27

Advanced MapReduce programming: a custom InputFormat
http://datamining.xmu.edu.cn/bbs/home.php?mod=space&uid=91&do=blog&id=190

http://irwenqiang.iteye.com/blog/1448164
