Hive SQL compilation process (very well written): https://tech.meituan.com/2014/02/12/hive-sql-to-mapreduce.html
1. Data types
Reference: http://www.cnblogs.com/wuyudong/p/hadoop-writable.htm
| Java primitive type | Writable class |
| --- | --- |
| long | LongWritable |
| boolean | BooleanWritable |
| byte | ByteWritable |
| int | IntWritable |
| float | FloatWritable |
| double | DoubleWritable |
- Text: UTF-8 text type
- NullWritable: used when the key or value in a <key, value> pair is empty
- Custom Hadoop data types (see the sketch after this list)
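A custom Hadoop type is written by implementing Writable, or WritableComparable if it will also be used as a key and therefore has to be sortable. A minimal sketch, with a made-up pair-of-ints type (the class and field names are illustrative only):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// Hypothetical custom type: a pair of ints usable as a MapReduce key.
public class IntPairWritable implements WritableComparable<IntPairWritable> {
    private int first;
    private int second;

    public IntPairWritable() {}                    // no-arg constructor required for deserialization

    public IntPairWritable(int first, int second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public void write(DataOutput out) throws IOException {      // serialize fields in a fixed order
        out.writeInt(first);
        out.writeInt(second);
    }

    @Override
    public void readFields(DataInput in) throws IOException {   // deserialize in the same order
        first = in.readInt();
        second = in.readInt();
    }

    @Override
    public int compareTo(IntPairWritable o) {                   // lets the framework sort keys
        int cmp = Integer.compare(first, o.first);
        return cmp != 0 ? cmp : Integer.compare(second, o.second);
    }

    @Override
    public int hashCode() { return 31 * first + second; }       // used by the default HashPartitioner

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof IntPairWritable)) return false;
        IntPairWritable p = (IntPairWritable) o;
        return first == p.first && second == p.second;
    }
}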
2. Principles
Reference: http://www.cnblogs.com/stardjyeah/p/4643628.html. This article explains it clearly and is easy to follow.
Basic skeleton:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import java.io.IOException;
/**
* Created by jesmine.zhang on 2017/4/5.
*/
public class MyMapReduce extends Configured implements Tool {

    // Mapper: input <LongWritable, Text> = (byte offset of the line, line content),
    // output <Text, IntWritable> = the intermediate <key, value> pair
    public static class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private Text mapKey = new Text();
        private IntWritable mapValue = new IntWritable();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // parse the input line, then emit pairs via context.write(mapKey, mapValue)
        }
    }

    // Reducer: input is <Text, Iterable<IntWritable>>, i.e. all values grouped by key
    public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // aggregate the values of one key, then emit via context.write(...)
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // use the Configuration that ToolRunner has already set on this Tool
        Configuration conf = super.getConf();
        // conf.setBoolean("mapreduce.map.output.compress", true);

        // get job by conf
        Job job = Job.getInstance(conf, MyMapReduce.class.getSimpleName());
        job.setJarByClass(MyMapReduce.class);

        // step 1: map phase
        job.setMapperClass(MyMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // step 2: reduce phase (the reducer is reused as a combiner here)
        job.setCombinerClass(MyReducer.class);
        job.setReducerClass(MyReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // input / output paths come from the command line
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // submit and wait; by convention return 0 on success, non-zero on failure
        boolean isSucceed = job.waitForCompletion(true);
        return isSucceed ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int status = ToolRunner.run(new Configuration(), new MyMapReduce(), args);
        System.exit(status);
    }
}
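To make the skeleton concrete, the two empty methods could be filled in word-count style. The bodies below are an illustrative sketch meant to slot into MyMapper and MyReducer above (splitting on whitespace is an assumption about the input, not part of the original notes):

// Inside MyMapper: turn each input line into <word, 1> pairs.
@Override
protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    for (String word : value.toString().split("\\s+")) {
        if (!word.isEmpty()) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}

// Inside MyReducer: sum the 1s grouped under the same word.
@Override
protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable v : values) {
        sum += v.get();
    }
    context.write(key, new IntWritable(sum));
}

Because summation is associative, reusing MyReducer as the combiner (as the skeleton does) stays correct here. The job would then typically be launched with something like hadoop jar myjob.jar MyMapReduce <input path> <output path> (jar and path names are placeholders).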
3. The Context class
Reference: http://blog.csdn.net/songchunhong/article/details/50435717
Context is used to pass data and other runtime state: the map method writes its key/value pairs to the context, which hands them to the Reducer for the reduce step; after the reduce processing, the results are written to the context again, and Hadoop then writes them out to HDFS.
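As a small illustration of that role (the configuration key and counter names below are hypothetical), code inside a map() or reduce() method can use the Context like this:

// Emit a <key, value> pair: from a mapper it goes to the shuffle,
// from a reducer it goes to the job's OutputFormat (e.g. a file on HDFS).
context.write(outputKey, outputValue);

// The Context also exposes job-level state, e.g. the Configuration and counters.
String param = context.getConfiguration().get("my.custom.param");   // hypothetical parameter
context.getCounter("MyApp", "BadRecords").increment(1);             // hypothetical counter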