Sometimes a project has jobs whose input is huge but whose output is tiny, e.g. pv/uv statistics, and to serve real-time queries or OLAP needs we want MapReduce to exchange data with MySQL directly; this is exactly the kind of scenario that HBase and Hive still handle poorly.
Anyway, back to business: here is a quick rundown of the background, the mechanism, and the things to watch out for:
1. To let MapReduce access relational databases (MySQL, Oracle) directly, Hadoop provides two classes: DBInputFormat and DBOutputFormat. DBInputFormat reads database table rows in as job input, and DBOutputFormat writes the records the job produces back into a database table.
2. Because the 0.20 release's support for DBInputFormat and DBOutputFormat was poor, these two classes are usually demonstrated with the 0.19-era API.
At least in my 0.20.203 install there is no db package under org.apache.hadoop.mapreduce.lib, so this post also uses the old-style API for its explanation.
3. If the job fails at runtime with java.io.IOException: com.mysql.jdbc.Driver, the program usually cannot find the MySQL driver jar. The fix is to make sure every tasktracker can see the driver jar when it runs the job.
There are two ways to add the jar:
(1) Put it under ${HADOOP_HOME}/lib on every node and restart the cluster. It works, but it is the crude approach.
(2) a) Upload the jar to the cluster: hadoop fs -put mysql-connector-java-5.1.0-bin.jar /hdfsPath/
b) Before submitting the job, add: DistributedCache.addFileToClassPath(new Path("/hdfsPath/mysql-connector-java-5.1.0-bin.jar"), conf);
(A Tool-based alternative using -libjars is sketched right after this note.)
Note: although the code is written against the 0.19-style API, it works unchanged with the 0.20 API; you merely get deprecation warnings.
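Incidentally, the WARN line in the log below ("Use GenericOptionsParser ... Applications should implement Tool") hints at a third option: if your driver class implements Tool, you can ship the driver jar per-job with -libjars and skip both steps above. A minimal sketch, with class and jar names as placeholders of my own choosing (not from this post):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyJobDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // By the time run() is called, ToolRunner has already parsed the
        // generic options, so a jar passed via -libjars is on the job classpath.
        Configuration conf = getConf();
        // ... build and submit the Job from this conf, as in Mysql2Mr below ...
        return 0;
    }

    public static void main(String[] args) throws Exception {
        // Submit with, e.g.:
        //   hadoop jar myjob.jar MyJobDriver -libjars mysql-connector-java-5.1.0-bin.jar
        System.exit(ToolRunner.run(new MyJobDriver(), args));
    }
}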
4. Test data (two identical tables; the job reads t and writes t2):
create table t (
`id` int DEFAULT NULL,
`name` varchar(10) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

create table t2 (
`id` int DEFAULT NULL,
`name` varchar(10) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

insert into t values (1,"june"),(2,"decli"),(3,"hello"),
(4,"june"),(5,"decli"),(6,"hello"),(7,"june"),
(8,"decli"),(9,"hello"),(10,"june"),
(11,"june"),(12,"decli"),(13,"hello");
5. Code:
package mysql2mr;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.File;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import mapr.EJob;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
/**
* Function: tests data exchange between MapReduce and MySQL. This example
* copies the rows of one table into another; in practice you may only need
* to read from MySQL, or only write to it.
*
* @author administrator
*
*/
public class Mysql2Mr {
public static class StudentinfoRecord implements Writable, DBWritable {
int id;
String name;
public StudentinfoRecord() {
}
public String toString() {
return this.id + " " + this.name;
}
@Override
public void readFields(ResultSet result) throws SQLException {
this.id = result.getInt(1);
this.name = result.getString(2);
}
@Override
public void write(PreparedStatement stmt) throws SQLException {
stmt.setInt(1, this.id);
stmt.setString(2, this.name);
}
@Override
public void readFields(DataInput in) throws IOException {
this.id = in.readInt();
this.name = Text.readString(in);
}
@Override
public void write(DataOutput out) throws IOException {
out.writeInt(this.id);
Text.writeString(out, this.name);
}
}
// This must be a static inner class (or you have to provide a no-arg
// constructor the framework can reach); otherwise expect:
// Caused by: java.lang.NoSuchMethodException: DBInputMapper.<init>()
// http://stackoverflow.com/questions/7154125/custom-mapreduce-input-format-cant-find-constructor
// Most of the reposts of this example floating around online get this wrong...
public static class DBInputMapper extends
Mapper<LongWritable, StudentinfoRecord, LongWritable, Text> {
@Override
public void map(LongWritable key, StudentinfoRecord value,
Context context) throws IOException, InterruptedException {
context.write(new LongWritable(value.id), new Text(value.toString()));
}
}
public static class MyReducer extends Reducer<LongWritable, Text, StudentinfoRecord, Text> {
@Override
public void reduce(LongWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
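// ids are unique in the test data, so each key carries exactly one value
// and reading just the first element of the iterator is safe here.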
String[] splits = values.iterator().next().toString().split(" ");
StudentinfoRecord r = new StudentinfoRecord();
r.id = Integer.parseInt(splits[0]);
r.name = splits[1];
context.write(r, new Text(r.name));
}
}
@SuppressWarnings("deprecation")
public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
File jarfile = EJob.createTempJar("bin");
EJob.addClasspath("/usr/hadoop/conf");
ClassLoader classLoader = EJob.getClassLoader();
Thread.currentThread().setContextClassLoader(classLoader);
Configuration conf = new Configuration();
// This line is critical: it points the client at the cluster's JobTracker
conf.set("mapred.job.tracker", "172.30.1.245:9001");
DistributedCache.addFileToClassPath(new Path(
"hdfs://172.30.1.245:9000/user/hadoop/jar/mysql-connector-java-5.1.6-bin.jar"), conf);
DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver", "jdbc:mysql://172.30.1.245:3306/sqooptest", "sqoop", "sqoop");
Job job = new Job(conf, "Mysql2Mr");
// job.setJarByClass(Mysql2Mr.class);
((JobConf)job.getConfiguration()).setJar(jarfile.toString());
job.setMapOutputKeyClass(LongWritable.class);
job.setMapOutputValueClass(Text.class);
job.setMapperClass(DBInputMapper.class);
job.setReducerClass(MyReducer.class);
job.setOutputKeyClass(LongWritable.class);
job.setOutputValueClass(Text.class);
job.setOutputFormatClass(DBOutputFormat.class);
job.setInputFormatClass(DBInputFormat.class);
String[] fields = {"id","name"};
// read input from table t
DBInputFormat.setInput(job, StudentinfoRecord.class, "t", null, "id", fields);
// write the MapReduce output to table t2
DBOutputFormat.setOutput(job, "t2", "id", "name");
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
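For reference, the INSERT that DBOutputFormat sends to MySQL is built from the table and field names handed to setOutput(); its public constructQuery() method lets you print it. A quick check, reusing the classes above (my own snippet, not from the original post):

// Prints the parameterized statement DBOutputFormat prepares for table t2.
String sql = new DBOutputFormat<StudentinfoRecord, Text>()
        .constructQuery("t2", new String[] { "id", "name" });
System.out.println(sql); // roughly: INSERT INTO t2 (id,name) VALUES (?,?);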
6. Results:
After running the job twice you can see the result in MySQL; each run reads the 13 rows of t and appends 13 inserted rows to t2, hence 26 rows:
mysql> select * from t2;
...
26 rows in set (0.00 sec)
7. Logs:
13/07/29 02:33:03 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/07/29 02:33:03 INFO filecache.TrackerDistributedCacheManager: Creating mysql-connector-java-5.0.8-bin.jar in /tmp/hadoop-june/mapred/local/archive/-8943686319031389138_-1232673160_640840668/192.168.1.101/tmp-work--8372797484204470322 with rwxr-xr-x
13/07/29 02:33:03 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://192.168.1.101:9000/tmp/mysql-connector-java-5.0.8-bin.jar as /tmp/hadoop-june/mapred/local/archive/-8943686319031389138_-1232673160_640840668/192.168.1.101/tmp/mysql-connector-java-5.0.8-bin.jar
13/07/29 02:33:03 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://192.168.1.101:9000/tmp/mysql-connector-java-5.0.8-bin.jar as /tmp/hadoop-june/mapred/local/archive/-8943686319031389138_-1232673160_640840668/192.168.1.101/tmp/mysql-connector-java-5.0.8-bin.jar
13/07/29 02:33:03 INFO mapred.JobClient: Running job: job_local_0001
13/07/29 02:33:03 INFO mapred.MapTask: numReduceTasks: 1
13/07/29 02:33:03 INFO mapred.MapTask: io.sort.mb = 100
13/07/29 02:33:03 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/29 02:33:03 INFO mapred.MapTask: record buffer = 262144/327680
13/07/29 02:33:03 INFO mapred.MapTask: Starting flush of map output
13/07/29 02:33:03 INFO mapred.MapTask: Finished spill 0
13/07/29 02:33:03 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
13/07/29 02:33:04 INFO mapred.JobClient:  map 0% reduce 0%
13/07/29 02:33:06 INFO mapred.LocalJobRunner:
13/07/29 02:33:06 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
13/07/29 02:33:06 INFO mapred.LocalJobRunner:
13/07/29 02:33:06 INFO mapred.Merger: Merging 1 sorted segments
13/07/29 02:33:06 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 235 bytes
13/07/29 02:33:06 INFO mapred.LocalJobRunner:
13/07/29 02:33:06 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
13/07/29 02:33:07 INFO mapred.JobClient:  map 100% reduce 0%
13/07/29 02:33:09 INFO mapred.LocalJobRunner: reduce > reduce
13/07/29 02:33:09 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
13/07/29 02:33:09 WARN mapred.FileOutputCommitter: Output path is null in cleanup
13/07/29 02:33:10 INFO mapred.JobClient:  map 100% reduce 100%
13/07/29 02:33:10 INFO mapred.JobClient: Job complete: job_local_0001
13/07/29 02:33:10 INFO mapred.JobClient: Counters: 18
13/07/29 02:33:10 INFO mapred.JobClient:   File Input Format Counters
13/07/29 02:33:10 INFO mapred.JobClient:     Bytes Read=0
13/07/29 02:33:10 INFO mapred.JobClient:   File Output Format Counters
13/07/29 02:33:10 INFO mapred.JobClient:     Bytes Written=0
13/07/29 02:33:10 INFO mapred.JobClient:   FileSystemCounters
13/07/29 02:33:10 INFO mapred.JobClient:     FILE_BYTES_READ=1211691
13/07/29 02:33:10 INFO mapred.JobClient:     HDFS_BYTES_READ=1081704
13/07/29 02:33:10 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=2392844
13/07/29 02:33:10 INFO mapred.JobClient:   Map-Reduce Framework
13/07/29 02:33:10 INFO mapred.JobClient:     Map output materialized bytes=239
13/07/29 02:33:10 INFO mapred.JobClient:     Map input records=13
13/07/29 02:33:10 INFO mapred.JobClient:     Reduce shuffle bytes=0
13/07/29 02:33:10 INFO mapred.JobClient:     Spilled Records=26
13/07/29 02:33:10 INFO mapred.JobClient:     Map output bytes=207
13/07/29 02:33:10 INFO mapred.JobClient:     Map input bytes=13
13/07/29 02:33:10 INFO mapred.JobClient:     SPLIT_RAW_BYTES=75
13/07/29 02:33:10 INFO mapred.JobClient:     Combine input records=0
13/07/29 02:33:10 INFO mapred.JobClient:     Reduce input records=13
13/07/29 02:33:10 INFO mapred.JobClient:     Reduce input groups=13
13/07/29 02:33:10 INFO mapred.JobClient:     Combine output records=0
13/07/29 02:33:10 INFO mapred.JobClient:     Reduce output records=13
13/07/29 02:33:10 INFO mapred.JobClient:     Map output records=13
8. References:
New-style API version:
http://superlxw1234.iteye.com/blog/1880712
Old-style API:
http://blog.csdn.net/dajuezhao/article/details/5799371
http://www.zhengmenbb.com/archives/583.htm