I want to share a little hack I used to consume a TableMapper, with or without filters, from an Oozie workflow. The first thing to understand is how TableMapReduceUtil.initTableMapperJob works.
TableMapReduceUtil.initTableMapperJob(tableName, scan, MyTableMapper.class, Writable.class, Writable.class, job);
tableName goes to the hbase.mapreduce.inputtable property
scan goes to the hbase.mapreduce.scan property (it is converted to a Base64-encoded string)
the TableMapper class goes to the mapreduce.map.class property (if you are not using the new API, use mapred.mapper.class)
the mapper output key class goes to the mapred.mapoutput.key.class property (this one works for both the new and the old API)
the mapper output value class goes to the mapred.mapoutput.value.class property
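To make the mapping concrete, here is a small sketch using plain java.util.Properties as a stand-in for the Hadoop Configuration that initTableMapperJob populates. The table name, mapper class, and Text output classes below are made-up example values:

```java
import java.util.Properties;

// Stand-in for the Hadoop Configuration: these are the key/value
// pairs that TableMapReduceUtil.initTableMapperJob ends up setting.
public class InitTableMapperProps {

    static Properties build() {
        Properties conf = new Properties();
        conf.setProperty("hbase.mapreduce.inputtable", "my_table");            // tableName
        conf.setProperty("hbase.mapreduce.scan", "<base64-encoded-scan>");     // scan, serialized to a string
        conf.setProperty("mapreduce.map.class", "com.example.MyTableMapper");  // new-API mapper class
        conf.setProperty("mapred.mapoutput.key.class", "org.apache.hadoop.io.Text");
        conf.setProperty("mapred.mapoutput.value.class", "org.apache.hadoop.io.Text");
        return conf;
    }

    public static void main(String[] args) {
        // Print the mapping, the way it would appear in the job configuration.
        build().list(System.out);
    }
}
```

These are exactly the properties we will set by hand in the workflow below.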
So basically we are going to set these properties in the Oozie workflow.xml, and that lets us run our TableMapper via Oozie. The only tricky part is passing the scan to Oozie, for which I used a small Java job to generate the string value.
package com.ozbuyucusu.hbase.helper;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Properties;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Base64;
import org.apache.hadoop.hbase.util.Bytes;
public class ScanStringGenerator {

    public static void main(String[] args) throws IOException {
        Scan scan = new Scan();
        // Add filters and start/stop rows here, e.g.:
        // scan.setStartRow(Bytes.toBytes(args[0]));
        // scan.setStopRow(Bytes.toBytes(args[1]));

        // Oozie's <capture-output/> reads whatever properties we write to this file.
        File outputFile = new File(System.getProperty("oozie.action.output.properties"));
        Properties props = new Properties();
        props.setProperty("scan", convertScanToString(scan));
        OutputStream os = new FileOutputStream(outputFile);
        props.store(os, "");
        os.close();
    }

    // The same trick TableMapReduceUtil uses: serialize the Scan and Base64-encode it.
    private static String convertScanToString(Scan scan) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(out);
        scan.write(dos);
        return Base64.encodeBytes(out.toByteArray());
    }
}
You can modify the Scan to add filters and to give it a start and a stop row (which I did in most cases). The scan parameters can easily be passed to this helper class as arguments. And finally, we put the helper class and the mapper job together in the workflow.
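For reference, the file the helper writes (and Oozie's <capture-output/> picks up) is a plain Java properties file along these lines, where the value is a placeholder for the actual Base64 output:

```
scan=<base64-encoded-scan-string>
```

The workflow can then read that value back with Oozie's wf:actionData EL function.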
<workflow-app xmlns='uri:oozie:workflow:0.2' name='sample-job-wf'>
    <start to='generate-scan-string'/>
    <!-- Step 1: run the helper; capture-output picks up the properties it writes -->
    <action name='generate-scan-string'>
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <main-class>com.ozbuyucusu.hbase.helper.ScanStringGenerator</main-class>
            <capture-output/>
        </java>
        <ok to='run-table-mapper'/>
        <error to='fail'/>
    </action>
    <!-- Step 2: set by hand the properties initTableMapperJob would have set.
         Table and class names below are illustrative; depending on your setup you
         may also need mapreduce.inputformat.class and the mapred.*.new-api flags. -->
    <action name='run-table-mapper'>
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>hbase.mapreduce.inputtable</name>
                    <value>my_table</value>
                </property>
                <property>
                    <name>hbase.mapreduce.scan</name>
                    <value>${wf:actionData('generate-scan-string')['scan']}</value>
                </property>
                <property>
                    <name>mapreduce.map.class</name>
                    <value>com.ozbuyucusu.hbase.MyTableMapper</value>
                </property>
                <property>
                    <name>mapred.mapoutput.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapred.mapoutput.value.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to='end'/>
        <error to='fail'/>
    </action>
    <kill name='fail'>
        <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name='end'/>
</workflow-app>
I guess there are better ways to use an HBase TableMapper inside an Oozie workflow (I couldn't find any), but this small hack, ugly as it looks, works like a charm for me.