The standard way to run a MapReduce job on Hadoop is to package the code into a jar, upload it to the server, and launch it from the command line. If what you actually want is to kick off a MapReduce job from inside a Java application, that approach is clumsy and tedious.
In fact, YARN lets a Java program submit MapReduce jobs to a Hadoop cluster directly. The difference from an ordinary job is that a remotely submitted one cannot read the mapred-site.xml and yarn-site.xml on the server, so the equivalent settings have to be added to the job's Configuration before submitting; after that, submission works as usual.
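For comparison, these settings are the kind of entries a cluster node would normally pick up from its local configuration files. A minimal sketch of what mapred-site.xml and yarn-site.xml typically carry (the MASTER addresses below mirror the code sample and are illustrative, not required values):

<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>MASTER:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>MASTER:8030</value>
</property>

When submitting remotely, each such property is instead set programmatically on the job's Configuration, as the code below shows.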
Here is a sample; one look and it should be clear:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class RemoteMapReduceService {

    public static String startJob() throws Exception {
        Job job = Job.getInstance();
        job.setJobName("xxxx");

        // Configure the mapper/reducer classes, input/output paths, and the
        // job jar here (e.g. job.setJar(...)); without a jar the cluster
        // cannot load your classes.

        // A remotely submitted job cannot read the cluster's mapred-site.xml
        // and yarn-site.xml, so supply the equivalent settings by hand.
        Configuration conf = job.getConfiguration();
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("hbase.zookeeper.quorum", "MASTER:2181");
        conf.set("fs.default.name", "hdfs://MASTER:8020");
        conf.set("yarn.resourcemanager.resource-tracker.address", "MASTER:8031");
        conf.set("yarn.resourcemanager.address", "MASTER:8032");
        conf.set("yarn.resourcemanager.scheduler.address", "MASTER:8030");
        conf.set("yarn.resourcemanager.admin.address", "MASTER:8033");
        // The classpath as seen on the cluster nodes, not on the client.
        conf.set("yarn.application.classpath", "$HADOOP_CONF_DIR,"
                + "$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,"
                + "$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,"
                + "$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,"
                + "$YARN_HOME/*,$YARN_HOME/lib/*,"
                + "$HBASE_HOME/*,$HBASE_HOME/lib/*,$HBASE_HOME/conf/*");
        conf.set("mapreduce.jobhistory.address", "MASTER:10020");
        conf.set("mapreduce.jobhistory.webapp.address", "MASTER:19888");
        conf.set("mapred.child.java.opts", "-Xmx1024m");

        // Submit asynchronously and hand back the job id for later tracking.
        job.submit();
        return job.getJobID().toString();
    }
}