Running MapReduce programs in local mode and cluster mode on Linux with Eclipse

Local mode

  1. In local mode, the MapReduce program is handed to LocalJobRunner and runs on the local machine as a single process. This makes it easy to step through the code with a debugger and track down errors; the main purpose of running locally is to verify that the MapReduce business logic is correct.
  2. The input data and the output can live on the local filesystem or on HDFS.
  3. Local mode is very convenient for debugging the business logic: just set breakpoints in Eclipse.
  4. How to tell whether a job runs in local mode: check whether the program's conf sets mapreduce.framework.name=local and mapreduce.cluster.local.dir=××× (see the two snippets below).

Local mode (input and output data on the local filesystem)

    	Configuration conf = new Configuration();
    	// Whether the job runs in local mode depends on whether this value is "local" (which is also the default)
    	conf.set("mapreduce.framework.name", "local");
    	conf.set("mapreduce.cluster.local.dir", "/home/workspace/mapred/local");

    	// When running in local mode, the input/output data can be on the local filesystem or on HDFS;
    	// which one is used depends on which of the following two lines you keep (the default is file:///)
//    	conf.set("fs.defaultFS", "hdfs://hadoopSvr1:8020/");
    	conf.set("fs.defaultFS", "file:///");

Local mode (input and output data on HDFS)

    	// Access HDFS as user "root" (the HDFS paths used below are owned by root)
    	System.setProperty("HADOOP_USER_NAME", "root");
    	Configuration conf = new Configuration();
    	// Whether the job runs in local mode depends on whether this value is "local" (which is also the default)
    	conf.set("mapreduce.framework.name", "local");
    	conf.set("mapreduce.cluster.local.dir", "/home/workspace/mapred/local");

    	// When running in local mode, the input/output data can be on the local filesystem or on HDFS;
    	// which one is used depends on which of the following two lines you keep (the default is file:///)
    	conf.set("fs.defaultFS", "hdfs://hadoopSvr1:8020/");
//    	conf.set("fs.defaultFS", "file:///");
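
With this configuration the job still runs as a single local process; only the input and output live on HDFS. Before launching it, it can help to confirm that the client actually reaches the NameNode. A small sketch using the same conf (the hostname comes from the snippet above, and /test/kaidy/input is just the example directory used later in this post); it needs org.apache.hadoop.fs.FileSystem and org.apache.hadoop.fs.FileStatus:

    // Resolve the default filesystem from the conf above (hdfs://hadoopSvr1:8020/).
    FileSystem fs = FileSystem.get(conf);
    // List the input directory to confirm connectivity and permissions;
    // the path is a placeholder, substitute your own HDFS input directory.
    for (FileStatus st : fs.listStatus(new Path("/test/kaidy/input"))) {
        System.out.println(st.getPath() + "  " + st.getLen() + " bytes");
    }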

Cluster mode

1 Copy the Hadoop cluster's configuration files into the Eclipse project's resources directory

The main configuration files are the following:

core-site.xml  
hdfs-site.xml  
log4j.properties  
mapred-site.xml 
yarn-site.xml

The code that loads these configuration files is as follows:

    	Configuration conf = new YarnConfiguration();
//    	conf.addResource("log4j.properties");
    	conf.addResource("core-site.xml");
        conf.addResource("hdfs-site.xml");
        conf.addResource("mapred-site.xml");
        conf.addResource("yarn-site.xml");
        
        Job job = Job.getInstance(conf);

        // Set the job's properties
        job.setJobName("WCApp");           // job name
        job.setJarByClass(WCApp.class);    // class used to locate the jar
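
Configuration.addResource(String) looks the named file up on the classpath, which is why the *-site.xml files have to be copied into the project's resources directory. If you would rather point at a configuration directory on disk, addResource also accepts a Path; a sketch under the assumption that the cluster configs live in /etc/hadoop/conf (adjust to your own layout):

    Configuration conf = new YarnConfiguration();
    // Load the cluster configuration from an explicit directory instead of the classpath.
    // /etc/hadoop/conf is an assumption; point this at wherever your *-site.xml files live.
    String confDir = "/etc/hadoop/conf";
    conf.addResource(new Path(confDir + "/core-site.xml"));
    conf.addResource(new Path(confDir + "/hdfs-site.xml"));
    conf.addResource(new Path(confDir + "/mapred-site.xml"));
    conf.addResource(new Path(confDir + "/yarn-site.xml"));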

2 Package the program with SBT

For the packaging steps, see: https://blog.csdn.net/wangkai_123456/article/details/88933417
The build definition file build.sbt of the Scala project is as follows:

ThisBuild / scalaVersion := "2.11.12"
ThisBuild / organization := "org.kaidy"

// Dependencies for the HDFS and MapReduce client APIs.
// To exclude JARs that the cluster already provides (as with Spark), scope these dependencies to the "provided" configuration.
val hadoopHdfs = "org.apache.hadoop" % "hadoop-hdfs" % "3.1.0"
val hadoopCommon = "org.apache.hadoop" % "hadoop-common" % "3.1.0"
// val hadoopClient = "org.apache.hadoop" % "hadoop-client" % "3.1.0"
val hadoopMrClientJobClient = "org.apache.hadoop" % "hadoop-mapreduce-client-jobclient" % "3.1.0"
val hadoopMrClientCore = "org.apache.hadoop" % "hadoop-mapreduce-client-core" % "3.1.0"

// https://mvnrepository.com/artifact/org.scalaj/scalaj-http
// val scalaJson = "org.scalaj" %% "scalaj-http" % "2.4.1"

lazy val root = (project in file("."))
  .settings(
    name := "mrWordCount",
    version := "1.0",

    libraryDependencies ++= Seq(
      hadoopHdfs % "provided",
      hadoopCommon % "provided",
      hadoopMrClientJobClient % "provided",
      hadoopMrClientCore % "provided"
    )

    // libraryDependencies += scalaJson
  )

3 Submit the generated jar to the cluster and run it

Assuming the packaged jar is mrWordCount-assembly-1.0.jar, the submit command looks roughly like this (the output directory must not exist yet):

hadoop jar ../runableJars/mrWordCount-assembly-1.0.jar  /test/kaidy/input /test/kaidy/output

The input data is as follows:

[root@hadoopSvr1 hadoop]# hadoop fs -ls /test/kaidy/input
Found 2 items
-rw-r--r--   2 root supergroup         24 2019-04-09 18:33 /test/kaidy/input/file01
-rw-r--r--   2 root supergroup         33 2019-04-09 18:33 /test/kaidy/input/file02
[root@hadoopSvr1 hadoop]# hadoop fs -cat /test/kaidy/input/file01
Hello World, Bye World!
[root@hadoopSvr1 hadoop]# hadoop fs -cat /test/kaidy/input/file02
Hello Hadoop, Goodbye to hadoop.
[root@hadoopSvr1 hadoop]# 

The result of the run:

[root@hadoopSvr1 hadoop]# hadoop fs -ls /test/kaidy/output
Found 2 items
-rw-r--r--   2 root supergroup          0 2019-04-10 17:32 /test/kaidy/output/_SUCCESS
-rw-r--r--   2 root supergroup         67 2019-04-10 17:32 /test/kaidy/output/part-r-00000
[root@hadoopSvr1 hadoop]# hadoop fs -cat /test/kaidy/output/part-r-00000
Bye	1
Goodbye	1
Hadoop,	1
Hello	2
World!	1
World,	1
hadoop.	1
to	1
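
Note that punctuation stays attached to the words ("Hadoop," vs. "hadoop.") because the mapper splits each line on single spaces only. If merged, case-insensitive counts are wanted, the body of map() could normalize each token first; a possible variation (not what the listing below does):

    // Variation of WCMapperTmp.map(): lower-case each token and strip
    // non-alphanumeric characters before emitting it.
    String[] arr = value.toString().split("\\s+");
    for (String s : arr) {
        String word = s.toLowerCase().replaceAll("[^a-z0-9]", "");
        if (!word.isEmpty()) {
            keyOut.set(word);
            valueOut.set(1);
            context.write(keyOut, valueOut);
        }
    }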

Appendix: the complete code:

package kaidy.mr;

import org.apache.hadoop.yarn.conf.YarnConfiguration;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
//import org.apache.hadoop.mapreduce.Mapper.Context;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WCApp { 
    public static void main(String[] args) throws Exception {
    	System.setProperty("HADOOP_USER_NAME", "root");
//    	Configuration conf = new Configuration();
    	// Whether the job runs in local mode depends on whether this value is "local" (which is also the default)
//    	conf.set("mapreduce.framework.name", "local");
//    	conf.set("mapreduce.cluster.local.dir", "/home/***/workspace/mapred/local");

    	// When running in local mode, the input/output data can be on the local filesystem or on HDFS;
    	// which one is used depends on which of the following two lines you keep (the default is file:///)
//    	conf.set("fs.defaultFS", "hdfs://hadoopSvr1:8020/");
//    	conf.set("fs.defaultFS", "file:///");
    	
    	Configuration conf = new YarnConfiguration();
//    	conf.addResource("log4j.properties");
    	conf.addResource("core-site.xml");
        conf.addResource("hdfs-site.xml");
        conf.addResource("mapred-site.xml");
        conf.addResource("yarn-site.xml");
        
        Job job = Job.getInstance(conf);

        // Set the job's properties
        job.setJobName("WCApp");           // job name
        job.setJarByClass(WCApp.class);    // class used to locate the jar
//        job.setJar("mrWordCount.jar");
        job.setInputFormatClass(TextInputFormat.class);  // input format
        
        // input path
//        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        // output path (must not exist yet)
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        
        job.setMapperClass(WCMapperTmp.class);   // mapper class
        job.setReducerClass(WCReducerTmp.class); // reducer class

        job.setNumReduceTasks(1);             // number of reduce tasks

        job.setMapOutputKeyClass(Text.class);	
        job.setMapOutputValueClass(IntWritable.class);	

        job.setOutputKeyClass(Text.class);	// output key type
        job.setOutputValueClass(IntWritable.class);	// output value type

        job.waitForCompletion(true);
    }
}

// Mapper: splits each input line on single spaces and emits a (token, 1) pair per token.
class WCMapperTmp extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

        Text keyOut = new Text();
        IntWritable valueOut = new IntWritable();
        String[] arr = value.toString().split(" ");
        for (String s : arr) {
            keyOut.set(s);
            valueOut.set(1);
            context.write(keyOut, valueOut);
        }
    }
}

// Reducer: sums the counts emitted for each word.
class WCReducerTmp extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {

        int count = 0;
        for (IntWritable iw : values) {
            count = count + iw.get();
        }
        // emit (word, total count)
        context.write(key, new IntWritable(count));
    }
}