Spark Advanced: Submitting a Jar Package with spark-submit


Author: 尹正杰 (yinzhengjie)

Copyright notice: this is original work. Please do not repost it; unauthorized reposting will be pursued legally.

 

  In real-world development, submitting a jar package with spark-submit is the most common way to run Spark applications, because developing a project in spark-shell is quite painful (an IDE is normally used instead). Once the program is finished, it needs to be packaged into a jar.

 

I. Notes on submitting a job as a jar package [the main way of working in practice]

  1>. The job must be submitted with spark-submit;

  2>. You must use "--class" to specify the main class inside your jar;

  3>. You must use "--master" to specify the address of the cluster you are submitting to; if the master is already configured inside the jar, "--master" can be omitted;

  4>. You must give the concrete path of the jar (i.e. the jar argument);

  5>. Running "spark-submit" on the command line with no arguments prints every option you can specify; the ones above are the most commonly used. A minimal example is shown right after this list, followed by the full help output.
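A minimal sketch of such a command (the main class, jar path, and application arguments below are placeholders; the master address is the standalone master used later in this article):

# hypothetical example: replace the class, jar path, and arguments with your own
[yinzhengjie@s101 ~]$ spark-submit \
    --class cn.org.yinzhengjie.MyApp \
    --master spark://s101:7077 \
    /home/yinzhengjie/myApp.jar arg1 arg2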

[yinzhengjie@s101 ~]$ spark-submit 
Usage: spark-submit [options] <app jar | python file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                              resolving the dependencies provided in --packages to avoid
                              dependency conflicts.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.

  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --proxy-user NAME           User to impersonate when submitting the application.
                              This argument does not work with --principal / --keytab.

  --help, -h                  Show this help message and exit.
  --verbose, -v               Print additional debug output.
  --version,                  Print the version of current Spark.

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).

 Spark standalone or Mesos with cluster deploy mode only:
  --supervise                 If given, restarts the driver on failure.
  --kill SUBMISSION_ID        If given, kills the driver specified.
  --status SUBMISSION_ID      If given, requests the status of the driver specified.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 Spark standalone and YARN only:
  --executor-cores NUM        Number of cores per executor. (Default: 1 in YARN mode,
                              or all available cores on the worker in standalone mode)

 YARN-only:
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
                              If dynamic allocation is enabled, the initial number of
                              executors will be at least NUM.
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.
  --principal PRINCIPAL       Principal to be used to login to KDC, while running on
                              secure HDFS.
  --keytab KEYTAB             The full path to the file that contains the keytab for the
                              principal specified above. This keytab will be copied to
                              the node running the Application Master via the Secure
                              Distributed Cache, for renewing the login tickets and the
                              delegation tokens periodically.
      
[yinzhengjie@s101 ~]$ 
Full spark-submit help information (obtained by running "[yinzhengjie@s101 ~]$ spark-submit" with no arguments)

 

II. Writing WordCount as a Maven project in IDEA

1>. Create a new project

2>. Choose the Maven project type

3>. Specify the project version and click Next

4>. Specify the project location

5>. Add framework support

6>. Select Maven

7>. Delete the src directory

8>. Add a second-level Maven module

9>. Choose the Maven type for the second-level module

10>. Enter the name of the second-level module

11>. Specify the location of the second-level module; the default is fine.

12>. Delete the src directory of the second-level module and create a third-level module

13>. Click Next

14>. Enter the name of the third-level module

15>. The location can be left at its default

16>. Edit the pom files of the three Maven modules

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.org.yinzhengjie</groupId>
    <artifactId>Spark</artifactId>
    <version>1.0-yinzhengjieCode</version>
    <modules>
        <module>spark-core</module>
    </modules>

    <!-- This marks the project as a parent project: it has no code of its own, only shared declarations -->
    <packaging>pom</packaging>

    <!-- Shared properties -->
    <properties>
        <spark.version>2.1.1</spark.version>      <!-- Spark version -->
        <scala.version>2.11.8</scala.version>     <!-- Scala version -->
        <log4j.version>1.2.17</log4j.version>     <!-- log4j version, referenced below -->
        <slf4j.version>1.7.22</slf4j.version>     <!-- slf4j version, referenced below -->
    </properties>

    <!-- Shared dependencies: declared and actually imported here -->
    <dependencies>
        <!-- Logging start -->
        <!-- https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-core -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
            <version>${slf4j.version}</version>   <!-- refers to the property defined above -->
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${slf4j.version}</version>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>${slf4j.version}</version>
        </dependency>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>${log4j.version}</version>
        </dependency>
        <!-- Logging end -->

        <!-- Scala standard library -->
        <!-- https://mvnrepository.com/artifact/org.scala-lang/scala-library -->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>

    </dependencies>

    <!-- Dependencies that are only declared; unlike the block above, this does not import anything.
         A module declared under dependencyManagement can be used by child modules without specifying a version. -->
    <dependencyManagement>
        <dependencies>
            <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-core_2.11</artifactId>
                <version>${spark.version}</version>
            </dependency>

        </dependencies>
    </dependencyManagement>

    <!-- Build plugins -->
    <build>
        <!-- Plugins that are declared and applied -->
        <plugins>
            <!-- Set the Java compiler version for the project -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.6.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>

            <!-- Plugin that compiles Scala sources to class files; required when mixing Java and Scala code -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>

        <!-- Plugins that are only declared -->
        <pluginManagement>
            <plugins>
                <!-- Packaging (assembly) plugin -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-assembly-plugin</artifactId>
                    <version>3.0.0</version>
                    <executions>
                        <execution>
                            <id>make-assembly</id>
                            <phase>package</phase>
                            <goals>
                                <goal>single</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </pluginManagement>
    </build>

</project>
pom.xml of the parent module (Spark)
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>Spark</artifactId>
        <groupId>cn.org.yinzhengjie</groupId>
        <version>1.0-yinzhengjieCode</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>spark-core</artifactId>
    <packaging>pom</packaging>
    <modules>
        <module>spark-wordCount</module>
    </modules>

    <dependencies>
        <!-- The Spark version comes from the parent's dependencyManagement, so no version is declared here -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
        </dependency>
    </dependencies>

</project>
pom.xml of the second-level module (spark-core)
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>spark-core</artifactId>
        <groupId>cn.org.yinzhengjie</groupId>
        <version>1.0-yinzhengjieCode</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>spark-wordCount</artifactId>

</project>
pom.xml of the third-level module (spark-wordCount)

17>. Write the code

package cn.org.yinzhengjie.wordCount

import org.apache.spark.{SparkConf, SparkContext}

object WordCount extends Serializable {
    def main(args: Array[String]): Unit = {
        //Set the user name used to access HDFS. On Windows the default is user=Administrator,
        //so the user that is allowed to write to HDFS has to be specified explicitly here.
        System.setProperty("HADOOP_USER_NAME", "yinzhengjie")
        //Create the configuration
        val sparkConf = new SparkConf()
        //Set the application name
        sparkConf.setAppName("WordCount")
        //Set the master. If it is set here, "--master" can be omitted when the jar is submitted.
        sparkConf.setMaster("spark://s101:7077,s105:7077")
        //Create the SparkContext
        val sc = new SparkContext(sparkConf)

        /**
          * The code below is the business logic
          */
        //Read the input file from HDFS
        val file = sc.textFile("hdfs://s105:8020/yinzhengjie/data/README.md")
        //Split each line on spaces to get the individual words
        val words = file.flatMap(_.split(" "))
        //Map every word to the pair (word, 1)
        val wordOne = words.map((_, 1))
        //Aggregate: sum the counts of identical keys
        val result = wordOne.reduceByKey(_ + _)
        //Write the result back to HDFS
        result.saveAsTextFile("hdfs://s105:8020/yinzhengjie/data/wordCount2")
        //Stop the Spark context
        sc.stop()
    }
}
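Once the code is written, the jar can be built with Maven; a minimal sketch, assuming Maven is run from the root directory of the parent project (the path below is a placeholder):

cd /path/to/Spark        # project root (placeholder path)
mvn clean package        # builds all modules; the jar appears under spark-wordCount/target/, named spark-wordCount-1.0-yinzhengjieCode.jar by default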

 

 

 

 


III. Submitting the jar package with spark-submit and running it
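With the jar from the previous step copied to the cluster, the application can be submitted roughly as follows; a sketch, assuming the jar sits at /home/yinzhengjie/spark-wordCount-1.0-yinzhengjieCode.jar on s101 (the location is an assumption; the name follows from the artifactId and version in the pom). Since setMaster() is already called in the code, "--master" could also be omitted here:

# the class must match the package and object declared in WordCount above;
# the jar path is a placeholder
[yinzhengjie@s101 ~]$ spark-submit \
    --class cn.org.yinzhengjie.wordCount.WordCount \
    --master spark://s101:7077 \
    /home/yinzhengjie/spark-wordCount-1.0-yinzhengjieCode.jar

When the job finishes, the word-count result can be checked under hdfs://s105:8020/yinzhengjie/data/wordCount2, the output path hard-coded in the program.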

 

Reposted from: https://www.cnblogs.com/yinzhengjie/p/9461915.html
