Submitting your first Spark application to the cluster with spark-submit
bin/spark-submit --master spark://node-4:7077,node-5:7077 --class org.apache.spark.examples.SparkPi --executor-memory 2048mb --total-executor-cores 12 --executor-cores 1 examples/jars/spark-examples_2.11-2.2.0.jar 1000
Parameter reference:
--class: the main class of the application code to run
--master: the master address to submit to; this can be a single Spark master node, YARN, or a comma-separated list of master addresses (for high-availability job submission)
--total-executor-cores: total number of cores across all executors
--executor-cores: number of cores per executor
--executor-memory: amount of memory per executor
xxx.jar is the application jar being submitted (the jar must be located on a Spark node, or in an HDFS directory)
1000 is the argument passed to the main class
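For comparison, the same example could also be submitted to a YARN cluster instead of the standalone master. A sketch, assuming YARN is configured on the client (note that on YARN the number of executors is set with --num-executors; --total-executor-cores applies only to standalone and Mesos):
bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --executor-memory 2g \
  --num-executors 12 \
  --executor-cores 1 \
  examples/jars/spark-examples_2.11-2.2.0.jar 1000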
Which processes are created when a Spark program is submitted to the cluster?
SparkSubmit (the Driver), which submits the job
Executor, which carries out the actual computation
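You can confirm this with jps. In standalone client mode, the driver shows up as a SparkSubmit process on the machine where spark-submit was run, and each executor runs as a CoarseGrainedExecutorBackend process on a worker node, alongside the standing Master and Worker daemons. An illustrative listing (the PIDs are made up):
$ jps    # on the submitting node
12345 SparkSubmit
$ jps    # on a worker node
23456 Worker
23789 CoarseGrainedExecutorBackend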
Submitting your own application to the cluster with spark-submit
Writing a WordCount program in IDEA
Create a new Maven project and add the following to pom.xml:
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.2.0</spark.version>
    <hadoop.version>2.7.3</hadoop.version>
    <encoding>UTF-8</encoding>
</properties>
<dependencies>
    <!-- Scala dependency -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <!-- Spark dependency -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Pin the hadoop-client API version -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>
<build>
    <pluginManagement>
        <plugins>
            <!-- Plugin for compiling Scala -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
            </plugin>
            <!-- Plugin for compiling Java -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.5.1</version>
            </plugin>
        </plugins>
    </pluginManagement>
    <plugins>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <executions>
                <execution>
                    <id>scala-compile-first</id>
                    <phase>process-resources</phase>
                    <goals>
                        <goal>add-source</goal>
                        <goal>compile</goal>
                    </goals>
                </execution>
                <execution>
                    <id>scala-test-compile</id>
                    <phase>process-test-resources</phase>
                    <goals>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <executions>
                <execution>
                    <phase>compile</phase>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <!-- Plugin for building the shaded (fat) jar; the filter strips jar signature files -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <filters>
                            <filter>
                                <artifact>*:*</artifact>
                                <excludes>
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
The program can be written in Scala or in Java; both approaches are demonstrated below.
1. Create a new Scala class of type Object and write the Spark program (Scala version)
package cn.spark

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Create the Spark configuration and set the application name
    val conf = new SparkConf().setAppName("ScalaWordCount")
    // For local debugging:
    // val conf = new SparkConf().setAppName("ScalaWordCount").setMaster("local[4]")
    // Create the entry point for Spark execution
    val sc = new SparkContext(conf)
    // Specify the data source to create the RDD (Resilient Distributed Dataset)
    // One-liner equivalent:
    // sc.textFile(args(0)).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_+_).sortBy(_._2, false).saveAsTextFile(args(1))
    val lines: RDD[String] = sc.textFile(args(0))
    // Split each line and flatten the result into individual words
    val words: RDD[String] = lines.flatMap(_.split(" "))
    // Pair each word with a 1
    val wordAndOne: RDD[(String, Int)] = words.map((_, 1))
    // Aggregate by key
    val reduced: RDD[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    // Sort by count, descending
    val sorted: RDD[(String, Int)] = reduced.sortBy(_._2, false)
    // Save the result to HDFS
    sorted.saveAsTextFile(args(1))
    // Release resources
    sc.stop()
  }
}
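To make the behavior of each transformation concrete, here is a small sketch that runs the same pipeline in local mode on a tiny in-memory dataset (the input strings are invented for illustration; the comments show what each RDD would contain):
import org.apache.spark.{SparkConf, SparkContext}

object WordCountTrace {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("WordCountTrace").setMaster("local[2]"))
    // Hypothetical input: two lines of text
    val lines = sc.parallelize(Seq("spark hadoop spark", "hadoop spark"))
    val words = lines.flatMap(_.split(" "))      // spark, hadoop, spark, hadoop, spark
    val wordAndOne = words.map((_, 1))           // (spark,1), (hadoop,1), (spark,1), ...
    val reduced = wordAndOne.reduceByKey(_ + _)  // (spark,3), (hadoop,2)
    val sorted = reduced.sortBy(_._2, false)     // (spark,3), (hadoop,2)
    sorted.collect().foreach(println)
    sc.stop()
  }
}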
2. Create a new Java class (Java version)
package cn.spark;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class WordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("JavaWordCount");
        // Create the JavaSparkContext
        JavaSparkContext jsc = new JavaSparkContext(conf);
        // Specify the data source
        JavaRDD<String> lines = jsc.textFile(args[0]);
        // Split each line and flatten the result into individual words
        JavaRDD<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());
        // Pair each word with a 1
        JavaPairRDD<String, Integer> wordAndOne = words.mapToPair(w -> new Tuple2<>(w, 1));
        // Aggregate by key
        JavaPairRDD<String, Integer> reduced = wordAndOne.reduceByKey((m, n) -> m + n);
        // Swap key and value so the count becomes the key (the Java pair API sorts by key)
        JavaPairRDD<Integer, String> swapped = reduced.mapToPair(tp -> tp.swap());
        // Sort by key (the count), descending
        JavaPairRDD<Integer, String> sorted = swapped.sortByKey(false);
        // Swap back to (word, count)
        JavaPairRDD<String, Integer> result = sorted.mapToPair(tp -> tp.swap());
        // Save the result to HDFS
        result.saveAsTextFile(args[1]);
        // Release resources
        jsc.stop();
    }
}
To package, run mvn package, then upload the resulting jar to a node in the Spark cluster.
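For example (assuming the project's artifactId is spark-mvn, matching the jar name used in the submit command below):
mvn clean package
# copy the shaded jar to a cluster node (host and target path as used below)
scp target/spark-mvn-1.0-SNAPSHOT.jar root@node1:/root/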
First, start HDFS and the Spark cluster.
Start HDFS:
/usr/local/hadoop-2.6.5/sbin/start-dfs.sh
Start Spark:
/usr/local/spark-2.1.0-bin-hadoop2.6/sbin/start-all.sh
Submit the Spark application with spark-submit (note the argument order: spark-submit options come before the application jar, and the application's own arguments come after it):
/usr/local/spark-2.1.0-bin-hadoop2.6/bin/spark-submit \
--class cn.spark.WordCount \
--master spark://node1:7077 \
--executor-memory 2G \
--total-executor-cores 4 \
/root/spark-mvn-1.0-SNAPSHOT.jar \
hdfs://node1:9000/words.txt \
hdfs://node1:9000/out
View the program's output:
hdfs dfs -cat hdfs://node1:9000/out/part-00000
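Each output line is the string form of a (word, count) tuple, ordered by count descending. With a hypothetical words.txt the output might look like:
(hello,26)
(world,13)
(spark,7)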
Local debugging
On Windows, a local Hadoop runtime environment must be configured first.
Download a Hadoop release from the Apache distribution directory (Index of /dist/hadoop/common).
Extract it to a path of your choice and configure the environment variables:
HADOOP_HOME D:\软件\大数据\hadoop\hadoop-2.7.3-bin\hadoop-2.7.3
Path %HADOOP_HOME%\bin;%HADOOP_HOME%\sbin;
You also need winutils.exe and hadoop.dll, Windows builds of Hadoop's native utilities that the official release does not ship. After downloading them, place winutils.exe and hadoop.dll into the bin directory of the extracted Hadoop folder. They can be downloaded from the Baidu Netdisk share (extraction code: 65hy).
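If you would rather not change the system environment variables, a common alternative is to point the hadoop.home.dir system property at the Hadoop folder before the SparkContext is created. A minimal sketch, assuming a hypothetical local path:
// The path is hypothetical; its bin directory must contain winutils.exe (and hadoop.dll)
System.setProperty("hadoop.home.dir", "D:\\hadoop-2.7.3")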
To debug locally, configure the program to run in local mode: append .setMaster("local[*]") (or local[N] for a fixed number of worker threads) after setAppName, then simply run or debug it, e.g.:
val conf = new SparkConf().setAppName("ScalaWordCount").setMaster("local[4]")
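Putting it all together, a minimal local-debug version of the Scala word count might look like the sketch below. The input path is hypothetical, and collect().foreach(println) replaces saveAsTextFile so the result prints straight to the IDEA console:
import org.apache.spark.{SparkConf, SparkContext}

object WordCountLocal {
  def main(args: Array[String]): Unit = {
    // Run in local mode with 4 threads; no cluster needed
    val conf = new SparkConf().setAppName("ScalaWordCount").setMaster("local[4]")
    val sc = new SparkContext(conf)
    sc.textFile("D:\\data\\words.txt")   // hypothetical local input file
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)
      .collect()                         // bring the results back to the driver
      .foreach(println)                  // prints (word,count) pairs
    sc.stop()
  }
}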