I. The four Spark run modes
1. local mode
spark-shell
In local mode no master URL is specified; Spark only starts a single process (SparkSubmit) on the local machine and never connects to a cluster. You can still start the spark-shell normally and run programs inside it.
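For example, the shell can also be started with the local master spelled out (the number in brackets is the number of worker threads; * means one per available core):
spark-shell --master local[*]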
2. standalone mode
In the Spark shell, a SparkContext has already been initialized by default as the object sc; user code that needs it can use sc directly.
When writing a Spark program yourself, you must create a SparkContext object.
spark-shell --master spark://hadoop1:7077
val data=sc.textFile("hdfs://hadoop1:9000/WordCount/text.txt")
data.collect
val res1=data.flatMap(_.split(" "))
val res2=res1.map((_,1))
val res3=res2.reduceByKey(_+_)
res3.saveAsTextFile("hdfs://hadoop1:9000/WordCount/output1")
3. Submitting to a YARN cluster
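The notes give no command for this mode; a typical YARN submission, as a sketch that reuses the jar, class, and paths from the example in section II.5 and assumes HADOOP_CONF_DIR points at the cluster configuration, would look like:
spark-submit --master yarn --deploy-mode cluster --class cn.edu360.sparkDay01.WordCount /root/spark-1.0-SNAPSHOT.jar hdfs://hadoop1:9000/WordCount/text.txt hdfs://hadoop1:9000/WordCount/output-yarn
Here output-yarn is only a placeholder output directory.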
4. Mesos cluster
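Submitting to Mesos differs only in the master URL, e.g. --master mesos://host:5050, where host and port are placeholders for the Mesos master.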
II. Writing a program in IDEA and submitting it to run on the cluster
0. Maven project pom file
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>cn.edu360</groupId>
<artifactId>spark</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<scala.version>2.11.8</scala.version>
<spark.version>2.2.0</spark.version>
<hadoop.version>2.8.1</hadoop.version>
<encoding>UTF-8</encoding>
</properties>
<dependencies>
<!-- Scala dependency -->
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<!-- Spark dependency -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
<!-- pin the hadoop-client API version -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>${hadoop.version}</version>
</dependency>
</dependencies>
<build>
<pluginManagement>
<plugins>
<!-- plugin for compiling Scala -->
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>3.2.2</version>
</plugin>
<!-- plugin for compiling Java -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
</plugin>
</plugins>
</pluginManagement>
<plugins>
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<executions>
<execution>
<id>scala-compile-first</id>
<phase>process-resources</phase>
<goals>
<goal>add-source</goal>
<goal>compile</goal>
</goals>
</execution>
<execution>
<id>scala-test-compile</id>
<phase>process-test-resources</phase>
<goals>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<executions>
<execution>
<phase>compile</phase>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- jar packaging plugin -->
<!--<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.3</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
<resource>reference.conf</resource>
</transformer>
<!– specify the main class –>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass></mainClass>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
</plugin>-->
</plugins>
</build>
</project>
1. WordCount with the Scala API
flatMap and map here are methods on the RDD.
The Scala collections API also has flatMap and map methods.
Only the names are the same: one set belongs to RDD, the other to local collections.
So operating on an RDD feels just like operating on a local collection.
When code is run directly in the spark-shell there is no need to write a main method; a standalone program like the one below still needs one.
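A quick contrast as a minimal sketch (sc is assumed to be an existing SparkContext and the path is a placeholder):
val localWords = List("a b", "c").flatMap(_.split(" "))   // Scala collection: runs immediately in this JVM
val rddWords = sc.textFile("hdfs://.../input").flatMap(_.split(" "))   // RDD: builds a lazy, distributed computation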
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
object WordCount {
def main(args: Array[String]): Unit = {
if(args.length!=2){
println("cn.edu360.sparkDay01.WordCount <input> <output>")
sys.exit()
}
val Array(input,output)=args
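// master URL and app name come from the submit-time configuration (new SparkContext() builds a default SparkConf)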
val sc: SparkContext = new SparkContext()
// read the input
val data: RDD[String] = sc.textFile(input)
// split into words
val cutRes: RDD[String] = data.flatMap(_.split(" "))
// pair each word with 1
val combyWithOne: RDD[(String, Int)] = cutRes.map((_,1))
// group and aggregate by key
val combyRes: RDD[(String, Int)] = combyWithOne.reduceByKey((_+_))
// sort by count (sortBy is ascending by default; pass false as the second argument for descending)
val result: RDD[(String, Int)] = combyRes.sortBy(_._2)
// write the result
result.saveAsTextFile(output)
}
}
2. WordCount with the Java API
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;
import java.util.Arrays;
import java.util.Iterator;
public class WordCount {
public static void main(String[] args) {
if(args.length!=2){
System.out.println("cn.edu360.java.WordCount <input> <output>");
return;
}
SparkConf conf = new SparkConf();
JavaSparkContext jsc = new JavaSparkContext(conf);
// read the data
JavaRDD<String> data = jsc.textFile(args[0]);
// split into words
JavaRDD<String> cutRes = data.flatMap(new FlatMapFunction<String, String>() {
@Override
public Iterator<String> call(String s) throws Exception {
return Arrays.asList(s.split(" ")).iterator();
}
});
// pair each word with 1
JavaPairRDD<String, Integer> combyWithOne = cutRes.mapToPair(new PairFunction<String, String, Integer>() {
@Override
public Tuple2<String, Integer> call(String word) throws Exception {
return new Tuple2<>(word, 1);
}
});
// group and aggregate by key
JavaPairRDD<String, Integer> reduceRes = combyWithOne.reduceByKey(new Function2<Integer, Integer, Integer>() {
@Override
public Integer call(Integer integer1, Integer integer2) throws Exception {
return integer1 + integer2;
}
});
// sort by count
// 1. swap key and value
JavaPairRDD<Integer, String> change1 = reduceRes.mapToPair(new PairFunction<Tuple2<String, Integer>, Integer, String>() {
@Override
public Tuple2<Integer, String> call(Tuple2<String, Integer> stringIntegerTuple2) throws Exception {
return stringIntegerTuple2.swap();
}
});
// 2. sort by key (the count), descending
JavaPairRDD<Integer, String> sortRes = change1.sortByKey(false);
// 3. swap back
JavaPairRDD<String, Integer> change2 = sortRes.mapToPair(new PairFunction<Tuple2<Integer, String>, String, Integer>() {
@Override
public Tuple2<String, Integer> call(Tuple2<Integer, String> integerStringTuple2) throws Exception {
return integerStringTuple2.swap();
}
});
// write out the data
change2.saveAsTextFile(args[1]);
}
}
3. WordCount with Java lambdas
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;
import java.util.Arrays;
import java.util.Iterator;
public class WordCountLambda {
public static void main(String[] args) {
if(args.length!=2){
System.out.println("cn.edu360.java.WordCountLambda <input> <output>");
return;
}
SparkConf conf = new SparkConf();
JavaSparkContext jsc = new JavaSparkContext(conf);
// read the data
JavaRDD<String> data = jsc.textFile(args[0]);
// split into words
JavaRDD<String> cutRes = data.flatMap(t -> Arrays.asList(t.split(" ")).iterator());
// pair each word with 1
JavaPairRDD<String, Integer> combyRes = cutRes.mapToPair(t -> new Tuple2<>(t, 1));
// group and aggregate by key
JavaPairRDD<String, Integer> reduceRes = combyRes.reduceByKey((a, b) -> a + b);
// sort: swap key and value, sortByKey descending, then swap back
JavaPairRDD<Integer, String> swapRes = reduceRes.mapToPair(t -> t.swap());
JavaPairRDD<Integer, String> sortRes = swapRes.sortByKey(false);
JavaPairRDD<String, Integer> result = sortRes.mapToPair(t -> t.swap());
// write out the result
result.saveAsTextFile(args[1]);
}
}
4. Running a Spark program in local mode
conf.setMaster("local[*]"): local, local[2], and local[*] are all valid; local[*] (one worker thread per available core) is the usual choice.
conf.setAppName(WordCount.getClass.getSimpleName)
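Put together, a sketch of the context construction for a local run (it would replace new SparkContext() in the WordCount main above and also needs import org.apache.spark.SparkConf):
val conf = new SparkConf()
conf.setMaster("local[*]")   // run inside this JVM with one worker thread per core
conf.setAppName(WordCount.getClass.getSimpleName)
val sc = new SparkContext(conf)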
5. Package the program, upload it to the cluster, and run it (the pieces below combine into the single command shown after the list)
1. The submit command
spark-submit
2. Specify the master address
--master spark://hadoop1:7077
3. Fully qualified name of the main class to run
--class cn.edu360.sparkDay01.WordCount
4. The jar that has already been uploaded
/root/spark-1.0-SNAPSHOT.jar
5. HDFS path of the input file
hdfs://hadoop1:9000/WordCount/text.txt
6. Path in the file system where the output is written
hdfs://hadoop1:9000/WordCount/output2
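Assembled into a single command with exactly the values listed above:
spark-submit --master spark://hadoop1:7077 --class cn.edu360.sparkDay01.WordCount /root/spark-1.0-SNAPSHOT.jar hdfs://hadoop1:9000/WordCount/text.txt hdfs://hadoop1:9000/WordCount/output2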
III. Spark provides three places to configure the system
1. Spark properties
These control most application parameters and can be set with a SparkConf object or with Java system properties.
Spark properties control most application settings and are configured separately for each application. They can be set directly on a SparkConf, which is then passed to the SparkContext. SparkConf lets you configure common properties (such as the master URL and the application name) as well as arbitrary key-value pairs through the set() method. Properties set explicitly on SparkConf take the highest precedence, followed by flags passed to spark-submit, followed by values in spark-defaults.conf.
In local mode:
val conf = new SparkConf()
conf.setMaster("local[2]")
conf.setAppName("CountingSheep")
conf.set("spark.executor.memory", "1g")
val sc = new SparkContext(conf)
2. Configuration file
These settings can be made in the conf/spark-env.sh script on each node; see the example snippet after the variable list below.
- SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
- SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
- SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
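For example, a sketch of a conf/spark-env.sh using these variables (the values are only illustrative):
export SPARK_EXECUTOR_CORES=2
export SPARK_EXECUTOR_MEMORY=2g
export SPARK_DRIVER_MEMORY=1g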
3. Specified directly on the submit command
spark-submit
--master spark://hdp-01:7077   specify the master address
--executor-memory 2g   memory available to each executor, e.g. 2g or 512m (default 1024 MB)
--total-executor-cores 2   total number of CPU cores the job may use
--name "appName"   name under which the application runs
--executor-cores 1   number of CPU cores available to each executor
--jars xx.jar   extra jars used by the program
Note: if a worker node is short on memory, then when starting spark-shell you must not ask for more executor memory than the worker has available; size your task resources according to your workers' capacity.
IV. Spark execution flow diagram