There are two ways to run Spark from IDEA: local mode and remote mode.
1. Local mode
To debug a Spark program locally, use the local submit mode: the local machine serves as the runtime environment, with both Master and Worker running on it.
- Maven dependencies
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.edwin</groupId>
    <artifactId>CoreWordCount</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.11.8</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.1.1</version>
            <!--<scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.2</version>
            <!--<scope>provided</scope>-->
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                        <configuration>
                            <includes>
                                <include>**/*.scala</include>
                            </includes>
                        </configuration>
                    </execution>
                    <execution>
                        <id>scala-test-compile</id>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
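Note the commented-out <scope>provided</scope> lines: when running inside IDEA, spark-core and hadoop-client must be on the runtime classpath, so the scope stays at the default (compile). If you later package the jar for submission to a cluster (as in the remote section below), you can uncomment them so the Spark and Hadoop jars already present on the cluster are used instead of being bundled into your artifact.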
- Scala code
package main

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Create a SparkConf and set the application name
    val conf = new SparkConf().setMaster("local[*]").setAppName("WC")
    // Create the SparkContext, the entry point for submitting a Spark app
    val sc = new SparkContext(conf)
    // Use sc to create an RDD and run the transformations and the action
    sc.textFile("D:\\words.txt")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _, 1)
      .sortBy(_._2, ascending = false)
      .saveAsTextFile("D:\\output")
    // Stop sc to end the job
    sc.stop()
  }
}
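For quick local debugging it is often handier to print the counts to the console than to write files (saveAsTextFile fails with a FileAlreadyExistsException if D:\output already exists). Below is a minimal sketch of that variant, assuming the same D:\words.txt input as above; the object name WordCountConsole is my own.

package main

import org.apache.spark.{SparkConf, SparkContext}

object WordCountConsole {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("WCConsole")
    val sc = new SparkContext(conf)

    sc.textFile("D:\\words.txt")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)
      .collect()               // bring the (small) result set to the driver
      .foreach(println)        // print each (word, count) pair

    sc.stop()
  }
}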
2. Remote debugging
Remote debugging through IDEA essentially means using IDEA as the Driver to submit the application to the cluster.
package main

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Create a SparkConf and set the application name
    val conf = new SparkConf()
      .setMaster("spark://l0:7077")
      .setAppName("WordCount")
      // Ship the packaged jar so the executors can load the application classes
      .setJars(Array("D:\\CoreWordCount-1.0-SNAPSHOT.jar"))
      // Address the executors use to reach the driver (this machine)
      .setIfMissing("spark.driver.host", "192.168.191.130")
    // Create the SparkContext, the entry point for submitting a Spark app
    val sc = new SparkContext(conf)
    // Use sc to create an RDD and run the transformations and the action
    sc.textFile("hdfs://l0:8020/words.txt")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _, 1)
      .sortBy(_._2, ascending = false)
      .saveAsTextFile("hdfs://l0:8020/output")
    // Stop sc to end the job
    sc.stop()
  }
}
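One practical refinement, not from the original: hard-coding the master URL, jar path, and HDFS paths means editing the source for every environment. The sketch below takes them as program arguments instead; the argument order is my own convention. In IDEA, set the values under Run > Edit Configurations > Program arguments, and run mvn package first so the jar at args(1) actually exists.

package main

import org.apache.spark.{SparkConf, SparkContext}

object WordCountArgs {
  def main(args: Array[String]): Unit = {
    // args(0) = master URL, args(1) = packaged jar path,
    // args(2) = input path, args(3) = output path.
    // Throws a MatchError unless exactly four arguments are given.
    val Array(master, jar, input, output) = args

    val conf = new SparkConf()
      .setMaster(master)
      .setAppName("WordCountArgs")
      .setJars(Array(jar))
      // Same example driver address as above; executors must be able to reach it
      .setIfMissing("spark.driver.host", "192.168.191.130")
    val sc = new SparkContext(conf)

    sc.textFile(input)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _, 1)
      .sortBy(_._2, ascending = false)
      .saveAsTextFile(output)

    sc.stop()
  }
}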