Writing a WordCount Program on Spark 2.4.8 (Scala Version)
1. Local Development and Testing

- Create a new Maven project
- Add the Spark-related dependencies to pom.xml:
Note: the suffix of the Spark artifact must match the Scala binary version. Since this project compiles against scala-library 2.11.8, the artifact property must be `2.11` (not `2.12`), otherwise the build mixes incompatible Scala binaries.

```xml
<packaging>jar</packaging>

<properties>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.4.8</spark.version>
    <!-- Must match the Scala binary version of scala-library (2.11) -->
    <spark.artifact.version>2.11</spark.artifact.version>
    <hadoop.version>2.7.3</hadoop.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_${spark.artifact.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <!-- Compile and package with Scala 2.11.8 -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
</dependencies>

<build>
    <!-- Location of the Scala source code -->
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
        <!-- Compiles .java files under src/main/java -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.8.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
        </plugin>
        <!-- Scala compile/package plugin -->
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>4.5.4</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <scalaVersion>${scala.version}</scalaVersion>
            </configuration>
        </plugin>
    </plugins>
</build>
```
- Create a package com.niit.scala.wc under src/main/scala
- In the package created in the previous step, create a Scala object named WordCount
- Write the WordCount code
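The original listing is not preserved here. Below is a minimal local-mode sketch that mirrors the cluster version shown later in this article, but with `setMaster("local[*]")` so it runs entirely inside the IDE; the input path `datas/data.txt` is an assumption and should point at any local text file:

```scala
package com.niit.scala.wc

import org.apache.spark.{SparkConf, SparkContext}

// Local-mode WordCount: runs in the IDE, no cluster or HDFS required
object WordCount {
  def main(args: Array[String]): Unit = {
    // 1. Create the SparkContext in local mode ("local[*]" uses all CPU cores)
    val sparkConf = new SparkConf()
      .setAppName("ScalaWordCount")
      .setMaster("local[*]")
    val sc = new SparkContext(sparkConf)

    // 2. Read a local text file, split lines into words, count each word
    sc.textFile("datas/data.txt") // hypothetical local input path
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .collect()          // bring the (word, count) pairs back to the driver
      .foreach(println)   // print each pair to the console

    // 3. Stop the SparkContext and release resources
    sc.stop()
  }
}
```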
- Run the program locally and inspect the output in the console
2. Packaging and Uploading to a Remote Server

Rewrite the local-mode code into cluster-mode code, package it, upload it to the virtual machine, and submit it to the cluster manager with spark-submit (exercise for students).

- Rewrite the code as follows:
```scala
package com.niit.scala.wc

import org.apache.spark.{SparkConf, SparkContext}

// WordCount submitted to the standalone Spark cluster
object ScalaRemoteWc {
  def main(args: Array[String]): Unit = {
    // 1. Create the SparkContext, pointing at the cluster master
    val sparkConf = new SparkConf()
      .setAppName("ScalaWordCount")
      .setMaster("spark://niit110:7077")
    val sc = new SparkContext(sparkConf)

    // 2. Job logic: read the input from HDFS (Hadoop must be up and running)
    sc.textFile("hdfs://niit110:9000/datas/data.txt")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile("hdfs://niit110:9000/datas/out/wc01/") // action operator triggers the job

    // 3. Stop the SparkContext to release resources
    sc.stop()
  }
}
```
- Package the program
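With the compiler, assembly, and scala-maven plugins configured in the pom.xml above, packaging is a standard Maven build run from the project root. Note that the assembly plugin above has no execution bound to the `package` phase, so the jar-with-dependencies must be requested explicitly:

```shell
# Compile the Scala sources and build the project jar under target/
mvn clean package

# Optionally build the fat jar (jar-with-dependencies) as well
mvn assembly:single
```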
- Upload the jar to the virtual machine
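Any file-transfer tool works; a plain `scp` sketch is shown below. The user and host (`root@niit110`) are assumptions to adjust for your environment, and the jar name should match your build output:

```shell
# Copy the built jar from the local target/ directory to the server's /tools directory
scp target/SparkCoreModule-1.0-SNAPSHOT.jar root@niit110:/tools/
```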
- Submit the job with spark-submit from the Spark installation directory. The master URL and the fully qualified class name must match the code above:

```shell
bin/spark-submit \
  --master spark://niit110:7077 \
  --class com.niit.scala.wc.ScalaRemoteWc \
  /tools/SparkCoreModule-1.0-SNAPSHOT.jar
```
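Once the job finishes, `saveAsTextFile` leaves part files in the HDFS output directory named in the code. A quick way to inspect them, assuming the same paths as above:

```shell
# List the part files produced by saveAsTextFile
hdfs dfs -ls /datas/out/wc01/

# Print the (word, count) results
hdfs dfs -cat /datas/out/wc01/part-*
```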