See how WordCount runs in the Spark shell for comparison (a shell version is sketched after the IDEA code below).
Writing the WordCount program (IDEA)
- Preparation: create a new Maven project and add the following configuration to the pom (make sure the versions match your environment):
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.2.0</spark.version>
    <hadoop.version>2.6.5</hadoop.version>
    <encoding>UTF-8</encoding>
</properties>

<dependencies>
    <!-- Scala dependency -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <!-- Spark dependency -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Pin the hadoop-client API version -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>

<build>
    <pluginManagement>
        <plugins>
            <!-- Plugin that compiles Scala -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
            </plugin>
            <!-- Plugin that compiles Java -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.5.1</version>
            </plugin>
        </plugins>
    </pluginManagement>
    <plugins>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <executions>
                <execution>
                    <id>scala-compile-first</id>
                    <phase>process-resources</phase>
                    <goals>
                        <goal>add-source</goal>
                        <goal>compile</goal>
                    </goals>
                </execution>
                <execution>
                    <id>scala-test-compile</id>
                    <phase>process-test-resources</phase>
                    <goals>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <executions>
                <execution>
                    <phase>compile</phase>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <!-- Plugin that packages the jar -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <!-- Strip signature files so the merged jar passes signature verification -->
                        <filters>
                            <filter>
                                <artifact>*:*</artifact>
                                <excludes>
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
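Because the scala-maven-plugin's add-source goal is bound above, Scala sources are picked up from the conventional src/main/scala folder, so the project should end up laid out roughly like this (the package and file name follow the code below):

src/
└── main/
    └── scala/
        └── com/
            └── zpark/
                └── WordCount.scala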
- Create the Scala sources folder (where the Scala code lives), then write the WordCount implementation:
package com.zpark

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/*
Spark API code that runs on Linux.
To talk to Spark, configure it first: the application name (setAppName), where the
Spark master runs (setMaster), and then the operations Spark should execute.
The data to process is passed in through the args array of the main method.
*/
object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    conf.setAppName("WordCount")
    // "local" runs everything in-process, which is convenient for testing in IDEA.
    // A master set here in code overrides the --master flag of spark-submit, so
    // remove this line before submitting the jar to the cluster.
    conf.setMaster("local")
    val sc = new SparkContext(conf)
    // The explicit type annotations show what each step returns.
    val lines: RDD[String] = sc.textFile(args(0))                    // read the input line by line, e.g. hdfs://hdp-1:9000/spark/hi.txt
    val words: RDD[String] = lines.flatMap(_.split(","))             // split each line on commas
    val wordAndOne: RDD[(String, Int)] = words.map((_, 1))           // pair every word with a 1
    val reduced: RDD[(String, Int)] = wordAndOne.reduceByKey(_ + _)  // reduce: sum the 1s per word
    val sorted: RDD[(String, Int)] = reduced.sortBy(_._2)            // sort by the second tuple element (the count); the optional second argument is true for ascending (the default), false for descending
    sorted.saveAsTextFile(args(1))                                   // write the result to HDFS, e.g. hdfs://hdp-1:9000/spark/outs
    sc.stop()
  }
}
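For the comparison mentioned at the top: the same pipeline can be typed straight into the Spark shell, where sc is already created for you. A minimal sketch, assuming the same sample input file:

// In spark-shell no SparkConf/SparkContext setup is needed.
val lines = sc.textFile("hdfs://hdp-1:9000/spark/hi.txt")
lines.flatMap(_.split(","))
  .map((_, 1))
  .reduceByKey(_ + _)
  .sortBy(_._2, ascending = false) // descending this time: most frequent words first
  .collect()
  .foreach(println)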
- Package the project into a jar and copy it to the Linux machine (see the commands below).
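A sketch of the packaging and copy steps, assuming the artifactId is sparks (inferred from the jar name used below) and that the jar goes to /root/apps on hdp-1:

mvn clean package
# The shade plugin runs at the package phase; target/ will contain the merged jar
# plus the unshaded one, renamed original-sparks-1.0-SNAPSHOT.jar.
scp target/original-sparks-1.0-SNAPSHOT.jar root@hdp-1:/root/apps/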
Running on the Spark cluster
- On Linux, start HDFS, ZooKeeper, and Spark.
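A sketch of the start-up commands, assuming the usual script locations (HADOOP_HOME and SPARK_HOME set, ZooKeeper's bin on the PATH):

$HADOOP_HOME/sbin/start-dfs.sh     # HDFS: NameNode and DataNodes
zkServer.sh start                  # ZooKeeper; run on every ZooKeeper node
$SPARK_HOME/sbin/start-all.sh      # Spark master and workers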
- Submit the jar from Spark's bin directory. Note that --class must match the package and object name in the code (com.zpark.WordCount here):

./spark-submit \
  --master spark://hdp-1:7077 \
  --class com.zpark.WordCount \
  /root/apps/original-sparks-1.0-SNAPSHOT.jar \
  hdfs://hdp-1:9000/spark/hi.txt \
  hdfs://hdp-1:9000/spark/outs

- --master: the URL the cluster's master listens on
- --class: fully qualified name of the class to run inside the jar
- /root/apps/original-sparks-1.0-SNAPSHOT.jar: path of the jar on Linux
- hdfs://hdp-1:9000/spark/hi.txt: args(0), the input file on HDFS
- hdfs://hdp-1:9000/spark/outs: args(1), the HDFS output directory; it must not already exist, or saveAsTextFile will fail
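Before going to the cluster, the same jar can be smoke-tested locally; a sketch, assuming a local copy of the input file (the file:// paths here are placeholders):

./spark-submit --master local[2] --class com.zpark.WordCount \
  /root/apps/original-sparks-1.0-SNAPSHOT.jar \
  file:///root/hi.txt file:///root/outs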
Viewing the results
- In the browser, open http://hdp-0:50070 (the NameNode web UI) and check whether the result files were written.
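The output can also be inspected from the command line; the part-* files under the output directory hold the (word, count) tuples:

hdfs dfs -ls /spark/outs
hdfs dfs -cat /spark/outs/part-*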