1. Implementation in the Scala shell
scala> val rdd1 = sc.textFile("/home/centos/test.txt")  // load the text file, split by line: an RDD with one String per line
scala> val rdd2 = rdd1.flatMap(line => line.split(" ")) // flatten the lines: split each line on " " to get every word of every line
scala> val rdd3 = rdd2.map(word => (word, 1))           // transformation: turn the RDD[String] into an RDD[(String, Int)] by pairing each word with 1
scala> val rdd4 = rdd3.reduceByKey(_ + _)               // transformation: aggregate, summing the values of all pairs that share the same key
scala> rdd4.collect                                     // action: collect and print the result
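For illustration (a hypothetical input, not from the original): if test.txt contained the two lines "hello world" and "hello spark", the collect would return something like the following (element order may vary):

scala> rdd4.collect
res0: Array[(String, Int)] = Array((spark,1), (hello,2), (world,1))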
A shorter equivalent:
scala> sc.textFile("/home/centos/test.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_ + _).collect
Filtering words:
scala> sc.textFile("/home/centos/test.txt").flatMap(_.split(" ")).filter(_.contains("wor")).map((_,1)).reduceByKey(_ + _).collect
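Note that contains is case-sensitive, so "World" and "world" are counted separately. A common refinement (a sketch, not part of the original) is to lower-case each word before filtering and counting:

scala> sc.textFile("/home/centos/test.txt").flatMap(_.split(" ")).map(_.toLowerCase).filter(_.contains("wor")).map((_,1)).reduceByKey(_ + _).collect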
Everything up to and including reduceByKey (textFile, flatMap, filter, map, reduceByKey) is a transformation. Transformations are lazily evaluated: the pipeline produces no intermediate output, and nothing runs until an action such as collect is triggered. Transformations and actions were covered in detail in the previous post, so they are not analyzed further here.
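This laziness is easy to observe in the shell; a minimal sketch using only standard RDD methods:

scala> val words = sc.textFile("/home/centos/test.txt").flatMap(_.split(" ")) // returns immediately; no file is read yet
scala> words.toDebugString                                                    // prints the lineage (the execution plan) without running a job
scala> words.count                                                            // an action: only now does Spark launch a job and read the file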
2. Implementation as a Scala program
In practice, business logic is usually too complex to drive a Spark job from the command line alone, so you write a Scala program instead. This requires adding the Spark library as a dependency, after which WordCount can be implemented as follows.
1) The required dependency (pom.xml):
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.it18zhang</groupId>
    <artifactId>SparkDemo1</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.1.0</version>
        </dependency>
    </dependencies>
</project>
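Note that the artifact suffix _2.11 must match the Scala version the project is built with. For reference, the equivalent dependency in sbt would be (an assumption on my part; the original shows only Maven):

// build.sbt -- the %% operator appends the project's Scala binary version (here 2.11) to the artifact name
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"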
2) The Scala program (local mode):

import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by Administrator on 2017/4/20.
  */
object WordCountDemo {
  def main(args: Array[String]): Unit = {
    // create the Spark configuration object
    val conf = new SparkConf()
    conf.setAppName("WordCountSpark")
    // set the master property (local mode)
    conf.setMaster("local")

    // create the SparkContext from the configuration
    val sc = new SparkContext(conf)

    // load the text file
    val rdd1 = sc.textFile("d:/scala/test.txt")
    // flatten each line into words
    val rdd2 = rdd1.flatMap(line => line.split(" "))
    // map w => (w, 1)
    val rdd3 = rdd2.map((_, 1))
    // sum the counts per word
    val rdd4 = rdd3.reduceByKey(_ + _)
    val r = rdd4.collect()
    r.foreach(println)
  }
}
Again, everything from textFile through reduceByKey is a transformation; see the note in the previous section.
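As a small extension (a sketch, not part of the original program), the result can be sorted by count before collecting, so the most frequent words come first:

// descending sort on the count, then keep the top 10
val top10 = rdd4.sortBy(_._2, ascending = false).take(10)
top10.foreach(println)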
3. Implementation in Java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

/**
 * Java version
 */
public class WordCountJava2 {
    public static void main(String[] args) {
        // create the SparkConf object
        SparkConf conf = new SparkConf();
        conf.setAppName("WordCountJava2");
        conf.setMaster("local");

        // create the Java SparkContext
        JavaSparkContext sc = new JavaSparkContext(conf);
        // load the text file
        JavaRDD<String> rdd1 = sc.textFile("d:/scala/test.txt");

        // flatten each line into words
        JavaRDD<String> rdd2 = rdd1.flatMap(new FlatMapFunction<String, String>() {
            public Iterator<String> call(String s) throws Exception {
                List<String> list = new ArrayList<String>();
                String[] arr = s.split(" ");
                for (String ss : arr) {
                    list.add(ss);
                }
                return list.iterator();
            }
        });

        // map word -> (word, 1)
        JavaPairRDD<String, Integer> rdd3 = rdd2.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) throws Exception {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // reduce: sum the counts per word
        JavaPairRDD<String, Integer> rdd4 = rdd3.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer v1, Integer v2) throws Exception {
                return v1 + v2;
            }
        });

        // collect and print the result
        List<Tuple2<String, Integer>> list = rdd4.collect();
        for (Tuple2<String, Integer> t : list) {
            System.out.println(t._1() + " : " + t._2());
        }
    }
}
The code itself needs no further explanation, but one tooling note is worth making. IDEs such as IDEA and Eclipse handle Java well and compile it automatically, and IDEA currently has the best Scala support; even so, Scala compilation in IDEA can still fail (for example, when automatic compilation is not enabled). The fix is to add the Scala compiler plugin to the Maven build, which prevents such compilation failures. The plugin configuration is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.it18zhang</groupId>
    <artifactId>SparkDemo1</artifactId>
    <version>1.0-SNAPSHOT</version>

    <build>
        <sourceDirectory>src/main/java</sourceDirectory>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <configuration>
                    <recompileMode>incremental</recompileMode>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.1.0</version>
        </dependency>
    </dependencies>
</project>
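With this build section in place, running mvn compile (or mvn package) invokes both the maven-compiler-plugin for the Java sources and the scala-maven-plugin for the Scala sources, so the project compiles consistently from the command line regardless of how the IDE's own incremental compiler behaves.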