Environment: Scala IDE + Maven
Create a Maven project in Scala IDE, then add a src/main/scala source directory.
pom.xml configuration:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.le.hive.test</groupId>
  <artifactId>test.spark</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>test.spark</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>hadoop</groupId>
      <artifactId>lzo</artifactId>
      <version>0.4.20</version>
      <scope>system</scope>
      <systemPath>/bg/hadoop/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar</systemPath>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.6.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-mllib_2.10</artifactId>
      <version>1.6.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.10</artifactId>
      <version>1.6.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.10</artifactId>
      <version>1.6.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.10</artifactId>
      <version>1.6.1</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <version>2.15.2</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
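With this pom.xml in place, running mvn clean package compiles everything under src/main/scala (the maven-scala-plugin binds its compile and testCompile goals to the build) and packages the jar. One thing to watch: the _2.10 suffix on the Spark artifacts means they are built against Scala 2.10, so the Scala version configured for the project in Scala IDE should match.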
Two simple Scala examples. The first one turns a local collection into an RDD:
package com.test.spark

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object ParallelizeData {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("parallelize data example").setMaster("local")
    val sc = new SparkContext(conf)
    val data = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
    // convert the local collection into an RDD with 3 partitions
    val rdd = sc.parallelize(data, 3)
    val count = rdd.count()   // action: number of elements, 10
    println(count)
    val sum = rdd.sum()       // action: sum of elements, 55.0
    println(sum)
    sc.stop()
  }
}
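Beyond count and sum, the same RDD supports the usual transformations and actions. Below is a minimal sketch (the object and app names are mine, not from the original post) showing map, filter, and reduce under the same local master:

package com.test.spark

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

// Illustrative follow-up: a few more basic RDD transformations and actions.
object RddOpsSketch {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("rdd ops sketch").setMaster("local")
    val sc = new SparkContext(conf)
    val rdd = sc.parallelize(1 to 10, 3)
    val squares = rdd.map(x => x * x)          // transformation: square each element
    val evens = rdd.filter(_ % 2 == 0)         // transformation: keep even numbers
    val total = rdd.reduce(_ + _)              // action: sum via reduce, returns 55
    println(squares.collect().mkString(","))   // 1,4,9,...,100
    println(evens.collect().mkString(","))     // 2,4,6,8,10
    println(total)
    sc.stop()
  }
}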
A word count example:
package com.test.spark

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object TextFile {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("textfile").setMaster("local")
    val sc = new SparkContext(conf)
    // the input can also be an external path, e.g. an HDFS URI
    val rdd = sc.textFile("file:///usr/local/spark-1.6.1/README.md", 3)
    val result = rdd.flatMap(line => line.split("\\s+"))   // split each line into words
      .map(word => (word, 1))                              // pair each word with a count of 1
      .reduceByKey(_ + _)                                  // sum the counts per word
    result.collect().foreach(x => println(x._1 + " " + x._2))
    sc.stop()
  }
}
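If only the most frequent words are of interest, the (word, count) pairs can be sorted before collecting. The following is an illustrative variant (the object name and the top-10 cutoff are my own choices), using RDD.sortBy, which is available in Spark 1.6:

package com.test.spark

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

// Illustrative variant: print the 10 most frequent words instead of all of them.
object TopWords {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("top words").setMaster("local")
    val sc = new SparkContext(conf)
    val rdd = sc.textFile("file:///usr/local/spark-1.6.1/README.md", 3)
    val counts = rdd.flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    // sort by count, descending, and bring only the top 10 back to the driver
    counts.sortBy(_._2, ascending = false)
      .take(10)
      .foreach { case (word, n) => println(word + " " + n) }
    sc.stop()
  }
}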