Scala SDK download: https://downloads.lightbend.com/scala/2.12.0/scala-2.12.0.zip
Scala IDE for Eclipse download: http://downloads.typesafe.com/scalaide-pack/4.7.0-vfinal-oxygen-212-20170929/scala-SDK-4.7.0-vfinal-2.12-win32.win32.x86_64.zip
I won't cover the installation itself, but before installing make sure JDK 8 and Maven are already in place.
If you have a Java background, Scala is easy to pick up; a couple of days is enough to get a working grasp of the language and start coding.
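For readers coming from Java, a few surface differences are worth seeing up front: val vs. var, type inference, and expression-oriented syntax. A minimal illustration (the names here are made up for demonstration):

```scala
object Basics {
  // a method body is an expression; its value is the return value
  def classify(count: Int): String =
    if (count > 1) "many" else "one"

  def main(args: Array[String]): Unit = {
    val greeting = "hello"   // immutable reference, type inferred as String
    var count = 1            // mutable variable
    count += 1
    println(s"$greeting: ${classify(count)}")  // string interpolation
  }
}
```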
First create a project:
File -> New -> Scala Project
Right-click the project, then:
Configure -> Convert to Maven Project
Scala also runs on the JVM, so Scala and Java jars can reference each other; just configure the pom file the same way you would for a Java Maven project.
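Because Scala compiles to ordinary JVM bytecode, Java library classes can be called directly from Scala code. A minimal sketch:

```scala
import java.util.ArrayList

object JavaInterop {
  // build a java.util.ArrayList (a plain Java class) from Scala code
  def makeList(items: String*): ArrayList[String] = {
    val list = new ArrayList[String]()
    items.foreach(list.add)
    list
  }

  def main(args: Array[String]): Unit = {
    val list = makeList("scala", "java")
    println(list.size())  // prints 2
  }
}
```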
Project structure:
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>spark-scala</groupId>
  <artifactId>spark-scala</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <!-- the _2.11 suffix is the Scala binary version; it must match the Scala
           version the project compiles with (Spark 2.2.0 supports Scala 2.11) -->
      <artifactId>spark-core_2.11</artifactId>
      <version>2.2.0</version>
    </dependency>
  </dependencies>
  <build>
    <sourceDirectory>src</sourceDirectory>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.6.1</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
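Note that maven-compiler-plugin only compiles Java sources; inside Eclipse the Scala IDE handles Scala compilation. If you also want command-line mvn package builds to compile the Scala code, the commonly used scala-maven-plugin can be added to the plugins section (a sketch; the version shown is an assumption, check for the current release):

```xml
<plugin>
  <groupId>net.alchim31.maven</groupId>
  <artifactId>scala-maven-plugin</artifactId>
  <version>3.2.2</version>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
        <goal>testCompile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```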
package com.test.spark

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object WordCount {
  def main(args: Array[String]): Unit = {
    // configuration
    val conf = new SparkConf().setAppName("WordCount-scala")
    val sc = new SparkContext(conf)
    // read the file from Hadoop; by default each line becomes one element of the RDD
    val textFile = sc.textFile("hdfs://192.168.7.202:900/test/sql.txt")
    // split each line into words on spaces
    val words = textFile.flatMap(line => line.split(" "))
    // map each word to a (word, 1) pair, then sum the counts per word
    val kv = words.map(word => (word, 1)).reduceByKey { case (num1, num2) => num1 + num2 }
    // save the result back to Hadoop
    kv.saveAsTextFile("hdfs://192.168.7.202:900/test/word-count-scala")
  }
}
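The flatMap -> map -> reduceByKey pipeline above can be sketched on plain Scala collections with no cluster at all, which is handy for checking the logic before packaging and submitting the jar; groupBy plus sum plays the role of reduceByKey here, and the input lines are made up:

```scala
object WordCountLocal {
  // same word-count logic as the Spark job, but on an in-memory Seq
  def count(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(" "))   // lines -> words
      .map(word => (word, 1))  // word -> (word, 1)
      .groupBy(_._1)           // group the pairs by word
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) }  // sum the counts

  def main(args: Array[String]): Unit = {
    println(count(Seq("hello world", "hello spark")))
  }
}
```

To run the real job, package the project into a jar and submit it to the cluster with spark-submit.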