1. spark-submit tool: submits a Spark job (a jar file)
(*) The tool shipped with Spark for submitting Spark jobs
(*) Example jar: /root/training/spark-2.1.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.0.jar
(*) SparkPi.scala example: estimating Pi with the Monte Carlo method
bin/spark-submit --master spark://bigdata11:7077 --class org.apache.spark.examples.SparkPi examples/jars/spark-examples_2.11-2.1.0.jar 100
Pi is roughly 3.1419547141954713
bin/spark-submit --master spark://bigdata11:7077 --class org.apache.spark.examples.SparkPi examples/jars/spark-examples_2.11-2.1.0.jar 300
Pi is roughly 3.141877971395932
Monte Carlo estimation of Pi: sample random points uniformly in a square; the fraction that lands inside the inscribed circle approximates Pi/4, so Pi ≈ 4 * (hits / samples). The trailing argument (100 and 300 above) controls how many slices of samples are generated, so a larger value generally gives a closer estimate.
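A minimal sketch of the idea in plain Scala (this is an illustration, not the actual SparkPi source; the sample count n is arbitrary):
    val n = 1000000
    val hits = (1 to n).count { _ =>
      val x = math.random * 2 - 1      // random point in [-1, 1] x [-1, 1]
      val y = math.random * 2 - 1
      x * x + y * y <= 1               // inside the unit circle?
    }
    println(s"Pi is roughly ${4.0 * hits / n}")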
2. spark-shell tool: an interactive command-line tool that itself runs as a Spark application
Two modes: (1) Local mode
bin/spark-shell
Log: Spark context available as 'sc' (master = local[*], app id = local-1518181597235).
(2) Cluster mode
bin/spark-shell --master spark://bigdata11:7077
Log: Spark context available as 'sc' (master = spark://bigdata11:7077, app id = app-20180209210815-0002).
Objects: Spark context available as 'sc'
Spark session available as 'spark'
The Spark session 'spark' is a unified entry point to Spark Core, Spark SQL, and Spark Streaming.
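Both objects can be used directly inside spark-shell; a minimal sketch for illustration:
    sc.parallelize(1 to 10).reduce(_ + _)    // 'sc' is the SparkContext, the RDD (Spark Core) API
    spark.sql("SELECT 1 AS one").show()      // 'spark' is the SparkSession, e.g. for Spark SQL
    spark.sparkContext                       // the SparkContext is also reachable from the session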
sc.textFile("hdfs://bigdata11:9000/input/data.txt")   read the file from HDFS through the sc object
.flatMap(_.split(" "))                                 split each line into words and flatten the result
.map((_,1))                                            map each word to a count of 1
.reduceByKey(_+_)                                      reduce by key, adding up the counts
.saveAsTextFile("hdfs://bigdata11:9000/output/spark/day0209/wc")
A side note:
.reduceByKey(_+_)
written out in full:
.reduceByKey((a,b) => a+b)
For example, given Array((Tom,1),(Tom,2),(Mary,3),(Tom,6)),
the values for Tom are grouped as (Tom,(1,2,6)) and the function is applied pairwise:
1+2 = 3
3+6 = 9
so the result is (Tom,9) and (Mary,3).
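This can be checked directly in spark-shell (a minimal sketch; the order of the collected output may vary):
    sc.parallelize(Array(("Tom",1),("Tom",2),("Mary",3),("Tom",6)))
      .reduceByKey(_+_)
      .collect()
    // Array((Tom,9), (Mary,3))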
3. Developing a WordCount program
http://spark.apache.org/docs/2.1.0/api/scala/index.html
(1) Scala version: developed in IDEA
(2) Java version (more verbose): developed in Eclipse
package mydemo

import org.apache.spark.{SparkConf, SparkContext}

object MyWordCount {
  def main(args: Array[String]): Unit = {
    // args(0) = input path, args(1) = output path
    val conf = new SparkConf().setAppName("MyScalaWordCount")
    val sc = new SparkContext(conf)
    sc.textFile(args(0))
      .flatMap(_.split(" "))
      .map((_,1))
      .reduceByKey(_+_)
      .saveAsTextFile(args(1))
    sc.stop()
  }
}
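After packaging the program into a jar it can be submitted just like the SparkPi example; a sketch of the command (the jar path and the output directory are placeholders, and the output directory must not already exist):
bin/spark-submit --master spark://bigdata11:7077 --class mydemo.MyWordCount /root/temp/MyWordCount.jar hdfs://bigdata11:9000/input/data.txt hdfs://bigdata11:9000/output/spark/wc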
package demo;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;
public class JavaWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("MyJavaWordCount");
        JavaSparkContext context = new JavaSparkContext(conf);

        // read the input file: args[0] = input path
        JavaRDD<String> lines = context.textFile(args[0]);

        // split each line into words and flatten the result
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String line) throws Exception {
                return Arrays.asList(line.split(" ")).iterator();
            }
        });

        // map each word to a count of 1
        JavaPairRDD<String, Integer> wordOne = words.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String word) throws Exception {
                return new Tuple2<String, Integer>(word, 1);
            }
        });

        // reduce by key, adding up the counts
        JavaPairRDD<String, Integer> count = wordOne.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer a, Integer b) throws Exception {
                return a + b;
            }
        });

        // collect the result to the driver and print it
        List<Tuple2<String, Integer>> result = count.collect();
        for (Tuple2<String, Integer> tuple : result) {
            System.out.println(tuple._1 + "\t" + tuple._2);
        }

        context.stop();
    }
}
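The Java version is submitted the same way; a sketch (the jar path is a placeholder, and only an input path is needed since the result is printed to the console):
bin/spark-submit --master spark://bigdata11:7077 --class demo.JavaWordCount /root/temp/MyJavaWordCount.jar hdfs://bigdata11:9000/input/data.txt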