Spark Task Execution
Spark-submit
(1) Edit the conf/slaves configuration file and set the worker host to hadoop1
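For this single-worker setup, conf/slaves only needs to list the one worker host:
hadoop1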
(2) Start the Spark pseudo-distributed cluster
./sbin/start-all.sh
(3) Submit a job with spark-submit (using the Monte Carlo Pi estimation example)
spark-submit --master spark://hadoop1:7077 --class org.apache.spark.examples.SparkPi /usr/local/spark/spark-2.1.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.0.jar 100
(4) spark-submit result
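The trailing argument 100 is the number of slices SparkPi uses for the Monte Carlo sampling; when the job finishes, the driver output should contain a line of the form "Pi is roughly 3.14..." (the exact value varies from run to run).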
Spark-shell
Local mode
(1) Edit the conf/slaves configuration file and set the worker host to hadoop1
(2) Start the Spark pseudo-distributed cluster
./sbin/start-all.sh
(3) Start spark-shell
spark-shell
(4) Submit the job
sc.textFile("spark_workCount.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
Cluster mode
(1) Edit the conf/slaves configuration file and set the worker host to hadoop1
(2) Start the Spark pseudo-distributed cluster
./sbin/start-all.sh
(3) Start spark-shell
spark-shell --master spark://hadoop1:7077
(4) Create the /spark/tmp directory in HDFS
hdfs dfs -mkdir -p /spark/tmp
(5) Upload spark_workCount.txt to HDFS
hdfs dfs -put spark_workCount.txt /spark/tmp
(6) Submit the job
sc.textFile("hdfs://hadoop1:9000/spark/tmp/spark_workCount.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).saveAsTextFile("hdfs://hadoop1:9000/spark/output")
Spark WordCount
Scala local mode
(1) Copy the jars from Spark's jars directory into the IDEA project's resources (note: keep the Scala version consistent with the version of the jars)
(2) Start spark-shell
./sbin/start-all.sh
spark-shell
(3) Run WordCount in local mode
package spark
import org.apache.spark.{SparkConf, SparkContext}
object WordCount {
  def main(args: Array[String]): Unit = {
    // create the Spark configuration
    val conf = new SparkConf().setAppName("Scala WordCount").setMaster("local")
    // instantiate the SparkContext
    val sc = new SparkContext(conf)
    // local mode: read the input file and count the words
    val result = sc.textFile("hdfs://192.168.138.130:9000/spark/tmp/spark_workCount.txt")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
    // print the result
    result.foreach(println)
  }
}
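Instead of copying the Spark jars into the project by hand, the dependency can also be declared with a build tool; a minimal build.sbt consistent with the Spark 2.1.0 / Scala 2.11 jars used here (and with the Spark-1.0-SNAPSHOT.jar packaged later for cluster mode) might look like this sketch:
name := "Spark"
version := "1.0-SNAPSHOT"
scalaVersion := "2.11.8"  // assumption: any 2.11.x release matching the Spark build
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"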
Scala cluster mode
(1) Write the Scala code
package spark
import org.apache.spark.{SparkConf, SparkContext}
object WordCount {
  def main(args: Array[String]): Unit = {
    // create the Spark configuration (no master is set here; it is supplied by spark-submit)
    val conf = new SparkConf().setAppName("Scala WordCount")
    // instantiate the SparkContext
    val sc = new SparkContext(conf)
    // cluster mode: read the input path and write the output path from the command-line arguments
    sc.textFile(args(0))
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile(args(1))
    // shut down
    sc.stop()
  }
}
(2) Package the code into a jar and copy it to the Linux machine
(3) Start Spark
./sbin/start-all.sh
(4) Submit the job
spark-submit --master spark://hadoop1:7077 --class spark.WordCount /root/Spark-1.0-SNAPSHOT.jar hdfs://192.168.138.130:9000/spark/tmp/spark_workCount.txt hdfs://192.168.138.130:9000/spark/wordcount
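Once the job finishes, the word counts can be inspected with:
hdfs dfs -cat hdfs://192.168.138.130:9000/spark/wordcount/part-*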
Java local mode
(1) Copy the jars from Spark's jars directory into the IDEA project's resources (note: keep the Scala version consistent with the version of the jars)
(2) Start spark-shell
./sbin/start-all.sh
spark-shell
(3) Run WordCount in local mode
package com.spark.util;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
/**
* Spark WordCount
*
* @author Jabin
* @version 1.00 2019/
*/
public class WordCount {
    public static void main(String[] args) {
        // create the Spark configuration
        SparkConf conf = new SparkConf().setAppName("Spark.WordCount").setMaster("local");
        // create the JavaSparkContext from the configuration
        JavaSparkContext sc = new JavaSparkContext(conf);
        // local mode: read the input file
        JavaRDD<String> textFile = sc.textFile("hdfs://192.168.138.130:9000/spark/tmp/spark_workCount.txt");
        // split each line into words
        JavaRDD<String> flatMap = textFile.flatMap(new FlatMapFunction<String, String>() {
            public Iterator<String> call(String s) {
                return Arrays.asList(s.split(" ")).iterator();
            }
        });
        // map each word to a (word, 1) pair
        JavaPairRDD<String, Integer> map = flatMap.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });
        // sum the counts for each word
        JavaPairRDD<String, Integer> reduce = map.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer a, Integer b) {
                return a + b;
            }
        });
        // collect and print the result
        List<Tuple2<String, Integer>> list = reduce.collect();
        for (Tuple2<String, Integer> tuple : list) {
            System.out.println(tuple._1 + " : " + tuple._2);
        }
    }
}
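On Java 8 and later, each of the anonymous Function classes above can be replaced by a lambda (for example reduceByKey((a, b) -> a + b)); the behaviour is identical, only the code gets considerably shorter.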