Submitting Spark jobs to a Spark cluster from Eclipse on Windows
Environment:
- Local Windows 7 system
- Spark 2.0.0 both locally and on the cluster
- Eclipse (Luna)
Run modes:
- local
- Spark Standalone
- YARN
The program code is as follows:
package sparkproject1;

import scala.Tuple2;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.regex.Pattern;

public final class wordcount {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {
        // Point the driver at the standalone master (address masked).
        SparkConf conf = new SparkConf().setAppName("JavaWordCount").setMaster("spark://**.**.*.*:7077");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // Ship the exported application jar to the worker nodes.
        sc.addJar("F:\\大数据\\jar包\\wordcount.jar");

        // Read the input file from HDFS (address masked).
        JavaRDD<String> lines = sc.textFile("hdfs://****:9000/input/input.txt");

        // Split each line into words.
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String s) {
                return Arrays.asList(SPACE.split(s)).iterator();
            }
        });

        // Map each word to a (word, 1) pair.
        JavaPairRDD<String, Integer> ones = words.mapToPair(
            new PairFunction<String, String, Integer>() {
                @Override
                public Tuple2<String, Integer> call(String s) {
                    return new Tuple2<>(s, 1);
                }
            });

        // Sum the counts for each word.
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(
            new Function2<Integer, Integer, Integer>() {
                @Override
                public Integer call(Integer i1, Integer i2) {
                    return i1 + i2;
                }
            });

        // Collect the results back to the driver and print them.
        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<?, ?> tuple : output) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }

        sc.stop();
    }
}
local mode
For local mode, you only need to change the setMaster(...) call in the program to setMaster("local"); this mode generally runs without any problems.
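For reference, the only change to the program above is the master URL; a minimal sketch, assuming the job is launched straight from Eclipse:

// local mode: the driver and executors all run inside the Eclipse JVM
SparkConf conf = new SparkConf()
        .setAppName("JavaWordCount")
        .setMaster("local");   // or "local[*]" to use all local CPU cores
JavaSparkContext sc = new JavaSparkContext(conf);
// sc.addJar(...) is not needed here, since nothing has to be shipped to remote workers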
Spark Standalone mode
Before the statement sc.addJar("F:\\大数据\\jar包\\wordcount.jar"); was added, the following error was reported:
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field
org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of
org.apache.spark.rdd.MapPartitionsRDD
After adding it, all the worker nodes reported the following error:
java.lang.RuntimeException: Stream '/jars/wordcount.jar' was not found.
Clearly the jar had not been shipped to the worker nodes. After exporting the generated jar and placing it at the path above, the job ran successfully.
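For reference, an equivalent way to ship the application jar is to list it on the SparkConf before the context is created; a minimal sketch, assuming the same exported jar path as above:

SparkConf conf = new SparkConf()
        .setAppName("JavaWordCount")
        .setMaster("spark://**.**.*.*:7077")
        // ship the exported application jar to the executors at startup
        .setJars(new String[] { "F:\\大数据\\jar包\\wordcount.jar" });
JavaSparkContext sc = new JavaSparkContext(conf);

Either way, the exported jar must actually exist at that local path when the job starts, which is why placing it there made the run succeed.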
YARN mode
The modified code is as follows:
SparkConf conf = new SparkConf().setAppName("JavaWordCount").setMaster("yarn-client");
conf.set("spark.yarn.dist.files", "src\\yarn-site.xml");
Copy core-site.xml, hdfs-site.xml, and yarn-site.xml from the Hadoop cluster's configuration directory into the project's src folder; after that you can simply run the program as a Java application.
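Putting these pieces together, the driver-side setup for YARN client mode looks roughly like this (a sketch based only on the changes described above; the rest of the word count program is unchanged):

// YARN client mode: the driver runs inside Eclipse, executors run in YARN containers.
// core-site.xml, hdfs-site.xml and yarn-site.xml are on the classpath via the src folder.
SparkConf conf = new SparkConf()
        .setAppName("JavaWordCount")
        .setMaster("yarn-client");
// ship the YARN client configuration with the application
conf.set("spark.yarn.dist.files", "src\\yarn-site.xml");
JavaSparkContext sc = new JavaSparkContext(conf);
// ... textFile / flatMap / mapToPair / reduceByKey / collect as before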
Some tutorials also include the following line:
sparkConf.set("spark.yarn.jar", "hdfs://192.168.0.1:9000/user/bigdatagfts/spark-assembly-1.5.2-hadoop2.6.0.jar");
That line sets the HDFS location of the Spark jar. I did not set it, yet the logs still show a jar being uploaded; I am not entirely clear on the underlying mechanism, i.e. why such a jar needs to be uploaded at all. One more note on the yarn-client master set above: if your machine is itself inside the cluster, it should be set to yarn-cluster instead.
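For what it's worth, the spark-assembly jar in that snippet comes from Spark 1.x; in Spark 2.0 the assembly was removed and spark.yarn.jar was replaced by spark.yarn.jars. When neither spark.yarn.jars nor spark.yarn.archive is set, Spark uploads its own jars to the YARN staging directory on each submission, which is presumably the upload seen in my logs. A hedged sketch of pre-staging them to skip that upload (the HDFS path is illustrative, not from my setup):

// Spark 2.x only: point spark.yarn.jars at jars already uploaded to HDFS
// so they are not re-uploaded on every submission. The path below is hypothetical.
conf.set("spark.yarn.jars", "hdfs://****:9000/spark/jars/*.jar");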