1. Make sure the IDE has Maven support, then create the simplest possible project from the maven-archetype-quickstart archetype.
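For reference, the same project can also be generated from the command line; the groupId and artifactId below are assumptions chosen to match the package name used in the code later on:

    mvn archetype:generate -DgroupId=MavenDemo -DartifactId=SparkDemoSrc -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false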
2. Edit pom.xml to add Spark support. The relevant fragments:
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-resources-plugin</artifactId>
            <version>2.4.3</version>
        </plugin>
    </plugins>
</build>

<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.1.0</version>
    </dependency>
</dependencies>
3. Right-click the project and run maven-clean, then maven-install.
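Equivalently, if Maven is on the PATH, the same build can be run from a terminal in the project root:

    mvn clean install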
4. Add the Spark word-count code:
package MavenDemo.SparkDemoSrc;

/**
 * Spark word count.
 * User: hadoop
 * Date: 2014/10/10
 * Time: 19:26
 */

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public final class App {

    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.err.println("Usage: JavaWordCount <file>");
            System.exit(1);
        }

        SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);

        // Read the input file as an RDD of lines.
        JavaRDD<String> lines = ctx.textFile(args[0], 1);

        // Split each line on spaces into individual words.
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String s) {
                return Arrays.asList(SPACE.split(s));
            }
        });

        // Map each word to a (word, 1) pair.
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts for each word.
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer i1, Integer i2) {
                return i1 + i2;
            }
        });

        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<?, ?> tuple : output) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }
        ctx.stop();
    }
}
5. Run main in local mode (see the sketch below for supplying the master).
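The code above never calls setMaster, so when main() is launched straight from the IDE the master URL must be supplied, otherwise Spark fails with "A master URL must be set in your configuration". A minimal sketch of one way to do it; the choice of two local worker threads is arbitrary:

    // Replace the SparkConf construction in App.main with a local master.
    // Alternatively, keep the code unchanged and add -Dspark.master=local
    // to the VM options of the IDE run configuration.
    SparkConf sparkConf = new SparkConf()
            .setAppName("JavaWordCount")
            .setMaster("local[2]");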
6. Download spark-1.6.0-bin-hadoop2.6 and set SPARK_HOME.
7. Note: this configuration step is needed only on Windows.
Download the Hadoop helper package for Windows (it comes in 32-bit and 64-bit variants) and create a local hadoop directory, which must contain a bin subdirectory, e.g. D:\spark\hadoop-2.6.0\bin.
Then place winutils.exe and the accompanying files into that bin directory.
Download: https://github.com/sdravida/hadoop2.6_Win_x64/tree/master/bin
Set HADOOP_HOME to the hadoop directory (a code-based alternative is sketched below).
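If editing environment variables is inconvenient, a commonly used alternative (an addition here, not part of the original steps) is to set the hadoop.home.dir system property in code before the JavaSparkContext is created; Hadoop's Windows shell utilities consult this property (or HADOOP_HOME) to locate bin\winutils.exe:

    // Assumes the layout from step 7: D:\spark\hadoop-2.6.0\bin\winutils.exe
    System.setProperty("hadoop.home.dir", "D:\\spark\\hadoop-2.6.0");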
8. Run main and you will see the word-count results.
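For instance, if the file passed as args[0] contains just the line "hello world hello", the printed result (element order is not guaranteed) would look like:

    hello: 2
    world: 1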