First, start the Master: ./sbin/start-master.sh
Then start a Worker and point it at the Master: ./sbin/start-slave.sh spark://master:7077
Launch the Spark shell against the cluster: ./bin/spark-shell --master spark://master:7077
In the Spark shell, run the following commands:
val line = sc.textFile("file:///home/spark/spark-2.2.0-bin-hadoop2.7/README.md")
val result = line.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
result.saveAsTextFile("file:///home/spark/spark-2.2.0-bin-hadoop2.7/out")
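To see what the flatMap/map/reduceByKey pipeline actually computes, here is the same word-count logic sketched on a plain Scala collection, so it runs without Spark. The input lines are made-up sample data, and groupBy plus a sum stands in for reduceByKey, which performs the equivalent per-key aggregation across partitions.

```scala
// Word-count logic from the shell session above, on an ordinary Seq
// instead of an RDD. Sample input is illustrative only.
val lines = Seq("hello spark", "hello world")

val counts = lines
  .flatMap(_.split(" "))                          // split each line into words
  .map((_, 1))                                    // pair each word with a count of 1
  .groupBy(_._1)                                  // group pairs by word
  .map { case (w, ps) => (w, ps.map(_._2).sum) }  // sum the 1s per word, like reduceByKey(_ + _)

println(counts)  // Map with hello -> 2, spark -> 1, world -> 1
```

In the real job, saveAsTextFile writes one part-NNNNN file per partition into the out directory rather than printing to the console.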