1 Starting the cluster
- Start HDFS
start-dfs.sh
- Start the Spark cluster
/home/hadoop/apps/spark-1.6.3-bin-hadoop2.6/sbin/start-all.sh
- Start the Spark shell
/home/hadoop/apps/spark-1.6.3-bin-hadoop2.6/bin/spark-shell --master spark://node1:7077 --executor-memory 512m --total-executor-cores 2
2 WordCount execution flow
Five RDDs are produced in total:
- textFile produces 2 RDDs: a HadoopRDD and a MapPartitionsRDD
- flatMap produces 1 RDD: a MapPartitionsRDD
- map produces 1 RDD: a MapPartitionsRDD
- reduceByKey produces 1 RDD: a ShuffledRDD
scala> val rdd = sc.textFile("hdfs://node1:9000/wc").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_)
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[9] at reduceByKey at <console>:27
scala> rdd.saveAsTextFile("hdfs://node1:9000/wcout1")
scala> rdd.toDebugString
res4: String =
(3) ShuffledRDD[9] at reduceByKey at <console>:27 []
+-(3) MapPartitionsRDD[8] at map at <console>:27 []
| MapPartitionsRDD[7] at flatMap at <console>:27 []
| hdfs://node1:9000/wc MapPartitionsRDD[6] at textFile at <console>:27 []
| hdfs://node1:9000/wc HadoopRDD[5] at textFile at <console>:27 []
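The same flatMap → map → reduceByKey chain can be mimicked with plain Scala collections, which makes the per-step transformations easy to inspect without a cluster. This is only an illustrative sketch: `groupBy` plus a per-key sum stands in for the shuffle that `reduceByKey` performs, and the sample lines are hypothetical input, not the contents of hdfs://node1:9000/wc.

```scala
// Hypothetical sample input standing in for the lines of the HDFS file.
val lines = Seq("hello spark", "hello world")

val counts = lines
  .flatMap(_.split(" "))   // split each line into words (RDD flatMap step)
  .map((_, 1))             // pair each word with a count of 1 (RDD map step)
  .groupBy(_._1)           // group pairs by word -- local analog of the shuffle
  .map { case (word, pairs) => (word, pairs.map(_._2).sum) } // sum counts per word

// counts == Map("hello" -> 2, "spark" -> 1, "world" -> 1)
```

On a real RDD, `reduceByKey(_+_)` fuses the grouping and summing into one shuffled stage; the `groupBy`-then-`sum` split above is just the clearest local equivalent.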