Spark Learning Notes: Basic RDD Functions Explained in Detail, with Code and Execution Output from a Hands-on spark-shell Session

http://blog.csdn.net/yunlong34574/article/details/38635853

root@Master:/usr/local/spark/spark-1.0.0-bin-hadoop1/bin# spark-shell
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
16/07/20 20:25:20 INFO spark.SecurityManager: Changing view acls to: root
16/07/20 20:25:20 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root)
16/07/20 20:25:20 INFO spark.HttpServer: Starting HTTP Server
16/07/20 20:25:20 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/07/20 20:25:20 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:44920
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.0.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_11)
Type in expressions to have them evaluated.
Type :help for more information.
16/07/20 20:25:28 INFO spark.SecurityManager: Changing view acls to: root
16/07/20 20:25:28 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root)
16/07/20 20:25:30 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/07/20 20:25:30 INFO Remoting: Starting remoting
16/07/20 20:25:30 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@Master:45186]
16/07/20 20:25:30 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@Master:45186]
16/07/20 20:25:30 INFO spark.SparkEnv: Registering MapOutputTracker
16/07/20 20:25:30 INFO spark.SparkEnv: Registering BlockManagerMaster
16/07/20 20:25:30 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20160720202530-86bb
16/07/20 20:25:30 INFO storage.MemoryStore: MemoryStore started with capacity 297.0 MB.
16/07/20 20:25:30 INFO network.ConnectionManager: Bound socket to port 45003 with id = ConnectionManagerId(Master,45003)
16/07/20 20:25:30 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/07/20 20:25:30 INFO storage.BlockManagerInfo: Registering block manager Master:45003 with 297.0 MB RAM
16/07/20 20:25:30 INFO storage.BlockManagerMaster: Registered BlockManager
16/07/20 20:25:30 INFO spark.HttpServer: Starting HTTP Server
16/07/20 20:25:30 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/07/20 20:25:30 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:42925
16/07/20 20:25:30 INFO broadcast.HttpBroadcast: Broadcast server started at http://192.168.2.103:42925
16/07/20 20:25:30 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-375d9369-2cd5-4e1c-ae22-765426b7f8be
16/07/20 20:25:30 INFO spark.HttpServer: Starting HTTP Server
16/07/20 20:25:30 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/07/20 20:25:30 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:36573
16/07/20 20:25:30 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/07/20 20:25:30 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/07/20 20:25:30 INFO ui.SparkUI: Started SparkUI at http://Master:4040
16/07/20 20:25:31 INFO executor.Executor: Using REPL class URI: http://192.168.2.103:44920
16/07/20 20:25:31 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.

scala> val rdd = sc.parallelize(List(1,2,3,4,5,6))  
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:12

scala> val mapRdd = rdd.map(_*2)  // this is typical functional-programming style
mapRdd: org.apache.spark.rdd.RDD[Int] = MappedRDD[1] at map at <console>:14

scala> mapRdd.collect()  // the map above is only a transformation; execution starts here with the collect action, returning Array(2, 4, 6, 8, 10, 12)
16/07/20 21:23:57 INFO spark.SparkContext: Starting job: collect at <console>:17
16/07/20 21:23:57 INFO scheduler.DAGScheduler: Got job 0 (collect at <console>:17) with 2 output partitions (allowLocal=false)
16/07/20 21:23:57 INFO scheduler.DAGScheduler: Final stage: Stage 0(collect at <console>:17)
16/07/20 21:23:57 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/07/20 21:23:57 INFO scheduler.DAGScheduler: Missing parents: List()
16/07/20 21:23:57 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map at <console>:14), which has no missing parents
16/07/20 21:23:57 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (MappedRDD[1] at map at <console>:14)
16/07/20 21:23:57 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/07/20 21:23:57 INFO scheduler.TaskSetManager: Starting task 0.0:0 as TID 0 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:23:57 INFO scheduler.TaskSetManager: Serialized task 0.0:0 as 1276 bytes in 4 ms
16/07/20 21:23:57 INFO scheduler.TaskSetManager: Starting task 0.0:1 as TID 1 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:23:57 INFO scheduler.TaskSetManager: Serialized task 0.0:1 as 1276 bytes in 2 ms
16/07/20 21:23:57 INFO executor.Executor: Running task ID 0
16/07/20 21:23:57 INFO executor.Executor: Running task ID 1
16/07/20 21:23:58 INFO executor.Executor: Serialized size of result for 0 is 554
16/07/20 21:23:58 INFO executor.Executor: Serialized size of result for 1 is 554
16/07/20 21:23:58 INFO executor.Executor: Sending result for 0 directly to driver
16/07/20 21:23:58 INFO executor.Executor: Finished task ID 0
16/07/20 21:23:58 INFO executor.Executor: Sending result for 1 directly to driver
16/07/20 21:23:58 INFO executor.Executor: Finished task ID 1
16/07/20 21:23:58 INFO scheduler.TaskSetManager: Finished TID 1 in 287 ms on localhost (progress: 1/2)
16/07/20 21:23:58 INFO scheduler.DAGScheduler: Completed ResultTask(0, 1)
16/07/20 21:23:58 INFO scheduler.DAGScheduler: Completed ResultTask(0, 0)
16/07/20 21:23:58 INFO scheduler.TaskSetManager: Finished TID 0 in 328 ms on localhost (progress: 2/2)
16/07/20 21:23:58 INFO scheduler.DAGScheduler: Stage 0 (collect at <console>:17) finished in 0.383 s
16/07/20 21:23:58 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/07/20 21:23:58 INFO spark.SparkContext: Job finished: collect at <console>:17, took 0.958419514 s
res0: Array[Int] = Array(2, 4, 6, 8, 10, 12)
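
The pattern above is the core of the RDD API: map only records a transformation, and nothing is computed until an action such as collect triggers a job (visible in the log as "Starting job"). As a small sketch (not part of the original session), the recorded lineage can be inspected without running anything:

mapRdd.toDebugString           // shows the recorded lineage: MappedRDD <- ParallelCollectionRDD
rdd.map(_ * 2).filter(_ > 5)   // still only transformations: no job runs until an action is called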

scala> val filterRdd = mapRdd.filter(_ > 5)
filterRdd: org.apache.spark.rdd.RDD[Int] = FilteredRDD[2] at filter at <console>:16

scala> filterRdd.collect() // returns an Array of all elements greater than 5: Array(6, 8, 10, 12)
16/07/20 21:24:20 INFO spark.SparkContext: Starting job: collect at <console>:19
16/07/20 21:24:20 INFO scheduler.DAGScheduler: Got job 1 (collect at <console>:19) with 2 output partitions (allowLocal=false)
16/07/20 21:24:20 INFO scheduler.DAGScheduler: Final stage: Stage 1(collect at <console>:19)
16/07/20 21:24:20 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/07/20 21:24:20 INFO scheduler.DAGScheduler: Missing parents: List()
16/07/20 21:24:20 INFO scheduler.DAGScheduler: Submitting Stage 1 (FilteredRDD[2] at filter at <console>:16), which has no missing parents
16/07/20 21:24:20 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 1 (FilteredRDD[2] at filter at <console>:16)
16/07/20 21:24:20 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
16/07/20 21:24:20 INFO scheduler.TaskSetManager: Starting task 1.0:0 as TID 2 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:24:20 INFO scheduler.TaskSetManager: Serialized task 1.0:0 as 1347 bytes in 2 ms
16/07/20 21:24:20 INFO scheduler.TaskSetManager: Starting task 1.0:1 as TID 3 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:24:20 INFO scheduler.TaskSetManager: Serialized task 1.0:1 as 1347 bytes in 1 ms
16/07/20 21:24:20 INFO executor.Executor: Running task ID 2
16/07/20 21:24:20 INFO executor.Executor: Running task ID 3
16/07/20 21:24:20 INFO executor.Executor: Serialized size of result for 2 is 546
16/07/20 21:24:20 INFO executor.Executor: Sending result for 2 directly to driver
16/07/20 21:24:20 INFO executor.Executor: Finished task ID 2
16/07/20 21:24:20 INFO executor.Executor: Serialized size of result for 3 is 554
16/07/20 21:24:20 INFO executor.Executor: Sending result for 3 directly to driver
16/07/20 21:24:20 INFO executor.Executor: Finished task ID 3
16/07/20 21:24:20 INFO scheduler.DAGScheduler: Completed ResultTask(1, 0)
16/07/20 21:24:20 INFO scheduler.TaskSetManager: Finished TID 2 in 28 ms on localhost (progress: 1/2)
16/07/20 21:24:20 INFO scheduler.DAGScheduler: Completed ResultTask(1, 1)
16/07/20 21:24:20 INFO scheduler.TaskSetManager: Finished TID 3 in 29 ms on localhost (progress: 2/2)
16/07/20 21:24:20 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
16/07/20 21:24:20 INFO scheduler.DAGScheduler: Stage 1 (collect at <console>:19) finished in 0.033 s
16/07/20 21:24:20 INFO spark.SparkContext: Job finished: collect at <console>:19, took 0.048239651 s
res1: Array[Int] = Array(6, 8, 10, 12)
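
collect ships the entire result back to the driver, which is fine for these toy RDDs but can exhaust driver memory on real data. A minimal sketch of lighter-weight alternatives (standard RDD actions, not from the original session):

filterRdd.take(2)    // only the first two elements, enough for a quick sanity check
filterRdd.first()    // just the first element
filterRdd.count()    // the number of elements, without moving the data to the driver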

scala> val rdd = sc.textFile("/xxx/sss/ee")
16/07/20 21:24:45 INFO storage.MemoryStore: ensureFreeSpace(32816) called with curMem=0, maxMem=311387750
16/07/20 21:24:45 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 32.0 KB, free 296.9 MB)
rdd: org.apache.spark.rdd.RDD[String] = MappedRDD[4] at textFile at <console>:12

scala> rdd.count // count the number of lines
16/07/20 21:24:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/20 21:24:58 WARN snappy.LoadSnappy: Snappy native library not loaded
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://Master:9000/xxx/sss/ee
	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:172)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1094)
	at org.apache.spark.rdd.RDD.count(RDD.scala:847)
	at $iwC$$iwC$$iwC$$iwC.<init>(<console>:15)
	at $iwC$$iwC$$iwC.<init>(<console>:20)
	at $iwC$$iwC.<init>(<console>:22)
	at $iwC.<init>(<console>:24)
	at <init>(<console>:26)
	at .<init>(<console>:30)
	at .<clinit>(<console>)
	at .<init>(<console>:7)
	at .<clinit>(<console>)
	at $print(<console>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:788)
	at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1056)
	at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:614)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:645)
	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:609)
	at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:796)
	at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:841)
	at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:753)
	at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:601)
	at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:608)
	at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:611)
	at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:936)
	at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
	at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
	at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:884)
	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:982)
	at org.apache.spark.repl.Main$.main(Main.scala:31)
	at org.apache.spark.repl.Main.main(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
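
The failure above only surfaces at rdd.count because textFile is lazy as well; the path is checked when the job runs. A bare path such as "/xxx/sss/ee" is resolved against the default file system of the Hadoop configuration (hdfs://Master:9000 in this setup), so it must exist in HDFS. A small sketch of making the scheme explicit (the HDFS path below is a placeholder, not from the original session):

val fromHdfs  = sc.textFile("hdfs://Master:9000/user/root/input.txt")  // explicit HDFS URI (placeholder path)
val fromLocal = sc.textFile("file:///home/README.md")                  // explicit local-file URI, as used next
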
scala> val rdd = sc.textFile("file:///home/README.md")
16/07/20 21:25:43 INFO storage.MemoryStore: ensureFreeSpace(648) called with curMem=32816, maxMem=311387750
16/07/20 21:25:43 INFO storage.MemoryStore: Block broadcast_1 stored as values to memory (estimated size 648.0 B, free 296.9 MB)
rdd: org.apache.spark.rdd.RDD[String] = MappedRDD[6] at textFile at <console>:12

scala> rdd.count // count the number of lines
16/07/20 21:25:50 INFO mapred.FileInputFormat: Total input paths to process : 1
16/07/20 21:25:51 INFO spark.SparkContext: Starting job: count at <console>:15
16/07/20 21:25:51 INFO scheduler.DAGScheduler: Got job 2 (count at <console>:15) with 2 output partitions (allowLocal=false)
16/07/20 21:25:51 INFO scheduler.DAGScheduler: Final stage: Stage 2(count at <console>:15)
16/07/20 21:25:51 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/07/20 21:25:51 INFO scheduler.DAGScheduler: Missing parents: List()
16/07/20 21:25:51 INFO scheduler.DAGScheduler: Submitting Stage 2 (MappedRDD[6] at textFile at <console>:12), which has no missing parents
16/07/20 21:25:51 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 2 (MappedRDD[6] at textFile at <console>:12)
16/07/20 21:25:51 INFO scheduler.TaskSchedulerImpl: Adding task set 2.0 with 2 tasks
16/07/20 21:25:51 INFO scheduler.TaskSetManager: Starting task 2.0:0 as TID 4 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:25:51 INFO scheduler.TaskSetManager: Serialized task 2.0:0 as 1697 bytes in 3 ms
16/07/20 21:25:51 INFO scheduler.TaskSetManager: Starting task 2.0:1 as TID 5 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:25:51 INFO scheduler.TaskSetManager: Serialized task 2.0:1 as 1697 bytes in 2 ms
16/07/20 21:25:51 INFO executor.Executor: Running task ID 4
16/07/20 21:25:51 INFO executor.Executor: Running task ID 5
16/07/20 21:25:51 INFO storage.BlockManager: Found block broadcast_1 locally
16/07/20 21:25:51 INFO storage.BlockManager: Found block broadcast_1 locally
16/07/20 21:25:51 INFO rdd.HadoopRDD: Input split: file:/home/README.md:43+43
16/07/20 21:25:51 INFO rdd.HadoopRDD: Input split: file:/home/README.md:0+43
16/07/20 21:25:51 INFO executor.Executor: Serialized size of result for 4 is 597
16/07/20 21:25:51 INFO executor.Executor: Serialized size of result for 5 is 597
16/07/20 21:25:51 INFO executor.Executor: Sending result for 5 directly to driver
16/07/20 21:25:51 INFO executor.Executor: Sending result for 4 directly to driver
16/07/20 21:25:51 INFO executor.Executor: Finished task ID 4
16/07/20 21:25:51 INFO executor.Executor: Finished task ID 5
16/07/20 21:25:51 INFO scheduler.TaskSetManager: Finished TID 5 in 489 ms on localhost (progress: 1/2)
16/07/20 21:25:51 INFO scheduler.DAGScheduler: Completed ResultTask(2, 1)
16/07/20 21:25:51 INFO scheduler.DAGScheduler: Completed ResultTask(2, 0)
16/07/20 21:25:51 INFO scheduler.TaskSetManager: Finished TID 4 in 584 ms on localhost (progress: 2/2)
16/07/20 21:25:51 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool 
16/07/20 21:25:51 INFO scheduler.DAGScheduler: Stage 2 (count at <console>:15) finished in 0.618 s
16/07/20 21:25:51 INFO spark.SparkContext: Job finished: count at <console>:15, took 0.959816292 s
res3: Long = 1

scala> rdd.cache   // keep the RDD in memory
res4: rdd.type = MappedRDD[6] at textFile at <console>:12

scala> rdd.count // count the lines again; because the RDD was cached above, this run fills the in-memory blocks and later actions will be fast
16/07/20 21:26:07 INFO spark.SparkContext: Starting job: count at <console>:15
16/07/20 21:26:07 INFO scheduler.DAGScheduler: Got job 3 (count at <console>:15) with 2 output partitions (allowLocal=false)
16/07/20 21:26:07 INFO scheduler.DAGScheduler: Final stage: Stage 3(count at <console>:15)
16/07/20 21:26:07 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/07/20 21:26:07 INFO scheduler.DAGScheduler: Missing parents: List()
16/07/20 21:26:07 INFO scheduler.DAGScheduler: Submitting Stage 3 (MappedRDD[6] at textFile at <console>:12), which has no missing parents
16/07/20 21:26:07 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 3 (MappedRDD[6] at textFile at <console>:12)
16/07/20 21:26:07 INFO scheduler.TaskSchedulerImpl: Adding task set 3.0 with 2 tasks
16/07/20 21:26:07 INFO scheduler.TaskSetManager: Starting task 3.0:0 as TID 6 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:26:07 INFO scheduler.TaskSetManager: Serialized task 3.0:0 as 1702 bytes in 3 ms
16/07/20 21:26:07 INFO scheduler.TaskSetManager: Starting task 3.0:1 as TID 7 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:26:07 INFO scheduler.TaskSetManager: Serialized task 3.0:1 as 1702 bytes in 3 ms
16/07/20 21:26:07 INFO executor.Executor: Running task ID 6
16/07/20 21:26:07 INFO executor.Executor: Running task ID 7
16/07/20 21:26:07 INFO storage.BlockManager: Found block broadcast_1 locally
16/07/20 21:26:07 INFO storage.BlockManager: Found block broadcast_1 locally
16/07/20 21:26:07 INFO spark.CacheManager: Partition rdd_6_1 not found, computing it
16/07/20 21:26:07 INFO spark.CacheManager: Partition rdd_6_0 not found, computing it
16/07/20 21:26:07 INFO rdd.HadoopRDD: Input split: file:/home/README.md:0+43
16/07/20 21:26:07 INFO rdd.HadoopRDD: Input split: file:/home/README.md:43+43
16/07/20 21:26:07 INFO storage.MemoryStore: ensureFreeSpace(112) called with curMem=33464, maxMem=311387750
16/07/20 21:26:07 INFO storage.MemoryStore: Block rdd_6_1 stored as values to memory (estimated size 112.0 B, free 296.9 MB)
16/07/20 21:26:07 INFO storage.MemoryStore: ensureFreeSpace(312) called with curMem=33576, maxMem=311387750
16/07/20 21:26:07 INFO storage.MemoryStore: Block rdd_6_0 stored as values to memory (estimated size 312.0 B, free 296.9 MB)
16/07/20 21:26:07 INFO storage.BlockManagerInfo: Added rdd_6_1 in memory on Master:45003 (size: 112.0 B, free: 297.0 MB)
16/07/20 21:26:07 INFO storage.BlockManagerInfo: Added rdd_6_0 in memory on Master:45003 (size: 312.0 B, free: 297.0 MB)
16/07/20 21:26:07 INFO storage.BlockManagerMaster: Updated info of block rdd_6_0
16/07/20 21:26:07 INFO storage.BlockManagerMaster: Updated info of block rdd_6_1
16/07/20 21:26:07 INFO executor.Executor: Serialized size of result for 6 is 1174
16/07/20 21:26:07 INFO executor.Executor: Serialized size of result for 7 is 1174
16/07/20 21:26:07 INFO executor.Executor: Sending result for 6 directly to driver
16/07/20 21:26:07 INFO executor.Executor: Finished task ID 6
16/07/20 21:26:07 INFO executor.Executor: Sending result for 7 directly to driver
16/07/20 21:26:07 INFO executor.Executor: Finished task ID 7
16/07/20 21:26:07 INFO scheduler.DAGScheduler: Completed ResultTask(3, 0)
16/07/20 21:26:07 INFO scheduler.TaskSetManager: Finished TID 6 in 486 ms on localhost (progress: 1/2)
16/07/20 21:26:07 INFO scheduler.DAGScheduler: Completed ResultTask(3, 1)
16/07/20 21:26:07 INFO scheduler.DAGScheduler: Stage 3 (count at <console>:15) finished in 0.501 s
16/07/20 21:26:07 INFO spark.SparkContext: Job finished: count at <console>:15, took 0.518032915 s
16/07/20 21:26:07 INFO scheduler.TaskSetManager: Finished TID 7 in 482 ms on localhost (progress: 2/2)
16/07/20 21:26:07 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 3.0, whose tasks have all completed, from pool 
res5: Long = 1
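
cache is shorthand for persist with the MEMORY_ONLY storage level, and it is lazy: the count above is the run that actually fills the cache (hence the "Partition rdd_6_x not found, computing it" lines), and later actions read the in-memory blocks. A minimal sketch of the related calls, using a hypothetical RDD (not from the original session):

import org.apache.spark.storage.StorageLevel
val lines = sc.textFile("file:///home/README.md")
lines.persist(StorageLevel.MEMORY_ONLY)   // same effect as lines.cache
lines.count()                             // first action computes and caches the partitions
lines.count()                             // subsequent actions read the cached blocks
lines.unpersist()                         // release the memory when the RDD is no longer needed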

scala> val wordcount = rdd.flatMap(_.split(' ')).map((_, 1)).reduceByKey(_+_)  // split each line on spaces; flatMap flattens the per-line word lists into one; map turns every word into a (word, 1) tuple; reduceByKey then sums the counts per word
wordcount: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[11] at reduceByKey at <console>:14

scala> wordcount.saveAsTextFile("file:///home/README.txt")   // save the result to the file system
16/07/20 21:27:20 INFO spark.SparkContext: Starting job: saveAsTextFile at <console>:17
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Registering RDD 9 (reduceByKey at <console>:14)
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Got job 4 (saveAsTextFile at <console>:17) with 2 output partitions (allowLocal=false)
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Final stage: Stage 4(saveAsTextFile at <console>:17)
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Parents of final stage: List(Stage 5)
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Missing parents: List(Stage 5)
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Submitting Stage 5 (MapPartitionsRDD[9] at reduceByKey at <console>:14), which has no missing parents
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 5 (MapPartitionsRDD[9] at reduceByKey at <console>:14)
16/07/20 21:27:20 INFO scheduler.TaskSchedulerImpl: Adding task set 5.0 with 2 tasks
16/07/20 21:27:20 INFO scheduler.TaskSetManager: Starting task 5.0:0 as TID 8 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:27:20 INFO scheduler.TaskSetManager: Serialized task 5.0:0 as 2061 bytes in 4 ms
16/07/20 21:27:20 INFO scheduler.TaskSetManager: Starting task 5.0:1 as TID 9 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:27:20 INFO scheduler.TaskSetManager: Serialized task 5.0:1 as 2061 bytes in 2 ms
16/07/20 21:27:20 INFO executor.Executor: Running task ID 8
16/07/20 21:27:20 INFO executor.Executor: Running task ID 9
16/07/20 21:27:20 INFO storage.BlockManager: Found block broadcast_1 locally
16/07/20 21:27:20 INFO storage.BlockManager: Found block broadcast_1 locally
16/07/20 21:27:20 INFO storage.BlockManager: Found block rdd_6_1 locally
16/07/20 21:27:20 INFO storage.BlockManager: Found block rdd_6_0 locally
16/07/20 21:27:20 INFO executor.Executor: Serialized size of result for 9 is 777
16/07/20 21:27:20 INFO executor.Executor: Sending result for 9 directly to driver
16/07/20 21:27:20 INFO executor.Executor: Finished task ID 9
16/07/20 21:27:20 INFO scheduler.TaskSetManager: Finished TID 9 in 174 ms on localhost (progress: 1/2)
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Completed ShuffleMapTask(5, 1)
16/07/20 21:27:20 INFO executor.Executor: Serialized size of result for 8 is 777
16/07/20 21:27:20 INFO executor.Executor: Sending result for 8 directly to driver
16/07/20 21:27:20 INFO executor.Executor: Finished task ID 8
16/07/20 21:27:20 INFO scheduler.TaskSetManager: Finished TID 8 in 254 ms on localhost (progress: 2/2)
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Completed ShuffleMapTask(5, 0)
16/07/20 21:27:20 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 5.0, whose tasks have all completed, from pool 
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Stage 5 (reduceByKey at <console>:14) finished in 0.262 s
16/07/20 21:27:20 INFO scheduler.DAGScheduler: looking for newly runnable stages
16/07/20 21:27:20 INFO scheduler.DAGScheduler: running: Set()
16/07/20 21:27:20 INFO scheduler.DAGScheduler: waiting: Set(Stage 4)
16/07/20 21:27:20 INFO scheduler.DAGScheduler: failed: Set()
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Missing parents for Stage 4: List()
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Submitting Stage 4 (MappedRDD[12] at saveAsTextFile at <console>:17), which is now runnable
16/07/20 21:27:20 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 4 (MappedRDD[12] at saveAsTextFile at <console>:17)
16/07/20 21:27:20 INFO scheduler.TaskSchedulerImpl: Adding task set 4.0 with 2 tasks
16/07/20 21:27:20 INFO scheduler.TaskSetManager: Starting task 4.0:0 as TID 10 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:27:20 INFO scheduler.TaskSetManager: Serialized task 4.0:0 as 5465 bytes in 12 ms
16/07/20 21:27:20 INFO scheduler.TaskSetManager: Starting task 4.0:1 as TID 11 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:27:20 INFO scheduler.TaskSetManager: Serialized task 4.0:1 as 5465 bytes in 9 ms
16/07/20 21:27:20 INFO executor.Executor: Running task ID 10
16/07/20 21:27:20 INFO executor.Executor: Running task ID 11
16/07/20 21:27:20 INFO storage.BlockManager: Found block broadcast_1 locally
16/07/20 21:27:20 INFO storage.BlockManager: Found block broadcast_1 locally
16/07/20 21:27:21 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: maxBytesInFlight: 50331648, targetRequestSize: 10066329
16/07/20 21:27:21 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: maxBytesInFlight: 50331648, targetRequestSize: 10066329
16/07/20 21:27:21 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: Getting 1 non-empty blocks out of 2 blocks
16/07/20 21:27:21 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: Getting 1 non-empty blocks out of 2 blocks
16/07/20 21:27:21 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote fetches in 42 ms
16/07/20 21:27:21 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote fetches in 42 ms
16/07/20 21:27:21 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_201607202127_0000_m_000001_11' to file:/home/README.txt
16/07/20 21:27:21 INFO spark.SparkHadoopWriter: attempt_201607202127_0000_m_000001_11: Committed
16/07/20 21:27:21 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_201607202127_0000_m_000000_10' to file:/home/README.txt
16/07/20 21:27:21 INFO spark.SparkHadoopWriter: attempt_201607202127_0000_m_000000_10: Committed
16/07/20 21:27:21 INFO executor.Executor: Serialized size of result for 11 is 825
16/07/20 21:27:21 INFO executor.Executor: Sending result for 11 directly to driver
16/07/20 21:27:21 INFO executor.Executor: Serialized size of result for 10 is 825
16/07/20 21:27:21 INFO executor.Executor: Sending result for 10 directly to driver
16/07/20 21:27:21 INFO executor.Executor: Finished task ID 10
16/07/20 21:27:21 INFO executor.Executor: Finished task ID 11
16/07/20 21:27:21 INFO scheduler.DAGScheduler: Completed ResultTask(4, 1)
16/07/20 21:27:21 INFO scheduler.TaskSetManager: Finished TID 11 in 385 ms on localhost (progress: 1/2)
16/07/20 21:27:21 INFO scheduler.TaskSetManager: Finished TID 10 in 421 ms on localhost (progress: 2/2)
16/07/20 21:27:21 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool 
16/07/20 21:27:21 INFO scheduler.DAGScheduler: Completed ResultTask(4, 0)
16/07/20 21:27:21 INFO scheduler.DAGScheduler: Stage 4 (saveAsTextFile at <console>:17) finished in 0.452 s
16/07/20 21:27:21 INFO spark.SparkContext: Job finished: saveAsTextFile at <console>:17, took 1.178924049 s
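
Note that saveAsTextFile creates a directory (file:/home/README.txt here) containing one part-xxxxx file per partition, not a single text file. A small sketch, under that assumption, of forcing a single output file and reading the result back (the output path below is a placeholder, not from the original session):

wordcount.coalesce(1).saveAsTextFile("file:///home/wordcount_single")  // one partition -> one part file (placeholder path)
val reloaded = sc.textFile("file:///home/README.txt")                  // textFile reads every part file in the directory
reloaded.count()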

scala> wordcount.collect // returns the result as an array of (word, count) pairs
16/07/20 21:27:28 INFO spark.SparkContext: Starting job: collect at <console>:17
16/07/20 21:27:28 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 142 bytes
16/07/20 21:27:28 INFO scheduler.DAGScheduler: Got job 5 (collect at <console>:17) with 2 output partitions (allowLocal=false)
16/07/20 21:27:28 INFO scheduler.DAGScheduler: Final stage: Stage 6(collect at <console>:17)
16/07/20 21:27:28 INFO scheduler.DAGScheduler: Parents of final stage: List(Stage 7)
16/07/20 21:27:28 INFO scheduler.DAGScheduler: Missing parents: List()
16/07/20 21:27:28 INFO scheduler.DAGScheduler: Submitting Stage 6 (MapPartitionsRDD[11] at reduceByKey at <console>:14), which has no missing parents
16/07/20 21:27:28 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 6 (MapPartitionsRDD[11] at reduceByKey at <console>:14)
16/07/20 21:27:28 INFO scheduler.TaskSchedulerImpl: Adding task set 6.0 with 2 tasks
16/07/20 21:27:28 INFO scheduler.TaskSetManager: Starting task 6.0:0 as TID 12 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:27:28 INFO scheduler.TaskSetManager: Serialized task 6.0:0 as 1950 bytes in 2 ms
16/07/20 21:27:28 INFO scheduler.TaskSetManager: Starting task 6.0:1 as TID 13 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:27:28 INFO scheduler.TaskSetManager: Serialized task 6.0:1 as 1950 bytes in 2 ms
16/07/20 21:27:28 INFO executor.Executor: Running task ID 12
16/07/20 21:27:28 INFO executor.Executor: Running task ID 13
16/07/20 21:27:28 INFO storage.BlockManager: Found block broadcast_1 locally
16/07/20 21:27:28 INFO storage.BlockManager: Found block broadcast_1 locally
16/07/20 21:27:28 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: maxBytesInFlight: 50331648, targetRequestSize: 10066329
16/07/20 21:27:28 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: maxBytesInFlight: 50331648, targetRequestSize: 10066329
16/07/20 21:27:28 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: Getting 1 non-empty blocks out of 2 blocks
16/07/20 21:27:28 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote fetches in 1 ms
16/07/20 21:27:28 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: Getting 1 non-empty blocks out of 2 blocks
16/07/20 21:27:28 INFO storage.BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote fetches in 4 ms
16/07/20 21:27:28 INFO executor.Executor: Serialized size of result for 12 is 978
16/07/20 21:27:28 INFO executor.Executor: Serialized size of result for 13 is 1083
16/07/20 21:27:28 INFO executor.Executor: Sending result for 13 directly to driver
16/07/20 21:27:28 INFO executor.Executor: Sending result for 12 directly to driver
16/07/20 21:27:28 INFO executor.Executor: Finished task ID 12
16/07/20 21:27:28 INFO executor.Executor: Finished task ID 13
16/07/20 21:27:28 INFO scheduler.DAGScheduler: Completed ResultTask(6, 1)
16/07/20 21:27:28 INFO scheduler.TaskSetManager: Finished TID 13 in 32 ms on localhost (progress: 1/2)
16/07/20 21:27:28 INFO scheduler.DAGScheduler: Completed ResultTask(6, 0)
16/07/20 21:27:28 INFO scheduler.TaskSetManager: Finished TID 12 in 39 ms on localhost (progress: 2/2)
16/07/20 21:27:28 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 6.0, whose tasks have all completed, from pool 
16/07/20 21:27:28 INFO scheduler.DAGScheduler: Stage 6 (collect at <console>:17) finished in 0.043 s
16/07/20 21:27:28 INFO spark.SparkContext: Job finished: collect at <console>:17, took 0.087566343 s
res7: Array[(String, Int)] = Array((奇艺高清,1), (57375476989eea12893c0c3811601bcf,1), (http://www.qiyi.com/,1), (20111230000005,1), (1,2))
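
For data small enough to fit on the driver, the same counts can also be produced without building a pair RDD, using countByValue, which returns an ordinary Scala Map; a minimal sketch (not part of the original session):

rdd.flatMap(_.split(' ')).countByValue()   // Map[String, Long] of word -> count, materialized on the driver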

scala> val rdd1 = sc.parallelize(List(('a',1),(‘a’, 2)))
<console>:1: error: illegal character '\u2018'
       val rdd1 = sc.parallelize(List(('a',1),(‘a’, 2)))
                                               ^                                            ^

scala> val rdd1 = sc.parallelize(List(('a',1),(a', 2)))
<console>:1: error: unclosed character literal
       val rdd1 = sc.parallelize(List(('a',1),(a', 2)))
                                                ^

scala> val rdd1 = sc.parallelize(List(('a',1),('a', 2)))
rdd1: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[13] at parallelize at <console>:12

scala> val rdd2 = sc.parallelize(List(('b',1),('b, 2)))   // note: 'b without a closing quote is parsed as a Scala Symbol, so the element type widens to (Any, Int) below
rdd2: org.apache.spark.rdd.RDD[(Any, Int)] = ParallelCollectionRDD[14] at parallelize at <console>:12

scala> val rdd2 = sc.parallelize(List(('b',1),('b', 2)))
rdd2: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[15] at parallelize at <console>:12

scala> val result_union = rdd1 union rdd2 // union merges the two lists into one: ('a',1), ('a',2), ('b',1), ('b',2)
result_union: org.apache.spark.rdd.RDD[(Char, Int)] = UnionRDD[16] at union at <console>:16

scala> result_union.count
16/07/20 21:29:06 INFO spark.SparkContext: Starting job: count at <console>:19
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Got job 6 (count at <console>:19) with 4 output partitions (allowLocal=false)
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Final stage: Stage 8(count at <console>:19)
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Missing parents: List()
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Submitting Stage 8 (UnionRDD[16] at union at <console>:16), which has no missing parents
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Submitting 4 missing tasks from Stage 8 (UnionRDD[16] at union at <console>:16)
16/07/20 21:29:06 INFO scheduler.TaskSchedulerImpl: Adding task set 8.0 with 4 tasks
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Starting task 8.0:0 as TID 14 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Serialized task 8.0:0 as 2354 bytes in 4 ms
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Starting task 8.0:1 as TID 15 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Serialized task 8.0:1 as 2354 bytes in 2 ms
16/07/20 21:29:06 INFO executor.Executor: Running task ID 14
16/07/20 21:29:06 INFO executor.Executor: Running task ID 15
16/07/20 21:29:06 INFO executor.Executor: Serialized size of result for 14 is 597
16/07/20 21:29:06 INFO executor.Executor: Sending result for 14 directly to driver
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Starting task 8.0:2 as TID 16 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:29:06 INFO executor.Executor: Serialized size of result for 15 is 597
16/07/20 21:29:06 INFO executor.Executor: Finished task ID 14
16/07/20 21:29:06 INFO executor.Executor: Sending result for 15 directly to driver
16/07/20 21:29:06 INFO executor.Executor: Finished task ID 15
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Serialized task 8.0:2 as 2354 bytes in 5 ms
16/07/20 21:29:06 INFO executor.Executor: Running task ID 16
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Starting task 8.0:3 as TID 17 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Serialized task 8.0:3 as 2354 bytes in 3 ms
16/07/20 21:29:06 INFO executor.Executor: Running task ID 17
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Finished TID 14 in 52 ms on localhost (progress: 1/4)
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Completed ResultTask(8, 0)
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Completed ResultTask(8, 1)
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Finished TID 15 in 49 ms on localhost (progress: 2/4)
16/07/20 21:29:06 INFO executor.Executor: Serialized size of result for 16 is 597
16/07/20 21:29:06 INFO executor.Executor: Sending result for 16 directly to driver
16/07/20 21:29:06 INFO executor.Executor: Finished task ID 16
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Completed ResultTask(8, 2)
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Finished TID 16 in 33 ms on localhost (progress: 3/4)
16/07/20 21:29:06 INFO executor.Executor: Serialized size of result for 17 is 597
16/07/20 21:29:06 INFO executor.Executor: Sending result for 17 directly to driver
16/07/20 21:29:06 INFO executor.Executor: Finished task ID 17
16/07/20 21:29:06 INFO scheduler.TaskSetManager: Finished TID 17 in 32 ms on localhost (progress: 4/4)
16/07/20 21:29:06 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 8.0, whose tasks have all completed, from pool 
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Completed ResultTask(8, 3)
16/07/20 21:29:06 INFO scheduler.DAGScheduler: Stage 8 (count at <console>:19) finished in 0.080 s
16/07/20 21:29:06 INFO spark.SparkContext: Job finished: count at <console>:19, took 0.155748088 s
res8: Long = 4

scala> result_union
res9: org.apache.spark.rdd.RDD[(Char, Int)] = UnionRDD[16] at union at <console>:16
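
union simply concatenates the two RDDs' partitions and keeps any duplicates, which is why the count above is 4. If set semantics are wanted, distinct can be chained on afterwards at the cost of a shuffle; a small sketch (not from the original session):

(rdd1 union rdd2).collect()              // Array(('a',1), ('a',2), ('b',1), ('b',2)), partition order may vary
(rdd1 union rdd2).distinct().collect()   // removes duplicate elements, if any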

scala> val rdd1 = sc.parallelize(List(('a',1),('a', 2), ('b', 3)))
rdd1: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[17] at parallelize at <console>:12

scala> val rdd2 = sc.parallelize(List(('a',4),('b\, 5)))
<console>:12: error: value \ is not a member of Symbol
Error occurred in an application involving default arguments.
       val rdd2 = sc.parallelize(List(('a',4),('b\, 5)))
                                                 ^

scala> val rdd2 = sc.parallelize(List(('a',4),('b', 5)))
rdd2: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[18] at parallelize at <console>:12

scala> val result_union = rdd1 join rdd2 // join is an inner join by key (within each key, every left value is paired with every right value): Array(('a',(1,4)), ('a',(2,4)), ('b',(3,5)))
result_union: org.apache.spark.rdd.RDD[(Char, (Int, Int))] = FlatMappedValuesRDD[21] at join at <console>:16
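
Related pair-RDD operators, sketched here for contrast (in spark-shell the pair-RDD implicits are already in scope; these lines are not from the original session):

rdd1.leftOuterJoin(rdd2)   // keeps every key of rdd1; values missing on the right become None
rdd1.cogroup(rdd2)         // per key, groups all values from both sides: (key, (values1, values2))
rdd1.cartesian(rdd2)       // a true Cartesian product: every element of rdd1 paired with every element of rdd2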

scala> val rdd = sc.parallelize(List(1,2,3,4))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[22] at parallelize at <console>:12

scala> rdd.reduce(_+_) // reduce is an action; here the result is 10
16/07/20 21:31:14 INFO spark.SparkContext: Starting job: reduce at <console>:15
16/07/20 21:31:14 INFO scheduler.DAGScheduler: Got job 7 (reduce at <console>:15) with 2 output partitions (allowLocal=false)
16/07/20 21:31:14 INFO scheduler.DAGScheduler: Final stage: Stage 9(reduce at <console>:15)
16/07/20 21:31:14 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/07/20 21:31:14 INFO scheduler.DAGScheduler: Missing parents: List()
16/07/20 21:31:14 INFO scheduler.DAGScheduler: Submitting Stage 9 (ParallelCollectionRDD[22] at parallelize at <console>:12), which has no missing parents
16/07/20 21:31:14 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 9 (ParallelCollectionRDD[22] at parallelize at <console>:12)
16/07/20 21:31:14 INFO scheduler.TaskSchedulerImpl: Adding task set 9.0 with 2 tasks
16/07/20 21:31:14 INFO scheduler.TaskSetManager: Starting task 9.0:0 as TID 18 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:31:14 INFO scheduler.TaskSetManager: Serialized task 9.0:0 as 1059 bytes in 18 ms
16/07/20 21:31:14 INFO scheduler.TaskSetManager: Starting task 9.0:1 as TID 19 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:31:14 INFO scheduler.TaskSetManager: Serialized task 9.0:1 as 1059 bytes in 33 ms
16/07/20 21:31:14 INFO executor.Executor: Running task ID 19
16/07/20 21:31:14 INFO executor.Executor: Running task ID 18
16/07/20 21:31:14 INFO executor.Executor: Serialized size of result for 18 is 675
16/07/20 21:31:14 INFO executor.Executor: Sending result for 18 directly to driver
16/07/20 21:31:14 INFO executor.Executor: Finished task ID 18
16/07/20 21:31:14 INFO executor.Executor: Serialized size of result for 19 is 675
16/07/20 21:31:14 INFO executor.Executor: Sending result for 19 directly to driver
16/07/20 21:31:14 INFO executor.Executor: Finished task ID 19
16/07/20 21:31:14 INFO scheduler.DAGScheduler: Completed ResultTask(9, 0)
16/07/20 21:31:14 INFO scheduler.TaskSetManager: Finished TID 18 in 218 ms on localhost (progress: 1/2)
16/07/20 21:31:14 INFO scheduler.TaskSetManager: Finished TID 19 in 182 ms on localhost (progress: 2/2)
16/07/20 21:31:14 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 9.0, whose tasks have all completed, from pool 
16/07/20 21:31:14 INFO scheduler.DAGScheduler: Completed ResultTask(9, 1)
16/07/20 21:31:14 INFO scheduler.DAGScheduler: Stage 9 (reduce at <console>:15) finished in 0.230 s
16/07/20 21:31:14 INFO spark.SparkContext: Job finished: reduce at <console>:15, took 0.311988164 s
res10: Int = 10
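
fold and aggregate are close relatives of reduce: fold takes an explicit zero value, and aggregate additionally lets the result type differ from the element type. A minimal sketch on the same rdd = List(1, 2, 3, 4) (not from the original session):

rdd.fold(0)(_ + _)                         // 10, like reduce but with an explicit zero value
rdd.aggregate((0, 0))(
  (acc, x) => (acc._1 + x, acc._2 + 1),    // within a partition: accumulate (sum, count)
  (a, b)   => (a._1 + b._1, a._2 + b._2))  // merge the per-partition results
                                           // gives (10, 4): sum and element count in one pass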

scala> val rdd = sc.parallelize(List(('a',1),('a', 2),('b',1),('b', 2))
     | 
     | '
<console>:3: error: unclosed character literal
       '
       ^

scala> val rdd = sc.parallelize(List(('a',1),('a', 2),('b',1),('b', 2)))
rdd: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[23] at parallelize at <console>:12

scala> rdd.lookup("a") //返回一个seq, (1, 2) 是把a对应的所有元素的value提出来组成一个seq
<console>:15: error: type mismatch;
 found   : String("a")
 required: Char
              rdd.lookup("a") //返回一个seq, (1, 2) 是把a对应的所有元素的value提出来组成一个seq
                         ^

scala> rdd.lookup("a") //返回一个seq, (1, 2) 是把a对应的所有元素的value提出来组成一个seq
<console>:15: error: type mismatch;
 found   : String("a")
 required: Char
              rdd.lookup("a") //返回一个seq, (1, 2) 是把a对应的所有元素的value提出来组成一个seq
                         ^

scala> rdd.lookup("a") //返回一个seq, (1, 2) 是把a对应的所有元素的value提出来组成一个sval rdd = sc.parallelize(List(('a',1),('a', 2),('b',1),('b', 2))
     | val rdd = sc.parallelize(List(('a',1),('a', 2),('b',1),('b', 2));
<console>:2: error: ')' expected but 'val' found.
       val rdd = sc.parallelize(List(('a',1),('a', 2),('b',1),('b', 2));
       ^

scala> rdd.lookup("a") //返回一个seq, (1, 2) 是把a对应的所有元素的value提出来组成一个srdd.lookup("a") //返回一个seq, (1, 2) 是把a对应的所有元素的value提出来组成一个seq
<console>:15: error: type mismatch;
 found   : String("a")
 required: Char
              rdd.lookup("a") //返回一个seq, (1, 2) 是把a对应的所有元素的value提出来组成一个seq
                         ^

scala> scala> rdd.lookup('a') //返回一个seq,(1, 2) 是把a对应的所有元素的value提出来组成一个seq

// Detected repl transcript paste: ctrl-D to finish.

// Replaying 1 commands from transcript.

scala> rdd.lookup('a') // returns a Seq, (1, 2): all the values stored under the key 'a'
16/07/20 21:33:54 INFO spark.SparkContext: Starting job: lookup at <console>:15
16/07/20 21:33:54 INFO scheduler.DAGScheduler: Got job 8 (lookup at <console>:15) with 2 output partitions (allowLocal=false)
16/07/20 21:33:54 INFO scheduler.DAGScheduler: Final stage: Stage 10(lookup at <console>:15)
16/07/20 21:33:54 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/07/20 21:33:54 INFO scheduler.DAGScheduler: Missing parents: List()
16/07/20 21:33:54 INFO scheduler.DAGScheduler: Submitting Stage 10 (MappedRDD[25] at lookup at <console>:15), which has no missing parents
16/07/20 21:33:54 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 10 (MappedRDD[25] at lookup at <console>:15)
16/07/20 21:33:54 INFO scheduler.TaskSchedulerImpl: Adding task set 10.0 with 2 tasks
16/07/20 21:33:54 INFO scheduler.TaskSetManager: Starting task 10.0:0 as TID 20 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:33:54 INFO scheduler.TaskSetManager: Serialized task 10.0:0 as 1677 bytes in 3 ms
16/07/20 21:33:54 INFO scheduler.TaskSetManager: Starting task 10.0:1 as TID 21 on executor localhost: localhost (PROCESS_LOCAL)
16/07/20 21:33:54 INFO scheduler.TaskSetManager: Serialized task 10.0:1 as 1677 bytes in 0 ms
16/07/20 21:33:54 INFO executor.Executor: Running task ID 21
16/07/20 21:33:54 INFO executor.Executor: Running task ID 20
16/07/20 21:33:54 INFO executor.Executor: Serialized size of result for 21 is 542
16/07/20 21:33:54 INFO executor.Executor: Sending result for 21 directly to driver
16/07/20 21:33:54 INFO executor.Executor: Serialized size of result for 20 is 550
16/07/20 21:33:54 INFO executor.Executor: Finished task ID 21
16/07/20 21:33:54 INFO executor.Executor: Sending result for 20 directly to driver
16/07/20 21:33:54 INFO executor.Executor: Finished task ID 20
16/07/20 21:33:54 INFO scheduler.TaskSetManager: Finished TID 21 in 263 ms on localhost (progress: 1/2)
16/07/20 21:33:54 INFO scheduler.DAGScheduler: Completed ResultTask(10, 1)
16/07/20 21:33:54 INFO scheduler.TaskSetManager: Finished TID 20 in 271 ms on localhost (progress: 2/2)
16/07/20 21:33:54 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 10.0, whose tasks have all completed, from pool 
16/07/20 21:33:54 INFO scheduler.DAGScheduler: Completed ResultTask(10, 0)
16/07/20 21:33:54 INFO scheduler.DAGScheduler: Stage 10 (lookup at <console>:15) finished in 0.273 s
16/07/20 21:33:54 INFO spark.SparkContext: Job finished: lookup at <console>:15, took 0.30474025 s
res14: Seq[Int] = WrappedArray(1, 2)
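
lookup only exists on pair RDDs and its argument must have the key type, which is why rdd.lookup("a") (a String) failed above against Char keys. To see every key with all of its values at once, groupByKey can be used instead of probing one key at a time; a small sketch (not from the original session):

rdd.lookup('a')              // Seq(1, 2): the values stored under the key 'a'
rdd.groupByKey().collect()   // ('a', values 1 and 2) and ('b', values 1 and 2), as (key, Iterable) pairs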


scala> val wordcount = rdd.flatMap(_split(' ')).map(_,1).reduceByKey(_+_).map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1))
<console>:14: error: missing parameter type for expanded function ((x$1) => rdd.flatMap(_split(' ')).map(x$1, 1).reduceByKey(((x$2, x$3) => x$2.$plus(x$3))).map(((x) => scala.Tuple2(x._2, x._1))).sortByKey(false).map(((x) => scala.Tuple2(x._2, x._1))))
       val wordcount = rdd.flatMap(_split(' ')).map(_,1).reduceByKey(_+_).map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1))
                                                    ^
<console>:14: error: not found: value _split
       val wordcount = rdd.flatMap(_split(' ')).map(_,1).reduceByKey(_+_).map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1))
                                   ^
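
The command above fails for two reasons: _split(' ') is missing the dot (it should be _.split(' ')), and .map(_,1) must wrap the pair as .map((_, 1)). A corrected sketch of the intended "word count sorted by frequency", assuming a text RDD such as the earlier sc.textFile("file:///home/README.md") rather than the pair RDD currently bound to rdd:

val lines = sc.textFile("file:///home/README.md")
val sortedCounts = lines
  .flatMap(_.split(' '))      // split every line into words
  .map((_, 1))                // word -> (word, 1)
  .reduceByKey(_ + _)         // sum the counts per word
  .map(x => (x._2, x._1))     // swap to (count, word) so the count becomes the sort key
  .sortByKey(false)           // sort by count, descending
  .map(x => (x._2, x._1))     // swap back to (word, count)
sortedCounts.collect()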

scala> 

