Lesson 57: Spark SQL Case Study Notes

Topics for this session:
1. A hands-on Spark SQL basics case
2. A business-style Spark SQL case


Open the sql-programming-guide section of the Spark documentation:
http://spark.apache.org/docs/latest/sql-programming-guide.html#getting-started
There you can read:
The entry point into all functionality in Spark SQL is the SQLContext class, or one of its descendants. To create a basic SQLContext, all you need is a SparkContext.
val sc: SparkContext // An existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// this is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._
sqlContext takes a SparkContext as its constructor argument, so it can do everything a SparkContext can while extending it with SQL capabilities.
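As a minimal, self-contained version of that snippet (the local[*] master and the app name are placeholder choices of ours, not from the guide):

// Build a SparkContext explicitly, then wrap it in a SQLContext (Spark 1.x API).
val conf = new org.apache.spark.SparkConf().setAppName("SQLContextDemo").setMaster("local[*]")
val sc = new org.apache.spark.SparkContext(conf)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// this is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._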


Before running Spark SQL, one piece of configuration is needed: under ${SPARK_HOME}/conf, create a new hive-site.xml file and add the following content:
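The original notes showed the file as a screenshot; a reconstruction along these lines should work, with the metastore URI thrift://slq1:9083 taken from the connection log later in this section (substitute your own metastore host and port):

<?xml version="1.0"?>
<configuration>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://slq1:9083</value>
        <description>Thrift URI of the remote Hive metastore for Spark SQL to connect to.</description>
    </property>
</configuration>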


Note: this does not mean copying Hive's own hive-site.xml over; instead, create a new hive-site.xml and add only this one setting. When Spark SQL operates on Hive, Hive serves purely as the data warehouse, and a warehouse consists of metadata plus the data itself; to reach the actual data you must first reach the metadata. That is why configuring hive.metastore.uris alone is enough.
Why configure hive.metastore.uris?
=> Because underneath, Hive acts as the data warehouse and storage layer while Spark SQL is the compute engine, and Spark SQL needs this setting to reach Hive's metadata.
In addition, to access Hive's MySQL-backed metastore, put mysql-connector-java-5.1.35-bin.jar into Spark's lib directory (perhaps it also works without this?).
Does this have to be configured on every machine?
=> No; configuring it on the Hive machine is enough.
At startup, Spark does not read hive-site.xml from Hive's own directory. The dependency is only on Hive's metadata, not on Hive itself: Hive here is just a data warehouse, not a compute engine; the compute engine is Spark SQL.


1) Start HDFS: ${HADOOP_HOME}/sbin/start-dfs.sh
2) Start Spark: ${SPARK_HOME}/sbin/start-all.sh
3) Start the metastore service:
hive --service metastore > metastore.log 2>&1 &
4) Go to the bin directory under the Spark home and launch spark-shell:
./spark-shell --master spark://slq1:7077
5) Once spark-shell is up, create a hiveContext:
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
Output:
scala> val hiveContext= new org.apache.spark.sql.hive.HiveContext(sc)
16/03/27 01:45:59 INFO hive.HiveContext: Initializing execution hive, version 1.2.1
16/03/27 01:45:59 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
16/03/27 01:45:59 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
16/03/27 01:46:01 INFO hive.metastore: Mestastore configuration hive.metastore.warehouse.dir changed from file:/tmp/spark-b5ed27dd-b732-41f6-bf34-82f3f8fdaa02/metastore to file:/tmp/spark-99274b47-7d87-4fbf-9815-8c913bec38a9/metastore
16/03/27 01:46:01 INFO hive.metastore: Mestastore configuration javax.jdo.option.ConnectionURL changed from jdbc:derby:;databaseName=/tmp/spark-b5ed27dd-b732-41f6-bf34-82f3f8fdaa02/metastore;create=true to jdbc:derby:;databaseName=/tmp/spark-99274b47-7d87-4fbf-9815-8c913bec38a9/metastore;create=true
16/03/27 01:46:01 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
16/03/27 01:46:01 INFO HiveMetaStore.audit: ugi=richard ip=unknown-ip-addr cmd=Shutting down the object store...
16/03/27 01:46:01 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
16/03/27 01:46:01 INFO HiveMetaStore.audit: ugi=richard ip=unknown-ip-addr cmd=Metastore shutdown complete.
16/03/27 01:46:01 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/03/27 01:46:01 INFO metastore.ObjectStore: ObjectStore, initialize called
16/03/27 01:46:01 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/03/27 01:46:01 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/03/27 01:46:02 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/03/27 01:46:02 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/03/27 01:46:09 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/03/27 01:46:12 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/03/27 01:46:12 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/03/27 01:46:26 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/03/27 01:46:26 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/03/27 01:46:30 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
16/03/27 01:46:30 INFO metastore.ObjectStore: Initialized ObjectStore
16/03/27 01:46:30 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
16/03/27 01:46:32 INFO metastore.HiveMetaStore: Added admin role in metastore
16/03/27 01:46:32 INFO metastore.HiveMetaStore: Added public role in metastore
16/03/27 01:46:33 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/03/27 01:46:33 INFO session.SessionState: Created local directory: /tmp/1711ab6e-62f0-496d-8f04-de91c2808678_resources
16/03/27 01:46:34 INFO session.SessionState: Created HDFS directory: /tmp/hive/richard/1711ab6e-62f0-496d-8f04-de91c2808678
16/03/27 01:46:34 INFO session.SessionState: Created local directory: /tmp/richard/1711ab6e-62f0-496d-8f04-de91c2808678
16/03/27 01:46:34 INFO session.SessionState: Created HDFS directory: /tmp/hive/richard/1711ab6e-62f0-496d-8f04-de91c2808678/_tmp_space.db
16/03/27 01:46:35 INFO hive.HiveContext: default warehouse location is /user/hive/warehouse
16/03/27 01:46:35 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
16/03/27 01:46:35 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0
16/03/27 01:46:35 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
16/03/27 01:46:41 INFO hive.metastore: Trying to connect to metastore with URI thrift://slq1:9083
16/03/27 01:46:42 INFO hive.metastore: Connected to metastore.
16/03/27 01:46:43 INFO session.SessionState: Created local directory: /tmp/5f541be5-2b00-43a4-92f8-80ebd5e43d13_resources
16/03/27 01:46:43 INFO session.SessionState: Created HDFS directory: /tmp/hive/richard/5f541be5-2b00-43a4-92f8-80ebd5e43d13
16/03/27 01:46:43 INFO session.SessionState: Created local directory: /tmp/richard/5f541be5-2b00-43a4-92f8-80ebd5e43d13
16/03/27 01:46:43 INFO session.SessionState: Created HDFS directory: /tmp/hive/richard/5f541be5-2b00-43a4-92f8-80ebd5e43d13/_tmp_space.db
hiveContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@3b14d63b


6) Now you can run SQL:
hiveContext.sql("use hive")
Output:
scala> hiveContext.sql("use hive")
16/03/27 01:56:24 INFO parse.ParseDriver: Parsing command: use hive
16/03/27 01:56:33 INFO parse.ParseDriver: Parse Completed
16/03/27 01:56:41 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:41 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:41 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:42 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:42 INFO parse.ParseDriver: Parsing command: use hive
16/03/27 01:56:50 INFO parse.ParseDriver: Parse Completed
16/03/27 01:56:50 INFO log.PerfLogger: </PERFLOG method=parse start=1459015002514 end=1459015010047 duration=7533 from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:50 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:51 INFO ql.Driver: Semantic Analysis Completed
16/03/27 01:56:51 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1459015010099 end=1459015011510 duration=1411 from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:51 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
16/03/27 01:56:51 INFO log.PerfLogger: </PERFLOG method=compile start=1459015001824 end=1459015011691 duration=9867 from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:51 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
16/03/27 01:56:51 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:51 INFO ql.Driver: Starting command(queryId=richard_20160327015642_7563ed54-0636-47bd-8e0d-c3c9c84d7bd6): use hive
16/03/27 01:56:52 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1459015001824 end=1459015012272 duration=10448 from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:52 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:52 INFO log.PerfLogger: <PERFLOG method=task.DDL.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:52 INFO ql.Driver: Starting task [Stage-0:DDL] in serial mode
16/03/27 01:56:52 INFO log.PerfLogger: </PERFLOG method=runTasks start=1459015012273 end=1459015012822 duration=549 from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:52 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1459015011692 end=1459015012823 duration=1131 from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:52 INFO ql.Driver: OK
16/03/27 01:56:52 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:52 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1459015012848 end=1459015012848 duration=0 from=org.apache.hadoop.hive.ql.Driver>
16/03/27 01:56:52 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1459015001810 end=1459015012848 duration=11038 from=org.apache.hadoop.hive.ql.Driver>
res0: org.apache.spark.sql.DataFrame = [result: string]


hiveContext.sql("show tables").collect.foreach(println)
Output:
scala> hiveContext.sql("show tables").collect.foreach(println)
[person,false]


hiveContext.sql("select count(*) from person").collect.foreach(println)
Output:
scala> hiveContext.sql("select count(*) from person").collect.foreach(println)
16/03/27 02:09:42 INFO parse.ParseDriver: Parsing command: select count(*) from person
16/03/27 02:09:43 INFO parse.ParseDriver: Parse Completed
16/03/27 02:10:19 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 553.5 KB, free 553.5 KB)
16/03/27 02:10:20 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 41.8 KB, free 595.3 KB)
16/03/27 02:10:20 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:43229 (size: 41.8 KB, free: 517.4 MB)
16/03/27 02:10:20 INFO spark.SparkContext: Created broadcast 0 from collect at <console>:30
16/03/27 02:10:29 INFO mapred.FileInputFormat: Total input paths to process : 1
16/03/27 02:10:32 INFO spark.SparkContext: Starting job: collect at <console>:30
16/03/27 02:10:32 INFO scheduler.DAGScheduler: Registering RDD 6 (collect at <console>:30)
16/03/27 02:10:32 INFO scheduler.DAGScheduler: Got job 0 (collect at <console>:30) with 1 output partitions
16/03/27 02:10:32 INFO scheduler.DAGScheduler: Final stage: ResultStage 1 (collect at <console>:30)
16/03/27 02:10:32 INFO scheduler.DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
16/03/27 02:10:32 INFO scheduler.DAGScheduler: Missing parents: List(ShuffleMapStage 0)
16/03/27 02:10:32 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[6] at collect at <console>:30), which has no missing parents
16/03/27 02:10:33 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 13.1 KB, free 608.3 KB)
16/03/27 02:10:33 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 6.6 KB, free 614.9 KB)
16/03/27 02:10:33 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:43229 (size: 6.6 KB, free: 517.4 MB)
16/03/27 02:10:33 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/03/27 02:10:33 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[6] at collect at <console>:30)
16/03/27 02:10:34 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/03/27 02:10:35 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,ANY, 2151 bytes)
16/03/27 02:10:35 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
16/03/27 02:10:36 INFO rdd.HadoopRDD: Input split: hdfs://slq1:9000/user/hive/warehouse/hive.db/person/000000_0:0+11
16/03/27 02:10:37 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/03/27 02:10:37 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/03/27 02:10:37 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/03/27 02:10:37 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/03/27 02:10:37 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/03/27 02:10:45 INFO codegen.GenerateMutableProjection: Code generated in 5525.488798 ms
16/03/27 02:10:46 INFO codegen.GenerateUnsafeProjection: Code generated in 497.07644 ms
16/03/27 02:10:47 INFO codegen.GenerateMutableProjection: Code generated in 326.10197 ms
16/03/27 02:10:47 INFO codegen.GenerateUnsafeRowJoiner: Code generated in 185.831859 ms
16/03/27 02:10:47 INFO codegen.GenerateUnsafeProjection: Code generated in 199.900204 ms
16/03/27 02:10:49 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 2500 bytes result sent to driver
16/03/27 02:10:49 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 14563 ms on localhost (1/1)
16/03/27 02:10:49 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/03/27 02:10:49 INFO scheduler.DAGScheduler: ShuffleMapStage 0 (collect at <console>:30) finished in 15.180 s
16/03/27 02:10:49 INFO scheduler.DAGScheduler: looking for newly runnable stages
16/03/27 02:10:49 INFO scheduler.DAGScheduler: running: Set()
16/03/27 02:10:49 INFO scheduler.DAGScheduler: waiting: Set(ResultStage 1)
16/03/27 02:10:49 INFO scheduler.DAGScheduler: failed: Set()
16/03/27 02:10:49 INFO scheduler.DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[9] at collect at <console>:30), which has no missing parents
16/03/27 02:10:49 INFO storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 12.2 KB, free 627.1 KB)
16/03/27 02:10:49 INFO storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 6.1 KB, free 633.3 KB)
16/03/27 02:10:49 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:43229 (size: 6.1 KB, free: 517.4 MB)
16/03/27 02:10:49 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1006
16/03/27 02:10:50 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[9] at collect at <console>:30)
16/03/27 02:10:50 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
16/03/27 02:10:50 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, partition 0,NODE_LOCAL, 1999 bytes)
16/03/27 02:10:50 INFO executor.Executor: Running task 0.0 in stage 1.0 (TID 1)
16/03/27 02:10:50 INFO storage.ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
16/03/27 02:10:50 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 95 ms
16/03/27 02:10:50 INFO codegen.GenerateMutableProjection: Code generated in 176.590518 ms
16/03/27 02:10:52 INFO codegen.GenerateMutableProjection: Code generated in 209.671601 ms
16/03/27 02:10:52 INFO executor.Executor: Finished task 0.0 in stage 1.0 (TID 1). 1830 bytes result sent to driver
16/03/27 02:10:52 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 2267 ms on localhost (1/1)
16/03/27 02:10:52 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
16/03/27 02:10:52 INFO scheduler.DAGScheduler: ResultStage 1 (collect at <console>:30) finished in 2.272 s
16/03/27 02:10:52 INFO scheduler.DAGScheduler: Job 0 finished: collect at <console>:30, took 20.351784 s
[1]


The log output shows that it is Spark's engine that executes the query against Hive.


At this point, visit port 4040 on the Master machine to see information about the job just executed:


Click the job name, collect at <console>:30, to view the job's details:


The menu bar shows SQL, SQL1, and SQL2 tabs; clicking the SQL2 tab shows information about the SQL statement just run:




Comparing the run times of Spark SQL and Hive shows that, in this experiment, Spark SQL queried the Hive warehouse tens of times faster than querying directly through Hive.


Now consider sqlContext. Typing sqlContext in spark-shell prints:
org.apache.spark.sql.SQLContext = org.apache.spark.sql.hive.HiveContext@78762e87
So the sqlContext that spark-shell pre-builds is itself a HiveContext instance; together with the hiveContext we created above, this shows that multiple HiveContexts can coexist and run queries in parallel.
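A minimal sketch of that point, reusing the sc and the Hive database already set up in this session (nothing new beyond what the notes created):

// A second HiveContext over the same SparkContext; it connects to the same
// metastore, so both contexts can issue queries side by side.
val hiveContext2 = new org.apache.spark.sql.hive.HiveContext(sc)
hiveContext2.sql("use hive")
hiveContext2.sql("show tables").collect.foreach(println)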




Next, let's try the example from the Spark documentation (http://spark.apache.org/docs/latest/sql-programming-guide.html):
DataFrame Operations
DataFrames provide a domain-specific language for structured data manipulation in Scala, Java, Python and R.
Here we include some basic examples of structured data processing using DataFrames:
val sc: SparkContext // An existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// Create the DataFrame
val df = sqlContext.read.json("examples/src/main/resources/people.json")

// Show the content of the DataFrame
df.show()
// age  name
// null Michael
// 30   Andy
// 19   Justin

// Print the schema in a tree format
df.printSchema()
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show()
// name
// Michael
// Andy
// Justin

// Select everybody, but increment the age by 1
df.select(df("name"), df("age") + 1).show()
// name    (age + 1)
// Michael null
// Andy    31
// Justin  20

// Select people older than 21
df.filter(df("age") > 21).show()
// age name
// 30  Andy

// Count people by age
df.groupBy("age").count().show()
// age  count
// null 1
// 19   1
// 30   1
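For reference, the people.json file shipped with the Spark distribution contains three JSON records, one per line:

{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}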
Here are the results of running the example:


scala> val df = sqlContext.read.json("/user/data/SparkResources/people.json")
16/03/27 03:17:21 INFO json.JSONRelation: Listing hdfs://slq1:9000/user/data/SparkResources/people.json on driver
16/03/27 03:17:22 INFO storage.MemoryStore: Block broadcast_8 stored as values in memory (estimated size 211.8 KB, free 772.8 KB)
16/03/27 03:17:22 INFO storage.MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 19.8 KB, free 792.6 KB)
16/03/27 03:17:22 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in memory on localhost:43229 (size: 19.8 KB, free: 517.3 MB)
16/03/27 03:17:22 INFO spark.SparkContext: Created broadcast 8 from json at <console>:25
16/03/27 03:17:22 INFO mapred.FileInputFormat: Total input paths to process : 1
16/03/27 03:17:22 INFO spark.SparkContext: Starting job: json at <console>:25
16/03/27 03:17:22 INFO scheduler.DAGScheduler: Got job 3 (json at <console>:25) with 1 output partitions
16/03/27 03:17:22 INFO scheduler.DAGScheduler: Final stage: ResultStage 4 (json at <console>:25)
16/03/27 03:17:22 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/03/27 03:17:22 INFO scheduler.DAGScheduler: Missing parents: List()
16/03/27 03:17:22 INFO scheduler.DAGScheduler: Submitting ResultStage 4 (MapPartitionsRDD[23] at json at <console>:25), which has no missing parents
16/03/27 03:17:22 INFO storage.MemoryStore: Block broadcast_9 stored as values in memory (estimated size 4.3 KB, free 796.9 KB)
16/03/27 03:17:22 INFO storage.MemoryStore: Block broadcast_9_piece0 stored as bytes in memory (estimated size 2.4 KB, free 799.3 KB)
16/03/27 03:17:22 INFO storage.BlockManagerInfo: Added broadcast_9_piece0 in memory on localhost:43229 (size: 2.4 KB, free: 517.3 MB)
16/03/27 03:17:22 INFO spark.SparkContext: Created broadcast 9 from broadcast at DAGScheduler.scala:1006
16/03/27 03:17:22 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 4 (MapPartitionsRDD[23] at json at <console>:25)
16/03/27 03:17:22 INFO scheduler.TaskSchedulerImpl: Adding task set 4.0 with 1 tasks
16/03/27 03:17:22 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 4.0 (TID 4, localhost, partition 0,ANY, 2155 bytes)
16/03/27 03:17:23 INFO executor.Executor: Running task 0.0 in stage 4.0 (TID 4)
16/03/27 03:17:23 INFO rdd.HadoopRDD: Input split: hdfs://slq1:9000/user/data/SparkResources/people.json:0+73
16/03/27 03:17:23 INFO executor.Executor: Finished task 0.0 in stage 4.0 (TID 4). 2845 bytes result sent to driver
16/03/27 03:17:23 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 4.0 (TID 4) in 311 ms on localhost (1/1)
16/03/27 03:17:23 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool 
16/03/27 03:17:23 INFO scheduler.DAGScheduler: ResultStage 4 (json at <console>:25) finished in 0.316 s
16/03/27 03:17:23 INFO scheduler.DAGScheduler: Job 3 finished: json at <console>:25, took 0.436831 s
df: org.apache.spark.sql.DataFrame = [age: bigint, name: string]


scala> df.show()
16/03/27 03:17:42 INFO storage.MemoryStore: Block broadcast_10 stored as values in memory (estimated size 211.3 KB, free 1010.7 KB)
16/03/27 03:17:42 INFO storage.MemoryStore: Block broadcast_10_piece0 stored as bytes in memory (estimated size 19.7 KB, free 1030.4 KB)
16/03/27 03:17:42 INFO storage.BlockManagerInfo: Added broadcast_10_piece0 in memory on localhost:43229 (size: 19.7 KB, free: 517.3 MB)
16/03/27 03:17:42 INFO spark.SparkContext: Created broadcast 10 from show at <console>:28
16/03/27 03:17:43 INFO storage.MemoryStore: Block broadcast_11 stored as values in memory (estimated size 211.8 KB, free 1242.2 KB)
16/03/27 03:17:43 INFO storage.MemoryStore: Block broadcast_11_piece0 stored as bytes in memory (estimated size 19.8 KB, free 1262.0 KB)
16/03/27 03:17:43 INFO storage.BlockManagerInfo: Added broadcast_11_piece0 in memory on localhost:43229 (size: 19.8 KB, free: 517.3 MB)
16/03/27 03:17:43 INFO spark.SparkContext: Created broadcast 11 from show at <console>:28
16/03/27 03:17:44 INFO mapred.FileInputFormat: Total input paths to process : 1
16/03/27 03:17:44 INFO spark.SparkContext: Starting job: show at <console>:28
16/03/27 03:17:44 INFO scheduler.DAGScheduler: Got job 4 (show at <console>:28) with 1 output partitions
16/03/27 03:17:44 INFO scheduler.DAGScheduler: Final stage: ResultStage 5 (show at <console>:28)
16/03/27 03:17:44 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/03/27 03:17:44 INFO scheduler.DAGScheduler: Missing parents: List()
16/03/27 03:17:44 INFO scheduler.DAGScheduler: Submitting ResultStage 5 (MapPartitionsRDD[29] at show at <console>:28), which has no missing parents
16/03/27 03:17:44 INFO storage.MemoryStore: Block broadcast_12 stored as values in memory (estimated size 5.7 KB, free 1267.6 KB)
16/03/27 03:17:44 INFO storage.MemoryStore: Block broadcast_12_piece0 stored as bytes in memory (estimated size 3.2 KB, free 1270.9 KB)
16/03/27 03:17:44 INFO storage.BlockManagerInfo: Added broadcast_12_piece0 in memory on localhost:43229 (size: 3.2 KB, free: 517.3 MB)
16/03/27 03:17:44 INFO spark.SparkContext: Created broadcast 12 from broadcast at DAGScheduler.scala:1006
16/03/27 03:17:44 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 5 (MapPartitionsRDD[29] at show at <console>:28)
16/03/27 03:17:44 INFO scheduler.TaskSchedulerImpl: Adding task set 5.0 with 1 tasks
16/03/27 03:17:44 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 5.0 (TID 5, localhost, partition 0,ANY, 2155 bytes)
16/03/27 03:17:44 INFO executor.Executor: Running task 0.0 in stage 5.0 (TID 5)
16/03/27 03:17:44 INFO rdd.HadoopRDD: Input split: hdfs://slq1:9000/user/data/SparkResources/people.json:0+73
16/03/27 03:17:45 INFO executor.Executor: Finished task 0.0 in stage 5.0 (TID 5). 2508 bytes result sent to driver
16/03/27 03:17:45 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 5.0 (TID 5) in 464 ms on localhost (1/1)
16/03/27 03:17:45 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 5.0, whose tasks have all completed, from pool 
16/03/27 03:17:45 INFO scheduler.DAGScheduler: ResultStage 5 (show at <console>:28) finished in 0.469 s
16/03/27 03:17:45 INFO scheduler.DAGScheduler: Job 4 finished: show at <console>:28, took 0.663429 s
+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+




scala> df.printSchema
root
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)




scala> df.printSchema()
root
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)




scala> df.select("name").show
16/03/27 03:18:34 INFO storage.MemoryStore: Block broadcast_13 stored as values in memory (estimated size 211.3 KB, free 1482.2 KB)
16/03/27 03:18:34 INFO storage.MemoryStore: Block broadcast_13_piece0 stored as bytes in memory (estimated size 19.7 KB, free 1501.9 KB)
16/03/27 03:18:34 INFO storage.BlockManagerInfo: Added broadcast_13_piece0 in memory on localhost:43229 (size: 19.7 KB, free: 517.3 MB)
16/03/27 03:18:34 INFO spark.SparkContext: Created broadcast 13 from show at <console>:28
16/03/27 03:18:34 INFO storage.MemoryStore: Block broadcast_14 stored as values in memory (estimated size 211.8 KB, free 1713.7 KB)
16/03/27 03:18:35 INFO storage.MemoryStore: Block broadcast_14_piece0 stored as bytes in memory (estimated size 19.8 KB, free 1733.5 KB)
16/03/27 03:18:35 INFO storage.BlockManagerInfo: Added broadcast_14_piece0 in memory on localhost:43229 (size: 19.8 KB, free: 517.2 MB)
16/03/27 03:18:35 INFO spark.SparkContext: Created broadcast 14 from show at <console>:28
16/03/27 03:18:35 INFO mapred.FileInputFormat: Total input paths to process : 1
16/03/27 03:18:35 INFO spark.SparkContext: Starting job: show at <console>:28
16/03/27 03:18:35 INFO scheduler.DAGScheduler: Got job 5 (show at <console>:28) with 1 output partitions
16/03/27 03:18:35 INFO scheduler.DAGScheduler: Final stage: ResultStage 6 (show at <console>:28)
16/03/27 03:18:35 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/03/27 03:18:35 INFO scheduler.DAGScheduler: Missing parents: List()
16/03/27 03:18:35 INFO scheduler.DAGScheduler: Submitting ResultStage 6 (MapPartitionsRDD[35] at show at <console>:28), which has no missing parents
16/03/27 03:18:35 INFO storage.MemoryStore: Block broadcast_15 stored as values in memory (estimated size 5.7 KB, free 1739.2 KB)
16/03/27 03:18:35 INFO storage.MemoryStore: Block broadcast_15_piece0 stored as bytes in memory (estimated size 3.3 KB, free 1742.5 KB)
16/03/27 03:18:35 INFO storage.BlockManagerInfo: Added broadcast_15_piece0 in memory on localhost:43229 (size: 3.3 KB, free: 517.2 MB)
16/03/27 03:18:35 INFO spark.SparkContext: Created broadcast 15 from broadcast at DAGScheduler.scala:1006
16/03/27 03:18:35 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 6 (MapPartitionsRDD[35] at show at <console>:28)
16/03/27 03:18:35 INFO scheduler.TaskSchedulerImpl: Adding task set 6.0 with 1 tasks
16/03/27 03:18:35 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 6.0 (TID 6, localhost, partition 0,ANY, 2155 bytes)
16/03/27 03:18:35 INFO executor.Executor: Running task 0.0 in stage 6.0 (TID 6)
16/03/27 03:18:35 INFO rdd.HadoopRDD: Input split: hdfs://slq1:9000/user/data/SparkResources/people.json:0+73
16/03/27 03:18:36 INFO codegen.GenerateUnsafeProjection: Code generated in 238.622957 ms
16/03/27 03:18:36 INFO codegen.GenerateSafeProjection: Code generated in 314.33143 ms
16/03/27 03:18:37 INFO executor.Executor: Finished task 0.0 in stage 6.0 (TID 6). 2415 bytes result sent to driver
16/03/27 03:18:37 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 6.0 (TID 6) in 1211 ms on localhost (1/1)
16/03/27 03:18:37 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 6.0, whose tasks have all completed, from pool 
16/03/27 03:18:37 INFO scheduler.DAGScheduler: ResultStage 6 (show at <console>:28) finished in 1.214 s
16/03/27 03:18:37 INFO scheduler.DAGScheduler: Job 5 finished: show at <console>:28, took 1.341020 s
+-------+
|   name|
+-------+
|Michael|
|   Andy|
| Justin|
+-------+




scala> df.select(df("name"),df("age") + 1).show()
16/03/27 03:20:07 INFO storage.MemoryStore: Block broadcast_16 stored as values in memory (estimated size 211.3 KB, free 1953.8 KB)
16/03/27 03:20:08 INFO storage.MemoryStore: Block broadcast_16_piece0 stored as bytes in memory (estimated size 19.7 KB, free 1973.5 KB)
16/03/27 03:20:08 INFO storage.BlockManagerInfo: Added broadcast_16_piece0 in memory on localhost:43229 (size: 19.7 KB, free: 517.2 MB)
16/03/27 03:20:08 INFO spark.SparkContext: Created broadcast 16 from show at <console>:28
16/03/27 03:20:08 INFO storage.MemoryStore: Block broadcast_17 stored as values in memory (estimated size 211.8 KB, free 2.1 MB)
16/03/27 03:20:08 INFO storage.MemoryStore: Block broadcast_17_piece0 stored as bytes in memory (estimated size 19.8 KB, free 2.2 MB)
16/03/27 03:20:08 INFO storage.BlockManagerInfo: Added broadcast_17_piece0 in memory on localhost:43229 (size: 19.8 KB, free: 517.2 MB)
16/03/27 03:20:08 INFO spark.SparkContext: Created broadcast 17 from show at <console>:28
16/03/27 03:20:09 INFO mapred.FileInputFormat: Total input paths to process : 1
16/03/27 03:20:09 INFO spark.SparkContext: Starting job: show at <console>:28
16/03/27 03:20:09 INFO scheduler.DAGScheduler: Got job 6 (show at <console>:28) with 1 output partitions
16/03/27 03:20:09 INFO scheduler.DAGScheduler: Final stage: ResultStage 7 (show at <console>:28)
16/03/27 03:20:09 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/03/27 03:20:09 INFO scheduler.DAGScheduler: Missing parents: List()
16/03/27 03:20:09 INFO scheduler.DAGScheduler: Submitting ResultStage 7 (MapPartitionsRDD[42] at show at <console>:28), which has no missing parents
16/03/27 03:20:10 INFO storage.MemoryStore: Block broadcast_18 stored as values in memory (estimated size 7.8 KB, free 2.2 MB)
16/03/27 03:20:10 INFO storage.MemoryStore: Block broadcast_18_piece0 stored as bytes in memory (estimated size 4.2 KB, free 2.2 MB)
16/03/27 03:20:10 INFO storage.BlockManagerInfo: Added broadcast_18_piece0 in memory on localhost:43229 (size: 4.2 KB, free: 517.2 MB)
16/03/27 03:20:10 INFO spark.SparkContext: Created broadcast 18 from broadcast at DAGScheduler.scala:1006
16/03/27 03:20:10 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 7 (MapPartitionsRDD[42] at show at <console>:28)
16/03/27 03:20:10 INFO scheduler.TaskSchedulerImpl: Adding task set 7.0 with 1 tasks
16/03/27 03:20:10 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 7.0 (TID 7, localhost, partition 0,ANY, 2155 bytes)
16/03/27 03:20:10 INFO executor.Executor: Running task 0.0 in stage 7.0 (TID 7)
16/03/27 03:20:10 INFO rdd.HadoopRDD: Input split: hdfs://slq1:9000/user/data/SparkResources/people.json:0+73
16/03/27 03:20:10 INFO codegen.GenerateUnsafeProjection: Code generated in 331.131861 ms
16/03/27 03:20:11 INFO codegen.GenerateUnsafeProjection: Code generated in 245.055735 ms
16/03/27 03:20:11 INFO codegen.GenerateSafeProjection: Code generated in 203.81864 ms
16/03/27 03:20:11 INFO executor.Executor: Finished task 0.0 in stage 7.0 (TID 7). 2653 bytes result sent to driver
16/03/27 03:20:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 7.0 (TID 7) in 1438 ms on localhost (1/1)
16/03/27 03:20:11 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 7.0, whose tasks have all completed, from pool 
16/03/27 03:20:11 INFO scheduler.DAGScheduler: ResultStage 7 (show at <console>:28) finished in 1.442 s
16/03/27 03:20:11 INFO scheduler.DAGScheduler: Job 6 finished: show at <console>:28, took 1.643127 s
+-------+---------+
|   name|(age + 1)|
+-------+---------+
|Michael|     null|
|   Andy|       31|
| Justin|       20|
+-------+---------+




scala> df.filter(df("age") > 21).show()
16/03/27 03:20:44 INFO storage.MemoryStore: Block broadcast_19 stored as values in memory (estimated size 211.3 KB, free 2.4 MB)
16/03/27 03:20:44 INFO storage.MemoryStore: Block broadcast_19_piece0 stored as bytes in memory (estimated size 19.7 KB, free 2.4 MB)
16/03/27 03:20:44 INFO storage.BlockManagerInfo: Added broadcast_19_piece0 in memory on localhost:43229 (size: 19.7 KB, free: 517.2 MB)
16/03/27 03:20:44 INFO spark.SparkContext: Created broadcast 19 from show at <console>:28
16/03/27 03:20:44 INFO storage.MemoryStore: Block broadcast_20 stored as values in memory (estimated size 211.8 KB, free 2.6 MB)
16/03/27 03:20:44 INFO storage.MemoryStore: Block broadcast_20_piece0 stored as bytes in memory (estimated size 19.8 KB, free 2.6 MB)
16/03/27 03:20:44 INFO storage.BlockManagerInfo: Added broadcast_20_piece0 in memory on localhost:43229 (size: 19.8 KB, free: 517.2 MB)
16/03/27 03:20:44 INFO spark.SparkContext: Created broadcast 20 from show at <console>:28
16/03/27 03:20:45 INFO mapred.FileInputFormat: Total input paths to process : 1
16/03/27 03:20:45 INFO spark.SparkContext: Starting job: show at <console>:28
16/03/27 03:20:45 INFO scheduler.DAGScheduler: Got job 7 (show at <console>:28) with 1 output partitions
16/03/27 03:20:45 INFO scheduler.DAGScheduler: Final stage: ResultStage 8 (show at <console>:28)
16/03/27 03:20:45 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/03/27 03:20:45 INFO scheduler.DAGScheduler: Missing parents: List()
16/03/27 03:20:45 INFO scheduler.DAGScheduler: Submitting ResultStage 8 (MapPartitionsRDD[49] at show at <console>:28), which has no missing parents
16/03/27 03:20:45 INFO storage.MemoryStore: Block broadcast_21 stored as values in memory (estimated size 7.7 KB, free 2.6 MB)
16/03/27 03:20:45 INFO storage.MemoryStore: Block broadcast_21_piece0 stored as bytes in memory (estimated size 4.1 KB, free 2.6 MB)
16/03/27 03:20:45 INFO storage.BlockManagerInfo: Added broadcast_21_piece0 in memory on localhos