Spark SQL

Spark SQL Programming Guide

① Spark SQL is a module for structured data processing. Unlike the Spark RDD API, the Spark SQL API gives the Spark engine more information about the data, such as its structure and the computation being performed, and Spark uses this extra information internally to optimize and adjust the job. There are several ways to interact with Spark SQL, for example the Dataset API and SQL, and the two can be mixed freely. One use of Spark SQL is to execute SQL queries; Spark SQL can also be used to read data from an existing Hive installation. When SQL is run from within another programming language the results are returned as a Dataset/DataFrame, and you can also interact with the SQL interface through the command line or over JDBC/ODBC.
② A Dataset is a distributed collection of data. It is a new interface added in Spark 1.6 that provides the benefits of RDDs (strong typing, the ability to use powerful lambda functions) together with the benefits of Spark SQL's optimized execution engine. A Dataset can be constructed from JVM objects and then manipulated with functional transformations (map, flatMap, filter, and so on). The Dataset API is available in Scala and Java; Python's support for the Dataset API is still incomplete.
③ A DataFrame is a Dataset organized into named columns. It is conceptually equivalent to a table in a relational database. DataFrames can be constructed from many sources, for example structured data files, tables in Hive, or external databases. A DataFrame is simply a Dataset[Row].
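
The introduction mentions executing SQL queries directly; a minimal sketch of doing so is shown below. It assumes the SparkSession named spark created in the Driver Program section later on; the view name t_user and the sample rows are illustrative only.

import spark.implicits._
// build a small DataFrame and register it as a SQL temporary view
val df = List((1, "zhangsan", 18), (2, "wangwu", 28)).toDF("id", "name", "age")
df.createOrReplaceTempView("t_user")
// run a SQL query; the result comes back as a DataFrame
spark.sql("select id, name from t_user where age > 20").show()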

SparkSession

The entry point to all functionality in Spark is the SparkSession class. To create a basic SparkSession, just use:

SparkSession.builder()

Dependencies

<dependency>
	<groupId>org.apache.spark</groupId>
	<artifactId>spark-sql_2.11</artifactId>
	<version>2.4.3</version>
</dependency>

Driver Program

import org.apache.spark.sql.SparkSession
//1. Create the SparkSession
val spark = SparkSession.builder()
 .appName("hellosql")
 .master("local[10]")
 .getOrCreate()
//2. Import the implicit conversions, mainly to turn RDDs into DataFrames/Datasets
import spark.implicits._
spark.sparkContext.setLogLevel("FATAL")
//Stop Spark
spark.stop()

Creating a Dataset/DataFrame

Dataset

Datasets are similar to RDDs; however, instead of Java serialization or Kryo they use a specialized Encoder to serialize objects for processing or for transmission over the network. While both Encoders and standard serialization are responsible for turning an object into bytes, Encoders are generated dynamically as code and use a format that lets Spark perform many operations such as filtering, sorting and hashing without deserializing the bytes back into objects.
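
As a hedged illustration of the point above, a Dataset can also be built with an explicit Encoder instead of relying on the implicits; this sketch assumes the same Person case class used in the example that follows.

import org.apache.spark.sql.{Dataset, Encoders}
case class Person(id: Int, name: String, age: Int, sex: Boolean)
// Encoders.product derives an Encoder from the fields of the case class;
// normally `import spark.implicits._` supplies this encoder implicitly.
val personEncoder = Encoders.product[Person]
val ds: Dataset[Person] = spark.createDataset(
  Seq(Person(1, "zhangsan", 18, true), Person(2, "wangwu", 28, true))
)(personEncoder)
ds.show()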

case-class
case class Person(id:Int,name:String,age:Int,sex:Boolean)
val dataset: Dataset[Person] =List(Person(1,"zhangsan",18,true),Person(2,"wangwu",28,true)).toDS()
dataset.select($"id",$"name").show()
Tuple
val dataset: Dataset[(Int,String,Int,Boolean)] = List((1,"zhangsan",18,true),(2,"wangwu",28,true)).toDS()
dataset.select($"_1",$"_2").show()
//or
dataset.selectExpr("_1 as id","_2 as name","(_3 * 10) as age").show()
JSON data
{"name":"张三","age":18}
{"name":"lisi","age":28}
{"name":"wangwu","age":38}

case class Person(name:String,age:Long)
val dataset = spark.read.json("D:///Persion.json").as[Person]
dataset.show()
RDD
Tuple
val userRDD = spark.sparkContext.makeRDD(List((1,"张三",true,18,15000.0)))
userRDD.toDS().show()

+---+----+----+---+-------+
| _1| _2| _3| _4| _5|
+---+----+----+---+-------+
| 1|张三|true| 18|15000.0|
+---+----+----+---+-------+
case-class
case class User(id:Int,name:String,sex:Boolean,age:Int,salary:Double)
val userRDD = spark.sparkContext.makeRDD(List(User(1,"张三",true,18,15000.0)))
userRDD.toDS().show()

+---+----+----+---+-------+
| id|name| sex|age| salary|
+---+----+----+---+-------+
| 1|张三|true| 18|15000.0|
+---+----+----+---+-------+

DataFrame

JSON file
val frame = spark.read.json("file:///f:/person.json")
frame.show()
case-class
List(Person("zhangsan",18),Person("王五",20)).toDF("uname","uage").show()
Tuple
List(("zhangsan",18),("王五",20)).toDF("name","age").show()
RDD conversion
Row
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.StructType
val userRDD = spark.sparkContext.makeRDD(List((1,"张三",true,18,15000.0)))
 .map(t=>Row(t._1,t._2,t._3,t._4,t._5))
val schema = new StructType()
 .add("id","int")
 .add("name","string")
 .add("sex","boolean")
 .add("age","int")
 .add("salary","double")
spark.createDataFrame(userRDD,schema).show()

+---+----+----+---+-------+
| id|name| sex|age| salary|
+---+----+----+---+-------+
| 1|张三|true| 18|15000.0|
+---+----+----+---+-------+
JavaBean
val userRDD = spark.sparkContext.makeRDD(List(new User(1,"张三",true,18,15000.0)))
spark.createDataFrame(userRDD,classOf[User]).show()

Note: the User used here must be a JavaBean. If it is a plain Scala class you would have to provide the getXxx methods yourself (hardly worth the trouble); a sketch of such a bean follows the output below.

+---+----+----+---+-------+
| id|name| sex|age| salary|
+---+----+----+---+-------+
| 1|张三|true| 18|15000.0|
+---+----+----+---+-------+
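
A minimal sketch of a Scala class that qualifies as a JavaBean for createDataFrame, assuming the @BeanProperty annotation; this definition is illustrative and not part of the original example.

import scala.beans.BeanProperty
// @BeanProperty generates the getXxx/setXxx methods that Spark's bean-based schema inference looks for
class User(@BeanProperty var id: Int,
           @BeanProperty var name: String,
           @BeanProperty var sex: Boolean,
           @BeanProperty var age: Int,
           @BeanProperty var salary: Double) extends Serializable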
case-class
val userRDD = spark.sparkContext.makeRDD(List(User(1,"张三",true,18,15000.0)))
spark.createDataFrame(userRDD).show()

+---+----+----+---+-------+
| id|name| sex|age| salary|
+---+----+----+---+-------+
| 1|张三|true| 18|15000.0|
+---+----+----+---+-------+
Tuple
val userRDD = spark.sparkContext.makeRDD(List((1,"张三",true,18,15000.0)))
spark.createDataFrame(userRDD).show()

+---+----+----+---+-------+
| _1| _2| _3| _4| _5|
+---+----+----+---+-------+
| 1|张三|true| 18|15000.0|
+---+----+----+---+-------+

Dataset/DataFrame API operations

Sample data

1,Michael,false,29,2000
5,Lisa,false,19,1000
3,Justin,true,19,1000
2,Andy,true,30,5000
4,Kaine,false,20,5000

Let's turn this text data into a DataFrame: first convert it into an RDD, then convert the RDD into a DataFrame.

import org.apache.spark.rdd.RDD
case class User01(id:Int,name:String,sex:Boolean,age:Int,salary:Double)
// userlines is the RDD[String] read from the sample file above; the path here is illustrative
val userlines = spark.sparkContext.textFile("file:///f:/user.txt")
val userRDD:RDD[User01] = userlines.map(line=>line.split(","))
 .map(ts=>User01(ts(0).toInt,ts(1),ts(2).toBoolean,ts(3).toInt,ts(4).toDouble))
val userDataFrame = userRDD.toDF()

printSchema

Prints the schema of the DataFrame that was created.

userDataFrame.printSchema()

root
 |-- id: integer (nullable = false)
 |-- name: string (nullable = true)
 |-- sex: boolean (nullable = false)
 |-- age: integer (nullable = false)
 |-- salary: double (nullable = false)

show

By default, show prints the first 20 rows of a DataFrame or Dataset to the console; it is mainly used for testing.

userDataFrame.show()

+---+-------+-----+---+------+
| id| name| sex|age|salary|
+---+-------+-----+---+------+
| 1|Michael|false| 29|2000.0|
| 2| Andy| true| 30|5000.0|
| 3| Justin| true| 19|1000.0|
| 4| Kaine|false| 20|5000.0|
| 5| Lisa|false| 19|1000.0|
+---+-------+-----+---+------+

For example, to show only the first 2 rows: userDataFrame.show(2)

+---+-------+-----+---+------+
| id| name| sex|age|salary|
+---+-------+-----+---+------+
| 1|Michael|false| 29|2000.0|
| 2| Andy| true| 30|5000.0|
+---+-------+-----+---+------+

select

Equivalent to the SELECT clause in SQL; it projects out the required columns. Column names can be passed directly, but plain names do not support computed expressions.

userDataFrame.select("id","name","sex","age","salary").show()

+---+-------+-----+---+------+
| id| name| sex|age|salary|
+---+-------+-----+---+------+
| 1|Michael|false| 29|2000.0|
| 2| Andy| true| 30|5000.0|
| 3| Justin| true| 19|1000.0|
| 4| Kaine|false| 20|5000.0|
| 5| Lisa|false| 19|1000.0|
+---+-------+-----+---+------+

You can also pass Column objects to select, which lets you perform simple computations on the columns.

import org.apache.spark.sql.Column
userDataFrame.select(new Column("id"),new Column("name"),new Column("age"),
    new Column("salary"),new Column("salary").*(12))
 .show()

Shorthand form:

userDataFrame.select($"id",$"name",$"age",$"salary",$"salary" * 12) .show()

+---+-------+---+------+-------------+
| id| name|age|salary|(salary * 12)|
+---+-------+---+------+-------------+
| 1|Michael| 29|2000.0| 24000.0|
| 2| Andy| 30|5000.0| 60000.0|
| 3| Justin| 19|1000.0| 12000.0|
| 4| Kaine| 20|5000.0| 60000.0|
| 5| Lisa| 19|1000.0| 12000.0|
+---+-------+---+------+-------------+

selectExpr

selectExpr lets you refer to columns by name and apply common SQL operators to them as string expressions.

userDataFrame.selectExpr("id","name || '⽤户'","salary * 12 as annal_salary").show()
+---+------------------+------------+
| id|concat(name, ⽤户)|annal_salary|
+---+------------------+------------+
| 1| Michael⽤户| 24000.0|
| 2| Andy⽤户| 60000.0|
| 3| Justin⽤户| 12000.0|
| 4| Kaine⽤户| 60000.0|
| 5| Lisa⽤户| 12000.0|
+---+------------------+------------+

where

Similar to WHERE in SQL; it is mainly used to filter the query result. The operator accepts either a Column condition or a condition expression string.

userDataFrame.select($"id",$"name",$"age",$"salary",$"salary" * 12)
 .where($"name" like "%a%")
 .show()

Equivalent form:

userDataFrame.select($"id",$"name",$"age",$"salary",$"salary" * 12)
 .where("name like '%a%'")
 .show()

Note: avoid Chinese column aliases in Spark; if an alias contains Chinese, the where expression-string form misbehaves.

userDataFrame.select($"id",$"name",$"age",$"salary",$"salary" * 12 as "annal_salary")
 .where("(name like '%a%') and (annal_salary > 12000)" )
 .show() // works

userDataFrame.select($"id",$"name",$"age",$"salary",$"salary" * 12 as "年薪")
 .where("(name like '%a%') and ('年薪' > 12000)" )
 .show() // fails

userDataFrame.select($"id",$"name",$"age",$"salary",$"salary" * 12 as "年薪")
 .where($"name" like "%a%" and $"年薪" > 12000 )
 .show() // works

withColumn

withColumn adds a new column to a DataFrame.

userDataFrame.select($"id",$"name",$"age",$"salary",$"sex")
 .withColumn("年薪",$"salary" * 12)
 .show()
 
+---+-------+---+------+-----+-------+
| id| name|age|salary| sex| 年薪|
+---+-------+---+------+-----+-------+
| 1|Michael| 29|2000.0|false|24000.0|
| 2| Andy| 30|5000.0| true|60000.0|
| 3| Justin| 19|1000.0| true|12000.0|
| 4| Kaine| 20|5000.0|false|60000.0|
| 5| Lisa| 19|1000.0|false|12000.0|
+---+-------+---+------+-----+-------+

withColumnRenamed

Renames an existing column.

userDataFrame.select($"id",$"name",$"age",$"salary",$"sex")
 .withColumn("年薪",$"salary" * 12)
 .withColumnRenamed("年薪","annal_salary")
 .withColumnRenamed("id","uid")
 .show()

+---+-------+---+------+-----+------------+
|uid| name|age|salary| sex|annal_salary|
+---+-------+---+------+-----+------------+
| 1|Michael| 29|2000.0|false| 24000.0|
| 2| Andy| 30|5000.0| true| 60000.0|
| 3| Justin| 19|1000.0| true| 12000.0|
| 4| Kaine| 20|5000.0|false| 60000.0|
| 5| Lisa| 19|1000.0|false| 12000.0|
+---+-------+---+------+-----+------------+

groupBy

Works just like GROUP BY in SQL and is usually combined with aggregate functions.

userDataFrame.select($"id",$"name",$"age",$"salary",$"sex") .groupBy($"sex") .mean("salary")//计算平均值等价avg
.show()

+-----+------------------+
| sex| avg(salary)|
+-----+------------------+
| true| 3000.0|
|false|2666.6666666666665|
+-----+------------------+

There are similar operators such as max, min, sum and avg, but each of them applies only a single aggregate function. In general, use the agg operator to perform several aggregations at once; a direct use of one of the single-function operators is sketched just below.
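
For reference, a minimal sketch (on the same userDataFrame) of calling the single-function operators directly:

userDataFrame.groupBy($"sex").max("salary").show()        // one aggregate per call
userDataFrame.groupBy($"sex").avg("salary", "age").show() // avg over several numeric columns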

agg

Must follow groupBy; it calls several aggregate functions to compute the sum, maximum, minimum, average, etc. of the chosen columns.

import org.apache.spark.sql.functions._
userDataFrame.select($"id",$"name",$"age",$"salary",$"sex")
 .groupBy($"sex")
 .agg(sum("salary") as "sum", avg("salary") as "avg", max("salary") as "max", min("salary") as "min")
 .show()

+-----+------+------------------+------+------+
| sex| sum| avg| max| min|
+-----+------+------------------+------+------+
| true|6000.0| 3000.0|5000.0|1000.0|
|false|8000.0|2666.6666666666665|5000.0|1000.0|
+-----+------+------------------+------+------+

Or:

userDataFrame.select($"id",$"name",$"age",$"salary",$"sex")
 .groupBy($"sex")
 .agg("salary"->"sum","salary"->"avg","salary"->"max","salary"->"min")
 .show()

Window functions

Use over to apply a window specification.

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

val w = Window.partitionBy("sex")
 .orderBy($"salary" desc)
 .rowsBetween(Window.unboundedPreceding,Window.currentRow)
userDataFrame.select($"id",$"name",$"age",$"salary",$"sex")
 .withColumn("salary_rank",dense_rank() over w)
 .show()

The equivalent SQL:

select id,name,...,dense_rank() over(partition by sex order by salary desc rows between unbounded preceding and current row) from t_user

cube

Performs multi-dimensional aggregation, computing the result for every combination of the grouping columns.

import org.apache.spark.sql.functions._
spark.sparkContext.makeRDD(List((110,50,80),(120,60,95),(120,50,96)))
 .toDF("height","weight","score")
 .cube($"height",$"weight")
 .agg(avg("score"),max("score"))
 .show()

+------+------+-----------------+----------+
|height|weight| avg(score)|max(score)|
+------+------+-----------------+----------+
| 110| 50| 80.0| 80|
| 120| null| 95.5| 96|
| 120| 60| 95.0| 95|
| null| 60| 95.0| 95|
| null| null|90.33333333333333| 96|
| 120| 50| 96.0| 96|
| 110| null| 80.0| 80|
| null| 50| 88.0| 96|
+------+------+-----------------+----------+

pivot

This operator is borrowed from SQL Server and is mainly used to turn rows into columns.

case class UserCost(id:Int,category:String,cost:Double)
var userCostRDD=spark.sparkContext.parallelize(List(
 UserCost(1,"电⼦类",100),
 UserCost(1,"电⼦类",20),
 UserCost(1,"⺟婴类",100),
 UserCost(1,"⽣活⽤品",100),
 UserCost(2,"美⻝",79),
 UserCost(2,"电⼦类",80),
 UserCost(2,"⽣活⽤品",100)
))
var categories=userCostRDD.map(uc=>uc.category).distinct.collect()
userCostRDD.toDF("id","category","cost")
 .groupBy("id")
 .pivot($"category",categories)
 .sum("cost")
 .show()

+---+------+--------+------+----+
| id|⺟婴类|⽣活⽤品|电⼦类|美⻝|
+---+------+--------+------+----+
| 1| 100.0| 100.0| 120.0|null|
| 2| null| 100.0| 80.0|79.0|
+---+------+--------+------+----+

na

Provides automatic filling of columns that contain null values.

case class UserCost(id:Int,category:String,cost:Double)
var userCostRDD=spark.sparkContext.parallelize(List(
 UserCost(1,"电⼦类",100),
 UserCost(1,"电⼦类",20),
 UserCost(1,"⺟婴类",100),
 UserCost(1,"⽣活⽤品",100),
 UserCost(2,"美⻝",79),
 UserCost(2,"电⼦类",80),
 UserCost(2,"⽣活⽤品",100)
))
var categories=userCostRDD.map(uc=>uc.category).distinct.collect()
userCostRDD.toDF("id","category","cost")
 .groupBy("id")
 .pivot($"category",categories)
 .sum("cost")
 .na.fill(0.0)
 .show()

+---+------+--------+------+----+
| id|⺟婴类|⽣活⽤品|电⼦类|美⻝|
+---+------+--------+------+----+
| 1| 100.0| 100.0| 120.0| 0.0|
| 2| 0.0| 100.0| 80.0|79.0|
+---+------+--------+------+----+

Here fill means filling null values with the given replacement.

var userCostRDD=spark.sparkContext.parallelize(List(
 UserCost(1,"电⼦类",100),
 UserCost(1,"电⼦类",20),
 UserCost(1,"⺟婴类",100),
 UserCost(1,"⽣活⽤品",100),
 UserCost(2,"美⻝",79),
 UserCost(2,"电⼦类",80),
 UserCost(2,"⽣活⽤品",100)
))
var categories=userCostRDD.map(uc=>uc.category).distinct.collect()
userCostRDD.toDF("id","category","cost")
 .groupBy("id")
 .pivot($"category",categories)
 .sum("cost")
 .na.fill(Map("美⻝"-> -1,"⺟婴类"-> 1000))
 .show()

+---+------+--------+------+----+
| id|⺟婴类|⽣活⽤品|电⼦类|美⻝|
+---+------+--------+------+----+
| 1| 100.0| 100.0| 120.0|-1.0|
| 2|1000.0| 100.0| 80.0|79.0|
+---+------+--------+------+----+

na can also be followed by the drop operator, which removes rows that contain null values.

var userCostRDD=spark.sparkContext.parallelize(List(
 UserCost(1,"电⼦类",100),
 UserCost(1,"电⼦类",20),
 UserCost(1,"⺟婴类",100),
 UserCost(1,"⽣活⽤品",100),
 UserCost(2,"美⻝",79),
 UserCost(2,"电⼦类",80),
 UserCost(2,"⽣活⽤品",100)
))
var categories=userCostRDD.map(uc=>uc.category).distinct.collect()
userCostRDD.toDF("id","category","cost")
 .groupBy("id")
 .pivot($"category",categories)
 .sum("cost")
 //.na.drop(4) //drop rows that have fewer than four non-null values
 //.na.drop("any") //"any": drop if any column is null; "all": drop only if all columns are null
 .na.drop(List("美⻝","⺟婴类")) //drop rows where any of the listed columns is null
 .show()

join

Works like a join on a database table.

case class User01(id:Int,name:String,sex:Boolean,age:Int,salary:Double)
case class UserCost(id:Int,category:String,cost:Double)
var userCostRDD=spark.sparkContext.parallelize(List(
 UserCost(1,"电脑配件",100),
 UserCost(1,"⺟婴⽤品",100),
 UserCost(1,"⽣活⽤品",100),
 UserCost(2,"居家美⻝",79),
 UserCost(2,"消费电⼦",80),
 UserCost(2,"⽣活⽤品",100)
))
var userRDD=spark.sparkContext.parallelize(List(
 User01(1,"张晓三",true,18,15000),
 User01(2,"李晓四",true,18,18000),
 User01(3,"王晓五",false,18,10000)
))
val categories = userCostRDD.map(_.category).distinct().collect()
userCostRDD.toDF("id","category","cost")
 .groupBy("id")
 .pivot($"category",categories)
 .sum("cost")
 .join(userRDD.toDF("id","name","sex","age","salary"),"id")
 .na.fill(0.0)
 .show()

Or, with explicit aliases and a left outer join:

userCostRDD.toDF("id","category","cost").as("c")
 .groupBy("id")
 .pivot($"category",categories)
 .sum("cost")
 .join(userRDD.toDF("id","name","sex","age","salary").as("u"),$"c.id"===$"u.id","LEFT_OUTER")
 .na.fill(0.0)
 .show()