Spark SQL Architecture
Spark SQL is one of Spark's core components (introduced in Spark 1.0, April 2014)
Can directly access existing Hive data
Provides JDBC/ODBC interfaces so that third-party tools can process data through Spark
Provides higher-level interfaces for working with data conveniently
Supports multiple modes of operation: SQL and the programmatic API (see the sketch below)
Supports multiple external data sources: Parquet, JSON, RDBMS, etc.
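As a quick illustration of the two modes of operation, the following minimal sketch runs the same query once through SQL and once through the DataFrame API; it assumes the sample file in/user.json used later in this article has name and age fields.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("sqlVsApi").getOrCreate()

// Load an external JSON data source (assumed sample file)
val users = spark.read.json("in/user.json")
users.createOrReplaceTempView("users")

// 1. SQL mode
spark.sql("select name from users where age > 20").show()

// 2. API mode -- the same query expressed programmatically
users.where("age > 20").select("name").show()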
How Spark SQL Works
The Catalyst optimizer is the core of Spark SQL
Catalyst Optimizer: turns the logical plan of a query into a physical plan
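To see Catalyst at work, explain(true) prints the parsed, analyzed, and optimized logical plans as well as the final physical plan of a query. A minimal sketch, assuming the people temporary view registered in the RDD -> DataFrame example later in this article:
// Prints every plan Catalyst produces for this query:
// parsed logical plan -> analyzed logical plan -> optimized logical plan -> physical plan
spark.sql("select name from people where age > 30").explain(true)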
SparkContext
SQLContext
The programming entry point of Spark SQL
HiveContext
A subclass of SQLContext that adds more functionality, such as access to Hive
SparkSession (recommended since Spark 2.x)
SparkSession: unifies SQLContext and HiveContext
Provides a single entry point for interacting with Spark functionality and allows programming Spark with the DataFrame and Dataset APIs
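Because SparkSession absorbs both of the older contexts, Hive access is enabled on the same builder. A minimal sketch (enableHiveSupport() assumes a Hive configuration is available on the classpath; it can be omitted otherwise):
import org.apache.spark.sql.SparkSession

// Single entry point; enableHiveSupport() replaces the old HiveContext
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("entryPoint")
  .enableHiveSupport()   // only needed when reading existing Hive tables
  .getOrCreate()

// The old entry points remain reachable if legacy code needs them
val sc = spark.sparkContext
val sqlContext = spark.sqlContext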
Dataset
A strongly typed collection of domain-specific objects
1. The argument to createDataset() can be a Seq, an Array, or an RDD
2. The three createDataset() calls in the example below produce, respectively:
Dataset[Int], Dataset[(String, Int)], Dataset[(String, Int, Int)]
3. Dataset = RDD + Schema, so a Dataset shares most of its functions with RDD, such as map, filter, etc.
Creating a Dataset
import org.apache.spark.sql.{Dataset, SparkSession}

val spark: SparkSession = SparkSession.builder().master("local[*]").appName("demo").getOrCreate()
val sc = spark.sparkContext
import spark.implicits._

// Ways to create a Dataset: from a Seq/Range, from a List, or from an RDD
val ds1: Dataset[Int] = spark.createDataset(1 to 6)
ds1.show()
val ds2: Dataset[(String, Int)] = spark.createDataset(List(("a", 1), ("b", 2), ("c", 3)))
ds2.show()
val ds3: Dataset[(String, Int, Int)] = spark.createDataset(sc.parallelize(List(("gree", 17, 175), ("tom", 22, 180))))
ds3.show()
Creating a Dataset with case classes
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Dataset, SparkSession}

// Case classes; define them at the top level (outside the method) so Spark can derive their encoders
case class Point(label: String, x: Double, y: Double)
case class Category(i: Int, str: String)

val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("demo1")
val sc = SparkContext.getOrCreate(conf)
val spark: SparkSession = SparkSession.builder().master("local[*]").appName("demo").getOrCreate()
import spark.implicits._

val points = Seq(Point("njzb", 23.23, 48.71), Point("njnz", 26.12, 48.33))
val pointsDS: Dataset[Point] = points.toDS()
pointsDS.printSchema()
pointsDS.select("label").show()

val categories = Seq(Category(1, "njzb"), Category(2, "njnz"))
val cateDS: Dataset[Category] = categories.toDS()
cateDS.printSchema()

// RDD -> Dataset
val rdd: RDD[Int] = sc.parallelize(1 to 6)
val ds: Dataset[Int] = rdd.toDS()
DataFrame
DataFrame = Dataset[Row]
Similar to a two-dimensional table in a traditional database
Adds a schema (structural information about the data) on top of RDD
A DataFrame schema supports nested data types (see the sketch after this list)
struct
map
array
Provides more SQL-like operation APIs
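A minimal sketch of a nested schema, using made-up field names, just to show struct, array, and map appearing inside one DataFrame schema:
import org.apache.spark.sql.types._

// A schema with nested types: a struct field, an array field, and a map field (field names are hypothetical)
val nestedSchema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("address", StructType(Seq(                                    // struct
    StructField("city", StringType, nullable = true),
    StructField("zip", StringType, nullable = true)
  )), nullable = true),
  StructField("hobbies", ArrayType(StringType), nullable = true),           // array
  StructField("scores", MapType(StringType, IntegerType), nullable = true)  // map
))
nestedSchema.printTreeString()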
RDD vs. DataFrame
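As a rough illustration of the difference, a minimal sketch that runs the same filter first on a plain RDD of tuples (no schema, positional field access) and then on a DataFrame (named, typed columns, optimized by Catalyst); the names and data are assumed for illustration:
import spark.implicits._

// RDD: no schema, fields are accessed by position
val rdd = spark.sparkContext.parallelize(Seq(("Michael", 29), ("Andy", 30), ("Justin", 19)))
rdd.filter(_._2 > 20).collect().foreach(println)

// DataFrame: columns have names and types, and the query goes through Catalyst
val df = rdd.toDF("name", "age")
df.filter("age > 20").show()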
Creating a DataFrame by loading a file
import org.apache.spark.sql.{Column, SparkSession}

val spark = SparkSession.builder().master("local[*]").appName("jsonsession").getOrCreate()
// Note: the "header" option only applies to CSV sources; JSON ignores it
val frame = spark.read.format("json").option("header", true).load("in/user.json")
frame.printSchema()
frame.show()
// A column can be referenced either as a Column object or simply by name
val nameColumn: Column = frame("name")
frame.select(nameColumn).show()
frame.select("name").show()
RDD -> DataFrame
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("demo1")
val sc = SparkContext.getOrCreate(conf)
val spark: SparkSession = SparkSession.builder().master("local[*]").appName("demo")
  .getOrCreate()
import spark.implicits._

val people: RDD[String] = sc.textFile("in/people.txt")
val schemaString = "id name age"
// Alternative: build the schema from the string above, with every field typed as String
// val schema: StructType = StructType(schemaString.split(" ").map(x => StructField(x, StringType, true)))
// val fields: Array[StructField] = schemaString.split(" ").map(x => StructField(x, StringType, true))

// Define the schema explicitly so id and age keep their integer types
val fields1 = Array(
  StructField("id", IntegerType, true),
  StructField("name", StringType, true),
  StructField("age", IntegerType, true)
)
val schema: StructType = StructType(fields1)

// Turn each text line into a Row matching the schema, then build the DataFrame
val row: RDD[Row] = people.map(x => x.split(" ")).map(x => Row(x(0).toInt, x(1), x(2).toInt))
val peopleDF: DataFrame = spark.createDataFrame(row, schema)
peopleDF.printSchema()
peopleDF.show()

// Register a temporary view and query it with SQL
peopleDF.createOrReplaceTempView("people")
val resultDF: DataFrame = spark.sql("select name from people where age > 30")
resultDF.show()

// Read/write Parquet files (run the write at least once before reading the path back)
// peopleDF.write.parquet("out/parquettest")
val frame1: DataFrame = spark.read.parquet("out/parquettest")
frame1.printSchema()
frame1.show()
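For RDDs of case classes, the same conversion can also be done by reflection instead of an explicit StructType. A minimal sketch under that assumption; the Person case class is made up for illustration and, like the earlier case classes, must be defined outside the method so its encoder can be derived:
// Hypothetical case class mirroring the id/name/age fields of people.txt
case class Person(id: Int, name: String, age: Int)

import spark.implicits._
val peopleDF2: DataFrame = sc.textFile("in/people.txt")
  .map(_.split(" "))
  .map(x => Person(x(0).toInt, x(1), x(2).toInt))
  .toDF()   // the schema is inferred from the case class fields
peopleDF2.printSchema()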
DataFrame -> RDD
/** people.json contains the following lines:
 * {"name":"Michael"}
 * {"name":"Andy", "age":30}
 * {"name":"Justin", "age":19}
 */
val df = spark.read.json("file:///home/hadoop/data/people.json")
// Convert the DataFrame to an RDD[Row]
val rowRDD: RDD[Row] = df.rdd
rowRDD.collect().foreach(println)
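Once the DataFrame is back as an RDD[Row], individual fields can be read from each Row by name or position; a small follow-up sketch continuing the example above:
// Pull the "name" field out of each Row (getAs returns the value with the requested type)
val names = df.rdd.map(row => row.getAs[String]("name")).collect()
names.foreach(println)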