1. Generic Load/Save Functions
1.1 What Is a Parquet File?
Parquet is a columnar storage file format. Columnar storage has the following key advantages:
(1) Data that does not match the query conditions can be skipped, so only the required data is read, reducing I/O volume.
(2) Compression encodings reduce disk space. Because all values in a column share the same data type, more efficient compression encodings (such as Run Length Encoding and Delta Encoding) can be used to save additional space.
(3) Only the required columns are read, and vectorized operations are supported, yielding better scan performance.
(4) Parquet is the default data source for Spark SQL, configurable via spark.sql.sources.default.
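As a sketch of point (4): in a standalone program (where no spark-shell session exists yet), the default data source can be switched when the session is built. The paths and app name below are illustrative.

```scala
import org.apache.spark.sql.SparkSession

// Build a session whose generic load/save default is JSON instead of Parquet.
val spark = SparkSession.builder()
  .appName("DefaultSourceDemo")
  .master("local[*]")
  .config("spark.sql.sources.default", "json")
  .getOrCreate()

// With the default switched, a plain load() now expects JSON input.
val df = spark.read.load("/root/resources/people.json")
```

Inside spark-shell the session already exists, so this applies mainly to self-contained applications.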
1.2 Generic Load/Save Functions
(1) Read a Parquet file:
val usersDF = spark.read.load("/root/resources/users.parquet")
(2) Inspect the schema and the data.
(3) Select the users' name and favorite color, and save the result:
usersDF.select($"name",$"favorite_color").write.save("/root/result/parquet")
(4) Verify the result.
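Steps (2) and (4) above are described without code; a minimal sketch, assuming the paths used earlier:

```scala
// (2) Inspect schema and data
usersDF.printSchema()
usersDF.show()

// (4) Verify: read the saved result back and display it
val resultDF = spark.read.load("/root/result/parquet")
resultDF.show()
```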
1.3 Explicitly Specifying the File Format: Loading JSON
(1) Loading the file directly fails, because load() defaults to the Parquet format:
val usersDF = spark.read.load("/root/resources/people.json")
(2) The file type must be specified with format("json"):
val usersDF = spark.read.format("json").load("/root/resources/people.json")
scala> testResult.show
+----+------+-----+------+----------+---------+----+----+
|comm|deptno|empno| ename| hiredate| job| mgr| sal|
+----+------+-----+------+----------+---------+----+----+
| | 20| 7369| SMITH|1980/12/17| CLERK|7902| 800|
| 300| 30| 7499| ALLEN| 1981/2/20| SALESMAN|7698|1600|
| 500| 30| 7521| WARD| 1981/2/22| SALESMAN|7698|1250|
| | 20| 7566| JONES| 1981/4/2| MANAGER|7839|2975|
|1400| 30| 7654|MARTIN| 1981/9/28| SALESMAN|7698|1250|
| | 30| 7698| BLAKE| 1981/5/1| MANAGER|7839|2850|
| | 10| 7782| CLARK| 1981/6/9| MANAGER|7839|2450|
| | 20| 7788| SCOTT| 1987/4/19| ANALYST|7566|3000|
| | 10| 7839| KING|1981/11/17|PRESIDENT| |5000|
| 0| 30| 7844|TURNER| 1981/9/8| SALESMAN|7698|1500|
| | 20| 7876| ADAMS| 1987/5/23| CLERK|7788|1100|
| | 30| 7900| JAMES| 1981/12/3| CLERK|7698| 950|
| | 20| 7902| FORD| 1981/12/3| ANALYST|7566|3000|
| | 10| 7934|MILLER| 1982/1/23| CLERK|7782|1300|
+----+------+-----+------+----------+---------+----+----+
1.4 Save Modes
A save operation can take a SaveMode, which defines how existing data at the target is handled. Note that these save modes do not use any locking and are not atomic. Also, when Overwrite is used, the existing data is deleted before the new data is written. The SaveMode options are:
(1) SaveMode.ErrorIfExists ("error", the default): throw an exception if data already exists at the target.
(2) SaveMode.Append ("append"): append the new data to the existing data.
(3) SaveMode.Overwrite ("overwrite"): replace the existing data with the new data.
(4) SaveMode.Ignore ("ignore"): do nothing if data already exists, similar to CREATE TABLE IF NOT EXISTS in SQL.
Demo:
usersDF.select($"name").write.save("/root/result/parquet1")
--> This fails if /root/result/parquet1 already exists, because the default mode is "error".
usersDF.select($"name").write.mode("overwrite").save("/root/result/parquet1")
1.5 Saving the Result as a Table
usersDF.select($"name").write.saveAsTable("table1")
Inspect the data:
scala> spark.sql("select * from table1").show
+------+
| name|
+------+
|Alyssa|
| Ben|
+------+
Partitioning and bucketing are also supported, via partitionBy and bucketBy.
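A minimal sketch of both operations, reusing the users data from above (the output path and table name are illustrative):

```scala
// Partition the output files on disk by favorite_color;
// each distinct value gets its own subdirectory.
usersDF.write
  .partitionBy("favorite_color")
  .mode("overwrite")
  .save("/root/result/partitioned")

// Bucketing requires saveAsTable, because the bucket
// metadata is stored in the metastore.
usersDF.write
  .bucketBy(4, "name")
  .sortBy("name")
  .saveAsTable("users_bucketed")
```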
2. Parquet Files
Parquet is a columnar format used by many data processing systems. Spark SQL supports both reading and writing Parquet files, automatically preserving the schema of the original data. When writing Parquet files, all columns are automatically converted to nullable for compatibility reasons.
2.1 Example
Read JSON data, convert it to Parquet format, and create a corresponding table so the data can be queried with SQL.
The contents of emp.json are as follows:
{"empno":7369,"ename":"SMITH","job":"CLERK","mgr":"7902","hiredate":"1980/12/17","sal":800,"comm":"","deptno":20}
{"empno":7499,"ename":"ALLEN","job":"SALESMAN","mgr":"7698","hiredate":"1981/2/20","sal":1600,"comm":"300","deptno":30}
{"empno":7521,"ename":"WARD","job":"SALESMAN","mgr":"7698","hiredate":"1981/2/22","sal":1250,"comm":"500","deptno":30}
{"empno":7566,"ename":"JONES","job":"MANAGER","mgr":"7839","hiredate":"1981/4/2","sal":2975,"comm":"","deptno":20}
{"empno":7654,"ename":"MARTIN","job":"SALESMAN","mgr":"7698","hiredate":"1981/9/28","sal":1250,"comm":"1400","deptno":30}
{"empno":7698,"ename":"BLAKE","job":"MANAGER","mgr":"7839","hiredate":"1981/5/1","sal":2850,"comm":"","deptno":30}
{"empno":7782,"ename":"CLARK","job":"MANAGER","mgr":"7839","hiredate":"1981/6/9","sal":2450,"comm":"","deptno":10}
{"empno":7788,"ename":"SCOTT","job":"ANALYST","mgr":"7566","hiredate":"1987/4/19","sal":3000,"comm":"","deptno":20}
{"empno":7839,"ename":"KING","job":"PRESIDENT","mgr":"","hiredate":"1981/11/17","sal":5000,"comm":"","deptno":10}
{"empno":7844,"ename":"TURNER","job":"SALESMAN","mgr":"7698","hiredate":"1981/9/8","sal":1500,"comm":"0","deptno":30}
{"empno":7876,"ename":"ADAMS","job":"CLERK","mgr":"7788","hiredate":"1987/5/23","sal":1100,"comm":"","deptno":20}
{"empno":7900,"ename":"JAMES","job":"CLERK","mgr":"7698","hiredate":"1981/12/3","sal":950,"comm":"","deptno":30}
{"empno":7902,"ename":"FORD","job":"ANALYST","mgr":"7566","hiredate":"1981/12/3","sal":3000,"comm":"","deptno":20}
{"empno":7934,"ename":"MILLER","job":"CLERK","mgr":"7782","hiredate":"1982/1/23","sal":1300,"comm":"","deptno":10}
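A sketch of the conversion described above, assuming emp.json sits under /root/resources (the `testResult.show` output shown earlier under section 1.3 appears to correspond to this kind of query):

```scala
// Read the JSON source and write it back out as Parquet
val empDF = spark.read.json("/root/resources/emp.json")
empDF.write.mode("overwrite").parquet("/root/result/emp_parquet")

// Read the Parquet data, register a temporary view, and query it with SQL
val empParquetDF = spark.read.parquet("/root/result/emp_parquet")
empParquetDF.createOrReplaceTempView("emp")
val testResult = spark.sql("select * from emp")
testResult.show
```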
2.2 Schema Merging
Parquet supports schema evolution (that is, merging): a user can start with a simple schema and gradually add column descriptions to it. In this way, the user may end up with multiple Parquet files that have different but mutually compatible schemas.
Demo:
val df1 = sc.makeRDD(1 to 5).map(i => (i, i * 2)).toDF("single", "double")
df1.write.parquet("/root/myresult/test_table/wgh=1")
val df2 = sc.makeRDD(6 to 10).map(i => (i, i * 3)).toDF("single", "triple")
df2.write.parquet("/root/myresult/test_table/wgh=2")
val df3 = spark.read.option("mergeSchema", "true").parquet("/root/myresult/test_table/")
df3.printSchema()
Inspect the merged data:
scala> df3.show
+------+------+------+---+
|single|double|triple|wgh|
+------+------+------+---+
| 8| null| 24| 2|
| 9| null| 27| 2|
| 10| null| 30| 2|
| 3| 6| null| 1|
| 4| 8| null| 1|
| 5| 10| null| 1|
| 6| null| 18| 2|
| 7| null| 21| 2|
| 1| 2| null| 1|
| 2| 4| null| 1|
+------+------+------+---+
3. JSON Datasets
Spark SQL can automatically infer the schema of a JSON dataset and load it as a DataFrame, using spark.read.json(). This method converts either a JSON file or a collection of JSON strings (an RDD or Dataset[String]) into a DataFrame.
Note that the JSON files here are not in the conventional multi-line JSON format: each line of the file must contain a separate, self-contained, valid JSON object. Spreading one JSON object across multiple lines will cause the read to fail. An example of reading a JSON dataset follows:
3.1 Using Spark's Bundled Example File --> people.json
Define the path:
val path ="/root/resources/people.json"
Read the JSON file into a DataFrame:
val peopleDF = spark.read.json(path)
Print the schema:
peopleDF.printSchema()
Create a temporary view:
peopleDF.createOrReplaceTempView("people")
Run a query:
spark.sql("SELECT name FROM people WHERE age=19").show
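As mentioned above, json() also accepts JSON strings directly. A minimal sketch with inline data (the names and values are illustrative):

```scala
// Build a Dataset[String] with one JSON object per element
// and read it directly, without touching the file system.
import spark.implicits._

val jsonStrings = Seq(
  """{"name":"Justin","age":19}""",
  """{"name":"Andy","age":30}"""
).toDS()

val inlineDF = spark.read.json(jsonStrings)
inlineDF.show()
```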
4. Using JDBC
Spark SQL also supports reading data from other databases over JDBC as a data source.
Demo: use Spark SQL to read a table from an Oracle database.
(1) When starting the Spark shell, specify the Oracle JDBC driver:
spark-shell --master spark://spark81:7077
--jars /root/temp/ojdbc6.jar
--driver-class-path /root/temp/ojdbc6.jar
(2) Read data from the Oracle database.
4.1 Option 1:
val oracleDF = spark.read.format("jdbc").
option("url","jdbc:oracle:thin:@192.168.88.101:1521/orcl.example.com").
option("dbtable","scott.emp").
option("user","scott").
option("password","tiger").
load
4.2 Option 2:
Import the required class:
import java.util.Properties
Define the connection properties:
val oracleprops = new Properties()
oracleprops.setProperty("user","scott")
oracleprops.setProperty("password","tiger")
Read the data:
val oracleEmpDF =
spark.read.jdbc("jdbc:oracle:thin:@192.168.88.101:1521/orcl.example.com",
"scott.emp",oracleprops)
Note: the steps below are for reading from an Oracle 10g database (on Windows).
5. Using Hive Tables
(1) First, set up a Hive environment (this requires Hadoop).
(2) Configure Spark SQL to support Hive:
Simply copy the following files into the $SPARK_HOME/conf directory:
$HIVE_HOME/conf/hive-site.xml
$HADOOP_CONF_DIR/core-site.xml
$HADOOP_CONF_DIR/hdfs-site.xml
(3) Operate on Hive from the Spark shell:
1) When starting the Spark shell, use --jars to specify the MySQL driver (for the metastore).
2) Create a table:
spark.sql("create table src (key INT, value STRING) row format delimited fields terminated by ','")
3) Load data:
spark.sql("load data local inpath '/root/temp/data.txt' into table src")
4) Query the data:
spark.sql("select * from src").show
(4) Operate on Hive with spark-sql:
1) When starting spark-sql, use --jars to specify the MySQL driver.
2) Operate on Hive directly with SQL:
show tables;
select * from <table>;