Test file employees.json (one JSON object per line), with the following contents:
{"name":"Michael", "salary":3000, "age": 28}
{"name":"Andy", "salary":4500}
{"name":"Justin", "salary":3500}
{"name":"Berta", "salary":4000}
{"name":"vincent", "salary":90000}
Create a DataFrame
package cn.ac.iie.spark
import org.apache.spark.sql.SparkSession
/**
 * Basic DataFrame API operations
 */
object DataFrameApp {
def main(args: Array[String]): Unit = {
val spark = SparkSession.builder().appName("DataFrameApp").master("local[2]").getOrCreate()
// Both the local filesystem and HDFS are supported
// Load the JSON file into a DataFrame
val peopleDF = spark.read.format("json").load("file:///E:/test/employees.json")
// Print the schema of the DataFrame
peopleDF.printSchema()
// By default, show() displays the first 20 rows
peopleDF.show()
// Select all values of one column, equivalent to SQL: select name from table
peopleDF.select("name").show()
peopleDF.select(peopleDF.col("name"), peopleDF.col("age") + 10).show()
peopleDF.select(peopleDF.col("name"), (peopleDF.col("age") + 10).as("age2")).show()
// Filter rows by a column value: select * from table where age > 20
peopleDF.filter(peopleDF.col("age") > 20).show()
// Group by a column, then aggregate: select age, count(1) from table group by age
peopleDF.groupBy(peopleDF.col("age")).count().show()
spark.stop()
}
}
show()
The show() method displays the first 20 rows by default; to display more rows, pass a count, e.g. show(100).
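As a sketch, assuming the peopleDF DataFrame defined above, show also takes a truncate flag so long cell values are printed in full:

```scala
peopleDF.show(100)                    // display up to 100 rows
peopleDF.show(100, truncate = false)  // additionally, do not truncate long cell values
```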
printSchema
peopleDF.printSchema()
show
peopleDF.show()
select
Select all values of a single column:
peopleDF.select("name").show()
Select several columns; an expression can also be applied to a column:
peopleDF.select(peopleDF.col("name"), peopleDF.col("age") + 10).show()
Give a column an alias:
peopleDF.select(peopleDF.col("name"), (peopleDF.col("age") + 10).as("age2")).show()
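The same projection can be written in two other ways, sketched below: selectExpr accepts SQL expression strings, and the $"col" syntax becomes available after importing spark.implicits._ (assuming the spark session from the code above):

```scala
import spark.implicits._  // enables the $"col" syntax

// SQL-expression form of the same query
peopleDF.selectExpr("name", "age + 10 as age2").show()

// Column-object form using the $ syntax
peopleDF.select($"name", ($"age" + 10).as("age2")).show()
```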
filter
peopleDF.filter(peopleDF.col("age") > 20).show()
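filter also accepts a SQL condition string, and where is a synonym for filter. Note that rows with a null age (e.g. Andy's record above, which has no age field) do not satisfy the comparison and are dropped:

```scala
// String-condition form of the same filter
peopleDF.filter("age > 20").show()

// where() is an alias for filter()
peopleDF.where(peopleDF.col("age") > 20).show()
```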
group
Group rows by a column, then apply an aggregation:
peopleDF.groupBy(peopleDF.col("age")).count().show()
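Beyond count, groupBy can feed several aggregations at once through agg. A minimal sketch using the built-in functions object (the alias names cnt, avg_salary, and max_salary are illustrative):

```scala
import org.apache.spark.sql.functions.{avg, count, max}

// Compute several aggregates per group in one pass
peopleDF.groupBy("age")
  .agg(
    count("*").as("cnt"),
    avg("salary").as("avg_salary"),
    max("salary").as("max_salary"))
  .show()
```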