Spark SQL Built-in Functions
The available functions are defined in org.apache.spark.sql.functions.scala.
For example, given the following access-log data:
val accessLog = Array(
"2016-12-27,001",
"2016-12-27,001",
"2016-12-27,002",
"2016-12-28,003",
"2016-12-28,004",
"2016-12-28,002",
"2016-12-28,002",
"2016-12-28,001"
)
- Define the table schema;
- Convert the RDD records to Rows;
- Create a DataFrame from the data and the schema;
- Apply the built-in functions.
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
import org.apache.spark.sql.{Row, SparkSession}
val accessLog = Array(
"2016-12-27,001",
"2016-12-27,001",
"2016-12-27,002",
"2016-12-28,003",
"2016-12-28,004",
"2016-12-28,002",
"2016-12-28,002",
"2016-12-28,001"
)
val schema = StructType(Array(
StructField("day", StringType, true),//true表示是否不为空
StructField("userId", IntegerType, true)
))
val rdd = sc.parallelize(accessLog).map(_.split(","))
.map(x => Row(x(0), x(1).toInt))
val df = spark.createDataFrame(rdd, schema)
//df.printSchema()
//Import the Spark SQL built-in functions
import org.apache.spark.sql.functions._
//Total page views (pv) per day
df.groupBy(df("day")).agg(count(df("userId")).as("pv"))
.collect().foreach(println)
//Distinct visitors (uv) per day
df.groupBy(df("day")).agg(countDistinct(df("userId")).as("uv"))
.show()
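The same aggregations can also be written in SQL after registering the DataFrame as a temporary view. A minimal sketch, assuming the df built above; the view name access_log is chosen here for illustration:

df.createOrReplaceTempView("access_log")
//pv per day: 2016-12-27 -> 3, 2016-12-28 -> 5 for the sample data above
spark.sql("select day, count(userId) as pv from access_log group by day").show()
//uv per day: 2016-12-27 -> 2, 2016-12-28 -> 4 for the sample data above
spark.sql("select day, count(distinct userId) as uv from access_log group by day").show()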
User-Defined Functions (UDFs)
- Define the function;
- Register the function:
SparkSession.udf.register(): effective only inside sql();
functions.udf(): works with the DataFrame API as well (see the sketch at the end of this section);
- Call the function.
import org.apache.spark.sql.SparkSession

case class Hobbies(name: String, hobbies: String)
def main(args: Array[String]): Unit = {
val spark = SparkSession.builder().master("local[1]").appName("UDF")
.getOrCreate()
val sc = spark.sparkContext
import spark.implicits._
//Each line of data/hobbies.txt is assumed to look like: alice reading,running
val rdd = sc.textFile("data/hobbies.txt")
val hobbyDF = rdd.map(_.split(" ")).map(p => Hobbies(p(0), p(1))).toDF()
hobbyDF.show()
hobbyDF.createTempView("hobbies")
//Register the user-defined function
spark.udf.register("hobby_num",(v:String)=>v.split(",").size)
//Call the user-defined function
spark.sql("select name,hobbies,hobby_num(hobbies) as hobby_num from hobbies")
.show()
}
We can also define the function first and then register it:
val fun = (x: String) => {
  x.split(",").size
}
spark.udf.register("hobby_num", fun)
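As mentioned above, functions.udf() makes a UDF usable directly from the DataFrame API, without going through sql(). A minimal sketch, assuming the hobbyDF built earlier; the column name num_hobbies is chosen here for illustration:

import org.apache.spark.sql.functions.udf
//Wrap the Scala function as a column-level UDF
val hobbyNum = udf((v: String) => v.split(",").size)
//Apply it directly to the DataFrame; no registration in the SQL catalog is needed
hobbyDF.withColumn("num_hobbies", hobbyNum(hobbyDF("hobbies"))).show()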