1. Spark SQL is a Spark module for structured data processing. Unlike the basic RDD API, the interfaces Spark SQL provides give Spark more information about the structure of the data and of the computation being performed, and Spark SQL uses that extra information to perform additional optimizations.
2. One of Spark SQL's uses is executing SQL queries:
(1) HiveQL is supported
(2) A SQL query returns its results as a Dataset/DataFrame
3. Dataset: a distributed collection of data
4. DataFrame: a Dataset organized into named columns; in Scala, DataFrame is simply a type alias for Dataset[Row] (see the sketch below)
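The distinction is easiest to see in code. A minimal sketch, assuming the SparkSession named spark from item 5 below together with its spark.implicits._ import, and a hypothetical case class Word:

import org.apache.spark.sql.{DataFrame, Dataset}

case class Word(text: String, count: Long)   // hypothetical record type

// Typed API: the compiler knows every element is a Word
val ds: Dataset[Word] = Seq(Word("spark", 2L)).toDS()

// Untyped API: a DataFrame is Dataset[Row], so fields are looked up by column name at runtime
val df: DataFrame = ds.toDF()
df.first().getAs[String]("text")             // "spark"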
5. SparkSession:
(1) SparkSession provides built-in support for executing HiveQL and for reading data from Hive tables.
(2) The entry point to all of Spark's functionality is the SparkSession; create one with SparkSession.builder():
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .appName("Spark SQL basic example")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()

// For implicit conversions like converting RDDs to DataFrames
import spark.implicits._

6. Creating DataFrames: once a SparkSession exists, a DataFrame can be created.
Creating a DataFrame from a file:
val df = spark.read.json("examples/src/main/resources/people.json")

// Displays the content of the DataFrame to stdout
df.show()
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+
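Other built-in data sources follow the same spark.read pattern. A minimal sketch; users.parquet ships with the Spark examples, while the CSV path is a placeholder:

// Parquet, the default columnar format
val usersDF = spark.read.parquet("examples/src/main/resources/users.parquet")

// CSV, with an option telling Spark to treat the first line as column names
val csvDF = spark.read.option("header", "true").csv("path/to/people.csv")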
7. Untyped Dataset (DataFrame) operations: see http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.Dataset and, for the built-in column functions, http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.functions$ (a short sketch of those functions follows the examples below)
// This import is needed to use the $-notation
import spark.implicits._

// Print the schema in a tree format
df.printSchema()
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show()
// +-------+
// |   name|
// +-------+
// |Michael|
// |   Andy|
// | Justin|
// +-------+

// Select everybody, but increment the age by 1
df.select($"name", $"age" + 1).show()
// +-------+---------+
// |   name|(age + 1)|
// +-------+---------+
// |Michael|     null|
// |   Andy|       31|
// | Justin|       20|
// +-------+---------+

// Select people older than 21
df.filter($"age" > 21).show()
// +---+----+
// |age|name|
// +---+----+
// | 30|Andy|
// +---+----+

// Count people by age
df.groupBy("age").count().show()
// +----+-----+
// | age|count|
// +----+-----+
// |  19|    1|
// |null|    1|
// |  30|    1|
// +----+-----+
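The functions object linked in item 7 adds ready-made column expressions on top of the Dataset methods shown above. A minimal sketch reusing df:

import org.apache.spark.sql.functions._

// upper() transforms each value of a column; avg() aggregates over the whole column
df.select(upper($"name")).show()
df.agg(avg($"age")).show()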
8. Running SQL queries programmatically:

// Register the DataFrame as a SQL temporary view
df.createOrReplaceTempView("people")

val sqlDF = spark.sql("SELECT * FROM people")
sqlDF.show()
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+

9. Global temporary views:
(1) Shared across all sessions
(2) Kept alive until the Spark application terminates
(3) Tied to the system-preserved database global_temp, so queries must use the global_temp. prefix
// Register the DataFrame as a global temporary view
df.createGlobalTempView("people")

// Global temporary view is tied to a system preserved database `global_temp`
spark.sql("SELECT * FROM global_temp.people").show()
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+

// Global temporary view is cross-session
spark.newSession().sql("SELECT * FROM global_temp.people").show()
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+
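By contrast, the plain temporary view from item 8 is session-scoped. A minimal sketch to confirm the difference, wrapping the lookup in Try so the expected failure does not abort the shell:

import scala.util.Try

// "people" was registered with createOrReplaceTempView, so a fresh session cannot see it
val visibleElsewhere = Try(spark.newSession().sql("SELECT * FROM people")).isSuccess
println(visibleElsewhere)  // false: plain temp views disappear with their session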
10. Creating Datasets:
(1) Instead of Java serialization or Kryo, Datasets use a specialized Encoder to serialize objects for processing and network transfer.
(2) The Encoder converts the data into a format that Spark can process.
// Note: Case classes in Scala 2.10 can support only up to 22 fields. To work around this limit,
// you can use custom classes that implement the Product interface
case class Person(name: String, age: Long)

// Encoders are created for case classes
val caseClassDS = Seq(Person("Andy", 32)).toDS()
caseClassDS.show()
// +----+---+
// |name|age|
// +----+---+
// |Andy| 32|
// +----+---+

// Encoders for most common types are automatically provided by importing spark.implicits._
val primitiveDS = Seq(1, 2, 3).toDS()
primitiveDS.map(_ + 1).collect() // Returns: Array(2, 3, 4)

// DataFrames can be converted to a Dataset by providing a class. Mapping will be done by name
val path = "examples/src/main/resources/people.json"
val peopleDS = spark.read.json(path).as[Person]
peopleDS.show()
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+
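To make item 10(2) concrete: an Encoder carries the mapping between a JVM type and Spark's internal binary representation, including the SQL schema of its fields. A minimal sketch using the Encoders factory:

import org.apache.spark.sql.{Encoder, Encoders}

// The encoder derived for a case class exposes the Spark SQL schema it maps to
val personEncoder: Encoder[Person] = Encoders.product[Person]
println(personEncoder.schema)
// StructType(StructField(name,StringType,true), StructField(age,LongType,false))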
11. Interoperating with RDDs (converting an existing RDD into a DataFrame):
(1) When the RDD's element type is known: use reflection, mapping each record onto a case class:
// For implicit conversions from RDDs to DataFrames
import spark.implicits._

// Create an RDD of Person objects from a text file, convert it to a DataFrame
val peopleDF = spark.sparkContext
  .textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(attributes => Person(attributes(0), attributes(1).trim.toInt))
  .toDF()

// Register the DataFrame as a temporary view
peopleDF.createOrReplaceTempView("people")
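With the view registered, SQL runs against it and the results come back as a DataFrame; this query mirrors the teenager example in the official Spark SQL guide:

val teenagersDF = spark.sql("SELECT name, age FROM people WHERE age BETWEEN 13 AND 19")

// Row columns can be accessed by index (or by field name via getAs)
teenagersDF.map(teenager => "Name: " + teenager(0)).show()
// +------------+
// |       value|
// +------------+
// |Name: Justin|
// +------------+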
(2) When the element types are not known until runtime: build the schema programmatically and apply it to an RDD of Rows:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// Create an RDD
val peopleRDD = spark.sparkContext.textFile("examples/src/main/resources/people.txt")

// The schema is encoded in a string
val schemaString = "name age"

// Generate the schema based on the string of schema
val fields = schemaString.split(" ")
  .map(fieldName => StructField(fieldName, StringType, nullable = true))
val schema = StructType(fields)

// Convert records of the RDD (people) to Rows
val rowRDD = peopleRDD
  .map(_.split(","))
  .map(attributes => Row(attributes(0), attributes(1).trim))

// Apply the schema to the RDD
val peopleDF = spark.createDataFrame(rowRDD, schema)
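As before, the resulting DataFrame can be registered as a view and queried; continuing the example from the official guide:

// Creates a temporary view using the DataFrame
peopleDF.createOrReplaceTempView("people")

// SQL can be run over a temporary view created using DataFrames
val results = spark.sql("SELECT name FROM people")

// The columns of a row in the result can be accessed by field index or by field name
results.map(attributes => "Name: " + attributes(0)).show()
// +-------------+
// |        value|
// +-------------+
// |Name: Michael|
// |   Name: Andy|
// | Name: Justin|
// +-------------+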