Spark SQL lets Spark execute relational queries expressed in SQL, HiveQL, or Scala. The core of this module is a new kind of RDD, the SchemaRDD. The examples below build a table from a plain text file (first with an explicitly constructed schema, then with a schema generated from a string), from a JSON dataset, and from Hive.
def main(args: Array[String]) {
  // case class Customer(name: String, age: Int, gender: String, address: String)
  // Suppress verbose logging
  Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
  Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
  val sparkConf = new SparkConf().setAppName("customers")
  val sc = new SparkContext(sparkConf)
  val sqlContext = new SQLContext(sc)
  // Build the schema explicitly: a non-nullable name and a nullable age
  val schema =
    StructType(
      StructField("name", StringType, false) ::
      StructField("age", IntegerType, true) :: Nil)
  val r = sc.textFile(args(0))
  // Convert each input line "name,age" into a Row that matches the schema
  val people = r.map(_.split(",")).map(p => Row(p(0), p(1).trim.toInt))
  val dataFrame = sqlContext.createDataFrame(people, schema)
  dataFrame.printSchema()
  dataFrame.registerTempTable("people")
  sqlContext.sql("select * from people where age < 25").collect.foreach(println)
}
def main(args: Array[String]) {
  // Suppress verbose logging
  Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
  Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
  val sparkConf = new SparkConf().setAppName("customers")
  val sc = new SparkContext(sparkConf)
  val sqlContext = new SQLContext(sc)
  // The schema is encoded in a string
  val schemaString = "name age"
  // Generate the schema based on the string of schema
  val schema =
    StructType(
      schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, true)))
  val people = sc.textFile(args(0))
  // Convert records of the RDD (people) to Rows
  val rowRDD = people.map(_.split(",")).map(p => Row(p(0), p(1).trim))
  val dataFrame = sqlContext.createDataFrame(rowRDD, schema)
  dataFrame.printSchema()
  dataFrame.registerTempTable("people")
  // Note: age is stored as a string here, so the comparison below relies on Spark SQL's implicit cast
  sqlContext.sql("select * from people where age < 25").collect.foreach(println)
}
def main(args: Array[String]) {
  // Suppress verbose logging
  Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
  Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
  val sparkConf = new SparkConf().setAppName("customers")
  val sc = new SparkContext(sparkConf)
  val sqlContext = new SQLContext(sc)
  // A JSON dataset is pointed to by path.
  // The path can be either a single text file or a directory storing text files.
  val path = "xrli/people.json"
  // Create a SchemaRDD from the file(s) pointed to by path
  val people = sqlContext.jsonFile(path)
  // The inferred schema can be visualized using the printSchema() method.
  people.printSchema()
  // Register this SchemaRDD as a table.
  people.registerTempTable("people")
  // SQL statements can be run by using the sql methods provided by sqlContext.
  val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
  // Alternatively, a SchemaRDD can be created from an RDD[String] storing one JSON object per string.
  val anotherPeopleRDD = sc.parallelize(
    """{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
  val anotherPeople = sqlContext.jsonRDD(anotherPeopleRDD)
  anotherPeople.printSchema()
  anotherPeople.registerTempTable("anotherPeople")
  sqlContext.sql("SELECT name FROM anotherPeople")
}
val sparkConf = new SparkConf().setAppName("customers")
val sc = new SparkContext(sparkConf)
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
sqlContext.sql("CREATE TABLE IF NOT EXISTS SparkHive (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'xrli/kv1.txt' INTO TABLE SparkHive")
// Queries are expressed in HiveQL
sqlContext.sql("FROM SparkHive SELECT key, value").collect().foreach(println)
There are two ways to turn an existing RDD into a SchemaRDD/DataFrame:
1. Use reflection to infer the schema of an RDD that contains a specific object type (a case class). When you already know the schema while writing your Spark program, this reflection-based approach makes the code more concise and works well.
For example:
case class Person(name: String, age: Int)  // define the case class at top level, outside main; it defines the table schema
import sqlContext.implicits._              // enables .toDF() on an RDD of case classes (Spark 1.3+)
val people = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF()
people.registerTempTable("people")
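Once registered, the reflected table can be queried like the explicitly built ones; a minimal sketch, accessing columns by position on the returned Rows:
val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)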
2. Use a programmatic interface that lets you construct a schema and then apply it to an existing RDD. Although this approach is more verbose, it lets you build SchemaRDDs even when the columns and their types are not known until runtime. This is the approach used in the first two examples above.
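The payoff of the programmatic route is that nothing about the schema has to be fixed at compile time; a minimal sketch, where the (name, type) pairs stand in for values discovered at runtime:
val fields = Seq(("name", StringType), ("age", IntegerType))
val schema = StructType(fields.map { case (n, t) => StructField(n, t, nullable = true) })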
The complete program wraps the main method shown in the first example above in an object, with these imports:
import org.apache.spark._
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.SparkContext._
import org.apache.log4j.{Level, Logger}
object SparkSQL {
  // main(args: Array[String]) as shown in the first example above
}
The input file looks like this:
John,15
HanMM,20
Lixurui,27
Shanxin,22
Output:
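Given these four input lines, the age < 25 filter keeps John (15), HanMM (20), and Shanxin (22), so the collected Rows should print roughly as:
[John,15]
[HanMM,20]
[Shanxin,22]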
The second way of writing it, generating the schema from a string of field names as in the second example above, is perhaps clearer; it uses the same imports and skeleton:
object SparkSQL {
  // main(args: Array[String]) as shown in the second example above
}
Spark SQL can automatically infer the schema of a JSON dataset and load it as a SchemaRDD (replaced by DataFrame in more recent releases). The conversion can be done in two ways:
- jsonFile: loads from a directory of JSON files, where each line of each file is a JSON object
- jsonRDD: loads from an existing RDD whose elements are strings, each containing a JSON object
Note that a file consumed by jsonFile is not a typical JSON file: each line must be self-contained and hold a valid JSON object. As a consequence, a regular multi-line JSON file will usually fail to load.
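As a side note, in Spark 1.4+ both jsonFile and jsonRDD are deprecated in favor of the DataFrameReader API; a minimal sketch using the same path and RDD as the JSON example above:
val people = sqlContext.read.json("xrli/people.json")        // a file or directory of JSON-per-line files
val anotherPeople = sqlContext.read.json(anotherPeopleRDD)   // an RDD[String] holding one JSON object per element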
For example, people.json looks like this:
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}
The JSON program uses the same imports; the schema that printSchema() infers for people.json is shown in the comments:
import org.apache.spark._
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.SparkContext._
import org.apache.log4j.{Level, Logger}
object SparkJSON {
  // main(args: Array[String]) as shown in the JSON example above
  // The inferred schema for people.json:
  // root
  //  |-- age: integer (nullable = true)
  //  |-- name: string (nullable = true)
  // Alternatively, a SchemaRDD can be created for a JSON dataset represented by
  // an RDD[String] storing one JSON object per string.
}
Result: printSchema() prints the inferred schemas shown in the comments above, and the teenagers query should return only Justin (age 19).
Spark SQL can also interoperate with Hive. The interop needs some manual configuration first; see http://lxw1234.com/archives/2015/06/294.htm. After that it can be used directly from code, as in the HiveQL example above; the program skeleton is:
import org.apache.spark._
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.SparkContext._
import org.apache.log4j.{Level, Logger}
object SparkSQLHive {
  // main(args: Array[String]) containing the HiveContext code shown above;
  // HiveContext is referenced by its fully qualified name, so no extra import is needed
}
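Since HiveQL queries come back as ordinary DataFrames, their results can be registered and mixed with other temporary tables; a minimal sketch, assuming the SparkHive table created above (the kv_snapshot name is just illustrative):
val kv = sqlContext.sql("SELECT key, value FROM SparkHive")
kv.registerTempTable("kv_snapshot")
sqlContext.sql("SELECT count(*) FROM kv_snapshot").collect().foreach(println)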