Spark SQL Module

           Version: Spark 1.6.1

        Spark SQL is Spark's module for structured data processing. There are three ways to interact with Spark SQL:

        SQL: Spark provides the spark-sql command-line interface as well as a HiveServer2-style Thrift server. When SQL is issued from within a program, the results come back as DataFrames. It can also interact directly with Hive.

        DataFrames: can be created from an existing RDD, a Hive table, or other data sources.

        Datasets: a new interface added in Spark 1.6, intended to provide better performance. A quick taste is sketched just below.
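
        A minimal Dataset sketch for Spark 1.6 (it assumes an existing SQLContext named sqlContext, as created in the examples below; the values are only an illustration):

import sqlContext.implicits._

// Create a Dataset from a local collection; transformations take typed lambdas
// and still run through Spark SQL's optimized execution engine.
val ds = Seq(1, 2, 3).toDS()
ds.map(_ + 1).collect().foreach(println)   // prints 2, 3, 4

// A DataFrame can be converted to a typed Dataset with as[T], e.g. df.as[Record],
// where Record is a case class such as the one defined in the example below.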


       Spark SQL provides two ways to convert an RDD into a DataFrame.

       The first way is to infer the RDD's schema by reflection; in Scala this is done with a case class.

       

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// scalastyle:off println
package org.apache.spark.examples.sql

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions._

// One method for defining the schema of an RDD is to make a case class with the desired column
// names and types.
// The case class that defines the column names and types.
case class Record(key: Int, value: String)

object RDDRelation {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("RDDRelation")
    val sc = new SparkContext(sparkConf)
    // Create a plain SQLContext; to work with Hive, use new org.apache.spark.sql.hive.HiveContext(sc) instead.
    val sqlContext = new SQLContext(sc)

    // Importing the SQL context gives access to all the SQL functions and implicit conversions.
    import sqlContext.implicits._
    // Create an RDD whose elements are Record objects, then convert it to a DataFrame.
    val df = sc.parallelize((1 to 100).map(i => Record(i, s"val_$i"))).toDF()
    // Any RDD containing case classes can be registered as a table.  The schema of the table is
    // automatically inferred using scala reflection.
    // Register the DataFrame as a temporary table.
    df.registerTempTable("records")

    // Once tables have been registered, you can run SQL queries over them.
    println("Result of SELECT *:")
    sqlContext.sql("SELECT * FROM records").collect().foreach(println)

    // Aggregation queries are also supported.
    val count = sqlContext.sql("SELECT COUNT(*) FROM records").collect().head.getLong(0)
    println(s"COUNT(*): $count")

    // The results of SQL queries are themselves RDDs and support all normal RDD functions.  The
    // items in the RDD are of type Row, which allows you to access each column by ordinal.
    val rddFromSql = sqlContext.sql("SELECT key, value FROM records WHERE key < 10")

    println("Result of RDD.map:")
    rddFromSql.map(row => s"Key: ${row(0)}, Value: ${row(1)}").collect().foreach(println)

    // Queries can also be written using a LINQ-like Scala DSL.
    df.where($"key" === 1).orderBy($"value".asc).select($"key").collect().foreach(println)

    // Write out an RDD as a parquet file.
    df.write.parquet("pair.parquet")

    // Read in parquet file.  Parquet files are self-describing so the schema is preserved.
    val parquetFile = sqlContext.read.parquet("pair.parquet")

    // Queries can be run using the DSL on parquet files just like the original RDD.
    parquetFile.where($"key" === 1).select($"value".as("a")).collect().foreach(println)

    // These files can also be registered as tables.
    parquetFile.registerTempTable("parquetFile")
    sqlContext.sql("SELECT * FROM parquetFile").collect().foreach(println)

    sc.stop()
  }
}
// scalastyle:on println

         The second way is to specify the schema programmatically. It is used when case classes cannot be defined in advance, for example when the columns are only known at runtime.

package org.apache.spark.examples.sql

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object SpecifySchema {

  def main(args: Array[String]) {
    // Create the SparkContext and SQLContext.
    val conf = new SparkConf().setAppName("specify schema")
    val sc = new SparkContext(conf)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)

    // Create an RDD of lines from a text file.
    val people = sc.textFile("examples/src/main/resources/people.txt")

    // The schema is encoded in a string.
    val schemaString = "name age"

    // Import Row.
    import org.apache.spark.sql.Row

    // Import Spark SQL data types.
    import org.apache.spark.sql.types.{StructType, StructField, StringType}

    // Generate the schema based on the schema string: one nullable StringType field per name.
    val schema =
      StructType(
        schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, true)))

    // Convert records of the RDD (people) to Rows.
    val rowRDD = people.map(_.split(",")).map(p => Row(p(0), p(1).trim))

    // Apply the schema to the RDD.
    val peopleDataFrame = sqlContext.createDataFrame(rowRDD, schema)

    // Register the DataFrame as a temporary table.
    peopleDataFrame.registerTempTable("people")

    // SQL statements can be run by using the sql methods provided by sqlContext.
    val results = sqlContext.sql("SELECT name FROM people")

    // The results of SQL queries are DataFrames and support all the normal RDD operations.
    // The columns of a row in the result can be accessed by field index or by field name.
    results.map(t => "Name: " + t(0)).collect().foreach(println)

    sc.stop()
  }
}

      SQL can also be used to operate on Hive tables; the result returned by a SQL query is converted into a DataFrame.

       

val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

// Queries are expressed in HiveQL
sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)
       The CLI interface and the HiveServer2 (Thrift server) interface are used much like Hive's; HiveQL statements can be run against them directly.

       One point to note: Hive can compute the number of reducers automatically, but Spark SQL cannot. Spark SQL sets the shuffle parallelism through the parameter spark.sql.shuffle.partitions (200 by default), so whether you issue SQL directly or write programs, set the shuffle parallelism according to the actual workload, as sketched below.
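
       A minimal sketch of both ways to set it, assuming the SQLContext from the examples above (the value 400 is only an illustration):

// Set the shuffle parallelism programmatically before running queries.
sqlContext.setConf("spark.sql.shuffle.partitions", "400")

// Or with a SET statement, which also works in the spark-sql CLI / beeline.
sqlContext.sql("SET spark.sql.shuffle.partitions=400")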

       Another point: when using the spark-sql CLI, Spark SQL does not automatically recognize Hive's map joins either. For a small table a, you can run CACHE TABLE a to keep the small table in memory and get a map-join-like effect. When programming, call persist() or cache() to keep the small table's RDD or DataFrame in memory, as sketched below.
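
       A minimal sketch of both approaches, reusing the small table name a from above (the DataFrame names smallDf and largeDf are hypothetical):

// SQL side: cache the small table in memory, run the joins, then release it.
sqlContext.sql("CACHE TABLE a")
// ... joins against a now read it from memory ...
sqlContext.sql("UNCACHE TABLE a")

// Programmatic side: cache (or persist) the small DataFrame before joining.
val smallDf = sqlContext.table("a")
smallDf.cache()   // cache() is persist() with the default storage level
// largeDf.join(smallDf, "key") ...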

 

        

          
