Spark: Converting Between RDD, DS, and DF

I. Conversion between RDD and DataFrame

Prepare some test data by converting a local collection into an RDD:

scala> val rdd=sc.makeRDD(List("Mina,19","Andy,30","Michael,29"))
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[10] at makeRDD at <console>:24

Note that an RDD only gains the toDF and toDS methods after import spark.implicits._:

scala> import spark.implicits._
import spark.implicits._
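
In spark-shell the spark session and the sc context already exist; in a standalone application they have to be created before the implicits can be imported. A minimal sketch, assuming a local master and a made-up application name:

import org.apache.spark.sql.SparkSession

// Build the SparkSession yourself when not running inside spark-shell
// ("local[*]" and the app name are only example values)
val spark = SparkSession.builder()
  .appName("rdd-df-ds-demo")
  .master("local[*]")
  .getOrCreate()

val sc = spark.sparkContext   // the SparkContext used by makeRDD above
import spark.implicits._      // enables toDF / toDS on RDDs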

1. RDD to DataFrame

{1} toDF, where the element type of the source RDD must be a Tuple

scala> rdd.map{x=>{val par=x.split(",");(par(0),par(1).toInt)}}.toDF("name","age")
res3: org.apache.spark.sql.DataFrame = [name: string, age: int]

scala> res3.show
+-------+---+
|   name|age|
+-------+---+
|   Mina| 19|
|   Andy| 30|
|Michael| 29|
+-------+---+
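
If toDF is called without column names, the tuple positions simply become the column names; a minimal sketch reusing the same rdd:

// Without explicit column names the tuple fields become the columns _1 and _2
val df1 = rdd.map{x => val par = x.split(","); (par(0), par(1).toInt)}.toDF()
// df1: org.apache.spark.sql.DataFrame = [_1: string, _2: int]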

{2} toDF on an RDD whose element type is a case class

Spark SQL can automatically convert an RDD whose element type is a case class into a DataFrame: the case class defines the table's schema, and its fields become the column names through reflection. Case classes can also contain complex structures such as Seqs or Arrays (a sketch of this follows the example below).

scala> case class Person(name:String,age:Int)
defined class Person

scala> val df = rdd.map{x => val par = x.split(",");Person(par(0),par(1).toInt)}.toDF
df: org.apache.spark.sql.DataFrame = [name: string, age: int]

scala> df.show
+-------+---+
|   name|age|
+-------+---+
|   Mina| 19|
|   Andy| 30|
|Michael| 29|
+-------+---+
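
As noted above, a case class field may itself be a Seq or Array; a minimal sketch with a made-up Dept class showing how such a field surfaces as an array column:

// Hypothetical case class whose members field is a Seq[String]
case class Dept(name: String, members: Seq[String])

val deptDF = sc.makeRDD(Seq(Dept("dev", Seq("Mina", "Andy")))).toDF
deptDF.printSchema
// root
//  |-- name: string (nullable = true)
//  |-- members: array (nullable = true)
//  |    |-- element: string (containsNull = true)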

{3} If the source RDD cannot produce a Tuple or a case class, you can define a StructType yourself

When a case class cannot be defined in advance, a DataFrame can be built in the following three steps:

  1. Create an RDD of Rows from the original RDD;
  2. Create the row-structure information (the schema), represented by a StructType;
  3. Apply the schema through the createDataFrame method provided by SparkSession.

scala> import org.apache.spark.sql._
import org.apache.spark.sql._

scala> import org.apache.spark.sql.types._
import org.apache.spark.sql.types._

// In practice the field names would be generated dynamically by the program, and the field types would then be determined from those names to build the StructType
scala> val schemaString = "name age"
schemaString: String = name age

scala> val field = schemaString.split(" ").map(filename=> filename match{ case "name"=> StructField(filename,StringType,nullable = true); case "age"=>StructField(filename, IntegerType,nullable = true)})
field: Array[org.apache.spark.sql.types.StructField] = Array(StructField(name,StringType,true), StructField(age,IntegerType,true))

scala> val schema = StructType(field)
schema: org.apache.spark.sql.types.StructType = StructType(StructField(name,StringType,true), StructField(age,IntegerType,true))

scala> val rowRDD = rdd.map(_.split(",")).map(attributes => Row(attributes(0), attributes(1).toInt))
rowRDD: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[2] at map at <console>:35

scala> val peopleDF = spark.createDataFrame(rowRDD, schema)
peopleDF: org.apache.spark.sql.DataFrame = [name: string, age: int]

scala> peopleDF.show
+-------+---+
|   name|age|
+-------+---+
|   Mina| 19|
|   Andy| 30|
|Michael| 29|
+-------+---+
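
Going the other way, from a DataFrame back to an RDD, only needs the rdd method; the result is an RDD[Row], so fields are read by name or by position. A minimal sketch using the peopleDF built above:

// DataFrame -> RDD[Row]; read the fields back out with getAs
val rowsBack = peopleDF.rdd
rowsBack.map(row => (row.getAs[String]("name"), row.getAs[Int]("age"))).collect()
// Array((Mina,19), (Andy,30), (Michael,29))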

II. Conversion between RDD and Dataset

1. RDD to Dataset

case class Person(name:String,age:Int)  
import spark.implicits._

scala> val ds = rdd.map{x => val par = x.split(",");Person(par(0),par(1).trim().toInt)}.toDS
ds: org.apache.spark.sql.Dataset[Person] = [name: string, age: int]

scala> ds.show
+-------+---+
|   name|age|
+-------+---+
|   Mina| 19|
|   Andy| 30|
|Michael| 29|
+-------+---+
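
The reverse, Dataset back to RDD, is again just the rdd method, and because the Dataset is typed, the elements keep their Person type. A minimal sketch using the ds built above:

// Dataset[Person] -> RDD[Person]; the elements stay strongly typed
val personRDD = ds.rdd
personRDD.map(p => s"${p.name}:${p.age}").collect()
// Array(Mina:19, Andy:30, Michael:29)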

III. Conversion between DataFrame and Dataset

1. Dataset to DataFrame

scala> ds.toDF
res14: org.apache.spark.sql.DataFrame = [name: string, age: int]

scala> ds.toDF.show
+-------+---+
|   name|age|
+-------+---+
|   Mina| 19|
|   Andy| 30|
|Michael| 29|
+-------+---+

2. DataFrame to Dataset

//	Define the case class
case class Person(name:String,age:Int)
//	Note: the DataFrame's column names and number of columns must correspond one to one with the case class fields
df.as[Person]

scala> df.as[Person]
res16: org.apache.spark.sql.Dataset[Person] = [name: string, age: int]
scala> df.as[Person].show
+-------+---+
|   name|age|
+-------+---+
|   Mina| 19|
|   Andy| 30|
|Michael| 29|
+-------+---+
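
Putting the three APIs together, a minimal end-to-end sketch of the round trip, reusing the rdd and Person definitions from above:

// Round trip: RDD -> DataFrame -> Dataset -> DataFrame -> RDD
val peopleDF2 = rdd.map{x => val par = x.split(","); (par(0), par(1).toInt)}.toDF("name", "age")
val peopleDS  = peopleDF2.as[Person]   // DataFrame -> Dataset[Person]
val backToDF  = peopleDS.toDF          // Dataset   -> DataFrame
val backToRDD = backToDF.rdd           // DataFrame -> RDD[Row]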