RDD <==> DataFrame
reflection
infer schema: use reflection to infer the schema of an RDD
RDD: use this approach when the schema is known in advance
case class benefit: under the hood the compiler-generated apply method is called, so no new keyword is needed
in one sentence: define a case class up front, read the file into an RDD, then map each line's fields onto the case class
import spark.implicits._   // needed for toDF (auto-imported in spark-shell)
case class Person(name: String, age: Int)
val personDF = spark.sparkContext.textFile("file:///home/hadoop/app/spark-2.2.0-bin-2.6.0-cdh5.7.0/examples/src/main/resources/people.txt").map(x => x.split(",")).map(x => Person(x(0), x(1).trim.toInt)).toDF()
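Quick check (a minimal usage sketch; the view name is arbitrary):
personDF.printSchema()
personDF.createOrReplaceTempView("people")
spark.sql("select name, age from people where age > 19").show()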
programmatically specifying the schema (when the columns are only known at runtime):
schemaString ==>
StructField("name", StringType, nullable = true)
StructField("age", StringType, nullable = true)
.....
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType}
val personRDD = spark.sparkContext.textFile("file:///home/hadoop/app/spark-2.2.0-bin-2.6.0-cdh5.7.0/examples/src/main/resources/people.txt").map(x => x.split(",")).map(attributes => Row(attributes(0), attributes(1).trim))
val schemaString = "name age"
val fields = schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, nullable = true))
val schema = StructType(fields)
val df = spark.createDataFrame(personRDD, schema)
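Check the result (both columns come back as strings, since StringType was used for both fields):
df.printSchema()
df.show()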
case class Student(id: String, name: String, phone: String, email: String)   // field names assumed from the student.data layout
val students1 = spark.sparkContext.textFile("file:///home/hadoop/data/student.data").map(x => x.split("\\|")).map(x => Student(x(0),x(1),x(2),x(3))).toDF()
val students2 = spark.sparkContext.textFile("file:///home/hadoop/data/student.data").map(x => x.split("\\|")).map(x => Student(x(0),x(1),x(2),x(3))).toDF()
as: alias a DataFrame so identical column names can be told apart in a join (see the sketch below)
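A minimal sketch of such an aliased self-join (assuming the Student fields above):
val s1 = students1.as("s1")
val s2 = students2.as("s2")
s1.join(s2, $"s1.id" === $"s2.id").select($"s1.name", $"s2.email").show()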
External DataSource API (external data sources)
load ==> compute ==> sink
1) file systems (HDFS, S3, local files, ...)
2) storage systems (HBase, RDBMS via JDBC, ...)
introduced in Spark 1.2
json
standard form: spark.read.format("json").load("")
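For example, with the people.json that ships with the Spark examples (path assumed to follow the same layout as above):
val jsonDF = spark.read.format("json").load("file:///home/hadoop/app/spark-2.2.0-bin-2.6.0-cdh5.7.0/examples/src/main/resources/people.json")
jsonDF.show()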
val df = spark.read.format("parquet").load("file:///home/hadoop/app/spark-2.2.0-bin-2.6.0-cdh5.7.0/examples/src/main/resources/users.parquet")
df.show
df.write.format("").save("")
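Concrete write example for the parquet DataFrame above (output path assumed):
df.select("name", "favorite_color").write.format("json").save("file:///home/hadoop/tmp/users_json")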
third-party data source packages: https://spark-packages.org/
val jdbcDF = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306").option("dbtable", "ruozedata_basic02.TBLS").option("user", "root").option("password", "root").load()
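Note: the MySQL JDBC driver jar has to be on the classpath, e.g. (jar path and version assumed):
spark-shell --jars ~/software/mysql-connector-java-5.1.27-bin.jar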
exercise: emp ==> hive
dept ==> mysql
compute: deptno, dname, count(1) (join emp with dept, group by department)
==> write the result as json to hdfs
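A hedged sketch of this exercise (database name, connection details, and output path are all assumptions):
val emp = spark.table("emp")   // assumes a Hive-enabled SparkSession and an emp table in Hive
val dept = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306").option("dbtable", "ruozedata.dept").option("user", "root").option("password", "root").load()
val result = emp.join(dept, "deptno").groupBy("deptno", "dname").count()
result.write.format("json").save("hdfs://localhost:8020/tmp/dept_count")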