Import the required libraries and create a SparkSession object
from pyspark import SparkConf
from pyspark.sql import SparkSession, Row
from pyspark.sql.types import *
spark = SparkSession.builder.config(conf=SparkConf()).getOrCreate()
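For reference, the builder can also set the application name and master URL explicitly. A minimal sketch, assuming a local run (the name "df_demo" and "local[*]" are illustrative choices, not from the original):
from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .appName("df_demo") \
    .master("local[*]") \
    .getOrCreate()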
Method 1: Infer the RDD schema by reflection
# Read people.txt, split each line on commas, and wrap each record in a Row;
# Spark then infers the schema (name: string, age: int) from the Row fields
people = spark.sparkContext.textFile("/home/service/spark-2.4.5-bin-hadoop2.7/examples/src/main/resources/people.txt") \
    .map(lambda line: line.split(",")) \
    .map(lambda p: Row(name=p[0], age=int(p[1])))
schemapeople = spark.createDataFrame(people)
# Register the DataFrame as a temporary view
schemapeople.createOrReplaceTempView("people")
personDF = spark.sql("select name,age from people where age>20")
personDF.show()
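# Equivalently, the same query can be expressed with the DataFrame API instead of SQL
# (an alternative sketch, not part of the original walkthrough; personDF_api is an illustrative name)
personDF_api = schemapeople.filter(schemapeople.age > 20).select("name", "age")
personDF_api.show()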
personRDD = personDF.rdd.map(lambda p:"Name:"+p.name+","+"Age:"+str(p.age))
print(personRDD.collect())
The code above uses the SparkSession object spark to:
I. Build an RDD of Row objects and convert it into a DataFrame, letting Spark infer the schema from the Row fields by reflection (a standalone sketch of this mechanism follows the result below)
II. Register that DataFrame as a temporary view
III. Query the view with a SQL statement via spark.sql, which returns a new DataFrame that is then displayed with show()
IV. Map over the DataFrame's underlying RDD and print the result with collect()
Result: with the people.txt shipped with Spark (Michael 29, Andy 30, Justin 19), show() lists the two people older than 20, and collect() prints ['Name:Michael,Age:29', 'Name:Andy,Age:30'].
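To see the reflection-based inference in isolation, here is a minimal self-contained sketch that skips the file read (the in-memory records are illustrative):
# Build an RDD of Row objects from an in-memory list; Spark infers
# name as string and age as long from the Row field values
rows = spark.sparkContext.parallelize([("Michael", 29), ("Andy", 30), ("Justin", 19)])
df = spark.createDataFrame(rows.map(lambda t: Row(name=t[0], age=t[1])))
df.printSchema()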
Method 2: Define the DataFrame schema programmatically
# Build the schema (the table header)
schemaString = "name age"
fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split(" ")]
schema = StructType(fields)
# Build the table contents
lines = spark.sparkContext.textFile("/home/service/spark-2.4.5-bin-hadoop2.7/examples/src/main/resources/people.txt")
parts = lines.map(lambda x: x.split(","))
# Attach the schema (header) to the data rows
people = parts.map(lambda p: Row(p[0], p[1].strip()))
schemapeoples = spark.createDataFrame(people,schema)
# The view must be registered before it can be queried with SQL
schemapeoples.createOrReplaceTempView("peoples")
results = spark.sql("select name,age from peoples")
results.show()
I. Build the schema (the table header)
II. Build the table contents
III. Combine the header with the data
IV. Register a temporary view and query it (a typed-schema variant is sketched after the result below)
Result: all three rows (Michael 29, Andy 30, Justin 19) come back, with both name and age as strings, since every field in schemaString was declared as StringType.
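Because schemaString declares every field as StringType, age in Method 2 is a string and cannot be filtered numerically. A minimal variant sketch, assuming you want age typed as an integer (the IntegerType field and the int() cast are the only changes; typed_schema and typed_df are illustrative names):
# Declare age as IntegerType instead of StringType
typed_schema = StructType([
    StructField("name", StringType(), True),
    StructField("age", IntegerType(), True),
])
# The Row values must match the declared types, hence the int() cast
typed_people = parts.map(lambda p: Row(p[0], int(p[1].strip())))
typed_df = spark.createDataFrame(typed_people, typed_schema)
typed_df.printSchema()  # age: integer, so filters like age > 20 now compare numbers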