Convert a DataFrame to CSV, save the files to HDFS, then download them to the local filesystem
dfResult = spark.sql("select * from tmp.lanfz_dirty_imei")
dfResult.write.format("csv").option("header","true").mode("overwrite").save("/user/lanfz/dirty_imei/")
Note: the output directory may contain multiple part files.
Two ways to merge them into a single file and fetch it locally:
Option 1 (suitable for larger datasets)
dfResult.write.format("csv").option("header","true").mode("overwrite").save("/user/lanfz/dirty_imei/")
hadoop fs -getmerge /user/lanfz/dirty_imei/* dirty_imei.csv
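Because the CSV was written with header=true, every part file carries its own header row, so `getmerge` will leave repeated header lines in dirty_imei.csv. A minimal local clean-up sketch in plain Python (the column names here are illustrative, not from the actual table):

```python
# Drop the repeated header rows that `hadoop fs -getmerge` leaves behind
# when each part file was written with header=true.
def strip_repeated_headers(lines):
    """Keep the first header line; drop later lines identical to it.

    Caveat: a data row that is byte-identical to the header would also
    be dropped, which is acceptable for typical header text.
    """
    if not lines:
        return []
    header = lines[0]
    return [header] + [ln for ln in lines[1:] if ln != header]

if __name__ == "__main__":
    # Simulated content of dirty_imei.csv after merging two part files
    merged = ["imei,status", "123,dirty", "imei,status", "456,dirty"]
    print(strip_repeated_headers(merged))
    # → ['imei,status', '123,dirty', '456,dirty']
```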
Option 2 (suitable for smaller datasets)
dfResult.repartition(1).write.format("csv").option("header","true").mode("overwrite").save("/user/lanfz/dirty_imei/")
hadoop fs -get /user/lanfz/dirty_imei/part-*.csv dirty_imei.csv
CSV to DataFrame
# The default delimiter is ","
df = spark.read.format("csv").load("/user/data.csv", header=True, inferSchema=True)
# For other delimiters, e.g. tab (\t) or space
df = spark.read.format("csv").option("delimiter", "\t").load("/user/data.csv", header=True, inferSchema=True)
df = spark.read.format("csv").option("delimiter", " ").load("/user/data.csv", header=True, inferSchema=True)
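If you are unsure which delimiter a file uses, a quick local check on a small sample with Python's stdlib `csv.Sniffer` can tell you what to pass to `option("delimiter", ...)`. A sketch (the sample string below is made up for illustration):

```python
import csv

def guess_delimiter(sample, candidates=",\t ;|"):
    """Guess the delimiter of a CSV text sample.

    Restricting the Sniffer to a set of likely candidate characters
    makes the guess more reliable on short samples.
    """
    return csv.Sniffer().sniff(sample, delimiters=candidates).delimiter

if __name__ == "__main__":
    sample = "imei\tstatus\n123\tdirty\n456\tclean\n"
    print(repr(guess_delimiter(sample)))  # → '\t'
```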