1. textFile
When we read a text file into an RDD, each line of the input becomes one element of the RDD. We can also read several whole text files at once into a pair RDD, where each key is a file name and the corresponding value is that file's content.
scala> val input = sc.textFile("./README.md")
input: org.apache.spark.rdd.RDD[String]
= ./README.md MapPartitionsRDD[40] at textFile at <console>:27
If a directory is passed, all files under that directory are read into the RDD. File paths also support wildcards.
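sc.textFile accepts glob patterns such as "/data/*.txt". The matching semantics resemble java.nio's "glob" PathMatcher; the sketch below illustrates them on hypothetical paths in plain Scala, with no Spark cluster needed:

```scala
import java.nio.file.{FileSystems, Paths}

// Hypothetical glob pattern like one you might pass to sc.textFile.
// "*" matches within one directory level, so /data/sub/c.txt is excluded.
val matcher = FileSystems.getDefault.getPathMatcher("glob:/data/*.txt")
val paths   = Seq("/data/a.txt", "/data/b.log", "/data/sub/c.txt")
val matched = paths.filter(p => matcher.matches(Paths.get(p)))
// matched contains only "/data/a.txt"
```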
2. wholeTextFiles
wholeTextFiles() reads large numbers of small files relatively efficiently; for large files the benefit is much smaller.
scala> val inputw = sc.wholeTextFiles("./README.md")
inputw: org.apache.spark.rdd.RDD[(String, String)]
= ./README.md MapPartitionsRDD[42] at wholeTextFiles at <console>:27
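Because wholeTextFiles yields (fileName, fileContent) pairs, a common pattern is to map over the values, for example to compute a per-file line count. The sketch below simulates that transformation on an in-memory Scala collection (hypothetical file names; the equivalent on the pair RDD would be inputw.mapValues(_.split("\n").length)):

```scala
// Simulated result of sc.wholeTextFiles: (fileName, fileContent) pairs.
val files = Seq(
  ("file:/data/a.txt", "line1\nline2\nline3"),
  ("file:/data/b.txt", "only one line")
)

// Per-file line counts, the same shape of computation you would
// express on the pair RDD with mapValues.
val lineCounts = files.map { case (name, content) =>
  (name, content.split("\n").length)
}.toMap
```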
3. saveAsTextFile
Spark writes text output with saveAsTextFile(). The method takes a path and writes the contents of the RDD to files under that path. Spark treats the path as a directory and writes multiple files into it, which is what allows multiple nodes to write their output in parallel.
scala> val readme = sc.textFile("./README.md")
readme: org.apache.spark.rdd.RDD[String]
= ./README.md MapPartitionsRDD[1] at textFile at <console>:24
scala> readme.collect
res0: Array[String] = Array(# Apache Spark, "", Spark is a fast and general cluster computing system for Big Data. It provides, high-level APIs in Scala, Java, Python, and R, and an optimized engine that, supports general computation graphs for data analysis. It also supports a, rich set of higher-level tools including Spark SQL for SQL and DataFrames,, MLlib for machine learning, GraphX for graph processing,, and Spark Streaming for stream processing., "", <http://spark.apache.org/>, "", "", ## Online Documentation, "", You can find the latest Spark documentation, including a programming, guide, on the [project web page](http://spark.apache.org/documentation.html)., This README file only contains basic setup instructions., "", ## Building Spark, "", Spark is built using [Apache Maven](...
scala> readme.saveAsTextFile("hdfs://hadoop102:9000/rdtest")
scala>
Inspect the output:
[yinggu@hadoop102 hadoop-2.8.2]$ bin/hadoop fs -cat /rdtest/p*
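The p* glob in the hadoop fs command above matches Spark's part files: saveAsTextFile writes one part-NNNNN file per partition of the RDD. A plain-Scala sketch of that naming scheme (no Spark needed; the partition count is an assumption for illustration):

```scala
// saveAsTextFile treats its argument as a directory and writes one
// part file per partition: part-00000, part-00001, ...
// Simulate the names Spark would create for an RDD with 4 partitions.
val numPartitions = 4
val partFiles = (0 until numPartitions).map(i => f"part-$i%05d")
```

To reduce the number of output files, you can repartition the RDD (for example with coalesce) before calling saveAsTextFile.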