An object file stores objects in serialized form, using Java's serialization mechanism.
You can read an object file by passing a path to objectFile[K, V](), which returns the corresponding RDD, and you can write one by calling saveAsObjectFile(). Because the data is serialized, the element type must be specified explicitly when reading.
scala> val data = sc.parallelize(List((2,"aa"),(3,"bb"),(4,"cc"),(5,"dd"),(6,"ee")))
data: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[7] at parallelize at <console>:24
scala> data.saveAsObjectFile("hdfs://hadoop102:9000/objfile")
scala> import org.apache.spark.rdd._
import org.apache.spark.rdd._
scala> val objrdd: RDD[(Int, String)] = sc.objectFile[(Int, String)]("hdfs://hadoop102:9000/objfile/p*")
objrdd: org.apache.spark.rdd.RDD[(Int, String)] = MapPartitionsRDD[15] at objectFile at <console>:27
scala> objrdd.collect
res4: Array[(Int, String)] = Array((2,aa), (3,bb), (4,cc), (5,dd), (6,ee))
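For reference, the same round trip can also be written as a standalone application instead of in the shell. The sketch below is only an illustration: the app name, local master, and the file:///tmp/objfile output path are placeholders, not values from the example above.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object ObjectFileExample {
  def main(args: Array[String]): Unit = {
    // Local master and output path are placeholders for illustration.
    val conf = new SparkConf().setAppName("ObjectFileExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    val data = sc.parallelize(List((2, "aa"), (3, "bb"), (4, "cc"), (5, "dd"), (6, "ee")))

    // Each element is serialized with Java serialization before being written out.
    data.saveAsObjectFile("file:///tmp/objfile")

    // The type parameter must be given explicitly when reading back,
    // because the file itself only contains serialized bytes.
    val objrdd: RDD[(Int, String)] = sc.objectFile[(Int, String)]("file:///tmp/objfile/p*")
    objrdd.collect().foreach(println)

    sc.stop()
  }
}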