Transformation operations:
0. Reference: http://homepage.cs.latrobe.edu.au/zhe/ZhenHeSparkRDDAPIExamples.html#intersection
$> spark-shell --master spark://master:7077
Web UI: http://master:8080/
1. map, flatMap, distinct
map: transforms every element of the RDD through the given function, producing one new element per input element.
Input and output partitions correspond one to one: however many input partitions there are, the output has the same number.
flatMap: like map, but the results are flattened so that all elements end up in a single collection.
distinct: removes duplicate elements from the RDD.
Note: when flatMap is applied to an RDD[String] directly (rather than RDD[Array[String]]), each String is itself treated as a sequence of characters (see the Array[Char] result below).
scala> val rdd = sc.textFile("/input/input1.txt")
rdd: org.apache.spark.rdd.RDD[String] = /input/input1.txt MapPartitionsRDD[1] at textFile at <console>:24
scala> val rdd1 = rdd.map(x=>x.split(" "))
rdd1: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[2] at map at <console>:26
scala> rdd1.collect
res0: Array[Array[String]] = Array(Array(hello, world), Array(how, are, you?), Array(ni, hao), Array(hello, tom))
scala> val rdd2 = rdd1.flatMap(x=>x)
rdd2: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[3] at flatMap at <console>:28
scala> rdd2.collect
res1: Array[String] = Array(hello, world, how, are, you?, ni, hao, hello, tom)
scala> rdd2.flatMap(x=>x).collect
res3: Array[Char] = Array(h, e, l, l, o, w, o, r, l, d, h, o, w, a, r, e, y, o, u, ?, n, i, h, a, o, h, e, l, l, o, t, o, m)
scala> val rdd3 = rdd2.distinct
rdd3: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[7] at distinct at <console>:30
scala> rdd3.collect
res4: Array[String] = Array(are, tom, how, you?, hello, hao, world, ni)
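The same three operators can be tried without the HDFS file; a minimal sketch using an in-memory collection (the sample lines here are made up):
scala> val lines = sc.parallelize(List("hello world", "hello tom"))
scala> lines.map(_.split(" ")).collect               // Array(Array(hello, world), Array(hello, tom))
scala> lines.flatMap(_.split(" ")).collect           // Array(hello, world, hello, tom)
scala> lines.flatMap(_.split(" ")).distinct.collect  // duplicates removed; order may vary after the shuffle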
2. coalesce and repartition:
coalesce changes the number of partitions of an RDD; repartition repartitions it.
coalesce: changes the RDD's partition count and produces a new RDD.
It takes two parameters: the target partition count, and a Boolean shuffle flag that defaults to false.
Shrinking below the original partition count works with shuffle = false;
growing beyond it requires shuffle = true (with false the partition count simply stays unchanged).
Typical use: after filter or similar pruning operations, the data left in each partition shrinks sharply, so repartitioning is worth considering.
Check the RDD's current (default) partition count:
scala> rdd.partitions.size
res4: Int = 2
The default is 2 partitions; shrinking works (shuffle stays false). Change the partition count and produce a new RDD:
scala> val rdd4 = rdd.coalesce(1)
rdd4: org.apache.spark.rdd.RDD[String] = CoalescedRDD[8] at coalesce at <console>:26
scala> rdd4.partitions.size
res10: Int = 1
Growing beyond the default 2 partitions without a shuffle does NOT work:
scala> val rdd5 = rdd.coalesce(5)
rdd5: org.apache.spark.rdd.RDD[String] = CoalescedRDD[9] at coalesce at <console>:26
scala> rdd5.partitions.size
res12: Int = 2
Growing beyond the default 2 partitions does work once the shuffle argument is set to true:
scala> val rdd5 = rdd.coalesce(5,true)
rdd5: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[13] at coalesce at <console>:26
scala> rdd5.partitions.size
res13: Int = 5
repartition can both grow and shrink the partition count:
scala> val rdd6 = rdd5.repartition(8)
rdd6: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[11] at repartition at <console>:34
scala> rdd6.partitions.size
res6: Int = 8
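Internally, repartition(n) is just coalesce(n, shuffle = true), which is why it can grow the partition count while a plain coalesce cannot.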
*******Changing the number of partitions changes the number of tasks*******
1) textFile can set the partition count at load time; to change it after the file has been loaded, use either of the two methods above.
2) Typical scenario: after business-rule cleansing the data shrinks, and glom reveals partitions that are now empty; repartitioning solves this (see the sketch below).
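A minimal sketch of note 2), using a small made-up dataset: filter empties most partitions, glom makes that visible, and coalesce packs the survivors together.
scala> val raw = sc.parallelize(1 to 10, 5)
scala> raw.filter(_ > 8).glom.collect               // Array(Array(), Array(), Array(), Array(), Array(9, 10)) -- four empty partitions
scala> raw.filter(_ > 8).coalesce(1).glom.collect   // Array(Array(9, 10)) -- one full partition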
3. randomSplit:
def randomSplit(weights: Array[Double], seed: Long = Utils.random.nextLong): Array[RDD[T]]
Description: randomly distributes the RDD's elements according to the given weights and returns an array of RDDs, one per weight.
Example application: total ordering, as in a Hadoop total sort.
scala> val rdd = sc.parallelize(List(1,2,3,4,5,6,7))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24
// 0.7 + 0.1 + 0.2 = 1: the 7 elements are randomly distributed according to these weights (if the weights do not sum to 1, Spark normalizes them)
scala> val rdd1 = rdd.randomSplit(Array(0.7,0.1,0.2))
rdd1: Array[org.apache.spark.rdd.RDD[Int]] = Array(MapPartitionsRDD[1] at randomSplit at <console>:26, MapPartitionsRDD[2] at randomSplit at <console>:26, MapPartitionsRDD[3] at randomSplit at <console>:26)
scala> rdd1(0).collect
res0: Array[Int] = Array(1, 5)
scala> rdd1(1).collect
res1: Array[Int] = Array()
scala> rdd1(2).collect
res2: Array[Int] = Array(2, 3, 4, 6, 7)
The RDD is split by weight; with only 7 elements the actual split follows the weights only roughly, which is why the second RDD above came out empty.
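A common everyday use is a reproducible train/test split; a sketch, where the fixed seed (an arbitrary value chosen here) makes the split repeatable across runs:
scala> val data = sc.parallelize(1 to 100)
scala> val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)
scala> train.count + test.count   // always 100 -- every element lands in exactly one split
scala> train.count                // roughly 80, but only approximately, since assignment is random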
4. glom: returns the data items of each partition as an array, turning an RDD[T] into an RDD[Array[T]].
scala> val a = sc.parallelize(1 to 100, 3)
scala> a.glom.collect
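glom is also a quick way to check how evenly data is spread across partitions; a small sketch on the RDD above:
scala> a.glom.map(_.length).collect   // element count per partition, e.g. Array(33, 33, 34)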
5. union: set union; concatenates two RDDs without removing duplicates.
scala> val rdd1 = sc.parallelize(1 to 5)
scala> val rdd2 = sc.parallelize(5 to 10)
scala> val rdd3 = rdd1.union(rdd2)
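Because union keeps duplicates, 5 (the overlap of the two ranges) appears twice; chain distinct for a true set union. A quick check:
scala> rdd3.collect            // Array(1, 2, 3, 4, 5, 5, 6, 7, 8, 9, 10)
scala> rdd3.distinct.collect   // duplicates removed; order may vary after the shuffle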
6. subtract: set difference; removes from the first RDD every element that also appears in the second.
scala> val a = sc.parallelize(1 to 9, 3)
scala> val b = sc.parallelize(1 to 3, 3)
scala> val c = a.subtract(b)
scala> c.collect
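Only the elements of a that are missing from b remain, so c contains 4 through 9; as with other shuffling operators, the order returned by collect depends on the partitioning.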