The mapPartitionsWithIndex operator:
Similar to mapPartitions, but the function also receives the index of the partition it is processing: func takes an extra integer parameter for the partition index, so when running on an RDD of type T, func must have the type (Int, Iterator[T]) => Iterator[U].
The meaning of mapPartitionsWithIndex's second parameter, preservesPartitioning (boolean, default false):
preservesPartitioning declares whether the function preserves the parent RDD's partitioner.
The flag is purely an optimization hint: set it to true when your function does not change how the data is partitioned (for key-value RDDs, when it does not change the keys), so Spark can keep the parent's partitioner and skip unnecessary shuffles in downstream stages;
leave it at false (the default) when the function may alter the partitioning.
Spark cannot infer this on its own: if you do not tell it that partitioning is preserved, it cannot know your intent and the optimization is lost.
On how the partition count is decided: the explicit numSlices argument to parallelize has the highest priority, then spark.default.parallelism set via conf.set, and finally the thread count in the master URL local[N].
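That precedence can be sketched as a toy model in plain Java (this is not Spark API; the class and method names are hypothetical, and the method simply encodes the fallback order stated above):

```java
// Toy model (not Spark API): encodes the partition-count precedence described above.
// parallelize's explicit numSlices > spark.default.parallelism (conf.set) > local[N] threads.
public class PartitionPrecedence {
    static int numSlices(Integer explicitSlices, Integer defaultParallelism, int localThreads) {
        if (explicitSlices != null) return explicitSlices;         // parallelize(data, n)
        if (defaultParallelism != null) return defaultParallelism; // conf.set("spark.default.parallelism", ...)
        return localThreads;                                       // master URL "local[N]"
    }

    public static void main(String[] args) {
        System.out.println(numSlices(2, 3, 4));       // explicit argument wins: 2
        System.out.println(numSlices(null, 3, 4));    // conf.set wins: 3
        System.out.println(numSlices(null, null, 4)); // local[N] is the fallback: 4
    }
}
```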
// Build a list and turn it into an RDD with 2 partitions.
List<String> names = Arrays.asList("w1","w2","w3","w4","w5","W6","W7","w8","w9","w10","w11","w12");
JavaRDD<String> nameRDD = javaSparkContext.parallelize(names, 2);

// Tag every element with the index of the partition it lives in.
// The second argument is preservesPartitioning.
JavaRDD<String> withIndexRdd = nameRDD.mapPartitionsWithIndex(
        new Function2<Integer, Iterator<String>, Iterator<String>>() {
            @Override
            public Iterator<String> call(Integer index, Iterator<String> stringIterator) throws Exception {
                List<String> nameList = new ArrayList<>();
                while (stringIterator.hasNext()) {
                    nameList.add(index + ":" + stringIterator.next());
                }
                return nameList.iterator();
            }
        }, true);
System.out.println(withIndexRdd.collect());

// Change the number of partitions of the RDD (repartition triggers a shuffle).
JavaRDD<String> repartitionRDD = withIndexRdd.repartition(4);
System.err.println(repartitionRDD.partitions().size()); // 4
repartitionRDD.foreach(new VoidFunction<String>() {
    @Override
    public void call(String s) throws Exception {
        System.err.println("mapPartitionsWithIndex:" + s);
    }
});
The result of withIndexRdd, where the prefix 0 marks the first partition and 1 the second:
0:w1, 0:w2, 0:w3, 0:w4, 0:w5, 0:W6, 1:W7, 1:w8, 1:w9, 1:w10, 1:w11, 1:w12
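The indexing behavior above can be reproduced without a Spark cluster. The plain-Java sketch below simulates how parallelize slices a 12-element list into 2 contiguous partitions and how the call() above prefixes each element with its partition index (PartitionSim and its methods are hypothetical names for illustration, not Spark API; the slicing rule assumed here gives partition i the range [i*n/numSlices, (i+1)*n/numSlices), which matches the 6/6 split shown above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PartitionSim {
    // Split a list into numSlices contiguous partitions:
    // partition i holds the index range [i*n/numSlices, (i+1)*n/numSlices).
    static <T> List<List<T>> slice(List<T> data, int numSlices) {
        List<List<T>> partitions = new ArrayList<>();
        int n = data.size();
        for (int i = 0; i < numSlices; i++) {
            int start = i * n / numSlices;
            int end = (i + 1) * n / numSlices;
            partitions.add(new ArrayList<>(data.subList(start, end)));
        }
        return partitions;
    }

    // Apply the same "index:element" tagging as the call() above,
    // once per partition, collecting results in partition order.
    static List<String> mapPartitionsWithIndex(List<List<String>> partitions) {
        List<String> out = new ArrayList<>();
        for (int index = 0; index < partitions.size(); index++) {
            for (String s : partitions.get(index)) {
                out.add(index + ":" + s);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("w1","w2","w3","w4","w5","W6",
                                           "W7","w8","w9","w10","w11","w12");
        // Prints [0:w1, 0:w2, 0:w3, 0:w4, 0:w5, 0:W6, 1:W7, 1:w8, 1:w9, 1:w10, 1:w11, 1:w12]
        System.out.println(mapPartitionsWithIndex(slice(names, 2)));
    }
}
```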