In the earlier discussion of RDD theory, one of the points made was that an RDD is composed of a series of partitions, so the RDD API also provides a set of partition-related operators. This time we will go over the partition iterator, repartitioning, and operators such as countByKey and groupByKey.
package com.debug;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.VoidFunction;

public class UseRDD04 {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        conf.setMaster("local");
        conf.setAppName("WordCountApp");
        JavaSparkContext sc = new JavaSparkContext(conf);

        List<String> arr = Arrays.asList("上海", "北京", "昆明", "深圳", "长沙", "合肥");
        // Create the RDD with 3 partitions
        JavaRDD<String> rdd1 = sc.parallelize(arr, 3);

        // mapPartitionsWithIndex iterates over each partition once and exposes
        // the partition index, so every element can be tagged with the
        // partition it lives in
        JavaRDD<String> rdd2 = rdd1.mapPartitionsWithIndex(new Function2<Integer, Iterator<String>, Iterator<String>>() {
            public Iterator<String> call(Integer index, Iterator<String> iter) throws Exception {
                List<String> result = new ArrayList<>();
                while (iter.hasNext()) {
                    result.add(iter.next() + "-" + index);
                }
                return result.iterator();
            }
        }, true);

        // coalesce(2, false) shrinks the RDD from 3 partitions to 2 without a shuffle
        JavaRDD<String> rdd3 = rdd2.coalesce(2, false);
        rdd3.foreach(new VoidFunction<String>() {
            public void call(String city) throws Exception {
                System.out.println(city);
            }
        });

        sc.stop();
    }
}
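To verify what coalesce() actually did, one can print the partition count before and after. A minimal sketch, assuming Spark 1.6 or later where getNumPartitions() is available on JavaRDD (older versions can use rdd.partitions().size() instead):

// Inside main() of UseRDD04, after rdd3 has been created
System.out.println("rdd1 partitions: " + rdd1.getNumPartitions()); // 3
System.out.println("rdd3 partitions: " + rdd3.getNumPartitions()); // 2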
The point to understand here is the relationship between the coalesce() and repartition() methods: coalesce() takes a shuffle parameter that defaults to false, and repartition() is simply coalesce() with shuffle set to true. A shuffle is the process of redistributing data across partitions, which involves writing data to disk and reading it back; a wide dependency incurs a shuffle, while a narrow dependency does not. If anything is unclear, the following article may help:
https://blog.csdn.net/lzq20115395/article/details/80602071
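Since repartition(numPartitions) is implemented in Spark as coalesce(numPartitions, shuffle = true), the two calls below are equivalent. A minimal sketch continuing from rdd2 in UseRDD04 above:

// Both produce the same result: growing to 4 partitions requires a shuffle
JavaRDD<String> viaRepartition = rdd2.repartition(4);
JavaRDD<String> viaCoalesce = rdd2.coalesce(4, true);
// Note: coalesce(4) without shuffle cannot increase the partition count;
// the RDD would simply keep its current number of partitions.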
Next, two more operators are covered, as the final piece of code in this operator write-up.
package com.debug;

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.VoidFunction;

import scala.Tuple2;

public class UseRDD06 {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        conf.setMaster("local");
        conf.setAppName("rdd06");
        JavaSparkContext sc = new JavaSparkContext(conf);

        List<Tuple2<String, Integer>> arr = Arrays.asList(
                new Tuple2<String, Integer>("u1", 20),
                new Tuple2<String, Integer>("u1", 15),
                new Tuple2<String, Integer>("u2", 18),
                new Tuple2<String, Integer>("u3", 20),
                new Tuple2<String, Integer>("u4", 20),
                new Tuple2<String, Integer>("u5", 100)
        );
        JavaPairRDD<String, Integer> rdd = sc.parallelizePairs(arr);

        // countByKey is an action: it returns the number of elements per key
        // to the driver (newer Spark versions declare the result as Map<String, Long>)
        Map<String, Object> m = rdd.countByKey();
        Set<Entry<String, Object>> se = m.entrySet();
        for (Entry<String, Object> en : se) {
            String key = en.getKey();
            String value = en.getValue().toString();
            System.out.println(key + "," + value);
        }

        // groupByKey only groups the values of each key; no aggregation is done
        JavaPairRDD<String, Iterable<Integer>> rdd2 = rdd.groupByKey();
        rdd2.foreach(new VoidFunction<Tuple2<String, Iterable<Integer>>>() {
            public void call(Tuple2<String, Iterable<Integer>> tup) throws Exception {
                System.out.println(tup);
            }
        });

        sc.stop();
    }
}
In my view, the main difference between reduceByKey and groupByKey is that reduceByKey also aggregates the grouped values with the supplied function (combining locally within each partition before the shuffle), whereas groupByKey only groups the values and performs no computation, as the sketch below illustrates.
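A minimal sketch of reduceByKey, reusing the rdd pair RDD from UseRDD06 above (it additionally needs the org.apache.spark.api.java.function.Function2 import):

// Sums the values of each key; unlike groupByKey, the function is applied
// within each partition first, so less data crosses the shuffle
JavaPairRDD<String, Integer> sums = rdd.reduceByKey(
        new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer a, Integer b) throws Exception {
                return a + b;
            }
        });
sums.foreach(new VoidFunction<Tuple2<String, Integer>>() {
    public void call(Tuple2<String, Integer> tup) throws Exception {
        System.out.println(tup); // e.g. (u1,35)
    }
});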