1. reduceByKey
Takes a function and reduces the values that share the same key, similar to Scala's reduce operation.
Scala version
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setMaster("local[*]").setAppName("reduceByKeyScala")
val sc = new SparkContext(conf)
// Build a pair RDD and sum the values that share a key
val mapRDD = sc.parallelize(List((1, 2), (3, 4), (3, 6)))
val reduceRDD = mapRDD.reduceByKey((x, y) => x + y)
reduceRDD.foreach(x => println(x))
The output is as follows (ordering may vary, since foreach runs in parallel):
(1,2)
(3,10)
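For contrast, the same result can be produced with groupByKey followed by a sum, as in the minimal sketch below (reusing mapRDD from above). reduceByKey is usually preferred because it combines values for each key on the map side before the shuffle, while groupByKey ships every (key, value) pair across the network first.
// Group all values per key after the shuffle, then sum each group
val groupedRDD = mapRDD.groupByKey().mapValues(vs => vs.sum)
groupedRDD.foreach(println)  // (1,2) and (3,10), same as reduceByKey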
Word count: count the occurrences of each word in a file with the following contents:
aa bb cc aa aa aa dd dd ee ee ee ee
ff aa bb zks
ee kks
ee zz zks
Scala version
val conf = new SparkConf().setMaster("local[*]").setAppName("wordCountScala")
val sc = new SparkContext(conf)
val lines = sc.textFile("in/sample.txt")
// Split each line into words and pair every word with an initial count of 1
val wordsRDD = lines.flatMap(x => x.split(" ")).map(x => (x, 1))
val wordCountRDD = wordsRDD.reduceByKey((x, y) => x + y)
wordCountRDD.foreach(x => println(x))
Java version
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFlatMapFunction;

import scala.Tuple2;

SparkConf conf = new SparkConf().setAppName("ReduceByKeyJava").setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(conf);
JavaRDD<String> lines = sc.textFile("in/sample.txt");
// Split each line on whitespace and emit a (word, 1) pair for every word
JavaPairRDD<String, Integer> wordPairRDD = lines.flatMapToPair(new PairFlatMapFunction<String, String, Integer>() {
    @Override
    public Iterator<Tuple2<String, Integer>> call(String s) throws Exception {
        ArrayList<Tuple2<String, Integer>> tpLists = new ArrayList<Tuple2<String, Integer>>();
        String[] split = s.split("\\s+");
        for (int i = 0; i < split.length; i++) {
            Tuple2<String, Integer> tp = new Tuple2<String, Integer>(split[i], 1);
            tpLists.add(tp);
        }
        return tpLists.iterator();
    }
});
// Sum the 1s for each word
JavaPairRDD<String, Integer> wordCountRDD = wordPairRDD.reduceByKey(new Function2<Integer, Integer, Integer>() {
    @Override
    public Integer call(Integer i1, Integer i2) throws Exception {
        return i1 + i2;
    }
});
List<Tuple2<String, Integer>> collect = wordCountRDD.collect();
for (Tuple2<String, Integer> tuple : collect) {
    System.out.println(tuple);
}
The output is as follows (ordering may vary):
(aa,5)
(bb,2)
(cc,1)
(dd,2)
(ee,6)
(ff,1)
(zks,2)
(kks,1)
(zz,1)
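A common follow-up is to reorder the counts by frequency. A minimal sketch in Scala, reusing wordCountRDD from the Scala version above (the variable names are illustrative):
// Swap each pair to (count, word), sort descending by the count key, keep the top 3
val top3 = wordCountRDD.map { case (word, count) => (count, word) }
  .sortByKey(ascending = false)
  .take(3)
top3.foreach(println)  // (6,ee), (5,aa), then one of the count-2 words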
2. foldByKey
Basic syntax
def foldByKey(zeroValue: V)(func: (V, V) => V): RDD[(K, V)]
def foldByKey(zeroValue: V, numPartitions: Int)(func: (V, V) => V): RDD[(K, V)]
def foldByKey(zeroValue: V, partitioner: Partitioner)(func: (V, V) => V): RDD[(K, V)]
This function folds (merges) the values of an RDD[(K, V)] by key, starting from the given zero value.
val conf = new SparkConf().setMaster("local[*]").setAppName("FoldByKeyScala")
val sc = new SparkContext(conf)
val rdd1 = sc.makeRDD(Array(("A", 0), ("A", 2), ("B", 1), ("B", 2), ("C", 1)))
val fold = rdd1.foldByKey(0)(_ + _)
fold.collect.foreach(println)
The output is as follows (ordering may vary):
(A,2)
(B,3)
(C,1)
1. The 0 that is passed in serves as the initial (zero) value; it is applied per partition for each key (see the sketch below).
2. The zero value and the values are combined with the supplied function, here a sum.
3. Note that only values sharing the same key are summed together.
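Because the zero value is applied once per partition for each key rather than once globally, a non-zero zero value can be added more than once. A minimal sketch illustrating this, with the partition count fixed explicitly so the result is reproducible:
// Both ("A", ...) pairs in a single partition: 10 is added exactly once
val onePart = sc.makeRDD(Array(("A", 0), ("A", 2)), 1)
println(onePart.foldByKey(10)(_ + _).collect.mkString)   // (A,12)
// The two pairs split across two partitions: 10 is added in each partition
val twoParts = sc.makeRDD(Array(("A", 0), ("A", 2)), 2)
println(twoParts.foldByKey(10)(_ + _).collect.mkString)  // (A,22)
For this reason the overloads that take numPartitions or a Partitioner (listed above) can also affect the result whenever the zero value is not the identity of the fold function.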
3. sortByKey
Basic syntax
def sortByKey(ascending: Boolean = true, numPartitions: Int = self.partitions.length): RDD[(K, V)]
sortByKey sorts a pair RDD by key. The first parameter selects the sort order: true (the default) sorts ascending, false sorts descending.
Scala version
val conf = new SparkConf().setMaster("local[*]").setAppName("SortByKeyScala")
val sc = new SparkContext(conf)
val rdd = sc.parallelize(Array((3, 4), (1, 2), (4, 4), (2, 5), (6, 5), (5, 6)))
// sortByKey is a transformation, not an action
val sorted = rdd.sortByKey()
sorted.collect.foreach(println)
The output is as follows:
(1,2)
(2,5)
(3,4)
(4,4)
(5,6)
(6,5)
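To sort in descending order, pass false as the first argument. A minimal sketch reusing rdd from above:
// Same pairs, keys in descending order
val sortedDesc = rdd.sortByKey(ascending = false)
sortedDesc.collect.foreach(println)  // (6,5), (5,6), (4,4), (3,4), (2,5), (1,2)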
Java version
SparkConf conf = new SparkConf().setAppName("SortByKeyJava").setMaster("local[2]");
JavaSparkContext sc = new JavaSparkContext(conf);
List<Tuple2<Integer, String>> list = new ArrayList<>();
list.add(new Tuple2<>(5, "hello"));
list.add(new Tuple2<>(3, "world"));
list.add(new Tuple2<>(1, "scala"));
list.add(new Tuple2<>(2, "spark"));
list.add(new Tuple2<>(4, "java"));
JavaRDD<Tuple2<Integer, String>> rdd1 = sc.parallelize(list);
// Identity PairFunction: converts the RDD of tuples into a JavaPairRDD
PairFunction<Tuple2<Integer, String>, Integer, String> pairFunction = new PairFunction<Tuple2<Integer, String>, Integer, String>() {
    @Override
    public Tuple2<Integer, String> call(Tuple2<Integer, String> tup2) throws Exception {
        return tup2;
    }
};
JavaPairRDD<Integer, String> pairRDD = rdd1.mapToPair(pairFunction);
JavaPairRDD<Integer, String> sortedRDD = pairRDD.sortByKey();
List<Tuple2<Integer, String>> collect = sortedRDD.collect();
for (Tuple2<Integer, String> tuple : collect) {
    System.out.println(tuple);
}
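Note: because the input is already a list of Tuple2, JavaSparkContext.parallelizePairs(list) would yield the JavaPairRDD directly and make the identity mapToPair unnecessary; it is kept above only to show the conversion explicitly. Either way, collect on the sorted RDD returns the tuples in ascending key order: (1,scala), (2,spark), (3,world), (4,java), (5,hello).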