countByKey
Returns the number of elements for each key.
def countByKey(): Map[K, Long]
For example, given the RDD {(1, 2), (2, 4), (2, 5), (3, 4), (3, 5), (3, 6)}, rdd.countByKey returns {(1, 1), (2, 2), (3, 3)}.
JavaPairRDD<Integer, Integer> rdd = javaSparkContext.parallelizePairs(Arrays.asList(new Tuple2<>(1, 2),
new Tuple2<>(1, 4),
new Tuple2<>(2, 5),
new Tuple2<>(3, 4),
new Tuple2<>(3, 5),
new Tuple2<>(3, 6),
new Tuple2<>(3, 5)));
Map<Integer, Long> integerLongMap = rdd.countByKey();
System.out.println(integerLongMap);
//key 1 appears 2 times, key 3 appears 4 times, key 2 appears 1 time
//{1=2, 3=4, 2=1}
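The semantics of countByKey can be modeled without a Spark cluster: it is equivalent to grouping the pairs by key and counting the members of each group. A minimal plain-Java sketch (the class and method names here are illustrative, not part of the Spark API):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CountByKeyDemo {
    // Plain-Java equivalent of countByKey: group the pairs by key,
    // then count how many pairs fall into each group.
    static Map<Integer, Long> countByKey(List<Map.Entry<Integer, Integer>> pairs) {
        return pairs.stream()
                .collect(Collectors.groupingBy(Map.Entry::getKey, Collectors.counting()));
    }

    public static void main(String[] args) {
        // Same data as the JavaPairRDD example above
        List<Map.Entry<Integer, Integer>> pairs = Arrays.asList(
                new SimpleEntry<>(1, 2),
                new SimpleEntry<>(1, 4),
                new SimpleEntry<>(2, 5),
                new SimpleEntry<>(3, 4),
                new SimpleEntry<>(3, 5),
                new SimpleEntry<>(3, 6),
                new SimpleEntry<>(3, 5));
        System.out.println(countByKey(pairs)); // {1=2, 2=1, 3=4}
    }
}
```

Note that only the per-key counts are defined; the iteration order of the resulting map is not guaranteed, in Spark or here.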
collectAsMap()
Returns a HashMap containing all the key-value pairs in the RDD. If a key appears more than once, later values overwrite earlier ones. Because the whole result is collected to the driver, this is only suitable for small datasets.
JavaPairRDD<Integer, Integer> rdd = javaSparkContext.parallelizePairs(Arrays.asList(new Tuple2<>(1, 2),
new Tuple2<>(2, 9),
new Tuple2<>(2, 5),
new Tuple2<>(3, 4),
new Tuple2<>(3, 5),
new Tuple2<>(3, 6)));
Map<Integer, Integer> integerIntegerMap = rdd.collectAsMap();
System.out.println(integerIntegerMap);
//The result of running the code: for key 2, the value 5 overwrote 9; for key 3, 6 overwrote 4 and 5
//{2=5, 1=2, 3=6}
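The overwrite behavior can likewise be modeled in plain Java: collecting pairs into a map with a merge function that keeps the later value reproduces "later elements overwrite earlier ones". A sketch under that assumption (class and method names are illustrative, not Spark's):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectAsMapDemo {
    // Plain-Java model of collectAsMap: build a map from the pairs,
    // resolving duplicate keys by keeping the later value.
    static Map<Integer, Integer> collectAsMap(List<Map.Entry<Integer, Integer>> pairs) {
        return pairs.stream()
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        Map.Entry::getValue,
                        (earlier, later) -> later)); // later value wins
    }

    public static void main(String[] args) {
        // Same data as the JavaPairRDD example above
        List<Map.Entry<Integer, Integer>> pairs = Arrays.asList(
                new SimpleEntry<>(1, 2),
                new SimpleEntry<>(2, 9),
                new SimpleEntry<>(2, 5),
                new SimpleEntry<>(3, 4),
                new SimpleEntry<>(3, 5),
                new SimpleEntry<>(3, 6));
        System.out.println(collectAsMap(pairs)); // {1=2, 2=5, 3=6}
    }
}
```

As with countByKey, only the key-to-value mapping is defined; the printed key order may differ from Spark's output above.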