A GroupByKey Example in Spark (Java)
reduceByKey can only express operations that are commutative and associative. If you want to collect all the values for a key into one place and operate on them as a whole, you need groupByKey instead.
For example, below I want to gather the values belonging to the same key together and run some computation over them (such as concatenating strings, or deduplicating).
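Before the full Spark program, here is a plain-Java sketch of the idea, with no Spark dependency: it emulates what groupByKey does, collecting every value for a key into a list so that arbitrary (non-reducible) operations like string joining become possible. The class and method names are hypothetical, chosen only for this illustration; the data mirrors the sample below, where each key maps to its square.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class GroupSketch {
    // Emulate groupByKey locally: collect every value for a key into a list.
    // Here the value is key*key, matching the Spark sample below.
    static Map<Integer, List<Integer>> groupByKey(List<Integer> keys) {
        Map<Integer, List<Integer>> grouped = new TreeMap<>();
        for (int k : keys) {
            grouped.computeIfAbsent(k, x -> new ArrayList<>()).add(k * k);
        }
        return grouped;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> grouped = groupByKey(Arrays.asList(1, 1, 2, 2, 1));
        // With the whole list in hand we can do things reduceByKey cannot
        // express directly, e.g. join the values into a string
        for (Map.Entry<Integer, List<Integer>> e : grouped.entrySet()) {
            String joined = e.getValue().stream()
                    .map(String::valueOf)
                    .collect(Collectors.joining(","));
            System.out.println(e.getKey() + " -> " + joined);
        }
    }
}
```

Running this prints one line per key, with all of that key's values joined together, which is exactly the shape of result the Spark program below produces in a distributed setting.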
public class SparkSample {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf();
        sparkConf.setAppName("Spark_GroupByKey_Sample");
        sparkConf.setMaster("local");
        JavaSparkContext context = new JavaSparkContext(sparkConf);

        List<Integer> data = Arrays.asList(1, 1, 2, 2, 1);
        JavaRDD<Integer> distData = context.parallelize(data);

        // Map each element to a (key, key*key) pair
        JavaPairRDD<Integer, Integer> firstRDD = distData.mapToPair(new PairFunction<Integer, Integer, Integer>() {
            @Override
            public Tuple2<Integer, Integer> call(Integer integer) throws Exception {
                return new Tuple2<>(integer, integer * integer);