package com.bjsxt;
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;
public class AggregateByKey {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("test").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // Sample (name, score) pairs. The original listing is truncated after
        // the third tuple; the remaining lines are a plausible reconstruction
        // of a typical aggregateByKey example.
        List<Tuple2<String, Integer>> list = Arrays.asList(
                new Tuple2<>("zhangsan", 10),
                new Tuple2<>("lisi", 11),
                new Tuple2<>("zhangsan", 12),
                new Tuple2<>("zhangsan", 13),
                new Tuple2<>("lisi", 14));
        // Two partitions, so each key is first aggregated locally per partition.
        JavaPairRDD<String, Integer> pairRdd = sc.parallelizePairs(list, 2);
        JavaPairRDD<String, Integer> result = pairRdd.aggregateByKey(0,
                (acc, value) -> Math.max(acc, value), // seqFunc: within one partition
                (a, b) -> a + b);                     // combFunc: across partitions
        result.foreach(t -> System.out.println(t));
        sc.stop();
    }
}
Understanding the aggregateByKey Operator in Spark [Pure Java Code]

This article examines Spark's aggregateByKey operator through a Java code example. aggregateByKey aggregates key-value pairs in two phases: it first performs a local aggregation within each partition, then merges the partial results across partitions, which reduces shuffle traffic and makes the aggregation efficient.
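To make the two-phase semantics concrete without a Spark cluster, the local-then-global aggregation can be sketched in plain Java. This is a minimal, Spark-free simulation (the class `AggregateByKeyDemo` and its helper method are hypothetical, written for illustration only): each inner list stands for one partition, `seqFunc` folds values into the per-key accumulator inside a partition starting from `zeroValue`, and `combFunc` merges the per-partition results.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;

// Spark-free sketch of aggregateByKey's two-phase semantics.
// Hypothetical helper class; real Spark does this across a cluster.
public class AggregateByKeyDemo {

    public static Map<String, Integer> aggregateByKey(
            List<List<Map.Entry<String, Integer>>> partitions,
            int zeroValue,
            BiFunction<Integer, Integer, Integer> seqFunc,
            BiFunction<Integer, Integer, Integer> combFunc) {
        // Phase 1: fold each partition locally with seqFunc,
        // starting from zeroValue per key per partition.
        List<Map<String, Integer>> partials = new ArrayList<>();
        for (List<Map.Entry<String, Integer>> partition : partitions) {
            Map<String, Integer> local = new HashMap<>();
            for (Map.Entry<String, Integer> e : partition) {
                int acc = local.getOrDefault(e.getKey(), zeroValue);
                local.put(e.getKey(), seqFunc.apply(acc, e.getValue()));
            }
            partials.add(local);
        }
        // Phase 2: merge the per-partition maps with combFunc.
        Map<String, Integer> result = new HashMap<>();
        for (Map<String, Integer> local : partials) {
            for (Map.Entry<String, Integer> e : local.entrySet()) {
                result.merge(e.getKey(), e.getValue(), combFunc);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Same data and partitioning as the Spark example above:
        // partition 0 = {zhangsan:10, lisi:11}
        // partition 1 = {zhangsan:12, zhangsan:13, lisi:14}
        List<List<Map.Entry<String, Integer>>> partitions = Arrays.asList(
                Arrays.asList(Map.entry("zhangsan", 10), Map.entry("lisi", 11)),
                Arrays.asList(Map.entry("zhangsan", 12), Map.entry("zhangsan", 13),
                        Map.entry("lisi", 14)));
        // max within a partition, sum across partitions
        Map<String, Integer> result =
                aggregateByKey(partitions, 0, Math::max, Integer::sum);
        System.out.println("zhangsan -> " + result.get("zhangsan")); // zhangsan -> 23
        System.out.println("lisi -> " + result.get("lisi"));         // lisi -> 25
    }
}
```

Tracing it by hand: zhangsan's local maxima are 10 (partition 0) and 13 (partition 1), which sum to 23; lisi's are 11 and 14, which sum to 25. This is why the choice of seqFunc and combFunc can differ: the local step runs on raw values, while the global step runs only on already-aggregated partials.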