RDD Study Notes

1. Linking with Spark

Spark 2.3.2 is built against Scala 2.11.x; to write applications you need a compatible Scala version.

To write a Spark application, add the Spark Maven dependency to your build; Spark is available from Maven Central:

groupId = org.apache.spark
artifactId = spark-core_2.11
version = 2.3.2

In addition, if you want to access an HDFS cluster, add a hadoop-client dependency that matches your HDFS version:

groupId = org.apache.hadoop
artifactId = hadoop-client
version = <your-hdfs-version>
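
If you build with sbt instead of Maven, the same dependencies can be declared in build.sbt. This is only a sketch: the Spark coordinates assume Spark 2.3.2 on Scala 2.11, and the hadoop-client version shown is just an illustrative placeholder that must be replaced by the version matching your cluster.

    scalaVersion := "2.11.12"

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "2.3.2",
      // replace with the hadoop-client version that matches your HDFS cluster
      "org.apache.hadoop" % "hadoop-client" % "2.7.7"
    )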

Finally, import the Spark classes and implicit conversions into your program by adding the following lines:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

2. Initializing Spark

Initializing Spark in Java:
package com.zhangbb;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class App {

    public static void main(String[] args){

        SparkConf sparkConf = new SparkConf().setMaster("local[*]").setAppName("java Spark Demo");
        JavaSparkContext sc = new JavaSparkContext(sparkConf);

        sc.close();
    }
}

Initializing Spark in Scala:
package com.zhangbb

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object Application {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf().setAppName("Scala Spark Demo").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // stop the context when the application is done
    sc.stop()

}

3. Environment for running Spark locally (running Spark locally depends on a Hadoop environment; make sure the Spark version matches the Scala version, e.g. Spark-2.3.2 goes with Scala-2.11.x)

  1. Download Hadoop and unpack it to a local directory. I used hadoop-2.5.2, unpacked to D:\hadoop\hadoop-2.5.2.
  2. Computer -> Properties -> Advanced system settings -> Advanced tab -> Environment Variables -> New, and create HADOOP_HOME. Note: the value should be the directory one level above bin (an alternative that avoids environment variables is shown in the sketch after this list).
  3. Append %HADOOP_HOME%\bin; to the Path environment variable.
  4. Download hadooponwindows-master.zip, unpack it, and replace all files under the Hadoop bin directory with the files from its bin directory. Download: https://pan.baidu.com/s/1eGra7gKCDbvNubO8UO5rgw password: yk9u
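
If you prefer not to edit system environment variables, the Hadoop location can also be set from code before the SparkContext is created. A minimal sketch (not part of the original steps), assuming the same D:\hadoop\hadoop-2.5.2 directory with winutils.exe in its bin folder:

    // point Hadoop's shell utilities at the local installation programmatically
    System.setProperty("hadoop.home.dir", "D:\\hadoop\\hadoop-2.5.2")
    val conf = new SparkConf().setAppName("Local Demo").setMaster("local[*]")
    val sc = new SparkContext(conf)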

4. Operator overview

Value-type Transformation operators

Sample code:

//map: produces a new RDD in which each element is obtained by applying the function func to the corresponding element of the parent RDD; the resulting RDD is called a MappedRDD
    val mapRDD = sc.parallelize(List(1,2,3,4,5),2)
    val map1 = mapRDD.map(_*2).collect()
    println(map1.mkString(","))

    JavaRDD<Integer> map = sc.parallelize(Arrays.asList(1,2,3,4,5));
        JavaRDD<Integer> map1 = map.map(new Function<Integer, Integer>() {
            @Override
            public Integer call(Integer integer) throws Exception {
                return integer*2;
            }
        });
        System.out.println(map1.collect().toString());


//mapPartitions: gives you an iterator over each partition, so you can operate on every element of that partition
    val rd1 = sc.parallelize(List("20180101", "20180102", "20180103", "20180104", "20180105", "20180106"), 2)
    val rd2 = rd1.mapPartitions(iter => {
      val dateFormat = new java.text.SimpleDateFormat("yyyyMMdd")
      iter.map(date => dateFormat.parse(date))
    })
    rd2.collect().foreach(println)

JavaRDD<String> rd1 = sc.parallelize(Arrays.asList("20180101", "20180102", "20180103", "20180104", "20180105", "20180106"),2);
        JavaRDD<Tuple2<String,Date>> rd2 = rd1.mapPartitions(new FlatMapFunction<Iterator<String>, Tuple2<String, Date>>() {
            @Override
            public Iterator<Tuple2<String, Date>> call(Iterator<String> stringIterator) throws Exception {
                List<Tuple2<String,Date>> list = new ArrayList<>();
                while (stringIterator.hasNext()){
                    String item = stringIterator.next();
                    DateFormat dateFormat = new SimpleDateFormat("yyyyMMdd");
                    list.add(new Tuple2<String, Date>(item,dateFormat.parse(item)));
                }
                return list.iterator();
            }
        });
//        rd2.foreach(new VoidFunction<Tuple2<String, Date>>() {
//            @Override
//            public void call(Tuple2<String, Date> stringDateTuple2) throws Exception {
//                System.out.println(stringDateTuple2._2());
//            }
//        });
        rd2.foreach(x -> System.out.println(x));
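
The point of the mapPartitions example above is that the SimpleDateFormat is constructed once per partition rather than once per element. For contrast, a sketch of the per-element version written with plain map (illustrative only, using the same rd1 as the Scala example):

    val perElement = rd1.map { date =>
      // with map, the formatter is rebuilt for every single element
      val dateFormat = new java.text.SimpleDateFormat("yyyyMMdd")
      dateFormat.parse(date)
    }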


//flatMap: transforms each element of the RDD with func and then flattens the results, merging all of the produced collections into one; the resulting RDD is called a FlatMappedRDD
    val rd1 = sc.parallelize(Seq("I have a pen", "I have an apple", "I have a pen", "I have a pineapple"), 2)
    val rd2 = rd1.map(_.split(" "))
    println("map ==== " + rd2.collect().mkString(","))

    val rd3 = rd1.flatMap(_.split(" "))
    println("flatMap ==== " + rd3.collect().mkString(","))

JavaRDD<String> rd1 = sc.parallelize(Arrays.asList("I have a pen", "I have an apple", "I have a pen", "I have a pineapple"));
        JavaRDD<String> rd2 = rd1.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String s) throws Exception {
                return Arrays.asList(s.split(" ")).iterator();
            }
        });
        rd2.collect().forEach(x -> System.out.println(x));


    val rdd1 = sc.parallelize(Seq("Apple", "Banana", "Orange"))
    val rdd2 = sc.parallelize(Seq("Banana", "Pineapple"))
    val rdd3 = sc.parallelize(Seq("Durian"))

    //union: merges two RDDs of the same element type; duplicates are not removed
    val unionRDD = rdd1.union(rdd2).union(rdd3)
    //distinct: removes duplicate elements from the RDD
    val unionRDD2 = unionRDD.distinct()
    //filter: keeps an element when the predicate f returns true, otherwise drops it
    val filterRDD = rdd1.filter(_.contains("ana"))
    //intersection: returns the intersection of two RDDs
    val intersectionRDD = rdd1.intersection(rdd2)

    println("===union===")
    println(unionRDD.collect().mkString(","))
    println("===distinc===")
    println(unionRDD2.collect().mkString(","))
    println("===filter===")
    println(filterRDD.collect().mkString(","))
    println("===intersection===")
    println(intersectionRDD.collect().mkString(","))

    JavaRDD<String> rdd1 = sc.parallelize(Arrays.asList("Apple", "Banana", "Orange"));
    JavaRDD<String> rdd2 = sc.parallelize(Arrays.asList("Banana", "Pineapple"));
    JavaRDD<String> rdd3 = sc.parallelize(Arrays.asList("Durian"));

    JavaRDD<String> unionRDD = rdd1.union(rdd2).union(rdd3);
    JavaRDD<String> unionRDD2 = unionRDD.distinct();
    JavaRDD<String> filterRdd = rdd1.filter(new Function<String, Boolean>() {
        @Override
        public Boolean call(String s) throws Exception {
            return s.contains("ana");
        }
    });
    JavaRDD<String> intersectionRDD = rdd1.intersection(rdd2);
    unionRDD.collect().forEach(x -> System.out.println(x));
    unionRDD2.collect().forEach(x -> System.out.println(x));
    filterRdd.collect().forEach(x -> System.out.println(x));
    intersectionRDD.collect().forEach(x -> System.out.println(x));

Key-Value Transformation operators

Sample code:

    //groupByKey, reduceByKey: group the elements of an RDD[Key, Value] by key (reduceByKey additionally merges the values of each key with the given function)
    val scoreDetail = sc.parallelize(List(("xiaoming","A"), ("xiaodong","B"), ("peter","B"), ("liuhua","C"), ("xiaofeng","A")), 3)
    val scoreDetail2 = sc.parallelize(List("A", "B", "B", "D", "B", "D", "E", "A", "E"), 3)
    val sorrceGroup = scoreDetail.map(x => (x._2,x._1)).groupByKey().collect()
    val sorrceGroup2 = scoreDetail2.map(x => (x,1)).groupByKey().collect()

    val sorrceReduce = scoreDetail.map(x => (x._2,x._1)).reduceByKey(_+_).collect()
    val sorrceReduce2 = scoreDetail2.map(x => (x,1)).reduceByKey(_+_).collect()

    println("===groupByKey===")
    println(sorrceGroup.mkString(","))
    println("===groupByKey===")
    println(sorrceGroup2.mkString(","))
    println("===reduceByKey===")
    println(sorrceReduce.mkString(","))
    println("===reduceByKey===")

    JavaRDD<String> scoreDetail = sc.parallelize(Arrays.asList("A", "B", "B", "D", "B", "D", "E", "A", "E"),3);
        JavaPairRDD<String,Integer> socerGroup = scoreDetail.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String s) throws Exception {
                Tuple2<String, Integer> tuple2 = new Tuple2(s,1);
                return tuple2;
            }
        });
        JavaPairRDD<String,Iterable<Integer>> socerGroupres = socerGroup.groupByKey();
        System.out.println(socerGroupres.collect().toString());

        JavaPairRDD<String,Integer> socerReduce = scoreDetail.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String s) throws Exception {
                Tuple2<String, Integer> tuple2 = new Tuple2(s,1);
                return tuple2;
            }
        });
        JavaPairRDD<String,Integer> socerReduceRes = socerReduce.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer integer, Integer integer2) throws Exception {
                return integer+integer2;
            }
        });
        System.out.println(socerReduceRes.collect().toString());

        List<Tuple2<String,String>> list = new ArrayList<>();
        list.add(new Tuple2<>("xiaoming","A"));
        list.add(new Tuple2<>("xiaodong","B"));
        list.add(new Tuple2<>("peter","B"));
        list.add(new Tuple2<>("liuhua","C"));
        list.add(new Tuple2<>("xiaofeng","A"));
        JavaPairRDD<String,String> scoreDetail20 = sc.parallelizePairs(list);
        JavaPairRDD<String,String> scoreDetail2 = scoreDetail20.mapPartitionsToPair(new PairFlatMapFunction<Iterator<Tuple2<String,String>>, String, String>() {
            @Override
            public Iterator<Tuple2<String, String>> call(Iterator<Tuple2<String, String>> tuple2Iterator) throws Exception {
                List<Tuple2<String,String>> list = new ArrayList<>();
                while (tuple2Iterator.hasNext()){
                    Tuple2<String,String> t1 = tuple2Iterator.next();
                    Tuple2<String,String> t2 = new Tuple2<>(t1._2(),t1._1());
                    list.add(t2);
                }
                return list.iterator();
            }
        });
        JavaPairRDD<String,Iterable<String>> socerGroupres1 = scoreDetail2.groupByKey();
        System.out.println(socerGroupres1.collect().toString());
        JavaPairRDD<String,String> socerReduceRes1 = scoreDetail2.reduceByKey(new Function2<String, String, String>() {
            @Override
            public String call(String s, String s2) throws Exception {
                return s + "," + s2;
            }
        });
        System.out.println(socerReduceRes1.collect().toString());
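
For pure aggregation, reduceByKey is usually preferred over groupByKey because it merges values inside each partition before the shuffle, so less data moves across the network. A minimal word-count sketch (illustrative input):

    val words = sc.parallelize(Seq("a", "b", "b", "a", "c"))
    val viaReduce = words.map((_, 1)).reduceByKey(_ + _)            // (a,2), (b,2), (c,1)
    val viaGroup = words.map((_, 1)).groupByKey().mapValues(_.sum)  // same result, but every pair is shuffled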


    //join: joins two RDDs by key; for each key present in both, the result contains (key, (value1, value2)) pairs
    val data1 = sc.parallelize(Array(("A", 1),("b", 2),("c", 3)))
    val data2 = sc.parallelize(Array(("A", 4),("A", 6),("b", 7),("c", 3),("c", 8)))

    val joinRDD = data1.join(data2)
    println(joinRDD.collect().mkString(","))


        List<Tuple2<String,Integer>> list1 = new ArrayList<>();
        list1.add(new Tuple2<String,Integer>("A", 1));
        list1.add(new Tuple2<String,Integer>("b", 2));
        list1.add(new Tuple2<String,Integer>("c", 3));
        List<Tuple2<String,Integer>> list2 = new ArrayList<>();
        list2.add(new Tuple2<String,Integer>("A", 4));
        list2.add(new Tuple2<String,Integer>("A", 6));
        list2.add(new Tuple2<String,Integer>("b", 7));
        list2.add(new Tuple2<String,Integer>("c", 3));
        list2.add(new Tuple2<String,Integer>("c", 8));

        JavaPairRDD<String,Integer> data1 = sc.parallelizePairs(list1);
        JavaPairRDD<String,Integer> data2 = sc.parallelizePairs(list2);

        JavaPairRDD<String, Tuple2<Integer, Integer>> dataJoin = data1.join(data2);
        dataJoin.collect().forEach(x -> System.out.println(x));

Action operators

Sample code:

    val data1 = sc.parallelize(Array(("A", 1),("b", 2),("c", 3),("A", 4),("A", 6),("b", 7),("c", 3),("c", 8)))
    //count: returns the number of elements in the RDD
    println(data1.count())
    //countByKey: returns how many times each key occurs in an RDD[K, V]
    println(data1.countByKey().mkString(","))
    //countByValue: counts how many times each value (here each (key, value) pair) occurs in the RDD
    println(data1.countByValue().mkString(","))
    //take: returns the elements at indices 0 to num-1, without sorting
    println(data1.take(1).mkString(","))
    //takeOrdered: returns num elements in ascending order (the default ordering)
    println(data1.takeOrdered(3).mkString(","))
    //top: like takeOrdered, but in descending order
    println(data1.top(3).mkString(","))

        List<Tuple2<String,Integer>> list1 = new ArrayList<>();
        list1.add(new Tuple2<String,Integer>("A", 1));
        list1.add(new Tuple2<String,Integer>("b", 2));
        list1.add(new Tuple2<String,Integer>("c", 3));
        list1.add(new Tuple2<String,Integer>("A", 4));
        list1.add(new Tuple2<String,Integer>("A", 6));
        list1.add(new Tuple2<String,Integer>("b", 7));
        list1.add(new Tuple2<String,Integer>("c", 3));
        list1.add(new Tuple2<String,Integer>("c", 8));

        JavaPairRDD<String,Integer> data = sc.parallelizePairs(list1);
        System.out.println("count" + data.count());
        System.out.println("countByKey" + data.countByKey());
        Map<String,Long> map =   data.countByKey();
        System.out.println("countByValue" + data.countByValue());
        Map<Tuple2<String,Integer>,Long> mapTuple = data.countByValue();
        //take(n) returns the first n elements; here take(1) returns just the first pair
        List<Tuple2<String,Integer>> takeList = data.take(1);
        System.out.println("take");
        takeList.forEach(x -> System.out.println(x._1()+"-"+x._2()));

        JavaRDD<Integer> data2 = sc.parallelize(Arrays.asList(9,3,4,2,6,8,4,5,6));
        List<Integer> takeOrderList = data2.takeOrdered(3);
        System.out.println("takeOrdered");
        takeOrderList.forEach(x -> System.out.println(x.toString()));
        List<Integer> takeTopList = data2.top(3);
        takeTopList.forEach(x -> System.out.println(x.toString()));

        //takeOrdered and top cannot be used on this pair RDD in Java because Tuple2 has no natural ordering (a Comparator would be required)
        //List<Tuple2<String,Integer>> takeOrderList = data.takeOrdered(3);
        //List<Tuple2<String,Integer>> takeTopList = data.top(3);
        //System.out.println("takeOrdered");
        //takeOrderList.forEach(x -> System.out.println(x.toString()));
        //System.out.println("top" );
        //takeTopList.forEach(x -> System.out.println(x.toString()));
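
In the Scala code above, takeOrdered and top do work on the pair RDD because Scala provides a default lexicographic Ordering for tuples. To order by the value instead of the key, an explicit Ordering can be passed; a sketch using the data1 pair RDD from the Scala example:

    // order the (key, value) pairs by their value rather than lexicographically
    val smallestByValue = data1.takeOrdered(3)(Ordering.by[(String, Int), Int](_._2))
    println(smallestByValue.mkString(","))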


    val data = sc.parallelize(List(1,2,3,2,3,4,5,4,3,6,8,76,8),3)
    //reduce: aggregates the elements of the RDD. Note: reduceByKey is a Transformation while reduce is an Action; reduce throws an exception if the RDD is empty (fold, as used in the Java code below, returns its zero value, 0, instead)
    val d = data.reduce(_ + _)
    val f = data.filter(_ > 4).reduce(_ + _)
    println(d)
    println(f)
    //fold: like reduce, aggregates the RDD; each partition is first aggregated starting from the supplied zeroValue, and the per-partition results are then combined
    val e = data.fold(0)(_+_)
    val h = data.filter(_ >100).fold(0)(_+_)
    println(e)
    println(h)
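
As noted in the comments above, reduce throws an exception when the RDD is empty, while fold simply returns its zero value. A small sketch of guarding against an empty RDD (isEmpty has been available on RDDs since Spark 1.3):

    val empty = sc.parallelize(Seq.empty[Int])
    val foldSum = empty.fold(0)(_ + _)                             // 0, no exception
    val safeSum = if (empty.isEmpty()) 0 else empty.reduce(_ + _)  // guard before calling reduce
    // calling empty.reduce(_ + _) directly would throw an UnsupportedOperationException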

        JavaRDD<Integer> data = sc.parallelize(Arrays.asList(1,3,4,6,2,3,4,5,6));
        Integer sum = data.reduce(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer integer, Integer integer2) throws Exception {
                return integer + integer2;
            }
        });
        Integer sum1 = data.filter(new org.apache.spark.api.java.function.Function<Integer, Boolean>() {
            @Override
            public Boolean call(Integer integer) throws Exception {
                return integer>7;
            }
        }).fold(0, new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer integer, Integer integer2) throws Exception {
                return integer + integer2;
            }
        });
        System.out.println(sum);
        System.out.println(sum1);

        Integer sum2 = data.fold(0, new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer integer, Integer integer2) throws Exception {
                return integer + integer2;
            }
        });
        Integer sum3 = data.filter(new org.apache.spark.api.java.function.Function<Integer, Boolean>() {
            @Override
            public Boolean call(Integer integer) throws Exception {
                return integer>7;
            }
        }).fold(0, new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer integer, Integer integer2) throws Exception {
                return integer + integer2;
            }
        });
        System.out.println(sum2);
        System.out.println(sum3);


    //aggregateByKey: computes the per-key average. aggregateByKey(zeroValue)(seqOp, combOp): seqOp folds each value into the per-partition (sum, count) accumulator, and combOp merges accumulators from different partitions
    val data = sc.parallelize( Seq ( ("A",110),("A",130),("A",120), ("B",200),("B",206),("B",206), ("C",150),("C",160),("C",170)))
    val rdd1 = data.aggregateByKey((0,0))((k,v) => (k._1 + v,k._2 +1),(k,v) => (k._1 + v._1,k._2 + v._2))
    val rdd2 = rdd1.mapValues(x => x._1/x._2)
    println(rdd1.collect().mkString(","))
    println(rdd2.collect().mkString(","))

List<Tuple2<String,Integer>> list = new ArrayList<>();
        list.add(new Tuple2<>("A",110));
        list.add(new Tuple2<>("A",130));
        list.add(new Tuple2<>("A",120));
        list.add(new Tuple2<>("B",200));
        list.add(new Tuple2<>("B",206));
        list.add(new Tuple2<>("B",206));
        list.add(new Tuple2<>("C",150));
        list.add(new Tuple2<>("C",160));
        list.add(new Tuple2<>("C",170));
        JavaPairRDD<String,Integer> data = sc.parallelizePairs(list);

        JavaPairRDD<String,Tuple2<Integer, Integer>> rdd = data.aggregateByKey(new Tuple2<Integer, Integer>(0, 0), new Function2<Tuple2<Integer, Integer>, Integer, Tuple2<Integer, Integer>>() {
                    @Override
                    public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> v1, Integer v2) throws Exception {
                        return new Tuple2<Integer, Integer>(v1._1() + v2,v1._2() +1);
                    }
                }, new Function2<Tuple2<Integer, Integer>, Tuple2<Integer, Integer>, Tuple2<Integer, Integer>>() {
                    @Override
                    public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> v1, Tuple2<Integer, Integer> v2) throws Exception {

                        return new Tuple2<Integer, Integer>(v1._1() + v2._1(),v1._2() + v2._2());
                    }
                }
        );
        System.out.println(rdd.collect().toString());
        JavaPairRDD<String,Integer> rdd2 = rdd.mapValues(new org.apache.spark.api.java.function.Function<Tuple2<Integer, Integer>, Integer>() {
            @Override
            public Integer call(Tuple2<Integer, Integer> v1) throws Exception {
                return v1._1()/v1._2();
            }
        });
        System.out.println(rdd2.collect().toString());

5. Running a Spark program

  • Local mode: run the main method directly.
  • YARN (cluster): submit the application jar with spark-submit, for example: spark-submit --master yarn --deploy-mode client --class <main-class> <path-to-application-jar> <input_path> <output_path> (in Spark 2.x this form replaces the older --master yarn-client).