RDD (Resilient Distributed Dataset) is the core abstraction of Spark. An RDD is read-only and memory-based; combining RDDs with operators forms a DAG (directed acyclic graph), which Spark can execute lazily and with speculative execution, making it highly efficient. This article walks through RDD-based programming.
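As a quick illustration of the lazy execution mentioned above: transformations only record lineage in the DAG, and nothing runs until an action is called. A minimal sketch for the spark-shell (the values are just an example):

```scala
// Transformations such as map build the DAG but submit no job yet.
val nums = sc.parallelize(1 to 5)
val doubled = nums.map(_ * 2)   // lazy: nothing has executed so far
// The action collect() triggers the actual computation.
doubled.collect()               // Array(2, 4, 6, 8, 10)
```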
1 System, software, and prerequisites
- CentOS 7 64-bit workstation. The author's machine IP is 192.168.100.200; readers should substitute their own.
- Spark can already access MySQL: https://www.jianshu.com/p/2b4471c03fea
- Spark can already access Hive: https://www.jianshu.com/p/c0ea1249958e
- Hadoop is installed: https://www.jianshu.com/p/b7ae3b51e559
- To rule out permission problems, all operations are performed as root.
- Make sure Hadoop and Spark are running, and that spark-shell has been launched into the Scala prompt.
2 Operations
- 1 Create an RDD from a local file
Make sure there is a text file named word under /root/spark-2.2.1-bin-hadoop2.7/bin.
Run the following commands in the scala shell:
>val rdd1 = sc.textFile("file:///root/spark-2.2.1-bin-hadoop2.7/bin/word")
>rdd1.collect().foreach(println)
- 2 Create an RDD from HDFS
Make sure there is a text file /word in HDFS.
Run the following commands in the scala shell:
>val rdd2 = sc.textFile("hdfs://192.168.100.200:9000/word")
>rdd2.collect().foreach(println)
- 3 Create an RDD from an array
Run the following commands in the scala shell:
>val array=Array("java","python","cpp")
>val rdd3 = sc.parallelize(array)
>rdd3.collect().foreach(println)
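parallelize also accepts an optional second argument, the number of partitions to split the data into. A quick sketch with the same array:

```scala
// Split the collection into 2 partitions explicitly.
val rdd3b = sc.parallelize(Array("java", "python", "cpp"), 2)
rdd3b.getNumPartitions              // 2
rdd3b.collect().foreach(println)
```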
- 4 Transformation operator: filter
Run the following commands in the scala shell:
>val array=Array("I am zhangli","zhangli is myname","He is zhangli")
>val rdd4 = sc.parallelize(array)
>rdd4.filter(line=>line.contains("zhangli")).collect().foreach(println)
- 5 Transformation operator: flatMap
Run the following commands in the scala shell:
>val array=Array("I am zhangli","zhangli is myname","He is zhangli")
>val rdd = sc.parallelize(array)
>rdd.flatMap(line=>line.split(" ")).collect().foreach(println)
- 6 Transformation operator: map
Building on the flatMap above, map turns each word into a (word, 1) pair. Run the following commands in the scala shell:
>val array=Array("I am zhangli","zhangli is myname","He is zhangli")
>val rdd = sc.parallelize(array)
>rdd.flatMap(line=>line.split(" ")).map(word=>(word,1)).collect().foreach(println)
- 7 Transformation operator: reduceByKey
Run the following commands in the scala shell:
>val array=Array("I am zhangli","zhangli is myname","He is zhangli")
>val rdd = sc.parallelize(array)
>rdd.flatMap(line=>line.split(" ")).map(word=>(word,1)).reduceByKey((a,b)=>a+b).collect().foreach(println)
- 8 Transformation operator: groupByKey
Run the following commands in the scala shell:
>val array=Array("I am zhangli","zhangli is myname","He is zhangli")
>val rdd = sc.parallelize(array)
>rdd.flatMap(line=>line.split(" ")).map((_,1)).groupByKey().collect().foreach(println)
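groupByKey pairs each key with an Iterable of all its values. To obtain the same word counts as reduceByKey, sum the grouped values with mapValues. A sketch using the same input:

```scala
val array = Array("I am zhangli", "zhangli is myname", "He is zhangli")
val rdd = sc.parallelize(array)
// Equivalent word count via groupByKey + mapValues. Note that
// reduceByKey is usually preferred: it combines values on each
// partition before shuffling, while groupByKey ships every pair.
rdd.flatMap(_.split(" ")).map((_, 1)).groupByKey().mapValues(_.sum).collect().foreach(println)
```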
- 9 Action operator: count
Run the following commands in the scala shell:
>val array=Array("I am zhangli","zhangli is myname","He is zhangli")
>val rdd = sc.parallelize(array)
>rdd.count()
- 10 Action operator: first
Run the following commands in the scala shell:
>val array=Array("I am zhangli","zhangli is myname","He is zhangli")
>val rdd = sc.parallelize(array)
>rdd.first()
- 11 Action operator: take
Run the following commands in the scala shell:
>val array=Array("I am zhangli","zhangli is myname","He is zhangli")
>val rdd = sc.parallelize(array)
>rdd.take(2)
- 12 Action operator: reduce
Run the following commands in the scala shell:
>val rdd = sc.parallelize(1 to 10)
>rdd.reduce((x, y) => x + y)
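Putting the pieces together, the transformations and actions above combine into a single word-count pipeline against the local file from step 1 (same path as earlier; sortBy here is one extra operator, used to rank the results):

```scala
// Word count: load, split into words, pair each with 1, aggregate,
// then sort by count descending and print the top 3.
val counts = sc.textFile("file:///root/spark-2.2.1-bin-hadoop2.7/bin/word")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)
counts.sortBy(_._2, ascending = false).take(3).foreach(println)
```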
These are the commonly used RDD operators.