PySpark
Output methods: collect, count, reduce, take
from pyspark import SparkConf, SparkContext
import os
# Set the environment variable so PySpark can find the Python interpreter
os.environ["PYSPARK_PYTHON"] = "D:/python/python-3.10.9/python.exe"
# Build the SparkConf object
conf = SparkConf().setMaster("local[*]").setAppName("test_spark_app")
# Build the SparkContext
sc = SparkContext(conf=conf)
# Convert a Python container to an RDD; lists, tuples, sets, dicts, strings all work
rdd = sc.parallelize([1, 2, 3, 4, 5])
# collect operator: pull every element back to the driver as a list
print(rdd.collect())
# take operator: return the first n elements
print(rdd.take(3))
# count operator: the number of elements
print(rdd.count())
# reduce operator: aggregate elements pairwise, here ((((1 + 2) + 3) + 4) + 5)
num = rdd.reduce(lambda x, y: x + y)
print(num)
# Stop the PySpark program
sc.stop()
Output:
[1, 2, 3, 4, 5]
[1, 2, 3]
5
15
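As the comment in the code notes, parallelize accepts other Python containers as well. A minimal sketch of what each one produces, assuming the same local setup as above (note that a dict contributes only its keys):
from pyspark import SparkConf, SparkContext
import os

os.environ["PYSPARK_PYTHON"] = "D:/python/python-3.10.9/python.exe"
conf = SparkConf().setMaster("local[*]").setAppName("test_spark_app")
sc = SparkContext(conf=conf)
# Tuples and strings are split into their elements; a dict keeps only its keys
print(sc.parallelize((1, 2, 3)).collect())         # [1, 2, 3]
print(sc.parallelize({"a": 1, "b": 2}).collect())  # ['a', 'b']
print(sc.parallelize("abc").collect())             # ['a', 'b', 'c']
sc.stop()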
Writing data out to files: saveAsTextFile
Prerequisites:
Download the Hadoop archive:
http://archive.apache.org/dist/hadoop/common/hadoop-3.0.0/hadoop-3.0.0.tar.gz
Extract it to any location.
In code, point HADOOP_HOME at the extracted path: os.environ["HADOOP_HOME"] = "D:/tool/hadoop-3.0.0"
Put winutils.exe into the bin directory under the extracted path:
https://raw.githubusercontent.com/steveloughran/winutils/master/hadoop-3.0.0/bin/winutils.exe
Put hadoop.dll into C:\Windows\System32:
https://raw.githubusercontent.com/steveloughran/winutils/master/hadoop-3.0.0/bin/hadoop.dll
from pyspark import SparkConf, SparkContext
import os
# Set the environment variables so PySpark can find the Python interpreter and Hadoop
os.environ["PYSPARK_PYTHON"] = "D:/python/python-3.10.9/python.exe"
os.environ["HADOOP_HOME"] = "D:/tool/hadoop-3.0.0"
# Build the SparkConf object
conf = SparkConf().setMaster("local[*]").setAppName("test_spark_app")
# Build the SparkContext
sc = SparkContext(conf=conf)
# Convert a Python container to an RDD; numSlices=1 forces a single partition,
# so saveAsTextFile produces a single part file
rdd = sc.parallelize([1, 2, 3, 4, 5], numSlices=1)
rdd.saveAsTextFile("D:/saveTest")
# Stop the PySpark program
sc.stop()
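Note that saveAsTextFile writes a directory rather than a single file: D:/saveTest will contain a _SUCCESS marker plus one part file per partition (here just part-00000, because numSlices=1). A minimal sketch of reading the data back with textFile, assuming the same setup; the elements come back as strings:
from pyspark import SparkConf, SparkContext
import os

os.environ["PYSPARK_PYTHON"] = "D:/python/python-3.10.9/python.exe"
os.environ["HADOOP_HOME"] = "D:/tool/hadoop-3.0.0"
conf = SparkConf().setMaster("local[*]").setAppName("test_spark_app")
sc = SparkContext(conf=conf)
# textFile reads every part file in the directory, one element per line
rdd = sc.textFile("D:/saveTest")
print(rdd.collect())  # ['1', '2', '3', '4', '5']
sc.stop()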
A little progress every day~~~ keep going!