Create a new code file WordCount.py and write the program:
touch WordCount.py
vim WordCount.py
from pyspark import SparkConf, SparkContext
# Start Spark in local mode
conf = SparkConf().setMaster("local").setAppName("My App")
# Create a SparkContext object
sc = SparkContext(conf=conf)
# Path of the input file
logFile = "file:///opt/servers/spark/README.md"
# Read README.md into an RDD (2 partitions) and cache it
logData = sc.textFile(logFile, 2).cache()
# Count the lines containing the letter 'a' and the letter 'b', respectively
numAS = logData.filter(lambda line: 'a' in line).count()
numBs = logData.filter(lambda line: 'b' in line).count()
# Print the result
print('Lines with a: %s, Lines with b: %s' % (numAS, numBs))
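For intuition, the Spark job above is just two filtered counts. The same logic can be sketched in plain Python without Spark (the sample lines below are made up for illustration):

```python
# Plain-Python equivalent of the two filter/count jobs (sample data is made up)
lines = [
    "Apache Spark is a fast engine",  # contains 'a', no 'b'
    "build it with sbt",              # contains 'b', no 'a'
    "## Examples",                    # contains 'a', no 'b'
]

# Count lines containing 'a' and 'b', mirroring the two RDD filter().count() calls
num_as = sum(1 for line in lines if 'a' in line)
num_bs = sum(1 for line in lines if 'b' in line)
print('Lines with a: %s, Lines with b: %s' % (num_as, num_bs))  # → 2 and 1
```

Spark distributes exactly this computation across the RDD's partitions; `cache()` keeps the RDD in memory so the second `count()` does not re-read the file.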
Run the code: python3 WordCount.py
If the following error is reported:
python3 WordCount.py
Traceback (most recent call last):
  File "WordCount.py", line 1, in <module>
    from pyspark import SparkConf, SparkContext
ModuleNotFoundError: No module named 'pyspark'
it means the pyspark module is not installed.
Go into the lib/site-packages directory under the Python installation directory and use pip to download and install pyspark; here the Tsinghua University mirror in mainland China is used.
pip install pyspark -i http://pypi.tuna.tsinghua.edu.cn/simple/ --trusted-host pypi.tuna.tsinghua.edu.cn
Downloading from the mirror ran into several problems along the way, which I recorded separately.
After pyspark is installed successfully, run the code again. It still fails, this time with:
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
/usr/local/python3/lib/python3.7/site-packages/pyspark/context.py:317: FutureWarning: Python 3.7 support is deprecated in Spark 3.4.
  warnings.warn("Python 3.7 support is deprecated in Spark 3.4.", FutureWarning)
Traceback (most recent call last):
  File "WordCount.py", line 11, in <module>
    numAS = logData.filter(lambda line: 'a' in line).count()
  File "/usr/local/python3/lib/python3.7/site-packages/pyspark/rdd.py", line 2297, in count
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/usr/local/python3/lib/python3.7/site-packages/pyspark/rdd.py", line 2273, in sum
    0, operator.add
  File "/usr/local/python3/lib/python3.7/site-packages/pyspark/rdd.py", line 2025, in fold
    vals = self.mapPartitions(func).collect()
  File "/usr/local/python3/lib/python3.7/site-packages/pyspark/rdd.py", line 1814, in collect
    sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
  File "/usr/local/python3/lib/python3.7/site-packages/pyspark/rdd.py", line 5442, in _jrdd
    self.ctx, self.func, self._prev_jrdd_deserializer, self._jrdd_deserializer, profiler
  File "/usr/local/python3/lib/python3.7/site-packages/pyspark/rdd.py", line 5250, in _wrap_function
    sc._javaAccumulator,
TypeError: 'JavaPackage' object is not callable
This happens because the installed pyspark version is too new for the local Spark installation; pinning it to 3.2.0 fixes the problem:
pip3 install pyspark==3.2.0 -i http://pypi.tuna.tsinghua.edu.cn/simple/ --trusted-host pypi.tuna.tsinghua.edu.cn
Looking in indexes: http://pypi.tuna.tsinghua.edu.cn/simple/
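The root cause is a version mismatch: the pip-installed pyspark package must match the major.minor version of the local Spark installation (here 3.2.x). A small sketch of that compatibility check (the version strings below are examples, not read from a real install):

```python
# Check that pip-installed pyspark and the local Spark agree on major.minor.
# A mismatch is a common cause of "TypeError: 'JavaPackage' object is not callable".
def versions_compatible(pyspark_version: str, spark_version: str) -> bool:
    def major_minor(v: str) -> tuple:
        return tuple(v.split(".")[:2])
    return major_minor(pyspark_version) == major_minor(spark_version)

print(versions_compatible("3.5.1", "3.2.0"))  # → False: mismatched install
print(versions_compatible("3.2.0", "3.2.0"))  # → True: versions agree
```

In practice, compare the output of `pip3 show pyspark` against `spark-submit --version` before debugging deeper.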
Run the code again; the result is as follows:
3. Run the program via spark-submit
Go into the bin directory under the Spark installation directory, then run:
./spark-submit /absolute/path/to/WordCount.py
The <master-url> argument is omitted here, so Spark defaults to local mode.
The output is as follows (partial screenshot):
This run produces a lot of extraneous log output. You can suppress this noise by raising the log4j logging level.
Go into the conf folder under the Spark installation directory:
cp log4j2.properties.template log4j2.properties
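Then edit log4j2.properties and raise the root logger threshold. In the Spark 3.3+ template the relevant line is `rootLogger.level` (older Spark versions use log4j.properties with `log4j.rootCategory` instead); a minimal change, assuming the standard template layout, looks like:

```properties
# log4j2.properties: show only warnings and errors on the console
rootLogger.level = warn
rootLogger.appenderRef.stdout.ref = console
```

With `warn` in place, the INFO-level scheduler and executor messages disappear and only the program's own output remains.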