CentOS ships with Python 2 by default; here I switched to Python 3. After installing Spark, you also need to set up the Python environment.
Next, let's test it. In /usr/local/spark-2.4.5-bin-hadoop2.7/bin, run:
spark-submit test.py
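If spark-submit still picks up the system's Python 2, Spark reads the PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON environment variables to decide which interpreter to launch. A minimal sketch, assuming python3 is already on your PATH:

```shell
# Tell Spark to use Python 3 for both executors and the driver
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=python3

# Optionally persist the setting for future shells
echo 'export PYSPARK_PYTHON=python3' >> ~/.bashrc
```

You can also set these in conf/spark-env.sh inside the Spark directory if you prefer keeping the configuration with the Spark install.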
The contents of test.py:
from pyspark import SparkConf, SparkContext

# Optional: raise the logging threshold to cut down console noise
# import logging
# logging.basicConfig(level=logging.ERROR, format=' %(asctime)s - %(levelname)s - %(message)s')

# Run locally with the application name 'word_count'
conf = SparkConf().setMaster('local').setAppName('word_count')
sc = SparkContext(conf=conf)

# A small in-memory dataset of space-separated words
d = ['a b c d', 'b c d e', 'c d e f']
d_rdd = sc.parallelize(d)

# Split each line into words, map each word to (word, 1),
# then sum the counts per word
rdd_res = d_rdd.flatMap(lambda x: x.split(' ')) \
               .map(lambda word: (word, 1)) \
               .reduceByKey(lambda a, b: a + b)
print(rdd_res.collect())
The result:
[('a', 1), ('b', 2), ('c', 3), ('d', 3), ('e', 2), ('f', 1)]
This shows the installation succeeded.
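As a sanity check that doesn't depend on Spark at all, the same counts can be reproduced with plain Python using only the standard library. This sketch mirrors what the RDD pipeline computes: flatMap(split) flattens the lines into words, and Counter plays the role of map to (word, 1) followed by reduceByKey:

```python
from collections import Counter

# The same toy dataset used in test.py
d = ['a b c d', 'b c d e', 'c d e f']

# Flatten lines into words, then count occurrences of each word
counts = Counter(word for line in d for word in line.split(' '))

print(sorted(counts.items()))
# → [('a', 1), ('b', 2), ('c', 3), ('d', 3), ('e', 2), ('f', 1)]
```

If the two outputs agree, the Spark job computed the expected word counts.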