When first using PySpark, a script containing import pyspark normally has to be launched with spark-submit xx.py. To run xx.py directly with the plain python interpreter instead, first install the findspark package:
pip install findspark
Then, at the very start of the script, initialize findspark so it can locate your Spark installation. A full example:
import findspark
findspark.init()  # locate SPARK_HOME and make pyspark importable

from pyspark import SparkContext

# "local" runs Spark on this machine; "count app" is the application name
sc = SparkContext("local", "count app")

words = sc.parallelize([
    "scala",
    "java",
    "hadoop",
    "spark",
    "akka",
    "spark vs hadoop",
    "pyspark",
    "pyspark and spark",
])

counts = words.count()
print("Number of elements in RDD -> %i" % counts)
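Under the hood, findspark.init() essentially discovers SPARK_HOME and puts PySpark's Python libraries on sys.path, so that import pyspark succeeds without going through spark-submit. A rough, stdlib-only sketch of that idea (the /opt/spark path and the directory layout here are illustrative assumptions, not findspark's actual implementation):

```python
import os
import sys

# Assumption: Spark is unpacked at /opt/spark; adjust to your installation.
spark_home = os.environ.get("SPARK_HOME", "/opt/spark")

# Export the variables PySpark expects at startup.
os.environ["SPARK_HOME"] = spark_home
os.environ.setdefault("PYSPARK_PYTHON", sys.executable)

# PySpark's Python package lives under $SPARK_HOME/python; prepend it so
# "import pyspark" resolves without spark-submit setting up the path.
sys.path.insert(0, os.path.join(spark_home, "python"))

print(os.environ["SPARK_HOME"])
```

In practice, if auto-detection fails you can also point findspark at Spark explicitly with findspark.init("/path/to/spark").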