The goal is as the title describes: run a PySpark script on a remote Linux machine from a local IDE.
OS: CentOS 6.8
Python: 3.5.9
Hadoop: 2.7.2
Spark: 2.1.1
How to connect PyCharm or IDEA to a remote Linux host over SSH: see here
Reference link: see here
Setting up the cluster itself is another story, not covered here.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Author: CK
# CreateTime: 2019/12/14 10:06
import os
import sys
# This step is critical; without it you get ModuleNotFoundError: No module named 'pyspark'
sys.path.append('/opt/module/spark/python')
from pyspark import SparkConf, SparkContext
# Tell Spark which Python interpreter to run workers with; I have not tested whether Python 2 works
os.environ["PYSPARK_PYTHON"] = "/usr/bin/python3"
# A small demo follows
conf = SparkConf().setMaster("local").setAppName("Test")
sc = SparkContext(conf=conf)
sc.setLogLevel("WARN")
filePath = "file:///opt/module/spark/README.md"
rdd = sc.textFile(filePath).cache()
numAs = rdd.filter(lambda line: 'a' in line).count()
print(numAs)
sc.stop()
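As a sanity check on what the demo computes: `filter(...).count()` on the RDD is the distributed analogue of plain-Python filtering, so the same logic can be reproduced locally without Spark. A minimal sketch (the sample lines are made up for illustration, not taken from README.md):

```python
# Pure-Python equivalent of rdd.filter(lambda line: 'a' in line).count().
# No Spark involved; this only illustrates the semantics of the demo.
def count_lines_containing(lines, needle):
    """Count how many lines contain `needle`, like filter(...).count()."""
    return sum(1 for line in lines if needle in line)

sample = [
    "# Apache Spark",   # contains 'a'
    "Spark is fast",    # contains 'a'
    "## Building",      # no 'a'
]
print(count_lines_containing(sample, "a"))  # → 2
```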
Run result (please ignore that I'm running as root):
ssh://root@192.168.3.55:22/usr/bin/python3 -u /root/project/sparkTest/remote.py
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/12/13 13:36:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
62
Process finished with exit code 0
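Before launching the full job, it can be handy to verify that appending the Spark python directory actually makes pyspark importable. A standard-library-only sketch (the path is the one from the demo above; on another machine the Spark install location will differ):

```python
import importlib.util
import sys

def module_available(name, extra_paths=()):
    """Return True if `name` can be imported, optionally after
    extending sys.path with extra_paths (e.g. the Spark python dir)."""
    for p in extra_paths:
        if p not in sys.path:
            sys.path.append(p)
    return importlib.util.find_spec(name) is not None

print(module_available("os"))  # stdlib, prints True
# Path assumed from the demo; adjust to your Spark installation.
print(module_available("pyspark", ["/opt/module/spark/python"]))
```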