1. Spark Overview
Spark's cluster-mode architecture
The Driver Program is the Spark application itself, and SparkContext is the entry point of every Spark application.
SparkContext manages the cluster through a Cluster Manager; the cluster contains multiple Worker Nodes, each of which runs an Executor that carries out tasks.
The Cluster Manager can run in the following modes:
- Local
- Spark Standalone Cluster, which can access HDFS or local disk directly
- Hadoop YARN
- In the cloud
2. Installing Scala
Spark itself is written in Scala, so Scala must be installed first. Download Scala 2.12:
wget https://www.scala-lang.org/files/archive/scala-2.12.9.tgz
tar xvf scala-2.12.9.tgz    (extract)
sudo mv scala-2.12.9 /usr/local/scala
Set the Scala environment variables for the current user:
sudo gedit ~/.bashrc
Add:
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
source ~/.bashrc    (apply the new environment variables)
Start Scala
Type scala at the command line to start the interactive shell.
3. Installing Spark
Download:
wget https://www-eu.apache.org/dist/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.6.tgz
Extract:
tar zxf spark-2.3.3-bin-hadoop2.6.tgz
Move it into place:
sudo mv spark-2.3.3-bin-hadoop2.6 /usr/local/spark1/
Add the environment variables:
sudo gedit ~/.bashrc
export SPARK_HOME=/usr/local/spark1
export PATH=$PATH:$SPARK_HOME/bin
source ~/.bashrc
cd /usr/local/spark1/conf/
cp spark-env.sh.template spark-env.sh
sudo gedit spark-env.sh
Add:
export SCALA_HOME=/usr/local/scala
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
SPARK_MASTER_IP=master
SPARK_LOCAL_DIRS=/usr/local/spark1
SPARK_DRIVER_MEMORY=1G
export LD_LIBRARY_PATH=/usr/local/hadoop/lib/native/:$LD_LIBRARY_PATH
(1) Start the PySpark interactive shell
pyspark    (type exit() to leave)
(2) Reduce pyspark's log output
cd /usr/local/spark1/conf
Copy the log4j template to log4j.properties:
cp log4j.properties.template log4j.properties
Edit log4j.properties:
sudo gedit log4j.properties
Change INFO to WARN.
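After this change, the root logger line in log4j.properties should read roughly as follows (the exact template contents vary between Spark versions, so treat this as an assumption to check against your own copy):

```
log4j.rootCategory=WARN, console
```

With the root level at WARN, the shell no longer prints Spark's per-task INFO messages at startup and during jobs.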
4. Creating a test text file
To test reading from HDFS with pyspark, Hadoop must be started first and a file uploaded to HDFS.
(1) Copy LICENSE.txt (creating the local directory first):
mkdir -p ~/wordcount/input
cp /usr/local/hadoop/LICENSE.txt ~/wordcount/input
(2) Start all Hadoop services:
start-all.sh
(3) Create the input directory on HDFS:
hadoop fs -mkdir -p /user/hduser/wordcount/input
(4) Change to the local data directory:
cd ~/wordcount/input
(5) Upload the text file to HDFS:
hadoop fs -copyFromLocal LICENSE.txt /user/hduser/wordcount/input
(6) List the files on HDFS:
hadoop fs -ls /user/hduser/wordcount/input
5. Running pyspark programs locally
(1) Run pyspark:
pyspark --master local[4]    (local[4] means 4 threads)
Check the current run mode:
sc.master
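As a rough illustration of the idea behind local[4] (plain Python, not how Spark actually schedules tasks), the same work can be spread across a pool of 4 threads; the data and function below are made up for the sketch:

```python
# Plain-Python sketch of the idea behind local[4]: one job's work
# split across 4 worker threads. Illustration only -- Spark's own
# local-mode scheduler is more involved than a thread pool.
from concurrent.futures import ThreadPoolExecutor

data = list(range(100))
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(lambda x: x * x, data))

print(squares[:5])  # [0, 1, 4, 9, 16]
```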
(2) Try a simple program: reading a local file
textFile = sc.textFile("file:/usr/local/spark1/README.md")
Show the number of items (here, lines):
textFile.count()
(3) Reading a file from HDFS
Prefixing the path with "hdfs://master:9000" tells Spark to read the file from HDFS:
textFile = sc.textFile("hdfs://master:9000/user/hduser/wordcount/input/LICENSE.txt")
textFile.count()
exit()
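For a text file, textFile.count() is simply the number of lines in the file. A minimal plain-Python analogue (the sample file below is hypothetical; no Spark is needed to run it):

```python
# Plain-Python analogue of sc.textFile(path).count(): for a text
# file, count() returns the number of lines. The sample file and
# its contents are made up for this sketch.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "w") as f:
    f.write("first line\nsecond line\nthird line\n")

with open(path) as f:
    line_count = sum(1 for _ in f)

print(line_count)  # 3
```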
6. Running pyspark programs on Hadoop YARN
When Spark runs on Hadoop YARN, YARN manages the resources of all machines in the cluster.
(1) Run pyspark on Hadoop YARN:
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop pyspark --master yarn --deploy-mode client
Where:
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop sets the Hadoop configuration directory
--master yarn --deploy-mode client sets the run mode to YARN client
(2) Check the run mode:
sc.master
(3) Read a file from HDFS:
textFile = sc.textFile("hdfs://master:9000/user/hduser/wordcount/input/LICENSE.txt")
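The HDFS directory above is named wordcount because LICENSE.txt is the usual input for the classic word-count example. The same flatMap / map / reduceByKey logic can be sketched in plain Python (the input lines are a made-up sample, not LICENSE.txt):

```python
# Plain-Python sketch of the classic Spark word count:
#   textFile.flatMap(lambda line: line.split()) \
#           .map(lambda w: (w, 1)) \
#           .reduceByKey(lambda a, b: a + b)
# The input lines are a made-up sample for illustration.
from collections import Counter

lines = ["to be or not to be", "that is the question"]
words = [w for line in lines for w in line.split()]  # flatMap: lines -> words
counts = Counter(words)                              # map + reduceByKey

print(counts["to"], counts["be"])  # 2 2
```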