Spark Installation

Environment Setup

1) Spark On Yarn

Hadoop Environment

① Set the CentOS process and open-file limits (optional)

vim /etc/security/limits.conf

* soft nofile 204800
* hard nofile 204800
* soft nproc 204800
* hard nproc 204800

Raising these maximums tunes Linux for heavier workloads; reboot CentOS for the change to take effect. A quick check is shown below.
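After logging back in, the new limits can be confirmed like this (a minimal check, not part of the original steps):

ulimit -n   # open-file limit, should print 204800
ulimit -u   # max user processes, should print 204800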

② Configure the hostname (takes effect after reboot)

vim /etc/hostname

zly

③ Configure the IP mapping

vim /etc/hosts

192.168.118.155 zly
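A quick sanity check that the mapping resolves (assuming the machine's IP really is 192.168.118.155):

ping -c 1 zly   # should answer from 192.168.118.155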

④ Firewall service

# Stop the service temporarily
systemctl stop firewalld
# Check the status ("not running" means it is stopped)
firewall-cmd --state
# Disable automatic start on boot
systemctl disable firewalld
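To double-check that the firewall will stay off across reboots (a small verification sketch):

systemctl is-enabled firewalld   # should print "disabled"
systemctl is-active firewalld    # should print "inactive"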

The following steps are omitted here; see the earlier articles for details.
⑤ Install JDK 1.8+
⑥ Configure passwordless SSH
⑦ Configure HDFS | YARN
⑧ Configure the Hadoop environment variables
⑨ Start the Hadoop services

# Format the NameNode before the very first startup
hdfs namenode -format
start-dfs.sh
start-yarn.sh
# Check that the daemons started
jps
122690 NodeManager
122374 SecondaryNameNode
122201 DataNode
122539 ResourceManager
122058 NameNode
123036 Jps

Alternatively, visit the HDFS web UI at ip:50070 and the YARN web UI at ip:8088.
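The UIs can also be probed from the shell (a sketch; replace zly with your own host):

curl -s -o /dev/null -w "%{http_code}\n" http://zly:50070   # HDFS NameNode UI, expect 200
curl -s -o /dev/null -w "%{http_code}\n" http://zly:8088    # YARN ResourceManager UI, expect 200 or a 302 redirect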

Spark Environment

① Download spark-2.4.5-bin-without-hadoop.tgz, extract it into the /usr/soft directory, and rename the Spark directory to spark-2.4.5

tar -zxvf spark-2.4.5-bin-without-hadoop.tgz -C /usr/soft/
mv /usr/soft/spark-2.4.5-bin-without-hadoop/ /usr/soft/spark-2.4.5
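Optionally (not part of the original steps), put Spark on the PATH so the bin/ and sbin/ scripts can be invoked from anywhere:

# Append to /etc/profile (or ~/.bashrc), then source the file
export SPARK_HOME=/usr/soft/spark-2.4.5
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin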

② Configure the Spark service by editing the spark-env.sh and spark-defaults.conf files.

cd /usr/soft/spark-2.4.5/
mv conf/spark-env.sh.template conf/spark-env.sh
vim conf/spark-env.sh

spark-env.sh

# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)

HADOOP_CONF_DIR=/usr/soft/hadoop-2.9.2/etc/hadoop
YARN_CONF_DIR=/usr/soft/hadoop-2.9.2/etc/hadoop
SPARK_EXECUTOR_CORES=2
SPARK_EXECUTOR_MEMORY=1G
SPARK_DRIVER_MEMORY=1G
LD_LIBRARY_PATH=/usr/soft/hadoop-2.9.2/lib/native
export HADOOP_CONF_DIR
export YARN_CONF_DIR
export SPARK_EXECUTOR_CORES
export SPARK_DRIVER_MEMORY
export SPARK_EXECUTOR_MEMORY
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH=$(hadoop classpath):$SPARK_DIST_CLASSPATH
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"


# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_HOST, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
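Because the -bin-without-hadoop build ships no Hadoop jars, Spark depends on SPARK_DIST_CLASSPATH resolving at startup; a quick sanity check (assuming hadoop is on the PATH):

hadoop classpath   # should print the Hadoop configuration and jar directories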

spark-defaults.conf

mv conf/spark-defaults.conf.template conf/spark-defaults.conf
vim conf/spark-defaults.conf 
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///spark-logs

Note: first create the spark-logs directory on HDFS; the Spark history server stores its historical job data there.
hdfs dfs -mkdir /spark-logs
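To verify the directory was created (sketch):

hdfs dfs -ls /   # spark-logs should appear in the listing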

③ Start the Spark history server

./sbin/start-history-server.sh
jps

1824 SecondaryNameNode
2768 Jps
1653 DataNode
2726 HistoryServer
2089 NodeManager
1980 ResourceManager
1517 NameNode

Visit ip:18080

④ Test the environment

./bin/spark-submit --master yarn --deploy-mode client --class org.apache.spark.examples.SparkPi --num-executors 2 --executor-cores 3 /usr/soft/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar
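In client mode the result appears on the driver's stdout (a line like "Pi is roughly 3.14..."); the run can also be confirmed against YARN (a sketch, assuming yarn is on the PATH):

yarn application -list -appStates FINISHED   # the Spark Pi application should be listed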


2) Spark Standalone

Hadoop Environment

Identical to the setup in 1) Spark On Yarn, so it is omitted here.
The YARN-related configuration can be removed, since standalone mode does not use it.

Spark Environment

① Edit spark-env.sh (spark-defaults.conf stays unchanged)

./sbin/stop-history-server.sh
vim conf/spark-env.sh 
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_DAEMON_CLASSPATH, to set the classpath for all daemons
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

# On Standalone

SPARK_MASTER_HOST=zly
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=4
SPARK_WORKER_INSTANCES=2
SPARK_WORKER_MEMORY=2g
export SPARK_MASTER_HOST
export SPARK_MASTER_PORT
export SPARK_WORKER_CORES
export SPARK_WORKER_MEMORY
export SPARK_WORKER_INSTANCES
export LD_LIBRARY_PATH=/usr/soft/hadoop-2.9.2/lib/native
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"

② First create the spark-logs directory on HDFS for the Spark history server to store its historical job data (skip this if the directory already exists from section 1).

hdfs dfs -mkdir /spark-logs

③ Start the Spark history server

./sbin/start-history-server.sh
jps
 
1824 SecondaryNameNode
3938 Jps
3908 HistoryServer
1653 DataNode
1517 NameNode

Visit ip:18080

④ Start Spark's own compute services (the master and workers)

./sbin/start-all.sh
jps

1824 SecondaryNameNode
4387 Master
3908 HistoryServer
4484 Worker
1653 DataNode
4521 Worker
1517 NameNode
4605 Jps

Visit ip:8080
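The matching shutdown script stops everything started here, for when the cluster is no longer needed (sketch):

./sbin/stop-all.sh   # stops the master and all workers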
⑤ Test the environment

./bin/spark-submit --master spark://zly:7077 --deploy-mode client --class org.apache.spark.examples.SparkPi --total-executor-cores 6  /usr/soft/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar
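An interactive alternative for smoke-testing the standalone cluster (a sketch, not part of the original):

./bin/spark-shell --master spark://zly:7077 --total-executor-cores 2
# Inside the shell:
#   scala> sc.parallelize(1 to 100).sum   // should return 5050.0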

