Installing Scala
1. Go to the directory containing the Scala package and extract it:
cd /opt/packages
tar -zxvf scala-2.11.8.tgz -C /opt/programs/
2. Configure the environment variables:
vim /etc/profile
# append the following at the end of the file:
export SCALA_HOME=/opt/programs/scala-2.11.8
export PATH=$PATH:$SCALA_HOME/bin
source /etc/profile
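To confirm the variables took effect in the current shell, an optional quick check:
echo $SCALA_HOME
# should print /opt/programs/scala-2.11.8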
3. Verify the installation:
scala -version
If the output reads "Scala code runner version 2.11.8 -- Copyright 2002-2016, LAMP/EPFL", the installation succeeded.
4. Enter the Scala REPL:
scala
5. Exit the REPL:
:quit
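Inside the REPL you can evaluate Scala expressions interactively; a minimal illustrative session:
List(1, 2, 3).map(_ * 2)   // returns List(2, 4, 6)
util.Properties.versionNumberString   // returns "2.11.8"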
Installing Spark
Installing Spark in pseudo-distributed mode
1. Go to the directory containing the Spark package and extract it:
cd /opt/packages
tar -zxvf spark-2.3.3-bin-hadoop2.7.tgz -C /opt/programs/
2. In the conf folder under the Spark installation directory, copy spark-env.sh.template to spark-env.sh, then edit spark-env.sh:
cd /opt/programs/spark-2.3.3-bin-hadoop2.7/conf
cp spark-env.sh.template spark-env.sh
vim spark-env.sh
# append the following at the end of the file:
export JAVA_HOME=/opt/programs/jdk1.8.0_144
export SCALA_HOME=/opt/programs/scala-2.11.8
export HADOOP_HOME=/opt/programs/hadoop-2.7.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_MASTER_HOST=hadoop0
3. Copy slaves.template to slaves, then edit the slaves file:
cp slaves.template slaves
vim slaves
# replace localhost in the file with:
hadoop0
4. Start Spark (the Hadoop services must already be running):
cd /opt/programs/spark-2.3.3-bin-hadoop2.7
sbin/start-all.sh
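If startup succeeded, the standalone daemons should show up in the JVM process list; a quick check (process IDs will vary):
jps
# expect a Master and a Worker entry alongside the Hadoop daemons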
5. Stop Spark:
cd /opt/programs/spark-2.3.3-bin-hadoop2.7
sbin/stop-all.sh
6. Enter the Spark shell:
cd /opt/programs/spark-2.3.3-bin-hadoop2.7
bin/spark-shell
To exit:
:quit
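Inside the Spark shell a SparkContext is already bound to sc; a minimal sanity check, purely illustrative:
sc.parallelize(1 to 100).reduce(_ + _)   // sums 1..100 as a distributed job; returns 5050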
7. Open http://hadoop0:8080/ in a browser to view the status of the Spark cluster. Open http://hadoop0:4040/ to view running Spark jobs (this UI is available only while an application such as the Spark shell is running).
Installing Spark in fully distributed mode
1. On the hadoop01 node, go to the directory containing the Spark package and extract it:
cd /opt/packages
tar -zxvf spark-2.3.3-bin-hadoop2.7.tgz -C /opt/programs/
2. In the conf folder under the Spark installation directory, copy spark-env.sh.template to spark-env.sh, then edit spark-env.sh:
cd /opt/programs/spark-2.3.3-bin-hadoop2.7/conf
cp spark-env.sh.template spark-env.sh
vim spark-env.sh
# append the following at the end of the file:
export JAVA_HOME=/opt/programs/jdk1.8.0_144
export SCALA_HOME=/opt/programs/scala-2.11.8
export HADOOP_HOME=/opt/programs/hadoop-2.7.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_MASTER_HOST=hadoop01
3. Copy slaves.template to slaves, then edit the slaves file:
cp slaves.template slaves
vim slaves
# replace localhost in the file with the worker hostnames:
hadoop02
hadoop03
4. Copy the entire Spark installation directory from the hadoop01 node to the hadoop02 and hadoop03 nodes:
scp -r /opt/programs/spark-2.3.3-bin-hadoop2.7 root@hadoop02:/opt/programs/
scp -r /opt/programs/spark-2.3.3-bin-hadoop2.7 root@hadoop03:/opt/programs/
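If the copy succeeded, the directory should now exist on both workers; an optional check from hadoop01:
ssh root@hadoop02 "ls /opt/programs | grep spark"
ssh root@hadoop03 "ls /opt/programs | grep spark"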
5. Start Spark (run on hadoop01; the ZooKeeper and Hadoop services must already be running):
cd /opt/programs/spark-2.3.3-bin-hadoop2.7
sbin/start-all.sh
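If startup succeeded, the process list on each node should reflect its role (process IDs will vary):
jps
# hadoop01 should show a Master entry; hadoop02 and hadoop03 should each show a Worker entry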
6. Stop Spark (run on hadoop01):
cd /opt/programs/spark-2.3.3-bin-hadoop2.7
sbin/stop-all.sh
7. Enter the Spark shell (run on hadoop01):
cd /opt/programs/spark-2.3.3-bin-hadoop2.7
bin/spark-shell
To exit:
:quit
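As in the pseudo-distributed case, sc is available inside the shell. A sketch of a word count over a file in HDFS; the NameNode address and input path are hypothetical, so adjust them to your cluster:
sc.textFile("hdfs://hadoop01:9000/input/words.txt")   // hypothetical path
  .flatMap(_.split(" "))   // split each line into words
  .map(word => (word, 1))
  .reduceByKey(_ + _)      // count occurrences per word
  .collect()
  .foreach(println)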
8. Open http://hadoop01:8080/ in a browser to view the status of the Spark cluster. Open http://hadoop01:4040/ to view running Spark jobs (this UI is available only while an application such as the Spark shell is running).