Setting Up a Fully Distributed Spark Environment on Hadoop
Configuring Scala
1) Download the Scala package scala-2.11.4.tgz, unpack it, and move it to /usr/local/scala so it matches the SCALA_HOME set below:
tar -xzvf scala-2.11.4.tgz
mv scala-2.11.4 /usr/local/scala
2) Add the Scala environment variables to ~/.bashrc:
export SCALA_HOME=/usr/local/scala
export PATH=$SCALA_HOME/bin:$PATH
3) Verify that Scala works:
scala -version
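If the PATH was set correctly (remember to source ~/.bashrc first), the version banner prints. For a slightly stronger check, a one-line expression can be evaluated directly (a minimal smoke test; the -e flag hands the Scala runner a snippet to execute):
scala -e 'println("Scala OK, 1 + 1 = " + (1 + 1))'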
Configuring Spark
Download the binary package spark-2.2.0-bin-hadoop2.7.tgz from http://spark.apache.org/downloads.html (2.2.0 was the latest release at the time of writing).
Steps
Unpack the archive:
tar -xzvf spark-2.2.0-bin-hadoop2.7.tgz
Rename the directory:
mv spark-2.2.0-bin-hadoop2.7 spark
Configure the environment variables:
vi ~/.bashrc
Add the following:
export SPARK_HOME=/usr/local/spark
export PATH=$SPARK_HOME/bin:$PATH
Save, then run:
source ~/.bashrc
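A quick way to confirm the PATH change took effect (just a sanity check; the path assumes the layout above):
which spark-shell
# expected: /usr/local/spark/bin/spark-shell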
Run spark-shell to see whether the configuration works.

Next, go into the conf folder and copy spark-env.sh.template to spark-env.sh:
cp spark-env.sh.template spark-env.sh
Then add the following to it:
export JAVA_HOME=/usr/local/java
export SCALA_HOME=/usr/local/scala
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_MASTER_HOST=master
export SPARK_LOCAL_IP=192.168.1.151
export SPARK_WORKER_MEMORY=800m
export SPARK_WORKER_CORES=1
export SPARK_HOME=/usr/local/spark
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
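Instead of just watching spark-shell start, a tiny job can be piped into it as a smoke test (a sketch; sc is the SparkContext that spark-shell creates automatically, and --master local keeps the test independent of the cluster):
echo 'println(sc.parallelize(1 to 100).sum())' | spark-shell --master local
# the job should print 5050.0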
Copy slaves.template to slaves:
cp slaves.template slaves
Then edit $SPARK_HOME/conf/slaves and add the following, one host per line:
master
slave1
slave2
Copy the configured spark directory and the .bashrc file to the slave1 and slave2 nodes:
scp -r /usr/local/spark slave1:/usr/local
scp -r /usr/local/spark slave2:/usr/local
scp ~/.bashrc slave1:~/
scp ~/.bashrc slave2:~/
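With more workers this gets repetitive; a short loop does the same job (a sketch, assuming passwordless SSH between the nodes is already configured, as it normally is for a Hadoop cluster):
for node in slave1 slave2; do
  scp -r /usr/local/spark "$node":/usr/local
  scp ~/.bashrc "$node":~/
done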
Finally, on each node run:
source ~/.bashrc
On slave1 and slave2, edit $SPARK_HOME/conf/spark-env.sh and change
export SPARK_LOCAL_IP=192.168.1.151
to the IP address of the corresponding node.

Start the cluster on the master node:
sbin/start-all.sh
Use jps to check whether the cluster started successfully. Compared with the Hadoop processes already running, master gains a Master process (and, since master is also listed in the slaves file above, a Worker), while slave1 and slave2 each gain a Worker process.
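To check all three nodes from the master in one pass (a sketch, again assuming passwordless SSH):
for node in master slave1 slave2; do
  echo "== $node =="
  ssh "$node" jps
done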
From a browser, visit http://master:8080/; if the Spark master web UI comes up and lists the workers, the cluster was set up successfully.
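As a final end-to-end check, a test job can be submitted to the standalone master (a sketch; SparkPi ships in the distribution's examples jar, whose exact file name depends on the build, here the Spark 2.2.0 / Scala 2.11 binary package):
spark-submit --class org.apache.spark.examples.SparkPi \
  --master spark://master:7077 \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.2.0.jar 100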