Spark Learning 1 -- Building a Spark 2.0 Cluster on CentOS 7.2 with Hadoop 2.7.3

    Unless a step names a specific server, it is performed identically on all 3 servers.

  1. Preparation

      1.1 Three servers running CentOS 7

       master    192.168.174.132

       node1     192.168.174.133

       node2     192.168.174.134

      1.2 Set up the Hadoop cluster environment

       See "Hadoop Learning 1 -- Building a Hadoop 2.7.3 Cluster Environment".

      1.3 Prepare the installation packages

       scala-2.11.8.tgz

       spark-2.0.0-bin-hadoop2.7.tgz

      Then upload them to the /soft directory on all 3 servers.
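      A minimal sketch of the upload from a workstation with SSH access to the 3 servers (the IPs come from the table in 1.1; the loop itself is an assumption, adjust to your environment):

for host in 192.168.174.132 192.168.174.133 192.168.174.134; do
    ssh root@$host "mkdir -p /soft"                                        # create the target directory
    scp scala-2.11.8.tgz spark-2.0.0-bin-hadoop2.7.tgz root@$host:/soft/   # copy both packages
done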

   2. Install Scala

   2.1 Extract

[root@node1 soft]# mkdir -p /soft/scala
[root@node1 soft]# tar -zxvf scala-2.11.8.tgz -C /soft/scala

  2.2 Configure environment variables

[root@node1 soft]# echo -e "export SCALA_HOME=/soft/scala/scala-2.11.8" >> /etc/profile
[root@node1 soft]# echo -e "export PATH=$PATH:$SCALA_HOME/bin" >> /etc/profile

 3. Install Spark

  3.1 Extract

[root@node1 soft]# mkdir -p /soft/spark
[root@node1 soft]# tar -zxvf spark-2.0.0-bin-hadoop2.7.tgz -C /soft/spark/

 3.2 Configure environment variables

[root@node1 soft]# echo -e "export SPARK_HOME=/soft/spark/spark-2.0.0-bin-hadoop2.7" >> /etc/profile
[root@node1 soft]# echo -e "export PATH=$PATH:$SPARK_HOME/bin" >> /etc/profile

3.3 Configure the cluster

   Add node1 and node2 to slaves:

[root@node1 spark-2.0.0-bin-hadoop2.7]# cd /soft/spark/spark-2.0.0-bin-hadoop2.7/conf/
[root@node1 conf]# cp slaves.template slaves
[root@node1 conf]# echo -e "node1\nnode2" > slaves
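A quick check that slaves now lists only the two workers:

cat /soft/spark/spark-2.0.0-bin-hadoop2.7/conf/slaves    # should print node1 and node2, one per line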

 Add SCALA_HOME and JAVA_HOME to spark-env.sh:

[root@master conf]# cp spark-env.sh.template spark-env.sh
[root@master conf]# echo -e "export SCALA_HOME=/soft/scala/scala-2.11.8" >> spark-env.sh

[root@master conf]# echo -e "JAVA_HOME=/soft/java/jdk1.7.0_79" >> spark-env.sh


Make the environment variables take effect:

source /etc/profile
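
A quick sanity check after sourcing (expected results, assuming the steps above succeeded):

scala -version       # should report Scala 2.11.8
echo $SPARK_HOME     # should print /soft/spark/spark-2.0.0-bin-hadoop2.7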

4. Startup

Start the Hadoop cluster (run on master):

[root@master conf]# $HADOOP_HOME/sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /soft/hadoop/hadoop-2.7.3/logs/hadoop-root-namenode-master.out
node1: starting datanode, logging to /soft/hadoop/hadoop-2.7.3/logs/hadoop-root-datanode-node1.out
node2: starting datanode, logging to /soft/hadoop/hadoop-2.7.3/logs/hadoop-root-datanode-node2.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /soft/hadoop/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /soft/hadoop/hadoop-2.7.3/logs/yarn-root-resourcemanager-master.out
node1: starting nodemanager, logging to /soft/hadoop/hadoop-2.7.3/logs/yarn-root-nodemanager-node1.out
node2: starting nodemanager, logging to /soft/hadoop/hadoop-2.7.3/logs/yarn-root-nodemanager-node2.out
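
jps can be used to confirm the Hadoop daemons came up (an illustrative check; exact process IDs will differ):

jps    # on master: NameNode, SecondaryNameNode, ResourceManager
       # on node1/node2: DataNode, NodeManager
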
Start the Spark cluster (run on master):

[root@master conf]# $SPARK_HOME/sbin/start-all.sh
org.apache.spark.deploy.master.Master running as process 3075.  Stop it first.
node2: starting org.apache.spark.deploy.worker.Worker, logging to /soft/spark/spark-2.0.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-node2.out
node1: starting org.apache.spark.deploy.worker.Worker, logging to /soft/spark/spark-2.0.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-node1.out
[root@master conf]# 
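The "running as process 3075" line above just means a Master from an earlier attempt was still alive, so the script only started the two Workers. After startup, jps should additionally show:

jps    # on master: Master (alongside the Hadoop daemons)
       # on node1/node2: Worker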

5. Verify

 Open the master's web UI on port 8080, e.g. http://192.168.174.132:8080/. Both workers should be listed there with state ALIVE.
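
As a further check, the bundled SparkPi example can be submitted to the standalone master (a sketch assuming the master URL shown at the top of the 8080 page is spark://master:7077 and using the examples jar shipped with this Spark build):

spark-submit --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.0.0.jar 10
# the driver output should end with a line like "Pi is roughly 3.14..."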









