Spark Deployment (Cluster)

Prerequisite: a working ZooKeeper deployment.

Download the Spark tarball (requires at least JDK 1.7).

● Edit the configuration

[root@localhost conf]# pwd
/usr/local/apps/spark-2.1.3-bin-hadoop2.7/conf
[root@localhost conf]# cp slaves.template ./slaves
[root@localhost conf]# vi slaves
# Add worker nodes (hostnames) here. The file defaults to a single localhost entry;
# I only have one VM, so that entry is kept as-is and nothing more is needed.
# For multiple nodes, list each node's IP or hostname, one per line.
localhost
[root@localhost conf]# cp spark-env.sh.template spark-env.sh
[root@localhost conf]# vi spark-env.sh
#Add the following lines
export JAVA_HOME=/usr/java/jdk1.7.0_79
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=localhost:2181 -Dspark.deploy.zookeeper.dir=/spark"
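For a real multi-node cluster, the same two files carry the worker list and the HA settings. A minimal sketch (the hostnames `node1`..`node3` and the ZooKeeper quorum address are placeholders, not from this setup):

```shell
# conf/slaves — one worker host per line (hypothetical hostnames)
node1
node2
node3

# conf/spark-env.sh — every node points at the same ZooKeeper quorum
export JAVA_HOME=/usr/java/jdk1.7.0_79
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=node1:2181,node2:2181,node3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"
```

With `recoveryMode=ZOOKEEPER` you can also run `start-master.sh` on a second machine as a standby master; ZooKeeper then handles leader election and failover.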

● Start the master

[root@localhost spark-2.1.3-bin-hadoop2.7]# ./sbin/start-master.sh 
starting org.apache.spark.deploy.master.Master, logging to /usr/local/apps/spark-2.1.3-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-localhost.localdomain.out
[root@localhost spark-2.1.3-bin-hadoop2.7]# jps
14263 Launcher
20167 ZooKeeperMain
20661 Master
19231 QuorumPeerMain
20745 Jps
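The `jps` listing above can also be checked from a script rather than by eye. A small sketch (the `is_running` helper and the canned `jps_output` sample are my own illustration, not part of Spark):

```shell
# is_running "<jps output>" <ProcessName> -> exit 0 if the process appears
is_running() {
  printf '%s\n' "$1" | grep -qw "$2"
}

# sample output, taken from the session above
jps_output="20661 Master
19231 QuorumPeerMain"

if is_running "$jps_output" Master; then
  echo "Master is up"
fi
```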

● Start the master and workers

#This prompts for a password; setting up passwordless SSH login is recommended
[root@localhost sbin]# ./start-all.sh 
org.apache.spark.deploy.master.Master running as process 21210.  Stop it first.
root@localhost's password: 
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/apps/spark-2.1.3-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
[root@localhost sbin]# jps
14263 Launcher
20167 ZooKeeperMain
21210 Master
21374 Worker
19231 QuorumPeerMain
21441 Jps 
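To get rid of the password prompt from `start-all.sh`, set up key-based SSH from the master to each worker. A minimal sketch, run as the same user that starts Spark (`root@localhost` matches the single-VM setup above):

```shell
# create an RSA key pair without a passphrase, if none exists yet
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q

# then push the public key to every worker listed in conf/slaves, e.g.:
#   ssh-copy-id root@localhost
# after that, start-all.sh launches the workers without prompting
```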

● Check in a browser

http://192.168.x.xx:8080/