Spark 1.6.0 standalone HA mode

  1. Download ZooKeeper; version 3.5.8 is used here
  2. Extract it: tar -xvf apache-zookeeper-3.5.8-bin.tar.gz
  3. Copy it to /usr/local: sudo cp -r apache-zookeeper-3.5.8-bin /usr/local/zookeeper
  4. Change the directory's ownership to the hadoop user: sudo chown -R hadoop:users /usr/local/zookeeper/
  5. Configure .bashrc:
ZOOKEEPER_HOME=/usr/local/zookeeper
PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$ZOOKEEPER_HOME/bin
export ZOOKEEPER_HOME
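The new variables only take effect once the profile is reloaded. A minimal sketch of applying and verifying them in the current shell (same paths as configured above; `source ~/.bashrc` has the equivalent effect after editing the file):

```shell
# Apply the ZooKeeper variables to this shell session.
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH="$PATH:$ZOOKEEPER_HOME/bin"
# Confirm the ZooKeeper bin directory is now on PATH.
echo "$PATH" | tr ':' '\n' | grep zookeeper
```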
  6. Configure zoo.cfg:
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/logs
server.0=master:2888:3888
server.1=slave1:2888:3888
server.2=slave2:2888:3888
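In each server.N line, port 2888 is used for quorum peer connections and 3888 for leader election. The three server lines alone are not a complete zoo.cfg; a working file also needs timing and client-port settings. A sketch assuming the common defaults (tickTime 2000 ms, client port 2181, which matches the status output later in this post):

```properties
# Assumed defaults; only the dataDir/dataLogDir/server lines come from this setup.
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/logs
server.0=master:2888:3888
server.1=slave1:2888:3888
server.2=slave2:2888:3888
```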
  7. Sync the ZooKeeper directory and .bashrc to the slave nodes
hadoop@master:~> scp -r /usr/local/zookeeper/ root@slave1:/usr/local
hadoop@master:~> scp -r /usr/local/zookeeper/ root@slave2:/usr/local
hadoop@master:~> scp ~/.bashrc hadoop@slave1:/home/hadoop
hadoop@master:~> scp ~/.bashrc hadoop@slave2:/home/hadoop
  8. Fix ownership of the ZooKeeper directory on the slave nodes
hadoop@slave1:~> sudo chown -R hadoop:users /usr/local/zookeeper/
hadoop@slave2:~> sudo chown -R hadoop:users /usr/local/zookeeper/
  9. Configure myid on each node
hadoop@master:~> mkdir /usr/local/zookeeper/data
hadoop@master:~> echo 0 >/usr/local/zookeeper/data/myid

hadoop@slave1:~> mkdir /usr/local/zookeeper/data
hadoop@slave1:~> echo 1 >/usr/local/zookeeper/data/myid

hadoop@slave2:~> mkdir /usr/local/zookeeper/data
hadoop@slave2:~> echo 2 >/usr/local/zookeeper/data/myid
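Each node's myid must match its server.N index in zoo.cfg (0 on master, 1 on slave1, 2 on slave2). The per-node commands above can be sketched as one parameterized snippet; ZK_DATA and MYID are illustrative variables set per host:

```shell
# Hypothetical per-host sketch: create the data dir and write this node's id.
# On the real nodes, ZK_DATA would be /usr/local/zookeeper/data; MYID must
# equal N from this host's server.N line in zoo.cfg.
ZK_DATA=${ZK_DATA:-./zk-demo/data}
MYID=${MYID:-0}
mkdir -p "$ZK_DATA"
echo "$MYID" > "$ZK_DATA/myid"
cat "$ZK_DATA/myid"
```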
  10. Start the ZooKeeper service on each node
hadoop@master:~> /usr/local/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

hadoop@slave1:~> /usr/local/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

hadoop@slave2:~> /usr/local/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
  11. Check ZooKeeper status
hadoop@master:/usr/local/spark/conf> /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

hadoop@slave1:~> /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader

hadoop@slave2:~> /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
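One node reports leader and the other two follower; with a three-node ensemble, the quorum survives a single node failure. A small sketch of pulling the role out of that status output (the status_output string here is a stand-in for a real `zkServer.sh status` capture):

```shell
# Stand-in for: status_output=$(/usr/local/zookeeper/bin/zkServer.sh status 2>&1)
status_output="ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower"
# Extract the quorum role (leader/follower) from the Mode line.
mode=$(printf '%s\n' "$status_output" | awk -F': ' '/^Mode:/{print $2}')
echo "$mode"
```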
  12. Update the master settings in spark-env.sh (comment out SPARK_MASTER_IP and enable ZooKeeper-based recovery)
#export SPARK_MASTER_IP=master
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=master:2181,slave1:2181,slave2:2181 -Dspark.deploy.zookeeper.dir=/spark"
  13. Sync spark-env.sh to the slave nodes
hadoop@master:/usr/local/spark/conf> scp /usr/local/spark/conf/spark-env.sh hadoop@slave1:/usr/local/spark/conf/
spark-env.sh                                                                                                                                                       100% 4645     4.5KB/s   00:00
hadoop@master:/usr/local/spark/conf> scp /usr/local/spark/conf/spark-env.sh hadoop@slave2:/usr/local/spark/conf/
spark-env.sh                                                                                                                                                       100% 4645     4.5KB/s   00:00
  14. Start the Spark cluster: the master node starts normally; on the slave nodes, start the Master process manually
hadoop@master:~> /usr/local/spark/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master.out
slave2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave2.out
slave1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-slave1.out

hadoop@slave1:~> /usr/local/spark/sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-slave1.out

hadoop@slave2:~> /usr/local/spark/sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-slave2.out
  15. Check each Master process in a browser: the master node shows ALIVE, the others STANDBY
    Because port 8080 is occupied by the ZooKeeper AdminServer, the Spark web UI falls back to port 8081 when start-all.sh runs; on the slave nodes, 8081 is taken by the Worker process, so their Masters run on port 8082
http://master:8081/
http://slave1:8082/
http://slave2:8082/
  16. Test that HA failover works
    Start spark-shell: hadoop@master:~> spark-shell --master spark://master:7077,slave1:7077,slave2:7077
    Kill the Master process on the master node; the Master on one of the slave nodes should switch from STANDBY to ALIVE
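The kill step can be sketched as below; the jps_output string (and its PIDs) is a hypothetical stand-in for running `jps` on the master node, which lists the local JVM processes including the Spark Master:

```shell
# Stand-in for: jps_output=$(jps)
jps_output="20481 Master
20652 SparkSubmit
19873 QuorumPeerMain"
# Pick out the Spark Master's PID; killing it triggers the HA failover.
master_pid=$(printf '%s\n' "$jps_output" | awk '$2=="Master"{print $1}')
echo "$master_pid"
# On the real node: kill -9 "$master_pid"
```

After the kill, reload the slave nodes' web UIs; within the recovery interval one STANDBY Master becomes ALIVE and the running spark-shell reconnects to it.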