- Standalone-HA (High Availability)
- Principle:
- A single Master is a single point of failure: once it goes down, the cluster can no longer be used.
- HA therefore adds a second Master. With two Masters running, ZooKeeper decides through leader election which one is the active Master (a quick way to check which one is active is sketched below).
-
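- Each Master's web UI (port 8080) shows its role. A minimal check sketch, assuming the Master hosts are node1 and node2 as in the cluster below:
curl http://node1:8080/json/
curl http://node2:8080/json/
# the active Master reports "status" : "ALIVE", the backup reports "STANDBY"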
- Steps:
- 1. Stop the Spark cluster.
- 2. On node1, configure:
Edit spark-env.sh and comment out (or delete) the SPARK_MASTER_HOST entry:
vim /opt/spark-3.1.1/conf/spark-env.sh
# SPARK_MASTER_HOST=node1
Add the following configuration:
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node1:2181,node2:2181,node3:2181 -Dspark.deploy.zookeeper.dir=/spark-ha"
-Dspark.deploy.recoveryMode=ZOOKEEPER: use ZooKeeper as the recovery mode
-Dspark.deploy.zookeeper.url=node1:2181,node2:2181,node3:2181: the addresses of the ZooKeeper quorum
-Dspark.deploy.zookeeper.dir=/spark-ha: the ZooKeeper directory (znode) where HA state is stored; it is created automatically
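- To verify the ZooKeeper side later, you can check that the configured znode appears once a Master has registered. A minimal sketch using the ZooKeeper CLI (the zkCli.sh location depends on your ZooKeeper installation):
# connect to any node of the quorum
zkCli.sh -server node1:2181
# inside the ZooKeeper shell: list the HA directory configured above
ls /spark-ha
# child znodes for leader election and persisted master state should appear here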
- Distribute the changed file to the other nodes (the destination must be the same conf directory as on node1):
- scp -r /opt/spark-3.1.1/conf/spark-env.sh root@node2:/opt/spark-3.1.1/conf/
- scp -r /opt/spark-3.1.1/conf/spark-env.sh root@node3:/opt/spark-3.1.1/conf/
- Test (a full command sketch follows this list):
- Start ZooKeeper on all nodes
- jps (QuorumPeerMain should be running)
- Start the Spark cluster
- jps (Master and Worker processes should be running)
- Start an additional standby Master on node2 for HA
- Simulate a crash of the active Master
- Check the web UI again: the Master on node2 should now be ALIVE
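- A minimal command sketch for the test steps above, assuming zkServer.sh is on the PATH and Spark is installed under /opt/spark (adjust paths and hostnames to your cluster):
# on node1, node2 and node3: start ZooKeeper, then confirm with jps (QuorumPeerMain)
zkServer.sh start
jps
# on node1: start the Spark standalone cluster (Master plus Workers), then confirm with jps
/opt/spark/sbin/start-all.sh
jps
# on node2: start a second Master; it registers with ZooKeeper as STANDBY
/opt/spark/sbin/start-master.sh
# on node1: stop the active Master to simulate a crash
/opt/spark/sbin/stop-master.sh
# after the ZooKeeper timeout, http://node2:8080 should show Status: ALIVE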
- Then run the WordCount test again:
- /opt/spark/bin/spark-shell --master spark://node1:7077,node2:7077
- val textFile = sc.textFile("hdfs://node1:8020/wordcount/input/words.txt")  // use port 9000 instead of 8020 if that is how your HDFS NameNode is configured
- val counts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
- counts.collect
- counts.saveAsTextFile("hdfs://node1:8020/wordcount/output48")
- The result can be viewed under /wordcount/output48 on HDFS
- While the job runs, the Spark application web UI is available on port 4040
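- To inspect the WordCount result from the command line, a minimal sketch using the HDFS CLI (output path as used above):
# list the files written by saveAsTextFile
hdfs dfs -ls /wordcount/output48
# print the word counts from all part files
hdfs dfs -cat /wordcount/output48/part-*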