XII. Spark HA Cluster Deployment

1. Modify the spark-env.sh configuration file

Change into the directory containing spark-env.sh and open it for editing:

    vi spark-env.sh

Comment out the original fixed-master line with a #:

    #export SPARK_MASTER_HOST=hadoop01

Then add the ZooKeeper-based recovery settings:

    export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop01:2181,hadoop02:
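The edit above can be scripted. Below is a minimal sketch that performs the same two changes on a scratch copy of spark-env.sh; the ZooKeeper quorum string (hadoop02's port in particular) is an assumption based on the hadoop01:2181 pattern, and GNU sed's -i flag is assumed. Adjust hosts and ports to your cluster.

```shell
#!/bin/sh
# Sketch: apply the Spark HA edit to a scratch spark-env.sh.
conf="$(mktemp -d)/spark-env.sh"

# Simulate the original file, which pins a single fixed Master.
echo 'export SPARK_MASTER_HOST=hadoop01' > "$conf"

# Step 1: comment out the fixed-master line (standalone HA lets
# ZooKeeper elect the active Master instead).
sed -i 's/^export SPARK_MASTER_HOST=/#&/' "$conf"

# Step 2: enable ZooKeeper recovery mode.
# NOTE: quorum list below is assumed, not from the tutorial verbatim.
cat >> "$conf" <<'EOF'
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop01:2181,hadoop02:2181"
EOF

cat "$conf"
```

After running, the file contains the commented-out SPARK_MASTER_HOST line followed by the SPARK_DAEMON_JAVA_OPTS export; spark-env.sh would then be synced to every node before restarting the Masters.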