Configure slaves
bigdata-pro03.kfk.com
Configure spark-env.sh
JAVA_HOME=/opt/modules/jdk1.8.0_11
SCALA_HOME=/opt/modules/scala-2.11.8
SPARK_MASTER_HOST=bigdata-pro03.kfk.com
SPARK_MASTER_PORT=7077
SPARK_MASTER_WEBUI_PORT=8080
SPARK_WORKER_CORES=1
SPARK_WORKER_MEMORY=1g
SPARK_WORKER_PORT=7078
SPARK_WORKER_WEBUI_PORT=8081
SPARK_CONF_DIR=/opt/modules/spark-2.2.0-bin/conf
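Since slaves lists only bigdata-pro03.kfk.com, this is a single-node standalone deployment: the master and the lone worker run on the same host. Before starting, the settings above can be sanity-checked from the shell; a minimal sketch (the path comes from SPARK_CONF_DIR above):

```shell
# print the effective variables set in spark-env.sh
grep -E '^(JAVA|SCALA|SPARK)_' /opt/modules/spark-2.2.0-bin/conf/spark-env.sh
```

Note that the master web UI port (8080) set here is the one used at the end of this article, and the worker UI is on 8081.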
Start the cluster
[kfk@bigdata-pro03 spark-2.2.0-bin]$ sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/modules/spark-2.2.0-bin/logs/spark-kfk-org.apache.spark.deploy.master.Master-1-bigdata-pro03.kfk.com.out
bigdata-pro03.kfk.com: starting org.apache.spark.deploy.worker.Worker, logging to /opt/modules/spark-2.2.0-bin/logs/spark-kfk-org.apache.spark.deploy.worker.Worker-1-bigdata-pro03.kfk.com.out
[kfk@bigdata-pro03 spark-2.2.0-bin]$ jps
27104 Master
24897 JournalNode
11074 HMaster
7682 QuorumPeerMain
24803 DataNode
18707 Kafka
27189 Worker
24998 NodeManager
27241 Jps
10857 HRegionServer
Run in client mode
[kfk@bigdata-pro03 spark-2.2.0-bin]$ bin/spark-shell --master spark://bigdata-pro03.kfk.com:7077
Spark context Web UI available at http://192.168.0.153:4040
Spark context available as 'sc' (master = spark://bigdata-pro03.kfk.com:7077, app id = app-20200623131554-0000).
Spark session available as 'spark'.
scala> spark.read.textFile("file:///opt/datas/stu.txt")
res1: org.apache.spark.sql.Dataset[String] = [value: string]
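The call above only builds a Dataset[String]; nothing is read until an action runs. As a quick sanity check in the same spark-shell session, an action such as count() or first() triggers the actual file read (the results depend on the contents of stu.txt, so no output is shown here):

```
scala> val ds = spark.read.textFile("file:///opt/datas/stu.txt")
scala> ds.count()
scala> ds.first()
```

Because the worker runs on the same host here, the file:// path resolves fine; on a multi-node cluster the file would have to exist at that path on every executor node, or live on HDFS instead.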
Run in cluster mode
bin/spark-submit --master spark://bigdata-pro03.kfk.com:7077 --deploy-mode cluster /opt/jars/TestSpark.jar file:///opt/datas/stu.txt
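The source of TestSpark.jar is not shown in the original. As a sketch only, a minimal main object consistent with the submit command above might look like this (the object name and the assumption that it counts lines of the file passed as args(0) are hypothetical; the actual jar's logic is unknown):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical main class for TestSpark.jar: reads the path passed as the
// first program argument and prints a line count. The master URL is not set
// here because spark-submit supplies it via --master.
object TestSpark {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TestSpark")
      .getOrCreate()
    val ds = spark.read.textFile(args(0))
    println(s"line count: ${ds.count()}")
    spark.stop()
  }
}
```

With --deploy-mode cluster the driver itself is launched on a worker node, so println output appears in that worker's driver log (visible from the master web UI), not in the terminal that ran spark-submit, and the file:// path must exist on the worker host.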
Master web UI: http://192.168.0.153:8080/
Reference: https://blog.csdn.net/tanxiang21/article/details/108681948