1. Verify that the prerequisite environment is configured
Before setting up Spark on YARN, confirm that the following components are ready:
1. JDK 1.8
2. Hadoop 3.2.1+ (HDFS and MapReduce)
3. ZooKeeper
4. Python 3.8+
5. HADOOP_CONF_DIR set (see the sketch after this list)
6. YARN_CONF_DIR set (see the sketch after this list)
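For items 5 and 6, a minimal sketch of the two environment variables, assuming Hadoop is installed under /export/server/hadoop (a placeholder path; adjust to your installation, and typically add the lines to /etc/profile or ~/.bashrc):

export HADOOP_CONF_DIR=/export/server/hadoop/etc/hadoop   # where Spark finds core-site.xml and hdfs-site.xml
export YARN_CONF_DIR=/export/server/hadoop/etc/hadoop     # where Spark finds yarn-site.xml to locate the ResourceManager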
2. Connect to YARN
bin/pyspark
bin/pyspark --master yarn
bin/pyspark --master yarn --deploy-mode client|cluster
# --deploy-mode selects the deploy mode; the default is client
# client: the driver runs on the machine that submits the job
# cluster: the driver runs inside the YARN cluster
# --deploy-mode only applies when running on YARN
Note: the interactive shells pyspark and spark-shell cannot run in cluster mode.
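Cluster mode is instead used with a packaged application submitted through spark-submit. A minimal sketch, where /path/to/your_app.py is a placeholder for your own script:

bin/spark-submit --master yarn --deploy-mode cluster /path/to/your_app.py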
- What a successful launch looks like
(base) [root@6274master spark]# bin/pyspark --master yarn
Python 3.8.12 (default, Oct 12 2021, 13:49:34)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
22/01/13 02:54:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/01/13 02:54:57 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.2.0
      /_/
Using Python version 3.8.12 (default, Oct 12 2021 13:49:34)
Spark context Web UI available at http://6274master:4040
Spark context available as 'sc' (master = yarn, app id = application_1642051744763_0001).
SparkSession available as 'spark'.
>>>
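Once the >>> prompt appears, you can sanity-check the session with a small job (an illustrative example; the sum of 0..99 is 4950):

>>> sc.parallelize(range(100)).sum()
4950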