1. Default Configuration
SPARK_HOME: go into the directory /soft/client/spark-2.1.1-bin-2.6.0/conf; it contains the following files:
spark-defaults.conf //sets the Spark master address, memory per executor process, number of cores used, etc.
spark-env.sh //various Spark-related environment variables
log4j.properties.template //sets the level and format of the logs the driver prints to the console
fairscheduler.xml.template //sets the scheduling mode
metrics.properties.template //configures Spark's internal metrics system; usually needs no changes
slaves //lists the slave (worker) nodes of the Spark cluster; usually needs no changes
hadoop-default.xml //Hadoop configuration, mainly HDFS settings
hadoop-site.xml //access configuration for the Hadoop cluster
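As a sketch of what spark-env.sh typically carries, a few commonly set variables are shown below; all four variable names are standard entries from spark-env.sh.template, but the values here are illustrative, not taken from this cluster:
# spark-env.sh -- illustrative values only; adjust for your cluster
export JAVA_HOME=/usr/java/default        # JVM used by the Spark daemons
export HADOOP_CONF_DIR=/etc/hadoop/conf   # where Spark finds the HDFS/YARN client configs
export SPARK_WORKER_MEMORY=16g            # total memory a standalone worker may hand out
export SPARK_WORKER_CORES=8               # total cores a standalone worker may hand out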
Common default settings:
# ApplicationMaster
spark.yarn.am.memory=8g
spark.yarn.am.cores=5
#spark.yarn.am.memoryOverhead=AM memory * 0.10, with minimum of 384
#spark.yarn.am.extraJavaOptions
# Driver
spark.driver.cores=2
spark.driver.memory=4g
spark.driver.maxResultSize=1g
#spark.yarn.driver.memoryOverhead=driverMemory * 0.10, with minimum of 384
# Executor
spark.executor.cores=4
spark.executor.memory=8g
spark.executor.instances=1
spark.executor.heartbeatInterval=20s
#spark.yarn.executor.memoryOverhead=executorMemory * 0.10, with minimum of 384
#spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
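The commented memoryOverhead defaults above all follow the formula max(0.10 × memory, 384 MB), so the container YARN actually allocates is larger than the configured heap. A quick worked example for the executor settings above (a sketch; rounding up to YARN's minimum allocation granularity is cluster-dependent and not shown):
# Sketch: approximate YARN container size for one executor under the settings above.
# Overhead default = max(0.10 * executorMemory, 384 MB), per the comments in the config.
executor_memory_mb = 8 * 1024                      # spark.executor.memory=8g
overhead_mb = max(int(executor_memory_mb * 0.10), 384)
container_mb = executor_memory_mb + overhead_mb
print(overhead_mb, container_mb)                   # 819, 9011 -> roughly an 8.8 GB container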
Spark submit script:
spark-submit example.py
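The bare invocation above relies entirely on the defaults in spark-defaults.conf. As a sketch, a more explicit submission can override them per job; all flags shown are standard spark-submit options, and the values are illustrative:
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 4g \
  --executor-memory 8g \
  --executor-cores 4 \
  --num-executors 1 \
  example.py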
#The script must first create a SparkSession
from pyspark.sql import SparkSession
spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .enableHiveSupport() \
    .getOrCreate()
#The physical databases Spark is connected to (with Hive support enabled, these come from the Hive metastore)
spark.catalog.listDatabases()
#Inspect Spark's runtime resource settings
spark.conf.get("spark.sql.shuffle.partitions")
spark.conf.get("spark.executor.memory")
spark.conf.get("spark.executor.cores")
#Replace get with set to change a value at runtime
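Note that only some properties can be changed on a live session: SQL settings such as spark.sql.shuffle.partitions take effect immediately, while resource settings such as spark.executor.memory are fixed when the application launches. A minimal sketch:
spark.conf.set("spark.sql.shuffle.partitions", "64")  # SQL configs are mutable at runtime
spark.conf.get("spark.sql.shuffle.partitions")        # now returns '64'
# spark.conf.set("spark.executor.memory", "16g")      # would NOT resize already-running executors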
2. Spark SQL Operations
Initial operations and displaying data
from pyspark