Spark-submit: A Complete Guide to All Parameters

Usage: spark-submit [options] <app jar | python file | R file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]
Options:
--master MASTER_URL spark://host:port, mesos://host:port, yarn,
k8s://https://host:port, or local (Default: local[*]).
--deploy-mode DEPLOY_MODE Whether to launch the driver program locally ("client") or
on one of the worker machines inside the cluster ("cluster")
(Default: client).
--class CLASS_NAME Your application's main class (for Java / Scala apps).
--name NAME A name of your application.
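A minimal invocation that exercises these basic options might look like the following sketch; the master URL, class, jar path, and application name are placeholders:

    spark-submit \
      --master spark://node1:7077 \
      --deploy-mode client \
      --class com.example.Main \
      --name my-app \
      /path/to/my-app.jar arg1 arg2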
--jars JARS Comma-separated list of jars to include on the driver
and executor classpaths.
--packages Comma-separated list of maven coordinates of jars to include
on the driver and executor classpaths. Will search the local
maven repo, then maven central and any additional remote
repositories given by --repositories. The format for the
coordinates should be groupId:artifactId:version.
--exclude-packages Comma-separated list of groupId:artifactId, to exclude while
resolving the dependencies provided in --packages to avoid
dependency conflicts.
--repositories Comma-separated list of additional remote repositories to
search for the maven coordinates given with --packages.
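As an illustrative sketch, local jars and Maven coordinates can be combined; the jar paths, coordinate, exclusion, and repository URL below are assumed examples:

    spark-submit \
      --jars /opt/libs/extra-udf.jar,/opt/libs/metrics.jar \
      --packages org.postgresql:postgresql:42.7.3 \
      --exclude-packages org.slf4j:slf4j-api \
      --repositories https://repo.example.com/maven2 \
      --class com.example.Main \
      /path/to/my-app.jar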
--py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to place
on the PYTHONPATH for Python apps.
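For a Python application, a sketch following the same pattern (the file names are placeholders):

    spark-submit \
      --master yarn \
      --py-files deps.zip,helpers.py \
      main.py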
--files FILES Comma-separated list of files to be placed in the working
directory of each executor. File paths of these files
in executors can be accessed via SparkFiles.get(fileName).
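For example, a config file shipped with --files can then be located inside the application via SparkFiles.get("app.conf"); the file and jar names here are placeholders:

    spark-submit \
      --files /etc/myapp/app.conf \
      --class com.example.Main \
      /path/to/my-app.jar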
--conf, -c PROP=VALUE Arbitrary Spark configuration property.
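Any Spark property can be set this way, with either spelling of the flag; the property values below are illustrative:

    spark-submit \
      --conf spark.sql.shuffle.partitions=400 \
      -c spark.serializer=org.apache.spark.serializer.KryoSerializer \
      --class com.example.Main \
      /path/to/my-app.jar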
--properties-file FILE Path to a file from which to load extra properties. If not
specified, this will look for conf/spark-defaults.conf.
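Recurring settings can live in a properties file rather than on the command line; the path below is a placeholder, and each line of such a file is a whitespace-separated key/value pair (e.g. spark.executor.memory 4g):

    spark-submit \
      --properties-file /etc/spark/jobs/etl-defaults.conf \
      --class com.example.Main \
      /path/to/my-app.jar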
--driver-memory MEM Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
--driver-java-options Extra Java options to pass to the driver.
--driver-library-path Extra library path entries to pass to the driver.
--driver-class-path Extra class path entries to pass to the driver. Note that
jars added with --jars are automatically included in the
classpath.
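A sketch combining the driver-side options; the JVM flag and paths are illustrative assumptions, not required values:

    spark-submit \
      --driver-memory 4G \
      --driver-java-options "-XX:+UseG1GC" \
      --driver-library-path /opt/native/lib \
      --driver-class-path /opt/libs/jdbc-driver.jar \
      --class com.example.Main \
      /path/to/my-app.jar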
--executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G).
--proxy-user NAME User to impersonate when submitting the application.
This argument does not work with --principal / --keytab.
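For instance, a service account that the Hadoop configuration trusts as a proxy could submit on behalf of another user (the user name is a placeholder):

    spark-submit --proxy-user alice --class com.example.Main /path/to/my-app.jar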
--help, -h Show this help message and exit.
--verbose, -v Print additional debug output.
--version Print the version of the current Spark.
Cluster deploy mode only:
--driver-cores NUM Number of cores used by the driver, only in cluster mode
(Default: 1).
Spark standalone or Mesos with cluster deploy mode only:
--supervise If given, restarts the driver on failure.
--kill SUBMISSION_ID If given, kills the driver specified.
--status SUBMISSION_ID If given, requests the status of the driver specified.
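A plausible standalone cluster-mode workflow: submit with supervision, note the submission ID the cluster manager prints, then query or kill the driver by that ID. The host and the submission ID below are placeholders:

    spark-submit \
      --master spark://node1:7077 \
      --deploy-mode cluster \
      --supervise \
      --driver-cores 2 \
      --class com.example.Main \
      /path/to/my-app.jar

    spark-submit --master spark://node1:7077 --status driver-20240619001700-0001
    spark-submit --master spark://node1:7077 --kill driver-20240619001700-0001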
Spark standalone and Mesos only:
--total-executor-cores NUM Total cores for all executors.
Spark standalone and YARN only:
--executor-cores NUM Number of cores per executor. (Default: 1 in YARN mode,
or all available cores on the worker in standalone mode)
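On standalone, the two core options compose: --total-executor-cores caps the job's overall core count while --executor-cores fixes each executor's size, so the sketch below should yield roughly 16 / 4 = 4 executors (all numbers are illustrative):

    spark-submit \
      --master spark://node1:7077 \
      --total-executor-cores 16 \
      --executor-cores 4 \
      --executor-memory 4G \
      --class com.example.Main \
      /path/to/my-app.jar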
YARN-only:
--queue QUEUE_NAME The YARN queue to submit to (Default: "default").
--num-executors NUM Number of executors to launch (Default: 2).
If dynamic allocation is enabled, the initial number of
executors will be at least NUM.
--archives ARCHIVES Comma separated list of archives to be extracted into the
working directory of each executor.
--principal PRINCIPAL Principal to be used to login to KDC, while running on
secure HDFS.
--keytab KEYTAB The full path to the file that contains the keytab for the
principal specified above. This keytab will be copied to
the node running the Application Master via the Secure
Distributed Cache, for renewing the login tickets and the
delegation tokens periodically.
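Putting the YARN-only options together, a submission to a kerberized cluster might look like this sketch; the queue name, principal, keytab path, and archive name are placeholders:

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --queue etl \
      --num-executors 10 \
      --executor-cores 4 \
      --executor-memory 4G \
      --archives venv.tar.gz#venv \
      --principal spark/node1@EXAMPLE.COM \
      --keytab /etc/security/keytabs/spark.keytab \
      --class com.example.Main \
      /path/to/my-app.jar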