06_Flink Command-Line Interface, Job Management Examples, Savepoints, Syntax (run, generic options, yarn-cluster, info, list, stop, cancel, savepoint, etc.)

1.6. Flink Command-Line Interface
1.6.1. Deployment targets
1.6.2. Examples
1.6.3. Job Management Examples
1.6.4. Savepoints
1.6.5. Syntax

1.6. Flink Command-Line Interface

This section is adapted from: https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/ops/cli.html

Flink provides a command-line interface (CLI) to run programs packaged as JAR files and to control their execution. The CLI is part of any Flink setup and is available in local single-node setups as well as in distributed setups. It is located at <flink-home>/bin/flink and, by default, connects to the running JobManager that was started from the same installation directory.
The command line can be used to:
submit jobs for execution,
cancel a running job,
provide information about a job,
list running and waiting jobs,
trigger and dispose savepoints.

A prerequisite for using the command-line interface is that the JobManager has been started (via <flink-home>/bin/start-cluster.sh) or that a YARN or Kubernetes deployment is available.
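
For example, on a local setup the sequence looks roughly like the following (a minimal sketch; it assumes you are working from the Flink distribution directory):

# Start a local cluster so that a JobManager is available,
# then verify that the CLI can reach it by listing jobs.
./bin/start-cluster.sh
./bin/flink list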

1.6.1. Deployment targets

Flink has the concept of executors, which define the available deployment targets. You can see the available targets via bin/flink --help, for example:

Options for Generic CLI mode:
     -D <property=value>   Generic configuration options for
                           execution/deployment and for the configured executor.
                           The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -t,--target <arg>     The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. The currently available targets are:
                           "remote", "local", "kubernetes-session", "yarn-per-job",
                           "yarn-session", "yarn-application" and "kubernetes-application".

1.6.2. Examples

Run example program with no arguments:
./bin/flink run ./examples/batch/WordCount.jar

Run example program with arguments for input and result files:
./bin/flink run ./examples/batch/WordCount.jar \
                     --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program with parallelism 16 and arguments for input and result files:
./bin/flink run -p 16 ./examples/batch/WordCount.jar \
                     --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program with flink log output disabled:
./bin/flink run -q ./examples/batch/WordCount.jar

Run example program in detached mode:
./bin/flink run -d ./examples/batch/WordCount.jar

Run example program on a specific JobManager:
./bin/flink run -m myJMHost:8081 \
                       ./examples/batch/WordCount.jar \
                       --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program with a specific class as an entry point:
./bin/flink run -c org.apache.flink.examples.java.wordcount.WordCount \
                       ./examples/batch/WordCount.jar \
                       --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

Run example program using a per-job YARN cluster:
./bin/flink run -m yarn-cluster \
                       ./examples/batch/WordCount.jar \
                       --input hdfs:///user/hamlet.txt --output hdfs:///user/wordcount_out

1.6.3. Job Management Examples

Display the optimized execution plan for the WordCount example program as JSON:
./bin/flink info ./examples/batch/WordCount.jar \
                        --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

List scheduled and running jobs (including their JobIDs):
./bin/flink list

List scheduled jobs (including their JobIDs):
./bin/flink list -s

List running jobs (including their JobIDs):
./bin/flink list -r

List all existing jobs (including their JobIDs):
./bin/flink list -a

List running Flink jobs inside Flink YARN session:
./bin/flink list -m yarn-cluster -yid <yarnApplicationID> -r

Cancel a job:
./bin/flink cancel <jobID>

Cancel a job with a savepoint (deprecated; use “stop” instead):
./bin/flink cancel -s [targetDirectory] <jobID>

Gracefully stop a job with a savepoint (streaming jobs only):
./bin/flink stop [-p targetDirectory] [-d] <jobID>

1.6.4. Savepoints

Savepoints are controlled via the command line client:

Trigger a Savepoint
./bin/flink savepoint <jobId> [savepointDirectory]

This will trigger a savepoint for the job with ID jobId, and return the path of the created savepoint. You need this path to restore and dispose savepoints.
Furthermore, you can optionally specify a target file system directory to store the savepoint in. The directory needs to be accessible by the JobManager.
If you don’t specify a target directory, you need to have configured a default directory. Otherwise, triggering the savepoint will fail.
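
As a concrete sketch (the job ID and HDFS paths below are hypothetical placeholders), you would set a default directory in conf/flink-conf.yaml and could then trigger a savepoint with or without an explicit target:

# conf/flink-conf.yaml -- default savepoint location ("state.savepoints.dir")
state.savepoints.dir: hdfs:///flink/savepoints

# Trigger a savepoint for a running job, writing to an explicit directory.
./bin/flink savepoint b7a9c3d2e1f0a8b7c6d5e4f3a2b1c0d9 hdfs:///flink/savepoints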

Trigger a Savepoint with YARN
./bin/flink savepoint <jobId> [savepointDirectory] -yid <yarnAppId>

This will trigger a savepoint for the job with ID jobId and YARN application ID yarnAppId, and return the path of the created savepoint.
Everything else is the same as described in the above Trigger a Savepoint section.
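
For example (the job ID and YARN application ID below are hypothetical placeholders):

./bin/flink savepoint b7a9c3d2e1f0a8b7c6d5e4f3a2b1c0d9 hdfs:///flink/savepoints -yid application_1597883107000_0001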

Stop

Use the stop action to gracefully stop a running streaming job with a savepoint.

./bin/flink stop [-p targetDirectory] [-d] <jobID>

A “stop” call is a more graceful way of stopping a running streaming job, as the “stop” signal flows from source to sink. When the user requests to stop a job, all sources will be requested to send the last checkpoint barrier, which triggers a savepoint, and after the successful completion of that savepoint they will finish by calling their cancel() method. If the -d flag is specified, a MAX_WATERMARK will be emitted before the last checkpoint barrier. This causes all registered event-time timers to fire, thus flushing out any state that is waiting for a specific watermark, e.g. windows. The job will keep running until all sources have properly shut down. This allows the job to finish processing all in-flight data.
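
Putting it together, a drained stop with an explicit savepoint directory looks like this (the job ID and path are hypothetical placeholders):

# -d emits MAX_WATERMARK before the final barrier; -p overrides the default savepoint directory.
./bin/flink stop -p hdfs:///flink/savepoints -d b7a9c3d2e1f0a8b7c6d5e4f3a2b1c0d9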

Cancel with a savepoint (deprecated)

You can atomically trigger a savepoint and cancel a job.

./bin/flink cancel -s [savepointDirectory] <jobID>

If no savepoint directory is configured, you need to configure a default savepoint directory for the Flink installation (see Savepoints).
The job will only be cancelled if the savepoint succeeds.
Note: Cancelling a job with savepoint is deprecated. Use “stop” instead.

Restore a savepoint
./bin/flink run -s <savepointPath> ...

The run command has a savepoint flag to submit a job, which restores its state from a savepoint. The savepoint path is returned by the savepoint trigger command.
By default, we try to match all savepoint state to the job being submitted. If you want to allow skipping savepoint state that cannot be restored with the new job, you can set the allowNonRestoredState flag. You need to allow this if you removed an operator from your program that was part of the program when the savepoint was triggered and you still want to use the savepoint.

./bin/flink run -s <savepointPath> -n ...

This is useful if your program dropped an operator that was part of the savepoint.
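
A concrete sketch (the savepoint path and job JAR are hypothetical placeholders; use whatever path the trigger command printed):

./bin/flink run -s hdfs:///flink/savepoints/savepoint-b7a9c3-0123456789ab -n ./myStreamingJob.jar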

Dispose a savepoint
./bin/flink savepoint -d <savepointPath>

Disposes the savepoint at the given path. The savepoint path is returned by the savepoint trigger command.
If you use custom state instances (for example custom reducing state or RocksDB state), you have to specify the path to the program JAR with which the savepoint was triggered in order to dispose the savepoint with the user code class loader:

./bin/flink savepoint -d <savepointPath> -j <jarFile>

Otherwise, you will run into a ClassNotFoundException.

1.6.5. Syntax

[root@hadoop6 flink-1.11.1]# bin/flink --help
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/admin/installed/flink-1.11.1/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.4.0-315/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
./flink <ACTION> [OPTIONS] [ARGUMENTS]

The following actions are available:

Action "run" compiles and runs a program.

  Syntax: run [OPTIONS] <jar-file> <arguments>
  "run" action options:
     -c,--class <classname>               Class with the program entry point
                                          ("main()" method). Only needed if the
                                          JAR file does not specify the class in
                                          its manifest.
     -C,--classpath <url>                 Adds a URL to each user code
                                          classloader  on all nodes in the
                                          cluster. The paths must specify a
                                          protocol (e.g. file://) and be
                                          accessible on all nodes (e.g. by means
                                          of a NFS share). You can use this
                                          option multiple times for specifying
                                          more than one URL. The protocol must
                                          be supported by the {@link
                                          java.net.URLClassLoader}.
     -d,--detached                        If present, runs the job in detached
                                          mode
     -n,--allowNonRestoredState           Allow to skip savepoint state that
                                          cannot be restored. You need to allow
                                          this if you removed an operator from
                                          your program that was part of the
                                          program when the savepoint was
                                          triggered.
     -p,--parallelism <parallelism>       The parallelism with which to run the
                                          program. Optional flag to override the
                                          default value specified in the
                                          configuration.
     -py,--python <pythonFile>            Python script with the program entry
                                          point. The dependent resources can be
                                          configured with the `--pyFiles`
                                          option.
     -pyarch,--pyArchives <arg>           Add python archive files for job. The
                                          archive files will be extracted to the
                                          working directory of python UDF
                                          worker. Currently only zip-format is
                                          supported. For each archive file, a
                                          target directory can be specified. If
                                          the target directory name is
                                          specified, the archive file will be
                                          extracted to a directory with the
                                          specified name. Otherwise, the archive
                                          file will be extracted to a directory
                                          with the same name of the archive
                                          file. The files uploaded via this
                                          option are accessible via relative
                                          path. '#' could be used as the
                                          separator of the archive file path and
                                          the target directory name. Comma (',')
                                          could be used as the separator to
                                          specify multiple archive files. This
                                          option can be used to upload the
                                          virtual environment, the data files
                                          used in Python UDF (e.g.: --pyArchives
                                          file:///tmp/py37.zip,file:///tmp/data.
                                          zip#data --pyExecutable
                                          py37.zip/py37/bin/python). The data
                                          files could be accessed in Python UDF,
                                          e.g.: f = open('data/data.txt', 'r').
     -pyexec,--pyExecutable <arg>         Specify the path of the python
                                          interpreter used to execute the python
                                          UDF worker (e.g.: --pyExecutable
                                          /usr/local/bin/python3). The python
                                          UDF worker depends on Python 3.5+,
                                          Apache Beam (version == 2.19.0), Pip
                                          (version >= 7.1.0) and SetupTools
                                          (version >= 37.0.0). Please ensure
                                          that the specified environment meets
                                          the above requirements.
     -pyfs,--pyFiles <pythonFiles>        Attach custom python files for job.
                                          These files will be added to the
                                          PYTHONPATH of both the local client
                                          and the remote python UDF worker. The
                                          standard python resource file suffixes
                                          such as .py/.egg/.zip or directory are
                                          all supported. Comma (',') could be
                                          used as the separator to specify
                                          multiple files (e.g.: --pyFiles
                                          file:///tmp/myresource.zip,hdfs:///$na
                                          menode_address/myresource2.zip).
     -pym,--pyModule <pythonModule>       Python module with the program entry
                                          point. This option must be used in
                                          conjunction with `--pyFiles`.
     -pyreq,--pyRequirements <arg>        Specify a requirements.txt file which
                                          defines the third-party dependencies.
                                          These dependencies will be installed
                                          and added to the PYTHONPATH of the
                                          python UDF worker. A directory which
                                          contains the installation packages of
                                          these dependencies could be specified
                                          optionally. Use '#' as the separator
                                          if the optional parameter exists
                                          (e.g.: --pyRequirements
                                          file:///tmp/requirements.txt#file:///t
                                          mp/cached_dir).
     -s,--fromSavepoint <savepointPath>   Path to a savepoint to restore the job
                                          from (for example
                                          hdfs:///flink/savepoint-1537).
     -sae,--shutdownOnAttachedExit        If the job is submitted in attached
                                          mode, perform a best-effort cluster
                                          shutdown when the CLI is terminated
                                          abruptly, e.g., in response to a user
                                          interrupt, such as typing Ctrl + C.
  Options for Generic CLI mode:
     -D <property=value>   Generic configuration options for
                           execution/deployment and for the configured executor.
                           The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "collection", "remote",
                           "local", "kubernetes-session", "yarn-per-job",
                           "yarn-session".
     -t,--target <arg>     The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. The currently available targets are:
                           "collection", "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session",
                           "yarn-application" and "kubernetes-application".

  Options for yarn-cluster mode:
     -d,--detached                        If present, runs the job in detached mode
     -m,--jobmanager <arg>                Address of the JobManager to which to
                                          connect. Use this flag to connect to a
                                          different JobManager than the one
                                          specified in the configuration.
     -yat,--yarnapplicationType <arg>     Set a custom application type for the
                                          application on YARN
     -yD <property=value>                 use value for given property
     -yd,--yarndetached                   If present, runs the job in detached
                                          mode (deprecated; use non-YARN
                                          specific option instead)
     -yh,--yarnhelp                       Help for the Yarn session CLI.
     -yid,--yarnapplicationId <arg>       Attach to running YARN session
     -yj,--yarnjar <arg>                  Path to Flink jar file
     -yjm,--yarnjobManagerMemory <arg>    Memory for JobManager Container with
                                          optional unit (default: MB)
     -ynl,--yarnnodeLabel <arg>           Specify YARN node label for the YARN
                                          application
     -ynm,--yarnname <arg>                Set a custom name for the application
                                          on YARN
     -yq,--yarnquery                      Display available YARN resources
                                          (memory, cores)
     -yqu,--yarnqueue <arg>               Specify YARN queue.
     -ys,--yarnslots <arg>                Number of slots per TaskManager
     -yt,--yarnship <arg>                 Ship files in the specified directory
                                          (t for transfer)
     -ytm,--yarntaskManagerMemory <arg>   Memory per TaskManager Container with
                                          optional unit (default: MB)
     -yz,--yarnzookeeperNamespace <arg>   Namespace to create the Zookeeper
                                          sub-paths for high availability mode
     -z,--zookeeperNamespace <arg>        Namespace to create the Zookeeper
                                          sub-paths for high availability mode

  Options for default mode:
     -m,--jobmanager <arg>           Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths
                                     for high availability mode



Action "info" shows the optimized execution plan of the program (JSON).

  Syntax: info [OPTIONS] <jar-file> <arguments>
  "info" action options:
     -c,--class <classname>           Class with the program entry point
                                      ("main()" method). Only needed if the JAR
                                      file does not specify the class in its
                                      manifest.
     -p,--parallelism <parallelism>   The parallelism with which to run the
                                      program. Optional flag to override the
                                      default value specified in the
                                      configuration.


Action "list" lists running and scheduled programs.

  Syntax: list [OPTIONS]
  "list" action options:
     -a,--all         Show all programs and their JobIDs
     -r,--running     Show only running programs and their JobIDs
     -s,--scheduled   Show only scheduled programs and their JobIDs
  Options for Generic CLI mode:
     -D <property=value>   Generic configuration options for
                           execution/deployment and for the configured executor.
                           The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "collection", "remote",
                           "local", "kubernetes-session", "yarn-per-job",
                           "yarn-session".
     -t,--target <arg>     The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. The currently available targets are:
                           "collection", "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session",
                           "yarn-application" and "kubernetes-application".

  Options for yarn-cluster mode:
     -m,--jobmanager <arg>            Address of the JobManager to which to
                                      connect. Use this flag to connect to a
                                      different JobManager than the one
                                      specified in the configuration.
     -yid,--yarnapplicationId <arg>   Attach to running YARN session
     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper
                                      sub-paths for high availability mode

  Options for default mode:
     -m,--jobmanager <arg>           Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths
                                     for high availability mode



Action "stop" stops a running program with a savepoint (streaming jobs only).

  Syntax: stop [OPTIONS] <Job ID>
  "stop" action options:
     -d,--drain                           Send MAX_WATERMARK before taking the
                                          savepoint and stopping the pipeline.
     -p,--savepointPath <savepointPath>   Path to the savepoint (for example
                                          hdfs:///flink/savepoint-1537). If no
                                          directory is specified, the configured
                                          default will be used
                                          ("state.savepoints.dir").
  Options for Generic CLI mode:
     -D <property=value>   Generic configuration options for
                           execution/deployment and for the configured executor.
                           The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "collection", "remote",
                           "local", "kubernetes-session", "yarn-per-job",
                           "yarn-session".
     -t,--target <arg>     The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. The currently available targets are:
                           "collection", "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session",
                           "yarn-application" and "kubernetes-application".

  Options for yarn-cluster mode:
     -m,--jobmanager <arg>            Address of the JobManager to which to
                                      connect. Use this flag to connect to a
                                      different JobManager than the one
                                      specified in the configuration.
     -yid,--yarnapplicationId <arg>   Attach to running YARN session
     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper
                                      sub-paths for high availability mode

  Options for default mode:
     -m,--jobmanager <arg>           Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths
                                     for high availability mode



Action "cancel" cancels a running program.

  Syntax: cancel [OPTIONS] <Job ID>
  "cancel" action options:
     -s,--withSavepoint <targetDirectory>   **DEPRECATION WARNING**: Cancelling
                                            a job with savepoint is deprecated.
                                            Use "stop" instead.
                                            Trigger savepoint and cancel job.
                                            The target directory is optional. If
                                            no directory is specified, the
                                            configured default directory
                                            (state.savepoints.dir) is used.
  Options for Generic CLI mode:
     -D <property=value>   Generic configuration options for
                           execution/deployment and for the configured executor.
                           The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "collection", "remote",
                           "local", "kubernetes-session", "yarn-per-job",
                           "yarn-session".
     -t,--target <arg>     The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. The currently available targets are:
                           "collection", "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session",
                           "yarn-application" and "kubernetes-application".

  Options for yarn-cluster mode:
     -m,--jobmanager <arg>            Address of the JobManager to which to
                                      connect. Use this flag to connect to a
                                      different JobManager than the one
                                      specified in the configuration.
     -yid,--yarnapplicationId <arg>   Attach to running YARN session
     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper
                                      sub-paths for high availability mode

  Options for default mode:
     -m,--jobmanager <arg>           Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths
                                     for high availability mode



Action "savepoint" triggers savepoints for a running job or disposes existing ones.

  Syntax: savepoint [OPTIONS] <Job ID> [<target directory>]
  "savepoint" action options:
     -d,--dispose <arg>       Path of savepoint to dispose.
     -j,--jarfile <jarfile>   Flink program JAR file.
  Options for Generic CLI mode:
     -D <property=value>   Generic configuration options for
                           execution/deployment and for the configured executor.
                           The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stabl
                           e/ops/config.html
     -e,--executor <arg>   DEPRECATED: Please use the -t option instead which is
                           also available with the "Application Mode".
                           The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "collection", "remote",
                           "local", "kubernetes-session", "yarn-per-job",
                           "yarn-session".
     -t,--target <arg>     The deployment target for the given application,
                           which is equivalent to the "execution.target" config
                           option. The currently available targets are:
                           "collection", "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session",
                           "yarn-application" and "kubernetes-application".

  Options for yarn-cluster mode:
     -m,--jobmanager <arg>            Address of the JobManager to which to
                                      connect. Use this flag to connect to a
                                      different JobManager than the one
                                      specified in the configuration.
     -yid,--yarnapplicationId <arg>   Attach to running YARN session
     -z,--zookeeperNamespace <arg>    Namespace to create the Zookeeper
                                      sub-paths for high availability mode

  Options for default mode:
     -m,--jobmanager <arg>           Address of the JobManager to which to
                                     connect. Use this flag to connect to a
                                     different JobManager than the one specified
                                     in the configuration.
     -z,--zookeeperNamespace <arg>   Namespace to create the Zookeeper sub-paths
                                     for high availability mode



[root@hadoop6 flink-1.11.1]#