Run example program with no arguments:
./bin/flink run ./examples/batch/WordCount.jar
Run example program with arguments for input and result files:
./bin/flink run ./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
Run example program with parallelism 16 and arguments for input and result files:
./bin/flink run -p 16 ./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
Run example program with Flink log output disabled:
./bin/flink run -q ./examples/batch/WordCount.jar
Run example program in detached mode:
./bin/flink run -d ./examples/batch/WordCount.jar
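In detached mode the CLI returns right after submission and prints the JobID, which later `list`, `cancel`, and `savepoint` calls need. A minimal sketch of capturing it from the output (the sample line is an assumption; the exact wording varies across Flink versions, but a JobID is a 32-digit hex string):

```shell
# Hypothetical submission output; exact wording differs across Flink versions.
SAMPLE="Job has been submitted with JobID b2b1c3d4e5f60718293a4b5c6d7e8f90"

# A Flink JobID is a 32-character hex string; extract it for later use.
JOB_ID=$(echo "$SAMPLE" | grep -oE '[0-9a-f]{32}')
echo "$JOB_ID"
```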
Run example program on a specific JobManager:
./bin/flink run -m myJMHost:8081 \
./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
Run example program with a specific class as an entry point:
./bin/flink run -c org.apache.flink.examples.java.wordcount.WordCount \
./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
Run example program using a per-job YARN cluster with 2 TaskManagers:
./bin/flink run -m yarn-cluster \
./examples/batch/WordCount.jar \
--input hdfs:///user/hamlet.txt --output hdfs:///user/wordcount_out
Note: When a Python job is submitted via flink run, Flink invokes the "python" command. Run the following to confirm that the "python" command in your current environment points to Python 3.5, 3.6, or 3.7:
$ python --version
# the version printed here must be 3.5, 3.6 or 3.7
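The check above can be wrapped in a small guard for scripts. This is an illustrative sketch; the `check_py_version` helper is hypothetical, not part of Flink:

```shell
# Hypothetical helper: accept only the Python versions supported here.
check_py_version() {
  case "$1" in
    3.5|3.6|3.7) return 0 ;;
    *) return 1 ;;
  esac
}

# Extract "major.minor" from `python --version`, e.g. "Python 3.6.8" -> "3.6".
ver=$(python --version 2>&1 | awk '{print $2}' | cut -d. -f1,2)
check_py_version "$ver" && echo "python $ver is supported" \
                        || echo "python $ver is NOT supported"
```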
Submit a Python Table job:
./bin/flink run -py WordCount.py
Submit a Python Table job with multiple dependencies:
./bin/flink run -py examples/python/table/batch/word_count.py \
-pyfs file:///user.txt,hdfs:///$namenode_address/username.txt
Submit a Python Table job and specify a dependent JAR file:
./bin/flink run -py examples/python/table/batch/word_count.py -j <jarFile>
Submit a Python Table job with multiple dependencies, with the main entry point of the Python job specified via the pym option:
./bin/flink run -pym batch.word_count -pyfs examples/python/table/batch
Submit a Python Table job with parallelism 16:
./bin/flink run -p 16 -py examples/python/table/batch/word_count.py
Submit a Python Table job with Flink log output disabled:
./bin/flink run -q -py examples/python/table/batch/word_count.py
Submit a Python Table job in detached mode:
./bin/flink run -d -py examples/python/table/batch/word_count.py
Submit a Python Table job to a specific JobManager:
./bin/flink run -m myJMHost:8081 \
-py examples/python/table/batch/word_count.py
Submit a Python Table job to a per-job YARN cluster with two TaskManagers:
./bin/flink run -m yarn-cluster \
-py examples/python/table/batch/word_count.py
Job Management Examples
Display the optimized execution plan for the WordCount example program as JSON:
./bin/flink info ./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
List scheduled and running jobs (including their JobIDs):
./bin/flink list
List scheduled jobs (including their JobIDs):
./bin/flink list -s
List running jobs (including their JobIDs):
./bin/flink list -r
List all existing jobs (including their JobIDs):
./bin/flink list -a
List running Flink jobs inside Flink YARN session:
./bin/flink list -m yarn-cluster -yid <yarnApplicationID> -r
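Scripts often need just the JobIDs from `list` output. A sketch of extracting them (the sample output layout is an assumption and varies across Flink versions; JobIDs are 32 hex digits):

```shell
# Hypothetical `flink list -r` output; layout differs across versions.
SAMPLE='------------------ Running/Restarting Jobs -------------------
24.07.2019 10:15:00 : b2b1c3d4e5f60718293a4b5c6d7e8f90 : WordCount (RUNNING)
24.07.2019 10:16:30 : 0123456789abcdef0123456789abcdef : PiEstimation (RUNNING)'

# JobIDs are 32 hex digits; print them one per line.
echo "$SAMPLE" | grep -oE '[0-9a-f]{32}'
```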
Cancel a job:
./bin/flink cancel <jobID>
Cancel a job with a savepoint (deprecated; use “stop” instead):
./bin/flink cancel -s [targetDirectory] <jobID>
Gracefully stop a job with a savepoint (streaming jobs only):
./bin/flink stop [-p targetDirectory] [-d] <jobID>
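A stopped job is typically resumed from its savepoint via `flink run -s <savepointPath>`. A sketch of capturing the path from the stop output for reuse (the sample line is an assumption; the CLI's wording varies by version):

```shell
# Hypothetical `flink stop` output line; exact wording varies across versions.
SAMPLE='Savepoint completed. Path: file:/tmp/savepoints/savepoint-abc123'

# Pull out everything after "Path: " for reuse when resuming the job.
SP_PATH=$(echo "$SAMPLE" | sed -n 's/.*Path: //p')
echo "$SP_PATH"

# Resuming would then look like (not executed here):
#   ./bin/flink run -s "$SP_PATH" ./examples/batch/WordCount.jar
```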
Savepoints
Savepoints are controlled via the command line client:
Trigger a Savepoint
./bin/flink savepoint <jobId> [savepointDirectory]
This triggers a savepoint for the job with ID jobId and returns the path of the created savepoint. You need this path to restore and dispose of savepoints.
Furthermore, you can optionally specify a target file system directory to store the savepoint in. The directory needs to be accessible by the JobManager.
If you don’t specify a target directory, you need to have configured a default directory. Otherwise, triggering the savepoint will fail.