Table of Contents
- REST API Port Configuration
- REST API Usage
- 1. Start a Yarn Session Cluster
- 2. Shut Down the Cluster
- 3. View the Cluster Overview
- 4. View the Dashboard
- 5. View JobManager Configuration
- 6. View JobManager Metrics
- 7. List All TaskManagers
- 8. View a Specific TaskManager
- 9. View Metrics of a TaskManager
- 10. Upload a Jar
- 11. List All Jars
- 12. Delete a Jar
- 13. Preview a Job's Dataflow Plan from Input Parameters
- 14. Submit a New Job or Restore a Job from State
- 15. List All Jobs
- 16. Cancel a Job
- 17. View Job Details
- 18. View a Job's Dataflow Plan
- 19. View Job Metrics
- 20. View Non-Recoverable Job Exceptions
- 21. View a Job's Checkpoint Configuration
- 22. View a Job's Checkpoints
- 23. View Details of a Specific Checkpoint
- 24. Trigger a Savepoint and Cancel the Job
- 25. Check the Status of a Triggered Savepoint
- 26. View a Job's Accumulators
- 27. View All Subtasks of a Job Vertex
- 28. View Metrics of a Job Vertex
- 29. View a Specific Subtask of a Job Vertex
- 30. View Metrics of a Specific Subtask
- 31. View a Specific Execution Attempt of a Subtask
This post summarizes how to use the REST API with DataStream jobs running on a Yarn Session cluster. Note: it also applies to Yarn Per Job mode.
REST API Port Configuration
The port is set via the rest.bind-port parameter in flink-conf.yaml.
Notes:
- The Rest Server and the Dashboard Server listen on the same host (the machine running the active JobManager) and the same port; they are distinguished by URL.
- If rest.bind-port is not set, the Rest Server binds to the port given by rest.port (8081 by default).
- rest.bind-port can be a list such as 50100,50101 or a range such as 50100-50200. The range format is recommended to avoid port conflicts.
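A minimal flink-conf.yaml fragment matching the setup used throughout this post (the same range shows up later in the /jobmanager/config dump):

```yaml
# Rest Server (and Dashboard) bind to the first free port in this range
rest.bind-port: 50100-50200
```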
REST API Usage
Note:
- Backpressure monitoring via the REST API is deprecated.
1. Start a Yarn Session Cluster
bin/yarn-session.sh --container 3 --slots 2 --queue default --taskManagerMemory 1024 --yarndetached
After startup, a log line like the following appears in the JobManager log:
2019-09-14 13:47:27,163 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - Web frontend listening at http://node2:50100.
The log shows that the Rest Server is listening on port 50100 of node2.
2. Shut Down the Cluster
API: DELETE /cluster
Request:
curl -X DELETE http://node2:50100/v1/cluster|jq
Response:
{}
3. View the Cluster Overview
Shows the numbers of running jobs, cancelled jobs, and so on for the current cluster.
API: GET /overview
Request:
curl -X GET http://node2:50100/v1/overview|jq
Response:
{
"taskmanagers": 0,
"slots-total": 0,
"slots-available": 0,
"jobs-running": 1,
"jobs-finished": 0,
"jobs-cancelled": 6,
"jobs-failed": 0,
"flink-version": "1.9.0",
"flink-commit": "9c32ed9"
}
4. View the Dashboard
http://node2:50100/#/overview
5. View JobManager Configuration
API: GET /jobmanager/config
Request:
curl -X GET http://node2:50100/v1/jobmanager/config|jq
Response:
[
{
"key": "historyserver.web.address",
"value": "node2"
},
{
"key": "state.checkpoints.num-retained",
"value": "20"
},
{
"key": "historyserver.web.port",
"value": "18082"
},
{
"key": "jobmanager.execution.failover-strategy",
"value": "region"
},
{
"key": "high-availability.cluster-id",
"value": "application_1568023533160_0015"
},
{
"key": "jobmanager.rpc.address",
"value": "node2"
},
{
"key": "FLINK_PLUGINS_DIR",
"value": "/data/software/flink-1.9.0/plugins"
},
{
"key": "high-availability.zookeeper.path.root",
"value": "/flink"
},
{
"key": "high-availability.storageDir",
"value": "hdfs://node1:8020/flink/high_availability"
},
{
"key": "rest.bind-port",
"value": "50100-50200"
},
{
"key": "io.tmp.dirs",
"value": "/hadoop/yarn/local/usercache/root/appcache/application_1568023533160_0015"
},
{
"key": "parallelism.default",
"value": "1"
},
{
"key": "yarn.application-attempts",
"value": "2"
},
{
"key": "taskmanager.numberOfTaskSlots",
"value": "2"
},
{
"key": "zookeeper.sasl.disable",
"value": "true"
},
{
"key": "historyserver.archive.fs.dir",
"value": "hdfs:///completed-jobs"
},
{
"key": "jobmanager.heap.size",
"value": "1024m"
},
{
"key": "jobmanager.archive.fs.dir",
"value": "hdfs:///completed-jobs"
},
{
"key": "web.port",
"value": "0"
},
{
"key": "classloader.resolve-order",
"value": "parent-first"
},
{
"key": "web.tmpdir",
"value": "/tmp/flink-web-951aa23b-c01a-4026-a154-602538873f1d"
},
{
"key": "jobmanager.rpc.port",
"value": "37294"
},
{
"key": "internal.io.tmpdirs.use-local-default",
"value": "true"
},
{
"key": "rest.port",
"value": "8081"
},
{
"key": "high-availability.zookeeper.quorum",
"value": "node1:2181,node2:2181,node3:2181"
},
{
"key": "internal.cluster.execution-mode",
"value": "NORMAL"
},
{
"key": "high-availability",
"value": "zookeeper"
},
{
"key": "rest.address",
"value": "node2"
},
{
"key": "taskmanager.heap.size",
"value": "1024m"
}
]
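The response is a list of key/value pairs rather than a flat object. A small sketch that turns it into a dict for easy lookup (sample entries taken from the response above):

```python
def config_to_dict(entries):
    """Convert the /jobmanager/config key/value list into a plain dict."""
    return {e["key"]: e["value"] for e in entries}

entries = [
    {"key": "rest.bind-port", "value": "50100-50200"},
    {"key": "rest.port", "value": "8081"},
]
conf = config_to_dict(entries)
print(conf["rest.bind-port"])  # -> 50100-50200
```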
6. View JobManager Metrics
API: GET /jobmanager/metrics
Parameters:
A. get: a comma-separated list of metrics, e.g. taskSlotsAvailable,taskSlotsTotal,Status.JVM.Memory.Heap.Used.
Request:
curl -X GET "http://node2:50100/v1/jobmanager/metrics?get=taskSlotsAvailable,taskSlotsTotal,Status.JVM.Memory.Heap.Used"|jq
Response:
[
{
"id": "taskSlotsAvailable",
"value": "0"
},
{
"id": "taskSlotsTotal",
"value": "2"
},
{
"id": "Status.JVM.Memory.Heap.Used",
"value": "62225192"
}
]
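All metrics endpoints below (/jobmanager/metrics, /taskmanagers/:id/metrics, /jobs/:jobid/metrics, ...) return the same [{"id": ..., "value": ...}] shape, with every value as a string. A sketch that converts numeric values where possible:

```python
def parse_metrics(entries):
    """Turn [{'id': ..., 'value': ...}] into a dict, converting numeric strings."""
    out = {}
    for e in entries:
        v = e["value"]
        try:
            v = float(v) if "." in v else int(v)
        except ValueError:
            pass  # keep non-numeric values (e.g. checkpoint paths) as strings
        out[e["id"]] = v
    return out

sample = [
    {"id": "taskSlotsTotal", "value": "2"},
    {"id": "Status.JVM.Memory.Heap.Used", "value": "62225192"},
]
print(parse_metrics(sample)["taskSlotsTotal"])  # -> 2
```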
7. List All TaskManagers
API: GET /taskmanagers
Request:
curl -X GET http://node2:50100/v1/taskmanagers|jq
Response:
{
"taskmanagers": [
{
"id": "container_e35_1568023533160_0015_01_000007",
"path": "akka.tcp://flink@node5:45124/user/taskmanager_0",
"dataPort": 41687,
"timeSinceLastHeartbeat": 1568519539011,
"slotsNumber": 2,
"freeSlots": 0,
"hardware": {
"cpuCores": 6,
"physicalMemory": 12131536896,
"freeMemory": 362283008,
"managedMemory": 264241152
}
}
]
}
8. View a Specific TaskManager
API: GET /taskmanagers/:taskmanagerid
Parameters:
A. taskmanagerid: an id returned by the /taskmanagers endpoint.
Request:
curl -X GET http://node2:50100/v1/taskmanagers/container_e35_1568023533160_0015_01_000007|jq
Response:
{
"id": "container_e35_1568023533160_0015_01_000007",
"path": "akka.tcp://flink@node5:45124/user/taskmanager_0",
"dataPort": 41687,
"timeSinceLastHeartbeat": 1568519639213,
"slotsNumber": 2,
"freeSlots": 0,
"hardware": {
"cpuCores": 6,
"physicalMemory": 12131536896,
"freeMemory": 362283008,
"managedMemory": 264241152
},
"metrics": {
"heapUsed": 78424048,
"heapCommitted": 361234432,
"heapMax": 361234432,
"nonHeapUsed": 86401760,
"nonHeapCommitted": 88735744,
"nonHeapMax": -1,
"directCount": 2206,
"directUsed": 67839321,
"directMax": 67839319,
"mappedCount": 0,
"mappedUsed": 0,
"mappedMax": 0,
"memorySegmentsAvailable": 2044,
"memorySegmentsTotal": 2048,
"garbageCollectors": [
{
"name": "PS_Scavenge",
"count": 9,
"time": 116
},
{
"name": "PS_MarkSweep",
"count": 3,
"time": 159
}
]
}
}
9. View Metrics of a TaskManager
API: GET /taskmanagers/:taskmanagerid/metrics
Parameters:
A. get: a comma-separated list of metrics, e.g. Status.JVM.Memory.Heap.Used,Status.JVM.Memory.NonHeap.Used.
Request:
curl -X GET "http://node2:50100/v1/taskmanagers/container_e35_1568023533160_0015_01_000007/metrics?get=Status.JVM.Memory.Heap.Used,Status.JVM.Memory.NonHeap.Used,Status.JVM.Memory.NonHeap.Max"|jq
Response:
[
{
"id": "Status.JVM.Memory.NonHeap.Used",
"value": "101535976"
},
{
"id": "Status.JVM.Memory.Heap.Used",
"value": "101347576"
},
{
"id": "Status.JVM.Memory.NonHeap.Max",
"value": "-1"
}
]
10. Upload a Jar
API: POST /jars/upload
Parameters:
A. jarfile: path to the jar file.
Request:
curl -X POST -H "Expect:" -F "jarfile=@/data/apps/flink-read-write-kafka-jar-with-dependencies.jar" http://node2:50100/v1/jars/upload|jq
Response:
{
"filename": "/tmp/flink-web-951aa23b-c01a-4026-a154-602538873f1d/flink-web-upload/74ae26fe-7973-4998-8594-23d4e357c577_flink-read-write-kafka-jar-with-dependencies.jar",
"status": "success"
}
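In the responses above and below, the id that later /jars/:jarid calls expect is the basename of the filename returned by the upload. A sketch extracting it (sample data from the upload response above):

```python
import posixpath

def jar_id_from_upload(resp):
    """Extract the jar id (basename of the uploaded file path) from /jars/upload."""
    return posixpath.basename(resp["filename"])

resp = {
    "filename": "/tmp/flink-web-951aa23b-c01a-4026-a154-602538873f1d/flink-web-upload/"
                "74ae26fe-7973-4998-8594-23d4e357c577_flink-read-write-kafka-jar-with-dependencies.jar",
    "status": "success",
}
print(jar_id_from_upload(resp))
# -> 74ae26fe-7973-4998-8594-23d4e357c577_flink-read-write-kafka-jar-with-dependencies.jar
```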
11. List All Jars
API: GET /jars
Request:
curl -X GET http://node2:50100/v1/jars|jq
Response:
{
"address": "http://node2:50100",
"files": [
{
# unique identifier of the jar (jarid)
"id": "74ae26fe-7973-4998-8594-23d4e357c577_flink-read-write-kafka-jar-with-dependencies.jar",
"name": "flink-read-write-kafka-jar-with-dependencies.jar",
"uploaded": 1568469312000,
"entry": []
}
]
}
12. Delete a Jar
API: DELETE /jars/:jarid
Parameters:
jarid: the unique identifier of the jar.
Request:
curl -X DELETE http://node2:50100/v1/jars/74ae26fe-7973-4998-8594-23d4e357c577_flink-read-write-kafka-jar-with-dependencies.jar|jq
Response:
{}
13. Preview a Job's Dataflow Plan from Input Parameters
Both query-parameter and JSON-request styles are supported; only the JSON request style (recommended) is covered here.
API: GET /jars/:jarid/plan
Parameters:
A. jarid: the unique identifier of the jar.
B. programArgsList: list of program arguments.
C. entryClass: the entry class.
D. parallelism: the parallelism.
Request:
curl -X GET -H 'Content-Type: application/json' --data '
{
"programArgsList": [
"--fromKafkaBrokers",
"node1:6667,node2:6667,node3:6667",
"--fromKafkaGroup",
"c4",
"--fromKafkaTopic",
"userActionLog",
"--toKafkaBrokers",
"node5:9092",
"--toKafkaTopic",
"realtime_dashboard"
],
"entryClass": "com.bigdata.flink.ReadWriteKafka",
"parallelism": 2
}
' http://node2:50100/v1/jars/7184ba9f-b160-439c-b3cb-bd2fa61ca704_flink-read-write-kafka-jar-with-dependencies.jar/plan|jq
Response:
{
"plan": {
"jid": "06b5f5d9a56646f54721b4421b59f811",
"name": "",
"nodes": [
{
"id": "09078a2a0d570fc10e5b089eeccaaa4b",
"parallelism": 2,
"operator": "",
"operator_strategy": "",
"description": "Window(TumblingEventTimeWindows(5000), EventTimeTrigger, CustomAggFunction, CustomWindowFunction) -> (Sink: Print to Std. Out, Sink: outputToKafka)",
"inputs": [
{
"num": 0,
"id": "bd9c186e89cc37a76d776844dfa8ccba",
"ship_strategy": "HASH",
"exchange": "pipelined_bounded"
}
],
"optimizer_properties": {}
},
{
"id": "bd9c186e89cc37a76d776844dfa8ccba",
"parallelism": 2,
"operator": "",
"operator_strategy": "",
"description": "Source: KafkaSource -> Map: ExtractTransform -> Filter: FilterExceptionData -> Timestamps/Watermarks",
"optimizer_properties": {}
}
]
}
}
14. Submit a New Job or Restore a Job from State
Both query-parameter and JSON-request styles are supported; only the JSON request style (recommended) is covered here.
API: POST /jars/:jarid/run
Parameters:
A. jarid: the unique identifier of the jar.
B. programArgsList: list of program arguments.
C. entryClass: the entry class.
D. parallelism: the parallelism.
E. allowNonRestoredState: whether submission is allowed when some saved state cannot be mapped onto the new job.
F. savepointPath: path of the savepoint to restore from.
Request:
curl -X POST -H 'Content-Type: application/json' --data '
{
"programArgsList": [
"--fromKafkaBrokers",
"node1:6667,node2:6667,node3:6667",
"--fromKafkaGroup",
"c4",
"--fromKafkaTopic",
"userActionLog",
"--toKafkaBrokers",
"node5:9092",
"--toKafkaTopic",
"realtime_dashboard"
],
"entryClass": "com.bigdata.flink.ReadWriteKafka",
"parallelism": 2,
"allowNonRestoredState":true,
"savepointPath":""
}
' http://node2:50100/v1/jars/7184ba9f-b160-439c-b3cb-bd2fa61ca704_flink-read-write-kafka-jar-with-dependencies.jar/run|jq
Response:
{
"jobid": "a5e6a799c9b358817e8a5d21534d6948"
}
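The JSON body for /jars/:jarid/run can be assembled programmatically. A minimal sketch using the field names from the request above; as a design choice it only includes the restore-related fields when a savepoint path is actually given, so a fresh submission stays a fresh submission:

```python
import json

def build_run_payload(program_args, entry_class, parallelism,
                      savepoint_path=None, allow_non_restored=False):
    """Build the JSON body for POST /jars/:jarid/run."""
    payload = {
        "programArgsList": list(program_args),
        "entryClass": entry_class,
        "parallelism": parallelism,
    }
    if savepoint_path is not None:  # only present when restoring from a savepoint
        payload["savepointPath"] = savepoint_path
        payload["allowNonRestoredState"] = allow_non_restored
    return payload

body = build_run_payload(
    ["--fromKafkaTopic", "userActionLog"],
    "com.bigdata.flink.ReadWriteKafka",
    2,
)
print(json.dumps(body))
```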
15. List All Jobs
API: GET /jobs
Request:
curl -X GET http://node2:50100/v1/jobs|jq
Response:
{
"jobs": [
{
"id": "a5e6a799c9b358817e8a5d21534d6948",
"status": "RUNNING"
},
{
"id": "33b12aef7b474f306e59938ac4d9c32e",
"status": "CANCELED"
},
{
"id": "19fe28e89dbd5550b4c3510894204027",
"status": "CANCELED"
},
{
"id": "1ecad592ae428d685b6f8c7c8bec362a",
"status": "CANCELED"
}
]
}
16. Cancel a Job
API: PATCH /jobs/:jobid
Request:
curl -X PATCH http://node2:50100/v1/jobs/a5e6a799c9b358817e8a5d21534d6948|jq
Response:
{}
17. View Job Details
API: GET /jobs/:jobid
Request:
curl -X GET http://node2:50100/v1/jobs/a5e6a799c9b358817e8a5d21534d6948|jq
Response:
{
"jid": "a5e6a799c9b358817e8a5d21534d6948",
"name": "",
"isStoppable": false,
"state": "RUNNING",
"start-time": 1568513921599,
"end-time": -1,
"duration": 1454068,
"now": 1568515375667,
"timestamps": {
"FAILING": 0,
"CANCELED": 0,
"FINISHED": 0,
"SUSPENDED": 0,
"CREATED": 1568513921599,
"FAILED": 0,
"RECONCILING": 0,
"RESTARTING": 0,
"RUNNING": 1568513921638,
"CANCELLING": 0
},
"vertices": [
{
"id": "bd9c186e89cc37a76d776844dfa8ccba",
"name": "Source: KafkaSource -> Map: ExtractTransform -> Filter: FilterExceptionData -> Timestamps/Watermarks",
"parallelism": 2,
"status": "RUNNING",
"start-time": 1568515358914,
"end-time": -1,
"duration": 16753,
"tasks": {
"CREATED": 0,
"RUNNING": 2,
"FINISHED": 0,
"SCHEDULED": 0,
"FAILED": 0,
"RECONCILING": 0,
"DEPLOYING": 0,
"CANCELING": 0,
"CANCELED": 0
},
"metrics": {
"read-bytes": 0,
"read-bytes-complete": true,
"write-bytes": 0,
"write-bytes-complete": true,
"read-records": 0,
"read-records-complete": true,
"write-records": 605,
"write-records-complete": true
}
},
{
"id": "09078a2a0d570fc10e5b089eeccaaa4b",
"name": "Window(TumblingEventTimeWindows(5000), EventTimeTrigger, CustomAggFunction, CustomWindowFunction) -> (Sink: Print to Std. Out, Sink: outputToKafka)",
"parallelism": 2,
"status": "RUNNING",
"start-time": 1568515358914,
"end-time": -1,
"duration": 16753,
"tasks": {
"CREATED": 0,
"RUNNING": 2,
"FINISHED": 0,
"SCHEDULED": 0,
"FAILED": 0,
"RECONCILING": 0,
"DEPLOYING": 0,
"CANCELING": 0,
"CANCELED": 0
},
"metrics": {
"read-bytes": 30983,
"read-bytes-complete": true,
"write-bytes": 0,
"write-bytes-complete": true,
"read-records": 605,
"read-records-complete": true,
"write-records": 0,
"write-records-complete": true
}
}
],
"status-counts": {
"CREATED": 0,
"RUNNING": 2,
"FINISHED": 0,
"SCHEDULED": 0,
"FAILED": 0,
"RECONCILING": 0,
"DEPLOYING": 0,
"CANCELING": 0,
"CANCELED": 0
},
"plan": {
"jid": "a5e6a799c9b358817e8a5d21534d6948",
"name": "",
"nodes": [
{
"id": "09078a2a0d570fc10e5b089eeccaaa4b",
"parallelism": 2,
"operator": "",
"operator_strategy": "",
"description": "Window(TumblingEventTimeWindows(5000), EventTimeTrigger, CustomAggFunction, CustomWindowFunction) -> (Sink: Print to Std. Out, Sink: outputToKafka)",
"inputs": [
{
"num": 0,
"id": "bd9c186e89cc37a76d776844dfa8ccba",
"ship_strategy": "HASH",
"exchange": "pipelined_bounded"
}
],
"optimizer_properties": {}
},
{
"id": "bd9c186e89cc37a76d776844dfa8ccba",
"parallelism": 2,
"operator": "",
"operator_strategy": "",
"description": "Source: KafkaSource -> Map: ExtractTransform -> Filter: FilterExceptionData -> Timestamps/Watermarks",
"optimizer_properties": {}
}
]
}
}
18. View a Job's Dataflow Plan
API: GET /jobs/:jobid/plan
Request:
curl -X GET http://node2:50100/v1/jobs/12c848465ef964bba9d1b1ddf558570a/plan|jq
Response:
{
"plan": {
"jid": "12c848465ef964bba9d1b1ddf558570a",
"name": "",
"nodes": [
{
"id": "09078a2a0d570fc10e5b089eeccaaa4b",
"parallelism": 2,
"operator": "",
"operator_strategy": "",
"description": "Window(TumblingEventTimeWindows(5000), EventTimeTrigger, CustomAggFunction, CustomWindowFunction) -> (Sink: Print to Std. Out, Sink: outputToKafka)",
"inputs": [
{
"num": 0,
"id": "bd9c186e89cc37a76d776844dfa8ccba",
"ship_strategy": "HASH",
"exchange": "pipelined_bounded"
}
],
"optimizer_properties": {}
},
{
"id": "bd9c186e89cc37a76d776844dfa8ccba",
"parallelism": 2,
"operator": "",
"operator_strategy": "",
"description": "Source: KafkaSource -> Map: ExtractTransform -> Filter: FilterExceptionData -> Timestamps/Watermarks",
"optimizer_properties": {}
}
]
}
}
19. View Job Metrics
API: GET /jobs/:jobid/metrics
Parameters:
get: a comma-separated list of metrics, e.g. uptime,lastCheckpointExternalPath.
Request:
curl -X GET "http://node2:50100/v1/jobs/12c848465ef964bba9d1b1ddf558570a/metrics?get=uptime,lastCheckpointExternalPath"|jq
Response:
[
{
"id": "lastCheckpointExternalPath",
"value": "hdfs://node1:8020/flink/checkpoint/12c848465ef964bba9d1b1ddf558570a/chk-143"
},
{
"id": "uptime",
"value": "1437983"
}
]
20. View Non-Recoverable Job Exceptions
API: GET /jobs/:jobid/exceptions
Request:
curl -X GET http://node2:50100/v1/jobs/12c848465ef964bba9d1b1ddf558570a/exceptions|jq
Response:
{
"root-exception": null,
"timestamp": null,
"all-exceptions": [],
"truncated": false
}
21. View a Job's Checkpoint Configuration
API: GET /jobs/:jobid/checkpoints/config
Request:
curl -X GET http://node2:50100/v1/jobs/12c848465ef964bba9d1b1ddf558570a/checkpoints/config|jq
Response:
{
"mode": "exactly_once",
"interval": 10000,
"timeout": 600000,
"min_pause": 0,
"max_concurrent": 1,
"externalization": {
"enabled": true,
"delete_on_cancellation": false
}
}
22. View a Job's Checkpoints
API: GET /jobs/:jobid/checkpoints
Request:
curl -X GET http://node2:50100/v1/jobs/12c848465ef964bba9d1b1ddf558570a/checkpoints|jq
Response:
{
"counts": {
"restored": 0,
"total": 2,
"in_progress": 0,
"completed": 2,
"failed": 0
},
"summary": {
"state_size": {
"min": 13518,
"max": 13640,
"avg": 13579
},
"end_to_end_duration": {
"min": 456,
"max": 467,
"avg": 461
},
"alignment_buffered": {
"min": 0,
"max": 0,
"avg": 0
}
},
"latest": {
"completed": {
"@class": "completed",
"id": 2,
"status": "COMPLETED",
"is_savepoint": false,
"trigger_timestamp": 1568516147296,
"latest_ack_timestamp": 1568516147763,
"state_size": 13640,
"end_to_end_duration": 467,
"alignment_buffered": 0,
"num_subtasks": 4,
"num_acknowledged_subtasks": 4,
"tasks": {},
"external_path": "hdfs://node1:8020/flink/checkpoint/12c848465ef964bba9d1b1ddf558570a/chk-2",
"discarded": false
},
"savepoint": null,
"failed": null,
"restored": null
},
"history": [
{
"@class": "completed",
"id": 2,
"status": "COMPLETED",
"is_savepoint": false,
"trigger_timestamp": 1568516147296,
"latest_ack_timestamp": 1568516147763,
"state_size": 13640,
"end_to_end_duration": 467,
"alignment_buffered": 0,
"num_subtasks": 4,
"num_acknowledged_subtasks": 4,
"tasks": {},
"external_path": "hdfs://node1:8020/flink/checkpoint/12c848465ef964bba9d1b1ddf558570a/chk-2",
"discarded": false
},
{
"@class": "completed",
"id": 1,
"status": "COMPLETED",
"is_savepoint": false,
"trigger_timestamp": 1568516137296,
"latest_ack_timestamp": 1568516137752,
"state_size": 13518,
"end_to_end_duration": 456,
"alignment_buffered": 0,
"num_subtasks": 4,
"num_acknowledged_subtasks": 4,
"tasks": {},
"external_path": "hdfs://node1:8020/flink/checkpoint/12c848465ef964bba9d1b1ddf558570a/chk-1",
"discarded": false
}
]
}
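When restarting a job from its most recent state, the field of interest is latest.completed.external_path. A defensive sketch over the response shape above (returns None if no completed checkpoint exists or it has been discarded):

```python
def latest_checkpoint_path(resp):
    """Return the external path of the latest completed checkpoint, or None."""
    completed = (resp.get("latest") or {}).get("completed")
    if completed and not completed.get("discarded"):
        return completed.get("external_path")
    return None

resp = {"latest": {"completed": {
    "external_path": "hdfs://node1:8020/flink/checkpoint/12c848465ef964bba9d1b1ddf558570a/chk-2",
    "discarded": False,
}}}
print(latest_checkpoint_path(resp))  # -> hdfs://.../chk-2
```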
23. View Details of a Specific Checkpoint
API: GET /jobs/:jobid/checkpoints/details/:checkpointid
Request:
curl -X GET http://node2:50100/v1/jobs/12c848465ef964bba9d1b1ddf558570a/checkpoints/details/33|jq
Response:
{
"@class": "completed",
"id": 33,
"status": "COMPLETED",
"is_savepoint": false,
"trigger_timestamp": 1568516457296,
"latest_ack_timestamp": 1568516458382,
"state_size": 13700,
"end_to_end_duration": 1086,
"alignment_buffered": 0,
"num_subtasks": 4,
"num_acknowledged_subtasks": 4,
"tasks": {
"bd9c186e89cc37a76d776844dfa8ccba": {
"id": 33,
"status": "COMPLETED",
"latest_ack_timestamp": 1568516457349,
"state_size": 2508,
"end_to_end_duration": 53,
"alignment_buffered": 0,
"num_subtasks": 2,
"num_acknowledged_subtasks": 2
},
"09078a2a0d570fc10e5b089eeccaaa4b": {
"id": 33,
"status": "COMPLETED",
"latest_ack_timestamp": 1568516458382,
"state_size": 11192,
"end_to_end_duration": 1086,
"alignment_buffered": 0,
"num_subtasks": 2,
"num_acknowledged_subtasks": 2
}
},
"external_path": "hdfs://node1:8020/flink/checkpoint/12c848465ef964bba9d1b1ddf558570a/chk-33",
"discarded": false
}
24. Trigger a Savepoint and Cancel the Job
API: POST /jobs/:jobid/savepoints
Request:
curl -X POST -H 'Content-Type: application/json' --data '
{
"target-directory":"hdfs://node1:8020/flink/savepoint/",
"cancel-job":true
}
' http://node2:50100/v1/jobs/12c848465ef964bba9d1b1ddf558570a/savepoints|jq
Response:
{
"request-id": "7cc8e3cf54cc9d3c3465102ce2e44ef0"
}
25. Check the Status of a Triggered Savepoint
API: GET /jobs/:jobid/savepoints/:triggerid
Parameters:
A. jobid: the job ID.
B. triggerid: the request-id returned by POST /jobs/:jobid/savepoints.
Request:
curl -X GET http://node2:50100/v1/jobs/12c848465ef964bba9d1b1ddf558570a/savepoints/7cc8e3cf54cc9d3c3465102ce2e44ef0|jq
Response:
{
"status": {
"id": "COMPLETED"
},
"operation": {
"location": "hdfs://node1:8020/flink/savepoint/savepoint-12c848-69571045e0b3"
}
}
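Savepoint triggering is asynchronous: the POST in section 24 returns only a request-id, and the status endpoint above must be polled until status.id becomes COMPLETED. A sketch of the polling loop with the HTTP call injected as a callable, so it can be stubbed here (a real client would issue the GET shown above and parse the JSON):

```python
import time

def wait_for_savepoint(fetch_status, poll_interval=1.0, max_polls=60):
    """Poll the savepoint status endpoint until COMPLETED; return the location."""
    for _ in range(max_polls):
        resp = fetch_status()  # injected: performs the GET and returns parsed JSON
        if resp["status"]["id"] == "COMPLETED":
            return resp["operation"]["location"]
        time.sleep(poll_interval)
    raise TimeoutError("savepoint did not complete in time")

# Stubbed fetcher: IN_PROGRESS once, then COMPLETED (shapes from the responses above).
responses = iter([
    {"status": {"id": "IN_PROGRESS"}},
    {"status": {"id": "COMPLETED"},
     "operation": {"location": "hdfs://node1:8020/flink/savepoint/savepoint-12c848-69571045e0b3"}},
])
print(wait_for_savepoint(lambda: next(responses), poll_interval=0.0))
```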
26. View a Job's Accumulators
API: GET /jobs/:jobid/accumulators
Request:
curl -X GET http://node2:50100/v1/jobs/dc68ddeffded6119d7395c814b9aff17/accumulators|jq
Response:
{
"job-accumulators": [],
"user-task-accumulators": [],
"serialized-user-task-accumulators": {}
}
27. View All Subtasks of a Job Vertex
API: GET /jobs/:jobid/vertices/:vertexid
Parameters:
A. vertexid: an id from the vertices array returned by /jobs/:jobid.
Request:
curl -X GET http://node2:50100/v1/jobs/ff9d383001652ab54e4970965fa386ee/vertices/bd9c186e89cc37a76d776844dfa8ccba|jq
Response:
{
"id": "bd9c186e89cc37a76d776844dfa8ccba",
"name": "Source: KafkaSource -> Map: ExtractTransform -> Filter: FilterExceptionData -> Timestamps/Watermarks",
"parallelism": 2,
"now": 1568523887783,
"subtasks": [
{
"subtask": 0,
"status": "RUNNING",
"attempt": 0,
"host": "node5:41687",
"start-time": 1568519471838,
"end-time": -1,
"duration": 4415945,
"metrics": {
"read-bytes": 0,
"read-bytes-complete": true,
"write-bytes": 113598,
"write-bytes-complete": true,
"read-records": 0,
"read-records-complete": true,
"write-records": 1486,
"write-records-complete": true
},
"start_time": 1568519471838
},
{
"subtask": 1,
"status": "RUNNING",
"attempt": 0,
"host": "node5:41687",
"start-time": 1568519471840,
"end-time": -1,
"duration": 4415943,
"metrics": {
"read-bytes": 0,
"read-bytes-complete": true,
"write-bytes": 223023,
"write-bytes-complete": true,
"read-records": 0,
"read-records-complete": true,
"write-records": 2919,
"write-records-complete": true
},
"start_time": 1568519471840
}
]
}
28. View Metrics of a Job Vertex
API: GET /jobs/:jobid/vertices/:vertexid/metrics
Parameters:
A. vertexid: an id from the vertices array returned by /jobs/:jobid.
B. get: a comma-separated list of metric names, each prefixed with the subtask index.
Request:
curl -X GET "http://node2:50100/v1/jobs/ff9d383001652ab54e4970965fa386ee/vertices/bd9c186e89cc37a76d776844dfa8ccba/metrics?get=0.numRecordsInPerSecond,0.buffers.inputQueueLength,0.buffers.outputQueueLength,1.numRecordsInPerSecond,1.buffers.inputQueueLength,1.buffers.outputQueueLength"|jq
Response:
[
{
"id": "0.buffers.inputQueueLength",
"value": "0"
},
{
"id": "1.numRecordsInPerSecond",
"value": "0.0"
},
{
"id": "0.numRecordsInPerSecond",
"value": "0.0"
},
{
"id": "1.buffers.inputQueueLength",
"value": "0"
},
{
"id": "0.buffers.outputQueueLength",
"value": "2"
},
{
"id": "1.buffers.outputQueueLength",
"value": "2"
}
]
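As the request above shows, vertex-level metric names carry the subtask index as a prefix (0.numRecordsInPerSecond, 1.buffers.inputQueueLength, ...). A sketch that builds the get parameter for a given parallelism:

```python
def vertex_metrics_query(metric_names, parallelism):
    """Build the comma-separated get parameter with per-subtask prefixes."""
    return ",".join(
        f"{i}.{name}" for i in range(parallelism) for name in metric_names
    )

print(vertex_metrics_query(["numRecordsInPerSecond", "buffers.inputQueueLength"], 2))
# -> 0.numRecordsInPerSecond,0.buffers.inputQueueLength,1.numRecordsInPerSecond,1.buffers.inputQueueLength
```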
29. View a Specific Subtask of a Job Vertex
API: GET /jobs/:jobid/vertices/:vertexid/subtasks/:subtaskindex
Parameters:
A. vertexid: an id from the vertices array returned by /jobs/:jobid.
B. subtaskindex: the subtask index from the /jobs/:jobid/vertices/:vertexid endpoint.
Request:
curl -X GET http://node2:50100/v1/jobs/ff9d383001652ab54e4970965fa386ee/vertices/bd9c186e89cc37a76d776844dfa8ccba/subtasks/0|jq
Response:
{
"subtask": 0,
"status": "RUNNING",
"attempt": 0,
"host": "node5",
"start-time": 1568519471838,
"end-time": -1,
"duration": 5682174,
"metrics": {
"read-bytes": 0,
"read-bytes-complete": true,
"write-bytes": 145817,
"write-bytes-complete": true,
"read-records": 0,
"read-records-complete": true,
"write-records": 1907,
"write-records-complete": true
}
}
30. View Metrics of a Specific Subtask
API: GET /jobs/:jobid/vertices/:vertexid/subtasks/:subtaskindex/metrics
Parameters:
A. vertexid: an id from the vertices array returned by /jobs/:jobid.
B. subtaskindex: the subtask index from the /jobs/:jobid/vertices/:vertexid endpoint.
C. get: a comma-separated list of metrics.
Request:
curl -X GET "http://node2:50100/v1/jobs/ff9d383001652ab54e4970965fa386ee/vertices/bd9c186e89cc37a76d776844dfa8ccba/subtasks/0/metrics?get=Timestamps/Watermarks.currentOutputWatermark,Source__KafkaSource.records-consumed-rate"|jq
Response:
[
{
"id": "Source__KafkaSource.records-consumed-rate",
"value": "0.38687440649949"
},
{
"id": "Timestamps/Watermarks.currentOutputWatermark",
"value": "1568527125000"
}
]
31. View a Specific Execution Attempt of a Subtask
API: GET /jobs/:jobid/vertices/:vertexid/subtasks/:subtaskindex/attempts/:attempt
Parameters:
A. vertexid: an id from the vertices array returned by /jobs/:jobid.
B. subtaskindex: the subtask index from the /jobs/:jobid/vertices/:vertexid endpoint.
C. attempt: the attempt number from the /jobs/:jobid/vertices/:vertexid/subtasks/:subtaskindex endpoint.
Request:
curl -X GET http://node2:50100/v1/jobs/ff9d383001652ab54e4970965fa386ee/vertices/bd9c186e89cc37a76d776844dfa8ccba/subtasks/1/attempts/0|jq
Response:
{
"subtask": 0,
"status": "RUNNING",
"attempt": 0,
"host": "node5",
"start-time": 1568519471838,
"end-time": -1,
"duration": 5682174,
"metrics": {
"read-bytes": 0,
"read-bytes-complete": true,
"write-bytes": 145817,
"write-bytes-complete": true,
"read-records": 0,
"read-records-complete": true,
"write-records": 1907,
"write-records-complete": true
}
}