1. Modify the Flink configuration to expose Flink's metrics for monitoring
### --- Copy the Prometheus reporter jar into Flink's lib directory
~~~ # Copy the Prometheus reporter jar into Flink's lib directory
~~~ # The jar ships with Flink under the plugins directory: /opt/yanqi/servers/flink-1.11.1/plugins/metrics-prometheus/flink-metrics-prometheus-1.11.1.jar
[root@hadoop01 ~]# cp /opt/yanqi/servers/flink-1.11.1/plugins/metrics-prometheus/flink-metrics-prometheus-1.11.1.jar \
/opt/yanqi/servers/flink-1.11.1/lib/
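~~~ # (Optional) Sanity check, assuming the paths above: confirm the jar now sits in Flink's lib directory
[root@hadoop01 ~]# ls -l /opt/yanqi/servers/flink-1.11.1/lib/flink-metrics-prometheus-1.11.1.jar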
### --- Edit flink-conf.yaml:
~~~ # Edit the flink-conf.yaml configuration file
[root@hadoop01 ~]# vim /opt/yanqi/servers/flink-1.11.1/conf/flink-conf.yaml
~~~ # Append the following settings at the end of the file; hadoop00 is the hostname of the machine running Prometheus and Pushgateway
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: hadoop00
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: myJob
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: false
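~~~ # (Optional) Before restarting Flink, verify that this host can reach the Pushgateway port configured
~~~ # above; this assumes nc is installed and Pushgateway is already listening on hadoop00:9091.
[root@hadoop01 ~]# nc -zv hadoop00 9091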
### --- Distribute the files to the other hosts
~~~ # Send the Prometheus reporter jar to the other hosts
[root@hadoop01 ~]# rsync-script /opt/yanqi/servers/flink-1.11.1/lib/flink-metrics-prometheus-1.11.1.jar
~~~ # Sync the flink-conf.yaml configuration file to the other hosts
[root@hadoop01 ~]# rsync-script /opt/yanqi/servers/flink-1.11.1/conf/flink-conf.yaml
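~~~ # (Optional) Spot-check that the settings arrived on another node; hadoop03 is used here only as an
~~~ # example of one of the other hosts in the cluster.
[root@hadoop01 ~]# ssh hadoop03 grep promgateway /opt/yanqi/servers/flink-1.11.1/conf/flink-conf.yaml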
2. Start the Flink standalone cluster
### --- Start the Flink standalone cluster
~~~ # Start the Flink standalone cluster (the HDFS and YARN services should already be running)
[root@hadoop01 ~]# /opt/yanqi/servers/flink-1.11.1/bin/start-cluster.sh
~~~ # Start a socket input window (netcat listening on port 7777)
[root@hadoop01 ~]# nc -lp 7777
~~~ # Write some input data
hello prometheus
~~~ # Start a YARN session
[root@hadoop01 ~]# /opt/yanqi/servers/flink-1.11.1/bin/yarn-session.sh -s 1 -jm 1024 -tm 1024m
~~~ # Output:
[] - Submitting application master application_1638345082205_0001
JobManager Web Interface: http://hadoop03:42033
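~~~ # (Optional) The session's JobManager REST API can be queried directly; the host and port above are
~~~ # assigned per session, so substitute whatever your own yarn-session.sh output prints.
[root@hadoop01 ~]# curl http://hadoop03:42033/overview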
~~~ # Submit the Flink job to the YARN session
[root@hadoop01 ~]# /opt/yanqi/servers/flink-1.11.1/bin/flink run \
-c WordCountScalaStream -yid application_1638345082205_0001 /root/myjars/FirstFlink-1.0-SNAPSHOT.jar
~~~ # Output:
Job has been submitted with JobID 391c28db1d8a2303234373b4f1de8e9f
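~~~ # For reference only: the jar's source is not shown in this walkthrough. Below is a minimal sketch of
~~~ # what a WordCountScalaStream entry point could look like, assuming it counts words read from the
~~~ # netcat socket opened above (host hadoop01 and port 7777 are assumptions based on the nc command).

import org.apache.flink.streaming.api.scala._

object WordCountScalaStream {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Read lines from the netcat socket started earlier (nc -lp 7777)
    val text: DataStream[String] = env.socketTextStream("hadoop01", 7777)

    // Split each line into words and keep a running count per word
    val counts = text
      .flatMap(_.toLowerCase.split("\\s+"))
      .filter(_.nonEmpty)
      .map((_, 1))
      .keyBy(_._1)
      .sum(1)

    counts.print()

    env.execute("WordCountScalaStream")
  }
}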
3. Check whether Pushgateway has received the metrics
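~~~ # On the command line, the metrics pushed by Flink can be listed from Pushgateway's /metrics endpoint;
~~~ # this assumes Pushgateway is running on hadoop00:9091 as configured in flink-conf.yaml above.
[root@hadoop01 ~]# curl -s http://hadoop00:9091/metrics | grep flink
~~~ # Alternatively, open http://hadoop00:9091 in a browser; with randomJobNameSuffix enabled, the push
~~~ # groups will have job names starting with myJob followed by a random suffix.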