Submitting in yarn-per-job mode:
/opt/flink-1.13.2/bin/flink run -t yarn-per-job --detached \
-c com.keyo163.etl.order.OrderFinanceDetail \
-Dyarn.application.name=order-finance \
/dist/cgzd-flink-sql-order/cgzd-flink-sql-order-1.0.jar \
/dist/cgzd-flink-sql-order/config/properties
Submitting in yarn-cluster (legacy) mode:
/opt/flink-1.13.2/bin/flink run \
-m yarn-cluster \
-ynm AdviserRealtimeBusinessCountApp0 \
-yjm 1024 \
-ytm 1024 \
-ys 1 \
--detached \
-yD env.java.opts="-Dfile.encoding=UTF-8 -Dsun.jnu.encoding=UTF-8" \
-c com.keyto163.app.dws.AdviserRealtimeBusinessCountApp \
/dist/cgzd-realtime-warehouse/jar/streaming-project-1.0.jar \
/dist/cgzd-realtime-warehouse/conf/pro.properties
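The job name does not have to be hard-coded: with the generic CLI (`-t yarn-per-job`), Flink's `pipeline.name` option can also be passed as a dynamic property at submit time. A sketch reusing the paths and class from the first command above (the name `order-finance-job` is illustrative):

```shell
# Sketch: set the Flink job name from the CLI via pipeline.name
# yarn.application.name names the YARN application;
# pipeline.name names the Flink job shown in the web UI
/opt/flink-1.13.2/bin/flink run -t yarn-per-job --detached \
-Dyarn.application.name=order-finance \
-Dpipeline.name=order-finance-job \
-c com.keyo163.etl.order.OrderFinanceDetail \
/dist/cgzd-flink-sql-order/cgzd-flink-sql-order-1.0.jar \
/dist/cgzd-flink-sql-order/config/properties
```

Note that a name set in code via `pipeline.name` overrides one passed on the command line, since the code runs later.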
To give a Flink SQL job its own job name, add the following in code:
Flink SQL API
// 1.2 Set the job name (JobName is a String variable holding the desired name)
Configuration configuration = tableEnv.getConfig().getConfiguration();
configuration.setString("pipeline.name", JobName);
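For context, a minimal self-contained sketch showing where the snippet above sits in a Flink 1.13 Table API program (the class name and the literal job name are illustrative, not from the original project):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class JobNameExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Set the job name shown in the Flink web UI and in YARN
        Configuration configuration = tableEnv.getConfig().getConfiguration();
        configuration.setString("pipeline.name", "order-finance-job");

        // ... register tables and run SQL statements here ...
    }
}
```

The setting must be applied before the job is submitted (i.e. before `executeSql`/`execute` triggers execution), otherwise the default generated name is used.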
DataStream API: pass the name directly to execute()
env.execute("ods Kafka Source");