1. Lately I have been working on a platform project with a separated front end and back end. In short, it glues various components together, so submitting and managing jobs is all done through the platform.
2. The feature implemented here is quite simple: from code, kill a Flink job running on YARN and automatically generate a savepoint at the same time.
3. The parameters we need to supply are:
1) The YARN application id
String appId = "application_1600222031782_0023";
2) The Flink job's jobId
String jobid = "c4d7e2ff6a35d402eaf54b9f9ca0f6c6";
3) The target savepoint directory
String savePoint = "hdfs://dev-ct6-dc-master01:8020/flink-savepoints5";
Maven dependency (pom.xml):
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-yarn_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
</dependency>
If it does not work, set the HADOOP_HOME environment variable for the process that executes the task (the check below is only for reference).
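A minimal pre-flight check along those lines, assuming the standard HADOOP_HOME / HADOOP_CONF_DIR variables are what the YARN client will look for (the check itself is only a sketch):

// Sketch: fail fast if the Hadoop client environment is not visible to this JVM.
String hadoopHome = System.getenv("HADOOP_HOME");
String hadoopConfDir = System.getenv("HADOOP_CONF_DIR");
if (hadoopHome == null && hadoopConfDir == null) {
    throw new IllegalStateException("Set HADOOP_HOME or HADOOP_CONF_DIR so the YARN client can locate the cluster configuration");
}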

4. Code (written against the Flink 1.10/1.11 YARN client API that the imports below correspond to; the cancelWithSavepoint signature may differ on newer Flink versions):
import org.apache.flink.api.common.JobID;
import org.apache.flink.client.program.ClusterClient;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.flink.yarn.YarnClusterClientFactory;
import org.apache.flink.yarn.YarnClusterDescriptor;
import org.apache.flink.yarn.configuration.YarnConfigOptions;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import java.util.concurrent.CompletableFuture;

public class StopYarnJob {
    public static void main(String[] args) throws Exception {
        String appId = "application_1600222031782_0023";
        String jobid = "c4d7e2ff6a35d402eaf54b9f9ca0f6c6";
        String savePoint = "hdfs://dev-ct6-dc-master01:8020/flink-savepoints5";
        // HADOOP_HOME is an environment variable, so read it with getenv rather than getProperty.
        System.out.println("HADOOP_HOME = " + System.getenv("HADOOP_HOME"));

        // Load flink-conf.yaml and point the client at the running YARN application.
        Configuration configuration = GlobalConfiguration.loadConfiguration();
        configuration.set(YarnConfigOptions.APPLICATION_ID, appId);

        // Resolve the YARN session and obtain a ClusterClient for it.
        YarnClusterClientFactory clusterClientFactory = new YarnClusterClientFactory();
        ApplicationId applicationId = clusterClientFactory.getClusterId(configuration);
        try (YarnClusterDescriptor clusterDescriptor = clusterClientFactory.createClusterDescriptor(configuration)) {
            ClusterClient<ApplicationId> clusterClient = clusterDescriptor.retrieve(applicationId).getClusterClient();

            // Trigger a savepoint, then cancel the job; the future resolves to the actual savepoint path.
            CompletableFuture<String> savepointFuture = clusterClient.cancelWithSavepoint(JobID.fromHexString(jobid), savePoint);
            System.out.println("Savepoint stored at: " + savepointFuture.get());
        }
    }
}

In summary, this article showed how, in a platform-style project, to kill a Flink job running on YARN from code and automatically trigger a savepoint. The main steps are setting the YARN application id, the Flink jobId, and the savepoint path, adding the dependency, and then running a test flow that verifies the savepoint takes effect, so the job's state can be restored seamlessly.
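The test flow itself is not shown in this excerpt; as a rough sanity check before resubmitting the job, one could confirm that the returned savepoint path really exists on HDFS. A sketch using the standard Hadoop FileSystem API, assuming savepointPath holds the string returned by cancelWithSavepoint:

// Sketch: confirm the savepoint directory was written before restarting the job from it.
org.apache.hadoop.conf.Configuration hadoopConf = new org.apache.hadoop.conf.Configuration();
org.apache.hadoop.fs.Path path = new org.apache.hadoop.fs.Path(savepointPath);
org.apache.hadoop.fs.FileSystem fs = path.getFileSystem(hadoopConf);
System.out.println("Savepoint exists on HDFS: " + fs.exists(path));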