1. (Ineffective) Edited flink-conf.yaml, setting state.backend: filesystem and state.checkpoints.dir: file:///root/flink/cp, then ran the program below. The job ran normally, but no checkpoint files were written.
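The flink-conf.yaml entries from this first attempt, shown as a sketch (paths as in the original notes):

```yaml
# Step 1: filesystem state backend, checkpoints on the local disk
state.backend: filesystem
state.checkpoints.dir: file:///root/flink/cp
```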
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // Checkpoint every 5 s, exactly-once, retain externalized checkpoints on cancellation
    env.enableCheckpointing(5000);
    env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
    env.getCheckpointConfig().enableExternalizedCheckpoints(
            CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
    env.setParallelism(2);

    // Unbounded source emitting an ever-increasing counter
    DataStreamSource<Integer> ds = env.addSource(new SourceFunction<Integer>() {
        Integer i = 1;

        @Override
        public void run(SourceContext<Integer> ctx) throws Exception {
            while (i > 0) {
                ctx.collect(i);
                i++;
            }
        }

        @Override
        public void cancel() {
        }
    });

    ds.print();
    env.execute(TestFlinkCheckpoint.class.getSimpleName());
}
2. (Effective) After discussing with the AWS support staff, changed how the configuration is applied.
- In the EMR console: select the EMR cluster -> Configurations -> filter and select the active instance groups -> Reconfigure -> Edit in table format -> add the classification (configuration file name), property (key), and value -> check "Apply this configuration to all active instance groups" so it is pushed to both master and slaves -> wait for the background reconfiguration to finish.
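The console steps above correspond roughly to an EMR reconfiguration payload like the following sketch (on EMR the `flink-conf` classification maps to flink-conf.yaml; the bucket/path placeholders are the same elided values as in the notes):

```json
[
  {
    "Classification": "flink-conf",
    "Properties": {
      "state.backend": "filesystem",
      "state.checkpoints.dir": "s3://<bucket>/<endpoint>"
    }
  }
]
```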
- Added state.backend=filesystem and state.checkpoints.dir=s3://<bucket>/<endpoint> and ran the same program. It failed with the error below; on inspection, both master and slaves have permission to write checkpoint files to S3.
ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Fatal error occurred in the cluster entrypoint.
org.apache.flink.util.FlinkException: JobMaster for job 10a95e021d5f72a0a1e0501ab7c1ef5e failed.
Caused by: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster.
Caused by: java.util.concurrent.CompletionException: org.apache.flink.util.FlinkRuntimeException: Failed to create checkpoint storage at checkpoint coordinator side.
Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to create checkpoint storage at checkpoint coordinator side.
Caused by: org.apache.hadoop.fs.s3a.AWSBadRequestException: doesBucketExist on zzs: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 1PX45C2B41M1V6RA; S3 Extended Request ID: CYBf6T7dXOzKmC+gAqXB7H4YXFKlGwcAeaHs6HfCV+b8pd/b+dIOfBU8yQlVAloipNCwqX47ckM=; Proxy: null), S3 Extended Request ID: CYBf6T7dXOzKmC+gAqXB7H4YXFKlGwcAeaHs6HfCV+b8pd/b+dIOfBU8yQlVAloipNCwqX47ckM=:400 Bad Request: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 1PX45C2B41M1V6RA; S3 Extended Request ID: CYBf6T7dXOzKmC+gAqXB7H4YXFKlGwcAeaHs6HfCV+b8pd/b+dIOfBU8yQlVAloipNCwqX47ckM=; Proxy: null)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 1PX45C2B41M1V6RA; S3 Extended Request ID: CYBf6T7dXOzKmC+gAqXB7H4YXFKlGwcAeaHs6HfCV+b8pd/b+dIOfBU8yQlVAloipNCwqX47ckM=; Proxy: null)
Changed the configuration back to state.checkpoints.dir=file:///root/flink/cp and reran the program; the job ran normally but again wrote no checkpoint files.
- Configured the VPC endpoint, VPC, and route tables (ops team). Spark jobs could then write to S3, but the Flink job still could not.
- Added fs.s3a.endpoint=s3.cn-northwest-1.amazonaws.com.cn; checkpoints were then written to S3 normally.
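Putting the pieces together, the working flink-conf.yaml settings from steps 2 look roughly like this (bucket/path placeholders kept as in the notes; the endpoint is the cn-northwest-1 S3 endpoint):

```yaml
state.backend: filesystem
state.checkpoints.dir: s3://<bucket>/<endpoint>
# S3A did not resolve the region automatically in this setup,
# so the regional endpoint is pinned explicitly
fs.s3a.endpoint: s3.cn-northwest-1.amazonaws.com.cn
```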
3. Analysis
- Spark code can write to S3 directly with no extra configuration, while Flink requires it; the reason is not yet clear.
- Flink cannot automatically detect the region shared by the S3 bucket and the EMR machines, so it has to be specified manually. However, the AWS staff could write without specifying it; the reason for that is also not yet clear.