46. Handling Late Data in Flink Windows: AllowedLateness Code Example

1. Overview

With event-time windows, data can arrive late. By default, once the watermark passes the window's end timestamp, late elements are simply dropped.

Allowed lateness defaults to 0. Elements that arrive after the watermark has passed the end of the window, but before it reaches the end of the window plus the allowed lateness, are still added to the window; anything arriving later than that is still dropped (see the side-output sketch after the code example for a way to capture such data).

Whether a late firing re-emits the full window contents or only the newly arrived elements depends on whether the trigger purges the window after firing.

Processing-time windows and GlobalWindows have no notion of late data.

2. Code Example

import org.apache.flink.api.common.eventtime.SerializableTimestampAssigner;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.triggers.EventTimeTrigger;
import org.apache.flink.streaming.api.windowing.triggers.PurgingTrigger;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

import java.time.Duration;

/**
 * With event-time windows, data can arrive late; by default, once the watermark passes the
 * window's end timestamp, late elements are simply dropped.
 * Allowed lateness defaults to 0; elements that arrive after the watermark has passed the end
 * of the window, but before it reaches window end + allowed lateness, are still added to the window.
 * What a late firing emits depends on whether the trigger purges the window contents.
 *
 * Processing-time windows and GlobalWindows have no notion of late data.
 */
public class _14_WindowAllowedLateness {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStreamSource<String> input = env.socketTextStream("localhost", 8888);

        // Parallelism is kept low for testing; in production, sources that may go idle need
        // idleness handling (e.g. WatermarkStrategy#withIdleness) so the watermark can still advance
        env.setParallelism(2);

        // Event time requires a watermark strategy and a timestamp assigner
        SingleOutputStreamOperator<Tuple2<String, Long>> map = input.map(new MapFunction<String, Tuple2<String, Long>>() {
            @Override
            public Tuple2<String, Long> map(String input) throws Exception {
                String[] fields = input.split(",");
                return new Tuple2<>(fields[0], Long.parseLong(fields[1]));
            }
        });

        SingleOutputStreamOperator<Tuple2<String, Long>> watermarks = map.assignTimestampsAndWatermarks(WatermarkStrategy.<Tuple2<String, Long>>forBoundedOutOfOrderness(Duration.ofSeconds(0))
                .withTimestampAssigner(new SerializableTimestampAssigner<Tuple2<String, Long>>() {
                    @Override
                    public long extractTimestamp(Tuple2<String, Long> input, long l) {
                        return input.f1;
                    }
                }));

        watermarks.keyBy(e -> e.f0)
                .window(TumblingEventTimeWindows.of(Duration.ofSeconds(5)))
                .trigger(PurgingTrigger.of(EventTimeTrigger.create()))
//                .trigger(EventTimeTrigger.create())
                .allowedLateness(Duration.ofSeconds(3))
                .apply(new WindowFunction<Tuple2<String, Long>, String, String, TimeWindow>() {
                    @Override
                    public void apply(String s, TimeWindow timeWindow, Iterable<Tuple2<String, Long>> iterable, Collector<String> collector) throws Exception {
                        System.out.println(timeWindow.getStart() + "-" + timeWindow.getEnd());

                        for (Tuple2<String, Long> tuple : iterable) {
                            collector.collect(tuple.f0);
                        }
                    }
                })
                .print();

        env.execute();
    }
}
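
The pipeline above silently drops any element that arrives after the window end plus the allowed lateness. If such data must not be lost, the DataStream API's side-output mechanism can collect it. The following is a minimal sketch that reuses the watermarks stream from the class above; the tag name "too-late" is arbitrary and only used for illustration:

import org.apache.flink.util.OutputTag;

// ... same env / source / watermark setup as in _14_WindowAllowedLateness ...

// Elements arriving after window end + allowed lateness would otherwise be dropped silently;
// this tag lets us collect them on a side output instead (anonymous subclass keeps type info).
OutputTag<Tuple2<String, Long>> tooLateTag = new OutputTag<Tuple2<String, Long>>("too-late") {};

SingleOutputStreamOperator<String> result = watermarks
        .keyBy(e -> e.f0)
        .window(TumblingEventTimeWindows.of(Duration.ofSeconds(5)))
        .allowedLateness(Duration.ofSeconds(3))
        .sideOutputLateData(tooLateTag)               // capture data later than end + lateness
        .apply(new WindowFunction<Tuple2<String, Long>, String, String, TimeWindow>() {
            @Override
            public void apply(String key, TimeWindow window, Iterable<Tuple2<String, Long>> elements,
                              Collector<String> out) {
                for (Tuple2<String, Long> t : elements) {
                    out.collect(t.f0);
                }
            }
        });

result.print();                                       // normal window results
result.getSideOutput(tooLateTag).print("too-late");   // elements that missed the lateness window

Note that sideOutputLateData only receives elements that exceeded window end + allowed lateness; elements within the lateness bound still enter the window and re-trigger it, exactly as in the test runs below.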

3. Test Data

1) EventTimeTrigger (window contents are not purged): with allowed lateness configured, when late data arrives the window fires again and re-emits the full accumulated contents.

          a,1718157600000
          b,1718157600000
          c,1718157600000
         
          a,1718157602000
          b,1718157602000
          c,1718157602000
         
          a,1718157605001
          b,1718157605001
         
          1718157600000-1718157605000
          2> a
          2> a
          1718157600000-1718157605000
          1> b
          1> b
          1718157600000-1718157605000
          1> c
          1> c
         
          c,1718157605001
         
          a,1718157604000
          b,1718157604000
          c,1718157604000
         
          1718157600000-1718157605000
          2> a
          2> a
          2> a
          1718157600000-1718157605000
          1> b
          1> b
          1> b
          1718157600000-1718157605000
          1> c
          1> c
          1> c

2) PurgingTrigger.of(EventTimeTrigger.create()) (window contents are purged after each firing): with allowed lateness configured, a late element still triggers the window again, but only the newly arrived element is emitted.

          a,1718157600000
          b,1718157600000
          c,1718157600000
         
          a,1718157602000
          b,1718157602000
          c,1718157602000
         
          a,1718157605001
          b,1718157605001
         
          1718157600000-1718157605000
          2> a
          2> a
          1718157600000-1718157605000
          1> b
          1> b
          1718157600000-1718157605000
          1> c
          1> c
         
          c,1718157605001
         
          a,1718157604000
          b,1718157604000
          c,1718157604000
         
          1718157600000-1718157605000
          2> a
          1718157600000-1718157605000
          1> b
          1718157600000-1718157605000
          1> c
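
When the trigger does not purge (case 1 above), every late firing re-emits the complete window contents, so downstream consumers receive several results for the same window. One common way to let them tell the on-time firing apart from late updates is per-window state in a ProcessWindowFunction (used via .process(...) instead of .apply(...) in the pipeline above). The following is a hedged sketch; the class name, the state name "firing-count", and the output format are illustrative, not part of the original example:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

/**
 * Distinguishes the on-time firing from late-data re-firings by counting firings
 * in per-window keyed state. Plug in with .process(new CountingWindowFunction()).
 */
public class CountingWindowFunction
        extends ProcessWindowFunction<Tuple2<String, Long>, String, String, TimeWindow> {

    // per-window state: how many times this particular window has fired for this key
    private final ValueStateDescriptor<Integer> firingsDesc =
            new ValueStateDescriptor<>("firing-count", Integer.class);

    @Override
    public void process(String key, Context ctx, Iterable<Tuple2<String, Long>> elements,
                        Collector<String> out) throws Exception {
        ValueState<Integer> firings = ctx.windowState().getState(firingsDesc);
        Integer previous = firings.value();
        int current = previous == null ? 1 : previous + 1;
        firings.update(current);

        long count = 0;
        for (Tuple2<String, Long> ignored : elements) {
            count++;
        }

        String kind = current == 1 ? "on-time" : "late-update-" + (current - 1);
        out.collect(ctx.window().getStart() + "-" + ctx.window().getEnd()
                + " key=" + key + " count=" + count + " (" + kind + ")");
    }

    @Override
    public void clear(Context ctx) throws Exception {
        // per-window state is not removed automatically; clean it up when the window is purged
        ctx.windowState().getState(firingsDesc).clear();
    }
}

Per-window state lives until the window's cleanup time (window end + allowed lateness), so clearing it in clear() avoids leaking state for keys that accumulate many windows.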