Out-of-Order Event-Time Data
When business data is collected, the records are not always delivered in the order they were generated, and out-of-order records can produce incorrect business results. For example, given data such as the following:
01,1635867066000
01,1635867067000
01,1635867068000
01,1635867069000
01,1635867070000
01,1635867071000
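The six records above are in order, but a single delayed record is enough to break a naive consumer. The sketch below is a hypothetical illustration in plain Java (no Flink): it assumes the 1635867067000 record arrives *after* 1635867068000, and shows how a strategy that closes a window on the first timestamp past its end silently drops the late record.

```java
import java.util.List;

public class OutOfOrderDemo {
    public static void main(String[] args) {
        // Hypothetical arrival order: the 1635867067000 record is delayed
        // and shows up after 1635867068000.
        List<Long> arrivals = List.of(1635867066000L, 1635867068000L, 1635867067000L);

        long windowEnd = 1635867068000L;  // window [1635867064000, 1635867068000)
        boolean windowClosed = false;
        int counted = 0, dropped = 0;

        for (long ts : arrivals) {
            if (ts >= windowEnd) {
                windowClosed = true;      // naive rule: close on the first ts past the end
            } else if (windowClosed) {
                dropped++;                // the late record is silently lost
            } else {
                counted++;
            }
        }
        System.out.println("counted=" + counted + " dropped=" + dropped);
        // → counted=1 dropped=1 (the delayed 1635867067000 never gets counted)
    }
}
```

Watermarks exist to delay that "close the window" decision long enough to absorb this kind of disorder, which is what the example below configures.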
Worked example
POM file
<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>1.13.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-core</artifactId>
        <version>1.13.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.12</artifactId>
        <version>1.13.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.12</artifactId>
        <version>1.13.2</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.29</version>
    </dependency>
</dependencies>
Code implementation
Configuration notes:
- env.getConfig().setAutoWatermarkInterval(1000L): automatically emit a watermark every second
- TumblingEventTimeWindows.of(Time.seconds(4)): 4-second tumbling window
- private long maxOutOfOrderness = 3000L: maximum allowed out-of-orderness of 3 s
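Under these settings, the watermark produced for the test data can be traced by hand. This is a minimal sketch of the generator's bookkeeping outside of Flink, using the same max/subtract logic as the onEvent/onPeriodicEmit code below:

```java
public class WatermarkProgressionDemo {
    public static void main(String[] args) {
        long[] events = {1635867066000L, 1635867067000L, 1635867068000L,
                         1635867069000L, 1635867070000L, 1635867071000L};
        long maxOutOfOrderness = 3000L;
        long maxTimeStamp = 0L;

        for (long ts : events) {
            maxTimeStamp = Math.max(maxTimeStamp, ts);          // what onEvent does
            long watermark = maxTimeStamp - maxOutOfOrderness;  // what onPeriodicEmit emits
            System.out.println(ts + " -> watermark " + watermark);
        }
        // Final line: 1635867071000 -> watermark 1635867068000.
        // Only when the watermark reaches 1635867068000 can the
        // [1635867064000, 1635867068000) window fire.
    }
}
```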
Note:
// maxTimeStamp was originally initialized to Long.MIN_VALUE, which made
// new Watermark(maxTimeStamp - maxOutOfOrderness) overflow. The failure
// was silent, which made it very hard to track down.
// private long maxTimeStamp = Long.MIN_VALUE;
// After troubleshooting it was changed to:
private long maxTimeStamp = 0L;
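The wrap-around is easy to reproduce in plain Java: `Long.MIN_VALUE - 3000L` silently overflows to a huge positive value, so the very first periodic emit would publish a watermark near the end of time and fire every window immediately. (Flink's own BoundedOutOfOrdernessWatermarks sidesteps this by initializing its max timestamp to Long.MIN_VALUE + outOfOrdernessMillis + 1.)

```java
public class WatermarkOverflowDemo {
    public static void main(String[] args) {
        long maxOutOfOrderness = 3000L;

        // With maxTimeStamp = Long.MIN_VALUE, the subtraction wraps around
        // to a huge positive value instead of a very small watermark.
        long bad = Long.MIN_VALUE - maxOutOfOrderness;
        System.out.println(bad);      // 9223372036854772808 is unrepresentable; prints the wrapped value
        System.out.println(bad > 0);  // true: the watermark jumps to "the far future"

        // Math.subtractExact turns the silent overflow into a visible error.
        try {
            Math.subtractExact(Long.MIN_VALUE, maxOutOfOrderness);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```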
import org.apache.flink.api.common.eventtime.*;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;
import java.util.Iterator;
public class IoTMain4 {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60 * 1000, CheckpointingMode.EXACTLY_ONCE);
// interval at which watermarks are emitted automatically
env.getConfig().setAutoWatermarkInterval(1000L);
env.setParallelism(1);
DataStreamSource<String> sourceDs = env.socketTextStream("localhost", 9000);
SingleOutputStreamOperator<Tuple2<String, Long>> mapDs = sourceDs
.map(new MapFunction<String, Tuple2<String, Long>>() {
private static final long serialVersionUID = -5181351998053732122L;
@Override
public Tuple2<String, Long> map(String value) throws Exception {
String[] split = value.split(",");
return Tuple2.of(split[0], Long.valueOf(split[1]));
}
});
// assign timestamps and emit watermarks periodically
SingleOutputStreamOperator<Tuple2<String, Long>> watermarks = mapDs
.assignTimestampsAndWatermarks(new WatermarkStrategy<Tuple2<String, Long>>() {
private static final long serialVersionUID = -8873639694196414860L;
@Override
public WatermarkGenerator<Tuple2<String, Long>> createWatermarkGenerator(
WatermarkGeneratorSupplier.Context context) {
return new WatermarkGenerator<Tuple2<String, Long>>() {
private long maxTimeStamp = 0L;
private long maxOutOfOrderness = 3000L; // maximum allowed out-of-orderness (3 s)
@Override
public void onEvent(Tuple2<String, Long> event, long eventTimestamp,
WatermarkOutput output) {
// called once per incoming record; track the max event timestamp
maxTimeStamp = Math.max(maxTimeStamp, event.f1);
}
@Override
public void onPeriodicEmit(WatermarkOutput output) {
// emit the watermark periodically (every autoWatermarkInterval)
output.emitWatermark(new Watermark(maxTimeStamp - maxOutOfOrderness));
}
};
}
}.withTimestampAssigner(((element, recordTimestamp) -> element.f1)));
watermarks.keyBy(x -> x.f0).window(TumblingEventTimeWindows.of(Time.seconds(4)))
.apply(new WindowFunction<Tuple2<String, Long>, String, String, TimeWindow>() {
private static final long serialVersionUID = 65693184846116387L;
@Override
public void apply(String s, TimeWindow window, Iterable<Tuple2<String, Long>> input,
Collector<String> out) throws Exception {
Iterator<Tuple2<String, Long>> iterator = input.iterator();
int count = 0;
while (iterator.hasNext()) {
count++;
iterator.next();
}
out.collect(window.getStart() + "->" + window.getEnd() + " " + s + ":" + count);
}
}).print();
env.execute();
}
}
Test run
Use netcat to send the test data above to port 9000 (the port the job reads from):
C:\Users\xxx> nc -l -p 9000
01,1635867066000
01,1635867067000
01,1635867068000
01,1635867069000
01,1635867070000
01,1635867071000
When the last record 01,1635867071000 is processed, the window **[1635867064000, 1635867068000)** fires, and data for that range is no longer accepted (late-data handling can be customized).
The tumbling window splits each minute into 4-second intervals, closed on the left and open on the right.
For example, 2021-11-02 23:31 is divided into:
[2021-11-02 23:31:00 , 2021-11-02 23:31:04)
[2021-11-02 23:31:04 , 2021-11-02 23:31:08)
....
[2021-11-02 23:31:56 , 2021-11-02 23:32:00)
[1635867064000, 1635867068000) corresponds to [2021-11-02 23:31:04, 2021-11-02 23:31:08);
The maximum out-of-orderness is 3 s, so to fire the [1635867064000, 1635867068000) window an event timestamp >= 1635867071000 must arrive, i.e. 1635867068000 + 3000 = 1635867071000.
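The boundaries and the firing threshold in this walkthrough can be checked with the assignment formula Flink uses for tumbling windows without an offset, start = ts - (ts % size):

```java
public class WindowMathDemo {
    public static void main(String[] args) {
        long windowSize = 4000L;         // TumblingEventTimeWindows.of(Time.seconds(4))
        long maxOutOfOrderness = 3000L;  // out-of-orderness used by the generator

        // Tumbling-window assignment with no offset.
        long ts = 1635867066000L;
        long start = ts - (ts % windowSize);
        long end = start + windowSize;
        System.out.println("[" + start + ", " + end + ")");
        // → [1635867064000, 1635867068000)

        // The window fires once the watermark reaches end - 1. With
        // watermark = maxTimeStamp - maxOutOfOrderness and whole-second
        // timestamps, the first event timestamp that fires it is:
        long triggeringTs = end + maxOutOfOrderness;
        System.out.println(triggeringTs);
        // → 1635867071000
    }
}
```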