Flink Development: Global Windows (GlobalWindows)

A global window has no end boundary, and its default trigger is NeverTrigger. Unless you assign a trigger to a global window, the window will never fire any computation.

1. Non-Keyed Count Windows

Count windows split the stream by the number of records received; time plays no role. A Non-Keyed window covers the whole stream, so there is only a single global window. Count windows are in fact Global Windows with a CountTrigger assigned (internally, countWindowAll(n) uses the GlobalWindows assigner plus a purging CountTrigger). If the specified count is not reached, the window never fires.
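The trigger behavior can be illustrated with a plain-Java sketch (illustrative only, no Flink dependency; the class name is made up): elements are buffered, the window fires once the count reaches the window size, and the state is then purged.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not Flink API): a count window that fires a sum
// every `size` elements, mirroring what countWindowAll(5) + sum(0) does.
public class CountWindowSketch {
    private final int size;
    private final List<Integer> buffer = new ArrayList<>();

    public CountWindowSketch(int size) { this.size = size; }

    // Returns the window result when the count is reached, or null otherwise.
    public Integer add(int value) {
        buffer.add(value);
        if (buffer.size() < size) {
            return null;            // CountTrigger: not enough elements yet
        }
        int sum = 0;
        for (int v : buffer) sum += v;
        buffer.clear();             // PurgingTrigger: clear window state after firing
        return sum;
    }

    public static void main(String[] args) {
        CountWindowSketch w = new CountWindowSketch(5);
        for (int i = 1; i <= 10; i++) {
            Integer result = w.add(i);
            if (result != null) System.out.println(result); // prints 15, then 40
        }
    }
}
```

This mirrors the input/output transcripts below: ten integers, a result every five.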

1.1 Incremental aggregation with sum

    public static void main(String[] args) throws Exception {
        // countWindowAll is one kind of GlobalWindow
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<String> socketStream = env.socketTextStream("localhost", 8888);
        SingleOutputStreamOperator<Integer> mapStream = socketStream.map(Integer::parseInt);
        // Create a Non-Keyed countWindowAll; this operator's parallelism is 1
        AllWindowedStream<Integer, GlobalWindow> windowAll = mapStream.countWindowAll(5);
        // Aggregate the window's data
        SingleOutputStreamOperator<Integer> sum = windowAll.sum(0);
        sum.print();
        env.execute("");
    }

Input:

C:\Users\zhibai>nc -lp 8888
1
2
3
4
5
6
7
8
9
10

Output (the `4>` prefix is the index of the parallel subtask that printed the line):

4> 15
5> 40

1.2 Incremental aggregation with reduce

    public static void main(String[] args) throws Exception {
        // In local mode the default parallelism is the number of logical cores on the machine
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<String> socketStream = env.socketTextStream("localhost", 8888);
        SingleOutputStreamOperator<Integer> mapStream = socketStream.map(Integer::parseInt);
        // Create a Non-Keyed countWindowAll; this operator's parallelism is 1
        AllWindowedStream<Integer, GlobalWindow> windowAll = mapStream.countWindowAll(5);
        // Incrementally aggregate the window's data: the result is updated as each element
        // arrives and only the intermediate state is kept in memory, which is more efficient
        // and saves resources.
        SingleOutputStreamOperator<Integer> reduce = windowAll.reduce(new ReduceFunction<Integer>() {
            @Override
            public Integer reduce(Integer t1, Integer t2) throws Exception {
                return t1 + t2;
            }
        });
        reduce.print();
        env.execute("");
    }

Input:

C:\Users\zhibai>nc -lp 8888
1
2
3
4
5
6
7
8
9
10

Output:

1> 15
2> 40

1.3 Full-window aggregation with apply

While the program runs, the window's elements are first buffered in window state; once the trigger condition is met, all the buffered elements are taken out of state and computed in one pass.
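The state-size difference between the incremental reduce above and the full-window apply can be sketched in plain Java (illustrative only, not Flink API): reduce folds each arriving element into a single accumulator, while apply must keep every element until the trigger fires.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BinaryOperator;

// Illustrative comparison (not Flink API) of incremental vs. full-window aggregation.
public class AggregationSketch {

    // reduce-style: state is a single accumulator, updated once per element.
    public static Integer incremental(List<Integer> window, BinaryOperator<Integer> f) {
        Integer acc = null;
        for (Integer v : window) {
            acc = (acc == null) ? v : f.apply(acc, v); // O(1) state
        }
        return acc;
    }

    // apply-style: every element is buffered, then processed together on trigger.
    public static int fullWindow(List<Integer> window) {
        List<Integer> buffered = new ArrayList<>(window); // O(n) state in the window backend
        int sum = 0;
        for (int v : buffered) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> window = List.of(1, 2, 3, 4, 5);
        System.out.println(incremental(window, Integer::sum)); // 15
        System.out.println(fullWindow(window));                // 15
    }
}
```

Both produce the same sum; the trade-off is that apply gives access to the whole window (enabling e.g. sorting, as shown later) at the cost of buffering all elements.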

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<String> socketStream = env.socketTextStream("localhost", 8888);
        SingleOutputStreamOperator<Integer> mapStream = socketStream.map(Integer::parseInt);
        // Create a Non-Keyed countWindowAll; this operator's parallelism is 1
        AllWindowedStream<Integer, GlobalWindow> windowAll = mapStream.countWindowAll(5);
        SingleOutputStreamOperator<Integer> apply = windowAll.apply(new AllWindowFunction<Integer, Integer, GlobalWindow>() {
            @Override
            public void apply(GlobalWindow window, Iterable<Integer> values, Collector<Integer> out) throws Exception {
                Integer sum = 0;
                for (Integer value : values) {
                    sum += value;
                }
                out.collect(sum);
            }
        });
        apply.print();
        env.execute("");
    }

Input:

C:\Users\zhibai>nc -lp 8888
1
2
3
4
5
6
7
8
9
10

Output:

4> 15
5> 40

Because apply sees the full window contents at once, we can exploit this to, for example, sort the elements inside the window.

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<String> socketStream = env.socketTextStream("localhost", 8888);
        SingleOutputStreamOperator<Integer> mapStream = socketStream.map(Integer::parseInt);
        // Create a Non-Keyed countWindowAll; this operator's parallelism is 1
        AllWindowedStream<Integer, GlobalWindow> windowAll = mapStream.countWindowAll(5);
        SingleOutputStreamOperator<Integer> apply = windowAll.apply(new AllWindowFunction<Integer, Integer, GlobalWindow>() {
            @Override
            public void apply(GlobalWindow window, Iterable<Integer> values, Collector<Integer> out) throws Exception {
                ArrayList<Integer> lst = new ArrayList<>();
                for (Integer value : values) {
                    lst.add(value);
                }
                lst.sort(new Comparator<Integer>() {
                    @Override
                    public int compare(Integer o1, Integer o2) {
                        // return o1 - o2;  // subtraction risks integer overflow
                        return Integer.compare(o1, o2);
                    }
                });
                for (Integer i : lst) {
                    out.collect(i);
                }
            }
        });
        apply.print().setParallelism(1);
        env.execute("");
    }

Input:

C:\Users\zhibai>nc -lp 8888
5
4
3
2
1

Output:

1
2
3
4
5

2. Keyed Count Windows

On a keyed stream, the global window assigner puts all records with the same key into their own window: each key gets its own global window, and the windows are mutually independent. Each key's window fires on its own as soon as its trigger condition is met; it does not wait for the other keys to reach theirs.
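The per-key independence can be sketched in plain Java (illustrative only, not Flink API; the class name is made up): each key keeps its own buffer, and a key fires as soon as its own buffer reaches the window size, regardless of the other keys.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch (not Flink API): each key has an independent count window;
// a key fires as soon as ITS buffer reaches `size`, regardless of other keys.
public class KeyedCountWindowSketch {
    private final int size;
    private final Map<String, List<Integer>> buffers = new HashMap<>();

    public KeyedCountWindowSketch(int size) { this.size = size; }

    // Returns (key, sum) when this key's window fires, or null otherwise.
    public Map.Entry<String, Integer> add(String key, int value) {
        List<Integer> buf = buffers.computeIfAbsent(key, k -> new ArrayList<>());
        buf.add(value);
        if (buf.size() < size) return null;
        int sum = 0;
        for (int v : buf) sum += v;
        buf.clear(); // purge this key's window only
        return Map.entry(key, sum);
    }

    public static void main(String[] args) {
        KeyedCountWindowSketch w = new KeyedCountWindowSketch(3);
        String[][] input = {{"hadoop", "2"}, {"flink", "1"}, {"spark", "3"},
                            {"hadoop", "3"}, {"hadoop", "1"}, {"flink", "2"},
                            {"spark", "4"}, {"flink", "4"}, {"spark", "7"}};
        for (String[] rec : input) {
            Map.Entry<String, Integer> fired = w.add(rec[0], Integer.parseInt(rec[1]));
            if (fired != null) System.out.println(fired); // hadoop=6, flink=7, spark=14
        }
    }
}
```

Fed the same records as the transcript below, hadoop fires first (its third record arrives earliest), then flink, then spark, matching the Flink job's output order.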

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<String> socketStream = env.socketTextStream("localhost", 8888);
        SingleOutputStreamOperator<Tuple2<String, Integer>> wordAndOne = socketStream.map(new MapFunction<String, Tuple2<String, Integer>>() {
            @Override
            public Tuple2<String, Integer> map(String s) throws Exception {
                String[] fields = s.split(" ");
                return Tuple2.of(fields[0], Integer.parseInt(fields[1]));
            }
        });
        KeyedStream<Tuple2<String, Integer>, String> keyedStream = wordAndOne.keyBy(new KeySelector<Tuple2<String, Integer>, String>() {
            @Override
            public String getKey(Tuple2<String, Integer> s) throws Exception {
                return s.f0;
            }
        });
        SingleOutputStreamOperator<Tuple2<String, Integer>> reduce = keyedStream.countWindow(3).reduce(new ReduceFunction<Tuple2<String, Integer>>() {
            @Override
            public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1, Tuple2<String, Integer> value2) throws Exception {
                // Return a new tuple rather than mutating the incoming record
                return Tuple2.of(value1.f0, value1.f1 + value2.f1);
            }
        });
        reduce.print();
        env.execute("");
    }

Input:

C:\Users\zhibai>nc -lp 8888
hadoop 2
flink 1
spark 3
hadoop 3
hadoop 1
flink 2
spark 4
flink 4
spark 7

Output:

8> (hadoop,6)
7> (flink,7)
1> (spark,14)