❤️ E-commerce User Behavior Analysis with Flink [Java Rewrite], full code included ❤️, ready to study and use ❤️ [worth bookmarking]!

Preface

In recent years, as demand for real-time data has kept rising, a wave of enthusiasm for learning Flink has taken off. This article is based on the 尚硅谷 (Atguigu) big-data hands-on course "E-commerce User Behavior Analysis" (a project-based practical course). The original project is written in Scala; here I rewrite it in Java, and along the way explain some of the APIs with reference to the official documentation. Without further ado, let's get to today's topic:

Project Overview

Main Modules

Based on a basic classification of e-commerce user behavior data, we can identify three main directions of analysis:

1. Popularity statistics
Use users' click/browse behavior for traffic statistics, statistics on recently hot items, and so on.

2. Preference statistics
Use users' preference behavior, such as favorites, likes, and ratings, to build user profiles and produce personalized product recommendation lists.

3. Risk control
Use users' regular business behavior, such as login, order placement, and payment, to analyze the data and raise alerts on anomalies.

Limited by the available data, we only implement parts of popularity statistics and risk control, covering five major modules: real-time hot item statistics, real-time traffic statistics, marketing business metrics, malicious login monitoring, and order payment timeout monitoring, further broken down into 9 concrete metrics.
Because the real-time requirements are high, we use Flink as the data processing framework. In this project we combine Flink's various APIs, handle the basic business requirements on event time, and flexibly use the low-level process functions, state-based programming, and CEP for the more complex cases.

Data Sources

Behavior data: UserBehavior

Field       Type     Description
userId      Long     encrypted user ID
itemId      Long     encrypted item ID
categoryId  Int      encrypted ID of the item's category
behavior    String   behavior type, one of ('pv', 'buy', 'cart', 'fav')
timestamp   Long     time the behavior occurred, in seconds

Web log data

Field      Type     Description
ip         String   visiting IP
userId     Long     visiting user ID
eventTime  Long     access time
method     String   HTTP method: GET/POST/PUT/DELETE
url        String   accessed URL

Real-Time Hot Item Statistics

Basic requirements

Count the hot items over the last hour, updating every five minutes
Popularity is measured by page views (pv)

Approach

Filter the pv events out of the user behavior stream
Build a sliding window

Partition by item id

.keyBy(value -> value.itemId)

Set the time window

.timeWindow(Time.minutes(60), Time.minutes(5)) // sliding window
Time windows are left-closed, right-open, and the same element is sent to every window that covers it; with a 60-minute window sliding every 5 minutes, each event belongs to 12 windows (see the sketch after this list)

Aggregate over the window

.aggregate(new CountAgg(), new WindowResult())
new CountAgg(): defines the aggregation rule
new WindowResult(): defines the output structure
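
To make the multi-window assignment concrete, here is a small self-contained sketch of how one element maps to its sliding windows (my own illustration; the window-start computation mirrors Flink's TimeWindow.getWindowStartWithOffset with a zero offset):

public class SlidingWindowDemo {
    public static void main(String[] args) {
        long size = 60 * 60 * 1000L;  // 60-minute window, in ms
        long slide = 5 * 60 * 1000L;  // 5-minute slide, in ms
        long ts = 1511658000_000L;    // an event timestamp, in ms

        // start of the latest window that contains ts
        long lastStart = ts - (ts % slide);
        // every event falls into size / slide = 12 windows
        for (long start = lastStart; start > ts - size; start -= slide) {
            // each window is left-closed, right-open: [start, start + size)
            System.out.println("[" + start + ", " + (start + size) + ")");
        }
    }
}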

Real-Time Traffic Statistics: Hot Pages

Basic requirements

From the web server logs, compute hot pages in real time
Count per-minute visits by IP, take the five most visited addresses, updating every five seconds

Approach

Convert the time in the log lines into timestamps
Build a sliding window

Marketing Analysis: App Marketing Statistics

Basic requirements

Compute the data metrics for app marketing
Break the statistics down by promotion channel

Approach

Filter, then count by channel
Custom processFunction

Marketing Analysis: Page Ad Statistics

Basic requirements

Count hourly page ad views by province, with a new result every five seconds
Filter out overly frequent click behavior and put offenders on a black list

Approach

Sliding window
Use a processFunction for black-list filtering
There are many more detailed requirements; they will unfold in the implementation

Implementation

Hot Item Statistics

Sample data

// userId,itemId,categoryId,behavior,timestamp
543462,1715,1464116,pv,1511658000

Define the input and output structures

// input structure
class UserBehavior {
    public long userId;
    public long itemId;
    public int categoryId;
    public String behavior;
    public long timeStamp;

    public UserBehavior() {
    }

    public UserBehavior(long userId, long itemId, int categoryId, String behavior, long timeStamp) {
        this.userId = userId;
        this.itemId = itemId;
        this.categoryId = categoryId;
        this.behavior = behavior;
        this.timeStamp = timeStamp;
    }

    @Override
    public String toString() {
        return "UserBehavior{" +
                "userId=" + userId +
                ", itemId=" + itemId +
                ", categoryId=" + categoryId +
                ", behavior='" + behavior + '\'' +
                ", timeStamp=" + timeStamp +
                '}';
    }
}

// output structure
class ItemViewCount{
    public long itemID;
    public long windowEnd;
    public long count;

    public ItemViewCount() {
    }

    public ItemViewCount(long itemID, long windowEnd, long count) {
        this.itemID = itemID;
        this.windowEnd = windowEnd;
        this.count = count;
    }

    @Override
    public String toString() {
        return "ItemViewCount{" +
                "itemID=" + itemID +
                ", windowEnd=" + windowEnd +
                ", count=" + count +
                '}';
    }
}

WATERMARKS

To use event-time semantics, a Flink application needs to know which field carries the event timestamp, which means every element in the stream must have an assignable event timestamp. This is usually done by accessing/extracting the timestamp from some field of the element via the TimestampAssigner API.

Timestamp assignment goes hand in hand with watermark generation, which tells the application about progress in event time. You configure how watermarks are generated by specifying a WatermarkGenerator.

When using the Flink API you set a WatermarkStrategy, which bundles both a TimestampAssigner and a WatermarkGenerator. The WatermarkStrategy utility class ships with many common watermark strategies, and you can also build your own when necessary.

// Assign watermarks to handle out-of-order data.
// Watermark strategy: bounded out-of-orderness with a fixed allowed delay.
// The event-time timestamp is taken from the timeStamp field of our POJO (seconds -> ms).
SingleOutputStreamOperator<UserBehavior> userBehaviorWatermark = userBehaviorStream.assignTimestampsAndWatermarks(WatermarkStrategy
        .<UserBehavior>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        .withTimestampAssigner((event, timeStamp) -> event.timeStamp * 1000));

AGGREGATEFUNCTION: CUSTOM AGGREGATION RULES

AggregateFunction is more general than ReduceFunction: it has three type parameters, the input type (IN), the accumulator type (ACC), and the output type (OUT)

class CountAgg implements AggregateFunction<UserBehavior,Long,Long> {

    // initial accumulator value
    @Override
    public Long createAccumulator() {
        return 0L;
    }

    // how one input element updates the accumulator
    @Override
    public Long add(UserBehavior userBehavior, Long aLong) {
        return aLong + 1;
    }

    // extract the final result from the accumulator
    @Override
    public Long getResult(Long aLong) {
        return aLong;
    }

    // how two accumulators merge (used by merging windows, e.g. session windows)
    @Override
    public Long merge(Long aLong, Long acc1) {
        return aLong + acc1;
    }
}

WINDOWFUNCTION: CUSTOM RULES FOR PROCESSING WINDOW ELEMENTS

@Public
public interface WindowFunction<IN, OUT, KEY, W extends Window> extends Function, Serializable {
    void apply(KEY key, W window, Iterable<IN> input, Collector<OUT> out) throws Exception;
}

PROCESS FUNCTIONS (PROCESSFUNCTIONS)

A ProcessFunction combines event processing with timers and state, which makes it a powerful building block for stream processing applications and the basis for building event-driven applications with Flink. It is very similar to RichFlatMapFunction, but adds timers.

Shown here is one of the ProcessFunction variants, KeyedProcessFunction.

public abstract class KeyedProcessFunction<K, I, O> extends AbstractRichFunction {
    private static final long serialVersionUID = 1L;

    public KeyedProcessFunction() {
    }

    public abstract void processElement(I value, Context ctx, Collector<O> out) throws Exception;

    public void onTimer(long timestamp, OnTimerContext ctx, Collector<O> out) throws Exception {
    }
  ......
class TopNHotItems extends KeyedProcessFunction<Long,ItemViewCount,String> {

    public int n;
    // a list state holding all ItemViewCounts for the current window
    public ListState<ItemViewCount> itemState;

    public TopNHotItems(int n) {
        this.n = n;
    }

    // open() runs once, before the first call to processElement()
    @Override
    public void open(Configuration parameters) throws Exception {
        itemState = getRuntimeContext().getListState(new ListStateDescriptor<ItemViewCount>("itemState",ItemViewCount.class));
    }

    @Override
    public void processElement(ItemViewCount itemViewCount, Context context, Collector<String> collector) throws Exception {
        itemState.add(itemViewCount);
        // Register an event-time timer at windowEnd + 1 ms. When it fires, all results
        // for this windowEnd have been collected and can be sorted together.
        context.timerService().registerEventTimeTimer(itemViewCount.windowEnd+1);
    }

    // called when the timer fires
    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        // all data for the window has arrived; copy it into a List
        List<ItemViewCount> allItems = new ArrayList<>();
        for (ItemViewCount item : itemState.get()) {
            allItems.add(item);
        }
        // clear the state
        itemState.clear();
        // sort by count in descending order, then keep only the top n
        allItems.sort((o1, o2) -> Long.compare(o2.count, o1.count));
        List<ItemViewCount> topItems = allItems.subList(0, Math.min(n, allItems.size()));

        StringBuilder result = new StringBuilder();
        result.append("======================================================\n");
        // the timer was registered at windowEnd + 1 ms, so subtract 1 to recover the exact window end time
        result.append("time: ").append(new Timestamp(timestamp - 1)).append("\n");
        for(ItemViewCount elem : topItems) result.append(elem.toString());
        result.append("\n");
        result.append("======================================================\n");
        out.collect(result.toString());
    }
}

Full code below

The comments are quite detailed and build things up step by step

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

import java.sql.Timestamp;
import java.time.Duration;
import java.util.*;

/**
 * @Description: real-time hot item statistics
 */

// input structure
class UserBehavior {
    public long userId;
    public long itemId;
    public int categoryId;
    public String behavior;
    public long timeStamp;

    public UserBehavior() {
    }

    public UserBehavior(long userId, long itemId, int categoryId, String behavior, long timeStamp) {
        this.userId = userId;
        this.itemId = itemId;
        this.categoryId = categoryId;
        this.behavior = behavior;
        this.timeStamp = timeStamp;
    }

    @Override
    public String toString() {
        return "UserBehavior{" +
                "userId=" + userId +
                ", itemId=" + itemId +
                ", categoryId=" + categoryId +
                ", behavior='" + behavior + '\'' +
                ", timeStamp=" + timeStamp +
                '}';
    }
}

// output structure
class ItemViewCount{
    public long itemID;
    public long windowEnd;
    public long count;

    public ItemViewCount() {
    }

    public ItemViewCount(long itemID, long windowEnd, long count) {
        this.itemID = itemID;
        this.windowEnd = windowEnd;
        this.count = count;
    }

    @Override
    public String toString() {
        return "ItemViewCount{" +
                "itemID=" + itemID +
                ", windowEnd=" + windowEnd +
                ", count=" + count +
                '}';
    }
}


public class HotItems {

    public static void main(String[] args) throws Exception {

        // create the stream execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // set the parallelism
        env.setParallelism(1);
        // use event time as the time characteristic
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        // source
        // input data
        DataStreamSource<String> stringDataStreamSource = env.readTextFile("/UserBehavior.csv");

        // transform
        // parse the raw lines into UserBehavior objects
        SingleOutputStreamOperator<UserBehavior> userBehaviorStream = stringDataStreamSource.map(new MapFunction<String, UserBehavior>() {
            @Override
            public UserBehavior map(String s) throws Exception {
                String[] split = s.split(",");
                return new UserBehavior(Long.parseLong(split[0]), Long.parseLong(split[1]), Integer.parseInt(split[2]), split[3], Long.parseLong(split[4]));
            }
        });
        // Assign watermarks to handle out-of-order data.
        // Watermark strategy: bounded out-of-orderness with a fixed allowed delay.
        // The event-time timestamp is taken from the timeStamp field of our POJO (seconds -> ms).
        SingleOutputStreamOperator<UserBehavior> userBehaviorWatermark = userBehaviorStream.assignTimestampsAndWatermarks(WatermarkStrategy
                .<UserBehavior>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((event, timeStamp) -> event.timeStamp * 1000));
        // keep only the pv events
        userBehaviorWatermark.filter(new FilterFunction<UserBehavior>() {
            @Override
            public boolean filter(UserBehavior userBehavior) throws Exception {
                return userBehavior.behavior.equals("pv");
            }
        })
                // key by itemId
        .keyBy(value -> value.itemId)
        // Windows can be defined on already partitioned KeyedStreams
        // define a sliding window
        .timeWindow(Time.hours(1), Time.minutes(5))
        // count events per item, with a custom aggregate function and a custom output type
        .aggregate(new CountAgg(), new WindowResult())

        // key by the window end time
        .keyBy(value->value.windowEnd)
        // emit the top N items by click count in each window
        .process(new TopNHotItems(3))
        .print("HotItems");

        env.execute();
    }
}

class CountAgg implements AggregateFunction<UserBehavior,Long,Long> {

    // initial accumulator value
    @Override
    public Long createAccumulator() {
        return 0L;
    }

    // how one input element updates the accumulator
    @Override
    public Long add(UserBehavior userBehavior, Long aLong) {
        return aLong+1;
    }

    // extract the final result from the accumulator
    @Override
    public Long getResult(Long aLong) {
        return aLong;
    }

    // how two accumulators merge (used by merging windows, e.g. session windows)
    @Override
    public Long merge(Long aLong, Long acc1) {
        return aLong+acc1;
    }
}

class WindowResult implements WindowFunction<Long,ItemViewCount,Long, TimeWindow> {

    @Override
    public void apply(Long aLong, TimeWindow timeWindow, java.lang.Iterable<Long> iterable, Collector<ItemViewCount> collector) throws Exception {
        collector.collect(new ItemViewCount(aLong,timeWindow.getEnd(),iterable.iterator().next()));
    }
}

class TopNHotItems extends KeyedProcessFunction<Long,ItemViewCount,String> {

    public int n;
    // a list state holding all ItemViewCounts for the current window
    public ListState<ItemViewCount> itemState;

    public TopNHotItems(int n) {
        this.n = n;
    }

    // open() runs once, before the first call to processElement()
    @Override
    public void open(Configuration parameters) throws Exception {
        itemState = getRuntimeContext().getListState(new ListStateDescriptor<ItemViewCount>("itemState",ItemViewCount.class));
    }

    @Override
    public void processElement(ItemViewCount itemViewCount, Context context, Collector<String> collector) throws Exception {
        itemState.add(itemViewCount);
        // Register an event-time timer at windowEnd + 1 ms. When it fires, all results
        // for this windowEnd have been collected and can be sorted together.
        context.timerService().registerEventTimeTimer(itemViewCount.windowEnd+1);
    }

    // called when the timer fires
    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        // all data for the window has arrived; copy it into a List
        List<ItemViewCount> allItems = new ArrayList<>();
        for (ItemViewCount item : itemState.get()) {
            allItems.add(item);
        }
        // clear the state
        itemState.clear();
        // sort by count in descending order, then keep only the top n
        allItems.sort((o1, o2) -> Long.compare(o2.count, o1.count));
        List<ItemViewCount> topItems = allItems.subList(0, Math.min(n, allItems.size()));

        StringBuilder result = new StringBuilder();
        result.append("======================================================\n");
        // the timer was registered at windowEnd + 1 ms, so subtract 1 to recover the exact window end time
        result.append("time: ").append(new Timestamp(timestamp - 1)).append("\n");
        for(ItemViewCount elem : topItems) result.append(elem.toString());
        result.append("\n");
        result.append("======================================================\n");
        out.collect(result.toString());
    }
}

The first module's code is shown in full detail; for the later modules only the core parts appear in the article.

SWITCHING THE DATA SOURCE TO KAFKA

Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("group.id", "consumer-group");
properties.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.setProperty("auto.offset.reset", "latest");
DataStreamSource<String> stringDataStreamSource = env.addSource(new FlinkKafkaConsumer<>("hotItem", new SimpleStringSchema(), properties));

A CUSTOM KAFKA PRODUCER

It can read records from a file and keep sending them, which is handy for testing

public class MyKafkaProducer {

    public static void main(String[] args) throws IOException {
        write2kafka("hotItem");
    }

    public static void write2kafka(String topic) throws IOException {
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "192.168.166.200:9092");
        properties.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.setProperty("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(properties);
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(new FileInputStream("data/UserBehavior.csv")));
        // send the file line by line, then clean up
        String s;
        while ((s = bufferedReader.readLine()) != null) {
            producer.send(new ProducerRecord<>(topic, s));
        }
        bufferedReader.close();
        producer.close();
    }
}

Real-Time Traffic Statistics

Hot page view statistics

Every 5 seconds, output the top N most visited URLs over the last 10 minutes.
The pattern is really the same as above; a sketch follows
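
Since the pattern is identical to the hot-items module, a minimal pipeline sketch would look as follows (ApacheLogEvent, UrlViewCount, PageCountAgg, PageWindowResult, and TopNHotUrls are hypothetical names mirroring the hot-items classes; eventTime is assumed to already be in milliseconds):

logEventStream
        .assignTimestampsAndWatermarks(WatermarkStrategy
                .<ApacheLogEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((event, ts) -> event.eventTime))
        .keyBy(value -> value.url)                      // key by URL instead of itemId
        .timeWindow(Time.minutes(10), Time.seconds(5))  // 10-minute window, 5-second slide
        .aggregate(new PageCountAgg(), new PageWindowResult())
        .keyBy(value -> value.windowEnd)
        .process(new TopNHotUrls(5))                    // top 5 URLs per window
        .print();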

Total site page views (PV)

Count pv per hour
It is essentially a word count; a sketch follows
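
As a minimal sketch, reusing the userBehaviorWatermark stream from the hot-items code above (this fragment is my own illustration, not necessarily the project's exact code):

userBehaviorWatermark
        .filter(value -> value.behavior.equals("pv"))
        .map(new MapFunction<UserBehavior, Tuple2<String, Long>>() {
            @Override
            public Tuple2<String, Long> map(UserBehavior value) {
                return new Tuple2<>("pv", 1L); // constant key, count 1 -- word-count style
            }
        })
        .keyBy(value -> value.f0)
        .timeWindow(Time.hours(1)) // hourly tumbling window
        .sum(1)                    // sum up field 1, the counts
        .print();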

Unique visitor count (UV)

This involves deduplication. Flink itself has no distinct operator, which is a bit unexpected; in this scenario there are several ways to deduplicate.
Before showing them, one API point is worth noting: WindowedStream and AllWindowedStream both ultimately produce a DataStream, and apply() works on both keyed and non-keyed windows. It is not as powerful as process(), but still quite useful.

Method 1

Deduplicate with a Set

public class UniqueVisitor {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStreamSource<String> stringDataStreamSource = env.readTextFile("data/UserBehavior.csv");

        SingleOutputStreamOperator<UserBehavior> userBehaviorStream = stringDataStreamSource.map(new MapFunction<String, UserBehavior>() {
            @Override
            public UserBehavior map(String s) throws Exception {
                String[] split = s.split(",");
                return new UserBehavior(Long.parseLong(split[0]), Long.parseLong(split[1]), Integer.parseInt(split[2]), split[3], Long.parseLong(split[4]));
            }
        });

        SingleOutputStreamOperator<UserBehavior> userBehaviorWatermark = userBehaviorStream.assignTimestampsAndWatermarks(WatermarkStrategy
                .<UserBehavior>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((event, timeStamp) -> event.timeStamp * 1000));

        userBehaviorWatermark.filter(value -> value.behavior.equals("pv"))
                .timeWindowAll(Time.minutes(60))
                .apply(new UvCountByWindow())
                .print();

        env.execute();
    }
}

class UvCountByWindow implements AllWindowFunction<UserBehavior,UVCount, TimeWindow> {

    @Override
    public void apply(TimeWindow window, Iterable<UserBehavior> input, Collector<UVCount> out) {
        Set<Long> longs = new HashSet<>();
        for (UserBehavior next : input) {
            longs.add(next.userId);
        }
        out.collect(new UVCount(window.getEnd(),longs.size()));
    }
}
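
One thing to note: the UVCount output type used throughout this module is not defined in the article; a minimal POJO consistent with the calls above (my assumption) would be:

class UVCount {
    public long windowEnd; // end timestamp of the window
    public long count;     // number of unique visitors in the window

    public UVCount() {
    }

    public UVCount(long windowEnd, long count) {
        this.windowEnd = windowEnd;
        this.count = count;
    }

    @Override
    public String toString() {
        return "UVCount{windowEnd=" + windowEnd + ", count=" + count + '}';
    }
}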
Method 2

Use MapState; the idea is much the same as with a Set.
Note that this process function has no onTimer method: KeyedProcessFunction, as an extension of ProcessFunction, can obtain the timer's key in onTimer (via context.getCurrentKey()), whereas the ProcessAllWindowFunction used here is not such an extension.

public class UniqueVisitor {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStreamSource<String> stringDataStreamSource = env.readTextFile("data/UserBehavior.csv");

        SingleOutputStreamOperator<UserBehavior> userBehaviorStream = stringDataStreamSource.map(new MapFunction<String, UserBehavior>() {
            @Override
            public UserBehavior map(String s) throws Exception {
                String[] split = s.split(",");
                return new UserBehavior(Long.parseLong(split[0]), Long.parseLong(split[1]), Integer.parseInt(split[2]), split[3], Long.parseLong(split[4]));
            }
        });

        SingleOutputStreamOperator<UserBehavior> userBehaviorWatermark = userBehaviorStream.assignTimestampsAndWatermarks(WatermarkStrategy
                .<UserBehavior>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((event, timeStamp) -> event.timeStamp * 1000));

        userBehaviorWatermark.filter(value -> value.behavior.equals("pv"))
                .timeWindowAll(Time.minutes(60))
//                .apply(new UvCountByWindow())
                .process(new UvCountByProcess())
                .print();

        env.execute();
    }
}

class UvCountByProcess extends ProcessAllWindowFunction<UserBehavior, UVCount, TimeWindow> {

    public MapState<Long, Long> mapState;

    @Override
    public void open(Configuration parameters) throws Exception {
        mapState = getRuntimeContext().getMapState(new MapStateDescriptor<>("mapState",Long.class , Long.class));
    }

    @Override
    public void process(Context context, Iterable<UserBehavior> iterable, Collector<UVCount> collector) throws Exception {
        // cnt must be a local variable; as an instance field it would keep
        // accumulating across windows
        int cnt = 0;
        for(UserBehavior elem:iterable) {
            if(!mapState.contains(elem.userId)) {
                mapState.put(elem.userId,1L);
                cnt++;
            }
        }
        collector.collect(new UVCount(context.window().getEnd(),cnt));
        mapState.clear();
    }
}
Method 3: Bloom filter

Both methods above keep the elements in memory; with a very large data volume we can run out of resources, so here we use a Bloom filter.

public class UvWithBloomFilter {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStreamSource<String> stringDataStreamSource = env.readTextFile("data/UserBehavior.csv");

        SingleOutputStreamOperator<UserBehavior> userBehaviorStream = stringDataStreamSource.map(new MapFunction<String, UserBehavior>() {
            @Override
            public UserBehavior map(String s) throws Exception {
                String[] split = s.split(",");
                return new UserBehavior(Long.parseLong(split[0]), Long.parseLong(split[1]), Integer.parseInt(split[2]), split[3], Long.parseLong(split[4]));
            }
        });

        SingleOutputStreamOperator<UserBehavior> userBehaviorWatermark = userBehaviorStream.assignTimestampsAndWatermarks(WatermarkStrategy
                .<UserBehavior>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((event, timeStamp) -> event.timeStamp * 1000));

        userBehaviorWatermark.filter(value -> value.behavior.equals("pv"))
                .map(new MapFunction<UserBehavior, Tuple2<String,Long>>() {
                    @Override
                    public Tuple2<String, Long> map(UserBehavior userBehavior) throws Exception {
                        return new Tuple2<>("dummyKey",userBehavior.userId);
                    }
                })
                .keyBy(value->value.f0)
                .timeWindow(Time.minutes(60))
                // We shouldn't wait for the window to close before talking to Redis -- the
                // data volume may be too large to hold in the window's memory,
                // so we use the window-firing API: a trigger
                .trigger(new MyTrigger())
                .process(new UvCountWithBloom())
                .print();

        env.execute();
    }
}

class MyTrigger extends Trigger<Tuple2<String, Long>, TimeWindow> {

    @Override
    public TriggerResult onElement(Tuple2<String, Long> stringLongTuple2, long l, TimeWindow timeWindow, TriggerContext triggerContext) throws Exception {
        return TriggerResult.FIRE_AND_PURGE;
    }

    @Override
    public TriggerResult onProcessingTime(long l, TimeWindow timeWindow, TriggerContext triggerContext) throws Exception {
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long l, TimeWindow timeWindow, TriggerContext triggerContext) throws Exception {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow timeWindow, TriggerContext triggerContext) throws Exception {
    }
}

// A simple Bloom-filter-style hash; cap must be a power of two so that (cap - 1) works as a bit mask
class Bloom {
    public Long cap;

    public Bloom() {
    }

    public Bloom(Long cap) {
        this.cap = cap;
    }

    public Long hash(String value,int seed) {
        long result = 0;
        for(int i=0;i<value.length();i++) {
            result = result * seed + value.charAt(i);
        }
        return result & (cap-1);
    }

    @Override
    public String toString() {
        return "Bloom{" +
                "cap=" + cap +
                '}';
    }
}

class UvCountWithBloom extends ProcessWindowFunction<Tuple2<String, Long>, UVCount, String, TimeWindow> {

    // A Redis connection could be set up here in advance; omitted in this version.
    // We simulate the Redis bitmap with an in-memory map keyed by window end; the real
    // version should use an actual Redis bitmap -- see the original Scala repo for that.
    public Map<Long, Set<Long>> bitmaps = new HashMap<>();

    // the Bloom filter
    public Bloom bloom;

    @Override
    public void open(Configuration parameters) throws Exception {
        bloom = new Bloom(1L << 29); // capacity must be a power of two for the bit mask
    }

    @Override
    public void process(String s, Context context, Iterable<Tuple2<String, Long>> elements, Collector<UVCount> out) {

        long windowEnd = context.window().getEnd();
        // the trigger fires on every element, so the iterable holds exactly one record
        String userId = elements.iterator().next().f1.toString();
        // compute the hash -- the offset of this user's bit in the bitmap
        long hash = bloom.hash(userId, 61);
        // set the bit if it is not set yet; the UV count is the number of set bits
        Set<Long> bits = bitmaps.computeIfAbsent(windowEnd, k -> new HashSet<>());
        bits.add(hash);
        out.collect(new UVCount(windowEnd, bits.size()));
    }
}

Marketing Business Metrics

App Marketing Statistics

There are two main takeaways here. The first is the custom source, which is a great way to generate test data

class SimulateEventSource extends RichParallelSourceFunction<MarketingUserBehavior> {

    // flag marking whether the source should keep running
    Boolean running = true;
    // the candidate channels
    String[] channelSet = {"AppStore", "XiaomiStore", "HuaweiStore", "weibo", "wechat", "tieba"};
    // the candidate user behaviors
    String[] behaviorTypes = {"BROWSE", "CLICK", "PURCHASE", "UNINSTALL"};
    // random number generator
    Random rand = new Random();

    @Override
    public void run(SourceContext<MarketingUserBehavior> ctx) throws Exception {

        long count = 0;
        long max = Long.MAX_VALUE;

        while (count<max && running) {
            String userId = String.valueOf(rand.nextLong());
            String behaviorType = behaviorTypes[rand.nextInt(behaviorTypes.length)];
            String channel = channelSet[rand.nextInt(channelSet.length)];
            long timestamp = System.currentTimeMillis();
            ctx.collect(new MarketingUserBehavior(userId,behaviorType,channel,timestamp));
            count++;
            Thread.sleep(10); // throttle the source a little
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}

The other: in Java, when the key returned by keyBy is a complex type (like the nested Tuple2 here), if you don't want to define a POJO you should use a KeySelector, otherwise you may run into type-extraction errors.
Full code below

// Input type -- the Scala version's case class for a marketing user behavior event
// case class MarketingUserBehavior(userId: String, behavior: String, channel: String, timestamp: Long)
class MarketingUserBehavior {
    public String userId;
    public String behavior;
    public String channel;
    public Long timestamp;

    public MarketingUserBehavior(String userId, String behavior, String channel, Long timestamp) {
        this.userId = userId;
        this.behavior = behavior;
        this.channel = channel;
        this.timestamp = timestamp;
    }

    @Override
    public String toString() {
        return "MarketingUserBehavior{" +
                "userId='" + userId + '\'' +
                ", behavior='" + behavior + '\'' +
                ", channel='" + channel + '\'' +
                ", timestamp=" + timestamp +
                '}';
    }
}
// Output type -- the Scala version's case class for marketing view counts
// case class MarketingViewCount(windowStart: String, windowEnd: String, channel: String, behavior: String, count: Long)
class MarketingViewCount {
    public String windowStart;
    public String windowEnd;
    public String channel;
    public String behavior;
    public Long count;

    public MarketingViewCount(String windowStart, String windowEnd, String channel, String behavior, Long count) {
        this.windowStart = windowStart;
        this.windowEnd = windowEnd;
        this.channel = channel;
        this.behavior = behavior;
        this.count = count;
    }

    @Override
    public String toString() {
        return "MarketingViewCount{" +
                "windowStart='" + windowStart + '\'' +
                ", windowEnd='" + windowEnd + '\'' +
                ", channel='" + channel + '\'' +
                ", behavior='" + behavior + '\'' +
                ", count=" + count +
                '}';
    }
}

public class AppMarketingByChannel {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        // custom source
        DataStreamSource<MarketingUserBehavior> marketingUserBehaviorDataStreamSource = env.addSource(new SimulateEventSource());

        marketingUserBehaviorDataStreamSource.assignTimestampsAndWatermarks(WatermarkStrategy.<MarketingUserBehavior>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        .withTimestampAssigner((event,timeStamp)->event.timestamp))
        .filter(value-> !value.behavior.equals("UNINSTALL"))
        .map(new MapFunction<MarketingUserBehavior, Tuple2<Tuple2<String,String>,Long>>() {
            @Override
            public Tuple2<Tuple2<String,String>, Long> map(MarketingUserBehavior marketingUserBehavior) throws Exception {
                return new Tuple2<>(new Tuple2<>(marketingUserBehavior.channel, marketingUserBehavior.behavior), 1L);
            }
        })
        .keyBy(new KeySelector<Tuple2<Tuple2<String, String>, Long>, Tuple2<String, String>>() {
            @Override
            public Tuple2<String, String> getKey(Tuple2<Tuple2<String, String>, Long> tuple2LongTuple2) throws Exception {
                return tuple2LongTuple2.f0;
            }
        })
        .timeWindow(Time.minutes(60),Time.seconds(10))
        .process(new MarketingCountByChannel())
        .print();

        env.execute();
    }
}

class SimulateEventSource extends RichParallelSourceFunction<MarketingUserBehavior> {

    // flag marking whether the source should keep running
    Boolean running = true;
    // the candidate channels
    String[] channelSet = {"AppStore", "XiaomiStore", "HuaweiStore", "weibo", "wechat", "tieba"};
    // the candidate user behaviors
    String[] behaviorTypes = {"BROWSE", "CLICK", "PURCHASE", "UNINSTALL"};
    // random number generator
    Random rand = new Random();

    @Override
    public void run(SourceContext<MarketingUserBehavior> ctx) throws Exception {

        long count = 0;
        long max = Long.MAX_VALUE;

        while (count<max && running) {
            String userId = String.valueOf(rand.nextLong());
            String behaviorType = behaviorTypes[rand.nextInt(behaviorTypes.length)];
            String channel = channelSet[rand.nextInt(channelSet.length)];
            long timestamp = System.currentTimeMillis();
            ctx.collect(new MarketingUserBehavior(userId,behaviorType,channel,timestamp));
            count++;
            Thread.sleep(10); // throttle the source a little
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}

class MarketingCountByChannel extends ProcessWindowFunction<Tuple2<Tuple2<String,String>,Long>,MarketingViewCount,Tuple2<String,String>, TimeWindow> {

    @Override
    public void process(Tuple2<String, String> stringStringTuple2, Context context, Iterable<Tuple2<Tuple2<String, String>, Long>> elements, Collector<MarketingViewCount> out) throws Exception {

        // get the window's start and end (long) from the context
        // context.window().getStart() is a long, so convert it to a String
        String startTs = String.valueOf(context.window().getStart());
        String endTs = String.valueOf(context.window().getEnd());

        // the channel
        String channel = stringStringTuple2.f0;
        // the behavior
        String behaviorType = stringStringTuple2.f1;
        // count the elements in this window
        long count = 0;
        for (Tuple2<Tuple2<String, String>, Long> element : elements) {
            count++;
        }

        // emit the result
        out.collect(new MarketingViewCount(startTs, endTs, channel, behaviorType, count));
    }
}

Page Ad Analysis

Click counts by province

Black-list filtering

Compared with the previous feature there is one extra filtering step; the concrete filtering rule is up to the requirements
This uses the side output getSideOutput -- a nice real-world example to learn from

// Warning type emitted to the side output -- the Scala version's case class
// case class BlackListWarning(userId:Long,adId:Long,msg:String)
class BlackListWarning {
    public long userId;
    public long adId;
    public String msg;

    public BlackListWarning(long userId, long adId, String msg) {
        this.userId = userId;
        this.adId = adId;
        this.msg = msg;
    }

    @Override
    public String toString() {
        return "BlackListWarning{" +
                "userId=" + userId +
                ", adId=" + adId +
                ", msg='" + msg + '\'' +
                '}';
    }
}

public class AdAnalysisByProvinceBlack {
    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        env.setParallelism(1);

        DataStreamSource<String> stringDataStreamSource = env.readTextFile("data/AdClickLog.csv");

        SingleOutputStreamOperator<AdClickEvent> adLogStream = stringDataStreamSource.map(new MapFunction<String, AdClickEvent>() {
            @Override
            public AdClickEvent map(String s) throws Exception {
                String[] s1 = s.split(",");
                return new AdClickEvent(Long.parseLong(s1[0]), Long.parseLong(s1[1]), s1[2], s1[3], Long.parseLong(s1[4]));
            }
        })
        .assignTimestampsAndWatermarks(WatermarkStrategy.<AdClickEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        .withTimestampAssigner((event, timestamp) -> event.timestamp * 1000));

        SingleOutputStreamOperator<AdClickEvent> filterBlackListStream = adLogStream.keyBy(new KeySelector<AdClickEvent, Tuple2<Long, Long>>() {
            @Override
            public Tuple2<Long, Long> getKey(AdClickEvent adClickEvent) throws Exception {
                return new Tuple2<>(adClickEvent.userId, adClickEvent.adId);
            }
        })
        .process(new FilterBlackList(100L));

        SingleOutputStreamOperator<String> process = filterBlackListStream
                .keyBy(value -> value.province)
                .timeWindow(Time.minutes(60), Time.seconds(5))
                .process(new AdCount());

//        process.print();

        filterBlackListStream.getSideOutput(new OutputTag<BlackListWarning>("BlackListOutputTag"){}).print();

        env.execute();
    }
}

class FilterBlackList extends KeyedProcessFunction<Tuple2<Long, Long>, AdClickEvent, AdClickEvent> {

    public long cnt;

    public FilterBlackList(long cnt) {
        this.cnt = cnt;
    }

    ValueState<Long> count;
    ValueState<Boolean> state;

    // State holding this user's click count on this ad
    // lazy val countState:ValueState[Long] = getRuntimeContext.getState(new ValueStateDescriptor[Long]("count",classOf[Long]))
    // Flag marking whether this user has already been reported to the black list
    // lazy val isSendState:ValueState[Boolean] = getRuntimeContext.getState(new ValueStateDescriptor[Boolean]("is-sent",classOf[Boolean]))

    @Override
    public void open(Configuration parameters) throws Exception {
        count = getRuntimeContext().getState(new ValueStateDescriptor<Long>("count", Long.class));
        state = getRuntimeContext().getState(new ValueStateDescriptor<Boolean>("is-sent", Boolean.class));
    }

    @Override
    public void processElement(AdClickEvent value, Context ctx, Collector<AdClickEvent> out) throws Exception {
        // read the current count from state
        Long value1 = count.value();
        // for the first event, register a timer for midnight of the next day (processing time) to clear the state
        if(value1==null || value1==0L) {
            count.update(0L);
            state.update(false);
            long ts = (ctx.timerService().currentProcessingTime()/(1000*60*60*24) + 1) * (1000*60*60*24);
            ctx.timerService().registerProcessingTimeTimer(ts);
        }
        // if the count exceeds the limit and no warning has been emitted yet, raise one
        if(count.value()>cnt) {
            if(!state.value()) {
                // emit to the side output
                ctx.output(new OutputTag<BlackListWarning>("BlackListOutputTag"){},new BlackListWarning(value.userId,value.adId,"click over "+cnt+" times today"));
                // mark this user as black-listed
                state.update(true);
            }
            return;
        }
        count.update(count.value()+1);
        out.collect(value);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<AdClickEvent> out) throws Exception {
        count.clear();
        state.clear();
    }
}
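
The AdClickEvent POJO and the AdCount window function are referenced above but not shown in the article; minimal sketches consistent with that usage (hypothetical, matching the CSV parsing and the keying by province) might be:

// userId, adId, province, city, timestamp (in seconds), matching the split above
class AdClickEvent {
    public long userId;
    public long adId;
    public String province;
    public String city;
    public long timestamp;

    public AdClickEvent() {
    }

    public AdClickEvent(long userId, long adId, String province, String city, long timestamp) {
        this.userId = userId;
        this.adId = adId;
        this.province = province;
        this.city = city;
        this.timestamp = timestamp;
    }
}

// emits one line per province and window: province, window end, click count
class AdCount extends ProcessWindowFunction<AdClickEvent, String, String, TimeWindow> {

    @Override
    public void process(String province, Context context, Iterable<AdClickEvent> elements, Collector<String> out) {
        long count = 0;
        for (AdClickEvent ignored : elements) count++;
        out.collect(province + ", windowEnd: " + new Timestamp(context.window().getEnd()) + ", count: " + count);
    }
}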

Malicious Login Monitoring

Method 1

The most straightforward approach

// Input login event -- the Scala version's case class
// case class LoginEvent(userId: Long, ip: String, eventType: String, eventTime: Long)
class LoginEvent {
    public long userId;
    public String ip;
    public String eventType;
    public long eventTime;

    public LoginEvent(long userId, String ip, String eventType, long eventTime) {
        this.userId = userId;
        this.ip = ip;
        this.eventType = eventType;
        this.eventTime = eventTime;
    }

    @Override
    public String toString() {
        return "LoginEvent{" +
                "userId=" + userId +
                ", ip='" + ip + '\'' +
                ", eventType='" + eventType + '\'' +
                ", eventTime=" + eventTime +
                '}';
    }
}

// Output warning -- the Scala version's case class
// case class Warning(userId: Long, firstFailTime: Long, lastFailTime: Long, warningMsg: String)
class Warning {
    public long userId;
    public long firstFailTime;
    public long lastFailTime;
    public String warningMsg;

    public Warning(long userId, long firstFailTime, long lastFailTime, String warningMsg) {
        this.userId = userId;
        this.firstFailTime = firstFailTime;
        this.lastFailTime = lastFailTime;
        this.warningMsg = warningMsg;
    }

    @Override
    public String toString() {
        return "Warning{" +
                "userId=" + userId +
                ", firstFailTime=" + firstFailTime +
                ", lastFailTime=" + lastFailTime +
                ", warningMsg='" + warningMsg + '\'' +
                '}';
    }
}

public class LoginFailTwo {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStreamSource<String> stringDataStreamSource = env.readTextFile("data/LoginLog.csv");
        stringDataStreamSource.map(new MapFunction<String, LoginEvent>() {
            @Override
            public LoginEvent map(String s) throws Exception {
                String[] split = s.split(",");
                return new LoginEvent(Long.parseLong(split[0]), split[1], split[2], Long.parseLong(split[3]));
            }
        })
                .assignTimestampsAndWatermarks(WatermarkStrategy.<LoginEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner((event, timestamp) -> event.eventTime * 1000))
                .keyBy(value -> value.userId)
                .process(new LoginWarning())
                .print();

        env.execute();
    }
}

class LoginWarning extends KeyedProcessFunction<Long, LoginEvent, Warning> {

    ListState<LoginEvent> log;

    @Override
    public void open(Configuration parameters) throws Exception {
        log = getRuntimeContext().getListState(new ListStateDescriptor<LoginEvent>("log", LoginEvent.class));
    }

    @Override
    public void processElement(LoginEvent value, Context ctx, Collector<Warning> out) throws Exception {
        if(value.eventType.equals("fail")) {
            // 先获取之前失败的事件
            Iterator<LoginEvent> iterator = log.get().iterator();
            if(iterator.hasNext()) {
                // 如果之前已经有失败的事件,就做判断,如果没有就把当前失败事件保存进state
                LoginEvent next = iterator.next();
                if (value.eventTime < next.eventTime + 2){
                    out.collect(new Warning( value.userId,next.eventTime,value.eventTime,"在2秒内连续两次登录失败。"));
                }

                // 更新最近一次的登录失败事件,保存在状态里
                log.clear();
            }
            // 如果是第一次登录失败,之前把当前记录 保存至 state
            log.add(value);
        } else {
            // 当前登录状态 不为 fail,则直接清除状态
            log.clear();
        }
    }
}

This has two big problems: the 2-second window is hard-coded and cannot be changed, and it does not properly handle an out-of-order event stream. This is where Flink's CEP comes in

FLINKCEP - COMPLEX EVENT PROCESSING FOR FLINK

FlinkCEP is the Complex Event Processing (CEP) library implemented on top of Flink.

It allows you to detect event patterns in an endless stream of events, giving you the opportunity to get hold of what’s important in your data.

One or more streams of simple events are matched against simple rules, and the output is the data the user wants: complex events that satisfy the rules.

Method 2: CEP
public class LoginFailWithCep {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStreamSource<String> stringDataStreamSource = env.readTextFile("data/LoginLog.csv");
        KeyedStream<LoginEvent, Long> loginEventStream = stringDataStreamSource.map(new MapFunction<String, LoginEvent>() {
            @Override
            public LoginEvent map(String s) throws Exception {
                String[] split = s.split(",");
                return new LoginEvent(Long.parseLong(split[0]), split[1], split[2], Long.parseLong(split[3]));
            }
        })
                .assignTimestampsAndWatermarks(WatermarkStrategy.<LoginEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner((event, timestamp) -> event.eventTime * 1000))
                .keyBy(value -> value.userId);

        // define the pattern to match
        Pattern<LoginEvent, LoginEvent> pattern = Pattern.<LoginEvent>begin("begin").where(new SimpleCondition<LoginEvent>() {
            @Override
            public boolean filter(LoginEvent loginEvent) throws Exception {
                return loginEvent.eventType.equals("fail");
            }
        })
                .next("next")
                .where(new SimpleCondition<LoginEvent>() {
                    @Override
                    public boolean filter(LoginEvent loginEvent) throws Exception {
                        return loginEvent.eventType.equals("fail");
                    }
                })
                .within(Time.seconds(2));

        // apply the pattern to the input stream to get a PatternStream
        PatternStream<LoginEvent> patternStream = CEP.pattern(loginEventStream, pattern);

        // use select() to extract the event sequences that match the pattern
        SingleOutputStreamOperator<Warning> select = patternStream.select(new LoginFailMatch());
        select.print();

        env.execute();
    }
}

class LoginFailMatch implements PatternSelectFunction<LoginEvent,Warning> {

    @Override
    public Warning select(Map<String, List<LoginEvent>> map) throws Exception {
        LoginEvent begin = map.get("begin").iterator().next();
        LoginEvent next = map.get("next").iterator().next();
        return new Warning(begin.userId,begin.eventTime,next.eventTime,"two consecutive login failures within 2 seconds.");
    }
}

This is a very simple CEP example, just a first taste; there are many more details to pay attention to in practice

Real-Time Order Payment Monitoring

On an e-commerce site, order payment is the step directly tied to revenue, so it is very important in the business flow. To keep the business flow correct and to increase users' willingness to pay, sites usually set a payment expiration time, and orders unpaid past that time are cancelled. In addition, for order payments we should also verify that the user's payment is correct, which can be done as a real-time reconciliation against the third-party payment platform's transaction data. In what follows we implement both requirements.

On an e-commerce platform, what ultimately creates revenue and profit is the moment users place orders and, more specifically, actually complete payment. Placing an order signals demand for an item, but in reality not every order is paid immediately, and the longer a user delays, the less likely they are to pay. So, to create a sense of urgency and raise the payment conversion rate, and also to guard against security risks in the payment step, e-commerce sites often monitor the order status and set an expiration time (say 15 minutes); if the order is still unpaid after that, it is cancelled.

Method 1: CEP

The main thing to learn here is this overload of CEP's select function; its source is below

/**
	 * Applies a select function to the detected pattern sequence. For each pattern sequence the
	 * provided {@link PatternSelectFunction} is called. The pattern select function can produce
	 * exactly one resulting element.
	 *
	 * <p>Applies a timeout function to a partial pattern sequence which has timed out. For each
	 * partial pattern sequence the provided {@link PatternTimeoutFunction} is called. The pattern
	 * timeout function can produce exactly one resulting element.
	 *
	 * <p>You can get the stream of timed-out data resulting from the
	 * {@link SingleOutputStreamOperator#getSideOutput(OutputTag)} on the
	 * {@link SingleOutputStreamOperator} resulting from the select operation
	 * with the same {@link OutputTag}.
	 *
	 * @param timedOutPartialMatchesTag {@link OutputTag} that identifies side output with timed out patterns
	 * @param patternTimeoutFunction The pattern timeout function which is called for each partial
	 *                               pattern sequence which has timed out.
	 * @param patternSelectFunction The pattern select function which is called for each detected
	 *                              pattern sequence.
	 * @param <L> Type of the resulting timeout elements
	 * @param <R> Type of the resulting elements
	 * @return {@link DataStream} which contains the resulting elements with the resulting timeout
	 * elements in a side output.
	 */
	public <L, R> SingleOutputStreamOperator<R> select(
    // the key part is the following parameters
			final OutputTag<L> timedOutPartialMatchesTag,
			final PatternTimeoutFunction<T, L> patternTimeoutFunction,
			final PatternSelectFunction<T, R> patternSelectFunction) {

		final TypeInformation<R> rightTypeInfo = TypeExtractor.getUnaryOperatorReturnType(
			patternSelectFunction,
			PatternSelectFunction.class,
			0,
			1,
			TypeExtractor.NO_INDEX,
			builder.getInputType(),
			null,
			false);

		return select(
			timedOutPartialMatchesTag,
			patternTimeoutFunction,
			rightTypeInfo,
			patternSelectFunction);
	}

As you can see, select() can also emit the timed-out event stream (via a side output) -- more powerful than you might expect

// Input order event -- the Scala version's case class
// case class OrderEvent(orderId: Long, eventType: String, eventTime: Long)

// Output order-check result -- the Scala version's case class
// case class OrderResult(orderId: Long, resultMsg: String)
class OrderResult {
    public Long orderId;
    public String resultMsg;

    public OrderResult(Long orderId, String resultMsg) {
        this.orderId = orderId;
        this.resultMsg = resultMsg;
    }

    public OrderResult() {
    }

    @Override
    public String toString() {
        return "OrderResult{" +
                "orderId=" + orderId +
                ", resultMsg='" + resultMsg + '\'' +
                '}';
    }
}

class OrderEvent {
    public Long orderId;
    public String eventType;
    public Long eventTime;

    public OrderEvent() {
    }

    public OrderEvent(Long orderId, String eventType, Long eventTime) {
        this.orderId = orderId;
        this.eventType = eventType;
        this.eventTime = eventTime;
    }

    @Override
    public String toString() {
        return "OrderEvent{" +
                "orderId=" + orderId +
                ", eventType='" + eventType + '\'' +
                ", eventTime=" + eventTime +
                '}';
    }
}

public class OrderTimeout {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStreamSource<String> stringDataStreamSource = env.readTextFile("data/OrderLog.csv");
        SingleOutputStreamOperator<OrderEvent> orderEventStream = stringDataStreamSource.map(new MapFunction<String, OrderEvent>() {
            @Override
            public OrderEvent map(String s) throws Exception {
                String[] split = s.split(",");
                return new OrderEvent(Long.parseLong(split[0].trim()), split[1].trim(), Long.parseLong(split[3].trim()));
            }
        })
                .assignTimestampsAndWatermarks(WatermarkStrategy.<OrderEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner((event, timestamp) -> event.eventTime * 1000));

        KeyedStream<OrderEvent, Long> keyedStream = orderEventStream.keyBy(value -> value.orderId);

        // define the match pattern: a create event followed by a pay event within 15 minutes
        Pattern<OrderEvent, OrderEvent> orderPayPattern = Pattern.<OrderEvent>begin("begin").where(new SimpleCondition<OrderEvent>() {
            @Override
            public boolean filter(OrderEvent orderEvent) throws Exception {
                return orderEvent.eventType.equals("create");
            }
        }).followedBy("follow").where(new SimpleCondition<OrderEvent>() {
            @Override
            public boolean filter(OrderEvent orderEvent) throws Exception {
                return orderEvent.eventType.equals("pay");
            }
        }).within(Time.minutes(15));

        // apply the pattern to the stream
        PatternStream<OrderEvent> pattern = CEP.pattern(keyedStream, orderPayPattern);

        // select the matched sequences; timed-out partial matches go to a side output as alerts
        OutputTag<OrderResult> orderTimeout = new OutputTag<OrderResult>("orderTimeout"){};
        SingleOutputStreamOperator<OrderResult> select = pattern.select(orderTimeout, new OrderTimeoutSelect(), new OrderPaySelect());

        select.print();
        select.getSideOutput(orderTimeout).print();

        env.execute();
    }
}

class OrderTimeoutSelect implements PatternTimeoutFunction<OrderEvent,OrderResult> {

    @Override
    public OrderResult timeout(Map<String, List<OrderEvent>> pattern, long timeoutTimestamp) throws Exception {
        Long begin = pattern.get("begin").iterator().next().orderId;
        return new OrderResult(begin,"timeout");
    }
}

class OrderPaySelect implements PatternSelectFunction<OrderEvent, OrderResult> {

    @Override
    public OrderResult select(Map<String, List<OrderEvent>> pattern) throws Exception {
        Long follow = pattern.get("follow").iterator().next().orderId;
        return new OrderResult(follow,"paid successfully");
    }
}
Method 2: state programming directly

For this kind of problem, once the logic is clear it's straightforward, so I won't write it out in full; a rough sketch follows
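
For reference, here is a minimal sketch of the state-based alternative (my own illustration rather than the original code; it reuses OrderEvent and OrderResult, keyed by orderId as in the CEP version):

class OrderTimeoutDetect extends KeyedProcessFunction<Long, OrderEvent, OrderResult> {

    ValueState<Boolean> payedState; // whether this order's pay event has been seen
    ValueState<Long> timerState;    // the registered timer's timestamp

    @Override
    public void open(Configuration parameters) throws Exception {
        payedState = getRuntimeContext().getState(new ValueStateDescriptor<>("payed", Boolean.class));
        timerState = getRuntimeContext().getState(new ValueStateDescriptor<>("timer", Long.class));
    }

    @Override
    public void processElement(OrderEvent value, Context ctx, Collector<OrderResult> out) throws Exception {
        if (value.eventType.equals("create")) {
            // wait at most 15 minutes of event time for the pay event
            long ts = value.eventTime * 1000L + 15 * 60 * 1000L;
            ctx.timerService().registerEventTimeTimer(ts);
            timerState.update(ts);
        } else if (value.eventType.equals("pay")) {
            payedState.update(true);
            if (timerState.value() != null) {
                ctx.timerService().deleteEventTimeTimer(timerState.value());
            }
            out.collect(new OrderResult(value.orderId, "paid successfully"));
            timerState.clear();
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<OrderResult> out) throws Exception {
        // the timer only fires if the pay never arrived in time
        if (payedState.value() == null || !payedState.value()) {
            out.collect(new OrderResult(ctx.getCurrentKey(), "timeout"));
        }
        payedState.clear();
        timerState.clear();
    }
}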

Matching Order Transactions Across Two Streams

For an order payment event, the user finishing the payment is not actually the end of the story: we still have to confirm the money arrived in the platform's account. That confirmation usually comes from a different log, so we read two streams and merge them. Here we use connect to join the two streams and process them with a custom CoProcessFunction

Method 1: CONNECT + COPROCESS

This uses connect between the two streams


// Input types -- the Scala version's case classes
//  case class ReceiptEvent(txId:String, payChannel:String, timestamp:Long)
//  case class OrderEvent(orderId:Long, eventType:String, txId:String, eventTime:Long)

class ReceiptEvent {
    public String txId;
    public String payChannel;
    public Long timestamp;

    public ReceiptEvent(String txId, String payChannel, Long timestamp) {
        this.txId = txId;
        this.payChannel = payChannel;
        this.timestamp = timestamp;
    }

    @Override
    public String toString() {
        return "ReceiptEvent{" +
                "txId='" + txId + '\'' +
                ", payChannel='" + payChannel + '\'' +
                ", timestamp=" + timestamp +
                '}';
    }
}

class OrderEvent2 {
    public Long orderId;
    public String eventType;
    public String txId;
    public Long eventTime;

    public OrderEvent2(Long orderId, String eventType, String txId, Long eventTime) {
        this.orderId = orderId;
        this.eventType = eventType;
        this.txId = txId;
        this.eventTime = eventTime;
    }

    @Override
    public String toString() {
        return "OrderEvent2{" +
                "orderId=" + orderId +
                ", eventType='" + eventType + '\'' +
                ", txId='" + txId + '\'' +
                ", eventTime=" + eventTime +
                '}';
    }
}

public class OrderPayTxMatch {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStreamSource<String> stringDataStreamSource = env.readTextFile("data/OrderLog.csv");
        SingleOutputStreamOperator<OrderEvent2> orderEventStream = stringDataStreamSource.map(new MapFunction<String, OrderEvent2>() {
            @Override
            public OrderEvent2 map(String s) throws Exception {
                String[] split = s.split(",");
                return new OrderEvent2(Long.parseLong(split[0].trim()), split[1].trim(), split[2].trim(), Long.parseLong(split[3].trim()));
            }
        })
                .assignTimestampsAndWatermarks(WatermarkStrategy.<OrderEvent2>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner((event, timestamp) -> event.eventTime * 1000));

        KeyedStream<OrderEvent2, String> keyedStream = orderEventStream.keyBy(value -> value.txId);

        DataStreamSource<String> stringDataStreamSource2 = env.readTextFile("data/ReceiptLog.csv");
        SingleOutputStreamOperator<ReceiptEvent> receiptEventStream = stringDataStreamSource2.map(new MapFunction<String, ReceiptEvent>() {
            @Override
            public ReceiptEvent map(String s) throws Exception {
                String[] split = s.split(",");
                return new ReceiptEvent(split[0].trim(), split[1].trim(), Long.parseLong(split[2].trim()));
            }
        })
                .assignTimestampsAndWatermarks(WatermarkStrategy.<ReceiptEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner((event, timestamp) -> event.timestamp * 1000));

        KeyedStream<ReceiptEvent, String> keyedStream2 = receiptEventStream.keyBy(value -> value.txId);

        // define the side-output tags
        // OutputTag<OrderEvent2> outputTag = new OutputTag<OrderEvent2>("unmatchedPay") {};
        // OutputTag<ReceiptEvent> outputTag2 = new OutputTag<ReceiptEvent>("unmatchedRec") {};

        // connect the two streams and process the matching events
        SingleOutputStreamOperator<Tuple2<OrderEvent2, ReceiptEvent>> process = keyedStream.connect(keyedStream2).process(new OrderPayTxDetect());

        // print the results
        process.print();
        process.getSideOutput(new OutputTag<OrderEvent2>("unmatchedPay") {}).print();
        process.getSideOutput(new OutputTag<ReceiptEvent>("unmatchedRec") {}).print();

        env.execute();
    }
}

class OrderPayTxDetect extends CoProcessFunction<OrderEvent2, ReceiptEvent, Tuple2<OrderEvent2, ReceiptEvent>> {

    public ValueState<OrderEvent2> pay;
    public ValueState<ReceiptEvent> receipt;

    @Override
    public void open(Configuration parameters) throws Exception {
        pay = getRuntimeContext().getState(new ValueStateDescriptor<OrderEvent2>("pay", OrderEvent2.class));
        receipt = getRuntimeContext().getState(new ValueStateDescriptor<ReceiptEvent>("receipt", ReceiptEvent.class));

    }

    @Override
    public void processElement1(OrderEvent2 value, Context ctx, Collector<Tuple2<OrderEvent2, ReceiptEvent>> out) throws Exception {
        // a pay event arrived; check whether the matching receipt has already come
        ReceiptEvent value1 = receipt.value();
        if(value1!=null) {
            // the receipt is already here: emit the matched pair to the main stream
            out.collect(new Tuple2<>(value,value1));
            receipt.clear();
        } else {
            // the receipt hasn't come yet: save the pay event and register an
            // event-time timer to wait 5 seconds
            pay.update(value);
            ctx.timerService().registerEventTimeTimer(value.eventTime*1000L + 5000L);
        }
    }

    @Override
    public void processElement2(ReceiptEvent value, Context ctx, Collector<Tuple2<OrderEvent2, ReceiptEvent>> out) throws Exception {
        // a receipt arrived; check whether the matching pay has already come
        OrderEvent2 value1 = pay.value();
        if(value1!=null) {
            // the pay is already here: emit the matched pair to the main stream
            out.collect(new Tuple2<>(value1,value));
            pay.clear();
        } else {
            // the pay hasn't come yet: save the receipt and register an
            // event-time timer to wait 3 seconds
            receipt.update(value);
            ctx.timerService().registerEventTimeTimer(value.timestamp*1000L + 3000L);
        }
    }

    // the timer fires in two possible situations, so check which of pay/receipt is still set
    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Tuple2<OrderEvent2, ReceiptEvent>> out) throws Exception {
        // if pay is still set, the receipt never arrived: emit to unmatchedPay
        if (pay.value() != null){
            ctx.output(new OutputTag<OrderEvent2>("unmatchedPay") {},pay.value());
        }
        // if receipt is still set, the pay never arrived: emit to unmatchedRec
        if (receipt.value() != null){
            ctx.output(new OutputTag<ReceiptEvent>("unmatchedRec") {},receipt.value());
        }

        // clear the state
        pay.clear();
        receipt.clear();
    }
}
Method 2: JOIN
public class OrderPayTxMatchWithJoin {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStreamSource<String> stringDataStreamSource = env.readTextFile("data/OrderLog.csv");
        SingleOutputStreamOperator<OrderEvent2> orderEventStream = stringDataStreamSource.map(new MapFunction<String, OrderEvent2>() {
            @Override
            public OrderEvent2 map(String s) throws Exception {
                String[] split = s.split(",");
                return new OrderEvent2(Long.parseLong(split[0].trim()), split[1].trim(), split[2].trim(), Long.parseLong(split[3].trim()));
            }
        })
                .assignTimestampsAndWatermarks(WatermarkStrategy.<OrderEvent2>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner((event, timestamp) -> event.eventTime * 1000));

        KeyedStream<OrderEvent2, String> keyedStream = orderEventStream.keyBy(value -> value.txId);

        DataStreamSource<String> stringDataStreamSource2 = env.readTextFile("data/ReceiptLog.csv");
        SingleOutputStreamOperator<ReceiptEvent> receiptEventStream = stringDataStreamSource2.map(new MapFunction<String, ReceiptEvent>() {
            @Override
            public ReceiptEvent map(String s) throws Exception {
                String[] split = s.split(",");
                return new ReceiptEvent(split[0].trim(), split[1].trim(), Long.parseLong(split[2].trim()));
            }
        })
                .assignTimestampsAndWatermarks(WatermarkStrategy.<ReceiptEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        .withTimestampAssigner((event, timestamp) -> event.timestamp * 1000));

        KeyedStream<ReceiptEvent, String> keyedStream2 = receiptEventStream.keyBy(value -> value.txId);

        keyedStream.intervalJoin(keyedStream2)
                .between(Time.seconds(-5),Time.seconds(3))
                .process(new OrderPayTxDetectWithJoin()).print();

        env.execute();
    }
}

class OrderPayTxDetectWithJoin extends ProcessJoinFunction<OrderEvent2, ReceiptEvent, Tuple2<OrderEvent2, ReceiptEvent>> {

    @Override
    public void processElement(OrderEvent2 left, ReceiptEvent right, Context ctx, Collector<Tuple2<OrderEvent2, ReceiptEvent>> out) throws Exception {
        out.collect(new Tuple2<>(left,right));
    }
}

That wraps up E-commerce User Behavior Analysis with Flink [Java Rewrite]. Friends, like, bookmark, and comment away -- see you next time~~
Thanks again to 尚硅谷 (Atguigu) and Xiaoyuyu for the guidance and sharing. This article is for learning and exchange only, not for commercial use
