Flink

I. Requirements:

  • 1. Receive MQTT messages in real time with Flink.
  • 2. Based on each tenant's activation status in the tenant table, store the data into the corresponding HBase table; the measurement points of each device are not fixed.
  • 3. Based on the dynamic rule configuration for each device in the rules table, trigger alarms or events.
  • 4. Based on the energy table, obtain the location and energy type from the configuration, and save 7 hourly metrics per measurement point: max, min, average, sum, first value, last value, and first-last difference.
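The seven window metrics of requirement 4 can be sketched dependency-free in a single pass (illustrative only — the job itself computes them in a Flink window function, and treating the last metric as first minus last is an assumption here):

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the 7 metrics listed above, computed over one window's values:
// max, min, average, sum, first, last, and first-last difference.
public class SevenMetrics {
    public static double[] compute(List<Double> window) {
        double first = window.get(0), last = window.get(window.size() - 1);
        double max = first, min = first, sum = 0;
        for (double v : window) {
            max = Math.max(max, v);
            min = Math.min(min, v);
            sum += v;
        }
        double avg = sum / window.size();
        return new double[]{max, min, avg, sum, first, last, first - last};
    }

    public static void main(String[] args) {
        double[] m = compute(Arrays.asList(2.0, 5.0, 1.0, 4.0));
        System.out.println(Arrays.toString(m)); // [5.0, 1.0, 3.0, 12.0, 2.0, 4.0, -2.0]
    }
}
```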

II. Topics covered:

1. Overview:

  • custom source and sink;
  • Flink CDC configuration and a custom deserialization schema;
  • stream splitting with side outputs;
  • connecting and processing broadcast streams;
  • stateful processing and rule-match evaluation;
  • full-window processing with tumbling windows;
  • watermark handling

2. Details:

  • 1. A custom Flink MQTT source.
  • 2. Flink CDC captures MySQL binlog data in real time; a custom deserialization schema parses the binlog records into a dimension-config stream.
  • 3. The dimension-config stream is split with side outputs.
  • 4. The dimension configs are updated inside the Flink job in real time:
    • 1) the dimension stream is turned into a broadcast stream, which requires registering a MapStateDescriptor;
    • 2) the MQTT stream is connected with the dimension stream;
    • 3) the connected stream is handled by a BroadcastProcessFunction;
    • 4) processBroadcastElement handles the broadcast stream: depending on the change's operation type, it puts to or removes from the map state;
    • 5) processElement handles the MQTT stream: it filters data for storage, and matches rules to trigger alarms or events.
  • 5. A custom HBase sink: connect to or create the table by tenant id, build the rowkey from the device id and timestamp, and store each measurement point and its value from the JSON payload.

3. Difficulties:

(1) HBase data storage
(2) Rule-match triggering
1) How to pick up alarm-rule changes in real time, i.e. keep the dimension table up to date
  • First attempt: read the rules from Redis for every incoming message. But there are too many rules, the Redis reads are far too frequent, and any small network issue that blocks a Redis read stalls the whole job.
  • Second attempt: cache the Redis data in Flink and refresh it every 5 seconds. But the state then lags: a device may already be alarming while the job still acts on the previous state.
  • Final approach: keep the dimension data in MySQL, watch MySQL with Flink CDC, and update the dimension cache inside Flink in real time.
2) Keeping multiple rules per device in MapState

One device can have several rules, which appear as multiple rows in the MySQL table, so one device matches multiple dimension rows.

Solution:
Make the MapState value a String that holds JSON:

  • when a rule is added to the table, add a field to the JSON, with the rule id as key and the rule as value;
  • when a rule in the table is modified, read the MapState value, parse the string into JSON, and rewrite that field;
  • when a rule is deleted from the table, read the MapState value, parse the string into JSON, and delete that field.
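The add/modify/delete logic above can be sketched without Flink or fastjson; here a nested `HashMap` stands in for both the MapState and the JSON string (in the job, the inner map is what gets serialized into the state value — this is a dependency-free sketch, not the job's code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the per-device rule-set maintenance described above.
public class RuleStateSketch {
    // key: sysId + "/" + deviceId  ->  (ruleId -> rule definition)
    private final Map<String, Map<String, String>> state = new HashMap<>();

    public void addOrUpdateRule(String key, String ruleId, String rule) {
        // CREATE/UPDATE from the CDC stream: create the rule map on first use,
        // then put/overwrite the field keyed by the rule id
        state.computeIfAbsent(key, k -> new HashMap<>()).put(ruleId, rule);
    }

    public void removeRule(String key, String ruleId) {
        // DELETE from the CDC stream: drop the field; if no rules remain,
        // drop the whole entry (mirrors broadcastState.remove(key))
        Map<String, String> rules = state.get(key);
        if (rules == null) return;
        rules.remove(ruleId);
        if (rules.isEmpty()) state.remove(key);
    }

    public Map<String, String> rulesFor(String key) {
        return state.getOrDefault(key, new HashMap<>());
    }

    public static void main(String[] args) {
        RuleStateSketch s = new RuleStateSketch();
        s.addOrUpdateRule("sys1/dev1", "r1", "temp>30");
        s.addOrUpdateRule("sys1/dev1", "r2", "hum<20");
        s.addOrUpdateRule("sys1/dev1", "r1", "temp>35"); // modify rewrites the field
        System.out.println(s.rulesFor("sys1/dev1").size()); // 2
        s.removeRule("sys1/dev1", "r1");
        s.removeRule("sys1/dev1", "r2");
        System.out.println(s.rulesFor("sys1/dev1").isEmpty()); // true
    }
}
```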

III. Business logic:

1. Data storage

Based on the activation status in the MySQL tenant table, data is stored into the corresponding HBase table, named after the tenant.

2. Rule matching

MapState stores each device's rules: the key is sysID + deviceID, the value is the rule set

  • 1. Alarm: fires only once; until it is cleared it will not fire again.
  • 2. Event: fires every time the rule matches; no clearing needed.
  • 3. One device supports multiple rules.
  • 4. Alarms have three levels: minor warning, ordinary alarm, and severe alarm; severe-alarm data is also written to Kafka.
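A minimal sketch of the trigger semantics in points 1 and 2: an event fires on every match, while an alarm latches after firing once and only re-arms when cleared (class and method names are illustrative, not from the job):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the alarm/event trigger semantics above.
public class TriggerSemantics {
    private final Map<String, Boolean> alarmActive = new HashMap<>();

    /** Returns true if this rule match should produce a notification. */
    public boolean onMatch(String ruleId, boolean isEvent) {
        if (isEvent) return true;                                        // events fire every time
        if (Boolean.TRUE.equals(alarmActive.get(ruleId))) return false;  // alarm already active: stay silent
        alarmActive.put(ruleId, true);                                   // alarms fire once, then latch
        return true;
    }

    /** Clearing the alarm re-arms it. */
    public void clear(String ruleId) {
        alarmActive.put(ruleId, false);
    }

    public static void main(String[] args) {
        TriggerSemantics t = new TriggerSemantics();
        System.out.println(t.onMatch("r1", false)); // true  (first alarm)
        System.out.println(t.onMatch("r1", false)); // false (still active, not re-triggered)
        t.clear("r1");
        System.out.println(t.onMatch("r1", false)); // true  (re-armed after clearing)
        System.out.println(t.onMatch("e1", true));  // true  (events always fire)
    }
}
```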

IV. Code

1. Custom MQTT source

// 1. Get the Flink execution environment
 StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 env.setParallelism(1);

 // todo 1. Custom MQTT source: receive MQTT messages in real time
 DataStreamSource<String> mqttOrianStream = env.addSource(new MqttSource()).setParallelism(2);
 // Pre-process the MQTT data: validate the JSON and topic, convert the String into a Tuple4
 SingleOutputStreamOperator<Tuple4<String, String, String, String>> mqttStream = mqttOrianStream.map(new DataPreProcessFunc());
public class MqttSource implements ParallelSourceFunction<String> {
    private boolean isRunning = true;
    private MqttClient mqttClient;
    private boolean mc = true;

    @Override
    public void run(final SourceContext<String> sct) throws Exception {
        while (isRunning) {
            if (mc) {
                getMqttClient();
                // register the callback once; the client then delivers messages asynchronously
                mqttClient.setCallback(new MqttCallback() {
                    @Override
                    public void connectionLost(Throwable throwable) {
                        System.out.println("connectionLost");
                    }

                    @Override
                    public void messageArrived(String s, MqttMessage mqttMessage) throws Exception {
                        String msg = mqttMessage.toString();
                        // Validate the JSON payload. Some messages carry a trailing timestamp,
                        // e.g. {"1":"2"}mqttTime=1649832650049, and must still pass validation,
                        // so only the part before "mqttTime=" is parsed.
                        String payload = msg.contains("mqttTime=") ? msg.split("mqttTime=")[0] : msg;
                        boolean msgFlag = true;
                        try {
                            JSONObject.parseObject(payload);
                        } catch (Exception e) {
                            msgFlag = false;
                            System.out.println("bad msg==:" + msg);
                        }

                        if (msgFlag) {
                            sct.collect(s + "#" + msg); // emit: topic#json-payload
                        }
                    }

                    @Override
                    public void deliveryComplete(IMqttDeliveryToken iMqttDeliveryToken) {
                    }
                });
                mc = false;
            }
            Thread.sleep(1000); // keep the source thread alive without busy-spinning
        }
    }

    @Override
    public void cancel() {
        isRunning = false;
    }


    public void getMqttClient() {

        try {
            // host is the broker address; clientId uniquely identifies this MQTT client;
            // MemoryPersistence keeps the client state in memory (the default)
            mqttClient = new MqttClient("tcp://localhost:1883", "client11", new MemoryPersistence());
            // connection options
            MqttConnectOptions options = new MqttConnectOptions();
            // cleanSession=false would make the broker keep this client's session;
            // true means every connection starts as a fresh session
            options.setCleanSession(true);
            options.setUserName("admin");
            options.setPassword("123456".toCharArray());
            options.setConnectionTimeout(10); // connect timeout, in seconds
            // keep-alive interval in seconds: the broker checks the client roughly every
            // 1.5 * 20 s; this alone provides no reconnection mechanism
            options.setKeepAliveInterval(20);

            // connect
            mqttClient.connect(options);
            // subscribe
            mqttClient.subscribe("up/aass/#");

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
public class DataPreProcessFunc implements MapFunction<String, Tuple4<String, String, String, String>> {
    @Override
    public Tuple4<String, String, String, String> map(String msg) throws Exception {
        try {
            String[] msg1 = msg.split("#");
            String[] sysDev = msg1[0].split("/");
            String jsonTime = msg1[1];
            String sysId = sysDev[1];
            String deviceId = sysDev[2];
            String json = "";
            String mqttTime = "";
            if (jsonTime.contains("mqttTime=")) {
                // if the msg carries a timestamp, use the message's own time as the collection time
                String[] split = jsonTime.split("mqttTime=");
                json = split[0];
                mqttTime = split[1];
            } else {
                // otherwise, use the time at which this job processed the MQTT message
                json = jsonTime;
                mqttTime = System.currentTimeMillis() + "";
            }
            return Tuple4.of(sysId, deviceId, json, mqttTime);
        } catch (IndexOutOfBoundsException e) {
            e.printStackTrace();
        }
        return null;
    }
}
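The `mqttTime=` handling used in both `messageArrived` and `map` above can be exercised in isolation; this hypothetical helper mirrors the split (the sample values are made up):

```java
// Illustrative stand-alone version of the "mqttTime=" handling: messages arriving as
// {...json...}mqttTime=<epoch-millis> carry their own timestamp; otherwise the
// caller-supplied processing time is used.
public class MqttTimeSplit {
    public static String[] splitPayload(String msg, long nowMillis) {
        if (msg.contains("mqttTime=")) {
            String[] parts = msg.split("mqttTime=");
            return new String[]{parts[0], parts[1]};         // payload, embedded timestamp
        }
        return new String[]{msg, String.valueOf(nowMillis)}; // payload, processing time
    }

    public static void main(String[] args) {
        String[] withTs = splitPayload("{\"1\":\"2\"}mqttTime=1649832650049", 0L);
        System.out.println(withTs[0]); // {"1":"2"}
        System.out.println(withTs[1]); // 1649832650049
        String[] noTs = splitPayload("{\"1\":\"2\"}", 1700000000000L);
        System.out.println(noTs[1]);   // 1700000000000
    }
}
```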

2. Flink CDC

 // todo 2. Build the dimension-config stream with Flink CDC
        DebeziumSourceFunction<String> sourceFunction = MySqlSource.<String>builder()
                .hostname("node03")
                .port(3306)
                .username("root")
                .password("123456")
                .databaseList("iot")
              //  .tableList("iotRules")  // one or more specific tables can be listed
                .deserializer(new CustomerDeserializationSchema())
                .startupOptions(StartupOptions.initial())
                .build();
        // the config stream (dimension tables)
        DataStreamSource<String> tableProcessStrDS = env.addSource(sourceFunction);
public class CustomerDeserializationSchema implements DebeziumDeserializationSchema<String> {


    /**
     * {
     * "db":"",
     * "tableName":"",
     * "before":{"id":"1001","name":""...},
     * "after":{"id":"1001","name":""...},
     * "op":""
     * }
     */
    @Override
    public void deserialize(SourceRecord sourceRecord, Collector<String> collector) throws Exception {

        // JSON object holding the result
        JSONObject result = new JSONObject();

        // database and table name, taken from the record topic
        String topic = sourceRecord.topic();
        String[] fields = topic.split("\\.");
        result.put("db", fields[1]);
        result.put("tableName", fields[2]);

        // "before" image
        Struct value = (Struct) sourceRecord.value();
        Struct before = value.getStruct("before");
        JSONObject beforeJson = new JSONObject();
        if (before != null) {
            // column metadata
            Schema schema = before.schema();
            List<Field> fieldList = schema.fields();

            for (Field field : fieldList) {
                beforeJson.put(field.name(), before.get(field));
            }
        }
        result.put("before", beforeJson);

        // "after" image
        Struct after = value.getStruct("after");
        JSONObject afterJson = new JSONObject();
        if (after != null) {
            // column metadata
            Schema schema = after.schema();
            List<Field> fieldList = schema.fields();

            for (Field field : fieldList) {
                afterJson.put(field.name(), after.get(field));
            }
        }
        result.put("after", afterJson);

        // operation type
        Envelope.Operation operation = Envelope.operationFor(sourceRecord);
        result.put("op", operation);

        // emit
        collector.collect(result.toJSONString());

    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO;
    }
}
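The deserializer reads the database and table from the record topic, which for the Debezium MySQL connector has the shape `<server-name>.<db>.<table>`. A quick stand-alone check of the split (the server name here is illustrative):

```java
// Derives db and table name from a Debezium record topic, as the deserializer above does.
public class TopicSplit {
    public static String[] dbAndTable(String topic) {
        String[] fields = topic.split("\\.");           // split on literal dots
        return new String[]{fields[1], fields[2]};      // [db, table]
    }

    public static void main(String[] args) {
        String[] r = dbAndTable("mysql_binlog_source.iot.iotRules"); // illustrative server name
        System.out.println(r[0] + "/" + r[1]); // iot/iotRules
    }
}
```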

3. Splitting with side outputs

// todo 3. Split the stream with side outputs to get the three config-table streams
// The split operator (which divides one stream into several) is deprecated; SideOutput replaces it for splitting.
OutputTag<String> sysDataTag = new OutputTag<String>("sysData-tag"){};
OutputTag<String> rulesDataTag = new OutputTag<String>("rulesData-tag"){};
OutputTag<String> energDataTag = new OutputTag<String>("energData-tag"){};
SingleOutputStreamOperator<Object> tableProcessDataStream = tableProcessStrDS.process(new SplitProcessFunc(sysDataTag,rulesDataTag,energDataTag) );

DataStream<String> sysDataStream = tableProcessDataStream.getSideOutput(sysDataTag);
DataStream<String> rulesDataStream = tableProcessDataStream.getSideOutput(rulesDataTag);
DataStream<String> energDataStream = tableProcessDataStream.getSideOutput(energDataTag);
public class SplitProcessFunc extends ProcessFunction<String, Object> {

    private OutputTag<String> sysDataTag;
    private OutputTag<String> rulesDataTag;
    private OutputTag<String> energDataTag;

    public SplitProcessFunc(OutputTag<String> sysDataTag, OutputTag<String> rulesDataTag, OutputTag<String> energDataTag) {
        this.sysDataTag = sysDataTag;
        this.rulesDataTag = rulesDataTag;
        this.energDataTag = energDataTag;
    }

    //{"op":"CREATE","before":{},"after":{"rule_id":"11","sys_id":"22","start_rule":"44","device_id":"33","end_rule":"55","state":77,"type":66},"db":"iot","tableName":"iotRules"}
    @Override
    public void processElement(String value, Context context, Collector<Object> collector) throws Exception {

        // route the record by its table name
        JSONObject jsonObject = JSON.parseObject(value);

        String tableName = jsonObject.getString("tableName");

        if (tableName.equals("iotRules")) {
            System.out.println("=======iotRules=======");
            context.output(rulesDataTag, value);
        } else if (tableName.equals("iotSysID")) {
            System.out.println("=======iotSysID=======");
            context.output(sysDataTag, value);
        } else if (tableName.equals("iotEnergy")) {
            System.out.println("======iotEnergy========");
            context.output(energDataTag, value);
        }

    }
}
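The routing in `SplitProcessFunc` reduces to a table-name dispatch; a dependency-free sketch (the tag names match the OutputTag ids above; the `ignored` branch for unmatched tables is an assumption):

```java
// Illustrative re-statement of the side-output routing: each CDC record goes to the
// side output matching its source table.
public class RouteByTable {
    public static String route(String tableName) {
        switch (tableName) {
            case "iotRules":  return "rulesData-tag"; // rules table -> rule stream
            case "iotSysID":  return "sysData-tag";   // tenant table -> tenant stream
            case "iotEnergy": return "energData-tag"; // energy table -> energy stream
            default:          return "ignored";       // assumption: other tables are dropped
        }
    }

    public static void main(String[] args) {
        System.out.println(route("iotRules")); // rulesData-tag
        System.out.println(route("other"));    // ignored
    }
}
```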

Business 1. HBase data storage

  // todo 4. Business 1: check dynamically whether a tenant has data storage enabled, then store the data
        //  dynamic data filtering
        // needs a broadcast state, keyed by sysid (String -> tenant config json)
        MapStateDescriptor<String, String> sysMapStateDescriptor = new MapStateDescriptor<>("sysmap-state", String.class, String.class);
        BroadcastStream<String> sysBroadcastStream = sysDataStream.broadcast(sysMapStateDescriptor);

        BroadcastConnectedStream<Tuple4<String, String, String, String>, String> sysConnectStream = mqttStream.connect(sysBroadcastStream);
        SingleOutputStreamOperator<Tuple4<String, String, String, String>> SysBroadProcessFuncStream = sysConnectStream.process(new SysBroadProcessFunc(sysMapStateDescriptor));

        // the filtered data is written to HBase; the sink parses the Tuple4 to build the table name, rowkey, etc.
        SysBroadProcessFuncStream.addSink(new MyHBaseSinkFunction());

1) Connecting and processing the streams
public class SysBroadProcessFunc extends BroadcastProcessFunction<Tuple4<String, String, String, String>, String, Tuple4<String, String, String, String>> {
    private MapStateDescriptor<String, String> sysMapStateDescriptor;

    public SysBroadProcessFunc(MapStateDescriptor<String, String> sysMapStateDescriptor) {
        this.sysMapStateDescriptor = sysMapStateDescriptor;
    }

    @Override
    public void processElement(Tuple4<String, String, String, String> value, ReadOnlyContext readOnlyContext, Collector<Tuple4<String, String, String, String>> collector) throws Exception {


        ReadOnlyBroadcastState<String, String> broadcastState = readOnlyContext.getBroadcastState(sysMapStateDescriptor);
        String sys = value.f0;
        if (broadcastState.contains(sys)) {
            String sysBroad = broadcastState.get(sys);
            JSONObject jsonObject = JSON.parseObject(sysBroad);
            String dataState = jsonObject.getString("dataState");
            if (dataState.equals("1")) {
                collector.collect(value);
            } else {
                System.out.println("tenant " + sys + " has data storage disabled");
            }

        } else {
            System.out.println("unknown tenant: " + sys);
        }


    }

    @Override
    public void processBroadcastElement(String value, Context context, Collector<Tuple4<String, String, String, String>> collector) throws Exception {

        // update the broadcast state from the CDC record
        JSONObject jsonObject = JSON.parseObject(value);

        String op = jsonObject.getString("op");
        if (op.equals("DELETE")) {

            JSONObject before = jsonObject.getJSONObject("before");
            String sysid = before.getString("sysid");
            // remove the tenant from the broadcast state
            BroadcastState<String, String> broadcastState = context.getBroadcastState(sysMapStateDescriptor);
            broadcastState.remove(sysid);
            System.out.println("DELETE====sysid=====" + sysid);

        } else {

            JSONObject after = jsonObject.getJSONObject("after");
            String sysid = after.getString("sysid");
            // write the tenant config into the broadcast state
            BroadcastState<String, String> broadcastState = context.getBroadcastState(sysMapStateDescriptor);
            broadcastState.put(sysid, after + "");
            System.out.println("PUT====sysid=====" + sysid);

        }


    }
}

2) Output to HBase

public class MyHBaseSinkFunction extends RichSinkFunction<Tuple4<String, String, String, String>> {

    private transient Connection conn = null;
    private transient Table table = null;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        org.apache.hadoop.conf.Configuration conf = HBaseConfiguration.create();
        // connect to the cluster
        conf.set("hbase.zookeeper.quorum", "node01:2181,node02:2181,node03:2181");
      //  conf.set("zookeeper.znode.parent", "/hbase-unsecure");
        if (null == conn) {
            this.conn = ConnectionFactory.createConnection(conf);
            System.out.println("========================== HBase connected ====================================");
        }
    }

    @Override
    public void invoke(Tuple4<String, String, String, String> value, Context context) throws Exception {
        String sysId = value.f0;
        String deviceId = value.f1;
        String json = value.f2;
        String time = value.f3;

        JSONObject columnJson = JSONObject.parseObject(json);
        System.out.println("=====sysId====" + sysId);
        // table name
        TableName tableName = TableName.valueOf(sysId);
        // table handle
        table = conn.getTable(tableName);
        // rowkey: salt bucket (device-id hash mod 6) + timestamp + "." + deviceId
        int reg = Math.abs(deviceId.hashCode()) % 6;
        String rk = reg + "" + time + "." + deviceId;
        // one Put per measurement point in the JSON payload
        List<Row> list = new ArrayList<>();
        for (Map.Entry<String, Object> entry : columnJson.entrySet()) {
            String key = entry.getKey();
            String value1 = entry.getValue() + "";
            Put put = new Put(rk.getBytes()); // Put keyed by the rowkey
            put.addColumn("fdata".getBytes(), key.getBytes(), value1.getBytes());
            list.add(put);
        }
        Object[] results = new Object[list.size()];
        try {
            table.batch(list, results);
            System.out.println("write succeeded=============");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void close() throws Exception {
        super.close();

        if (table != null){
            table.close();
        }

        if (conn != null){
            conn.close();
        }

    }
}
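The rowkey built in `invoke` can be factored into a small helper for illustration (same salt-bucket scheme: `hash(deviceId) mod 6`, then timestamp, then device id; this sketch is not part of the job's code):

```java
// Illustrative re-statement of the rowkey logic in MyHBaseSinkFunction.invoke():
// a salt prefix spreads consecutive timestamps of one device across up to 6 regions.
public class RowkeySketch {
    public static String rowkey(String deviceId, String timeMillis) {
        // Math.abs stays negative for Integer.MIN_VALUE; ignored here as in the original
        int bucket = Math.abs(deviceId.hashCode()) % 6;
        return bucket + "" + timeMillis + "." + deviceId;
    }

    public static void main(String[] args) {
        System.out.println(rowkey("dev-01", "1649832650049"));
    }
}
```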

Business 2. Rule-match triggering

// todo 5. Business 2: rule matching
        // turn the config stream into a broadcast stream
        // needs a broadcast state, keyed by sysid + "/" + deviceId (String -> rule-set json)
        MapStateDescriptor<String, String> rluesMapStateDescriptor = new MapStateDescriptor<>("rulesmap-state", String.class, String.class);
        BroadcastStream<String> rulesBroadcastStream = rulesDataStream.broadcast(rluesMapStateDescriptor);
        BroadcastConnectedStream<Tuple4<String, String, String, String>, String> rluesConnectStream = mqttStream.connect(rulesBroadcastStream);

        OutputTag<String> outTag = new OutputTag<String>("out-tag") {};
        SingleOutputStreamOperator<Tuple4<String, String, String, String>> rlueBroadProcessStream = rluesConnectStream.process(new RulesBroadProcess(rluesMapStateDescriptor, outTag));


        DataStream<String> sideOutput = rlueBroadProcessStream.getSideOutput(outTag);
        sideOutput.addSink(MyKafkaUtil.getKafkaProducer(new KafkaSerializationSchema<String>() {
            @Override
            public ProducerRecord<byte[], byte[]> serialize(String element, @Nullable Long timestamp) {
                
                return new ProducerRecord<byte[], byte[]>("sendTopic" ,
                        element.getBytes());
            }
        }));
1) Turning the dimension stream into a broadcast stream
// todo 5. Business 2: rule matching
// turn the config stream into a broadcast stream
// needs a broadcast state, keyed by sysid + "/" + deviceId (String -> rule-set json)
MapStateDescriptor<String, String> rluesMapStateDescriptor = new MapStateDescriptor<>("rulesmap-state", String.class, String.class);
BroadcastStream<String> rulesBroadcastStream = rulesDataStream.broadcast(rluesMapStateDescriptor);
BroadcastConnectedStream<Tuple4<String, String, String, String>, String> rluesConnectStream = mqttStream.connect(rulesBroadcastStream);
2) Connecting the streams and matching rules
 OutputTag<String> outTag = new OutputTag<String>("out-tag") {};
 SingleOutputStreamOperator<Tuple4<String, String, String, String>> rlueBroadProcessStream = rluesConnectStream.process(new RulesBroadProcess(rluesMapStateDescriptor, outTag));
(1) State handling
public class RulesBroadProcess extends BroadcastProcessFunction<Tuple4<String, String, String, String>, String, Tuple4<String, String, String, String>> {
    private MapStateDescriptor<String, String> mapStateDescriptor;
    private Connection connection;

    private Map<String, String> stateHis = new HashMap<>();
    OutputTag<String> outTag;

    public RulesBroadProcess(MapStateDescriptor<String, String> mapStateDescriptor, OutputTag<String> outTag) {
        this.mapStateDescriptor = mapStateDescriptor;
        this.outTag = outTag;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        connection = DriverManager.getConnection("jdbc:mysql://node03:3306/iot?characterEncoding=UTF-8", "root", "123456");
    }

    @Override
    public void processElement(Tuple4<String, String, String, String> value, ReadOnlyContext readOnlyContext, Collector<Tuple4<String, String, String, String>> collector) throws Exception {


        ReadOnlyBroadcastState<String, String> broadcastState = readOnlyContext.getBroadcastState(mapStateDescriptor);
        String key = value.f0 + "/" + value.f1;

        System.out.println("key" + key);
        System.out.println("...................." + broadcastState.get(key));
        if (broadcastState.contains(key)) {

            String rules = broadcastState.get(key);
            JSONObject valueJson = JSON.parseObject(rules);
            // iterate over and match every rule of this device
            for (Map.Entry rule : valueJson.entrySet()) {
                String value1 = (String) rule.getValue();
                RulesTable tableProcess = JSONObject.parseObject(value1, RulesTable.class);
                rulesProcess(tableProcess, value, key, readOnlyContext);
            }

        } else {
            System.out.println("no rules configured for " + key);
        }

    }


    @Override
    public void processBroadcastElement(String value, Context context, Collector<Tuple4<String, String, String, String>> collector) throws Exception {

        // update the broadcast state from the CDC record
        JSONObject jsonObject = JSON.parseObject(value);
        String op = jsonObject.getString("op");


        if (op.equals("DELETE")) {
            JSONObject before = jsonObject.getJSONObject("before");
            String sysid = before.getString("sys_id");
            String devsid = before.getString("device_id");
            String ruleid = before.getString("rule_id");
            String key = sysid + "/" + devsid;
            BroadcastState<String, String> broadcastState = context.getBroadcastState(mapStateDescriptor);

            // remove one rule from the rule-set json; if none remain, remove the whole entry
            String v = broadcastState.get(key);
            JSONObject valueJson = JSON.parseObject(v);
            valueJson.remove(ruleid);
            if (valueJson.size() == 0) {
                broadcastState.remove(key);
            } else {
                // write the shrunk rule set back, otherwise the deletion is lost
                broadcastState.put(key, valueJson + "");
            }
            System.out.println("DELETE====sysid=====" + key);

        } else {
            JSONObject after = jsonObject.getJSONObject("after");
            String sysid = after.getString("sys_id");
            String devsid = after.getString("device_id");
            String ruleid = after.getString("rule_id");
            String key = sysid + "/" + devsid;
            // write into the broadcast state
            BroadcastState<String, String> broadcastState = context.getBroadcastState(mapStateDescriptor);

            if (broadcastState.contains(key)) {
                // the device already has rules: parse the existing rule-set json and add to it
                String v = broadcastState.get(key);
                JSONObject valueJson = JSON.parseObject(v);
                valueJson.put(ruleid, after + "");
                broadcastState.put(key, valueJson + "");
                System.out.println("add PUT====sysid=====" + key);
            } else {
                // first rule for this device: create a rule-set json, key = rule id, value = rule
                JSONObject rulejson = new JSONObject();
                rulejson.put(ruleid, after + "");
                broadcastState.put(key, rulejson + "");
                System.out.println("  new PUT====sysid=====" + key);
            }


        }


    }
    
}
(2) Rule matching
 public void rulesProcess(RulesTable tableProcess, Tuple4<String, String, String, String> value, String key, ReadOnlyContext readOnlyContext) throws SQLException {
        String type = tableProcess.getType(); // type: 0 = alarm, 1 = event
        String state = tableProcess.getState();
        if (type.equals("1")) {
            // event: fires on every match, no clearing needed
            String startRule = tableProcess.getStart_rule();
            Map<String, Object> msgvalueMap = NumberUtil.valueMap(value.f2); // normalize the numbers in the payload
            System.out.println("msgvalueMap===" + msgvalueMap);

            boolean result = false;
            try {
                result = (Boolean) AviatorEvaluator.execute(startRule, msgvalueMap);
            } catch (Exception e) {
                System.out.println("=======#####======== rule evaluation failed ====");
            }

            if (result) {
                String rule_id = tableProcess.getRule_id();
                System.out.println("---------------event-------------" + rule_id + "-----------fired--------------");
            }
        } else {
            // alarm
            if (state.equals("0")) { // currently cleared, may trigger
                String startRule = tableProcess.getStart_rule();
                Map<String, Object> msgvalueMap = NumberUtil.valueMap(value.f2); // normalize the numbers in the payload
                System.out.println("msgvalueMap===" + msgvalueMap);

                boolean result = false;
                try {
                    result = (Boolean) AviatorEvaluator.execute(startRule, msgvalueMap);
                } catch (Exception e) {
                    System.out.println("=======#####======== rule evaluation failed ====");
                }

                if (result) {
                    String rule_id = tableProcess.getRule_id();
                    System.out.println("========alarm=====" + rule_id + "========fired=======");
                    stateHis.put(key, "on");
                    changeState(rule_id, 1);

                    // todo side output: severe alarms go to Kafka
                    if (type.equals("5")) {
                        readOnlyContext.output(outTag, tableProcess.toString() + value);
                    }

                }

            } else if (state.equals("1")) { // alarm is active and must be cleared first

                String endRules = tableProcess.getEnd_rule();
                Map<String, Object> msgvalueMap = NumberUtil.valueMap(value.f2); // normalize the numbers in the payload
                System.out.println("msgvalueMap===" + msgvalueMap);
                if (endRules.equals("auto")) {
                    // auto-clearing: the alarm clears as soon as the start rule no longer matches
                    String startRule = tableProcess.getStart_rule();
                    boolean result = false;
                    try {
                        result = (Boolean) AviatorEvaluator.execute(startRule, msgvalueMap);
                    } catch (Exception e) {
                        System.out.println("=======#####======== rule evaluation failed ====");
                    }

                    if (!result) {
                        String rule_id = tableProcess.getRule_id();
                        if ("on".equals(stateHis.get(key))) {
                            System.out.println("========alarm=====" + key + "========auto-cleared=======");
                            stateHis.put(key, "once");
                            changeState(rule_id, 0);

                        }
                    }
                } else { // explicit end rule
                    boolean result = false;
                    try {
                        result = (Boolean) AviatorEvaluator.execute(endRules, msgvalueMap);
                    } catch (Exception e) {
                        System.out.println("=======#####======== rule evaluation failed ====");
                    }
                    if (result) {
                        String rule_id = tableProcess.getRule_id();
                        if (type.equals("0") && "on".equals(stateHis.get(key))) {
                            System.out.println("========alarm=====" + key + "========cleared=======");
                            stateHis.put(key, "once");
                            changeState(rule_id, 0);
                        }
                    }

                }


            }
        }
    }
public void changeState(String ruleID, int i) throws SQLException {
        Statement stmt = connection.createStatement();
        String str = String.format(
                "update iotRules set state=  '%s' where  rule_id ='%s';"
                , i, ruleID);
        System.out.println(str);
        stmt.executeUpdate(str);

        stmt.close();
    }
3) Side output to Kafka
  DataStream<String> sideOutput = rlueBroadProcessStream.getSideOutput(outTag);
  sideOutput.addSink(MyKafkaUtil.getKafkaProducer(new KafkaSerializationSchema<String>() {
      @Override
      public ProducerRecord<byte[], byte[]> serialize(String element, @Nullable Long timestamp) {
          
          return new ProducerRecord<byte[], byte[]>("sendTopic" ,
                  element.getBytes());
      }
  }));
public class MyKafkaUtil {

    private static String brokers = "node01:9092,node02:9092,node03:9092";
    private static String default_topic = "TJC_DEFAULT_TOPIC";

    public static FlinkKafkaProducer<String> getKafkaProducer(String topic) {
        return new FlinkKafkaProducer<String>(brokers,
                topic,
                new SimpleStringSchema());
    }

    public static <T> FlinkKafkaProducer<T> getKafkaProducer(KafkaSerializationSchema<T> kafkaSerializationSchema) {

        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);

        return new FlinkKafkaProducer<T>(default_topic,
                kafkaSerializationSchema,
                properties,
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE);
    }

    public static FlinkKafkaConsumer<String> getKafkaConsumer(String topic, String groupId) {

        Properties properties = new Properties();

        properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);

        return new FlinkKafkaConsumer<String>(topic,
                new SimpleStringSchema(),
                properties);

    }

    // Kafka connector properties for a Flink SQL DDL
    public static String getKafkaDDL(String topic, String groupId) {
        return  " 'connector' = 'kafka', " +
                " 'topic' = '" + topic + "'," +
                " 'properties.bootstrap.servers' = '" + brokers + "', " +
                " 'properties.group.id' = '" + groupId + "', " +
                " 'format' = 'json', " +
                " 'scan.startup.mode' = 'latest-offset'  ";
    }

}

Business 3. Energy statistics

 // todo 6. Business 3: energy statistics
        MapStateDescriptor<String, EnergyTable> energyMapStateDescriptor = new MapStateDescriptor<>("enermap-state", String.class, EnergyTable.class);
        BroadcastStream<String> energyBroadcastStream = energDataStream.broadcast(energyMapStateDescriptor);

        SingleOutputStreamOperator<Tuple4<String, String, String, String>> energyProcess = mqttStream.connect(energyBroadcastStream).process(new EnergyBroadProcessFunc(energyMapStateDescriptor));

        SingleOutputStreamOperator<Tuple4<String, String, String, String>> energyApply = energyProcess.assignTimestampsAndWatermarks(new MyWaterMark())
                .keyBy(0, 1)
                .timeWindow(Time.seconds(600))
                .apply(new MyWindowFunc());

        // custom HBase sink with dynamic columns
        energyApply.addSink(new MyHBaseSinkFunction());
1) Connecting the broadcast stream to enrich with dimension info
// enrich the stream with dimension info dynamically, filtering out unconfigured devices
public class EnergyBroadProcessFunc extends BroadcastProcessFunction<Tuple4<String, String, String, String>, String, Tuple4<String, String, String, String>> {

    private MapStateDescriptor<String, EnergyTable> mapStateDescriptor;


    public EnergyBroadProcessFunc (MapStateDescriptor<String, EnergyTable> mapStateDescriptor) {
        this.mapStateDescriptor = mapStateDescriptor;
    }

    @Override
    public void processElement(Tuple4<String, String, String, String> value, ReadOnlyContext readOnlyContext, Collector<Tuple4<String, String, String, String>> collector) throws Exception {
        ReadOnlyBroadcastState<String, EnergyTable> broadcastState = readOnlyContext.getBroadcastState(mapStateDescriptor);

        String key =  value.f0 + "/"+ value.f1;
        if(broadcastState.contains(key)){
            EnergyTable energyTable = broadcastState.get(key);

            String dd = energyTable.getDeviceid() + "/" + energyTable.getEnergtype() + "/" + energyTable.getLocation();

            collector.collect(new Tuple4<>(value.f0, dd, value.f2, value.f3));

        }

    }

    @Override
    public void processBroadcastElement(String value, Context context, Collector<Tuple4<String, String, String, String>> collector) throws Exception {
        // Parse the CDC change message and update broadcast state accordingly
        JSONObject jsonObject = JSON.parseObject(value);

        String op = jsonObject.getString("op");
        if ("DELETE".equals(op)) {  // null-safe comparison

            JSONObject before = jsonObject.getJSONObject("before");

            String key =  before.getString("sysid") +"/"+ before.getString("deviceid");
            // Remove the deleted rule from broadcast state
            BroadcastState<String, EnergyTable> broadcastState = context.getBroadcastState(mapStateDescriptor);
            broadcastState.remove(key);
            System.out.println("DELETE====sysid=====" + key);

        } else {
            String data = jsonObject.getString("after");
            EnergyTable tableProcess = JSON.parseObject(data,EnergyTable.class);
            // Put the new/updated config into broadcast state
            BroadcastState<String, EnergyTable> broadcastState = context.getBroadcastState(mapStateDescriptor);
            String key = tableProcess.getSysid()+"/"+tableProcess.getDeviceid();
            broadcastState.put(key, tableProcess);
            System.out.println("PUT====sysid=====" + tableProcess.getSysid());

        }
    }
}
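The put/remove bookkeeping in `processBroadcastElement` can be illustrated outside Flink with a plain map standing in for the broadcast `MapState`. This is a sketch; the class and method names, and the minimal config record replacing `EnergyTable`, are my own:

```java
import java.util.HashMap;
import java.util.Map;

public class BroadcastStateSketch {
    // Hypothetical minimal config standing in for EnergyTable.
    public static class EnergyConfig {
        final String location;
        public EnergyConfig(String location) { this.location = location; }
    }

    private final Map<String, EnergyConfig> state = new HashMap<>();

    // op comes from the CDC message: "DELETE" removes, anything else upserts.
    public void apply(String op, String sysid, String deviceid, EnergyConfig config) {
        String key = sysid + "/" + deviceid;
        if ("DELETE".equals(op)) {
            state.remove(key);
        } else {
            state.put(key, config); // CREATE and UPDATE both overwrite
        }
    }

    public boolean contains(String sysid, String deviceid) {
        return state.containsKey(sysid + "/" + deviceid);
    }
}
```

Treating CREATE and UPDATE identically is what makes the upsert safe: the latest row from MySQL always wins.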
2) Custom watermark assigner
public class MyWaterMark implements AssignerWithPeriodicWatermarks<Tuple4<String,String,String,String>> {

    long currentMaxTimestamp = 0L;
    final long maxOutputOfOrderness = 2000L; // allowed out-of-orderness: 2 seconds

    @Override
    public long extractTimestamp(Tuple4<String, String, String,String> element, long l) {
        Long timeStamp = Long.valueOf(element.f3);
        currentMaxTimestamp=Math.max(timeStamp,currentMaxTimestamp);

//        System.out.println("currentMaxTimestamp============="+currentMaxTimestamp);
        return timeStamp;
    }

    @Nullable
    @Override
    public Watermark getCurrentWatermark() {
        // Invoked periodically by Flink (every auto-watermark interval, 200 ms by default)
        return new Watermark(currentMaxTimestamp - maxOutputOfOrderness);
    }
    }

}
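The assigner's logic is just max-tracking minus a fixed bound. Isolated as plain Java (a sketch, not Flink API), the arithmetic can be checked on its own:

```java
public class WatermarkSketch {
    private long currentMaxTimestamp = 0L;
    private static final long MAX_OUT_OF_ORDERNESS = 2000L; // 2 seconds, as above

    // Called per element: track the largest event timestamp seen so far.
    public long extractTimestamp(long eventTimestamp) {
        currentMaxTimestamp = Math.max(eventTimestamp, currentMaxTimestamp);
        return eventTimestamp;
    }

    // Called periodically: the watermark trails the max by the allowed lateness,
    // so elements up to 2 seconds out of order are still assigned to their window.
    public long currentWatermark() {
        return currentMaxTimestamp - MAX_OUT_OF_ORDERNESS;
    }
}
```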
3) Window function
/**
 * IN:  the input element type
 * OUT: the output element type
 * KEY: the grouping key. Flink exposes it here as a Tuple:
 *      if you key by one field the tuple holds one field,
 *      if you key by several fields it holds several.
 * W extends Window: the window type
 */
public class MyWindowFunc implements WindowFunction<Tuple4<String,String,String,String>, Tuple4<String,String,String,String>, Tuple, TimeWindow> {

    FastDateFormat dataFormat = FastDateFormat.getInstance("HH:mm:ss");

    @Override
    public void apply(Tuple tuple, TimeWindow timeWindow,
                      Iterable<Tuple4<String, String, String, String>> input,
                      Collector<Tuple4<String, String, String, String>> out) {

        System.out.println("Current system time: " + dataFormat.format(System.currentTimeMillis()));
        JSONObject outData = new JSONObject();

        LinkedList<JSONObject> computeList = new LinkedList<>();
        int count = 0;
        String time = "";
        for (Tuple4<String, String, String, String> ele : input) {
            JSONObject jobtemp = JSONObject.parseObject(ele.f2);
            computeList.add(jobtemp);
            count++;
            time = ele.f3;
        }

        String[] computeMetrics = {"1", "2", "3", "4"}; // measuring-point names
        try {
            for (String computeMetric : computeMetrics) {

                // Convert each value of the current metric to a double and summarize
                DoubleSummaryStatistics summarizing = computeList.stream()
                        .collect(Collectors.summarizingDouble(
                                x -> Double.parseDouble(x.getString(computeMetric))));
                outData.put(computeMetric + "-max", summarizing.getMax());
                outData.put(computeMetric + "-Average", summarizing.getAverage());
                outData.put(computeMetric + "-Sum", summarizing.getSum());
                outData.put(computeMetric + "-Count", summarizing.getCount());
                outData.put(computeMetric + "-Min", summarizing.getMin());

                JSONObject first = computeList.getFirst();
                outData.put(computeMetric + "-first", first.get(computeMetric));

                JSONObject last = computeList.getLast();
                outData.put(computeMetric + "-last", last.get(computeMetric));

                double diff = Double.parseDouble((String) last.get(computeMetric))
                        - Double.parseDouble((String) first.get(computeMetric));
                outData.put(computeMetric + "-diff", diff);
            }
        } catch (Throwable e) {
            System.out.println("======= error while computing metrics =====");
            e.printStackTrace();
        }

        String devTypLoc = tuple.getField(1).toString();
        String[] parts = devTypLoc.split("/");
        outData.put("deviceId", parts[0]);
        outData.put("energyType", parts[1]);
        outData.put("location", parts[2]);

        System.out.println("Current window: tenant " + tuple.getField(0)
                + " device " + tuple.getField(1) + " element count: " + count);
        out.collect(Tuple4.of(tuple.getField(0).toString(), parts[0], outData.toString(), time));
    }
}
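The seven per-metric indicators computed in the window function (max, min, average, sum and count, first, last, last-minus-first) can be checked in isolation on a plain list of values. This sketch uses assumed sample data and my own class name, and assumes a non-empty list:

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;
import java.util.stream.Collectors;

public class MetricStatsSketch {
    // Returns {max, min, average, sum, first, last, last - first}.
    public static double[] compute(List<Double> values) {
        DoubleSummaryStatistics s = values.stream()
                .collect(Collectors.summarizingDouble(Double::doubleValue));
        double first = values.get(0);
        double last = values.get(values.size() - 1);
        return new double[]{s.getMax(), s.getMin(), s.getAverage(), s.getSum(),
                            first, last, last - first};
    }
}
```

`DoubleSummaryStatistics` gives max, min, average, sum and count in one pass; only first, last, and their difference need the window's element order preserved, which is why the window function keeps the elements in a `LinkedList`.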