Apache Flink Study Notes (8): Streaming APIs in Flink (Source, Transformation, Sink)

The stream processing workflow in Flink


  1. Create the execution environment
  2. Flink Data Source
  3. Flink Data Transformation
  4. Flink Data Sink

Creating the Execution Environment

getExecutionEnvironment

Creates an execution environment that represents the context of the current program. If the program is run standalone, this method returns a local execution environment; if the program is submitted to a cluster from the command-line client, it returns that cluster's execution environment. In other words, getExecutionEnvironment decides which environment to return based on how the job is run, and it is the most commonly used way to create an execution environment.

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

Flink previously offered separate APIs for creating local and remote execution environments, but a job finished locally then had to be changed for remote submission, which was cumbersome, so the two were merged into the form above.

createLocalEnvironment

Returns a local execution environment; the default parallelism is specified in the call.

LocalStreamEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(1);

createRemoteEnvironment

Returns a cluster execution environment and submits the Jar to a remote server. The JobManager's host and port must be specified in the call, together with the Jar package(s) to run on the cluster.

StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("jobmaster-hostname", 6123,"jar的路径");

Flink Data Source

Reading data from a collection

1. fromCollection(Collection)

Builds a stream from a collection; all elements in the collection must be of the same type. Example:

env.fromCollection(Arrays.asList(1,2,3,4,5)).print();

2. fromElements(T …)

Builds a stream from the given elements; all elements must be of the same type. Example:

env.fromElements(1,2,3,4,5).print();

3. generateSequence(from, to)

Builds a stream from the given range of numbers. Example:

env.generateSequence(0,100);

4. fromCollection(Iterator, Class)

Builds a stream from an iterator. The first argument defines the iterator, and the second defines the type of the output elements (a Class or a TypeInformation). Example:

env.fromCollection(new CustomIterator(), BasicTypeInfo.INT_TYPE_INFO).print();

Here CustomIterator is a custom iterator; as an example it produces the numbers 1 to 100. Note that a custom iterator must implement not only the Iterator interface but also Serializable, otherwise a serialization exception is thrown:

public class CustomIterator implements Iterator<Integer>, Serializable {
    private Integer i = 0;

    @Override
    public boolean hasNext() {
        return i < 100;
    }

    @Override
    public Integer next() {
        i++;
        return i;
    }
}

fromCollection example code

package org.benchmark;

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSink;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import java.io.Serializable;
import java.util.Arrays;
import java.util.Iterator;
import java.util.Objects;

public class SourceFromCollection{
    public static void main(String[] args) throws Exception{
        // Create the execution environment
        StreamExecutionEnvironment streamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment();
        streamExecutionEnvironment.setParallelism(3);
        // Read data from a collection
        DataStreamSource<SensorReading> sensorReadingDataStreamSource = streamExecutionEnvironment.fromCollection(Arrays.asList(
                new SensorReading("sensor_1", 1547718199L, 35.8),
                new SensorReading("sensor_6", 1547718201L, 15.4),
                new SensorReading("sensor_7", 1547718202L, 6.7),
                new SensorReading("sensor_10", 1547718205L, 38.1)
        ));

        DataStream<Integer> integerDataStreamSource = streamExecutionEnvironment.fromElements(1, 2, 3, 4009, 3434);

        DataStreamSink<Integer> dataStreamSink = streamExecutionEnvironment.fromCollection(new CustomIterator(), BasicTypeInfo.INT_TYPE_INFO).print();

        // Print the results
        sensorReadingDataStreamSource.print("sensor");
        integerDataStreamSource.print("int");


        // Execute the job
        streamExecutionEnvironment.execute();
    }
}

class CustomIterator implements Iterator<Integer>, Serializable {
    private Integer i = 0;

    @Override
    public boolean hasNext() {
        return i < 100;
    }

    @Override
    public Integer next() {
        i++;
        return i;
    }
}

class SensorReading{
    private String id;
    private Long timeStamp;
    private Double temperature;

    public SensorReading() {
    }

    public SensorReading(String id, Long timeStamp, Double temperature) {
        this.id = id;
        this.timeStamp = timeStamp;
        this.temperature = temperature;
    }

    public String getId() {
        return id;
    }

    public Long getTimeStamp() {
        return timeStamp;
    }

    public Double getTemperature() {
        return temperature;
    }

    public void setId(String id) {
        this.id = id;
    }

    public void setTimeStamp(Long timeStamp) {
        this.timeStamp = timeStamp;
    }

    public void setTemperature(Double temperature) {
        this.temperature = temperature;
    }

    @Override
    public String toString() {
        return "SensorReading{" +
                "id='" + id + '\'' +
                ", timeStamp=" + timeStamp +
                ", temperature=" + temperature +
                '}';
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        SensorReading that = (SensorReading) o;
        return Objects.equals(id, that.id) && Objects.equals(timeStamp, that.timeStamp) && Objects.equals(temperature, that.temperature);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, timeStamp, temperature);
    }
}

Reading from a file

readTextFile(String filePath)

Reads a text file using TextInputFormat and returns its contents as strings. Example:

env.readTextFile(filePath).print();

readFile(FileInputFormat inputFormat, String filePath)

Reads a file using the specified input format.

readFile(inputFormat, filePath, watchType, interval, typeInformation)

Reads a file in the specified format, either once or periodically depending on the watch type. The parameters are:

inputFormat: the input format of the data stream.
filePath: the file path, either on the local file system or on HDFS.
watchType: the read mode. It has two options, FileProcessingMode.PROCESS_ONCE and FileProcessingMode.PROCESS_CONTINUOUSLY: the former reads the data at the given path once and then exits, while the latter scans and reads the path periodically. Note that if watchType is set to PROCESS_CONTINUOUSLY, then whenever a file is modified all of its contents (both the existing and the newly added parts) are reprocessed, which breaks Flink's exactly-once semantics.
interval: the scan interval for periodic reads.
typeInformation: the type of the elements in the input stream.

Example:

final String filePath = "D:\\log4j.properties";
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.readFile(new TextInputFormat(new Path(filePath)),
             filePath,
             FileProcessingMode.PROCESS_ONCE,
             1,
             BasicTypeInfo.STRING_TYPE_INFO).print();
env.execute();

Example code for reading from a file

Test data sensor.txt:

sensor_1",1547718199,35.8
sensor_6",1547718201,15.4
sensor_7",1547718202,6.7
sensor_10",1547718205,38.1
package org.benchmark;

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SourceFromFile {
    public static void main(String[] args) throws Exception{
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Set the parallelism
        env.setParallelism(1);

        // Read data from the file
        String inputPath = Thread.currentThread().getContextClassLoader().getResource("sensor.txt").getFile();

        DataStreamSource<String> stringDataSource = env.readTextFile(inputPath);

        stringDataSource.print();
        // Start the job
        env.execute();
    }
}

Implementing SourceFunction (custom Data Source)

Besides the built-in data sources, you can use the addSource method to add a custom data source. A source implemented this way, however, cannot run with a parallelism greater than 1.

package org.benchmark;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.ParallelSourceFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import java.util.Objects;
import java.util.Random;
import java.util.TreeMap;

public class SourceFromUdf {
    public static void main(String[] args) throws Exception{
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Implement the custom source
        DataStreamSource<SensorReading1> sensorReading1DataStreamSource= env.addSource(new SourceFunction<SensorReading1>() {
            private boolean running = true;
            @Override
            public void run(SourceContext<SensorReading1> ctx) throws Exception {
                // Define a random number generator
                Random random = new Random();
                // Set the initial temperature of 10 sensors
                TreeMap<String, Double> sensorTempMap = new TreeMap<>();
                for (int i = 0 ; i < 10 ; i ++){
                    sensorTempMap.put("sensor_" + (i + 1) , 60 + random.nextGaussian() * 20);
                }
                while (running){
                    for (String sensorID:sensorTempMap.keySet()){
                        // Random fluctuation around the current temperature
                        double newTmp = sensorTempMap.get(sensorID) + random.nextGaussian();
                        // Store the updated temperature
                        sensorTempMap.put(sensorID , newTmp);
                        // Emit the record
                        ctx.collect(new SensorReading1(sensorID , System.currentTimeMillis() , newTmp));
                    }
                    // Control the output rate
                    Thread.sleep(1000);
                }
            }
            @Override
            public void cancel() {
                this.running = false;
            }
        }); // a plain (non-parallel) SourceFunction can only run with parallelism 1
        sensorReading1DataStreamSource.print();
        // Start the job
        env.execute();
    }
}

class SensorReading1{
    private String id;      // sensor name
    private Long timeStamp;     // timestamp
    private Double temperature;     // temperature

    public SensorReading1() {
    }

    public SensorReading1(String id, Long timeStamp, Double temperature) {
        this.id = id;
        this.timeStamp = timeStamp;
        this.temperature = temperature;
    }

    public String getId() {
        return id;
    }

    public Long getTimeStamp() {
        return timeStamp;
    }

    public Double getTemperature() {
        return temperature;
    }

    public void setId(String id) {
        this.id = id;
    }

    public void setTimeStamp(Long timeStamp) {
        this.timeStamp = timeStamp;
    }

    public void setTemperature(Double temperature) {
        this.temperature = temperature;
    }

    @Override
    public String toString() {
        return "SensorReading{" +
                "id='" + id + '\'' +
                ", timeStamp=" + timeStamp +
                ", temperature=" + temperature +
                '}';
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        SensorReading1 that = (SensorReading1) o;
        return Objects.equals(id, that.id) && Objects.equals(timeStamp, that.timeStamp) && Objects.equals(temperature, that.temperature);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, timeStamp, temperature);
    }
}

ParallelSourceFunction 和 RichParallelSourceFunction

The data source implemented above via SourceFunction is non-parallel, i.e. it does not support a parallelism greater than 1.

If you want a parallel input stream, implement the ParallelSourceFunction or RichParallelSourceFunction interface instead; their relationship to SourceFunction is as follows:


ParallelSourceFunction extends SourceFunction directly and simply marks the source as parallel. RichParallelSourceFunction extends AbstractRichFunction and also implements the ParallelSourceFunction interface, so in addition to being parallel it provides lifecycle methods such as open() and close().
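
A minimal, illustrative sketch of a parallel source (the class and field names below are mine, not from the original post): every parallel subtask runs its own run() loop, and open() is available thanks to AbstractRichFunction.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;

public class MyParallelSource extends RichParallelSourceFunction<Long> {
    private volatile boolean running = true;
    private int subtaskIndex;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Lifecycle method inherited from AbstractRichFunction
        subtaskIndex = getRuntimeContext().getIndexOfThisSubtask();
    }

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        long i = 0;
        while (running) {
            // Prefix each value with the subtask index so the parallel instances are distinguishable
            ctx.collect(subtaskIndex * 1_000_000L + i++);
            Thread.sleep(500);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}

It can then be added with env.addSource(new MyParallelSource()).setParallelism(4), which is exactly what a plain SourceFunction does not allow.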

Reading from Kafka (the Data Source provided by the flink-kafka connector)

Set up Kafka on a machine and start a console producer (the flink-connector-kafka dependency shown in the Sink section below is also required):

kafka-console-producer.sh --broker-list h71:9092 --topic first

Java code:
package org.benchmark;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import java.util.Properties;

public class SourceFromKafka {
    public static void main(String[] args)throws Exception {
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "10.10.108.71:9092");
        properties.setProperty("group.id", "consumer-group");
        properties.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("auto.offset.reset", "latest");

        // Set the parallelism
        env.setParallelism(1);

        // Read data from Kafka
        DataStreamSource<String> stringDataStreamSource = env.addSource(new FlinkKafkaConsumer<String>("first", new SimpleStringSchema(), properties));
        stringDataStreamSource.print();

        // Start the job
        env.execute();
    }
}

Flink Data Transformation

Flink's Transformation operations turn one or more DataStreams into new DataStreams as needed. They fall into three broad categories:

DataStream Transformations: transformations on data streams;
Physical partitioning: low-level APIs provided by Flink that let the user define how data is partitioned across parallel instances (a short sketch follows this list);
Task chaining and resource groups: fine-grained control over task chains and resource groups.
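
As a quick, illustrative sketch of the last two categories (the stream contents and names here are made up, not from the original post), these controls are exposed directly on the DataStream API:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PartitioningSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> stream = env.fromElements("a", "b", "c", "d");

        // Physical partitioning: control how records are redistributed between parallel subtasks
        // (other options include shuffle(), rescale(), global() and broadcast())
        stream.rebalance()                        // round-robin across all downstream subtasks
              .map((String s) -> s.toUpperCase())
              .setParallelism(2)
              // Task chaining and resource groups: hints attached to an operator
              .disableChaining()                  // keep this map out of any operator chain
              .slotSharingGroup("map-group")      // run it in its own slot sharing group
              .print();

        env.execute();
    }
}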

DataStream Transformations

Map [DataStream → DataStream]

Applies a transformation to every element of a DataStream.

FlatMap [DataStream → DataStream]

FlatMap is similar to Map, but one input element can be mapped to one or more output elements.

Filter [DataStream → DataStream]

Keeps only the elements that satisfy a predicate.

Example code for Map, FlatMap, and Filter

File sensor.txt:

sensor_1,1547718199,35.8
sensor_6,1547718201,15.4
sensor_17,1547718202,6.7
sensor_10,1547718205,38.1
sensor_19,1547718199,35.8
sensor_14,1547718201,15.4
sensor_3,1547718202,6.7
sensor_14,1547718205,38.1

Java code:

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class TransFormMap {
    public static void main(String[] args) throws Exception{
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Set the parallelism
        env.setParallelism(1);

        // Read data from the file
        String inputPath = Thread.currentThread().getContextClassLoader().getResource("sensor.txt").getFile();
        DataStreamSource<String> stringDataSource = env.readTextFile(inputPath);

        // Apply a transformation to every element of the DataStream
        SingleOutputStreamOperator<Integer> map = stringDataSource.map((String value) -> {
            return value.length();
        });

        // FlatMap is similar to Map, but one input element can be mapped to one or more output elements
        SingleOutputStreamOperator<String> stringSingleOutputStreamOperator = stringDataSource.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public void flatMap(String value, Collector<String> out) throws Exception {
                String[] split = value.split(",");
                for (String s : split){
                    out.collect(s);
                }
            }
        });

        // Keep only the elements that satisfy the predicate
        SingleOutputStreamOperator<String> sensor_1 = stringDataSource.filter((String value) -> {
            if (value.startsWith("sensor_1")) {
                return true;
            } else {
                return false;
            }
        });

        map.print("map");
        stringSingleOutputStreamOperator.print("flatmap");
        sensor_1.print("filter");


        // Start the job
        env.execute();
    }
}

KeyBy [DataStream → KeyedStream]


keyBy assigns records with the same key to the same partition; two different keys may still land in the same partition, because the assignment is based on the hash of the key. keyBy is not a real operator task: it only regroups the data, and no parallelism can be set on it.
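
A minimal sketch, assuming a DataStream<SensorReading2> named map as in the aggregation example below:

// Hash-partition the stream by sensor id; per-key aggregations can then be applied
KeyedStream<SensorReading2, String> keyedStream = map.keyBy((SensorReading2 value) -> value.getId());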

Aggregations [KeyedStream → DataStream]

Rolling aggregation



// Rolling minimum of the specified field (by index or field name); only that field is
// updated, the other fields keep their first-seen values
min(0);
min("key");

// Rolling maximum of the specified field; only that field is updated, the other fields
// keep their first-seen values
max(0);
max("key");

// Rolling minimum of the specified field, returning the whole element that holds it
minBy(0);
minBy("key");

// Rolling maximum of the specified field, returning the whole element that holds it
maxBy(0);
maxBy("key");

The difference: min takes the minimum of the specified field and keeps that value in the corresponding position, while every other field keeps whatever value was seen first, so the element as a whole is not guaranteed to be consistent; max behaves the same way. In short, only the specified field is rolled up, and all other fields always keep their first value.

minBy, by contrast, returns the element whose specified field is the minimum, replacing the whole element when a new minimum arrives; maxBy behaves the same way. In short, once the condition is met, all fields are replaced with those of the winning element.
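
A small sketch of the difference, using the keyedStream from the KeyBy sketch above:

// max: only the temperature field is rolled up; id and timeStamp keep their first-seen values
keyedStream.max("temperature").print("max");

// maxBy: the whole element that currently holds the highest temperature is emitted
keyedStream.maxBy("temperature").print("maxBy");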

Rolling aggregation example code

sensor.txt:

sensor_1,1547718144,35.8
sensor_6,1547718201,15.4
sensor_1,1547718202,6.7
sensor_1,1547718205,38.1
sensor_1,1547718198,35.8
sensor_6,15477182252,15.4
sensor_6,1547718288,6.7
sensor_6,154771888,38.1

Java code:

import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
public class TransFormAgg {
    public static void main(String[] args) throws Exception{
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Read data from the file
        String inputPath = Thread.currentThread().getContextClassLoader().getResource("sensor.txt").getFile();
        DataStreamSource<String> stringDataStreamSource = env.readTextFile(inputPath);

        // map operation: parse each line into a SensorReading2
        SingleOutputStreamOperator<SensorReading2> map = stringDataStreamSource.map((String value) -> {
            String[] fields = value.split(",");
            return new SensorReading2(fields[0], new Long(fields[1]), new Double(fields[2]));
        }).setParallelism(1);

        // Group by key (sensor id)
        KeyedStream<SensorReading2, String> sensorReading2StringKeyedStream = map.keyBy((SensorReading2 value) -> {
            return value.getId();
        });

        // Rolling aggregation; the argument is a field name of the POJO
        SingleOutputStreamOperator<SensorReading2> temperature = sensorReading2StringKeyedStream.max("temperature");

        // Print the result
        temperature.print();
        // Start the job
        env.execute();
    }
}

Reduce [KeyedStream → DataStream]

Performs an incremental (rolling) reduction over the data of each key.

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TransFormReduce {
    public static void main(String[] args) throws Exception{
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        // Read data from the file
        String inputPath = Thread.currentThread().getContextClassLoader().getResource("sensor.txt").getFile();
        DataStreamSource<String> stringDataStreamSource = env.readTextFile(inputPath);

        // map operation: parse each line into a SensorReading2
        SingleOutputStreamOperator<SensorReading2> map = stringDataStreamSource.map((String value) -> {
            String[] fields = value.split(",");
            return new SensorReading2(fields[0], new Long(fields[1]), new Double(fields[2]));
        });

        // Group by key (sensor id)
        KeyedStream<SensorReading2, String> sensorReading2StringKeyedStream = map.keyBy((SensorReading2 value) -> {
            return value.getId();
        });

        SingleOutputStreamOperator<SensorReading2> reduce = sensorReading2StringKeyedStream.reduce(new ReduceFunction<SensorReading2>() {
            @Override
            public SensorReading2 reduce(SensorReading2 value1, SensorReading2 value2) throws Exception {
                // On the first reduce call for a key, value1 is the first element and value2 is the second
                // From then on, value1 is the previous result and value2 is the next incoming element, and so on
                System.out.println(value1 + ":" + value2);
                return new SensorReading2(value2.getId(), value2.getTimeStamp(), Math.max(value1.getTemperature(), value2.getTemperature()));
            }
        });

        // Print the result
        // reduce.print();

        // Start the job
        env.execute();
    }
}

Result: for each reduce call the console prints value1:value2 (screenshot omitted).

Split and Select

Split [DataStream → SplitStream]: splits one DataStream into several DataStreams according to the given rules. Note that this is a logical split: Split only tags the data with different labels, and what is returned is still a single SplitStream;

Select [SplitStream → DataStream]: to get the actual DataStreams of the different types out of the logically split SplitStream, the Select operator is required.

Both operators had long been deprecated and were removed completely in Flink 1.12.

Split: DataStream → SplitStream
Select: SplitStream → DataStream, i.e. obtaining one or more DataStreams from a SplitStream.

Side-Output

Side outputs are the replacement for Split/Select: inside a ProcessFunction, records can be emitted to side streams identified by an OutputTag and retrieved afterwards with getSideOutput, as the example below shows.

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class TransFormSideOutPut {
    public static void main(String[] args) throws Exception {

        // Execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Input data source
        DataStreamSource<Tuple3<Integer, String, String>> source = env.fromElements(
                new Tuple3<>(1, "1", "AAA"),
                new Tuple3<>(2, "2", "AAA"),
                new Tuple3<>(3, "3", "AAA"),
                new Tuple3<>(1, "1", "BBB"),
                new Tuple3<>(2, "2", "BBB"),
                new Tuple3<>(3, "3", "BBB")
        );

        // 1. Define the OutputTags
        OutputTag<Tuple3<Integer, String, String>> ATag = new OutputTag<Tuple3<Integer, String, String>>("A-tag") {};
        OutputTag<Tuple3<Integer, String, String>> BTag = new OutputTag<Tuple3<Integer, String, String>>("B-tag") {};

        // For non-tuple types, passing the TypeInformation explicitly is preferred;
        // with the single-argument constructor an anonymous subclass is needed so the type can be inferred
        OutputTag<String> A_TAG = new OutputTag<String>("A",TypeInformation.of(String.class));
        OutputTag<String> B_TAG = new OutputTag<String>("B",TypeInformation.of(String.class));

        // 2. Handle the main stream and the side stream inside a ProcessFunction
        SingleOutputStreamOperator<Tuple3<Integer, String, String>> processedStream =
                source.process(new ProcessFunction<Tuple3<Integer, String, String>, Tuple3<Integer, String, String>>() {
                    @Override
                    public void processElement(Tuple3<Integer, String, String> value, Context ctx, Collector<Tuple3<Integer, String, String>> out) throws Exception {

                        // Side stream: only emit specific records
                        if (value.f2.equals("AAA")) {
                            ctx.output(ATag, value);
                        } else {
                            // Main stream
                            out.collect(value);
                        }

                    }
                }).setParallelism(2);

        // Get the main stream
        processedStream.print("main output B:");

        // Get the side output stream
        processedStream.getSideOutput(ATag).print("side output A:");

        env.execute();
    }
}

Connect and CoMap

connect can only combine two streams, and the two streams may have different data types.

Connect

DataStream, DataStream → ConnectedStreams: connects two data streams while keeping their types. After Connect, the two streams are merely placed inside one common stream; internally each keeps its own data and form unchanged, and the two remain independent of each other.

CoMap and CoFlatMap

ConnectedStreams → DataStream: operates on a ConnectedStreams and works just like map and flatMap, applying a separate map or flatMap function to each of the two streams.


ConnectedStreams<SensorReading2, String> connect = map.connect(map1);

Full example:
package org.benchmark;

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.streaming.api.datastream.*;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.streaming.api.functions.co.CoMapFunction;
import org.apache.flink.util.Collector;

public class TransFormConnect {
    public static void main(String[] args) throws Exception{
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);


        // Read data from the file
        String inputPath = Thread.currentThread().getContextClassLoader().getResource("sensor.txt").getFile();
        DataStreamSource<String> stringDataStreamSource = env.readTextFile(inputPath);

        // map operation: parse each line into a SensorReading2
        DataStream<SensorReading2> map = stringDataStreamSource.map((String value) -> {
            String[] fields = value.split(",");
            return new SensorReading2(fields[0], new Long(fields[1]), new Double(fields[2]));
        }).setParallelism(2);

        // map1 operation: pass each line through as a String
        SingleOutputStreamOperator<String> map1 = stringDataStreamSource.map((String value) -> {
            return value.toString();
        }).setParallelism(3);

        ConnectedStreams<SensorReading2, String> connect = map.connect(map1);

        SingleOutputStreamOperator<Object> map2 = connect.map(new CoMapFunction<SensorReading2, String, Object>() {
            @Override
            public Object map1(SensorReading2 value) throws Exception {
                return value.getTemperature();
            }

            @Override
            public Object map2(String value) throws Exception {
                return value;
            }
        });

        map2.print();
        env.execute();
    }
}

Union

DataStream → DataStream: unions two or more DataStreams of the same type into a new DataStream that contains all of their elements. Like connect, union does not act as a task of its own; it is just an intermediate step.


package org.benchmark;

import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.streaming.api.datastream.*;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.streaming.api.functions.co.CoMapFunction;
import org.apache.flink.util.Collector;

public class TransFormUnion {
    public static void main(String[] args) throws Exception{
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);


        // Read data from the file
        String inputPath = Thread.currentThread().getContextClassLoader().getResource("sensor.txt").getFile();
        DataStreamSource<String> stringDataStreamSource = env.readTextFile(inputPath);

        // map operation: parse each line into a SensorReading2
        DataStream<SensorReading2> map = stringDataStreamSource.map((String value) -> {
            String[] fields = value.split(",");
            return new SensorReading2(fields[0], new Long(fields[1]), new Double(fields[2]));
        }).setParallelism(2);

        // map1 operation: parse the same lines into another SensorReading2 stream
        SingleOutputStreamOperator<SensorReading2> map1 = stringDataStreamSource.map((String value) -> {
            String[] fields = value.split(",");
            return new SensorReading2(fields[0], new Long(fields[1]), new Double(fields[2]));
        }).setParallelism(3);

        // union can merge multiple streams of the same type
        DataStream<SensorReading2> union = map.union(map1);

        union.print();

        env.execute();
    }
}

Flink Sink

Writing from Flink to Kafka

<dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-connector-kafka_2.12</artifactId>
      <version>1.12.2</version>
</dependency>

Java code:

package org.benchmark;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

import java.util.Properties;

public class SinkKafka {

    public static void main(String[] args) throws Exception{
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "10.10.108.71:9092");
        properties.setProperty("group.id", "consumer-group");
        properties.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("auto.offset.reset", "latest");


        // Read data from Kafka
        DataStreamSource<String> stringDataStreamSource = env.addSource(new FlinkKafkaConsumer<String>("sensor", new SimpleStringSchema(), properties));

        // map operation: parse each line and write it back out as a String
        DataStream<String> map = stringDataStreamSource.map((String value) -> {
            String[] fields = value.split(",");
            return new SensorReading2(fields[0], new Long(fields[1]), new Double(fields[2])).toString();
        }).setParallelism(2);

        map.addSink(new FlinkKafkaProducer<String>("10.10.108.71:9092" , "sinktest" , new SimpleStringSchema()));

        env.execute();
    }
}

Start the Kafka console producer and consumer:

kafka-console-producer.sh --broker-list 10.10.108.71:9092 --topic sensor
kafka-console-consumer.sh --bootstrap-server 10.10.108.71:9092 --topic sinktest


Writing from Flink to Redis
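
The RedisSink and RedisMapper used below come from the Bahir Flink Redis connector; a dependency roughly like the following is assumed (the exact artifact and version depend on your Flink/Scala setup):

<dependency>
      <groupId>org.apache.bahir</groupId>
      <artifactId>flink-connector-redis_2.11</artifactId>
      <version>1.0</version>
</dependency>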

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSink;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.redis.RedisSink;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisClusterConfig;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommand;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommandDescription;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisMapper;

import java.util.ArrayList;
import java.util.Properties;

public class SinkRedis {
    public static void main(String[] args) throws Exception{
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "10.10.108.71:9092");
        properties.setProperty("group.id", "consumer-group");
        properties.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("auto.offset.reset", "latest");


        // Read data from Kafka
        DataStreamSource<String> stringDataStreamSource = env.addSource(new FlinkKafkaConsumer<String>("sensor", new SimpleStringSchema(), properties));


        // map operation: parse each line into a SensorReading2
        DataStream<SensorReading2> map = stringDataStreamSource.map((String value) -> {
            String[] fields = value.split(",");
            return new SensorReading2(fields[0], new Long(fields[1]), new Double(fields[2]));
        });


        FlinkJedisPoolConfig config = new FlinkJedisPoolConfig.Builder()
                .setHost("10.10.108.71")
                .setPort(6379)
                .setPassword("123456")
                .build();


        DataStreamSink<SensorReading2> sink = map.addSink(new RedisSink<>(config, new RedisMapper<SensorReading2>() {
            // Define the command used to write to Redis: store as a hash, i.e. hset sensor_temp id temperature
            @Override
            public RedisCommandDescription getCommandDescription() {
                return new RedisCommandDescription(RedisCommand.HSET,"sensor_temp");
            }

            @Override
            public String getKeyFromData(SensorReading2 data) {
                return data.getId();
            }

            @Override
            public String getValueFromData(SensorReading2 data) {
                return data.getTemperature().toString();
            }
        }));

        env.execute();
    }
}

Using JDBC from Flink
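
This example does not use the flink-connector-jdbc module; it writes through a plain RichSinkFunction, so only a JDBC driver is needed on the classpath. For MySQL, a dependency roughly like the following is assumed (the version is illustrative):

<dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>8.0.28</version>
</dependency>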

package org.benchmark;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSink;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SinkJdbc {
    public static void main(String[] args) throws  Exception{
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);


        // Read data from a file
        String inputPath = Thread.currentThread().getContextClassLoader().getResource("sensor.txt").getFile();
        DataStreamSource<String> stringDataStreamSource = env.readTextFile(inputPath);


        // map operation: parse each line into a SensorReading2
        DataStream<SensorReading2> map = stringDataStreamSource.map((String value) -> {
            String[] fields = value.split(",");
            return new SensorReading2(fields[0], new Long(fields[1]), new Double(fields[2]));
        });


        DataStreamSink<SensorReading2> sink = map.addSink(new MyJdbcSink());

        env.execute();
    }

    // JDBC needs setup and teardown: invoke() is called for every record, so the connection must not be created there each time; RichSinkFunction provides open() and close() for that
    public static class MyJdbcSink extends RichSinkFunction<SensorReading2>{
        // Declare the connection and the prepared statements
        Connection connection =null;
        PreparedStatement insertStmt = null;
        PreparedStatement updateStmt = null;

        @Override
        public void open(Configuration parameters) throws Exception {
            connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/test" , "root" , "123456");
            insertStmt = connection.prepareStatement("insert into sensor_temp(id,temp) values(?,?)");
            updateStmt = connection.prepareStatement("update sensor_temp set  temp = ? where id = ?");
        }

        // For every incoming record, run the SQL through the prepared statements
        @Override
        public void invoke(SensorReading2 value, Context context) throws Exception {
            updateStmt.setDouble(1,value.getTemperature());
            updateStmt.setString(2,value.getId());
            updateStmt.execute();
            if (updateStmt.getUpdateCount() == 0){
                insertStmt.setString(1,value.getId());
                insertStmt.setDouble(2,value.getTemperature());
                insertStmt.execute();
            }
        }

        @Override
        public void close() throws Exception {
            insertStmt.close();
            updateStmt.close();
            connection.close();
        }
    }
}