Flink Study 03 - Stream Processing API
Study materials
尚硅谷 2021 Java-version Flink course (instructor Wu, Tsinghua master's graduate, former IBM-CDL lead)
My Flink practice code (GitHub)
Flink Stream Processing API (DataStream API)
1.Environment
1.1 getExecutionEnvironment
Creates an execution environment representing the context in which the current program runs. If the program is invoked standalone, this method returns a local execution environment; if the program is invoked from a command-line client to be submitted to a cluster, it returns the execution environment of that cluster. In other words, getExecutionEnvironment decides which environment to return based on how the program is run, which makes it the most common way to create an execution environment.
// Batch execution environment
ExecutionEnvironment batchEnv = ExecutionEnvironment.getExecutionEnvironment();
// Stream execution environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Set the parallelism
env.setParallelism(1);
1.2 createLocalEnvironment
Returns a local execution environment; the default parallelism can be specified when calling it.
// Local execution environment with parallelism 1
LocalStreamEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(1);
1.3 createRemoteEnvironment
Returns a cluster execution environment and submits the Jar to a remote server. The JobManager's IP and port number must be specified when calling it, along with the Jar package to run on the cluster.
// Remote execution environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("jobmanager-hostname", 6123, "YOURPATH//wordcount.jar");
In practice, simply using getExecutionEnvironment from 1.1 is enough.
2.Source
2.1 Reading data from collections and elements
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.Arrays;

// Uses the SensorReading data class (sensor id, timestamp, temperature); see section 4.3
public class SourceTest1_Collection {
    public static void main(String[] args) throws Exception {
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Set parallelism to 1 so the output order matches the input
        env.setParallelism(1);
        // 1. Source: read data from a collection
        DataStream<SensorReading> dataStream = env.fromCollection(
                Arrays.asList(
                        new SensorReading("sensor_1", 1547718199L, 35.8),
                        new SensorReading("sensor_6", 1547718201L, 15.4),
                        new SensorReading("sensor_7", 1547718202L, 6.7),
                        new SensorReading("sensor_10", 1547718205L, 38.1)
                )
        );
        DataStreamSource<Integer> integerDataStream = env.fromElements(1, 2, 3, 66, 888);
        // 2. Print the output
        dataStream.print("data");
        integerDataStream.print("int");
        // 3. Execute the job
        env.execute();
    }
}
2.2 Reading data from a file
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SourceTest2_File {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Read data from a file
        DataStream<String> dataStream = env.readTextFile("/Users/seafyliang/DEV/Code_projects/Java_projects/study_projects/flink_study/src/main/resources/sensor.txt");
        // Print the output
        dataStream.print();
        env.execute();
    }
}
2.3 Consuming data from Kafka
- Add the flink-connector-kafka dependency
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.10_2.12</artifactId>
    <version>1.10.1</version>
</dependency>
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;

import java.util.Properties;

public class SourceTest3_Kafka {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        // Read data from the Kafka topic "sensor"
        DataStream<String> dataStream = env.addSource(new FlinkKafkaConsumer010<String>("sensor", new SimpleStringSchema(), props));
        // Print the output
        dataStream.print();
        env.execute();
    }
}
2.4 Custom Source
Besides the sources above, we can also define our own. All that is needed is to pass in a SourceFunction. It is used as follows:
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

import java.util.HashMap;
import java.util.Random;

public class SourceTest4_UDF {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Custom data source
        DataStream<SensorReading> dataStream = env.addSource(new MySensorSource());
        // Print the output
        dataStream.print();
        env.execute();
    }

    // Custom SourceFunction implementation
    public static class MySensorSource implements SourceFunction<SensorReading> {
        // Flag that controls whether data keeps being generated
        private boolean running = true;

        @Override
        public void run(SourceContext<SensorReading> ctx) throws Exception {
            // Random number generator
            Random random = new Random();
            // Initialize the temperatures of 10 sensors
            HashMap<String, Double> sensorTempMap = new HashMap<>();
            for (int i = 0; i < 10; i++) {
                // Gaussian random number, i.e. normally distributed
                sensorTempMap.put("sensor_" + (i + 1), 60 + random.nextGaussian() * 20);
            }
            while (running) {
                for (String sensorId : sensorTempMap.keySet()) {
                    // Random fluctuation around the current temperature
                    Double newTemp = sensorTempMap.get(sensorId) + random.nextGaussian();
                    sensorTempMap.put(sensorId, newTemp);
                    ctx.collect(new SensorReading(sensorId, System.currentTimeMillis(), newTemp));
                }
                // Control the emission rate
                Thread.sleep(1000L);
            }
        }

        @Override
        public void cancel() {
            running = false;
        }
    }
}
3.Transform
3.1 Basic transformation operators:
map, flatMap, filter
import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class TransformTest1_Base {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        // Read data from a file
        DataStream<String> inputStream = env.readTextFile("/Users/seafyliang/DEV/Code_projects/Java_projects/study_projects/flink_study/src/main/resources/sensor.txt");
        // 1. map: convert each String to its length
        DataStream<Integer> mapStream = inputStream.map(new MapFunction<String, Integer>() {
            @Override
            public Integer map(String s) throws Exception {
                return s.length();
            }
        });
        // 2. flatMap: split each line on ','
        DataStream<String> flatMapStream = inputStream.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public void flatMap(String s, Collector<String> collector) throws Exception {
                String[] fields = s.split(",");
                for (String field : fields) {
                    collector.collect(field);
                }
            }
        });
        // 3. filter: keep only records whose id starts with "sensor_1"
        DataStream<String> filterStream = inputStream.filter(new FilterFunction<String>() {
            @Override
            public boolean filter(String s) throws Exception {
                return s.startsWith("sensor_1");
            }
        });
        // Print the outputs
        mapStream.print("map");
        flatMapStream.print("flatMap");
        filterStream.print("filter");
        env.execute();
    }
}
![image-20210108094344584](https://i-blog.csdnimg.cn/blog_migrate/7077ff73a547fce85326d3e1dd61dd34.png)
3.2 Aggregation operators
3.2.1 Rolling aggregation
import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TransformTest2_RollingAggregation {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        // Read data from a file
        DataStream<String> inputStream = env.readTextFile("/Users/seafyliang/DEV/Code_projects/Java_projects/study_projects/flink_study/src/main/resources/sensor.txt");
        // Convert to SensorReading with an anonymous MapFunction:
        // DataStream<SensorReading> dataStream = inputStream.map(new MapFunction<String, SensorReading>() {
        //     @Override
        //     public SensorReading map(String s) throws Exception {
        //         String[] fields = s.split(",");
        //         return new SensorReading(fields[0], new Long(fields[1]), new Double(fields[2]));
        //     }
        // });
        // Convert to SensorReading with a Java 8 lambda
        DataStream<SensorReading> dataStream = inputStream.map(line -> {
            String[] fields = line.split(",");
            return new SensorReading(fields[0], new Long(fields[1]), new Double(fields[2]));
        });
        // Key the stream by sensor id
        KeyedStream<SensorReading, Tuple> keyedStream = dataStream.keyBy("id");
        // // With a key selector, the key type is the field's type rather than Tuple:
        // KeyedStream<SensorReading, String> keyedStream1 = dataStream.keyBy(data -> data.getId());
        // // Java 8 method reference:
        // KeyedStream<SensorReading, String> keyedStream2 = dataStream.keyBy(SensorReading::getId);
        // Rolling aggregation: keep the record with the current maximum temperature
        DataStream<SensorReading> resultStream = keyedStream.maxBy("temperature");
        resultStream.print();
        env.execute();
    }
}
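Note the difference between max and maxBy here: max only updates the aggregated field itself, so the other fields keep the values of the first record seen for that key, while maxBy emits the complete record that contains the maximum. A minimal comparison on the keyedStream above:

// max: only the temperature field tracks the running maximum;
// the timestamp does not come from the record holding that maximum
DataStream<SensorReading> maxTemp = keyedStream.max("temperature");
// maxBy: emits the whole record whose temperature is the maximum so far
DataStream<SensorReading> maxByTemp = keyedStream.maxBy("temperature");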
3.2.2 Reduce (generalized aggregation)
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TransformTest3_Reduce {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        // Read data from a file
        DataStream<String> inputStream = env.readTextFile("/Users/seafyliang/DEV/Code_projects/Java_projects/study_projects/flink_study/src/main/resources/sensor.txt");
        // Convert to SensorReading with a Java 8 lambda
        DataStream<SensorReading> dataStream = inputStream.map(line -> {
            String[] fields = line.split(",");
            return new SensorReading(fields[0], new Long(fields[1]), new Double(fields[2]));
        });
        // Key the stream by sensor id
        KeyedStream<SensorReading, Tuple> keyedStream = dataStream.keyBy("id");
        // reduce: keep the maximum temperature so far together with the latest timestamp,
        // first as an anonymous ReduceFunction...
        keyedStream.reduce(new ReduceFunction<SensorReading>() {
            @Override
            public SensorReading reduce(SensorReading value1, SensorReading value2) throws Exception {
                return new SensorReading(value1.getId(), value2.getTimestamp(), Math.max(value1.getTemperature(), value2.getTemperature()));
            }
        });
        // ...and equivalently as a lambda
        SingleOutputStreamOperator<SensorReading> resultStream = keyedStream.reduce((curState, newData) -> {
            return new SensorReading(curState.getId(), newData.getTimestamp(), Math.max(curState.getTemperature(), newData.getTemperature()));
        });
        resultStream.print();
        env.execute();
    }
}
3.3 Split and Select
Split
DataStream → SplitStream: splits one DataStream into two or more DataStreams according to some criteria.
Select
SplitStream → DataStream: retrieves one or more DataStreams from a SplitStream.
import org.apache.flink.streaming.api.collector.selector.OutputSelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SplitStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.Collections;

public class TransformTest4_MultipleStreams {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        // Read data from a file
        DataStream<String> inputStream = env.readTextFile("/Users/seafyliang/DEV/Code_projects/Java_projects/study_projects/flink_study/src/main/resources/sensor.txt");
        // Convert to SensorReading with a Java 8 lambda
        DataStream<SensorReading> dataStream = inputStream.map(line -> {
            String[] fields = line.split(",");
            return new SensorReading(fields[0], new Long(fields[1]), new Double(fields[2]));
        });
        // 1. Split the stream in two, using 30 degrees as the boundary.
        // split is deprecated; use side outputs instead (see the sketch after this example).
        SplitStream<SensorReading> splitStream = dataStream.split(new OutputSelector<SensorReading>() {
            @Override
            public Iterable<String> select(SensorReading sensorReading) {
                return (sensorReading.getTemperature() > 30) ? Collections.singletonList("high") : Collections.singletonList("low");
            }
        });
        DataStream<SensorReading> highTempStream = splitStream.select("high");
        DataStream<SensorReading> lowTempStream = splitStream.select("low");
        DataStream<SensorReading> allTempStream = splitStream.select("high", "low");
        highTempStream.print("high");
        lowTempStream.print("low");
        allTempStream.print("all");
        env.execute();
    }
}
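Since split/select is deprecated, here is a minimal sketch of the same high/low split written with side outputs, reusing the dataStream from the example above (the tag name "low" is arbitrary):

import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

// The anonymous subclass {} is required so the generic type is captured
final OutputTag<SensorReading> lowTag = new OutputTag<SensorReading>("low") {};

// The main output carries high temperatures; low temperatures go to the side output
SingleOutputStreamOperator<SensorReading> highStream = dataStream
        .process(new ProcessFunction<SensorReading, SensorReading>() {
            @Override
            public void processElement(SensorReading value, Context ctx, Collector<SensorReading> out) throws Exception {
                if (value.getTemperature() > 30) {
                    out.collect(value);        // main output: high temperature
                } else {
                    ctx.output(lowTag, value); // side output: low temperature
                }
            }
        });

DataStream<SensorReading> lowStream = highStream.getSideOutput(lowTag);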
3.4 Connect and CoMap (merging two streams)
DataStream, DataStream → ConnectedStreams: connects two data streams while preserving their types. After the two streams are connected, they are merely placed in one shared stream; internally each keeps its own data and form unchanged, and the two streams remain independent of each other.
CoMap, CoFlatMap
ConnectedStreams → DataStream: operates on a ConnectedStreams with the same functionality as map and flatMap, applying a separate map or flatMap to each of the two streams.
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.collector.selector.OutputSelector;
import org.apache.flink.streaming.api.datastream.ConnectedStreams;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.datastream.SplitStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoMapFunction;

import java.util.Collections;

public class TransformTest5_connectStream {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        // Read data from a file
        DataStream<String> inputStream = env.readTextFile("/Users/seafyliang/DEV/Code_projects/Java_projects/study_projects/flink_study/src/main/resources/sensor.txt");
        // Convert to SensorReading with a Java 8 lambda
        DataStream<SensorReading> dataStream = inputStream.map(line -> {
            String[] fields = line.split(",");
            return new SensorReading(fields[0], new Long(fields[1]), new Double(fields[2]));
        });
        // 1. Split the stream in two, using 30 degrees as the boundary.
        // split is deprecated; use side outputs instead.
        SplitStream<SensorReading> splitStream = dataStream.split(new OutputSelector<SensorReading>() {
            @Override
            public Iterable<String> select(SensorReading sensorReading) {
                return (sensorReading.getTemperature() > 30) ? Collections.singletonList("high") : Collections.singletonList("low");
            }
        });
        DataStream<SensorReading> highTempStream = splitStream.select("high");
        DataStream<SensorReading> lowTempStream = splitStream.select("low");
        DataStream<SensorReading> allTempStream = splitStream.select("high", "low");
        // highTempStream.print("high");
        // lowTempStream.print("low");
        // allTempStream.print("all");
        // 2. connect: map the high-temperature stream to a tuple type, connect it
        // with the low-temperature stream, then emit status information
        SingleOutputStreamOperator<Tuple2<String, Double>> warningStream = highTempStream.map(new MapFunction<SensorReading, Tuple2<String, Double>>() {
            @Override
            public Tuple2<String, Double> map(SensorReading sensorReading) throws Exception {
                return new Tuple2<String, Double>(sensorReading.getId(), sensorReading.getTemperature());
            }
        });
        ConnectedStreams<Tuple2<String, Double>, SensorReading> connectedStreams = warningStream.connect(lowTempStream);
        SingleOutputStreamOperator<Object> resultStream = connectedStreams.map(new CoMapFunction<Tuple2<String, Double>, SensorReading, Object>() {
            @Override
            public Object map1(Tuple2<String, Double> stringDoubleTuple2) throws Exception {
                return new Tuple3<>(stringDoubleTuple2.f0, stringDoubleTuple2.f1, "high temp warning");
            }

            @Override
            public Object map2(SensorReading sensorReading) throws Exception {
                return new Tuple2<>(sensorReading.getId(), "normal");
            }
        });
        resultStream.print();
        env.execute();
    }
}
3.5 Union (merging multiple streams)
DataStream → DataStream: unions two or more DataStreams, producing a new DataStream that contains all elements from all of them.
The streams must all have the same data type, however.
// 3. union merges multiple streams
// A different element type causes a compile error:
// warningStream.union(lowTempStream);
DataStream<SensorReading> unionStream = highTempStream.union(lowTempStream, allTempStream);
Differences between Connect and Union:
- Union requires both streams to have the same type; Connect allows different types, which can then be unified afterwards in the coMap.
- Connect can only operate on two streams, while Union can operate on more than two.
4.Supported data types
Flink streaming applications process streams of events represented as data objects. Inside Flink, these objects therefore need to be handled: they must be serialized and deserialized so they can be sent over the network, or read from state backends, checkpoints, and savepoints. To do this efficiently, Flink needs to know exactly which data types the application processes. Flink uses the concept of type information to represent data types and generates a specific serializer, deserializer, and comparator for each one.
Flink also has a type-extraction system that analyzes the input and return types of functions to obtain type information automatically, and with it the serializers and deserializers. In some cases, however, such as lambda functions or generic types, the type information must be provided explicitly for the application to work correctly or to improve its performance.
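For example, when a lambda produces a generic type such as Tuple2, Java's type erasure keeps Flink from extracting the type parameters, so the type information has to be supplied explicitly. A minimal sketch, assuming the DataStream<SensorReading> dataStream from the earlier examples:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;

// Without .returns(...), Flink cannot infer Tuple2's generic parameters from the lambda
DataStream<Tuple2<String, Double>> pairStream = dataStream
        .map(sensor -> new Tuple2<>(sensor.getId(), sensor.getTemperature()))
        .returns(Types.TUPLE(Types.STRING, Types.DOUBLE));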
Flink supports all common data types in Java and Scala. The most widely used are the following.
4.1 Basic data types
Flink supports all Java and Scala basic data types: Integer, Double, Long, String, …
DataStream<Integer> inputStream = env.fromElements(1, 2, 3, 4);
inputStream.map(data -> data * 2);
4.2 Java and Scala tuples (Tuples)
DataStream<Tuple2<String, Integer>> personStream = env.fromElements(
        new Tuple2<>("aa", 17),
        new Tuple2<>("bb", 22)
);
personStream.filter(p -> p.f1 > 20);
4.3 Java simple objects (POJOs)
Similar to the Java bean requirements: the class must have a public no-argument constructor, and every field must either be public or be a private field with public getters and setters.
public class SensorReading {
    // Fields: id, timestamp, temperature
    private String id;
    private Long timestamp;
    private Double temperature;

    public SensorReading() {
    }

    public SensorReading(String id, Long timestamp, Double temperature) {
        this.id = id;
        this.timestamp = timestamp;
        this.temperature = temperature;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public Long getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(Long timestamp) {
        this.timestamp = timestamp;
    }

    public Double getTemperature() {
        return temperature;
    }

    public void setTemperature(Double temperature) {
        this.temperature = temperature;
    }
}
4.4 Others (Arrays, Lists, Maps, Enums, etc.)
Flink also supports some special-purpose types in Java and Scala, such as Java's ArrayList, HashMap, Enum, and so on.
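A minimal sketch of such types as stream elements (using the JDK's TimeUnit enum purely for illustration):

import java.util.concurrent.TimeUnit;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Arrays of a supported element type are handled directly
DataStream<String[]> arrayStream = env.fromElements(new String[]{"a", "b"}, new String[]{"c"});
// Enums are supported as well
DataStream<TimeUnit> enumStream = env.fromElements(TimeUnit.SECONDS, TimeUnit.MINUTES);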