Processing Streaming Data with Flink
Set up the project environment and process streaming data.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.bygones</groupId>
    <artifactId>learn-flink</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>1.10.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_2.12</artifactId>
            <version>1.10.1</version>
        </dependency>
    </dependencies>
</project>
Java & Flink code for processing streaming data (the `_2.12` suffix on `flink-streaming-java` denotes the Scala version the artifact was built against):
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;
// DataStream
// Flink stream processing
// The API for processing streaming data is DataStream
public class StreamWordCount {
    public static void main(String[] args) throws Exception {
        // Create the stream execution environment
        StreamExecutionEnvironment executionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment();
        // Set the parallelism; the default is the number of CPU cores on the machine
        executionEnvironment.setParallelism(8);
        // Read data from a file
        String inputPath = "F:\\workspace\\data-process\\flink\\learn-flink\\src\\main\\resources\\batch1.txt";
        // DataStreamSource<String> stringDataStreamSource = executionEnvironment.readTextFile(inputPath);
        DataStream<String> stringDataStream = executionEnvironment.readTextFile(inputPath);
        // Transform the data stream
        SingleOutputStreamOperator<Tuple2<String, Integer>> resultStream = stringDataStream.flatMap(new MyFlatMapper())
                .keyBy(0)
                .sum(1);
        resultStream.print();
        // The code above only defines the processing pipeline;
        // execute() starts the job, which then processes each record as it arrives
        executionEnvironment.execute();
    }

    // FlatMapFunction<input type, output type>
    // Tuple2<String, Integer> is a two-element tuple, e.g. (word, 1)
    public static class MyFlatMapper implements FlatMapFunction<String, Tuple2<String, Integer>> {
        /**
         * Defines how each record is processed: splits a line into words
         * and emits a (word, 1) tuple for each word.
         *
         * @param value     the input record (one line of text)
         * @param collector the Collector that gathers the tuples to emit downstream
         * @throws Exception if processing fails
         */
        @Override
        public void flatMap(String value, Collector<Tuple2<String, Integer>> collector) throws Exception {
            // Split the line on spaces
            String[] words = value.split(" ");
            // Wrap each word in a (word, 1) tuple
            for (String word : words) {
                collector.collect(new Tuple2<>(word, 1));
            }
        }
    }
}
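The tokenization step that MyFlatMapper performs can be exercised in plain Java, with no Flink runtime required. This is a hypothetical sketch: `FlatMapSketch` is not part of the project above, and `AbstractMap.SimpleEntry` merely stands in for Flink's `Tuple2`.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FlatMapSketch {
    // Mirrors MyFlatMapper.flatMap: split a line on spaces and emit (word, 1) pairs
    static List<Map.Entry<String, Integer>> flatMap(String value) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : value.split(" ")) {
            out.add(new SimpleEntry<>(word, 1));
        }
        return out;
    }

    public static void main(String[] args) {
        // One input line becomes a list of (word, 1) pairs
        System.out.println(flatMap("hello world"));
        // prints: [hello=1, world=1]
    }
}
```

In the real job, each of these pairs is then routed by `keyBy(0)` to the subtask responsible for that word.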
Sample streaming data (the contents of batch1.txt):
hello world
hello flink
hello spark
hello scala
how are you
fine thank you
and you
Analysis of the output
Records are processed one at a time, as they arrive.
The state of the computation (the running count per word) is kept between records.
Data is processed in parallel by default; each printed line is prefixed with the ID of the parallel subtask that produced it.
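The record-at-a-time, stateful behavior of `keyBy(0).sum(1)` can be sketched in plain Java. This is a deliberate simplification: `RunningSumSketch` is a hypothetical class, a single `HashMap` stands in for Flink's per-key managed state, and parallelism is ignored.

```java
import java.util.HashMap;
import java.util.Map;

public class RunningSumSketch {
    // Per-key running count, analogous to the state kept by keyBy(0).sum(1)
    private final Map<String, Integer> state = new HashMap<>();

    // Process one (word, 1) record and return the updated running sum;
    // the streaming job emits one such updated count per incoming record
    public int process(String word) {
        return state.merge(word, 1, Integer::sum);
    }

    public static void main(String[] args) {
        RunningSumSketch sums = new RunningSumSketch();
        for (String line : new String[]{"hello world", "hello flink", "hello spark"}) {
            for (String word : line.split(" ")) {
                // Each record immediately produces an updated count,
                // e.g. (hello,1), then (hello,2), then (hello,3)
                System.out.println("(" + word + "," + sums.process(word) + ")");
            }
        }
    }
}
```

This is why the job's output contains intermediate counts such as `(hello,1)` and `(hello,2)` before the final `(hello,4)`: the sum is re-emitted on every record rather than once at the end, as a batch job would do.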