Before writing any code, add the Maven dependencies:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>1.7.2</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>1.7.2</version>
</dependency>
First let's look at Flink's map, flatMap, and mapPartition operators. These all iterate over the elements of a DataSet and transform them. Here is the demo code:
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.MapPartitionFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.util.Collector;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Demonstrates Flink's map, flatMap, and mapPartition operators
public class FlinkDemo1 {
    public static void main(String[] args) throws Exception {
        // Obtain the Flink execution environment
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // Mock some test data
        List<String> list = new ArrayList<String>();
        list.add("I love Beijing");
        list.add("I love China");
        list.add("Beijing is the capital of China");
        // Create a batch data source from the collection
        DataSet<String> source = env.fromCollection(list);
        // Flink's map operator: one input element produces exactly one output element
        source.map(new MapFunction<String, List<String>>() {
            @Override
            public List<String> map(String line) throws Exception {
                String[] words = line.split(" ");
                List<String> wds = new ArrayList<String>();
                for (String word : words) {
                    wds.add(word);
                }
                return wds;
            }
        }).print();
        System.out.println("*************************************");
        // Flink's flatMap operator: one input element may produce zero or more output elements
        source.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public void flatMap(String line, Collector<String> collector) throws Exception {
                String[] words = line.split(" ");
                for (String word : words) {
                    collector.collect(word);
                }
            }
        }).print();
        System.out.println("*************************************");
        // Flink's mapPartition operator: called once per partition, not once per element
        source.mapPartition(new MapPartitionFunction<String, String>() {
            Integer index = 0;
            @Override
            public void mapPartition(Iterable<String> iterable, Collector<String> collector) throws Exception {
                Iterator<String> iterator = iterable.iterator();
                while (iterator.hasNext()) {
                    String line = iterator.next();
                    String[] words = line.split(" ");
                    for (String word : words) {
                        collector.collect("partition " + index + ", word: " + word);
                    }
                }
                index++;
            }
        }).print();
        // Note: in the DataSet API, print() itself triggers job execution, so an extra
        // env.execute() here would fail with "No new data sinks have been defined
        // since the last execution".
    }
}
This is a Java program, and it shows that map differs from flatMap and mapPartition in how results are returned. map processes one element and directly returns one result; the output type is the second type parameter of MapFunction. flatMap and mapPartition both return void: they hand each result to the Collector, and the output type is the second type parameter of FlatMapFunction (or MapPartitionFunction).
mapPartition processes the data of a whole partition at once: each time it handles a partition, Flink passes all of that partition's elements in as an Iterable, and each line is then read through the Iterator. mapPartition does not expose a partition number by itself, so here we defined an index counter to tag output coming from different partitions (each parallel instance keeps its own counter).
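To make the return-type difference concrete without needing the Flink dependency, here is a plain-Java sketch of the two call styles. The MyCollector class and the method names are ours, standing in for Flink's Collector, not part of the Flink API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MapVsFlatMap {
    // Stands in for Flink's Collector: flatMap-style functions push results into it
    static class MyCollector<T> {
        final List<T> out = new ArrayList<T>();
        void collect(T value) { out.add(value); }
    }

    // map style: one input line -> exactly one returned result (here, a List of words)
    static List<String> mapStyle(String line) {
        return Arrays.asList(line.split(" "));
    }

    // flatMap style: returns void, emits any number of results via the collector
    static void flatMapStyle(String line, MyCollector<String> collector) {
        for (String word : line.split(" ")) {
            collector.collect(word);
        }
    }

    public static void main(String[] args) {
        // map turns one input into one output element (a single List)
        List<String> mapped = mapStyle("I love Beijing");
        System.out.println(mapped); // [I, love, Beijing]

        // flatMap turns one input into three output elements, one per collect() call
        MyCollector<String> collector = new MyCollector<String>();
        flatMapStyle("I love Beijing", collector);
        System.out.println(collector.out.size()); // 3
    }
}
```

So in Flink the map version of the word count yields one List per sentence, while the flatMap version yields each word as a separate element.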
2. Flink's filter operator, which decides whether each element is kept or dropped
System.out.println("******************test filter**********************");
source.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public void flatMap(String line, Collector<String> collector) throws Exception {
        String[] words = line.split(" ");
        for (String word : words) {
            collector.collect(word);
        }
    }
}).filter(new FilterFunction<String>() { // requires import org.apache.flink.api.common.functions.FilterFunction;
    @Override
    public boolean filter(String word) throws Exception {
        // keep only words longer than 3 characters
        return word.length() > 3;
    }
}).print();
The filter function works like a loop combined with a WHERE clause: if the filter function returns true, the element is kept; if it returns false, the element is dropped.
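The keep/drop rule can be sketched in plain Java (no Flink dependency; FilterDemo and its method names are ours, chosen for illustration), using the same word.length() > 3 predicate as above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FilterDemo {
    // The same predicate as the FilterFunction above: keep words longer than 3 chars
    static boolean keep(String word) {
        return word.length() > 3;
    }

    // Applies the predicate the way filter does: true -> keep, false -> drop
    static List<String> filterWords(List<String> words) {
        List<String> kept = new ArrayList<String>();
        for (String word : words) {
            if (keep(word)) {
                kept.add(word);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("I", "love", "Beijing", "is", "the", "capital");
        System.out.println(filterWords(words)); // [love, Beijing, capital]
    }
}
```

Short words such as "I", "is", and "the" are dropped because the predicate returns false for them.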
3. Flink's distinct operator, which removes duplicate elements
System.out.println("******************test distinct**********************");
source.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public void flatMap(String line, Collector<String> collector) throws Exception {
        String[] words = line.split(" ");
        for (String word : words) {
            collector.collect(word);
        }
    }
}).distinct().print();
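distinct() keeps only one copy of each element in the DataSet, so words that appear in several of our sample sentences ("I", "love", "Beijing", "China") are emitted once. Its effect can be sketched in plain Java (DistinctDemo is ours; note that Flink's distinct runs in parallel and does not guarantee output order, so the insertion order below is only for readability):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class DistinctDemo {
    // Removes duplicate words; LinkedHashSet keeps first-seen order for readability
    static List<String> distinct(List<String> words) {
        return new ArrayList<String>(new LinkedHashSet<String>(words));
    }

    public static void main(String[] args) {
        // The words of the three sample sentences, with duplicates
        List<String> words = Arrays.asList(
                "I", "love", "Beijing",
                "I", "love", "China",
                "Beijing", "is", "the", "capital", "of", "China");
        System.out.println(distinct(words));
        // [I, love, Beijing, China, is, the, capital, of]
    }
}
```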