In the previous chapters we used the DataSet API and the DataStream API to process text files. A file is essentially a form of batch data; in this chapter we will work with a genuinely streaming environment.
Preparation
Open Netcat on the virtual machine:
nc -lk 7777
Keep this connection open so that it continues listening on port 7777.
Writing the code
Create a Java class named StreamWordCount:
package org.chad.wordcount;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class StreamWordCount {

    public static void main(String[] args) throws Exception {
        // 1. Create the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // 2. Read a text stream from the socket that Netcat is listening on
        DataStreamSource<String> ds = env.socketTextStream("node1", 7777);

        // 3. Split each line into words and emit (word, 1) pairs.
        //    returns(...) is required because Java erases the generic type of Tuple2.
        SingleOutputStreamOperator<Tuple2<String, Long>> wordAndOne = ds
                .flatMap(new FlatMapFunction<String, Tuple2<String, Long>>() {
                    @Override
                    public void flatMap(String s, Collector<Tuple2<String, Long>> out) throws Exception {
                        String[] words = s.split(" ");
                        for (String word : words) {
                            out.collect(Tuple2.of(word, 1L));
                        }
                    }
                })
                .returns(Types.TUPLE(Types.STRING, Types.LONG));

        // 4. Group the stream by the word (field f0 of the tuple)
        KeyedStream<Tuple2<String, Long>, String> keyedStream = wordAndOne
                .keyBy(new KeySelector<Tuple2<String, Long>, String>() {
                    @Override
                    public String getKey(Tuple2<String, Long> tuple) throws Exception {
                        return tuple.f0;
                    }
                });

        // 5. Sum the counts (tuple field index 1) per key and print the running totals
        SingleOutputStreamOperator<Tuple2<String, Long>> sum = keyedStream.sum(1);
        sum.print();

        // 6. Trigger execution; a streaming job keeps running until it is cancelled
        env.execute("StreamWordCount");
    }
}
Run the code. The program starts and waits for data to arrive. We can now type words into the Netcat terminal, and in the program's console we can see the counts accumulating: each line sent produces an updated result.
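The accumulation we see from sum(1) is a per-key running total that is updated once for every incoming record. A minimal plain-Java sketch of that behavior (just an illustration of the semantics, not Flink API; the class and method names are my own):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RunningCountDemo {

    // Per-word running totals, analogous to the keyed state behind sum(1)
    static Map<String, Long> counts = new LinkedHashMap<>();

    // One incoming word triggers exactly one update, and the new total is emitted
    static long record(String word) {
        return counts.merge(word, 1L, Long::sum);
    }

    public static void main(String[] args) {
        // Simulate lines typed into Netcat, one word at a time
        System.out.println(record("hello")); // total for "hello" is now 1
        System.out.println(record("flink")); // total for "flink" is now 1
        System.out.println(record("hello")); // "hello" arrives again, total becomes 2
    }
}
```

This is why the console shows a new line per record sent, rather than a single final result as in the batch examples.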
The demo above involves an empty "word"; that happened because I typed an extra space when sending the data.
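The empty word comes from String.split(" "): two consecutive spaces produce an empty element in the result array. A small sketch showing the effect and one possible guard (the tokenize helper is my own addition, not part of the article's code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SplitDemo {

    // Tokenize a line, skipping the empty strings that extra spaces produce
    static List<String> tokenize(String line) {
        List<String> words = new ArrayList<>();
        for (String word : line.split(" ")) {
            if (!word.isEmpty()) {
                words.add(word);
            }
        }
        return words;
    }

    public static void main(String[] args) {
        String line = "hello  flink"; // note the double space
        System.out.println(Arrays.toString(line.split(" "))); // [hello, , flink]
        System.out.println(tokenize(line));                   // [hello, flink]
    }
}
```

The same guard could be applied inside flatMap by checking that a word is non-empty before collecting it.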
Note: the KeySelector in this article is written out in full as an anonymous class. If you would like to see the shorter lambda form, refer to the text-data-processing article linked at the beginning of this one.