Flink Quick Start

Contents

Creating a Project

Writing the Code

Batch Processing

Stream Processing


Creating a Project

Add the project dependencies

<properties>
    <flink.version>1.13.0</flink.version>
    <java.version>1.8</java.version>
    <scala.binary.version>2.12</scala.binary.version>
    <slf4j.version>1.7.30</slf4j.version>
</properties>

<dependencies>
    <!-- Flink dependencies -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <!-- Logging dependencies -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <version>${slf4j.version}</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>${slf4j.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-to-slf4j</artifactId>
        <version>2.14.0</version>
    </dependency>
</dependencies>

Configure logging

Add a file named log4j.properties under src/main/resources, with the following contents:

log4j.rootLogger=error, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

Writing the Code

Batch Processing

Create the file input/words.txt and enter some text:

hello world hello flink hello java 

Create a new Java class BatchWordCount and write the test code in its static main method.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.AggregateOperator;
import org.apache.flink.api.java.operators.DataSource;
import org.apache.flink.api.java.operators.FlatMapOperator;
import org.apache.flink.api.java.operators.UnsortedGrouping;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class BatchWordCount {
    public static void main(String[] args) throws Exception {
        // 1. Create the execution environment
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // 2. Read the file line by line (each stored element is one line of text)
        DataSource<String> lineDS = env.readTextFile("input/words.txt");
        // 3. Convert the data format
        FlatMapOperator<String, Tuple2<String, Long>> wordAndOne = lineDS
                .flatMap((String line, Collector<Tuple2<String, Long>> out) -> {
                    String[] words = line.split(" ");
                    for (String word : words) {
                        out.collect(Tuple2.of(word, 1L));
                    }
                })
                // Lambdas lose generic type information to erasure, so the
                // result type must be declared explicitly
                .returns(Types.TUPLE(Types.STRING, Types.LONG));

        // 4. Group by word
        UnsortedGrouping<Tuple2<String, Long>> wordAndOneUG = wordAndOne.groupBy(0);
        // 5. Aggregate within each group
        AggregateOperator<Tuple2<String, Long>> sum = wordAndOneUG.sum(1);

        // 6. Print the result
        sum.print();
    }
}
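Running this job against the sample words.txt above prints the aggregated counts; with the DataSet API the results are collected to the client and printed without subtask prefixes, so you should see something like the following (ordering is not guaranteed):

(hello,3)
(flink,1)
(world,1)
(java,1)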

 

Stream Processing

Create a new Java class BoundedStreamWordCount and write the test code in its static main method. The full implementation is as follows:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

import java.util.Arrays;

public class BoundedStreamWordCount {
    public static void main(String[] args) throws Exception {
        // 1. Create the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // 2. Read the file
        DataStreamSource<String> lineDSS = env.readTextFile("input/words.txt");
        // 3. Convert the data format
        SingleOutputStreamOperator<Tuple2<String, Long>> wordAndOne = lineDSS
                .flatMap((String line, Collector<String> words) -> {
                    Arrays.stream(line.split(" ")).forEach(words::collect);
                })
                .returns(Types.STRING)
                .map(word -> Tuple2.of(word, 1L))
                .returns(Types.TUPLE(Types.STRING, Types.LONG));
        // 4. Key by word
        KeyedStream<Tuple2<String, Long>, String> wordAndOneKS = wordAndOne
                .keyBy(t -> t.f0);
        // 5. Sum
        SingleOutputStreamOperator<Tuple2<String, Long>> result = wordAndOneKS
                .sum(1);
        // 6. Print
        result.print();
        // 7. Execute
        env.execute();
    }
}
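Unlike the batch version, a streaming job emits an updated count for every record as it arrives, and print() prefixes each line with the index of the printing subtask. Since the stream is keyed by word, all updates for the same word come from the same subtask. With the sample words.txt the output would look roughly like this (prefixes and interleaving depend on your parallelism):

2> (hello,1)
4> (world,1)
2> (hello,2)
7> (flink,1)
2> (hello,3)
3> (java,1)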

Reading a Text Stream

Create a new Java class StreamWordCount, replacing the readTextFile call that reads file data in BoundedStreamWordCount with socketTextStream, which reads a text stream from a socket.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

import java.util.Arrays;

public class StreamWordCount {
    public static void main(String[] args) throws Exception {
        // 1. Create the streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // 2. Read the text stream from a socket
        DataStreamSource<String> lineDSS = env.socketTextStream("hadoop102", 7777);
        // 3. Convert the data format
        SingleOutputStreamOperator<Tuple2<String, Long>> wordAndOne = lineDSS
                .flatMap((String line, Collector<String> words) -> {
                    Arrays.stream(line.split(" ")).forEach(words::collect);
                })
                .returns(Types.STRING)
                .map(word -> Tuple2.of(word, 1L))
                .returns(Types.TUPLE(Types.STRING, Types.LONG));
        // 4. Key by word
        KeyedStream<Tuple2<String, Long>, String> wordAndOneKS = wordAndOne
                .keyBy(t -> t.f0);
        // 5. Sum
        SingleOutputStreamOperator<Tuple2<String, Long>> result = wordAndOneKS
                .sum(1);
        // 6. Print
        result.print();
        // 7. Execute
        env.execute();
    }
}

The job needs a process listening on port 7777 of hadoop102 to connect to for input; one way to provide this is shown below.
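Before launching the job, start a netcat listener on hadoop102 (-l listens, -k keeps the listener open between connections):

nc -lk 7777

Every line typed into this session is then fed to the Flink job as one record.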

Output
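As an illustration, typing hello flink and then hello java into the netcat session would produce output along these lines (the subtask prefixes depend on your parallelism; updates for the same word always share a prefix):

2> (hello,1)
4> (flink,1)
2> (hello,2)
3> (java,1)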

Supporting an arbitrary host and port

        // Extract the hostname and port from the program arguments
        // (requires: import org.apache.flink.api.java.utils.ParameterTool;)
        ParameterTool parameterTool = ParameterTool.fromArgs(args);
        String hostname = parameterTool.get("host");
        Integer port = parameterTool.getInt("port");
        // 2. Read the text stream from the socket
        DataStreamSource<String> lineDSS = env.socketTextStream(hostname, port);
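ParameterTool.fromArgs parses --key value pairs from the command line, so the program would be launched with arguments such as:

--host hadoop102 --port 7777

This lets the same job be pointed at any socket source without recompiling.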
