Apache Flink v1.9 (Tutorial: Setup)

Local Setup Tutorial

Get a Flink example program up and running in just a few simple steps.

Setup: Download and Start Flink

Flink runs on Linux, Mac OS X, and Windows. The only requirement for running Flink is a working Java 8.x installation. Windows users: please take a look at the Flink on Windows guide, which describes how to run Flink on Windows for local setups.

You can check whether Java is installed correctly by issuing the following command:

java -version

If you have Java 8, the output will look something like this:

java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

Download and Unpack

  1. Download a binary from the downloads page. You can pick any Hadoop/Scala combination you like. If you plan to just use the local file system, any Hadoop version will work fine. (An example download command is shown after the snippet below.)
  2. Go to the download directory.
  3. Unpack the downloaded archive.
$ cd ~/Downloads        # Go to download directory
$ tar xzf flink-*.tgz   # Unpack the downloaded archive
$ cd flink-1.9.0
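
If you prefer to fetch the archive from the command line, here is a minimal sketch; the URL points at the Apache release archive for Flink 1.9.0 with Scala 2.12 and is only an example, so pick the file that matches your Hadoop/Scala choice from the downloads page:

$ wget https://archive.apache.org/dist/flink/flink-1.9.0/flink-1.9.0-bin-scala_2.12.tgz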

For MacOS X users, Flink can also be installed through Homebrew.

$ brew install apache-flink
...
$ flink --version
Version: 1.2.0, Commit ID: 1c659cf
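
Note that the Homebrew package installs the Flink distribution under the formula's own directory rather than in a flink-1.9.0 folder, so the start/stop scripts used below live there. A quick way to locate them (the libexec layout is an assumption about the current formula; verify it on your machine):

$ ls $(brew --prefix apache-flink)/libexec/bin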

Start a Local Flink Cluster

$ ./bin/start-cluster.sh  # Start Flink

Check the Dispatcher's web frontend at http://localhost:8081 and make sure everything is up and running. The web frontend should report a single available TaskManager instance.
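
As a command-line alternative, you can query the cluster's monitoring REST API (assuming the REST endpoint is on its default port 8081); it should report one registered TaskManager:

$ curl http://localhost:8081/taskmanagers
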
You can also verify that the system is running by checking the log files in the log directory:

$ tail log/flink-*-standalonesession-*.log
INFO ... - Rest endpoint listening at localhost:8081
INFO ... - http://localhost:8081 was granted leadership ...
INFO ... - Web frontend listening at http://localhost:8081.
INFO ... - Starting RPC endpoint for StandaloneResourceManager at akka://flink/user/resourcemanager .
INFO ... - Starting RPC endpoint for StandaloneDispatcher at akka://flink/user/dispatcher .
INFO ... - ResourceManager akka.tcp://flink@localhost:6123/user/resourcemanager was granted leadership ...
INFO ... - Starting the SlotManager.
INFO ... - Dispatcher akka.tcp://flink@localhost:6123/user/dispatcher was granted leadership ...
INFO ... - Recovering all persisted jobs.
INFO ... - Registering TaskManager ... under ... at the SlotManager.
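
Another quick check is jps from the JDK: for a standalone session cluster it should list a StandaloneSessionClusterEntrypoint (the JobManager/Dispatcher process) and a TaskManagerRunner (the TaskManager process):

$ jps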

Read the Code

You can find the complete source code for this SocketWindowWordCount example in Scala and in Java on GitHub.
Scala:

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object SocketWindowWordCount {

    def main(args: Array[String]) : Unit = {

        // the port to connect to
        val port: Int = try {
            ParameterTool.fromArgs(args).getInt("port")
        } catch {
            case e: Exception => {
                System.err.println("No port specified. Please run 'SocketWindowWordCount --port <port>'")
                return
            }
        }

        // get the execution environment
        val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

        // get input data by connecting to the socket
        val text = env.socketTextStream("localhost", port, '\n')

        // parse the data, group it, window it, and aggregate the counts
        val windowCounts = text
            .flatMap { w => w.split("\\s") }
            .map { w => WordWithCount(w, 1) }
            .keyBy("word")
            .timeWindow(Time.seconds(5), Time.seconds(1))
            .sum("count")

        // print the results with a single thread, rather than in parallel
        windowCounts.print().setParallelism(1)

        env.execute("Socket Window WordCount")
    }

    // Data type for words with count
    case class WordWithCount(word: String, count: Long)
}

Java:

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class SocketWindowWordCount {

    public static void main(String[] args) throws Exception {

        // the port to connect to
        final int port;
        try {
            final ParameterTool params = ParameterTool.fromArgs(args);
            port = params.getInt("port");
        } catch (Exception e) {
            System.err.println("No port specified. Please run 'SocketWindowWordCount --port <port>'");
            return;
        }

        // get the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // get input data by connecting to the socket
        DataStream<String> text = env.socketTextStream("localhost", port, "\n");

        // parse the data, group it, window it, and aggregate the counts
        DataStream<WordWithCount> windowCounts = text
            .flatMap(new FlatMapFunction<String, WordWithCount>() {
                @Override
                public void flatMap(String value, Collector<WordWithCount> out) {
                    for (String word : value.split("\\s")) {
                        out.collect(new WordWithCount(word, 1L));
                    }
                }
            })
            .keyBy("word")
            .timeWindow(Time.seconds(5), Time.seconds(1))
            .reduce(new ReduceFunction<WordWithCount>() {
                @Override
                public WordWithCount reduce(WordWithCount a, WordWithCount b) {
                    return new WordWithCount(a.word, a.count + b.count);
                }
            });

        // print the results with a single thread, rather than in parallel
        windowCounts.print().setParallelism(1);

        env.execute("Socket Window WordCount");
    }

    // Data type for words with count
    public static class WordWithCount {

        public String word;
        public long count;

        public WordWithCount() {}

        public WordWithCount(String word, long count) {
            this.word = word;
            this.count = count;
        }

        @Override
        public String toString() {
            return word + " : " + count;
        }
    }
}

Run the Example

Now, we will run this Flink application. It reads text from a socket and, once every 5 seconds, prints the number of occurrences of each distinct word seen during the previous 5 seconds, i.e. a tumbling window of processing time, as long as words are floating in.

First of all, we use netcat to start a local server via:

$ nc -l 9000

Submit the Flink program:

$ ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
Starting execution of program

The program connects to the socket and waits for input. You can check the web interface to verify that the job is running as expected:
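
You can also confirm this from the command line with Flink's CLI, which lists the currently running jobs:

$ ./bin/flink list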

  • Words are counted in time windows of 5 seconds (processing time, tumbling windows) and are printed to stdout. Monitor the TaskManager's output file and write some text in nc (input is sent to Flink line by line after hitting <RETURN>):

    $ nc -l 9000
    lorem ipsum
    ipsum ipsum ipsum
    bye

The .out file will print the counts at the end of each time window as long as words are floating in, e.g.:

$ tail -f log/flink-*-taskexecutor-*.out
lorem : 1
bye : 1
ipsum : 4
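
Because the streaming job keeps running until the socket is closed, you can cancel it from the CLI before shutting down the cluster; <jobID> below is a placeholder for the ID printed by flink list or shown in the web interface:

$ ./bin/flink cancel <jobID>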

To stop Flink when you are done, type:

$ ./bin/stop-cluster.sh

Next Steps

Check out some more examples to get a better feel for Flink's programming APIs. When you are done with that, go ahead and read the streaming guide.
