3. Overview of Flink Cluster Configuration Files and the Logging System

1. Differences Between Common Maven Scopes

  • compile

    • The default scope. A compile dependency takes part in the compile, test, and run phases of the project; it is a strong dependency and gets included in the packaged artifact.
  • test

    • The dependency only takes part in test-related work, i.e. compiling and running test cases; the classic example is JUnit.
  • runtime

    • The dependency is only needed at runtime. These are typically libraries whose interface and implementation are separated, such as JDBC: at compile time only the interfaces are required, and the concrete MySQL, Oracle, etc. database drivers are needed only when the program actually runs. Such drivers belong in runtime scope.
  • provided

    • The dependency is not included in the packaged artifact; the runtime environment (Tomcat, base class libraries, and so on) supplies it. It takes part in the compile, test, and run phases exactly like compile scope; the only difference is that it is excluded at the packaging stage. A short sbt sketch of these scopes follows this list.
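
  The same ideas carry over to sbt, which this article uses later; a minimal sketch (the library versions here are illustrative):

    libraryDependencies ++= Seq(
      // provided: compiled against, but excluded from the assembled fat jar (the cluster supplies it)
      "org.apache.flink" %% "flink-streaming-scala" % "1.10.0" % "provided",
      // test: only on the classpath when compiling and running tests
      "junit" % "junit" % "4.12" % Test,
      // runtime: not needed for compilation, only when the program runs
      "mysql" % "mysql-connector-java" % "8.0.19" % Runtime
    )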

2. Flink Cluster Setup

  • Requirements:
    • Java 1.8.x or higher
    • Passwordless SSH login between the nodes
    • The same directory structure on every node

2.1. Standalone Cluster

  • JAVA_HOME configuration

    • Set this variable via the env.java.home key in conf/flink-conf.yaml:
      env.java.home: /usr/lib/jdk1.8.0_162
      
  • Flink configuration

    • Pick a master node and set its address in conf/flink-conf.yaml:

      jobmanager.rpc.address: 192.168.1.27
      
    • List the worker nodes in conf/slaves, one host per line (a sketch follows).
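
      A sketch, assuming the three workers targeted by the scp commands below:

        192.168.1.28
        192.168.1.29
        192.168.1.30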

    • Other important settings: see the full flink-conf.yaml reproduced in section 2.4.

    • The complete set of configuration parameters is documented on the official website.

  • Distribute the installation package

    scp -r flink hadoop@192.168.1.28:$PWD
    scp -r flink hadoop@192.168.1.29:$PWD
    scp -r flink hadoop@192.168.1.30:$PWD
    
  • Start the Flink cluster

    ${FLINK_HOME}/bin/start-cluster.sh 
    
  • Add JobManager / TaskManager instances to a running cluster (see the commands below)
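
    Flink ships helper scripts for exactly this; run them on the node that should host the new instance (a sketch, using the scripts bundled with the distribution):

      bin/jobmanager.sh start
      bin/taskmanager.sh start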

  • Flink web UI: open the JobManager host in a browser on rest.port (8083 in the configuration of section 2.4; Flink's default is 8081).

2.2. YARN Cluster

  • Set the Hadoop environment variable

    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    
  • Submit a job in per-job cluster mode; each submission spins up a fresh Flink cluster on YARN

    ./flink run -m yarn-cluster -p 1 -yjm 1024 -ytm 1024m /home/hadoop/fanjh/FirstFlinkDemo-assembly-1.0.jar 192.168.1.27 9999
    
  • Start a Flink session cluster and submit jobs to it

    • Show usage information

      $FLINK_HOME/bin/yarn-session.sh -h
      
    • Start a long-running Flink session cluster on YARN

      bin/yarn-session.sh -jm 1024m -tm 1024m -s 4
      
    • Once the session is up, submit Flink jobs to it on YARN

      bin/flink run /home/hadoop/fanjh/FirstFlinkDemo-assembly-1.0.jar 192.168.1.27 9999
      
  • Stop the YARN session with the yarn CLI

    yarn application -kill <applicationId> 
    

2.3. Flink 1.10.0 Distributed High-Availability Cluster Setup

  • Standalone cluster high availability
    • Configure the masters file conf/masters and distribute it to every machine (a sketch follows).
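
      Each line of conf/masters is host:webui-port; a sketch with assumed hosts:

        192.168.1.27:8083
        192.168.1.28:8083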

    • Configure flink-conf.yaml and distribute it to every machine

      high-availability: zookeeper
      high-availability.storageDir: hdfs:///flink/ha/
      high-availability.zookeeper.quorum: baojiabei03:2181,baojiabei04:2181,baojiabei05:2181
      high-availability.zookeeper.path.root: /flink
      

2.4. Configuration File

################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
################################################################################


#==============================================================================
# Common
#==============================================================================

# The external address of the host on which the JobManager runs and can be
# reached by the TaskManagers and any clients which want to connect. This setting
# is only used in Standalone mode and may be overwritten on the JobManager side
# by specifying the --host <hostname> parameter of the bin/jobmanager.sh executable.
# In high availability mode, if you use the bin/start-cluster.sh script and setup
# the conf/masters file, this will be taken care of automatically. Yarn/Mesos
# automatically configure the host name based on the hostname of the node where the
# JobManager runs.

jobmanager.rpc.address: 192.168.1.27

# The RPC port where the JobManager is reachable.

jobmanager.rpc.port: 6123


# The heap size for the JobManager JVM

jobmanager.heap.size: 1024m


# The total process memory size for the TaskManager.
#
# Note this accounts for all memory usage within the TaskManager process, including JVM metaspace and other overhead.

taskmanager.memory.process.size: 1024m

# To exclude JVM metaspace and overhead, please, use total Flink memory size instead of 'taskmanager.memory.process.size'.
# It is not recommended to set both 'taskmanager.memory.process.size' and Flink memory.
#
# taskmanager.memory.flink.size: 1280m

# The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.

taskmanager.numberOfTaskSlots: 4

# The default parallelism used for programs that do not specify one.

parallelism.default: 8

# The default file system scheme and authority.
# 
# By default file paths without scheme are interpreted relative to the local
# root file system 'file:///'. Use this to override the default and interpret
# relative paths relative to a different file system,
# for example 'hdfs://mynamenode:12345'
#
# fs.default-scheme
env.java.home: /usr/lib/jdk1.8.0_162

#==============================================================================
# High Availability
#==============================================================================

# The high-availability mode. Possible options are 'NONE' or 'zookeeper'.
#
high-availability: zookeeper

# The path where metadata for master recovery is persisted. While ZooKeeper stores
# the small ground truth for checkpoint and leader election, this location stores
# the larger objects, like persisted dataflow graphs.
# 
# Must be a durable file system that is accessible from all nodes
# (like HDFS, S3, Ceph, nfs, ...) 
#
high-availability.storageDir: hdfs://cluster/flink/ha/
yarn.application-attempts: 10

# The list of ZooKeeper quorum peers that coordinate the high-availability
# setup. This must be a list of the form:
# "host1:clientPort,host2:clientPort,..." (default clientPort: 2181)
#
high-availability.zookeeper.quorum: 192.168.1.23:2181,192.168.1.24:2181,192.168.1.25:2181
high-availability.zookeeper.path.root: /flink

# ACL options are based on https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_BuiltinACLSchemes
# It can be either "creator" (ZOO_CREATE_ALL_ACL) or "open" (ZOO_OPEN_ACL_UNSAFE)
# The default value is "open" and it can be changed to "creator" if ZK security is enabled
#
# high-availability.zookeeper.client.acl: open

#==============================================================================
# Fault tolerance and checkpointing
#==============================================================================

# The backend that will be used to store operator state checkpoints if
# checkpointing is enabled.
#
# Supported backends are 'jobmanager', 'filesystem', 'rocksdb', or the
# <class-name-of-factory>.
#
state.backend: filesystem

# Directory for checkpoints filesystem, when using any of the default bundled
# state backends.
#
state.checkpoints.dir: hdfs://cluster/flink/flink-checkpoints

# Default target directory for savepoints, optional.
#
state.savepoints.dir: hdfs://cluster/flink/flink-checkpoints

# Flag to enable/disable incremental checkpoints for backends that
# support incremental checkpoints (like the RocksDB state backend). 
#
# state.backend.incremental: false

# The failover strategy, i.e., how the job computation recovers from task failures.
# Only restart tasks that may have been affected by the task failure, which typically includes
# downstream tasks and potentially upstream tasks if their produced data is no longer available for consumption.

jobmanager.execution.failover-strategy: region

#==============================================================================
# Rest & web frontend
#==============================================================================

# The port to which the REST client connects. If rest.bind-port has
# not been specified, then the server will bind to this port as well.
#
rest.port: 8083

# The address to which the REST client will connect to
#
#rest.address: 0.0.0.0

# Port range for the REST and web server to bind to.
#
#rest.bind-port: 8080-8090

# The address that the REST & web server binds to
#
#rest.bind-address: 0.0.0.0

# Flag to specify whether job submission is enabled from the web-based
# runtime monitor (enabled here).

web.submit.enable: true

#==============================================================================
# Advanced
#==============================================================================

# Override the directories for temporary files. If not specified, the
# system-specific Java temporary directory (java.io.tmpdir property) is taken.
#
# For framework setups on Yarn or Mesos, Flink will automatically pick up the
# containers' temp directories without any need for configuration.
#
# Add a delimited list for multiple directories, using the system directory
# delimiter (colon ':' on unix) or a comma, e.g.:
#     /data1/tmp:/data2/tmp:/data3/tmp
#
# Note: Each directory entry is read from and written to by a different I/O
# thread. You can include the same directory multiple times in order to create
# multiple I/O threads against that directory. This is for example relevant for
# high-throughput RAIDs.
#
# io.tmp.dirs: /tmp

# The classloading resolve order. Possible values are 'child-first' (Flink's default)
# and 'parent-first' (Java's default).
#
# Child first classloading allows users to use different dependency/library
# versions in their application than those in the classpath. Switching back
# to 'parent-first' may help with debugging dependency issues.
#
# classloader.resolve-order: child-first

# The amount of memory going to the network stack. These numbers usually need 
# no tuning. Adjusting them may be necessary in case of an "Insufficient number
# of network buffers" error. The default min is 64MB, the default max is 1GB.
# 
# taskmanager.memory.network.fraction: 0.1
# taskmanager.memory.network.min: 64mb
# taskmanager.memory.network.max: 1gb

#==============================================================================
# Flink Cluster Security Configuration
#==============================================================================

# Kerberos authentication for various components - Hadoop, ZooKeeper, and connectors -
# may be enabled in four steps:
# 1. configure the local krb5.conf file
# 2. provide Kerberos credentials (either a keytab or a ticket cache w/ kinit)
# 3. make the credentials available to various JAAS login contexts
# 4. configure the connector to use JAAS/SASL

# The below configure how Kerberos credentials are provided. A keytab will be used instead of
# a ticket cache if the keytab path and principal are set.

# security.kerberos.login.use-ticket-cache: true
# security.kerberos.login.keytab: /path/to/kerberos/keytab
# security.kerberos.login.principal: flink-user

# The configuration below defines which JAAS login contexts

# security.kerberos.login.contexts: Client,KafkaClient

#==============================================================================
# ZK Security Configuration
#==============================================================================

# Below configurations are applicable if ZK ensemble is configured for security

# Override below configuration to provide custom ZK service name if configured
# zookeeper.sasl.service-name: zookeeper

# The configuration below must match one of the values set in "security.kerberos.login.contexts"
# zookeeper.sasl.login-context-name: Client

#==============================================================================
# HistoryServer
#==============================================================================

# The HistoryServer is started and stopped via bin/historyserver.sh (start|stop)

# Directory to upload completed jobs to. Add this directory to the list of
# monitored directories of the HistoryServer as well (see below).
jobmanager.archive.fs.dir: hdfs://cluster/flink/flink-completed-jobs/

# The address under which the web-based HistoryServer listens.
historyserver.web.address: 192.168.1.29

# The port under which the web-based HistoryServer listens.
historyserver.web.port: 8082

# Comma separated list of directories to monitor for completed jobs.
historyserver.archive.fs.dir: hdfs://cluster/flink/flink-completed-jobs/

# Interval in milliseconds for refreshing the monitored directories.
historyserver.archive.fs.refresh-interval: 10000


3. SLF4J, Logback, Log4j, and java.util.logging: Differences and Relationships

3.1. How a Famous Logging System Gets Designed

  • Besides printing to the console, log messages should be writable to files and even sendable by email (for example, error messages from the production environment)
  • Log content should be formattable, e.g. as plain text, XML, HTML, and so on
  • Logs for different Java classes, different packages, and different levels should be flexibly routable to different files
    • For example, all logs for the com.foo package go to the foo.log file
    • All logs for the com.bar package go to the bar.log file
    • All ERROR-level logs go to the errors.log file
  • Logs should be leveled: some are pure debug output, used on a local machine or in a test environment to help programmers debug, and not needed in production at all; others describe errors, which must be recorded when something goes wrong in production to support later analysis. A log4j sketch of these ideas follows.
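
  A minimal log4j.properties sketch of the routing described above (the appender and file names are illustrative):

    # everything at INFO or above goes to the console
    log4j.rootLogger=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c - %m%n

    # all logs under the com.foo package additionally go to foo.log
    log4j.logger.com.foo=DEBUG, fooFile
    log4j.appender.fooFile=org.apache.log4j.FileAppender
    log4j.appender.fooFile.File=foo.log
    log4j.appender.fooFile.layout=org.apache.log4j.PatternLayout
    log4j.appender.fooFile.layout.ConversionPattern=%d %p %c - %m%n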

3.2. An Issue Hit While Running the Example in Standalone Cluster Mode

  • With the cluster running, after clearing out the contents of the log directory, no new log or out files were produced; everything returned to normal after a restart?? A likely explanation: on Linux, deleting a file only unlinks it, and the running JVMs keep their open file handles, so they silently keep writing to the deleted inodes until the processes are restarted. A quick way to check this is sketched below.
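
    One way to confirm this, assuming lsof is available on the node:

      # list open file handles whose files have been deleted (link count 0)
      lsof +L1 | grep flink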

4. Flink DataStream API Programming Guide

4.1. Anatomy of a Flink Program

Flink programs look like regular programs that transform collections of data. Each program consists of the same basic parts (a Scala skeleton follows this list):

  • Obtain an execution environment
  • Load/create the initial data
  • Specify transformations on this data
  • Specify where to put the results of the computation
  • Trigger the program execution
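
A Scala skeleton of these five steps, as a minimal sketch against the Flink 1.10 streaming API (the sample data and job name are made up):

  import org.apache.flink.streaming.api.scala._

  object Skeleton {
    def main(args: Array[String]): Unit = {
      // 1. obtain an execution environment
      val env = StreamExecutionEnvironment.getExecutionEnvironment
      // 2. load/create the initial data
      val text = env.fromElements("to be", "or not to be")
      // 3. specify transformations on this data
      val upper = text.map(_.toUpperCase)
      // 4. specify where to put the results of the computation
      upper.print()
      // 5. trigger the program execution
      env.execute("skeleton job")
    }
  }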

4.2. The First Flink Program (WordCount)

  • Overall project structure (a typical sbt layout is sketched below)
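
    The layout below is an assumed standard sbt structure, reconstructed from the names used in this article:

      FirstFlinkDemo/
      ├── build.sbt
      ├── project/
      └── src/main/scala/com/xiaofan/SocketTextStreamWordCount.scala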

  • build.sbt

    ThisBuild / resolvers ++= Seq(
        "Apache Development Snapshot Repository" at "https://repository.apache.org/content/repositories/snapshots/",
        Resolver.mavenLocal
    )
    
    name := "FirstFlinkDemo"
    
    version := "1.0"
    
    organization := "com.xiaofan"
    
    ThisBuild / scalaVersion := "2.12.6"
    
    val flinkVersion = "1.10.0"
    
    val flinkDependencies = Seq(
      "org.apache.flink" %% "flink-scala" % flinkVersion % "provided",
      "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
      "org.slf4j" % "slf4j-log4j12" % "1.8.0-beta4")
    
    lazy val root = (project in file(".")).
      settings(
        libraryDependencies ++= flinkDependencies
      )
    
    // the main class for the assembled (fat) jar
    assembly / mainClass := Some("com.xiaofan.SocketTextStreamWordCount")
    
    // make run command include the provided dependencies
    Compile / run  := Defaults.runTask(Compile / fullClasspath,
                                       Compile / run / mainClass,
                                       Compile / run / runner
                                      ).evaluated
    
    // stays inside the sbt console when we press "ctrl-c" while a Flink programme executes with "run" or "runMain"
    Compile / run / fork := true
    Global / cancelable := true
    
    // exclude Scala library from assembly
    assembly / assemblyOption  := (assembly / assemblyOption).value.copy(includeScala = false)
    
    fork in run := true
    
    
  • mainRunner
    Whether or not this is configured seems to make no difference. (In the official Flink sbt quickstart, mainRunner is an extra module that puts the "provided" Flink dependencies back on the compile classpath so the job can be launched from an IDE such as IntelliJ; it does not affect cluster submission.)

  • The SocketTextStreamWordCount class (a reconstruction is sketched below)
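
    A minimal sketch of what the class plausibly contains, reconstructed from the build settings above and the host/port arguments passed on the command line (the original code was only shown as a screenshot):

      package com.xiaofan

      import org.apache.flink.streaming.api.scala._

      object SocketTextStreamWordCount {
        def main(args: Array[String]): Unit = {
          // hostname and port of the text socket source, e.g. 192.168.1.27 9999
          val hostname = args(0)
          val port = args(1).toInt

          val env = StreamExecutionEnvironment.getExecutionEnvironment

          // split lines into words, pair each with a count of 1, and keep a running sum per word
          val counts = env
            .socketTextStream(hostname, port)
            .flatMap(_.toLowerCase.split("\\W+").filter(_.nonEmpty))
            .map((_, 1))
            .keyBy(0)
            .sum(1)

          counts.print()
          env.execute("Scala SocketTextStreamWordCount")
        }
      }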

  • Local run results (a sample session is shown below)
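
    For example, with a local text source (the session below is illustrative; the "n>" prefix is the index of the printing subtask):

      $ nc -lk 9999
      hello world hello

      3> (hello,1)
      4> (world,1)
      3> (hello,2)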

  • Package and submit to the cluster

    • Adjust the code for the cluster run

    • Build the fat jar: sbt clean assembly

    • Submit to the cluster: bin/flink run /home/hadoop/fanjh/FirstFlinkDemo-assembly-1.0.jar 192.168.1.27 9999


5. Flink Job Submission Flow (Standalone and YARN)

6. Parting Words: As heaven maintains vigor through movement, a gentleman should constantly strive for self-improvement
