Chapter 3: Integrating Flink into Ambari via Custom Service Development (Open-Source Demo)

1. Create a Flink package source

(1) Install the httpd service and create the flink directory

  • Note: the httpd service provides the /var/www/html directory; skip the install if that directory already exists
yum -y install httpd

service httpd restart

chkconfig httpd on

mkdir  /var/www/html/flink

(2) Download the required artifacts

wget --no-check-certificate https://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.14.5/flink-1.14.5-bin-scala_2.11.tgz
wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
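
Both files need to end up under /var/www/html/flink so that httpd can serve them. A quick sanity check, as a minimal sketch assuming httpd listens on the default port 80 on this host:

# run the wget commands from inside /var/www/html/flink, or move the files there afterwards
mv flink-1.14.5-bin-scala_2.11.tgz flink-shaded-hadoop-2-uber-2.8.3-10.0.jar /var/www/html/flink/

# expect "HTTP/1.1 200 OK" in the response headers
curl -I http://localhost/flink/flink-1.14.5-bin-scala_2.11.tgz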

2. Download the ambari-flink-service definition

(1) Check the HDP version

VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`

echo $VERSION
  • Output: on the stack used in this guide this prints 3.1 (the paths below hardcode HDP/3.1 accordingly)

(2) Clone ambari-flink-service into the ambari-server resources directory

git clone https://github.com/abajwa-hw/ambari-flink-service.git   /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK
  • Note: the git clone failed in this environment, so the repository was downloaded separately and then uploaded to the Linux host

(screenshot)

3. Edit the configuration files

(1) Edit metainfo.xml

  • Change: set the version to install to 1.14.5
cd /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/
vim metainfo.xml
  • Modified content:

(screenshot)
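
For reference, the edit amounts to updating the version element of the service definition. A sketch based on the upstream ambari-flink-service layout; verify the surrounding elements against your copy of the file:

<service>
    <name>FLINK</name>
    <displayName>Flink</displayName>
    <version>1.14.5</version>
    <!-- remaining elements unchanged -->
</service>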

(2) Configure the JAVA_HOME path

cd /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/configuration
vim flink-env.xml

# set this to your own Java path
env.java.home: /home/soft/jdk1.8.0_121

# remove the following two parameters (these legacy heap settings were
# superseded by the new memory model in Flink 1.10+; see problem Ⅳ in the appendix)
jobmanager.heap.mb: 256

taskmanager.heap.mb: 512
  • Modified content:

(screenshot)

(3) Edit flink-ambari-config.xml

  • Change: point the download URL at the HTTP path created in step 1
cd /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/configuration

vim flink-ambari-config.xml
  • Modified content:

(screenshot)

  • Also set the Flink install path to a local directory:

(screenshot)
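
For reference, the relevant properties look roughly like this. The property names follow the upstream ambari-flink-service repo; verify them in your copy, and <ambari-server-host> is a placeholder for your own host:

<property>
    <name>flink_download_url</name>
    <value>http://<ambari-server-host>/flink/flink-1.14.5-bin-scala_2.11.tgz</value>
</property>
<property>
    <name>flink_install_dir</name>
    <value>/opt/flink</value>
</property>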

4. Remaining steps

(1) Add the user and group

# add the group
groupadd flink
# add the user with home directory /home/flink and primary group flink
useradd  -d /home/flink  -g flink flink
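
To confirm the account was created as expected (optional):

id flink   # expect uid=...(flink) gid=...(flink) groups=...(flink)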

(2) Restart ambari-server

ambari-server restart

5. Install Flink through Ambari

(1) Select the Flink service in the Ambari web UI

(screenshot)

(2) Add the Flink service

(screenshot)

(3) Choose the server(s) to install Flink on

(screenshot)

(screenshot)

(4) Configure the Flink-on-YARN failover proxy provider

<property>
    <name>yarn.client.failover-proxy-provider</name>
    <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
</property>
  • Resulting configuration:
    • key: yarn.client.failover-proxy-provider
    • value: org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider

(screenshot)

  • Cluster information:

(screenshot)

  • Installation result:

(screenshot)

  • Flink did not start successfully; see the appendix below for the fixes:

(screenshot)

Appendix: Common Problems

Ⅰ. Ambari install error: 500 status code received on POST method for API
Error message: Error occured during stack advisor command invocation: Cannot create /var/run/ambari-server/stack-recommendations
  • Solution: give the ambari user ownership of /var/run/ambari-server
sudo chown -R ambari /var/run/ambari-server
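
If the stack-recommendations directory itself is missing, create it before fixing ownership. A sketch, assuming ambari-server runs as the ambari user (matching the chown above):

sudo mkdir -p /var/run/ambari-server/stack-recommendations
sudo chown -R ambari /var/run/ambari-server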
Ⅱ. Flink installation error #1
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 38, in <module>
    BeforeAnyHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 31, in hook
    setup_users()
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/shared_initialization.py", line 50, in setup_users
    groups = params.user_to_groups_dict[user],
KeyError: u'flink'
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-153.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-153.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
  • Solution: tell Ambari to skip user/group creation. The before-ANY hook fails with KeyError: u'flink' because the flink user is missing from Ambari's user-to-group mapping; since the user was already created manually in step 4(1), Ambari does not need to create it.

(screenshot)

① Check the current configuration

cd /var/lib/ambari-server/resources/scripts

# view the ignore_groupsusers_create setting
python configs.py -u admin -p admin -n BigDataPlatform -l leidi01 -t 8080 -a get -c cluster-env |grep -i ignore_groupsusers_create

(screenshot)

② Set ignore_groupsusers_create to true

python configs.py -u admin -p admin -n BigDataPlatform -l leidi01 -t 8080 -a set -c cluster-env -k ignore_groupsusers_create -v true
  • Result:

(screenshot)
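
To confirm the change took effect, re-run the get command from step ①:

python configs.py -u admin -p admin -n BigDataPlatform -l leidi01 -t 8080 -a get -c cluster-env |grep -i ignore_groupsusers_create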

Ⅲ. Flink installation error #2
  • Symptom: the Flink configuration directory does not exist
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/FLINK/package/scripts/flink.py", line 172, in <module>
    Master().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/FLINK/package/scripts/flink.py", line 108, in start
    self.configure(env) 
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/FLINK/package/scripts/flink.py", line 91, in configure
    File(format("{conf_dir}/flink-conf.yaml"), content=properties_content, owner=params.flink_user)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 120, in action_create
    raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
resource_management.core.exceptions.Fail: Applying File['/opt/flink/conf/flink-conf.yaml'] failed, parent directory /opt/flink/conf doesn't exist
  • Solution: manually extract the package into the expected directory
cd /var/www/html/flink

# tar -C requires the target directory to exist
mkdir -p /opt/flink

tar -zxvf flink-1.14.5-bin-scala_2.11.tgz -C /opt/flink

cd /opt/flink

# flatten the versioned directory into /opt/flink itself
mv flink-1.14.5/* /opt/flink
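
Depending on how the service scripts manage permissions, the flink user may also need to own the tree. This is an assumption; adjust to your environment:

chown -R flink:flink /opt/flink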

(screenshot)

Ⅳ. Flink installation error #3
  • Symptom
 raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HADOOP_CONF_DIR=/etc/hadoop/conf; export HADOOP_CLASSPATH=/usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/lib/*:/usr/hdp/3.1.0.0-78/hadoop/.//*:/usr/hdp/3.1.0.0-78/hadoop-hdfs/./:/usr/hdp/3.1.0.0-78/hadoop-hdfs/lib/*:/usr/hdp/3.1.0.0-78/hadoop-hdfs/.//*:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/lib/*:/usr/hdp/3.1.0.0-78/hadoop-mapreduce/.//*:/usr/hdp/3.1.0.0-78/hadoop-yarn/./:/usr/hdp/3.1.0.0-78/hadoop-yarn/lib/*:/usr/hdp/3.1.0.0-78/hadoop-yarn/.//*; /opt/flink/bin/yarn-session.sh -d -nm flinkapp-from-ambari -n 1 -s 1 -jm 768 -tm 1024 -qu default >> /var/log/flink/flink-setup.log' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/flink/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.configuration.IllegalConfigurationException: JobManager memory configuration failed: Sum of configured JVM Metaspace (256.000mb (268435456 bytes)) and JVM Overhead (192.000mb (201326592 bytes)) exceed configured Total Process Memory (256.000mb (268435456 bytes)).
  • Solution: edit flink-env.xml (the same file as in step 3(2)); this fix applies to Flink 1.11 and later
# uninstall the Flink service in the Ambari UI, apply the changes below, then reinstall

cd /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/configuration

vim flink-env.xml

# remove the following two parameters
jobmanager.heap.mb: 256

taskmanager.heap.mb: 512

# add the following two parameters; increase the values further if memory is still insufficient
jobmanager.memory.flink.size: 1024m

taskmanager.memory.flink.size: 2048m

# restart the server
ambari-server restart 

(screenshot)
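
For context on the numbers: under the Flink 1.10+ memory model, the total JobManager process size is roughly jobmanager.memory.flink.size plus JVM metaspace plus JVM overhead. Taking the values reported in the error above (256 MB metaspace, 192 MB overhead), jobmanager.memory.flink.size: 1024m yields a process of about 1472 MB, well clear of the 256 MB total process memory that caused the failure.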

Ⅴ. Flink installation error #4
  • Symptom
org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn session cluster

Caused by: org.apache.flink.configuration.IllegalConfigurationException: The number of requested virtual cores for application master 1 exceeds the maximum number of virtual cores 0 available in the Yarn Cluster.
  • Cause: in YARN mode, Flink is constrained by the yarn.scheduler.maximum-allocation-vcores setting; here the YARN cluster reports no allocatable vcores (numYarnMaxVcores = 0).

  • Solution:

    • Method 1: increase the maximum-allocation-vcores setting on the YARN side (see the snippet below)
    • Method 2: reduce the number of task slots so it does not exceed the vcores YARN can allocate (the value must be at least 1); in conf/flink-conf.yaml under the Flink install directory set, for example:
      taskmanager.numberOfTaskSlots: 1
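
For method 1, the corresponding YARN property, set through Ambari's YARN configuration; the value 4 is only an example, size it to your nodes:

<property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>4</value>
</property>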