Installing Flink 1.14.x on Ambari 2.7.x

I. Create the Flink package source

1. Install the httpd service

(Any server will do; hdp01 is used here. If httpd is already installed, skip this step.)

[root@hdp01 ~]# yum -y install httpd
[root@hdp01 ~]# service httpd restart
[root@hdp01 ~]# chkconfig httpd on

After installation, the directory /var/www/html is created (it serves the same role as Tomcat's webapps directory).

2. Download the packages and move them into place

Download the two files below and put them in /var/www/html/flink (create the flink directory under /var/www/html first).

[root@hdp01 ~]# wget https://archive.apache.org/dist/flink/flink-1.14.4/flink-1.14.4-bin-scala_2.12.tgz
[root@hdp01 ~]# wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
mv flink-1.14.4-bin-scala_2.12.tgz /var/www/html/flink/
mv flink-shaded-hadoop-2-uber-2.8.3-10.0.jar /var/www/html/flink/

Open http://hdp01/flink/ in a browser;

you should see both files listed.
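The same check can be done from the command line. This is a quick sketch, assuming httpd is running on hdp01 and the two files were copied as above:

```shell
# Verify the local repo from any cluster node.
curl -sI http://hdp01/flink/flink-1.14.4-bin-scala_2.12.tgz | head -n1
curl -sI http://hdp01/flink/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar | head -n1
# Each request should report an HTTP 200 status line.
```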

II. Download the ambari-flink-service service

(On the host running ambari-server; here, hdp01.)

[root@hdp01 ~]# VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
[root@hdp01 ~]# echo $VERSION
3.1

(If the sed expression prints nothing, just use the HDP major version directly; here it is 3.1.)
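The sed expression can be sanity-checked locally against a sample line; the sample string below is an assumption about the hdp-select output format:

```shell
# Extract the HDP major.minor version from a sample hdp-select line.
sample='hadoop-client - 3.1.5.0-152'
VERSION=$(echo "$sample" | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/')
echo "$VERSION"
```

With this sample input the command prints 3.1.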

[root@hdp01 ~]# git clone https://github.com/abajwa-hw/ambari-flink-service.git   /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK

1. Edit metainfo.xml and set the version to install to 1.14.4

[root@hdp01 ~]# cd /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/
[root@hdp01 ~]# vim metainfo.xml
<displayName>Flink</displayName>
   <comment>Apache Flink is a streaming dataflow...</comment>
   <version>1.14.4</version>

2. Edit flink-env.xml (JAVA_HOME and memory settings)

[root@hdp01 ~]#  cd /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/configuration
[root@hdp01 ~]#  vim flink-env.xml
env.java.home: /usr/bin/java/ # change to your own Java home
jobmanager.memory.process.size: 1600m # use this value; omitting it causes a memory error at startup
taskmanager.memory.process.size: 1728m # use this value; omitting it causes a memory error at startup
jobmanager.heap.mb: 1024

3. Edit flink-ambari-config.xml and change the download URLs to the web path created in section I

[root@hdp01 ~]# cd /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/configuration
[root@hdp01 ~]# vim flink-ambari-config.xml
<property>
    <name>flink_download_url</name>
    <value>http://hdp01/flink/flink-1.14.4-bin-scala_2.12.tgz</value>
    <description>Snapshot download location. Downloaded when setup_prebuilt is true</description>
  </property>
  <property>
    <name>flink_hadoop_shaded_jar</name>
    <value>http://hdp01/flink/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar</value>
    <description>Flink shaded hadoop jar download location. Downloaded when setup_prebuilt is true</description>
  </property>

III. Create the flink user and group on the deployment machines

[root@hdp01 ~]# groupadd flink
[root@hdp01 ~]# useradd -d /home/flink -g flink flink

IV. Restart ambari-server

[root@hdp01 ~]# ambari-server restart

V. Install Flink through Ambari

  1. In the Ambari web UI, choose to add the Flink service

  2. Go to Stack and Versions and click Add Service next to Flink

  3. Choose which host the Flink Master should be installed on

  4. e.g. hdp03 (the slave and client roles cannot be selected; just continue to the next step, this does not matter)

  5. Configure Flink-on-YARN failover (under custom flink-env):

<property>
    <name>yarn.client.failover-proxy-provider</name>
    <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
</property>
  6. Next, deploy, done

  7. A new application appears in the YARN UI; its ApplicationMaster link opens the Flink web UI

VI. Run a test job (on the Flink Master host, hdp03)

1. In the Flink configuration page in Ambari, add the parameter classloader.check-leaked-classloader: false and save.

2. Run the example program (on hdp03)

[root@hdp03 ~]# /opt/flink/bin/flink run \
-m yarn-cluster -yjm 1024 -ytm 1024 \
/opt/flink/examples/batch/WordCount.jar
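To confirm the job actually went through YARN, you can list applications from the same host (a sketch, assuming the yarn CLI is on the PATH):

```shell
# List Flink applications known to YARN, in any state; the WordCount run
# above should appear among them.
yarn application -list -appStates ALL | grep -i flink
```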

3. A new Flink job shows up as running in the YARN UI

4. Done

VII. Problems and solutions

1. Issue 1:

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 38, in <module>
    BeforeAnyHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 31, in hook
    setup_users()
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/shared_initialization.py", line 50, in setup_users
    groups = params.user_to_groups_dict[user],
KeyError: u'flink'
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-710.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-710.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
 stdout:
2021-08-30 23:22:11,769 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
2021-08-30 23:22:11,773 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2021-08-30 23:22:11,773 - Group['flink'] {}
2021-08-30 23:22:11,774 - Group['livy'] {}
2021-08-30 23:22:11,774 - Group['spark'] {}
2021-08-30 23:22:11,774 - Group['hdfs'] {}
2021-08-30 23:22:11,774 - Group['hadoop'] {}
2021-08-30 23:22:11,775 - Group['users'] {}
2021-08-30 23:22:11,775 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2021-08-30 23:22:11,775 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2021-08-30 23:22:11,776 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2021-08-30 23:22:11,776 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2021-08-30 23:22:11,777 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-710.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-710.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
2021-08-30 23:22:11,788 - The repository with version 3.1.5.0-152 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2021-08-30 23:22:11,791 - Skipping stack-select on FLINK because it does not exist in the stack-select package structure.

Command failed after 1 tries

Fix: the manually created flink user is missing from Ambari's user_to_groups_dict (hence the KeyError above), so tell Ambari to skip user/group creation:

cd /var/lib/ambari-server/resources/scripts
python configs.py -u admin -p admin -n hdpcluster -l hdp01 -t 8080 -a get -c cluster-env |grep -i ignore_groupsusers_create
python configs.py -u admin -p admin -n hdpcluster -l hdp01 -t 8080 -a set -c cluster-env -k ignore_groupsusers_create -v true

2. Issue 2:

Flink job submission fails with: NoClassDefFoundError: javax/ws/rs/ext/MessageBodyReader

Fix: download javax.ws.rs-api-2.0.jar (available from Maven Central) and put it in the lib directory under ${FLINK_HOME}.

3. Issue 3:

YARN fails to start the job with:

Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/user":hdfs:supergroup:drwxrwxr-x

Solution:

Change the permissions on /user:

hdfs dfs -chmod 777 /user

Follow-on error: the permission change itself is denied.

Solution:

sudo -u hdfs hadoop fs -chown root /user

A similar error can appear for the Hive warehouse path:

Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=autoai-nj, access=EXECUTE, inode="/warehouse/tablespace/managed/hive":hive:hadoop:drwx------

hadoop fs -ls /warehouse/tablespace/managed/hive

hdfs dfs -chmod 777 /warehouse
sudo -u hive hadoop fs -chown root  /warehouse
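chmod 777 works but opens the directory to everyone. A narrower alternative (a sketch, assuming hdfs superuser access and that jobs are submitted as the flink user) is to give the submitting user its own HDFS home directory instead:

```shell
# Create a per-user HDFS home directory rather than opening /user to all.
sudo -u hdfs hdfs dfs -mkdir -p /user/flink
sudo -u hdfs hdfs dfs -chown flink:flink /user/flink
```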

Original article: https://blog.csdn.net/weixin_42022134/article/details/105593273

