Flink on YARN deployment: the ZooKeeper version-matching problem

Platform environment

Flink is set up in on-YARN mode on an HDP big data platform. Component versions are listed below (note the ZooKeeper version; the trouble described later all stems from it):

Component     Version
HDFS          3.1.1
YARN
MapReduce2    3.1.1
Tez           0.9.1
Hive          3.1.0
ZooKeeper     3.4.6
Kafka         1.1.1
Spark2        2.3.1

Deployment

(Three major versions are installed here side by side to make testing easier.)
1 Download the release tarballs
wget https://www.apache.org/dyn/closer.lua/flink/flink-1.15.0/flink-1.15.0-bin-scala_2.12.tgz --no-check-certificate
wget https://www.apache.org/dyn/closer.lua/flink/flink-1.16.2/flink-1.16.2-bin-scala_2.12.tgz --no-check-certificate
wget https://www.apache.org/dyn/closer.lua/flink/flink-1.17.1/flink-1.17.1-bin-scala_2.12.tgz --no-check-certificate
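If the closer.lua links serve a mirror-selection page instead of the tarball itself, the Apache archive can be used as a fallback (same versions, direct URLs):
wget https://archive.apache.org/dist/flink/flink-1.15.0/flink-1.15.0-bin-scala_2.12.tgz
wget https://archive.apache.org/dist/flink/flink-1.16.2/flink-1.16.2-bin-scala_2.12.tgz
wget https://archive.apache.org/dist/flink/flink-1.17.1/flink-1.17.1-bin-scala_2.12.tgz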
2 Extract the binaries
tar zxvf flink-1.17.1-bin-scala_2.12.tgz -C ./
tar zxvf flink-1.16.2-bin-scala_2.12.tgz -C ./
tar zxvf flink-1.15.0-bin-scala_2.12.tgz -C ./
3 Compare the lib dependencies
Note: the listings below show the jars that ship with the vanilla releases.
(The problem revolves around flink-shaded-zookeeper-3.5.9.jar and flink-shaded-zookeeper-3.6.3.jar.)

[root@node104 ~]# ll software/flink-1.15.0/lib/
-rw-r--r-- 1 root root      58284 Sep  1 15:16 commons-cli-1.5.0.jar
-rw-r--r-- 1  502 games    194418 Apr 21  2022 flink-cep-1.15.0.jar
-rw-r--r-- 1  502 games    484660 Apr 21  2022 flink-connector-files-1.15.0.jar
-rw-r--r-- 1  502 games     95181 Apr 21  2022 flink-csv-1.15.0.jar
-rw-r--r-- 1  502 games 115817937 Apr 21  2022 flink-dist-1.15.0.jar
-rw------- 1 root root    8452519 Sep  1 15:17 flink-doris-connector-1.15-1.4.0.jar
-rw-r--r-- 1  502 games    175482 Apr 21  2022 flink-json-1.15.0.jar
-rw-r--r-- 1  502 games  21041640 Apr 21  2022 flink-scala_2.12-1.15.0.jar
-rw-r--r-- 1  502 games  10737871 Feb  7  2022 flink-shaded-zookeeper-3.5.9.jar
-rw-r--r-- 1  502 games  15262696 Apr 21  2022 flink-table-api-java-uber-1.15.0.jar
-rw-r--r-- 1  502 games  36246376 Apr 21  2022 flink-table-planner-loader-1.15.0.jar
-rw-r--r-- 1  502 games   2995493 Apr 21  2022 flink-table-runtime-1.15.0.jar
-rw-r--r-- 1  502 games    208006 Dec 31  2021 log4j-1.2-api-2.17.1.jar
-rw-r--r-- 1  502 games    301872 Dec 31  2021 log4j-api-2.17.1.jar
-rw-r--r-- 1  502 games   1790452 Dec 31  2021 log4j-core-2.17.1.jar
-rw-r--r-- 1  502 games     24279 Dec 31  2021 log4j-slf4j-impl-2.17.1.jar
[root@node104 ~]# ll software/flink-1.15.0/opt/
-rw-r--r-- 1 502 games 20134536 Apr 21  2022 flink-azure-fs-hadoop-1.15.0.jar
-rw-r--r-- 1 502 games    48492 Apr 21  2022 flink-cep-scala_2.12-1.15.0.jar
-rw-r--r-- 1 502 games   639867 Apr 21  2022 flink-gelly-1.15.0.jar
-rw-r--r-- 1 502 games   732617 Apr 21  2022 flink-gelly-scala_2.12-1.15.0.jar
-rw-r--r-- 1 502 games 39088469 Apr 21  2022 flink-gs-fs-hadoop-1.15.0.jar
-rw-r--r-- 1 502 games 18261843 Apr 21  2022 flink-oss-fs-hadoop-1.15.0.jar
-rw-r--r-- 1 502 games 38798357 Apr 21  2022 flink-python_2.12-1.15.0.jar
-rw-r--r-- 1 502 games    20402 Apr 21  2022 flink-queryable-state-runtime-1.15.0.jar
-rw-r--r-- 1 502 games 22459205 Apr 21  2022 flink-s3-fs-hadoop-1.15.0.jar
-rw-r--r-- 1 502 games 88102603 Apr 21  2022 flink-s3-fs-presto-1.15.0.jar
-rw-r--r-- 1 502 games   231708 Feb  7  2022 flink-shaded-netty-tcnative-dynamic-2.0.44.Final-15.0.jar
-rw-r--r-- 1 502 games 11169011 Mar 24  2022 flink-shaded-zookeeper-3.6.3.jar
-rw-r--r-- 1 502 games   483395 Apr 21  2022 flink-sql-client-1.15.0.jar
-rw-r--r-- 1 502 games   185909 Apr 21  2022 flink-state-processor-api-1.15.0.jar
-rw-r--r-- 1 502 games 19045100 Apr 21  2022 flink-table-planner_2.12-1.15.0.jar
drwxr-xr-x 2 502 games     4096 Sep  1 15:13 python
[root@node104 ~]# ll software/flink-1.16.2/lib/
-rw-r--r-- 1 root root      58284 Sep  1 14:28 commons-cli-1.5.0.jar
-rw-r--r-- 1  501 games    198819 May 18 13:50 flink-cep-1.16.2.jar
-rw-r--r-- 1  501 games    516146 May 18 13:52 flink-connector-files-1.16.2.jar
-rw-r--r-- 1  501 games    102473 May 18 13:55 flink-csv-1.16.2.jar
-rw-r--r-- 1  501 games 117113058 May 18 14:07 flink-dist-1.16.2.jar
-rw------- 1 root root    8452528 Aug 30 22:05 flink-doris-connector-1.16-1.4.0.jar
-rw-r--r-- 1  501 games    180246 May 18 13:55 flink-json-1.16.2.jar
-rw-r--r-- 1  501 games  21052641 May 18 14:04 flink-scala_2.12-1.16.2.jar
-rw-r--r-- 1  501 games  10737871 May 17 18:19 flink-shaded-zookeeper-3.5.9.jar
-rw-r--r-- 1 root root   22096298 Sep  1 14:29 flink-sql-connector-mysql-cdc-2.2.1.jar
-rw-r--r-- 1  501 games  15365909 May 18 14:04 flink-table-api-java-uber-1.16.2.jar
-rw-r--r-- 1  501 games  36252890 May 18 13:57 flink-table-planner-loader-1.16.2.jar
-rw-r--r-- 1  501 games   3151160 May 18 13:50 flink-table-runtime-1.16.2.jar
-rw-r--r-- 1  501 games    208006 May 17 18:07 log4j-1.2-api-2.17.1.jar
-rw-r--r-- 1  501 games    301872 May 17 18:07 log4j-api-2.17.1.jar
-rw-r--r-- 1  501 games   1790452 May 17 18:07 log4j-core-2.17.1.jar
-rw-r--r-- 1  501 games     24279 May 17 18:07 log4j-slf4j-impl-2.17.1.jar
[root@node104 ~]# ll software/flink-1.16.2/opt/
-rw-r--r-- 1 501 games 27781883 May 18 14:01 flink-azure-fs-hadoop-1.16.2.jar
-rw-r--r-- 1 501 games    48466 May 18 14:06 flink-cep-scala_2.12-1.16.2.jar
-rw-r--r-- 1 501 games   639869 May 18 14:05 flink-gelly-1.16.2.jar
-rw-r--r-- 1 501 games   732582 May 18 14:05 flink-gelly-scala_2.12-1.16.2.jar
-rw-r--r-- 1 501 games 46545623 May 18 14:01 flink-gs-fs-hadoop-1.16.2.jar
-rw-r--r-- 1 501 games 26084616 May 18 14:00 flink-oss-fs-hadoop-1.16.2.jar
-rw-r--r-- 1 501 games 40352086 May 18 13:59 flink-python-1.16.2.jar
-rw-r--r-- 1 501 games    20403 May 18 14:04 flink-queryable-state-runtime-1.16.2.jar
-rw-r--r-- 1 501 games 30515842 May 18 14:00 flink-s3-fs-hadoop-1.16.2.jar
-rw-r--r-- 1 501 games 96171268 May 18 14:00 flink-s3-fs-presto-1.16.2.jar
-rw-r--r-- 1 501 games   231708 May 17 18:19 flink-shaded-netty-tcnative-dynamic-2.0.44.Final-15.0.jar
-rw-r--r-- 1 501 games 11169011 May 17 20:38 flink-shaded-zookeeper-3.6.3.jar
-rw-r--r-- 1 501 games   541003 May 18 13:59 flink-sql-client-1.16.2.jar
-rw-r--r-- 1 501 games   170147 May 18 13:56 flink-sql-gateway-1.16.2.jar
-rw-r--r-- 1 501 games   186221 May 18 14:06 flink-state-processor-api-1.16.2.jar
-rw-r--r-- 1 501 games 19058856 May 18 13:55 flink-table-planner_2.12-1.16.2.jar
drwxr-xr-x 2 501 games     4096 May 18 13:59 python
[root@node104 ~]# ll software/flink-1.17.1/lib/
-rw-r--r-- 1 root root      58284 Aug 30 18:15 commons-cli-1.5.0.jar
-rw-r--r-- 1  501 games    196491 May 19 18:56 flink-cep-1.17.1.jar
-rw-r--r-- 1  501 games    542620 May 19 18:59 flink-connector-files-1.17.1.jar
-rw-r--r-- 1  501 games    102472 May 19 19:02 flink-csv-1.17.1.jar
-rw-r--r-- 1  501 games 135975541 May 19 19:13 flink-dist-1.17.1.jar
-rw------- 1 root root    8452171 Aug 30 18:14 flink-doris-connector-1.17-1.4.0.jar
-rw-r--r-- 1  501 games    180248 May 19 19:02 flink-json-1.17.1.jar
-rw-r--r-- 1  501 games  21043319 May 19 19:12 flink-scala_2.12-1.17.1.jar
-rw-r--r-- 1 root root    3704559 Aug 30 18:28 flink-sql-connector-kafka_2.12-1.14.4.jar
-rw-r--r-- 1 root root   22096298 Aug 30 18:10 flink-sql-connector-mysql-cdc-2.2.1.jar
-rw-r--r-- 1 root root   39635530 Aug 30 18:29 flink-table_2.12-1.14.4.jar
-rw-r--r-- 1  501 games  15407424 May 19 19:13 flink-table-api-java-uber-1.17.1.jar
-rw-r--r-- 1  501 games  38191226 May 19 19:08 flink-table-planner-loader-1.17.1.jar
-rw-r--r-- 1  501 games   3146210 May 19 18:56 flink-table-runtime-1.17.1.jar
-rw-r--r-- 1  501 games    208006 May 17 18:07 log4j-1.2-api-2.17.1.jar
-rw-r--r-- 1  501 games    301872 May 17 18:07 log4j-api-2.17.1.jar
-rw-r--r-- 1  501 games   1790452 May 17 18:07 log4j-core-2.17.1.jar
-rw-r--r-- 1  501 games     24279 May 17 18:07 log4j-slf4j-impl-2.17.1.jar
[root@node104 ~]# ll software/flink-1.17.1/opt/
-rw-r--r-- 1 501 games 26814183 May 19 19:10 flink-azure-fs-hadoop-1.17.1.jar
-rw-r--r-- 1 501 games    48465 May 19 19:13 flink-cep-scala_2.12-1.17.1.jar
-rw-r--r-- 1 501 games 45936642 May 19 19:10 flink-gs-fs-hadoop-1.17.1.jar
-rw-r--r-- 1 501 games 25602806 May 19 19:10 flink-oss-fs-hadoop-1.17.1.jar
-rw-r--r-- 1 501 games 32999218 May 19 19:09 flink-python-1.17.1.jar
-rw-r--r-- 1 501 games    20402 May 19 19:12 flink-queryable-state-runtime-1.17.1.jar
-rw-r--r-- 1 501 games 30939417 May 19 19:09 flink-s3-fs-hadoop-1.17.1.jar
-rw-r--r-- 1 501 games 96610877 May 19 19:09 flink-s3-fs-presto-1.17.1.jar
-rw-r--r-- 1 501 games   233709 May 18 11:25 flink-shaded-netty-tcnative-dynamic-2.0.54.Final-16.1.jar
-rw-r--r-- 1 501 games   952720 May 19 19:09 flink-sql-client-1.17.1.jar
-rw-r--r-- 1 501 games   209911 May 19 19:02 flink-sql-gateway-1.17.1.jar
-rw-r--r-- 1 501 games   191820 May 19 19:13 flink-state-processor-api-1.17.1.jar
-rw-r--r-- 1 501 games 21331589 May 19 19:02 flink-table-planner_2.12-1.17.1.jar
drwxr-xr-x 2 501 games     4096 May 19 19:08 python
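A quick way to compare the shaded ZooKeeper clients across the three unpacked releases (a sketch; paths assume the tarballs were extracted under ~/software as in the listings above):
for v in 1.15.0 1.16.2 1.17.1; do
  echo "== flink-$v =="
  find ~/software/flink-$v/lib ~/software/flink-$v/opt -name 'flink-shaded-zookeeper*'
done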

In addition, copy flink-shaded-hadoop-3-uber-3.1.1.7.1.1.0-565-9.0.jar and commons-cli-1.5.0.jar into lib/ as well.
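For reference, a sketch of that copy step (the source directory /opt/jars is an assumption; use wherever those two jars actually live on your node):
cp /opt/jars/flink-shaded-hadoop-3-uber-3.1.1.7.1.1.0-565-9.0.jar ~/software/flink-1.15.0/lib/
cp /opt/jars/commons-cli-1.5.0.jar ~/software/flink-1.15.0/lib/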

4 Adjust the on-YARN configuration
vim flink-conf.yaml

taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
jobmanager.execution.failover-strategy: region
rest.port: 8087
classloader.resolve-order: parent-first
yarn.application-attempts: 3
# HA mode
high-availability: zookeeper
# JobManager metadata is kept on the file system under storageDir; only a pointer to this state is stored in ZooKeeper
high-availability.storageDir: hdfs:///flink-yarn-ha/
# ZooKeeper quorum; change this to your own cluster
high-availability.zookeeper.quorum: node117.data:2181,node118.data:2181,node119.data:2181,node173.data:2181,node174.data:2181
# Root path for Flink under ZooKeeper
high-availability.zookeeper.path.root: /flink-yarn
classloader.check-leaked-classloader: false
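Before submitting anything it is worth confirming that the quorum hosts in the config above are reachable; a minimal check with the standard ZooKeeper four-letter command (assumes nc is installed):

for zk in node117.data node118.data node119.data node173.data node174.data; do
  echo -n "$zk -> "; echo ruok | nc -w 2 $zk 2181; echo
done

Each healthy ZooKeeper server should answer imok.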

5 Distribute the extracted package

scp -r flink-1.15.0  root@node117.data:/
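If more than one host needs the package, a simple loop does it (a sketch; the host list is an assumption, substitute your own nodes):
for host in node117.data node118.data node119.data; do
  scp -r flink-1.15.0 root@$host:/
done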

6 Configure environment variables

vim /etc/profile
export FLINK15_HOME=/flink-1.15.0
export PATH=$PATH:$FLINK15_HOME/bin
source  /etc/profile
echo  $FLINK15_HOME

7 Launch a test job

./bin/flink run -d -t yarn-per-job $FLINK15_HOME/examples/streaming/WordCount.jar
./bin/flink run  $FLINK15_HOME/examples/streaming/WordCount.jar
./bin/yarn-session.sh -n 2 -jm 900 -tm 900
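To see what actually landed on YARN, the standard YARN CLI is enough (the application id is whatever YARN assigned to the submission):
yarn application -list | grep -i flink
yarn logs -applicationId <application_id> | less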

8 The error
Surface exception:

The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Could not deploy Yarn job cluster.
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
        at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:836)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:247)
        at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1078)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1156)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1156)
Caused by: org.apache.flink.client.deployment.ClusterDeploymentException: Could not deploy Yarn job cluster.
        at org.apache.flink.yarn.YarnClusterDescriptor.deployJobCluster(YarnClusterDescriptor.java:491)
        at org.apache.flink.client.deployment.executors.AbstractJobClusterExecutor.execute(AbstractJobClusterExecutor.java:82)
        at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2095)
        at org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:188)
        at org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:119)
        at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1969)
        at org.apache.flink.streaming.examples.wordcount.WordCount.main(WordCount.java:159)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
        ... 11 more
Caused by: org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment. 

Checking the YARN logs: yarn logs -applicationId application_1693449577140_0907

2023-09-01 16:29:16,253 INFO  org.apache.flink.runtime.blob.FileSystemBlobStore            [] - Creating highly available BLOB storage directory at hdfs:/flink-yarn-ha/application_1693449577140_0907/blob
2023-09-01 16:29:16,314 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] - Shutting YarnJobClusterEntrypoint down with application status FAILED. Diagnostics java.lang.NoClassDefFoundError: org/apache/flink/shaded/curator5/org/apache/curator/framework/api/ACLProvider
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createZooKeeperHaServices(HighAvailabilityServicesUtils.java:90)
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:140)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:427)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:376)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:277)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:227)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:224)
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:711)
        at org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint.main(YarnJobClusterEntrypoint.java:109)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.shaded.curator5.org.apache.curator.framework.api.ACLProvider
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 13 more
.
2023-09-01 16:29:16,319 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService             [] - Stopping Akka RPC service.
2023-09-01 16:29:16,349 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator        [] - Shutting down remote daemon.
2023-09-01 16:29:16,350 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator        [] - Remote daemon shut down; proceeding with flushing remote transports.
2023-09-01 16:29:16,367 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator        [] - Remoting shut down.
2023-09-01 16:29:16,387 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService             [] - Stopped Akka RPC service.
2023-09-01 16:29:16,388 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] - Could not start cluster entrypoint YarnJobClusterEntrypoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint YarnJobClusterEntrypoint.
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:250) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:711) [flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint.main(YarnJobClusterEntrypoint.java:109) [flink-dist-1.15.0.jar:1.15.0]
Caused by: java.lang.NoClassDefFoundError: org/apache/flink/shaded/curator5/org/apache/curator/framework/api/ACLProvider
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createZooKeeperHaServices(HighAvailabilityServicesUtils.java:90) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:140) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:427) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:376) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:277) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:227) ~[flink-dist-1.15.0.jar:1.15.0]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_151]
        at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_151]
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.0.1.0-187.jar:?]
        at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:224) ~[flink-dist-1.15.0.jar:1.15.0]
        ... 2 more
Caused by: java.lang.ClassNotFoundException: org.apache.flink.shaded.curator5.org.apache.curator.framework.api.ACLProvider
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:1.8.0_151]
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_151]
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) ~[?:1.8.0_151]
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_151]
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createZooKeeperHaServices(HighAvailabilityServicesUtils.java:90) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:140) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:427) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:376) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:277) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:227) ~[flink-dist-1.15.0.jar:1.15.0]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_151]
        at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_151]
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.0.1.0-187.jar:?]
        at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) ~[flink-dist-1.15.0.jar:1.15.0]
        at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:224) ~[flink-dist-1.15.0.jar:1.15.0]
        ... 2 more

Actual cause: the class org.apache.flink.shaded.curator5.org.apache.curator.framework.api.ACLProvider cannot be loaded. It is a ZooKeeper (shaded Curator) dependency problem.
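A quick way to check which relocated Curator package a given shaded ZooKeeper jar provides (a sketch; the jar path follows the vanilla 1.15.0 layout shown above):

unzip -l ~/software/flink-1.15.0/lib/flink-shaded-zookeeper-3.5.9.jar | grep 'framework/api/ACLProvider'

The vanilla 3.5.9/3.6.3 jars relocate Curator under org/apache/flink/shaded/curator5/, which is the package the ClassNotFoundException refers to; a shaded ZooKeeper jar carried over from an older Flink (built for ZooKeeper 3.4) relocates Curator under a different package, which is presumably how the class went missing here.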

Asked CSDN's AI assistant about it (screenshot omitted).

Checked the official documentation
(https://nightlies.apache.org/flink/flink-docs-release-1.15/zh/docs/deployment/ha/zookeeper_ha/)
The docs state that Flink 1.15 and 1.16 still support ZooKeeper 3.4, but in practice they do not (screenshot omitted).
It appears the documentation simply has not been updated for this change.

Conclusion

Stick to the shaded ZooKeeper jars that ship in the vanilla lib/ or opt/ directories. Flink 1.15+ only supports ZooKeeper 3.5/3.6 and no longer supports 3.4.

Flink 1.15+ cannot be deployed on YARN in HA mode against ZooKeeper 3.4.6. With exactly the same deployment procedure, Flink 1.14.4 runs on YARN without any issue.
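For clusters that actually run ZooKeeper 3.6, the shipped 3.6 client can be swapped in from opt/ (a sketch based on the vanilla 1.15.0 layout listed above):

cd ~/software/flink-1.15.0
mv lib/flink-shaded-zookeeper-3.5.9.jar .      # set the 3.5 client aside
cp opt/flink-shaded-zookeeper-3.6.3.jar lib/   # use the 3.6 client instead

For a 3.4.6 quorum like this one, the conclusion above applies: either upgrade ZooKeeper or stay on Flink 1.14.x.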

Addendum

Q1: Why upgrade from Flink 1.14.4 to 1.15+?
A1: Doris's multi-table / whole-database ingestion via Flink CDC is only supported on Flink 1.15+. (In the doris-connector, all tables of one database can share a single CDC source, and side outputs split the stream so that the different tables are synced separately.)

Example syntax for multi-table / whole-database ingestion with Flink CDC:
<FLINK_HOME>/bin/flink run \
    -Dexecution.checkpointing.interval=10s \
    -Dparallelism.default=1 \
    -c org.apache.doris.flink.tools.cdc.CdcTools \
    lib/flink-doris-connector-1.16-1.4.0-SNAPSHOT.jar \
    mysql-sync-database \
    --database test_db \
    --mysql-conf hostname=127.0.0.1 \
    --mysql-conf username=root \
    --mysql-conf password=123456 \
    --mysql-conf database-name=mysql_db \
    --including-tables "tbl1|test.*" \
    --sink-conf fenodes=127.0.0.1:8030 \
    --sink-conf username=root \
    --sink-conf password=123456 \
    --sink-conf jdbc-url=jdbc:mysql://127.0.0.1:9030 \
    --sink-conf sink.label-prefix=label \
    --table-conf replication_num=1 

Q2: How do you submit multiple SQL statements as one job with Flink CDC?
A2: Use a STATEMENT SET in the SQL client:

Flink SQL> BEGIN STATEMENT SET;
[INFO] Begin a statement set.

Flink SQL> INSERT INTO ods_cou_course select * from cou_course;
[INFO] Add SQL update statement to the statement set.

Flink SQL> INSERT INTO ods_sys_course select * from sys_course;
[INFO] Add SQL update statement to the statement set.

Flink SQL> END;
[INFO] Submitting SQL update statement to the cluster...
[INFO] SQL update statement has been successfully submitted to the cluster:
Job ID: beef7842fbab826c777380036860b787