Compiling flink-1.11-SNAPSHOT against CDH 6.1.0

Build environment:

  • jdk-1.8
  • maven-3.2.5
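
Before starting a long build, it can help to verify the tool versions. As a small sketch (not part of the original walkthrough; `version_ge` is a hypothetical helper based on GNU `sort -V`):

```shell
# Hypothetical pre-flight check: compare dot-separated version strings
# with GNU `sort -V`; succeeds when $1 >= $2.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example checks against the versions used in this build:
version_ge "3.2.5" "3.2.5" && echo "maven ok"
version_ge "1.8.0_201" "1.8" && echo "jdk ok"
```

In practice the first argument would be extracted from the output of `mvn -version` and `java -version`.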

Clone the Flink source (the master branch, at version 1.11-SNAPSHOT at the time):

git clone --branch master https://github.com/apache/flink.git

Run the build (-T2C runs Maven in parallel with two threads per CPU core):

mvn -T2C clean install -DskipTests -Dfast -Pinclude-hadoop -Pvendor-repos -Dhadoop.version=3.0.0-cdh6.1.0 -Dscala-2.11

The build fails:

[ERROR] Failed to execute goal on project flink-hadoop-fs: Could not resolve dependencies for project org.apache.flink:flink-hadoop-fs:jar:1.11-SNAPSHOT: The following artifacts could not be resolved: org.apache.flink:flink-shaded-hadoop-2:jar:3.0.0-cdh6.1.0-9.0, org.apache.hadoop:hadoop-hdfs:jar:tests:3.0.0-cdh6.1.0, org.apache.hadoop:hadoop-common:jar:tests:3.0.0-cdh6.1.0: Could not find artifact org.apache.flink:flink-shaded-hadoop-2:jar:3.0.0-cdh6.1.0-9.0 in alimaven (http://maven.aliyun.com/nexus/content/groups/public/) -> [Help 1]

The build cannot resolve flink-shaded-hadoop-2 version 3.0.0-cdh6.1.0-9.0. Download the flink-shaded-9.0 source release (available from the Apache Flink download archive) and build it against the CDH Hadoop version.

CDH 6.1.0 ships Hadoop 3.0.0-cdh6.1.0.

Build it with:

mvn clean install -DskipTests -Dhadoop.version=3.0.0-cdh6.1.0

Add the cloudera maven repository to the flink-shaded-9.0/pom.xml file (the CDH artifacts are not available from the central or aliyun mirrors):

<!-- Add the CDH repository -->
<repositories>
	<repository>
		<id>cloudera</id>
		<url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
	</repository>
</repositories>
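
As an alternative sketch (not from the original walkthrough), the same repository can be declared once in ~/.m2/settings.xml so that both the flink-shaded and flink builds pick it up without editing either pom.xml; the profile id here is illustrative:

```xml
<!-- Sketch of a ~/.m2/settings.xml fragment; the profile id "cdh-repos" is illustrative -->
<settings>
	<profiles>
		<profile>
			<id>cdh-repos</id>
			<repositories>
				<repository>
					<id>cloudera</id>
					<url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
				</repository>
			</repositories>
		</profile>
	</profiles>
	<activeProfiles>
		<activeProfile>cdh-repos</activeProfile>
	</activeProfiles>
</settings>
```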

The build succeeds:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] flink-shaded ....................................... SUCCESS [  2.267 s]
[INFO] flink-shaded-force-shading ......................... SUCCESS [  0.647 s]
[INFO] flink-shaded-asm-7 ................................. SUCCESS [  0.928 s]
[INFO] flink-shaded-guava-18 .............................. SUCCESS [  2.467 s]
[INFO] flink-shaded-netty-4 ............................... SUCCESS [ 20.499 s]
[INFO] flink-shaded-netty-tcnative-dynamic ................ SUCCESS [  1.570 s]
[INFO] flink-shaded-jackson-parent ........................ SUCCESS [  0.073 s]
[INFO] flink-shaded-jackson-2 ............................. SUCCESS [  1.314 s]
[INFO] flink-shaded-jackson-module-jsonSchema-2 ........... SUCCESS [  0.917 s]
[INFO] flink-shaded-hadoop-2 .............................. SUCCESS [ 15.471 s]
[INFO] flink-shaded-hadoop-2-uber ......................... SUCCESS [ 17.707 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:04 min
[INFO] Finished at: 2020-01-15T15:49:45+08:00
[INFO] Final Memory: 52M/1423M
[INFO] ------------------------------------------------------------------------

Build Flink again; it fails with:

[ERROR] Failed to execute goal on project flink-hadoop-fs: Could not resolve dependencies for project org.apache.flink:flink-hadoop-fs:jar:1.11-SNAPSHOT: The following artifacts could not be resolved: org.apache.hadoop:hadoop-hdfs:jar:tests:3.0.0-cdh6.1.0, org.apache.hadoop:hadoop-common:jar:tests:3.0.0-cdh6.1.0: Failure to find org.apache.hadoop:hadoop-hdfs:jar:tests:3.0.0-cdh6.1.0 in http://maven.aliyun.com/nexus/content/groups/public/ was cached in the local repository, resolution will not be reattempted until the update interval of alimaven has elapsed or updates are forced -> [Help 1]

Add the cloudera repository to the top-level pom.xml of flink-1.11 as well:

<!-- Add the CDH repository -->
<repositories>
	<repository>
		<id>cloudera</id>
		<url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
	</repository>
</repositories>
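
Note the wording of the error: the failed lookup "was cached in the local repository, resolution will not be reattempted". Even with the repository added, Maven may keep using the cached negative result until it is cleared or updates are forced. A sketch of clearing those entries, assuming the default local repository under ~/.m2:

```shell
# Remove the cached (failed) lookups for the CDH test artifacts;
# paths assume the default local repository at ~/.m2/repository.
M2_REPO="${HOME}/.m2/repository"
rm -rf "${M2_REPO}/org/apache/hadoop/hadoop-hdfs/3.0.0-cdh6.1.0"
rm -rf "${M2_REPO}/org/apache/hadoop/hadoop-common/3.0.0-cdh6.1.0"
echo "cleared cached CDH lookups"
```

Then rerun the mvn command with -U appended to force remote re-checks.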

Build again; a new error appears:

[ERROR] /G:/Projects/IdeaProjects/flink-1.11/flink-master/flink-yarn/src/test/java/org/apache/flink/yarn/AbstractYarnClusterTest.java:[85,41] no suitable method found for newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,<nulltype>,org.apache.hadoop.yarn.api.records.YarnApplicationState,<nulltype>,<nulltype>,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,<nulltype>,<nulltype>,float,<nulltype>,<nulltype>)
[ERROR] method org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,org.apache.hadoop.yarn.api.records.Token,org.apache.hadoop.yarn.api.records.YarnApplicationState,java.lang.String,java.lang.String,long,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport,java.lang.String,float,java.lang.String,org.apache.hadoop.yarn.api.records.Token) is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,org.apache.hadoop.yarn.api.records.Token,org.apache.hadoop.yarn.api.records.YarnApplicationState,java.lang.String,java.lang.String,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport,java.lang.String,float,java.lang.String,org.apache.hadoop.yarn.api.records.Token,java.util.Set<java.lang.String>,boolean,org.apache.hadoop.yarn.api.records.Priority,java.lang.String,java.lang.String) is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,org.apache.hadoop.yarn.api.records.Token,org.apache.hadoop.yarn.api.records.YarnApplicationState,java.lang.String,java.lang.String,long,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport,java.lang.String,float,java.lang.String,org.apache.hadoop.yarn.api.records.Token,java.util.Set<java.lang.String>,boolean,org.apache.hadoop.yarn.api.records.Priority,java.lang.String,java.lang.String) is not applicable
[ERROR] (actual and formal argument lists differ in length)

The flink-yarn test code fails to compile: the overload of org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance that the test calls does not exist in this Hadoop version. Add the following plugin to the <build> section of the flink-yarn module's pom.xml to skip compiling this module's test code.

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.0</version>
        <configuration>
            <source>${java.version}</source>
            <target>${java.version}</target>
            <!-- Skip compiling the test code -->
            <skip>true</skip>
            <!-- The semantics of this option are reversed, see MCOMPILER-209. -->
            <useIncrementalCompilation>false</useIncrementalCompilation>
            <compilerArgs>
                 <!-- Prevents recompilation due to missing package-info.class, see MCOMPILER-205 -->
                <arg>-Xpkginfo:always</arg>
            </compilerArgs>
        </configuration>
    </plugin> 

Since only test code is skipped, this does not affect running Flink on a YARN cluster. (Alternatively, building with -Dmaven.test.skip=true instead of -DskipTests skips test compilation as well as test execution across all modules, without editing any pom.)

Reference: https://blog.csdn.net/github_39577257/article/details/97648316

Build again; another error:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile (default-testCompile) on project flink-yarn-tests: Compilation failure: Compilation failure:
[ERROR] /G:/Projects/IdeaProjects/flink-1.11/flink-master/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnPrioritySchedulingITCase.java:[28,36] cannot find symbol
[ERROR] symbol:   class YarnTestUtils
[ERROR] location: package org.apache.flink.yarn
[ERROR] /G:/Projects/IdeaProjects/flink-1.11/flink-master/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnPrioritySchedulingITCase.java:[28,1] static import only from classes and interfaces
[ERROR] /G:/Projects/IdeaProjects/flink-1.11/flink-master/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnTestBase.java:[308,69] cannot find symbol
[ERROR] symbol:   variable YarnTestUtils
[ERROR] location: class org.apache.flink.yarn.YarnTestBase
[ERROR] /G:/Projects/IdeaProjects/flink-1.11/flink-master/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnPrioritySchedulingITCase.java:[42,25] cannot find symbol
[ERROR] symbol:   method isHadoopVersionGreaterThanOrEquals(int,int)
[ERROR] location: class org.apache.flink.yarn.YarnPrioritySchedulingITCase
[ERROR] /G:/Projects/IdeaProjects/flink-1.11/flink-master/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnConfigurationITCase.java:[94,73] cannot find symbol
[ERROR] symbol:   variable YarnTestUtils
[ERROR] location: class org.apache.flink.yarn.YarnConfigurationITCase

Likewise, add the same plugin to the pom.xml of the flink-yarn-tests module:

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.8.0</version>
        <configuration>
            <source>${java.version}</source>
            <target>${java.version}</target>
            <!-- Skip compiling the test code -->
            <skip>true</skip>
            <!-- The semantics of this option are reversed, see MCOMPILER-209. -->
            <useIncrementalCompilation>false</useIncrementalCompilation>
            <compilerArgs>
                 <!-- Prevents recompilation due to missing package-info.class, see MCOMPILER-205 -->
                <arg>-Xpkginfo:always</arg>
            </compilerArgs>
        </configuration>
    </plugin>

The build succeeds!

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] force-shading ...................................... SUCCESS [  3.364 s]
[INFO] flink .............................................. SUCCESS [  2.476 s]
[INFO] flink-annotations .................................. SUCCESS [  5.092 s]
[INFO] flink-shaded-curator ............................... SUCCESS [  4.777 s]
[INFO] flink-test-utils-parent ............................ SUCCESS [  0.775 s]
[INFO] flink-test-utils-junit ............................. SUCCESS [  6.614 s]
[INFO] flink-metrics ...................................... SUCCESS [  0.959 s]
[INFO] flink-metrics-core ................................. SUCCESS [  2.883 s]
[INFO] flink-core ......................................... SUCCESS [ 39.715 s]
[INFO] flink-java ......................................... SUCCESS [ 15.704 s]
[INFO] flink-queryable-state .............................. SUCCESS [  0.622 s]
[INFO] flink-queryable-state-client-java .................. SUCCESS [  3.589 s]
[INFO] flink-filesystems .................................. SUCCESS [  0.577 s]
[INFO] flink-hadoop-fs .................................... SUCCESS [  6.644 s]
[INFO] flink-runtime ...................................... SUCCESS [01:43 min]
[INFO] flink-scala ........................................ SUCCESS [01:25 min]
[INFO] flink-mapr-fs ...................................... SUCCESS [  4.183 s]
[INFO] flink-filesystems :: flink-fs-hadoop-shaded ........ SUCCESS [  7.549 s]
[INFO] flink-s3-fs-base ................................... SUCCESS [  5.433 s]
[INFO] flink-s3-fs-hadoop ................................. SUCCESS [ 11.240 s]
[INFO] flink-s3-fs-presto ................................. SUCCESS [ 17.500 s]
[INFO] flink-swift-fs-hadoop .............................. SUCCESS [ 33.569 s]
[INFO] flink-oss-fs-hadoop ................................ SUCCESS [ 23.678 s]
[INFO] flink-azure-fs-hadoop .............................. SUCCESS [ 21.042 s]
[INFO] flink-optimizer .................................... SUCCESS [ 10.199 s]
[INFO] flink-clients ...................................... SUCCESS [  5.264 s]
[INFO] flink-streaming-java ............................... SUCCESS [ 43.541 s]
[INFO] flink-test-utils ................................... SUCCESS [  5.967 s]
[INFO] flink-runtime-web .................................. SUCCESS [04:13 min]
[INFO] flink-examples ..................................... SUCCESS [  0.804 s]
[INFO] flink-examples-batch ............................... SUCCESS [01:23 min]
[INFO] flink-connectors ................................... SUCCESS [  0.577 s]
[INFO] flink-hadoop-compatibility ......................... SUCCESS [ 27.781 s]
[INFO] flink-state-backends ............................... SUCCESS [  0.774 s]
[INFO] flink-statebackend-rocksdb ......................... SUCCESS [ 11.679 s]
[INFO] flink-tests ........................................ SUCCESS [01:17 min]
[INFO] flink-streaming-scala .............................. SUCCESS [01:18 min]
[INFO] flink-table ........................................ SUCCESS [  0.782 s]
[INFO] flink-table-common ................................. SUCCESS [ 15.568 s]
[INFO] flink-table-api-java ............................... SUCCESS [ 14.810 s]
[INFO] flink-table-api-java-bridge ........................ SUCCESS [  4.774 s]
[INFO] flink-table-api-scala .............................. SUCCESS [ 15.980 s]
[INFO] flink-table-api-scala-bridge ....................... SUCCESS [ 16.605 s]
[INFO] flink-sql-parser ................................... SUCCESS [ 15.369 s]
[INFO] flink-libraries .................................... SUCCESS [  0.525 s]
[INFO] flink-cep .......................................... SUCCESS [ 58.830 s]
[INFO] flink-table-planner ................................ SUCCESS [04:41 min]
[INFO] flink-jdbc ......................................... SUCCESS [ 12.326 s]
[INFO] flink-table-runtime-blink .......................... SUCCESS [ 44.744 s]
[INFO] flink-table-planner-blink .......................... SUCCESS [07:07 min]
[INFO] flink-hbase ........................................ SUCCESS [ 15.265 s]
[INFO] flink-hcatalog ..................................... SUCCESS [ 37.772 s]
[INFO] flink-metrics-jmx .................................. SUCCESS [  4.656 s]
[INFO] flink-formats ...................................... SUCCESS [  0.621 s]
[INFO] flink-json ......................................... SUCCESS [ 14.060 s]
[INFO] flink-connector-kafka-base ......................... SUCCESS [ 12.890 s]
[INFO] flink-connector-kafka-0.10 ......................... SUCCESS [  3.954 s]
[INFO] flink-connector-kafka-0.11 ......................... SUCCESS [  4.675 s]
[INFO] flink-connector-elasticsearch-base ................. SUCCESS [ 18.038 s]
[INFO] flink-connector-elasticsearch2 ..................... SUCCESS [ 39.863 s]
[INFO] flink-connector-elasticsearch5 ..................... SUCCESS [ 43.666 s]
[INFO] flink-connector-elasticsearch6 ..................... SUCCESS [ 19.390 s]
[INFO] flink-connector-elasticsearch7 ..................... SUCCESS [ 19.741 s]
[INFO] flink-orc .......................................... SUCCESS [ 12.435 s]
[INFO] flink-csv .......................................... SUCCESS [  4.265 s]
[INFO] flink-connector-hive ............................... SUCCESS [ 24.227 s]
[INFO] flink-connector-rabbitmq ........................... SUCCESS [  3.825 s]
[INFO] flink-connector-twitter ............................ SUCCESS [  5.428 s]
[INFO] flink-connector-nifi ............................... SUCCESS [  2.346 s]
[INFO] flink-connector-cassandra .......................... SUCCESS [ 29.089 s]
[INFO] flink-avro ......................................... SUCCESS [ 22.772 s]
[INFO] flink-connector-filesystem ......................... SUCCESS [ 13.748 s]
[INFO] flink-connector-kafka .............................. SUCCESS [  3.999 s]
[INFO] flink-connector-gcp-pubsub ......................... SUCCESS [  6.061 s]
[INFO] flink-connector-kinesis ............................ SUCCESS [ 26.639 s]
[INFO] flink-sql-connector-elasticsearch7 ................. SUCCESS [ 23.964 s]
[INFO] flink-sql-connector-elasticsearch6 ................. SUCCESS [ 21.835 s]
[INFO] flink-sql-connector-kafka-0.10 ..................... SUCCESS [  1.523 s]
[INFO] flink-sql-connector-kafka-0.11 ..................... SUCCESS [  2.891 s]
[INFO] flink-sql-connector-kafka .......................... SUCCESS [  2.267 s]
[INFO] flink-avro-confluent-registry ...................... SUCCESS [ 11.554 s]
[INFO] flink-parquet ...................................... SUCCESS [ 14.598 s]
[INFO] flink-sequence-file ................................ SUCCESS [  6.775 s]
[INFO] flink-compress ..................................... SUCCESS [  4.573 s]
[INFO] flink-examples-streaming ........................... SUCCESS [ 32.028 s]
[INFO] flink-examples-table ............................... SUCCESS [ 15.357 s]
[INFO] flink-examples-build-helper ........................ SUCCESS [  0.492 s]
[INFO] flink-examples-streaming-twitter ................... SUCCESS [  1.576 s]
[INFO] flink-examples-streaming-state-machine ............. SUCCESS [  1.231 s]
[INFO] flink-examples-streaming-gcp-pubsub ................ SUCCESS [  8.301 s]
[INFO] flink-container .................................... SUCCESS [  5.040 s]
[INFO] flink-queryable-state-runtime ...................... SUCCESS [  9.197 s]
[INFO] flink-end-to-end-tests ............................. SUCCESS [  0.782 s]
[INFO] flink-cli-test ..................................... SUCCESS [  2.876 s]
[INFO] flink-parent-child-classloading-test-program ....... SUCCESS [  3.194 s]
[INFO] flink-parent-child-classloading-test-lib-package ... SUCCESS [  5.423 s]
[INFO] flink-dataset-allround-test ........................ SUCCESS [  3.042 s]
[INFO] flink-dataset-fine-grained-recovery-test ........... SUCCESS [  3.037 s]
[INFO] flink-datastream-allround-test ..................... SUCCESS [  9.758 s]
[INFO] flink-batch-sql-test ............................... SUCCESS [  1.323 s]
[INFO] flink-stream-sql-test .............................. SUCCESS [  1.056 s]
[INFO] flink-bucketing-sink-test .......................... SUCCESS [  8.995 s]
[INFO] flink-distributed-cache-via-blob ................... SUCCESS [  2.875 s]
[INFO] flink-high-parallelism-iterations-test ............. SUCCESS [ 16.787 s]
[INFO] flink-stream-stateful-job-upgrade-test ............. SUCCESS [ 12.093 s]
[INFO] flink-queryable-state-test ......................... SUCCESS [ 10.283 s]
[INFO] flink-local-recovery-and-allocation-test ........... SUCCESS [  3.863 s]
[INFO] flink-elasticsearch2-test .......................... SUCCESS [  6.756 s]
[INFO] flink-elasticsearch5-test .......................... SUCCESS [  6.704 s]
[INFO] flink-elasticsearch6-test .......................... SUCCESS [ 15.619 s]
[INFO] flink-quickstart ................................... SUCCESS [  1.669 s]
[INFO] flink-quickstart-java .............................. SUCCESS [  4.340 s]
[INFO] flink-quickstart-scala ............................. SUCCESS [  4.757 s]
[INFO] flink-quickstart-test .............................. SUCCESS [  1.488 s]
[INFO] flink-confluent-schema-registry .................... SUCCESS [  4.117 s]
[INFO] flink-stream-state-ttl-test ........................ SUCCESS [ 20.106 s]
[INFO] flink-sql-client-test .............................. SUCCESS [  5.903 s]
[INFO] flink-streaming-file-sink-test ..................... SUCCESS [  2.908 s]
[INFO] flink-state-evolution-test ......................... SUCCESS [  7.972 s]
[INFO] flink-mesos ........................................ SUCCESS [01:39 min]
[INFO] flink-kubernetes ................................... SUCCESS [ 20.125 s]
[INFO] flink-yarn ......................................... SUCCESS [  6.253 s]
[INFO] flink-gelly ........................................ SUCCESS [ 42.014 s]
[INFO] flink-gelly-scala .................................. SUCCESS [ 36.521 s]
[INFO] flink-gelly-examples ............................... SUCCESS [ 27.928 s]
[INFO] flink-metrics-dropwizard ........................... SUCCESS [  2.265 s]
[INFO] flink-metrics-graphite ............................. SUCCESS [  0.839 s]
[INFO] flink-metrics-influxdb ............................. SUCCESS [  3.368 s]
[INFO] flink-metrics-prometheus ........................... SUCCESS [  2.525 s]
[INFO] flink-metrics-statsd ............................... SUCCESS [  1.714 s]
[INFO] flink-metrics-datadog .............................. SUCCESS [  1.720 s]
[INFO] flink-metrics-slf4j ................................ SUCCESS [  1.715 s]
[INFO] flink-cep-scala .................................... SUCCESS [ 19.332 s]
[INFO] flink-table-uber ................................... SUCCESS [ 14.607 s]
[INFO] flink-table-uber-blink ............................. SUCCESS [  7.316 s]
[INFO] flink-sql-client ................................... SUCCESS [ 12.390 s]
[INFO] flink-state-processor-api .......................... SUCCESS [  5.691 s]
[INFO] flink-python ....................................... SUCCESS [ 32.964 s]
[INFO] flink-scala-shell .................................. SUCCESS [ 43.815 s]
[INFO] flink-dist ......................................... SUCCESS [ 22.283 s]
[INFO] flink-end-to-end-tests-common ...................... SUCCESS [  1.452 s]
[INFO] flink-metrics-availability-test .................... SUCCESS [  0.710 s]
[INFO] flink-metrics-reporter-prometheus-test ............. SUCCESS [  0.706 s]
[INFO] flink-heavy-deployment-stress-test ................. SUCCESS [ 30.449 s]
[INFO] flink-connector-gcp-pubsub-emulator-tests .......... SUCCESS [  5.651 s]
[INFO] flink-streaming-kafka-test-base .................... SUCCESS [  2.944 s]
[INFO] flink-streaming-kafka-test ......................... SUCCESS [ 14.949 s]
[INFO] flink-streaming-kafka011-test ...................... SUCCESS [ 15.905 s]
[INFO] flink-streaming-kafka010-test ...................... SUCCESS [ 12.944 s]
[INFO] flink-plugins-test ................................. SUCCESS [  2.659 s]
[INFO] dummy-fs ........................................... SUCCESS [  1.475 s]
[INFO] another-dummy-fs ................................... SUCCESS [  1.371 s]
[INFO] flink-tpch-test .................................... SUCCESS [  9.452 s]
[INFO] flink-streaming-kinesis-test ....................... SUCCESS [ 29.559 s]
[INFO] flink-elasticsearch7-test .......................... SUCCESS [ 15.898 s]
[INFO] flink-end-to-end-tests-common-kafka ................ SUCCESS [  2.041 s]
[INFO] flink-tpcds-test ................................... SUCCESS [  8.540 s]
[INFO] flink-statebackend-heap-spillable .................. SUCCESS [  2.477 s]
[INFO] flink-contrib ...................................... SUCCESS [  0.504 s]
[INFO] flink-connector-wikiedits .......................... SUCCESS [  3.773 s]
[INFO] flink-yarn-tests ................................... SUCCESS [  5.609 s]
[INFO] flink-fs-tests ..................................... SUCCESS [  6.487 s]
[INFO] flink-docs ......................................... SUCCESS [  4.947 s]
[INFO] flink-ml-parent .................................... SUCCESS [  0.971 s]
[INFO] flink-ml-api ....................................... SUCCESS [  1.593 s]
[INFO] flink-ml-lib ....................................... SUCCESS [ 11.757 s]
[INFO] flink-walkthroughs ................................. SUCCESS [  1.885 s]
[INFO] flink-walkthrough-common ........................... SUCCESS [  8.696 s]
[INFO] flink-walkthrough-table-java ....................... SUCCESS [  4.048 s]
[INFO] flink-walkthrough-table-scala ...................... SUCCESS [  4.153 s]
[INFO] flink-walkthrough-datastream-java .................. SUCCESS [  4.061 s]
[INFO] flink-walkthrough-datastream-scala ................. SUCCESS [  5.361 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 19:33 min (Wall Clock)
[INFO] Finished at: 2020-01-15T17:03:40+08:00
[INFO] Final Memory: 584M/1829M
[INFO] ------------------------------------------------------------------------

Go to the flink-dist/target/flink-1.11-SNAPSHOT-bin directory and package the generated distribution:

tar -czvf flink-1.11-cdh-6.1.0.tar.gz flink-1.11-SNAPSHOT/
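
As a quick sanity check (not part of the original walkthrough), the same tar invocation can be exercised on a throwaway directory that mimics the flink-dist output, and the resulting archive verified with tar -tzf; the directory contents here are dummies:

```shell
# Demonstrate the packaging step on a scratch directory standing in for
# flink-dist/target/flink-1.11-SNAPSHOT-bin (contents are dummies).
work=$(mktemp -d)
mkdir -p "$work/flink-1.11-SNAPSHOT/bin"
echo '#!/bin/sh' > "$work/flink-1.11-SNAPSHOT/bin/flink"
tar -C "$work" -czvf "$work/flink-1.11-cdh-6.1.0.tar.gz" flink-1.11-SNAPSHOT/
# Verify that the archive contains the expected layout:
tar -tzf "$work/flink-1.11-cdh-6.1.0.tar.gz"
```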

Upload the archive to the server and use it.
