Compiling the Latest Flink 1.9 Release from Source (Complete Guide)


Step 1: Prepare the JDK environment

A note up front (verified by the aikfk team): the Flink docs require JDK 1.8 or later for both building and running Flink. We use 1.8.0_221 here, and we recommend 1.8.0_191 or newer. Versions below 1.8.0_191 will still compile Flink without problems, but you can hit issues when actually running Flink programs. After configuring the JDK, verify that your version checks out:

#JAVA_HOME
export JAVA_HOME=/opt/modules/jdk1.8.0_221
export PATH=$PATH:$JAVA_HOME/bin

[kfk@bigdata-pro-m03 ~]$ java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)
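For the version requirement above, the update number after the underscore is what matters. A minimal sketch of that comparison (the `jdk_update_ok` helper is my own, not part of any tool):

```shell
# Hypothetical helper: given a version string like "1.8.0_221",
# check that the JDK update number (after the "_") is at least 191.
jdk_update_ok() {
  local update="${1##*_}"   # strip everything up to the last "_"
  [ "$update" -ge 191 ]
}

# In practice, read the version from `java -version` (it prints to stderr):
#   version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')
jdk_update_ok "1.8.0_221" && echo "JDK update level OK"
```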

Step 2: Prepare the Maven environment

The Flink docs require Maven 3.x. I tested both Maven 3.2.5 and Maven 3.3.9, and both compile Flink without problems, so either version is a safe choice.
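That requirement boils down to a major-version check on `mvn -v`; a tiny sketch (the `maven_major` helper name is my own):

```shell
# Hypothetical helper: extract the major version from a string like "3.3.9".
maven_major() {
  echo "${1%%.*}"   # drop everything after the first "."
}

# In practice, pull the version out of `mvn -v`:
#   mvn_version=$(mvn -v | awk '/Apache Maven/ {print $3}')
[ "$(maven_major 3.3.9)" -ge 3 ] && echo "Maven version OK"
```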

Next, adjust Maven's conf/settings.xml:

1) Create a local repository directory named repository under the Maven installation root, and point the localRepository setting at it:

<localRepository>/opt/modules/apache-maven-3.3.9/repository</localRepository>

2) Add a mirror:

<mirror>
  <id>nexus-aliyun</id>
  <mirrorOf>*</mirrorOf>
  <name>Nexus aliyun</name>
  <url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>

3) Configure the environment variables:

#MAVEN_HOME
export MAVEN_HOME=/opt/modules/apache-maven-3.3.9
export PATH=$PATH:$MAVEN_HOME/bin
export MAVEN_OPTS="-Xmx4g -XX:MaxPermSize=1024M -XX:ReservedCodeCacheSize=1024m"

(Note: -XX:MaxPermSize is a no-op on JDK 8, which removed PermGen; it is harmless to leave in place.)

Step 3: Make sure your virtual machine can reach the internet, because the build downloads jar dependencies.

How to set that up is beyond the scope of this post.
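One quick way to sanity-check connectivity before starting the build (assuming curl is installed) is to probe the mirror you configured in settings.xml:

```shell
# Probe the Aliyun mirror; "000" means curl could not connect at all.
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
  http://maven.aliyun.com/nexus/content/groups/public/) || status=000

if [ "$status" = "000" ]; then
  echo "no outbound connectivity - fix networking before building"
else
  echo "mirror reachable (HTTP $status)"
fi
```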

Step 4: Download the Flink 1.9 source code

Download address: http://archive.apache.org/dist/flink/
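The archive follows a predictable layout, so you can build the source-tarball URL from the version number; a sketch (the file-name pattern is taken from the archive listing, so double-check it for your version):

```shell
# Construct the source-archive URL for a given Flink release.
FLINK_VERSION=1.9.0
SRC_TARBALL="flink-${FLINK_VERSION}-src.tgz"
SRC_URL="http://archive.apache.org/dist/flink/flink-${FLINK_VERSION}/${SRC_TARBALL}"

echo "$SRC_URL"
# Then download and unpack:
#   wget "$SRC_URL"
#   tar -zxf "$SRC_TARBALL"
```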

Step 5: Start the build

Extract the source tarball, change into the project root, and run the build command.

Note: we build against a specific Hadoop version. The Flink docs list the supported Hadoop versions as 2.4.1, 2.6.5, 2.7.5, and 2.8.3.

I tried both 2.7.5 and 2.8.3, and both built fine, apart from one small problem during the build that we will come back to shortly. Here we pick 2.8.3. Before running the build command, this is the environment:

JDK: 1.8.0_221
Maven: 3.3.9
Hadoop: 2.8.3
Flink source version: 1.9

Run the build command:

mvn clean install -DskipTests -Drat.skip=true -Dhadoop.version=2.8.3

Now be patient... with a good network connection it takes 30-odd minutes.

During this run I hit one problem: a missing jar. The build fails with the following error:

[ERROR] Failed to execute goal on project flink-avro-confluent-registry: Could not resolve dependencies for project org.apache.flink:flink-avro-confluent-registry:jar:1.9-SNAPSHOT: Failure to find io.confluent:kafka-schema-registry-client:jar:3.3.1 in http://maven.aliyun.com/nexus/content/groups/public was cached in the local repository, resolution will not be reattempted until the update interval of nexus-aliyun has elapsed or updates are forced -> [Help 1]

The fix is to download the jar ourselves and install it into the corresponding directory of the local repository. We can do that by running the following two commands in order:

wget http://packages.confluent.io/maven/io/confluent/kafka-schema-registry-client/3.3.1/kafka-schema-registry-client-3.3.1.jar
mvn install:install-file -DgroupId=io.confluent -DartifactId=kafka-schema-registry-client -Dversion=3.3.1 -Dpackaging=jar -Dfile=kafka-schema-registry-client-3.3.1.jar
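After install:install-file finishes, the jar should sit at the standard groupId/artifactId/version path inside the local repository; a quick check (the repository path assumes the settings.xml configuration from Step 2):

```shell
# Maven lays out the local repo as <repo>/<groupId dirs>/<artifactId>/<version>/.
REPO=/opt/modules/apache-maven-3.3.9/repository
JAR_PATH="$REPO/io/confluent/kafka-schema-registry-client/3.3.1/kafka-schema-registry-client-3.3.1.jar"

if [ -f "$JAR_PATH" ]; then
  echo "jar installed - rerun the build"
else
  echo "jar missing at $JAR_PATH"
fi
```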

With that done, rerun the build command:

mvn clean install -DskipTests -Drat.skip=true -Dhadoop.version=2.8.3

After a long wait...

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] force-shading ...................................... SUCCESS [  4.038 s]
[INFO] flink .............................................. SUCCESS [  6.015 s]
[INFO] flink-annotations .................................. SUCCESS [  3.200 s]
[INFO] flink-shaded-curator ............................... SUCCESS [  3.064 s]
[INFO] flink-metrics ...................................... SUCCESS [  0.278 s]
[INFO] flink-metrics-core ................................. SUCCESS [  4.034 s]
[INFO] flink-test-utils-parent ............................ SUCCESS [  0.321 s]
[INFO] flink-test-utils-junit ............................. SUCCESS [  1.626 s]
[INFO] flink-core ......................................... SUCCESS [01:07 min]
[INFO] flink-java ......................................... SUCCESS [ 11.649 s]
[INFO] flink-queryable-state .............................. SUCCESS [  0.229 s]
[INFO] flink-queryable-state-client-java .................. SUCCESS [  1.826 s]
[INFO] flink-filesystems .................................. SUCCESS [  0.207 s]
[INFO] flink-hadoop-fs .................................... SUCCESS [  3.669 s]
[INFO] flink-runtime ...................................... SUCCESS [02:41 min]
[INFO] flink-scala ........................................ SUCCESS [01:26 min]
[INFO] flink-mapr-fs ...................................... SUCCESS [  1.533 s]
[INFO] flink-filesystems :: flink-fs-hadoop-shaded ........ SUCCESS [  7.591 s]
[INFO] flink-s3-fs-base ................................... SUCCESS [ 10.563 s]
[INFO] flink-s3-fs-hadoop ................................. SUCCESS [ 13.045 s]
[INFO] flink-s3-fs-presto ................................. SUCCESS [ 19.744 s]
[INFO] flink-swift-fs-hadoop .............................. SUCCESS [ 22.910 s]
[INFO] flink-oss-fs-hadoop ................................ SUCCESS [  9.790 s]
[INFO] flink-azure-fs-hadoop .............................. SUCCESS [ 12.446 s]
[INFO] flink-optimizer .................................... SUCCESS [ 16.912 s]
[INFO] flink-clients ...................................... SUCCESS [  3.760 s]
[INFO] flink-streaming-java ............................... SUCCESS [ 20.746 s]
[INFO] flink-test-utils ................................... SUCCESS [  7.005 s]
[INFO] flink-runtime-web .................................. SUCCESS [05:43 min]
[INFO] flink-examples ..................................... SUCCESS [  2.326 s]
[INFO] flink-examples-batch ............................... SUCCESS [ 38.672 s]
[INFO] flink-connectors ................................... SUCCESS [  0.317 s]
[INFO] flink-hadoop-compatibility ......................... SUCCESS [ 11.759 s]
[INFO] flink-state-backends ............................... SUCCESS [  0.565 s]
[INFO] flink-statebackend-rocksdb ......................... SUCCESS [  3.076 s]
[INFO] flink-tests ........................................ SUCCESS [01:22 min]
[INFO] flink-streaming-scala .............................. SUCCESS [01:11 min]
[INFO] flink-table ........................................ SUCCESS [  0.307 s]
[INFO] flink-table-common ................................. SUCCESS [  4.323 s]
[INFO] flink-table-api-java ............................... SUCCESS [  3.365 s]
[INFO] flink-table-api-java-bridge ........................ SUCCESS [  1.694 s]
[INFO] flink-table-api-scala .............................. SUCCESS [ 12.969 s]
[INFO] flink-table-api-scala-bridge ....................... SUCCESS [ 18.920 s]
[INFO] flink-sql-parser ................................... SUCCESS [ 13.187 s]
[INFO] flink-libraries .................................... SUCCESS [  0.504 s]
[INFO] flink-cep .......................................... SUCCESS [  5.783 s]
[INFO] flink-table-planner ................................ SUCCESS [05:01 min]
[INFO] flink-orc .......................................... SUCCESS [  2.811 s]
[INFO] flink-jdbc ......................................... SUCCESS [  1.897 s]
[INFO] flink-table-runtime-blink .......................... SUCCESS [ 10.040 s]
[INFO] flink-table-planner-blink .......................... SUCCESS [06:58 min]
[INFO] flink-hbase ........................................ SUCCESS [ 12.104 s]
[INFO] flink-hcatalog ..................................... SUCCESS [ 13.873 s]
[INFO] flink-metrics-jmx .................................. SUCCESS [  1.262 s]
[INFO] flink-connector-kafka-base ......................... SUCCESS [  5.355 s]
[INFO] flink-connector-kafka-0.9 .......................... SUCCESS [  2.989 s]
[INFO] flink-connector-kafka-0.10 ......................... SUCCESS [  1.448 s]
[INFO] flink-connector-kafka-0.11 ......................... SUCCESS [  2.302 s]
[INFO] flink-formats ...................................... SUCCESS [  0.250 s]
[INFO] flink-json ......................................... SUCCESS [  1.188 s]
[INFO] flink-connector-elasticsearch-base ................. SUCCESS [  4.432 s]
[INFO] flink-connector-elasticsearch2 ..................... SUCCESS [ 14.290 s]
[INFO] flink-connector-elasticsearch5 ..................... SUCCESS [ 16.062 s]
[INFO] flink-connector-elasticsearch6 ..................... SUCCESS [  3.969 s]
[INFO] flink-csv .......................................... SUCCESS [  0.878 s]
[INFO] flink-connector-hive ............................... SUCCESS [  9.950 s]
[INFO] flink-connector-rabbitmq ........................... SUCCESS [  0.997 s]
[INFO] flink-connector-twitter ............................ SUCCESS [  3.031 s]
[INFO] flink-connector-nifi ............................... SUCCESS [  1.517 s]
[INFO] flink-connector-cassandra .......................... SUCCESS [  4.637 s]
[INFO] flink-avro ......................................... SUCCESS [  5.800 s]
[INFO] flink-connector-filesystem ......................... SUCCESS [  2.701 s]
[INFO] flink-connector-kafka .............................. SUCCESS [  3.245 s]
[INFO] flink-connector-gcp-pubsub ......................... SUCCESS [  3.096 s]
[INFO] flink-sql-connector-elasticsearch6 ................. SUCCESS [  9.616 s]
[INFO] flink-sql-connector-kafka-0.9 ...................... SUCCESS [  0.845 s]
[INFO] flink-sql-connector-kafka-0.10 ..................... SUCCESS [  0.964 s]
[INFO] flink-sql-connector-kafka-0.11 ..................... SUCCESS [  1.138 s]
[INFO] flink-sql-connector-kafka .......................... SUCCESS [  1.297 s]
[INFO] flink-connector-kafka-0.8 .......................... SUCCESS [  2.426 s]
[INFO] flink-avro-confluent-registry ...................... SUCCESS [  0.750 s]
[INFO] flink-parquet ...................................... SUCCESS [ 16.537 s]
[INFO] flink-sequence-file ................................ SUCCESS [  0.630 s]
[INFO] flink-examples-streaming ........................... SUCCESS [ 41.209 s]
[INFO] flink-examples-table ............................... SUCCESS [ 15.962 s]
[INFO] flink-examples-build-helper ........................ SUCCESS [  1.064 s]
[INFO] flink-examples-streaming-twitter ................... SUCCESS [  1.613 s]
[INFO] flink-examples-streaming-state-machine ............. SUCCESS [  0.930 s]
[INFO] flink-examples-streaming-gcp-pubsub ................ SUCCESS [  6.883 s]
[INFO] flink-container .................................... SUCCESS [  1.075 s]
[INFO] flink-queryable-state-runtime ...................... SUCCESS [  1.765 s]
[INFO] flink-end-to-end-tests ............................. SUCCESS [  0.208 s]
[INFO] flink-cli-test ..................................... SUCCESS [  0.482 s]
[INFO] flink-parent-child-classloading-test-program ....... SUCCESS [  0.599 s]
[INFO] flink-parent-child-classloading-test-lib-package ... SUCCESS [  0.268 s]
[INFO] flink-dataset-allround-test ........................ SUCCESS [  0.506 s]
[INFO] flink-dataset-fine-grained-recovery-test ........... SUCCESS [  0.466 s]
[INFO] flink-datastream-allround-test ..................... SUCCESS [  2.759 s]
[INFO] flink-batch-sql-test ............................... SUCCESS [  0.494 s]
[INFO] flink-stream-sql-test .............................. SUCCESS [  0.564 s]
[INFO] flink-bucketing-sink-test .......................... SUCCESS [  1.678 s]
[INFO] flink-distributed-cache-via-blob ................... SUCCESS [  0.398 s]
[INFO] flink-high-parallelism-iterations-test ............. SUCCESS [ 10.304 s]
[INFO] flink-stream-stateful-job-upgrade-test ............. SUCCESS [  1.389 s]
[INFO] flink-queryable-state-test ......................... SUCCESS [  2.175 s]
[INFO] flink-local-recovery-and-allocation-test ........... SUCCESS [  0.570 s]
[INFO] flink-elasticsearch2-test .......................... SUCCESS [  7.355 s]
[INFO] flink-elasticsearch5-test .......................... SUCCESS [ 10.760 s]
[INFO] flink-elasticsearch6-test .......................... SUCCESS [  5.561 s]
[INFO] flink-quickstart ................................... SUCCESS [  7.115 s]
[INFO] flink-quickstart-java .............................. SUCCESS [ 20.829 s]
[INFO] flink-quickstart-scala ............................. SUCCESS [  0.491 s]
[INFO] flink-quickstart-test .............................. SUCCESS [  1.197 s]
[INFO] flink-confluent-schema-registry .................... SUCCESS [  2.192 s]
[INFO] flink-stream-state-ttl-test ........................ SUCCESS [  5.204 s]
[INFO] flink-sql-client-test .............................. SUCCESS [  9.818 s]
[INFO] flink-streaming-file-sink-test ..................... SUCCESS [  0.918 s]
[INFO] flink-state-evolution-test ......................... SUCCESS [  1.482 s]
[INFO] flink-e2e-test-utils ............................... SUCCESS [ 17.095 s]
[INFO] flink-mesos ........................................ SUCCESS [01:28 min]
[INFO] flink-yarn ......................................... SUCCESS [  7.320 s]
[INFO] flink-gelly ........................................ SUCCESS [  7.463 s]
[INFO] flink-gelly-scala .................................. SUCCESS [01:03 min]
[INFO] flink-gelly-examples ............................... SUCCESS [ 33.002 s]
[INFO] flink-metrics-dropwizard ........................... SUCCESS [  2.134 s]
[INFO] flink-metrics-graphite ............................. SUCCESS [  0.860 s]
[INFO] flink-metrics-influxdb ............................. SUCCESS [ 16.888 s]
[INFO] flink-metrics-prometheus ........................... SUCCESS [  5.810 s]
[INFO] flink-metrics-statsd ............................... SUCCESS [  0.686 s]
[INFO] flink-metrics-datadog .............................. SUCCESS [  1.402 s]
[INFO] flink-metrics-slf4j ................................ SUCCESS [  0.296 s]
[INFO] flink-cep-scala .................................... SUCCESS [ 27.836 s]
[INFO] flink-table-uber ................................... SUCCESS [  5.151 s]
[INFO] flink-table-uber-blink ............................. SUCCESS [  4.506 s]
[INFO] flink-sql-client ................................... SUCCESS [ 16.856 s]
[INFO] flink-state-processor-api .......................... SUCCESS [  2.331 s]
[INFO] flink-python ....................................... SUCCESS [  6.404 s]
[INFO] flink-scala-shell .................................. SUCCESS [ 37.841 s]
[INFO] flink-dist ......................................... SUCCESS [01:12 min]
[INFO] flink-end-to-end-tests-common ...................... SUCCESS [  6.264 s]
[INFO] flink-metrics-availability-test .................... SUCCESS [  0.739 s]
[INFO] flink-metrics-reporter-prometheus-test ............. SUCCESS [  0.401 s]
[INFO] flink-heavy-deployment-stress-test ................. SUCCESS [ 12.483 s]
[INFO] flink-connector-gcp-pubsub-emulator-tests .......... SUCCESS [ 28.399 s]
[INFO] flink-streaming-kafka-test-base .................... SUCCESS [  1.118 s]
[INFO] flink-streaming-kafka-test ......................... SUCCESS [ 11.153 s]
[INFO] flink-streaming-kafka011-test ...................... SUCCESS [ 12.495 s]
[INFO] flink-streaming-kafka010-test ...................... SUCCESS [ 27.699 s]
[INFO] flink-plugins-test ................................. SUCCESS [  1.775 s]
[INFO] flink-tpch-test .................................... SUCCESS [  6.953 s]
[INFO] flink-contrib ...................................... SUCCESS [  0.998 s]
[INFO] flink-connector-wikiedits .......................... SUCCESS [  7.272 s]
[INFO] flink-yarn-tests ................................... SUCCESS [ 56.412 s]
[INFO] flink-fs-tests ..................................... SUCCESS [  2.654 s]
[INFO] flink-docs ......................................... SUCCESS [  4.547 s]
[INFO] flink-ml-parent .................................... SUCCESS [  0.596 s]
[INFO] flink-ml-api ....................................... SUCCESS [  1.041 s]
[INFO] flink-ml-lib ....................................... SUCCESS [  0.369 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 46:13 min
[INFO] Finished at: 2019-09-29T13:10:28-04:00
[INFO] Final Memory: 534M/1519M
[INFO] ------------------------------------------------------------------------

When you see this log, the build has succeeded! An output directory is generated under the project root, containing the compiled Flink 1.9.0 distribution built against Hadoop 2.8.3.

lrwxrwxrwx 1 kfk kfk 71 Sep 29 13:07 build-target -> /opt/flink339/flink-1.9.0/flink-dist/target/flink-1.9.0-bin/flink-1.9.0

In other words, this directory is exactly what we are after:

/opt/flink339/flink-1.9.0/flink-dist/target/flink-1.9.0-bin/flink-1.9.0