Supersonic start: pitfalls and traps of modernizing an enterprise application with "supersonic subatomic Java"

This is a post about the first steps and first pitfalls of modernizing an old enterprise application with quarkus, the "supersonic subatomic Java" framework, as RedHat positions it.

Initial setup

At the end of 2019 I was invited to work on a project in our company where an old monolithic application was to be split into micro-services. The basic reasoning behind this decision was that the framework used in the application is near its end-of-life, so the application had to be rewritten in any case. And if it has to be rewritten anyway, why not split it into micro-services?

For the last 10 years I have been working mostly with Java, and we had specialists with Java knowledge on the project, so we decided to give Java-based frameworks a try for the back-end functionality. Our first thought was to use spring-cloud for that purpose. But then we had a look at quarkus, which had been released at the end of 2019. We decided to give it a try, keeping in mind the option of building native applications with GraalVM.

From our perspective, native applications could give us the following benefits:

  • shorter container start time
  • reduced resource consumption of the container and the application

We were aware of the possible drawbacks of this solution:

  • no experience with the quarkus framework in our team
  • significantly less feedback available from the community, since this is a very young framework

First success with a "hello world" application

To start with something, we decided to write a prototype of a very simple CRUD REST micro-service. We took the hibernate-panache-quickstart starter, modified it for our simple entity and ported it from maven to gradle. So we just followed the guides from the official documentation, and the first toy application was ready.
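
A minimal sketch of what the Gradle side of such a ported quickstart could look like; the concrete extensions and versions depend on the quickstart you start from, and the set below is only an assumption based on what the hibernate-panache quickstart typically uses (Hibernate ORM with Panache, RESTEasy and a JDBC driver):

plugins {
    id 'java'
    id 'io.quarkus' version '1.1.0.Final'
}

repositories {
    mavenCentral()
}

dependencies {
    // align all quarkus extensions to a single platform version
    implementation enforcedPlatform('io.quarkus:quarkus-bom:1.1.0.Final')

    // assumed extension set of the quickstart
    implementation 'io.quarkus:quarkus-hibernate-orm-panache'
    implementation 'io.quarkus:quarkus-resteasy-jsonb'
    implementation 'io.quarkus:quarkus-jdbc-postgresql'

    testImplementation 'io.quarkus:quarkus-junit5'
    testImplementation 'io.rest-assured:rest-assured'
}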

The first run of the quarkusDev gradle task (./gradlew quarkusDev) was very impressive.

First of all, the application started really fast. Second, it activates the so-called live-coding mode, in which quarkus tracks changes to your source code and automatically triggers recompilation (and restart) of the application. The third nice thing was the presence of the swagger-ui functionality in the development profile.

First problems with native compilation

Let's go native, we thought, and opened the native compilation guide. To do so, we need GraalVM installed. We took the corresponding docker image for GraalVM 19.2.1, installed gradle in it, put our sources inside, and started the build with the buildNative gradle task.

The first surprise we discovered was the necessity to downgrade the target java version of our application from 11 to 8 (as of 1.1.0.Final). Since we had not used and did not plan to use any java 11-specific features, this change was rather harmless for us. The second surprise was the rather slow build process: native compilation took about 10 times longer than the jvm one. The third surprise was the substantial memory consumption of native compilation (we were forced to increase the memory available to the docker build container to 8Gb).
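
The downgrade itself is a one-line change in the Gradle build; a small sketch, assuming the standard java plugin conventions are used:

// target java 8 bytecode so that the native-image tool of GraalVM 19.2.1 can process the classes
sourceCompatibility = JavaVersion.VERSION_1_8
targetCompatibility = JavaVersion.VERSION_1_8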

Finally the native application was built. We started it for test purposes from the same container where we had built it, and the start time was reduced amazingly.

A comparison of the CPU and memory consumption of the container in the kubernetes cluster can be found below.

Problems with measuring test coverage

We found the second pitfall when we tried to build a corresponding pipeline for our toy project and measure unit test coverage. Rather quickly we discovered that the unit test coverage report from the JaCoCo gradle plugin does not reflect the real picture. A closer look at the warning messages in the console and the corresponding quarkus guide revealed the source of our problems. According to the guide, we need to get rid of jacoco online instrumentation and use offline instrumentation instead. While this seems to be a trivial task for maven, the gradle jacoco plugin doesn't support this functionality (see the stackoverflow discussion). Luckily, the same discussion and the guides describe how this problem can be resolved by using the corresponding ant tasks; a sketch is shown below. For a detailed description of those tasks one can refer to the JaCoCo documentation.
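
A hedged Gradle sketch of such offline instrumentation with the JaCoCo ant tasks; the versions, paths and task names below are assumptions, and the wiring of the test classpath may need adjustment for a concrete project:

configurations {
    jacocoAnt
    jacocoRuntime
}

dependencies {
    jacocoAnt 'org.jacoco:org.jacoco.ant:0.8.5'
    jacocoRuntime 'org.jacoco:org.jacoco.agent:0.8.5:runtime'
}

// instrument the compiled classes offline instead of attaching the jacoco java agent at runtime
task instrument(dependsOn: classes) {
    doLast {
        ant.taskdef(name: 'instrument',
                    classname: 'org.jacoco.ant.InstrumentTask',
                    classpath: configurations.jacocoAnt.asPath)
        ant.instrument(destdir: "$buildDir/classes-instrumented") {
            fileset(dir: "$buildDir/classes/java/main")
        }
    }
}

// run the tests against the instrumented classes; the offline runtime writes the
// execution data to the file given via the jacoco-agent.destfile system property
test {
    dependsOn instrument
    classpath = files("$buildDir/classes-instrumented") + classpath + configurations.jacocoRuntime
    systemProperty 'jacoco-agent.destfile', "$buildDir/jacoco/test.exec"
}

// build the html report from the offline execution data with the jacoco ant report task
task coverageReport(dependsOn: test) {
    doLast {
        ant.taskdef(name: 'jacocoReport',
                    classname: 'org.jacoco.ant.ReportTask',
                    classpath: configurations.jacocoAnt.asPath)
        ant.jacocoReport {
            executiondata { ant.file(file: "$buildDir/jacoco/test.exec") }
            structure(name: 'offline coverage') {
                classfiles { fileset(dir: "$buildDir/classes/java/main") }
                sourcefiles { fileset(dir: 'src/main/java') }
            }
            html(destdir: "$buildDir/reports/jacoco-offline")
        }
    }
}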

Embedding the native image into a docker container

The next step after successfully compiling our code to a native application was injecting the native application into a corresponding docker image. After some investigation we decided to use the ubi image from RedHat (ubi8 in the minimal configuration). This step went surprisingly smoothly. Moreover, we found an additional benefit of using native applications in docker images without a jre: beyond the reduced size of such images, they also have a significantly smaller number of vulnerabilities.

Below you can find security scans from our quay repository for the image for jvm applications (based on openjdk:11-jre-slim) and for native applications (based on ubi8-minimal).

Hard path from hello-world project to real-world micro-service

First attempt and lost hope

Encouraged by the benefits of native images, we started building our first CRUD REST micro-service that might actually be useful inside the project. One of the prerequisites for migrating the monolithic application to a bunch of micro-services was that, as a first step, we would like to reuse the Oracle database instance for the micro-services. We planned to introduce a separate Oracle schema for each micro-service, so that they are decoupled. The final step (once the whole application is rewritten) would then be to replace the Oracle database with an appropriate database for each micro-service. In the meanwhile a new version of quarkus had been released which enabled support for java 11, so for the real-world micro-service we enabled java 11 as the target by default.

To enable native compilation of a library in quarkus, the library needs to be part of a quarkus extension. Rather quickly we found that no quarkus extension has been implemented for the Oracle jdbc driver. Despite the fact that it was planned to introduce one, the work is currently on ice due to license concerns. We tried to enable native compilation without introducing a quarkus extension, as described in a comment. Unfortunately, even with these hints, the gradle task for native compilation just failed without any useful error message. There was really no clue regarding the native compilation errors in our application, even with the --debug and --stacktrace options for the gradle task.

Useful logs of the native-image tool

Rather frustrated, we put this activity on ice. In the meanwhile we set up a pipeline for building a jvm-based quarkus application in our CI/CD tool.

After the break we elaborated some further ideas on how to proceed. Taking a closer look at the gradle task debug output for native compilation, we noticed the command string used to invoke the native-image tool of GraalVM.

Below you can find an extract of that output:

/opt/graalvm/bin/native-image -J-Dsun.nio.ch.maxUpdateArraySize=100 -J-DCoordinatorEnvironmentBean.transactionStatusManagerEnable=false -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=1 -J-Duser.language=en -J-Dfile.encoding=ANSI_X3.4-1968 --enable-all-security-services --allow-incomplete-classpath -H:ReflectionConfigurationFiles=... --initialize-at-run-time=... -O0 --verbose -H:+TraceClassInitialization -H:IncludeResources=... -H:+ReportExceptionStackTraces -H:-SpawnIsolates -H:EnableURLProtocols=http -H:+ReportUnsupportedElementsAtRuntime -H:IncludeResourceBundles=... -H:ResourceConfigurationFiles=... --initialize-at-build-time= -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy$BySpaceAndTime -H:+JNI -jar <jar_file_of_application> -H:FallbackThreshold=0 -H:+ReportExceptionStackTraces -H:-AddAllCharsets -H:-IncludeAllTimeZones --enable-all-security-services -H:-SpawnIsolates --no-server -H:-UseServiceLoaderFeature -H:+StackTrace <native_application_name>

So, we took this string and tried to use the native-image tool directly, without wrapping it into the gradle task. As a result we received an error message from the tool saying that the InitialCollectionPolicy value is wrong. Having googled for similar problems, we found that this is related to the way the command line arguments are parsed, as described in this post: the shell swallows the unescaped $ sign in CollectionPolicy$BySpaceAndTime and truncates the value. Having escaped the $ signs (CollectionPolicy\$BySpaceAndTime), we moved a little bit further.

The next task was to get rid of compilation errors related to classes being initialized at build time which should only be initialized at runtime. Namely, you should list all the classes of your application or its libraries which are not suited for build-time initialization, using the initialize-at-run-time option as described here. It turned out to be a rather time-consuming task, but finally we managed it and native compilation finished without errors. After that we put our configuration for the native-image tool into the buildNative gradle task:

buildNative {
    additionalBuildArgs = [
     ...
    ]
}
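
For illustration only, such a configuration could look roughly like the sketch below; the class names are purely hypothetical placeholders, and the real list is project-specific and has to be collected from the native-image error messages as described above:

buildNative {
    additionalBuildArgs = [
        // hypothetical examples of classes that must only be initialized at run time
        '--initialize-at-run-time=com.example.legacy.NativeLibLoader,com.example.legacy.SeedGenerator',
        '-H:+ReportExceptionStackTraces'
    ]
}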

Hope is lost again

Encouraged by the progress, we tried to start our native application and… got a runtime error on the attempt to establish a connection to the Oracle database, with only an error code and no proper error message. Once we specified, via the -H:IncludeResourceBundles option, that the Oracle JDBC driver resource bundle should be included into the native application, we finally got the actual error message text.

Tracing agent joins the game

After brainstorming, we decided to apply another approach to the problem, namely the one described in the GraalVM tracing agent guide. So we executed our java application with the tracing agent enabled and ran the majority of the possible usage scenarios. As a result we got reflect-config.json, proxy-config.json and resource-config.json files, which contained a rather large number of classes to be considered.
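
A hedged sketch of how the jvm build of the service can be run with the tracing agent from a Gradle task; the runner jar path, the task name and the output directory are assumptions that depend on the project layout, and the task requires GraalVM to be the java executable on the PATH:

// runs the jvm variant of the application with the GraalVM tracing agent attached,
// collecting reflect-config.json, proxy-config.json and resource-config.json
task runWithTracingAgent(type: Exec, dependsOn: 'quarkusBuild') {
    commandLine 'java',
        "-agentlib:native-image-agent=config-output-dir=$projectDir/src/main/resources/native-image-config",
        '-jar', "$buildDir/${project.name}-runner.jar"
}

The json files collected by a run like this are then fed back into the native build in the next step.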

The next step was to try to perform native compilation with the json configuration gathered by the tracing agent, using -H:ReflectionConfigurationFiles, -H:ResourceConfigurationFiles and -H:DynamicProxyConfigurationFiles. The compilation ran successfully and we tried to start the native application again. Surprisingly, this time everything ran fine, and we were able to get results back from our application through the REST endpoint.
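
Wired into the buildNative task shown above, this step could look roughly like the following sketch; the file names match what the tracing agent produces, but the directory is again an assumption matching the tracing-agent task above:

buildNative {
    additionalBuildArgs = [
        // ... plus the --initialize-at-run-time entries shown earlier
        "-H:ReflectionConfigurationFiles=$projectDir/src/main/resources/native-image-config/reflect-config.json",
        "-H:ResourceConfigurationFiles=$projectDir/src/main/resources/native-image-config/resource-config.json",
        "-H:DynamicProxyConfigurationFiles=$projectDir/src/main/resources/native-image-config/proxy-config.json"
    ]
}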

After several minutes of triumph we recognized that we should now strip the configuration files from the tracing agent down to the minimal configuration that does not cause runtime errors in the application. We accomplished this routine task, ending up with significantly reduced configuration files; in our case it also turned out that DynamicProxyConfigurationFiles could be skipped completely.

Comparison of jvm-based and native applications

Having got the native application working, we deployed both variants to our kubernetes-managed cluster and compared start time and resource utilization.

The picture below shows the CPU (black line) and RAM usage of the applications during a get operation:

From the picture one can see that RAM usage is approximately 10 times lower for the native image than for the jvm-based one, while CPU usage is approximately the same.

The graphs are taken from pods with the following resource limits.

jvm application

limits:
  memory: "1024Mi"
  cpu: 1500m
requests:
  memory: "400Mi"
  cpu: 100m

native application

limits:
  memory: "256Mi"
  cpu: 100m
requests:
  memory: "180Mi"
  cpu: 50m

If we compare the application start-up times, we get the following picture.

jvm application

native application

So, starting the native application takes about 4 times less time than starting the jvm-based one. Note that both applications run with significant restrictions on resource consumption.

Lessons learned

Looking at all the effort we have spent to get a native application running in our container orchestration infrastructure, we think it was worth doing.

From our point of view, there are the following benefits of using the Quarkus framework plus a native application compared to spring jvm-based applications:

  • Reduced resource consumption
  • Nice features for development
  • Potentially a smaller number of vulnerabilities in the docker images used for application hosting
  • Creation of know-how regarding the quarkus framework and GraalVM features

There are the following drawbacks:

  • While a simple hello-world application might work well, adopting the approach for real-world applications might require significant effort
  • A significantly smaller amount of available documentation compared to that of, for example, the spring framework

Translated from: https://habr.com/en/post/500112/
