Episode 1: "The Evolution" — Java JIT, Hotspot & C2 compilers, building super optimum containers

In my search for the most optimal container tech, I have been playing around with various combinations of open-source technologies and frameworks.

In this blog, I will walk you through what I think is one of the most optimal container stacks.

Before I dig into the stack, let me spend some time walking through some of the non-functional requirements of a container and Serverless/FaaS based MicroServices architecture.

IMHO, the following are some of the key requirements:

Smaller footprint: Eventually all of these MicroServices are going to run on the cloud, where we "pay for what we use". What we need is a runtime that has a smaller footprint and runs with optimal CPU usage, so that we can run more on less infrastructure.

Quicker bootstrap: Scalability is one of the most important aspects of a container-based MicroServices architecture. The faster the containers boot up, the faster the cluster can scale. This is even more important for Serverless architectures.

Built on open standards: It's important that the underlying platform/runtime is built on open standards, as that makes it easy to port or run workloads in a hybrid, multi-cloud world and avoids vendor lock-in.

Faster build time: In this agile world, where we roll out fixes/features/updates very frequently, it's important that builds and roll-outs are quick, including real-time deployment of changes during development, so we can test as we develop.

Let's park these requirements for some time. Let me go down the stack to the foundational elements and work my way back up, to build (what I believe is) the most optimal container platform, one that delivers on the above requirements.

Since there is a lot to go through, I have divided this into 4 episodes.

Episode 1: "The Evolution" — Java JIT Hotspot & C2 compilers (the current episode…scroll down)

Episode 2: "The Holy Grail" — GraalVM

In this blog, I will talk about how GraalVM embraces polyglot programming, providing interoperability between various programming languages. I will then cover how it extends HotSpot, providing faster execution and smaller footprints with "ahead-of-time" compilation and other optimisations.

Episode 3: "The Leapstep" — Quarkus + CRI-O

In this blog, I will talk about how Quarkus takes a leap step and provides the fastest, smallest, and best developer experience for building Java MicroServices. I will also introduce CRI-O and the ecosystem of tools it brings.

Episode 4: "The Final Showdown" — Full stack MicroServices/Serverless Architecture

In this blog, I will put all the pieces together and talk about how they combine into a robust, scalable, fast, thin MicroServices architecture.

I hope you will enjoy this series…

Episode 1: "The Evolution"

Java JIT Hotspot & C2 compilers

With Java, we achieved the "write once, run anywhere" dream in the mid-90s. The approach was very simple: Java programs are compiled to "byte-code".

Interesting fact: byte-code is called byte-code because each opcode is one byte long, so instructions load efficiently into the CPU cache. In fact, Java CPUs that executed bytecode directly were even built, but they never took off.

We have JVM implementations for each supported operating system. The respective JVM "interprets" the byte-code into machine instructions (using something like a lookup map). Obviously this is slow, as the interpreter goes one instruction at a time!

To speed this up, it makes sense to identify the code that runs most often, compile it ahead of time, and cache it 🤔.

That is exactly what later versions of the JVM started doing. A performance counter was introduced that counts the number of times a particular method/snippet of code is executed. Once a method/code snippet has been used a particular number of times (the threshold), that code snippet is compiled, optimised and cached by the "C1 compiler". The next time that code snippet is called, the JVM directly executes the compiled machine instructions from the cache, rather than going through the interpreter. This brought in the first level of optimisation.
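To see this in action, here is a minimal sketch (the class and method names are my own): a tiny method called far more often than the compile threshold, making it a JIT compilation candidate.

```java
// HotMethodDemo.java: a method invoked often enough to cross the JVM's
// invocation-counter threshold and become a JIT compilation candidate.
public class HotMethodDemo {

    // Small, frequently called method: a typical C1 target.
    static int square(int x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        // 200,000 calls: well past the default compile thresholds, so the
        // JVM compiles square() and then executes the cached machine code
        // instead of re-interpreting the bytecode on every call.
        for (int i = 0; i < 200_000; i++) {
            sum += square(i % 100);
        }
        System.out.println(sum);
    }
}
```

Run it with `java -XX:+PrintCompilation HotMethodDemo` to watch methods being compiled as their counters cross the threshold.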

While the code is executing, the JVM performs runtime profiling and identifies the hot code paths. It then runs the "C2 compiler" to further optimise those hot paths…hence the name "Hotspot".

C1 is faster and good for short-running applications, while C2 is slower and heavier but ideal for long-running processes like daemons and servers, where the code performs better over time.

In Java 6, we had the option to use either C1 or C2 (with the command-line arguments -client for C1 and -server for C2). In Java 7, we could use both together (tiered compilation), and from Java 8 onwards tiered compilation became the default behaviour.

The diagram below illustrates the flow…

[Diagram: bytecode is interpreted; a counter triggers C1 compilation, and runtime profiling triggers C2 optimisation of hot paths]

Here are some of the code optimisations that the JVM compilers perform:

  • Removing null checks (for variables that are never null)
  • Inlining small, frequently called methods, reducing method-call overhead
  • Optimising loops by combining, unrolling and inversion
  • Removing code that is never called (dead-code elimination)

and many more…
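To make two of these transformations concrete, here is a source-level sketch (my own illustrative code; the JIT applies these at the machine-code level, not by rewriting your Java): the same summation written with a small helper method, and again as the compiler would effectively see it after inlining and 4x loop unrolling.

```java
// OptimizationSketch.java: source-level illustrations of transformations
// the JIT performs on machine code.
public class OptimizationSketch {

    // Before: a tiny method, called in a loop; a classic inlining candidate.
    static int add(int a, int b) { return a + b; }

    static int sumWithCalls(int[] xs) {
        int total = 0;
        for (int x : xs) total = add(total, x);  // method-call overhead each pass
        return total;
    }

    // After inlining + 4x unrolling: the call disappears and four additions
    // happen per iteration, reducing call and loop-counter overhead.
    static int sumUnrolled(int[] xs) {
        int total = 0;
        int i = 0;
        for (; i + 4 <= xs.length; i += 4) {
            total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3];
        }
        for (; i < xs.length; i++) total += xs[i];  // leftover elements
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        // Both versions compute the same result; only the shape differs.
        System.out.println(sumWithCalls(data));
        System.out.println(sumUnrolled(data));
    }
}
```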

Whatever said and done, JIT (Just-In-Time) compilation is slow to warm up, as there is a lot of work the JVM has to do at runtime.

An Ahead-of-Time compilation option was introduced in Java 9, where you can generate the final machine code directly, using jaotc.

This code is compiled for a target architecture, so it is not portable. On x86, we can have both Java bytecode and AOT-compiled code working together.

The bytecode goes through the approach I explained previously (C1, C2), while the AOT-compiled code goes directly into the code cache, reducing the load on the JVM. Typically, the most frequently used libraries can be AOT-compiled for faster responses.
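As a sketch of that workflow (the class name is my own; jaotc shipped as an experimental tool in JDK 9 and was removed in JDK 17, so the commands below only apply on those JDKs):

```java
// AotDemo.java: a class we can compile ahead of time with jaotc.
//
// From a shell (JDK 9-16, Linux x64):
//   javac AotDemo.java
//   jaotc --output libAotDemo.so AotDemo.class
//   java -XX:AOTLibrary=./libAotDemo.so AotDemo
public class AotDemo {

    static String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // With the AOT library loaded, greet() starts out as precompiled
        // machine code sitting in the code cache; without it, the method
        // begins life interpreted and waits for C1/C2 as usual.
        System.out.println(greet("AOT"));
    }
}
```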

[Diagram: bytecode flowing through C1/C2 while AOT-compiled code goes straight into the code cache]

This is the story of the Java VM…and pretty much every language has a similar story, where it goes through a similar inception and, over a period of time, the compiler/VM gets optimised to run faster.

In the next episode, we will look at how GraalVM takes this further by reducing the footprint, optimising execution and bringing in support for polyglot/multi-language interoperability.

The Holy Grail

You can read the blog here

Episode 2: GraalVM — "The Holy Grail" (coming soon…)

ttyl

Translated from: https://medium.com/swlh/episode-1-the-evolution-java-jit-hotspot-c2-compilers-building-super-optimum-containers-f0db19e6f19a
