Measuring Golang and Echo against Kotlin and Vert.x


Intro

Software benchmarking is extremely tough, but it is something I really enjoy doing. Whether that is running ApacheBench on HTTP servers, running redis-benchmark, or pgbench, it is always interesting to see how various tweaks impact performance.

Two of my go-to languages for building anything are Kotlin and Golang. In those, my go-to libraries for building HTTP services are Vert.x and Echo respectively. It was natural instinct to see how they perform under a stress test.

The internet is filled with comments about the JVM being slower than native code. It is, while the code is being JIT-compiled, but after that it should perform at a similar level to native code.

App code

Let’s look at the code for the two applications.

import io.vertx.core.Vertx

fun main() {
    Vertx.vertx()
        .createHttpServer()
        .requestHandler { it.response().end("") }
        .listen(9999)
}

package main

import (
    "net/http"

    "github.com/labstack/echo/v4"
)

func main() {
    e := echo.New()
    e.GET("/", func(c echo.Context) error {
        return c.String(http.StatusOK, "")
    })
    e.Logger.Fatal(e.Start(":1323"))
}

I am using the OpenJ9 JVM on JDK 13.

$ java -version
openjdk version "13.0.1" 2019-10-15
OpenJDK Runtime Environment AdoptOpenJDK (build 13.0.1+9)
Eclipse OpenJ9 VM AdoptOpenJDK (build openj9-0.17.0, JRE 13 Mac OS X amd64-64-Bit Compressed References 20191031_101 (JIT enabled, AOT enabled)
OpenJ9 - 77c1cf708
OMR - 20db4fbc
JCL - c973c65658 based on jdk-13.0.1+9)

Both servers listen on their respective port and return a 200 with an empty string body on /. Something to keep in mind is that the Vert.x HTTP server is single-threaded, whereas the Echo server is not.
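That threading difference can be seen from Go itself: Go's net/http, which Echo sits on top of, serves each connection on its own goroutine. The following sketch is not from the original post — it uses a plain `httptest` server rather than Echo — but it demonstrates the same property by blocking every handler until all of them have started, which can only succeed if they run concurrently:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync"
)

// peakConcurrency fires n requests at a test server whose handler parks
// until all n handlers have started. If the server handled connections
// sequentially, arrived.Wait() would deadlock; returning from it proves
// n handlers were running at the same time.
func peakConcurrency(n int) int {
	var arrived sync.WaitGroup
	arrived.Add(n)
	release := make(chan struct{})

	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		arrived.Done() // this handler has started
		<-release      // park until every handler has started
	}))
	defer srv.Close()

	var done sync.WaitGroup
	for i := 0; i < n; i++ {
		done.Add(1)
		go func() {
			defer done.Done()
			resp, err := http.Get(srv.URL)
			if err == nil {
				resp.Body.Close()
			}
		}()
	}
	arrived.Wait() // all n handlers are running simultaneously
	close(release)
	done.Wait()
	return n
}

func main() {
	fmt.Println("handlers running concurrently:", peakConcurrency(4))
}
```

A Vert.x server, by contrast, runs all handlers for a given HTTP server on one event loop thread, which is why its handlers must never block.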

Curl timing

The first thing I wanted to see was a breakdown of the time it takes to serve a single request. curl can print this breakdown using the following format file:

# curl-format.txt
time_namelookup: %{time_namelookup}s\n
time_connect: %{time_connect}s\n
time_appconnect: %{time_appconnect}s\n
time_pretransfer: %{time_pretransfer}s\n
time_redirect: %{time_redirect}s\n
time_starttransfer: %{time_starttransfer}s\n
----------\n
time_total: %{time_total}s\n

$ curl -w "@curl-format.txt" -o /dev/null -s http://localhost:9999/

As expected, the first request to the JVM application was quite slow, coming in at around 180ms:

$ curl -w "@curl-format.txt" -o /dev/null -s http://localhost:9999/
time_namelookup: 0.004885s
time_connect: 0.005071s
time_appconnect: 0.000000s
time_pretransfer: 0.005100s
time_redirect: 0.000000s
time_starttransfer: 0.180707s
----------
time_total: 0.180724s

The Go binary is compiled ahead of time, so even the first request to the Go server responded in under 5ms:

$ curl -w "@curl-format.txt" -o /dev/null -s http://localhost:1323/
time_namelookup: 0.004139s
time_connect: 0.004329s
time_appconnect: 0.000000s
time_pretransfer: 0.004378s
time_redirect: 0.000000s
time_starttransfer: 0.004816s
----------
time_total: 0.004830s

The JVM does optimisations based on the traffic it sees, which became evident as I started making a few more requests.

$ curl -w "@curl-format.txt" -o /dev/null -s http://localhost:9999/
time_namelookup: 0.005040s
time_connect: 0.005238s
time_appconnect: 0.000000s
time_pretransfer: 0.005287s
time_redirect: 0.000000s
time_starttransfer: 0.006986s
----------
time_total: 0.006998s

$ curl -w "@curl-format.txt" -o /dev/null -s http://localhost:9999/
time_namelookup: 0.004111s
time_connect: 0.004386s
time_appconnect: 0.000000s
time_pretransfer: 0.004437s
time_redirect: 0.000000s
time_starttransfer: 0.005847s
----------
time_total: 0.005859s

The response times fell from 180ms down to around 6ms once the JVM had optimised that code path, and the code then ran at a similar speed to the native binary Go produced.

Load testing

The next step was to hammer them with HTTP requests using wrk.

Wrk was configured to use 2 connections and 1 thread for 60 seconds. Since the server and the load tester were both running on the same machine, I wanted to limit the amount of resources wrk would use.

The Go Echo server results are as follows. It was able to achieve 37k requests per second with an average latency of 50 microseconds and a Stdev of 44 microseconds.

$ wrk -t 1 -c 2 -d60s http://localhost:1323/
Running 1m test @ http://localhost:1323/
1 threads and 2 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 50.46us 43.79us 6.44ms 99.07%
Req/Sec 37.30k 2.53k 41.54k 80.87%
2230715 requests in 1.00m, 246.78MB read
Requests/sec: 37116.78
Transfer/sec: 4.11MB

The Kotlin Vert.x server results are as follows. It was able to achieve 47k requests per second with an average latency of 271 microseconds and a Stdev of 4.8 milliseconds.

$ wrk -t 1 -c 2 -d60s http://localhost:9999/
Running 1m test @ http://localhost:9999/
1 threads and 2 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 270.89us 4.82ms 165.23ms 99.67%
Req/Sec 48.16k 12.36k 59.36k 82.14%
2870424 requests in 1.00m, 104.02MB read
Requests/sec: 47837.12
Transfer/sec: 1.73MB

Conclusion

That seems to indicate that Vert.x was able to achieve higher throughput but with a much wider band on response times, whereas Go sacrificed throughput to keep response times very tight, which makes sense since Go is optimised for low latency.

Optimising for high throughput AND low latency is quite difficult. A good read on this trade-off can be found here.

Pick your technology stack based on your application requirements and not hacker news headlines.

Originally published at https://aawadia.hashnode.dev.

Translated from: https://medium.com/swlh/measuring-golang-and-echo-against-kotlin-and-vert-x-98c19d0c41b5