Load Testing HAProxy (Part 1)

by Sachin Malhotra

This is the first post in a 3-part series on load testing HAProxy, which is a reliable, high-performance TCP/HTTP load balancer.

Load Testing? HAProxy? If all this seems Greek to you, don’t worry. I will provide inline links to read up on everything I’m talking about in this blog post.

For reference, our current stack is:


  • Instances hosted on Amazon EC2 (not that this one should matter)

  • Ubuntu 14.04 (Trusty) for the OS

  • Supervisor for process management

On production, we have around 30-odd HAProxy load balancers that help us route our traffic to the backend servers, which are in autoscaling mode and hence don’t have a fixed count. The number of backend servers ranges from 12 to 32 throughout the day.

This article should help you get up to speed on the basics of load balancing and how it works with HAProxy. It will also explain which routing algorithms are available.

Coming back to our topic at hand, which is load testing HAProxy.


Never before had we put any dedicated effort into finding out the limits of our HAProxy setup in handling HTTP and HTTPS requests. Currently, on production, we run HAProxy on 4-core, 30 GB instances.

Introducing Amazon EC2 R4 Instances, the next generation of memory-optimized instances (aws.amazon.com)

As I am writing this post, we’re in the process of moving our entire traffic from HTTP to HTTPS (that is, encrypted traffic). But before moving further, we needed some definitive answers to the following questions:

  1. What is the impact as we shift our traffic from non-SSL to SSL? CPU should definitely take a hit, because the SSL handshake is not a normal 3-way handshake; it is rather a 5-way handshake, and after the handshake is complete, further communication is encrypted using the secret key generated during the handshake, which is bound to take up CPU.

  2. What other hardware/software limits might be reached on production as a result of SSL termination at the HAProxy level? We could also go for the SSL passthrough option provided by HAProxy, which leaves the SSL connection to be terminated/decrypted at the backend servers. However, SSL termination at the HAProxy level is more performant, so this is what we intend to test (see the config sketch after this list).

  3. What is the best hardware required on production to support the kind of load that we see today? Will the existing hardware scale, or do we need bigger machines? This was also one of the prime questions we wanted answered via this test.
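
To make the termination-versus-passthrough distinction in question 2 concrete, here is a minimal HAProxy sketch of the two setups. The frontend/backend names and the certificate path are made up for illustration and are not our actual production config:

# SSL termination: HAProxy decrypts the traffic and speaks plain HTTP to the backends
frontend fe_https
    bind *:443 ssl crt /etc/haproxy/certs/example.pem
    mode http
    default_backend be_app

# SSL passthrough: HAProxy forwards the encrypted TCP stream; the backends decrypt it
frontend fe_https_passthrough
    bind *:443
    mode tcp
    default_backend be_app_tcp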

For this purpose, we put in a dedicated effort to load test HAProxy version 1.6 and find answers to the above questions. I won’t be outlining the approach we took, nor the results of this exercise, in this blog post.

Rather, I will be discussing an important aspect of any load testing exercise that most of us tend to ignore.


The Ulimiter

If you have ever done any kind of load testing or hosted any server serving a lot of concurrent requests, you definitely would have run into the dreaded “Too many open files” issue.


An important part of any stress testing exercise is the ability of your load testing client to establish a lot of concurrent connections to your backend server, or to a proxy like HAProxy sitting in between.

A lot of times we end up bottlenecked on the client, which is unable to generate the amount of load we expect it to generate. The reason is not that the client is performing sub-optimally, but something else entirely at the system level.

Ulimit is used to restrict the resources available at the user level. For all practical purposes pertaining to load testing environments, ulimit gives us the number of file descriptors that can be opened by a single process on the system. On most machines, if you check the limit on file descriptors, it comes out to be 1024.
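
Checking it is a one-liner; the 1024 shown here is just the stock default you will see on most distributions unless something has already raised it:

$ ulimit -n
1024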

As you can see, the number of open files allowed is 1024, and that is what we had on our staging setup as well. Opening a new TCP connection/socket also counts as an open file, or file descriptor, hence the limitation.

What this generally means is that a single client process can only open 1024 connections to the backend servers and no more. It also means you need to increase this limit to a very high number in your load testing environment before proceeding further. For comparison, the ulimit setting on our production machines is raised far above this default.

This information is what you would generally find after 10 seconds of Googling, but keep in mind that ulimit is not guaranteed to give you the limits your processes actually have! There are a million things that can modify the limits of a process after (or before) you initialize your shell. So what you should do instead is fire up top, htop, ps, or whatever you want to use to get the ID of the problematic process, and do a cat /proc/{process_id}/limits:
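
For example (the PID 12345 here is hypothetical; substitute the one you found above), the relevant line looks something like this:

$ grep 'Max open files' /proc/12345/limits
Max open files            1024                 4096                 files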

The max open files for this specific process can be different from the system-wide limits we have on the server.

Let’s move on to the interesting part. Raising the limits :D


The Stuff You Came Here to Read: Raising the Limit

There are two ways of changing the ulimit setting on a machine.


  1. ulimit -n <some_value>. This will change the ulimit settings only for the current shell session. As soon as you open another shell session, you are back to square one, i.e. 1024 file descriptors. So this is probably not what you want.

  2. fs.file-max = 500000. Add this line to the end of the file /etc/sysctl.conf. Then add the following to the file /etc/security/limits.conf:

     * soft nofile 500000
     * hard nofile 500000
     root soft nofile 500000
     root hard nofile 500000

The * represents that we are setting these values for all users except root. “soft” and “hard” represent the soft and hard limits respectively. The next entry specifies the item whose limit we want to change, i.e. nofile in this case, which means the number of open files. Finally we have the value we want to set, which in this case is 500000. The * does not apply to the root user, hence the last two lines specifically for root.

After doing this, you need to reboot the system. Sadly, yes :( And the changes should then be reflected in the ulimit -n command.
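
Once the machine is back up, a quick sanity check (assuming the 500000 value used above) might look like this:

$ ulimit -n
500000
$ cat /proc/sys/fs/file-max
500000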

Hurray! Pat yourself on the back. You successfully changed the ulimit settings for the system. However, changing this will not necessarily affect all the user processes running on the system. It is quite possible that even after changing the system-wide ulimit, you will find that /proc/<pid>/limits gives you a smaller number than you expect.

In this case, you almost certainly have a process manager, or something similar, that is messing with your limits. You need to keep in mind that processes inherit the limits of their parent processes. So if you have something like Supervisor managing your processes, they will inherit the settings of the Supervisor daemon, and this overrides any changes you make to the system-level limits.

Supervisor has a config variable that sets the file descriptor limit of its main process. Apparently, this setting is in turn inherited by any and all processes it launches. To override the default setting, you can add the following line to /etc/supervisor/supervisord.conf, in the [supervisord] section:

minfds=500000

Updating this will lead to all the child processes controlled by Supervisor inheriting this updated limit. You just need to restart the Supervisor daemon to bring the change into effect.
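
On an Ubuntu 14.04 box with Supervisor installed as a service, that restart plus a verification of one of its child processes might look roughly like this (the child PID is a placeholder):

$ sudo service supervisor restart
$ grep 'Max open files' /proc/<child_pid>/limits
Max open files            500000               500000               files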

Remember to do this on any machine that is going to have a lot of concurrent connections open, be it the client in a load testing scenario or a server trying to serve a lot of concurrent requests.

In Part 2, we’ll learn how to deal with the Sysctl port range monster.


Do let me know how this blog post helped you. Also, please recommend (❤) this post if you think this may be useful for someone.


Translated from: https://www.freecodecamp.org/news/load-testing-haproxy-part-1-f7d64500b75d/
