Opening the Ruby Concurrency Toolbox

This article was originally written by Alex Braha Stoll on the Honeybadger Developer Blog.

Concurrency and parallelism are more important than ever for Ruby developers. They can make our applications faster, utilizing the hardware that powers them to its fullest potential. In this article, we are going to explore the tools currently available to every Rubyist and also what Ruby promises to soon deliver in this department.

Not everyone uses concurrency directly, but we all use it indirectly via tools like Sidekiq. Understanding Ruby concurrency won’t just help you build your own solutions; it will help you understand and troubleshoot existing ones.

But first let’s take a step back and look at the big picture.

Concurrency vs. Parallelism

These terms are used loosely, but they do have distinct meanings.

  • Concurrency: The art of doing many tasks, one at a time. By switching between them quickly, it may appear to the user as though they happen simultaneously.

  • Parallelism: Doing many tasks at literally the same time. Instead of appearing simultaneous, they are simultaneous.

Concurrency is most often used for applications that are IO heavy. For example, a web app may regularly interact with a database or make lots of network requests. By using concurrency, we can keep our application responsive, even while we wait for the database to respond to our query.

This is possible because the Ruby VM allows other threads to run while one is waiting during IO. Even if a program has to make dozens of requests, if we use concurrency, the requests will be made at virtually the same time.

Parallelism, on the other hand, is not currently supported by Ruby.

Why No Parallelism in Ruby?

Today, there is no way of achieving parallelism within a single Ruby process using the default Ruby implementation (generally called MRI or CRuby). The Ruby VM enforces a lock (the GVL, or Global VM Lock) that prevents multiple threads from running Ruby code at the same time. This lock exists to protect the internal state of the virtual machine and to prevent scenarios that could result in the VM crashing. This is not a great spot to be in, but all hope is not lost: Ruby 3 is coming soon, and it promises to solve this handicap by introducing a concept codenamed Guild (explained in the last sections of this article).

Threads

Threads are Ruby’s concurrency workhorse. To better understand how to use them and what pitfalls to be aware of, we’re going to give an example. We’ll build a little program that consumes an API and stores its results in a datastore using concurrency.

Before we build the API client, we need an API. Below is the implementation of a tiny API that accepts a number and responds in plain text with whether the provided number is even or odd. If the syntax looks strange to you, don’t worry. This doesn’t have anything to do with concurrency. It’s just a tool we’ll use.
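The original listing didn’t survive in this copy of the article, so here is a minimal sketch of such an app. The class name and the crude query parsing are assumptions, not the article’s exact code:

```ruby
# A tiny Rack-compatible app: GET /?n=42 responds "even" or "odd".
# In config.ru, this class would be followed by: run EvenOddApp.new
class EvenOddApp
  def call(env)
    # Crude query parsing: take the first run of digits in the query string.
    n = env["QUERY_STRING"].to_s[/\d+/].to_i
    [200, { "Content-Type" => "text/plain" }, [n.even? ? "even" : "odd"]]
  end
end
```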

To run this web app you’ll need to have the rack gem installed, then execute rackup config.ru.

We also need a mock datastore. Here’s a class that simulates a key-value database:
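That listing is also missing from this copy; a minimal stand-in, with the method names read and write assumed, might be:

```ruby
# A mock key-value datastore backed by an in-memory Hash. Reading and
# writing are separate, non-atomic steps; that detail is what makes the
# race condition discussed later possible.
class Datastore
  def initialize
    @data = {}
  end

  def read(key)
    @data[key]
  end

  def write(key, value)
    @data[key] = value
  end
end
```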

Now, let’s go through the implementation of our concurrent solution. We have a method, run, which concurrently fetches 1,000 records and stores them in our datastore.

We create four threads, each processing 250 records. We use this strategy in order not to overwhelm the third-party API or our own systems.
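Since the listing is missing from this copy, the sketch below restores its shape under stated assumptions: the article’s AdHocHTTP client is replaced by an injected fetch callable so the example is self-contained, and the datastore is any object responding to read and write:

```ruby
# Sketch of the threaded run method: four threads, 250 records each.
# `fetch` stands in for the HTTP round trip made with AdHocHTTP and
# returns "even" or "odd" for a given number.
class ThreadedRunner
  THREADS = 4
  PER_THREAD = 250

  def initialize(ds, fetch)
    @ds = ds        # shared datastore (read/write interface)
    @fetch = fetch  # stand-in for the HTTP request
  end

  def run
    threads = THREADS.times.map do |i|
      Thread.new do
        first = i * PER_THREAD + 1
        (first...(first + PER_THREAD)).each do |n|
          handle_response(@fetch.call(n))
        end
      end
    end
    threads.each(&:join)
  end

  def handle_response(key)
    # Non-atomic read-then-write on shared state: this is the data race
    # examined in the next section.
    current_count = @ds.read(key) || 0
    @ds.write(key, current_count + 1)
  end
end
```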

By making the requests concurrently across multiple threads, the total execution takes a fraction of the time a sequential implementation would. While each thread has moments of inactivity during the steps needed to establish and communicate over an HTTP request, the Ruby VM allows a different thread to start running. This is the reason why this implementation is much faster than a sequential one.

The AdHocHTTP class is a straightforward HTTP client implemented specifically for this article to let us focus only on the differences between code powered by threads and code powered by fibers. It’s beyond the scope of this article to discuss its implementation, but you can check it out here if you’re curious.

Finally, we handle the server’s response by the end of the inner loop. Here’s how the method handle_response looks:
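That listing is missing as well; reconstructed as a sketch, with the server’s reply reduced to an "even"/"odd" string and the datastore passed in explicitly so the snippet stands alone:

```ruby
# The racy handler: an unsynchronized read-modify-write on shared state.
def handle_response(ds, key)
  current_count = ds.read(key) || 0
  # A thread may be preempted right here, between the read and the write,
  # silently discarding another thread's increment.
  ds.write(key, current_count + 1)
end
```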

This method looks all right, doesn’t it? Let’s run it and see what ends up at our datastore:

This is pretty strange, as I’m sure that between 1 and 1000 there are 500 even numbers and 500 odd ones. In the next section, let’s understand what’s happening and briefly explore one of the ways to solve this bug.

Threads and Data Races: The Devil Is in the Details

Using threads allows our IO heavy programs to run much faster, but they’re also tough to get right. The error in our results above is caused by a race condition in the handle_response method. A race condition happens when two threads manipulate the same data.

Since we’re operating on a shared resource (the ds datastore object), we have to be especially careful with non-atomic operations. Notice that we first read from the datastore and, in a second statement, write back the count incremented by 1. This is problematic because our thread may stop running after the read but before the write. Then, if another thread runs and increments the value of the key we’re interested in, we’ll write an out-of-date count when the original thread resumes.

One way to mitigate the dangers of using threads is to use higher-level abstractions to structure a concurrent implementation. Check out the concurrent-ruby gem for different patterns to use and a safer thread-powered program.

There are many ways to fix a data race. A simple solution is to use a mutex. This synchronization mechanism enforces one-at-a-time access to a given segment of code. Here’s our previous implementation, fixed with a mutex:
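The fixed listing is missing from this copy too; a sketch under the same assumptions (datastore passed in explicitly, names assumed):

```ruby
# handle_response guarded by a mutex: the read-modify-write sequence is
# now atomic with respect to the other threads.
MUTEX = Mutex.new

def handle_response(ds, key)
  MUTEX.synchronize do
    current_count = ds.read(key) || 0
    ds.write(key, current_count + 1)
  end
end
```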

If you plan to use threads inside a Rails application, the official guide Threading and Code Execution in Rails is a must-read. Failing to follow these guidelines may result in very unpleasant consequences, like leaking database connections.

After running our corrected implementation, we get the expected result: 500 even numbers and 500 odd ones.

Instead of using a mutex, we can also get rid of data races by dropping threads altogether and reaching for another concurrency tool available in Ruby. In the next section, we’re going to take a look at Fiber as a mechanism for improving the performance of IO-heavy apps.

Fiber: A Slender Tool for Concurrency

Ruby Fibers let you achieve cooperative concurrency within a single thread. This means that fibers are not preempted and the program itself must do the scheduling. Because the programmer controls when fibers start and stop, it is much easier to avoid race conditions.

Unlike threads, fibers don’t grant us better performance during IO on their own. Fortunately, Ruby provides asynchronous reads and writes through its IO class. By using these async methods, we can prevent IO operations from blocking our fiber-based code.

Same Scenario, Now with Fibers

Let’s go through the same example, but now using fibers combined with the async capabilities of Ruby’s IO class. It’s beyond the scope of this article to explain all the details of async IO in Ruby. Still, we’ll touch on the essential parts of its workings and you can take a look at the implementation of the relevant methods of AdHocHTTP (the same client appearing in the threaded solution we’ve just explored) if you’re curious.

We’ll start by looking at the run method of our fiber-powered implementation:
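The listing is missing from this copy, so the sketch below is structural only: fetch stands in for the AdHocHTTP request, and parked fibers are simply queued instead of waiting on real sockets. This keeps the shape described next (create, resume immediately, drain the waiting list) visible without real async IO:

```ruby
# Structural sketch of the fiber-powered run method: four fibers, 250
# records each, resumed immediately after creation.
class FiberedRunner
  FIBERS = 4
  PER_FIBER = 250

  def initialize(ds, fetch)
    @ds = ds        # shared datastore (read/write interface)
    @fetch = fetch  # stand-in for the HTTP round trip
    @waiting = []   # fibers parked as if waiting on IO
  end

  def run
    FIBERS.times do |i|
      first = i * PER_FIBER + 1
      fiber = Fiber.new do
        (first...(first + PER_FIBER)).each do |n|
          yield_if_waiting(fiber) # park while the (simulated) request is in flight
          handle_response(@fetch.call(n))
        end
      end
      fiber.resume # start working right away, before the next fiber exists
    end
    wait_all_requests
  end

  private

  def yield_if_waiting(fiber)
    @waiting << fiber
    Fiber.yield
  end

  def wait_all_requests
    # The real version drives IO.select over pending sockets; here every
    # parked fiber counts as ready immediately.
    while (fiber = @waiting.shift)
      fiber.resume
    end
  end

  def handle_response(key)
    # The same read-then-write as before, yet safe without a mutex: only
    # one fiber runs at a time inside this single thread.
    @ds.write(key, (@ds.read(key) || 0) + 1)
  end
end
```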

We first create a fiber for each subset of the numbers whose parity we want to check.

Then we loop over the numbers, calling yield_if_waiting. This method is responsible for stopping the current fiber and allowing another one to resume.

Notice also that after creating a fiber, we call resume. This causes the fiber to start running. By calling resume immediately after creation, we start making HTTP requests even before the main loop going from 1 to 1000 finishes.

At the end of the run method, there's a call to wait_all_requests. This method selects fibers that are ready to run and also guarantees we make all the intended requests. We'll take a look at it in the last segment of this section.

Now, let’s see yield_if_waiting in detail:
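The listing is gone from this copy; the sketch below reconstructs the behavior described next. The client’s attempt_operation method and the explicit fiber argument are assumptions made so the snippet stands alone (the real version presumably uses Fiber.current and the client’s connect/read/write calls):

```ruby
# Sketch of yield_if_waiting. The client attempts a non-blocking
# operation; :wait_readable / :wait_writable mean the call would block.
def yield_if_waiting(client, socket, fiber)
  loop do
    result = client.attempt_operation
    # Success: hand the result back to the caller.
    return result unless [:wait_readable, :wait_writable].include?(result)

    # 1. Checkpoint: remember which fiber waits on which socket.
    @waiting[socket] = fiber
    # 2. Queue the socket for IO.select, by direction.
    (result == :wait_readable ? @pending_reads : @pending_writes) << socket
    # 3. Park; once the socket is ready, this fiber is resumed and the
    #    operation is retried (this time succeeding).
    Fiber.yield
  end
end
```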

We first try to perform an operation (connect, read, or write) using our client. Two primary outcomes are possible:

  • Success: When that happens, we return.

  • We can receive a symbol: This means we have to wait.

How does one “wait”?

  1. We create a kind of checkpoint by adding our socket combined with the current fiber to the instance variable waiting (which is a Hash).

  2. We store this pair inside a collection that holds IO waiting for reading or writing (we’ll see why that’s important in a moment), depending on the result we get back from the client.

  3. We stop the execution of the current fiber, allowing another one to run. The paused fiber will get the opportunity to resume work at some point after the associated network socket becomes ready. Then, the IO operation will be retried (and this time will succeed).

Every Ruby program runs inside a fiber that itself is part of a thread (everything inside a process). As a consequence, when we create a first fiber, run it, and then at some point yield, we’re resuming the execution of the central part of the program.

Now that we understand the mechanism used to yield execution when a fiber is waiting IO, let’s explore the last bit needed to comprehend this fiber-powered implementation.

The chief idea here is to wait (in other words, to loop) until all pending IO operations are complete.

To do that, we use IO.select. It accepts two collections of pending IO objects: one for reading and one for writing. It returns the IO objects that are ready to be worked with. Because we associated these IO objects with the fibers responsible for running them, it’s simple to resume those fibers.

We keep on repeating these steps until all requests are fired and completed.
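With the pending-IO collections and the waiting hash described in the numbered steps earlier, wait_all_requests might look like this sketch (names assumed):

```ruby
# Loop until no IO is pending. IO.select reports which sockets are
# ready; each one maps back to the fiber that parked on it.
def wait_all_requests
  until @pending_reads.empty? && @pending_writes.empty?
    ready_reads, ready_writes = IO.select(@pending_reads, @pending_writes)
    (ready_reads + ready_writes).each do |socket|
      @pending_reads.delete(socket)
      @pending_writes.delete(socket)
      fiber = @waiting.delete(socket)
      fiber.resume if fiber
    end
  end
end
```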

The Grand Finale: Comparable Performance, No Need for Locks

Our handle_response method is exactly the same as the one initially used in the threaded code (the version without a mutex). However, since all our fibers run inside the same thread, we won’t have any data races. When we run our code, we get the expected result: 500 even numbers and 500 odd ones.

You probably don’t want to deal with all that fiber switching business every time you leverage async IO. Fortunately, some gems abstract all this work and make the usage of fibers something the developer doesn’t need to think about. Check out the async project as a great start.

Fibers Shine When High Scalability Is a Must

Although we can reap the benefits of virtually eliminating the risks of data races even in small scale scenarios, fibers are a great tool when high scalability is needed. Fibers are much more lightweight than threads. Given the same available resources, creating threads will overwhelm a system much sooner than fibers. For an excellent exploration on the topic, we recommend the presentation The Journey to One Million by Ruby Core Team’s Samuel Williams.

Guild — Parallel Programming in Ruby

So far we’ve seen two useful tools for concurrency in Ruby. Neither of them, however, can improve the performance of pure computations. For that you would need true parallelism, which doesn’t currently exist in Ruby (here we’re considering MRI, the default implementation).

This may be changing in Ruby 3 with the coming of a new feature called “Guilds.” Details are still hazy, but in the following sections we’ll take a look at how this work-in-progress feature promises to allow parallelism in Ruby.

How Guilds Might Work

A significant source of pain when implementing concurrent/parallel solutions is shared memory. In the section on threads, we already saw how easy it is to make a slip and write code that may seem innocuous at first glance but actually contains subtle bugs.

Koichi Sasada, the Ruby Core Team member heading the development of the new Guild feature, is hard at work designing a solution that tackles head-on the dangers of sharing memory among multiple threads. In his presentation at RubyConf 2018, he explains that when using guilds, one won’t be able to simply share mutable objects. The main idea is to prevent data races by only allowing immutable objects to be shared between different guilds.

Specialized data structures will be introduced in Ruby to allow some measure of shared memory between guilds, but the details of exactly how this will work are still not fully fleshed out. There will also be an API for copying or moving objects between guilds, plus a safeguard that prevents an object from being referenced after it has been moved to a different guild.

Using Guilds to Explore a Common Scenario

There are many situations where you might wish you could speed up computations by running them in parallel. Let’s imagine that we have to calculate the average and the median of the same dataset.

The example below shows how we might do this with guilds. Keep in mind that this code doesn’t currently work and might never work, even after guilds are released.
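The example listing is missing from this copy, and since no Guild API was ever finalized, any reconstruction is speculative. The sketch below invents plausible names (Guild.new, #result) purely for illustration; it does not run on any Ruby release:

```ruby
# Hypothetical Guild API: illustration only, not runnable on any Ruby release.
dataset = (1..1_000_000).to_a.freeze # frozen, hence shareable across guilds

guild_average = Guild.new(dataset) do |numbers|
  numbers.sum / numbers.size.to_f
end

guild_median = Guild.new(dataset) do |numbers|
  sorted = numbers.sort
  middle = sorted.size / 2
  sorted.size.odd? ? sorted[middle] : (sorted[middle - 1] + sorted[middle]) / 2.0
end

# Both computations would run in parallel; collecting each result would
# block until the corresponding guild finishes.
average = guild_average.result
median  = guild_median.result
```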

Summing It Up

Concurrency and parallelism are not the main strengths of Ruby, but even in this department the language does offer tools that are probably good enough to deal with most use cases. Ruby 3 is coming and it seems things will get considerably better with the introduction of the Guild primitive. In my opinion, Ruby is still a very suitable choice in many situations, and its community is clearly hard at work making the language even better. Let’s keep an ear to the ground for what’s coming!

Originally published at https://www.honeybadger.io.
