How To Use ThreadPoolExecutor in Python 3

The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

Introduction

Python threads are a form of parallelism that allow your program to run multiple procedures at once. Parallelism in Python can also be achieved using multiple processes, but threads are particularly well suited to speeding up applications that involve significant amounts of I/O (input/output).

Example I/O-bound operations include making web requests and reading data from files. In contrast to I/O-bound operations, CPU-bound operations (like performing math with the Python standard library) will not benefit much from Python threads.

Python 3 includes the ThreadPoolExecutor utility for executing code in a thread.

In this tutorial, we will use ThreadPoolExecutor to make network requests expediently. We’ll define a function well suited for invocation within threads, use ThreadPoolExecutor to execute that function, and process results from those executions.

For this tutorial, we’ll make network requests to check for the existence of Wikipedia pages.

Note: The fact that I/O-bound operations benefit more from threads than CPU-bound operations is caused by an idiosyncrasy in Python called the global interpreter lock. If you’d like, you can learn more about Python’s global interpreter lock in the official Python documentation.

Prerequisites

To get the most out of this tutorial, it is recommended to have some familiarity with programming in Python and a local Python programming environment with requests installed.

You can review tutorials on programming in Python and on setting up a local Python programming environment for the necessary background information. To install the requests package, run the following command; a quick way to confirm the installation is shown after it:

  • pip install --user requests==2.23.0
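
The check below is only a convenience and is not part of the tutorial's code; running it in a Python interpreter should print the version of requests you installed:

import requests

print(requests.__version__)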

Step 1 — Defining a Function to Execute in Threads

Let’s start by defining a function that we’d like to execute with the help of threads.

Using nano or your preferred text editor/development environment, you can open this file:

  • nano wiki_page_function.py

For this tutorial, we’ll write a function that determines whether or not a Wikipedia page exists:

wiki_page_function.py
import requests

def get_wiki_page_existence(wiki_page_url, timeout=10):
    response = requests.get(url=wiki_page_url, timeout=timeout)

    page_status = "unknown"
    if response.status_code == 200:
        page_status = "exists"
    elif response.status_code == 404:
        page_status = "does not exist"

    return wiki_page_url + " - " + page_status

The get_wiki_page_existence function accepts two arguments: a URL to a Wikipedia page (wiki_page_url), and a timeout number of seconds to wait for a response from that URL.

get_wiki_page_existence uses the requests package to make a web request to that URL. Depending on the status code of the HTTP response, a string is returned that describes whether or not the page exists. Different status codes represent different outcomes of an HTTP request. This procedure assumes that a 200 “success” status code means the Wikipedia page exists, and a 404 “not found” status code means the Wikipedia page does not exist.

As described in the Prerequisites section, you’ll need the requests package installed to run this function.

Let’s try running the function by adding the url and function call following the get_wiki_page_existence function:

wiki_page_function.py
. . .
url = "https://en.wikipedia.org/wiki/Ocean"
print(get_wiki_page_existence(wiki_page_url=url))

Once you’ve added the code, save and close the file.

If we run this code:

  • python wiki_page_function.py

We’ll see output like the following:

Output
https://en.wikipedia.org/wiki/Ocean - exists

Calling the get_wiki_page_existence function with a valid Wikipedia page returns a string that confirms the page does, in fact, exist.
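
As a quick check of the other branch, you can point the function at a page that does not exist (the same URL is reused in Step 2); a 404 response produces the "does not exist" message:

url = "https://en.wikipedia.org/wiki/this_page_does_not_exist"
print(get_wiki_page_existence(wiki_page_url=url))
# Expected output: https://en.wikipedia.org/wiki/this_page_does_not_exist - does not exist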

Warning: In general, it is not safe to share Python objects or state between threads without taking special care to avoid concurrency bugs. When defining a function to execute in a thread, it is best to define a function that performs a single job and does not share or publish state to other threads. get_wiki_page_existence is an example of such a function.
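
If you do need threads to update shared state, one common approach (not used in this tutorial) is to protect the shared object with a threading.Lock. The following is a minimal sketch of that pattern; the success_count variable and count_page function are hypothetical and only illustrate the idea:

import threading
import concurrent.futures

success_count = 0
success_count_lock = threading.Lock()

def count_page(wiki_page_url):
    # ... perform the I/O-bound work for wiki_page_url here ...
    global success_count
    with success_count_lock:  # only one thread updates the counter at a time
        success_count += 1

with concurrent.futures.ThreadPoolExecutor() as executor:
    executor.map(count_page, ["https://en.wikipedia.org/wiki/Ocean"] * 5)

print(success_count)  # 5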

Step 2 — Using ThreadPoolExecutor to Execute a Function in Threads

Now that we have a function well suited to invocation with threads, we can use ThreadPoolExecutor to perform multiple invocations of that function expediently.

Let’s add the following highlighted code to your program in wiki_page_function.py:

wiki_page_function.py
import requests
import concurrent.futures

def get_wiki_page_existence(wiki_page_url, timeout=10):
    response = requests.get(url=wiki_page_url, timeout=timeout)

    page_status = "unknown"
    if response.status_code == 200:
        page_status = "exists"
    elif response.status_code == 404:
        page_status = "does not exist"

    return wiki_page_url + " - " + page_status

wiki_page_urls = [
    "https://en.wikipedia.org/wiki/Ocean",
    "https://en.wikipedia.org/wiki/Island",
    "https://en.wikipedia.org/wiki/this_page_does_not_exist",
    "https://en.wikipedia.org/wiki/Shark",
]
with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = []
    for url in wiki_page_urls:
        futures.append(executor.submit(get_wiki_page_existence, wiki_page_url=url))
    for future in concurrent.futures.as_completed(futures):
        print(future.result())

Let’s take a look at how this code works:

  • concurrent.futures is imported to give us access to ThreadPoolExecutor.

  • A with statement is used to create a ThreadPoolExecutor instance executor that will promptly clean up threads upon completion.

  • Four jobs are submitted to the executor: one for each of the URLs in the wiki_page_urls list.

  • Each call to submit returns a Future instance that is stored in the futures list.

  • The as_completed function waits for each Future get_wiki_page_existence call to complete so we can print its result.

If we run this program again, with the following command:

  • python wiki_page_function.py

We’ll see output like the following:

Output
https://en.wikipedia.org/wiki/Island - exists
https://en.wikipedia.org/wiki/Ocean - exists
https://en.wikipedia.org/wiki/this_page_does_not_exist - does not exist
https://en.wikipedia.org/wiki/Shark - exists

This output makes sense: 3 of the URLs are valid Wikipedia pages, and one of them, this_page_does_not_exist, is not. Note that your output may be ordered differently than this output. The concurrent.futures.as_completed function in this example returns results as soon as they are available, regardless of what order the jobs were submitted in.
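
If you would rather receive results in the order the jobs were submitted, one option is to iterate over the futures list directly instead of using as_completed; each result() call blocks until that particular future finishes. A small sketch of that variation, reusing wiki_page_urls and get_wiki_page_existence from above:

with concurrent.futures.ThreadPoolExecutor() as executor:
    # Keep the futures in the same order the jobs were submitted.
    futures = [
        executor.submit(get_wiki_page_existence, wiki_page_url=url)
        for url in wiki_page_urls
    ]
    # Iterating the list (rather than as_completed) preserves submission order.
    for future in futures:
        print(future.result())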

Step 3 — Processing Exceptions From Functions Run in Threads

In the previous step, get_wiki_page_existence successfully returned a value for all of our invocations. In this step, we’ll see that ThreadPoolExecutor can also raise exceptions generated in threaded function invocations.

Let’s consider the following example code block:

wiki_page_function.py
import requests
import concurrent.futures


def get_wiki_page_existence(wiki_page_url, timeout=10):
    response = requests.get(url=wiki_page_url, timeout=timeout)

    page_status = "unknown"
    if response.status_code == 200:
        page_status = "exists"
    elif response.status_code == 404:
        page_status = "does not exist"

    return wiki_page_url + " - " + page_status


wiki_page_urls = [
    "https://en.wikipedia.org/wiki/Ocean",
    "https://en.wikipedia.org/wiki/Island",
    "https://en.wikipedia.org/wiki/this_page_does_not_exist",
    "https://en.wikipedia.org/wiki/Shark",
]
with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = []
    for url in wiki_page_urls:
        futures.append(
            executor.submit(
                get_wiki_page_existence, wiki_page_url=url, timeout=0.00001
            )
        )
    for future in concurrent.futures.as_completed(futures):
        try:
            print(future.result())
        except requests.ConnectTimeout:
            print("ConnectTimeout.")

This code block is nearly identical to the one we used in Step 2, but it has two key differences:

  • We now pass timeout=0.00001 to get_wiki_page_existence. Since the requests package won’t be able to complete its web request to Wikipedia in 0.00001 seconds, it will raise a ConnectTimeout exception.

  • We catch ConnectTimeout exceptions raised by future.result() and print out a string each time we do so.

If we run the program again, we’ll see the following output:

Output
ConnectTimeout.
ConnectTimeout.
ConnectTimeout.
ConnectTimeout.

Four ConnectTimeout messages are printed—one for each of our four wiki_page_urls, since none of them were able to complete in 0.00001 seconds and each of the four get_wiki_page_existence calls raised the ConnectTimeout exception.

You’ve now seen that if a function call submitted to a ThreadPoolExecutor raises an exception, then that exception can get raised normally by calling Future.result. Calling Future.result on all your submitted invocations ensures that your program won’t miss any exceptions raised from your threaded function.
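
When many submitted calls can fail, it is often helpful to know which input each exception belongs to. One common pattern is to keep a dictionary that maps each Future back to the URL it was created for; the sketch below reuses the names from the code above, and the future_to_url dictionary is an addition for illustration:

with concurrent.futures.ThreadPoolExecutor() as executor:
    # Map each Future back to the URL it was submitted with.
    future_to_url = {
        executor.submit(get_wiki_page_existence, wiki_page_url=url, timeout=0.00001): url
        for url in wiki_page_urls
    }
    for future in concurrent.futures.as_completed(future_to_url):
        try:
            print(future.result())
        except requests.ConnectTimeout:
            print("ConnectTimeout for " + future_to_url[future])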

Step 4 — Comparing Execution Time With and Without Threads

Now let’s verify that using ThreadPoolExecutor actually makes your program faster.

First, let’s time get_wiki_page_existence if we run it without threads:

wiki_page_function.py
import time
import requests
import concurrent.futures


def get_wiki_page_existence(wiki_page_url, timeout=10):
    response = requests.get(url=wiki_page_url, timeout=timeout)

    page_status = "unknown"
    if response.status_code == 200:
        page_status = "exists"
    elif response.status_code == 404:
        page_status = "does not exist"

    return wiki_page_url + " - " + page_status

wiki_page_urls = ["https://en.wikipedia.org/wiki/" + str(i) for i in range(50)]

print("Running without threads:")
without_threads_start = time.time()
for url in wiki_page_urls:
    print(get_wiki_page_existence(wiki_page_url=url))
print("Without threads time:", time.time() - without_threads_start)

In the code example we call our get_wiki_page_existence function with fifty different Wikipedia page URLs one by one. We use the time.time() function to print out the number of seconds it takes to run our program.

If we run this code with the same command as before, we’ll see output like the following:

Output
Running without threads:
https://en.wikipedia.org/wiki/0 - exists
https://en.wikipedia.org/wiki/1 - exists
. . .
https://en.wikipedia.org/wiki/48 - exists
https://en.wikipedia.org/wiki/49 - exists
Without threads time: 5.803015232086182

Entries 2–47 in this output have been omitted for brevity.

The number of seconds printed after Without threads time will be different when you run it on your machine—that’s OK, you are just getting a baseline number to compare with a solution that uses ThreadPoolExecutor. In this case, it was ~5.803 seconds.

Let’s run the same fifty Wikipedia URLs through get_wiki_page_existence, but this time using ThreadPoolExecutor:

wiki_page_function.py
import time
import requests
import concurrent.futures


def get_wiki_page_existence(wiki_page_url, timeout=10):
    response = requests.get(url=wiki_page_url, timeout=timeout)

    page_status = "unknown"
    if response.status_code == 200:
        page_status = "exists"
    elif response.status_code == 404:
        page_status = "does not exist"

    return wiki_page_url + " - " + page_status
wiki_page_urls = ["https://en.wikipedia.org/wiki/" + str(i) for i in range(50)]

print("Running threaded:")
threaded_start = time.time()
with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = []
    for url in wiki_page_urls:
        futures.append(executor.submit(get_wiki_page_existence, wiki_page_url=url))
    for future in concurrent.futures.as_completed(futures):
        print(future.result())
print("Threaded time:", time.time() - threaded_start)

The code is the same code we created in Step 2, only with the addition of some print statements that show us the number of seconds it takes to execute our code.

If we run the program again, we’ll see the following:

Output
Running threaded:
https://en.wikipedia.org/wiki/1 - exists
https://en.wikipedia.org/wiki/0 - exists
. . .
https://en.wikipedia.org/wiki/48 - exists
https://en.wikipedia.org/wiki/49 - exists
Threaded time: 1.2201685905456543

Again, the number of seconds printed after Threaded time will be different on your computer (as will the order of your output).

You can now compare the execution time for fetching the fifty Wikipedia page URLs with and without threads.

On the machine used in this tutorial, without threads took ~5.803 seconds, and with threads took ~1.220 seconds. Our program ran significantly faster with threads.
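
By default, ThreadPoolExecutor chooses the number of worker threads for you (in recent Python versions the default is derived from the machine's CPU count). For I/O-bound work like these web requests, you can experiment with the max_workers argument; how much it helps depends on your network and on how many concurrent requests the remote server tolerates. For example, the following variation allows up to 20 requests in flight at once (20 is an arbitrary choice for illustration):

with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
    futures = [
        executor.submit(get_wiki_page_existence, wiki_page_url=url)
        for url in wiki_page_urls
    ]
    for future in concurrent.futures.as_completed(futures):
        print(future.result())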

Conclusion

In this tutorial, you have learned how to use the ThreadPoolExecutor utility in Python 3 to efficiently run code that is I/O bound. You created a function well suited to invocation within threads, learned how to retrieve both output and exceptions from threaded executions of that function, and observed the performance boost gained by using threads.

From here you can learn more about other concurrency functions offered by the concurrent.futures module.
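
As one example of those functions, Executor.map offers a more compact way to apply the same function to a list of inputs: it returns results in input order and re-raises any exception when you iterate over the results. A minimal sketch, assuming the get_wiki_page_existence function and wiki_page_urls list defined earlier in this tutorial:

with concurrent.futures.ThreadPoolExecutor() as executor:
    # map() submits one call per URL and yields results in input order.
    for result in executor.map(get_wiki_page_existence, wiki_page_urls):
        print(result)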

Translated from: https://www.digitalocean.com/community/tutorials/how-to-use-threadpoolexecutor-in-python-3
