concurrent.futures
- Introduced in Python 3.2
- A high-level abstraction on top of threading and multiprocessing
- Provides two classes, ThreadPoolExecutor and ProcessPoolExecutor, which implement pool management for threads and processes respectively
- Somewhat slower than using threading or multiprocessing directly; see the article 《使用Python进行并发编程-PoolExecutor篇》 for a detailed analysis
All code below is taken from: https://www.ziwenxie.site/2016/12/24/python-concurrent-futures/
Using submit
Threads and processes are driven through exactly the same interface; the example below uses multiprocessing (a thread-based sketch follows it)
# taken from: https://www.ziwenxie.site/2016/12/24/python-concurrent-futures/
from concurrent.futures import ProcessPoolExecutor  # multiprocessing
# from concurrent.futures import ThreadPoolExecutor  # multithreading
import time
def return_future_result(message):
    time.sleep(2)
    return message
pool = ProcessPoolExecutor(max_workers=2)  # create a pool that runs at most 2 tasks at a time
future1 = pool.submit(return_future_result, "hello")  # submit a task to the pool
future2 = pool.submit(return_future_result, "world")  # submit a task to the pool
print(future1.done())  # check whether task1 has finished
time.sleep(3)
print(future2.done())  # check whether task2 has finished
print(future1.result())  # fetch the result returned by task1
print(future2.result())  # fetch the result returned by task2
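As noted above, the thread-based version differs only in which executor class is constructed. A minimal sketch (not from the original article) of the same submit pattern with ThreadPoolExecutor, using a with statement so the pool is shut down automatically:
from concurrent.futures import ThreadPoolExecutor
import time

def return_future_result(message):
    time.sleep(2)
    return message

with ThreadPoolExecutor(max_workers=2) as pool:
    future1 = pool.submit(return_future_result, "hello")
    future2 = pool.submit(return_future_result, "world")
    print(future1.result())  # blocks until task1 finishes, then prints 'hello'
    print(future2.result())  # blocks until task2 finishes, then prints 'world'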
Using map
- map(func, *iterables, timeout=None): iterables supplies the arguments; items are taken from it one at a time and func is called on each
import concurrent.futures
import urllib.request
URLS = ['http://httpbin.org', 'http://example.com/', 'https://api.github.com/']
def load_url(url):
    with urllib.request.urlopen(url, timeout=60) as conn:
        return conn.read()
# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    for url, data in zip(URLS, executor.map(load_url, URLS)):
        print('%r page is %d bytes' % (url, len(data)))
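ProcessPoolExecutor exposes the same map interface. A minimal sketch (not from the original article, modelled on the standard-library documentation's prime-checking example) that maps a CPU-bound function over a list of numbers; the __main__ guard is needed because worker processes re-import the module:
from concurrent.futures import ProcessPoolExecutor

PRIMES = [112272535095293, 112582705942171, 115280095190773]

def is_prime(n):
    if n < 2:
        return False
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=3) as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))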
Using wait to block
- Similar to join
- return_when accepts one of three values: FIRST_COMPLETED, FIRST_EXCEPTION, ALL_COMPLETED; the default is ALL_COMPLETED, i.e. block until all processes/threads have finished
from concurrent.futures import ThreadPoolExecutor, wait
...  # return_after_random_secs(x): a helper that sleeps a random number of seconds, then returns (definition elided)
pool = ThreadPoolExecutor(5)
futures = []
for x in range(5):
    futures.append(pool.submit(return_after_random_secs, x))
print(wait(futures))  # blocks until all futures finish, then prints the (done, not_done) sets
# print(wait(futures, timeout=None, return_when='FIRST_COMPLETED'))
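To see the effect of the other return_when options, a minimal sketch (not from the original article; the sleep_and_return helper is illustrative) that first waits only for the earliest-finishing future and then for all of them. wait returns a named tuple of (done, not_done) sets:
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
import time

def sleep_and_return(secs):
    time.sleep(secs)
    return secs

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(sleep_and_return, s) for s in (1, 3, 5)]
    done, not_done = wait(futures, return_when=FIRST_COMPLETED)
    print(len(done), len(not_done))  # typically 1 done, 2 not done after ~1 second
    done, not_done = wait(futures)   # default ALL_COMPLETED: block until everything finishes
    print(len(done), len(not_done))  # 3 done, 0 not done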