Thread Pools and Process Pools
Benefits of using a thread pool
1. Better performance: the overhead of repeatedly creating and destroying threads is removed, and thread resources are reused.
2. Typical use case: well suited to bursts of many requests, or workloads that need many threads but where each individual task is short-lived.
3. Protection: it keeps the system from becoming overloaded and slow to respond because too many threads were created.
4. Simpler code: the thread-pool API is more concise than creating and running threads yourself, as the sketch after this list shows.
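As a rough illustration of the last point, here is a minimal sketch (the download function and URL list are invented for this comparison, and it borrows executor.map(), which is covered in detail later in this post) contrasting manual thread management with a thread pool:

import threading
from concurrent.futures import ThreadPoolExecutor
import requests

URLS = ['http://httpbin.org', 'http://example.com/']  # illustrative URLs

def download(url):
    return requests.get(url).status_code

# Manual approach: create, start and join every thread yourself; return
# values are discarded unless you add shared state to collect them.
threads = [threading.Thread(target=download, args=(u,)) for u in URLS]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Pool approach: the executor creates, reuses and cleans up the threads,
# and hands the return values straight back.
with ThreadPoolExecutor(max_workers=2) as executor:
    print(list(executor.map(download, URLS)))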
Concurrent programming with concurrent.futures
The concurrent.futures module wraps the threading and multiprocessing modules and makes it straightforward to work with pools.
concurrent.futures provides two classes, ThreadPoolExecutor and ProcessPoolExecutor, both derived from Executor, used to create thread pools and process pools respectively. Both accept a max_workers argument specifying how many threads or processes to create. For ProcessPoolExecutor, max_workers may be omitted, in which case the number of worker processes defaults to the number of CPUs on the machine.
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import requests

def load_url(url):
    return requests.get(url)

url = 'http://httpbin.org'
# Create a pool with a single worker thread and submit one task to it.
executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(load_url, url)
Executor defines a submit() method, which schedules a callable task for execution and returns a Future instance. The Future's done() method reports whether the task has finished and does not block; result() returns the task's return value and does block until the task completes.
print(future.done())
print(future.result().status_code)
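A Future has a couple of other helpers worth knowing. The sketch below is a hedged illustration that reuses the future from the example above, with an arbitrary 5-second timeout: result() also accepts a timeout, and exception() exposes a task's exception without re-raising it.

from concurrent.futures import TimeoutError

try:
    # result() waits at most 5 seconds, then raises TimeoutError
    # if the task is still running.
    print(future.result(timeout=5).status_code)
except TimeoutError:
    print('task did not finish within 5 seconds')

# exception() returns the exception raised inside the task, or None
# if the task succeeded, instead of re-raising it like result() does.
print(future.exception())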
The submit() method only handles a single task at a time; to run several tasks concurrently, use map() or as_completed().
-
map
URLS = ['http://httpbin.org', 'http://example.com/', 'https://api.github.com/']

def load_url(url):
    return requests.get(url)

with ThreadPoolExecutor(max_workers=3) as executor:
    for url, data in zip(URLS, executor.map(load_url, URLS)):
        print('%r page status_code %s' % (url, data.status_code))
Result:
'http://httpbin.org' page status_code 200
'http://example.com/' page status_code 200
'https://api.github.com/' page status_code 200
The map() method takes two arguments: the first is the function to run, the second an iterable; the function is applied to every element of that iterable. Results are yielded in the order of the input iterable, not in the order the tasks happen to finish.
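A small sketch to make that ordering concrete (the function and sleep times are invented for this example): the task submitted first finishes last, yet map() still yields results in input order.

import time
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    time.sleep(1.0 / n)  # larger inputs sleep less, so they finish first
    return n * n

with ThreadPoolExecutor(max_workers=3) as executor:
    # Prints [1, 4, 9]: input order, not completion order.
    print(list(executor.map(slow_square, [1, 2, 3])))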
-
as_completed
as_completed() returns a generator over the given Futures. It blocks while no task has completed, yields each Future as it finishes, and stops once all tasks are done.
from concurrent.futures import as_completed

def load_url(url):
    return url, requests.get(url).status_code

with ThreadPoolExecutor(max_workers=3) as executor:
    tasks = [executor.submit(load_url, url) for url in URLS]
    for future in as_completed(tasks):
        print(future.result())
Result:
('http://example.com/', 200)
('http://httpbin.org', 200)
('https://api.github.com/', 200)
As you can see, the results do not follow the order of the input sequence: whichever task finishes first is handed to the main thread first.
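Since result() re-raises any exception that was raised inside the task, a try/except around it is a common pattern when looping with as_completed(). A minimal sketch, with a deliberately unreachable URL added to force a failure:

from concurrent.futures import ThreadPoolExecutor, as_completed
import requests

URLS = ['http://httpbin.org', 'http://example.com/', 'http://nonexistent.invalid/']

def load_url(url):
    return url, requests.get(url, timeout=5).status_code

with ThreadPoolExecutor(max_workers=3) as executor:
    tasks = {executor.submit(load_url, url): url for url in URLS}
    for future in as_completed(tasks):
        try:
            print(future.result())
        except requests.RequestException as exc:
            # The exception raised inside load_url surfaces here.
            print('%s failed: %s' % (tasks[future], exc))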
-
wait
The wait() method blocks the main thread until the given condition is met. There are three conditions: ALL_COMPLETED, FIRST_COMPLETED, and FIRST_EXCEPTION.
-
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor, wait, ALL_COMPLETED, FIRST_COMPLETED
from concurrent.futures import as_completed
import requests

URLS = ['http://httpbin.org', 'http://example.com/', 'https://api.github.com/']

def load_url(url):
    requests.get(url)
    print(url)

with ThreadPoolExecutor(max_workers=3) as executor:
    tasks = [executor.submit(load_url, url) for url in URLS]
    wait(tasks, return_when=ALL_COMPLETED)
    print('all_done')
Output:
http://example.com/
http://httpbin.org
https://api.github.com/
all_done
As you can see, the call blocks until all tasks have completed.
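The other two conditions are used the same way. Here is a hedged sketch of FIRST_COMPLETED, which unblocks as soon as any one task finishes while the remaining tasks keep running:

from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
import requests

URLS = ['http://httpbin.org', 'http://example.com/', 'https://api.github.com/']

def load_url(url):
    return requests.get(url)

with ThreadPoolExecutor(max_workers=3) as executor:
    tasks = [executor.submit(load_url, url) for url in URLS]
    # wait() returns two sets of futures: those already done and those not.
    done, not_done = wait(tasks, return_when=FIRST_COMPLETED)
    print('%d task(s) done, %d still running' % (len(done), len(not_done)))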
ProcessPoolExecutor
ProcessPoolExecutor is used in essentially the same way as ThreadPoolExecutor.
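A minimal sketch of the process-pool variant, assuming a CPU-bound function invented for this example; note that the pool should be created under the __main__ guard, since worker processes may re-import the module:

from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    # Deliberately CPU-bound work; each worker runs in its own process,
    # so the computation is not serialised by the GIL.
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

if __name__ == '__main__':
    # With max_workers omitted, the pool defaults to the number of CPUs.
    with ProcessPoolExecutor() as executor:
        limits = [10000, 20000, 30000]
        for limit, primes in zip(limits, executor.map(count_primes, limits)):
            print('%d primes below %d' % (primes, limit))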