stream.py is a somewhat experimental but lovely take on parallel Python (via threads or processes) based on dataflow programming ideas. A URL retriever is provided as an example:
Since it's short:

    #!/usr/bin/env python
    """
    Demonstrate the use of a ThreadPool to simultaneously retrieve web pages.
    """
    import urllib2
    from stream import ThreadPool

    URLs = [
        'http://www.cnn.com/',
        'http://www.bbc.co.uk/',
        'http://www.economist.com/',
        'http://nonexistant.website.at.baddomain/',
        'http://slashdot.org/',
        'http://reddit.com/',
        'http://news.ycombinator.com/',
    ]

    def retrieve(urls, timeout=30):
        for url in urls:
            yield url, urllib2.urlopen(url, timeout=timeout).read()

    if __name__ == '__main__':
        retrieved = URLs >> ThreadPool(retrieve, poolsize=4)
        for url, content in retrieved:
            print '%r is %d bytes' % (url, len(content))
        for url, exception in retrieved.failure:
            print '%r failed: %s' % (url, exception)
To save pages to disk instead, you only need to replace urllib2.urlopen(url, timeout=timeout).read() with urlretrieve....
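For comparison, the same pattern (a fixed-size thread pool fetching URLs in parallel, with successes and failures collected separately) can be sketched with the standard library's concurrent.futures on Python 3. This is not part of stream.py; the helper name retrieve_all and the injected fetch callable are assumptions made for illustration:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def retrieve_all(urls, fetch, poolsize=4):
    """Fetch every URL in parallel with `poolsize` worker threads.

    `fetch` is any callable taking a URL and returning its content
    (e.g. a wrapper around urllib.request.urlopen). Returns two lists:
    (url, content) pairs for successes, (url, exception) pairs for failures.
    """
    successes, failures = [], []
    with ThreadPoolExecutor(max_workers=poolsize) as pool:
        # Submit one task per URL and remember which URL each future belongs to.
        futures = {pool.submit(fetch, url): url for url in urls}
        for future in as_completed(futures):
            url = futures[future]
            try:
                # result() re-raises any exception the worker thread hit.
                successes.append((url, future.result()))
            except Exception as exc:
                failures.append((url, exc))
    return successes, failures
```

Passing the fetch function in, rather than hard-coding urlopen, keeps the pool logic testable without network access: in a test you can hand it a stub that raises for a chosen URL, mirroring the retrieved.failure channel in the stream.py example.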