I agree with @aix, multiprocessing is definitely the way to go. Regardless, you will be I/O bound: no matter how many parallel processes you run, you can only read the data so fast. But there can easily be some speedup.
Consider the following (input/ is a directory containing several .txt files from Project Gutenberg):

import os.path
from multiprocessing import Pool
import sys
import time

def process_file(name):
    ''' Process one file: count number of lines and words '''
    linecount = 0
    wordcount = 0
    with open(name, 'r') as inp:
        for line in inp:
            linecount += 1
            wordcount += len(line.split(' '))
    return name, linecount, wordcount

def process_files_parallel(arg, dirname, names):
    ''' Process each file in parallel via Pool.map() '''
    pool = Pool()    # one worker process per core by default
    results = pool.map(process_file,
                       [os.path.join(dirname, name) for name in names])

def process_files(arg, dirname, names):
    ''' Process each file sequentially via map() '''
    results = map(process_file,
                  [os.path.join(dirname, name) for name in names])

if __name__ == '__main__':
    start = time.time()
    os.path.walk('input/', process_files, None)
    print "process_files()", time.time() - start

    start = time.time()
    os.path.walk('input/', process_files_parallel, None)
    print "process_files_parallel()", time.time() - start
When I run this on my dual-core machine, there is a noticeable (but not 2x) speedup:

$ python process_files.py
process_files() 1.71218085289
process_files_parallel() 1.28905105591
If your files are small enough to fit in memory, and you have a lot of processing to be done that isn't I/O bound, then you should see even better improvement.
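
For example, here is a hypothetical CPU-bound variant (the cpu_heavy() function and the iteration count are made up for illustration): repeatedly hashing each file's contents keeps the cores busy after the single read, so Pool.map() should scale much closer to the core count. Sketch in Python 3:

import glob
import hashlib
import time
from multiprocessing import Pool

def cpu_heavy(name):
    ''' Hypothetical CPU-bound work: read once, then hash repeatedly. '''
    with open(name, 'rb') as inp:
        data = inp.read()              # small file, fits in memory
    digest = data
    for _ in range(20000):             # arbitrary amount of pure-CPU work
        digest = hashlib.sha256(digest).digest()
    return name, digest

if __name__ == '__main__':
    files = glob.glob('input/*.txt')

    start = time.time()
    results = list(map(cpu_heavy, files))
    print("sequential:", time.time() - start)

    start = time.time()
    with Pool() as pool:
        results = pool.map(cpu_heavy, files)
    print("parallel:", time.time() - start)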