Let's say I have this sample code:
x = foo1(something1)
y = foo2(something2)
z = max(x, y)
I want to improve the execution time of this code by using threads (hoping it helps). I'd like to keep things as simple as possible, so basically what I'd like to do is create two threads that run at the same time and compute foo1 and foo2 respectively.
I've been reading about threads, but I find them a little tricky, and I can't spend too much time on them just to do such a simple thing.
Solution
Assuming foo1 or foo2 is CPU-bound, threading doesn't improve the execution time... in fact, it normally makes it worse... for more information, see David Beazley's PyCon 2010 presentation on the Global Interpreter Lock (the PyCon 2010 GIL slides). The presentation is very informative; I highly recommend it to anyone trying to distribute load across CPU cores.
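For completeness, the two-thread version the question asks for looks like the sketch below. Because of the GIL it only pays off when foo1/foo2 spend their time waiting (on I/O, sleeps, network calls); the bodies of foo1/foo2 here are placeholder stand-ins, since the question doesn't show them:

```python
import threading

def foo1(n):
    # placeholder stand-in for the real foo1 (pure-Python CPU work)
    return sum(range(n))

def foo2(n):
    # placeholder stand-in for the real foo2
    return sum(range(2 * n))

results = {}

def run(func, arg, key):
    # threads can't return values directly, so store the result in a dict
    results[key] = func(arg)

t1 = threading.Thread(target=run, args=(foo1, 1000, "x"))
t2 = threading.Thread(target=run, args=(foo2, 1000, "y"))
t1.start(); t2.start()
t1.join(); t2.join()

z = max(results["x"], results["y"])
```

With CPU-bound bodies like these, this runs no faster than calling foo1 and foo2 sequentially; the threads take turns holding the GIL.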
The best way to improve performance here is the multiprocessing module.
Assuming there is no shared state required between foo1() and foo2(), do this to improve execution performance...
from multiprocessing import Process, Queue
import time

def foo1(queue, arg1):
    # Simulate work, then report the elapsed time through the queue
    print("Got arg1=%s" % arg1)
    start = time.time()
    while arg1 > 0:
        arg1 -= 1
        time.sleep(0.01)
    # return the output of the call through the Queue
    queue.put(time.time() - start)

def foo2(queue, arg1):
    foo1(queue, 2 * arg1)

# The __main__ guard is required on Windows/macOS, where child
# processes re-import this module
if __name__ == '__main__':
    _start = time.time()
    my_q1 = Queue()
    my_q2 = Queue()
    # The equivalent of x = foo1(50) in OP's code
    p1 = Process(target=foo1, args=[my_q1, 50])
    # The equivalent of y = foo2(50) in OP's code
    p2 = Process(target=foo2, args=[my_q2, 50])
    p1.start(); p2.start()
    p1.join(); p2.join()
    # Get return values from each Queue
    x = my_q1.get()
    y = my_q2.get()
    print("RESULT", x, y)
    print("TOTAL EXECUTION TIME", time.time() - _start)
On my machine, this results in the output below. Note that the total is about 1.0 second (the duration of the slower call, foo2) rather than the roughly 1.5 seconds a sequential run of both calls would take:
mpenning@mpenning-T61:~$ python test.py
Got arg1=100
Got arg1=50
RESULT 0.50578212738 1.01011300087
TOTAL EXECUTION TIME 1.02570295334
mpenning@mpenning-T61:~$
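On Python 3, the same fan-out/fan-in pattern is simpler with concurrent.futures, which manages the process lifecycle and result passing for you. A minimal sketch, again with toy stand-ins for foo1/foo2 (the real bodies aren't shown in the question):

```python
from concurrent.futures import ProcessPoolExecutor

def foo1(n):
    # toy CPU-bound stand-in for the real foo1
    return sum(i * i for i in range(n))

def foo2(n):
    # toy CPU-bound stand-in for the real foo2
    return sum(i * i for i in range(2 * n))

def parallel_max(n):
    # Submit both calls to worker processes, then block on the results
    with ProcessPoolExecutor(max_workers=2) as pool:
        fx = pool.submit(foo1, n)  # the equivalent of x = foo1(...)
        fy = pool.submit(foo2, n)  # the equivalent of y = foo2(...)
        return max(fx.result(), fy.result())

if __name__ == '__main__':
    print(parallel_max(100))
```

No explicit Queue is needed: `submit()` returns a Future, and `Future.result()` delivers each function's return value back to the parent process.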