I am using a commercial application called Abaqus/CAE [1] with a built-in Python 2.6 interpreter and API. I've developed a long-running script that I'm attempting to split into simultaneous, independent tasks using Python's multiprocessing module. However, once spawned, the processes just hang.
The script itself uses various objects/methods available only through Abaqus's proprietary cae module, which can only be loaded by first starting the Python bundled with Abaqus/CAE; that interpreter then executes my script with Python's execfile.
To try to get multiprocessing working, I've attempted to run a script that avoids accessing any Abaqus objects and instead just performs a calculation and prints the result to a file [2]. This way, I can run the same script from the regular system Python installation as well as from the Python bundled with Abaqus.
The example code below works as expected when run from the command line using either of the following:
C:\some\path>python multi.py
C:\some\path>abaqus python multi.py
This spawns the new processes, and each runs the function and writes the result to file as expected. However, when called from the Abaqus/CAE Python environment using:
abaqus cae noGUI=multi.py
Abaqus then starts up, automatically imports its own proprietary modules, and executes my file using:
execfile("multi.py", __main__.__dict__)
where the global namespace argument __main__.__dict__ is set up by Abaqus. Abaqus then checks out licenses for each process successfully, spawns the new processes, and ... and that's it. The processes are created, but they all hang and do nothing. There are no error messages.
What might be causing the hang-up, and how can I fix it? Is there an environment variable that must be set? Are there other commercial systems that use a similar procedure that I can learn from/emulate?
Note that any solution must be available in the Python 2.6 standard library.
System details: Windows 10 64-bit, Python 2.6, Abaqus/CAE 6.12 or 6.14
Example Test Script:
# multi.py
import multiprocessing
import time

def fib(n):
    # Iteratively compute the n-th Fibonacci number.
    a, b = 0, 1
    for i in range(n):
        a, b = a + b, a
    return a

def workerfunc(num):
    # Each worker writes its output to its own file, e.g. worker_0.txt.
    fname = ''.join(('worker_', str(num), '.txt'))
    with open(fname, 'w') as f:
        f.write('Starting Worker {0}\n'.format(num))
        count = 0
        while count < 1000:
            count += 1
            a = fib(20)
            line = ''.join((str(a), '\n'))
            f.write(line)
        f.write('End Worker {0}\n'.format(num))

if __name__ == '__main__':
    jobs = []
    for i in range(2):
        p = multiprocessing.Process(target=workerfunc, args=(i,))
        jobs.append(p)
        print 'starting', p
        p.start()
        print 'done starting', p
    for j in jobs:
        print 'joining', j
        j.join()
        print 'done joining', j
[1] A widely known finite element analysis package.
[2] The script is a blend of a fairly standard Python function for fib() and examples from PyMOTW.
Solution
I have to write an answer as I cannot comment yet.
What I can imagine as a reason is that Python multiprocessing spawns a whole new process with its own, non-shared memory. So if you create an object in your script and then start a new process, that new process contains a copy of the memory, and you have two objects that can go in different directions. When something of Abaqus is present in the original Python process (which I suspect), that gets copied too, and this copy could cause such behaviour.
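As a minimal, standalone sketch of that non-shared-memory point (an illustration of how multiprocessing isolates state, not the Abaqus case itself): a module-level object modified in a child process stays unchanged in the parent.

# sketch.py -- illustration only: each process works on its own copy of the state
import multiprocessing

state = []  # module-level object living in the parent process

def worker():
    # The child has its own copy of the module's state (on Windows the child
    # re-imports the module), so this append is invisible to the parent.
    state.append('modified in child')
    print 'child sees:', state

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()
    print 'parent sees:', state  # still [] -- memory is not shared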
As a solution, I think you could extend Python with C (which is able to use multiple cores in a single process) and use threads there.
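As a hedged sketch of the thread-based direction (assuming workerfunc() from multi.py above; not a confirmed fix): replacing the multiprocessing main block with threads from the Python 2.6 standard library keeps everything inside the single Abaqus process, so no extra interpreter is launched. Because of the GIL, though, the fib() work would only use multiple cores if the heavy part were moved into a C extension that releases the GIL, as suggested above.

# Thread-based variant of the __main__ block in multi.py (sketch only).
import threading

if __name__ == '__main__':
    jobs = []
    for i in range(2):
        # workerfunc is assumed to be the function defined in multi.py above.
        t = threading.Thread(target=workerfunc, args=(i,))
        jobs.append(t)
        t.start()
    for t in jobs:
        t.join()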