Python's multiprocessing (multi-process) module

1. Overview

multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.

The multiprocessing module supports spawning processes with an API similar to the threading module. By using subprocesses instead of threads it provides both local and remote concurrency and effectively sidesteps the Global Interpreter Lock, so the programmer can fully exploit the multiple cores of a machine. It works on both Unix and Windows.

The multiprocessing module also introduces APIs which do not have analogs in the threading module. A prime example of this is the Pool object which offers a convenient means of parallelizing the execution of a function across multiple input values, distributing the input data across processes (data parallelism). The following example demonstrates the common practice of defining such functions in a module so that child processes can successfully import that module

Beyond that, the module offers APIs that have no analog in the threading module. A prime example is Pool, which runs the same function in parallel over different input values by distributing those inputs across processes (data parallelism). For example:

from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))
#result
[1, 4, 9]

2. The Process class
2.1 The Process class

In multiprocessing, processes are spawned by creating a Process object and then calling its start() method. Process follows the API of threading.Thread.

A process is spawned by creating a Process object and calling its start() method; the API follows threading.Thread.
class multiprocessing.Process(group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None)

2.2 Instance methods:
2.2.1 The following methods and attributes are the same as in the threading module: run, start, is_alive, join, name, daemon.
In particular, regarding daemon:

Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are not Unix daemons or services, they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.

A daemonic process is not allowed to create child processes (otherwise its children would be orphaned if it were terminated when its parent exits). Also, these are not Unix daemons or services; they are ordinary processes that are terminated (and not joined) once all non-daemonic processes have exited.
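
A minimal sketch of this behavior (the worker function and timings are made up for illustration):

import multiprocessing
import time

def worker():
    time.sleep(10)
    print('never printed')  # the daemon is terminated before this line runs

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker, daemon=True)
    p.start()
    time.sleep(1)
    print('main exiting; the daemonic child is terminated, not joined')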

2.2.2 Besides the shared methods above, Process has methods and attributes of its own:
(1)pid

Return the process ID. Before the process is spawned, this will be None.

Returns the process ID; before the process is spawned, this is None.

(2)exitcode

The child’s exit code. This will be None if the process has not yet terminated. A negative value -N indicates that the child was terminated by signal N.

Exit code: the child's exit code. It is None if the process has not yet terminated; a negative value -N means the child was terminated by signal N.

(3)authkey

The process’s authentication key (a byte string).
When multiprocessing is initialized the main process is assigned a random string using os.urandom().
When a Process object is created, it will inherit the authentication key of its parent process, although this may be changed by setting authkey to another byte string.
See Authentication keys.

Authentication key: the process's authentication key (a byte string). When multiprocessing is initialized, the main process is assigned a random string using os.urandom(); when a Process object is created, it inherits its parent's authentication key, although this can be changed by setting authkey to another byte string.

(4)sentinel

A numeric handle of a system object which will become “ready” when the process ends.
You can use this value if you want to wait on several events at once using multiprocessing.connection.wait(). Otherwise calling join() is simpler.
On Windows, this is an OS handle usable with the WaitForSingleObject and WaitForMultipleObjects family of API calls. On Unix, this is a file descriptor usable with primitives from the select module

A numeric handle of a system object that becomes "ready" when the process ends. It can be used to wait on several events at once with multiprocessing.connection.wait(); otherwise calling join() is simpler.
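
To illustrate, a minimal sketch of waiting on several processes at once through their sentinels (the worker function and sleep times are invented for the example):

import multiprocessing
import time
from multiprocessing.connection import wait

def worker(seconds):
    time.sleep(seconds)

if __name__ == '__main__':
    procs = [multiprocessing.Process(target=worker, args=(n,)) for n in (1, 2)]
    for p in procs:
        p.start()
    pending = {p.sentinel: p for p in procs}
    while pending:
        for s in wait(list(pending)):  # blocks until at least one sentinel is ready
            finished = pending.pop(s)
            print(finished.name, 'finished')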

(5)terminate()

1)Terminate the process. On Unix this is done using the SIGTERM signal; on Windows TerminateProcess() is used. Note that exit handlers and finally clauses, etc., will not be executed.
2)Note that descendant processes of the process will not be terminated – they will simply become orphaned.
3)Warning:
If this method is used when the associated process is using a pipe or queue then the pipe or queue is liable to become corrupted and may become unusable by other process. Similarly, if the process has acquired a lock or semaphore etc. then terminating it is liable to cause other processes to deadlock.

Terminate the process: on Unix this is done with the SIGTERM signal; on Windows TerminateProcess() is used. Note that exit handlers, finally clauses, etc. will not be executed.

Descendant processes of the process will not be terminated; they simply become orphaned.

Warning: if the associated process is using a Pipe or Queue when terminated, the pipe or queue is liable to become corrupted and unusable by other processes; similarly, if the process holds a lock or semaphore, terminating it is liable to deadlock other processes.

(6)kill()

Same as terminate() but using the SIGKILL signal on Unix.

(7)close()

Close the Process object, releasing all resources associated with it. ValueError is raised if the underlying process is still running. Once close() returns successfully, most other methods and attributes of the Process object will raise ValueError.

Closes the Process object and releases all resources associated with it. ValueError is raised if the underlying process is still running (so call it after join() or terminate()); once close() returns successfully, most other methods and attributes of the object will raise ValueError.

(8)

Note that the start(), join(), is_alive(), terminate() and exitcode methods should only be called by the process that created the process object.

Note that start(), join(), is_alive(), terminate() and exitcode should only be called by the process that created the Process object.

(9) Module exceptions

exception multiprocessing.ProcessError
The base class of all multiprocessing exceptions.

exception multiprocessing.BufferTooShort
Exception raised by Connection.recv_bytes_into() when the supplied buffer object is too small for the message read.
If e is an instance of BufferTooShort then e.args[0] will give the message as a byte string.

exception multiprocessing.AuthenticationError
Raised when there is an authentication error.

exception multiprocessing.TimeoutError
Raised by methods with a timeout when the timeout expires.

Usage example:

import multiprocessing
import time


def test1():
    print('test1 starting')
    print(multiprocessing.current_process())
    time.sleep(3)
    print('test1 ending')

def test2():
    print('test2 starting')
    # print(b)  # unlike threads, processes do not share data
    time.sleep(5)
    print('test2 ending')


if __name__ == '__main__':  # required for multiprocessing on Windows; optional on Linux
    t = time.time()
    a = multiprocessing.Process(target=test1)
    b = multiprocessing.Process(target=test2, name='shit2')

    b.daemon = True  # must be set before start()
    print(a.authkey)
    a.start()
    b.start()
    time.sleep(1)
    a.terminate()
    print(a.pid)  # None before start(); still available after terminate()
    a.join()
    # a.close()  # call close() after join() or terminate() to release resources; afterwards the object's methods can no longer be used
    print(a.exitcode)  # 0 on normal exit, None while running, negative if terminated by a signal
    print('main ending')
    print(b.exitcode)  # None: the daemonic process is still running

#result
b'\x12w\xf7N \xfb\xefy\x8b\x03A\xf2\xa9\xd5-\x11\x0f\xed3j\xc9-(Z\x0f\x0f_r\xc1\xa5a\xb2'
9260
test2 starting
test1 starting
<Process(Process-1, started)>
9260
-15
main ending
None

3. Start methods and contexts
3.1 There are three start methods
(1)spawn

The parent process starts a fresh python interpreter process. The child process will only inherit those resources necessary to run the process objects run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver.
Available on Unix and Windows. The default on Windows.

With this method the parent process starts a fresh Python interpreter process. The child inherits only the resources needed to run its run() method; in particular, unnecessary file descriptors and handles from the parent are not inherited. Starting a process this way is rather slow compared with fork or forkserver. Available on Unix and Windows; the default on Windows.

(2)fork

The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic.
Available on Unix only. The default on Unix.

The parent process uses os.fork() to fork the Python interpreter. The child, when it begins, is effectively identical to the parent and inherits all of the parent's resources. Note that safely forking a multithreaded process is problematic.
Available only on Unix, where it is the default.

(3)forkserver

When the program starts and selects the forkserver start method, a server process is started. From then on, whenever a new process is needed, the parent process connects to the server and requests that it fork a new process. The fork server process is single threaded so it is safe for it to use os.fork(). No unnecessary resources are inherited.
Available on Unix platforms which support passing file descriptors over Unix pipes.

This method first starts a server process; from then on, whenever a new process is needed, the parent connects to the server and asks it to fork one. Since the fork server process is single-threaded, it can use os.fork() safely, and no unnecessary resources are inherited.
Available on Unix platforms that support passing file descriptors over Unix pipes.

Additional note:

On Unix using the spawn or forkserver start methods will also start a semaphore tracker process which tracks the unlinked named semaphores created by processes of the program. When all processes have exited the semaphore tracker unlinks any remaining semaphores. Usually there should be none, but if a process was killed by a signal there may be some “leaked” semaphores. (Unlinking the named semaphores is a serious matter since the system allows only a limited number, and they will not be automatically unlinked until the next reboot.)

On Unix, the spawn and forkserver start methods also start a semaphore tracker process, which tracks the unlinked named semaphores created by the program's processes. When all processes have exited, the tracker unlinks any remaining semaphores. Usually there should be none, but if a process was killed by a signal, some semaphores may have "leaked". (Unlinking named semaphores is a serious matter, since the system allows only a limited number of them, and they are not automatically unlinked until the next reboot.)

3.2 Selecting a start method
(1)

To select a start method you use the set_start_method() in the if __name__ == '__main__' clause of the main module. For example:

Use set_start_method() inside the if __name__ == '__main__' clause of the main module; it should not be used more than once in a program:

import multiprocessing as mp

def foo(q):
    q.put('hello')

if __name__ == '__main__':
    mp.set_start_method('spawn')
    q = mp.Queue()
    p = mp.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
    p.join()

Alternatively, you can use get_context() to obtain a context object. Context objects have the same API as the multiprocessing module, and allow one to use multiple start methods in the same program.

(2) Alternatively, you can use get_context() to obtain a context object. Context objects have the same API as the multiprocessing module, and they allow multiple start methods to be used in the same program:

import multiprocessing as mp

def foo(q):
    q.put('hello')

if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    p = ctx.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
    p.join()

Note that objects related to one context may not be compatible with processes for a different context. In particular, locks created using the fork context cannot be passed to processes started using the spawn or forkserver start methods.
A library which wants to use a particular start method should probably use get_context() to avoid interfering with the choice of the library user

Note that objects related to one context may not be compatible with processes using a different context; in particular, locks created with the fork context cannot be passed to processes started with the spawn or forkserver methods.
A library that wants a particular start method should use get_context() so as not to interfere with the choice made by the library's user.

4. Inter-process communication
4.1 Overview:
(1)

When using multiple processes, one generally uses message passing for communication between processes and avoids having to use any synchronization primitives like locks.
For passing messages one can use Pipe() (for a connection between two processes) or a queue (which allows multiple producers and consumers).

With multiple processes, one generally uses message passing for communication and avoids synchronization primitives such as locks. For passing messages you can use Pipe() (a connection between two processes) or a queue (which allows multiple producers and consumers).

1)The Queue, SimpleQueue and JoinableQueue types are multi-producer, multi-consumer FIFO queues modelled on the queue.Queue class in the standard library. They differ in that Queue lacks the task_done() and join() methods introduced into Python 2.5’s queue.Queue class.
2)If you use JoinableQueue then you must call JoinableQueue.task_done() for each task removed from the queue or else the semaphore used to count the number of unfinished tasks may eventually overflow, raising an exception.
3)Note that one can also create a shared queue by using a manager object – see Managers.

The Queue, SimpleQueue and JoinableQueue types are multi-producer, multi-consumer FIFO queues modelled on the standard library's queue.Queue class; the difference is that Queue lacks the task_done() and join() methods.

If you use JoinableQueue, you must call JoinableQueue.task_done() for each task removed from the queue, or the semaphore counting unfinished tasks may eventually overflow and raise an exception.
A shared queue can also be created with a manager object (see Managers).

(2) Notes and warnings:

Note:
multiprocessing uses the usual queue.Empty and queue.Full exceptions to signal a timeout. They are not available in the multiprocessing namespace so you need to import them from queue.

multiprocessing uses the usual queue.Empty and queue.Full exceptions to signal timeouts; they are not in the multiprocessing namespace, so import them from queue.
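
A small sketch of where these exceptions come from, assuming a bounded queue:

import multiprocessing
from queue import Empty, Full  # the exceptions live in queue, not multiprocessing

if __name__ == '__main__':
    q = multiprocessing.Queue(1)
    q.put('a')
    try:
        q.put('b', timeout=0.1)  # the queue is full
    except Full:
        print('queue.Full raised')
    q.get()
    try:
        q.get(timeout=0.1)  # the queue is now empty
    except Empty:
        print('queue.Empty raised')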

Note:
When an object is put on a queue, the object is pickled and a background thread later flushes the pickled data to an underlying pipe. This has some consequences which are a little surprising, but should not cause any practical difficulties – if they really bother you then you can instead use a queue created with a manager.
1.After putting an object on an empty queue there may be an infinitesimal delay before the queue’s empty() method returns False and get_nowait() can return without raising queue.Empty.
2.If multiple processes are enqueuing objects, it is possible for the objects to be received at the other end out-of-order. However, objects enqueued by the same process will always be in the expected order with respect to each other.

When an object is put on a queue, it is pickled and a background thread later flushes the pickled data to the underlying pipe. This has some slightly surprising consequences; if they bother you, use a queue created with a manager instead:
1. After putting an object on an empty queue there may be an infinitesimal delay before the queue's empty() method returns False and get_nowait() can return without raising queue.Empty.
2. If multiple processes enqueue objects, they may be received out of order at the other end; objects enqueued by the same process, however, always arrive in the expected order relative to each other.

Warning:
If a process is killed using Process.terminate() or os.kill() while it is trying to use a Queue, then the data in the queue is likely to become corrupted. This may cause any other process to get an exception when it tries to use the queue later on.

Killing a process while it is using a Queue is liable to corrupt the queue's data, and other processes may then get exceptions when they try to use it.

Warning:
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See Programming guidelines.

If a child process has put items on a queue (and has not used JoinableQueue.cancel_join_thread), it will not terminate until all buffered items have been flushed to the pipe. This means that joining that process may deadlock unless you are sure everything put on the queue has been consumed; similarly, if the child is non-daemonic, the parent may hang on exit when it tries to join all its non-daemonic children.
A queue created with a manager does not have this issue.

Two mechanisms are mainly supported:
4.2 Queue
class multiprocessing.Queue([maxsize])

Returns a process shared queue implemented using a pipe and a few locks/semaphores. When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe.
The usual queue.Empty and queue.Full exceptions from the standard library’s queue module are raised to signal timeouts.

Returns a process-shared queue implemented with a pipe and a few locks/semaphores. When a process first puts an item on the queue, a feeder thread is started which transfers objects from a buffer into the pipe.
The multiprocessing queue is almost a clone of queue.Queue; queues are thread- and process-safe.

Official example:

from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print(q.get())    # prints "[42, None, 'hello']"
    p.join()

4.2.2 Queue methods
Besides the methods cloned from queue.Queue (qsize, empty, full, get, put), there are several of its own:
(1)close()

Indicate that no more data will be put on this queue by the current process. The background thread will quit once it has flushed all buffered data to the pipe. This is called automatically when the queue is garbage collected.

Indicates that no more data will be put on this queue by the current process. The background thread exits once it has flushed all buffered data to the pipe; this method is called automatically when the queue is garbage-collected.

(2)join_thread()

Join the background thread. This can only be used after close() has been called. It blocks until the background thread exits, ensuring that all data in the buffer has been flushed to the pipe.
By default if a process is not the creator of the queue then on exit it will attempt to join the queue’s background thread. The process can call cancel_join_thread() to make join_thread() do nothing.

Joins the background thread; it can only be used after close() has been called. It blocks until the background thread exits, ensuring that all buffered data has been flushed to the pipe.
By default, a process that is not the queue's creator will try to join the queue's background thread on exit; calling cancel_join_thread() makes join_thread() do nothing.

(3)cancel_join_thread()

1)Prevent join_thread() from blocking. In particular, this prevents the background thread from being joined automatically when the process exits – see join_thread().
2)A better name for this method might be allow_exit_without_flush(). It is likely to cause enqueued data to lost, and you almost certainly will not need to use it. It is really only there if you need the current process to exit immediately without waiting to flush enqueued data to the underlying pipe, and you don’t care about lost data.

Prevents join_thread() from blocking; in particular, it prevents the background thread from being joined automatically when the process exits. It is liable to cause enqueued data to be lost, and you will almost certainly never need it, unless you want the current process to exit immediately without flushing enqueued data to the pipe and you don't mind losing that data.
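
A minimal sketch, assuming the lost data is acceptable (the worker function is hypothetical):

import multiprocessing

def worker(q):
    q.put('may be lost')
    # let this process exit immediately instead of waiting for the feeder
    # thread to flush the buffered item into the pipe
    q.cancel_join_thread()

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    p.join()  # will not block on an unflushed queue buffer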

(4) Note:

Note:
This class’s functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the functionality in this class will be disabled, and attempts to instantiate a Queue will result in an ImportError. See bpo-3770 for additional information. The same holds true for any of the specialized queue types listed below.

This class's functionality requires a working shared-semaphore implementation on the host operating system; without one it is disabled, and attempting to instantiate a Queue raises ImportError (see bpo-3770). The same holds for the specialized queue types listed below.

4.2.3 Other queue classes
(1)class multiprocessing.SimpleQueue

It is a simplified Queue type, very close to a locked Pipe.
empty()
Return True if the queue is empty, False otherwise.
get()
Remove and return an item from the queue.
put(item)
Put item into the queue.

(2)class multiprocessing.JoinableQueue([maxsize])

JoinableQueue, a Queue subclass, is a queue which additionally has task_done() and join() methods.

A Queue subclass that additionally has task_done() and join() methods.

Personal example:

import multiprocessing
import time

def test1(q):
    print('test1 starting')
    time.sleep(2)
    q.put('shit')
    q.put([1, 2, 3])
    q.close()  # after closing, the queue can no longer be used for put/get
    # print(q.get())
    print(q.qsize())  # qsize() still works, though
    print('test1 ending')

def test2(q):
    print('test2 starting')
    print(q.get())
    q.put(666)
    print('test2 ending')

if __name__ == '__main__':  # required on Windows
    t = time.time()
    c = multiprocessing.Queue(3)  # note: queue.Queue cannot be shared between processes
    a = multiprocessing.Process(target=test1, args=(c,))
    b = multiprocessing.Process(target=test2, name='shit2', args=(c,))

    a.start()
    b.start()

    a.join()  # if the queued data were never taken out, the child would not exit and this join() would deadlock
    print('main ending')
#result
test1 starting
test2 starting
1
shit
test1 ending
test2 ending
main ending

Example 2: using JoinableQueue

import multiprocessing
import time


def test1(q):
    print('test1 starting')
    time.sleep(2)
    q.put('shit')
    q.put([1, 2, 3])
    q.join()
    q.close()  # after closing, the queue can no longer be used for put/get
    # print(q.get())
    print(q.qsize())  # qsize() still works, though
    print('test1 ending')

def test2(q):
    print('test2 starting')
    print(q.get())
    q.put(666)
    q.task_done()
    q.task_done()
    q.task_done()
    print('test2 ending')

if __name__ == '__main__':  # required on Windows
    t = time.time()
    c = multiprocessing.JoinableQueue(3)
    a = multiprocessing.Process(target=test1, args=(c,))
    b = multiprocessing.Process(target=test2, name='shit2', args=(c,))

    a.start()
    b.start()

    a.join()  
    print('main ending')
#result
test2 starting
test1 starting
shit
test2 ending
2
test1 ending
main ending

4.3 Pipe

The Pipe() function returns a pair of connection objects connected by a pipe which by default is duplex (two-way).

multiprocessing.Pipe([duplex])
Returns a pair (conn1, conn2) of Connection objects representing the ends of a pipe.
If duplex is True (the default) then the pipe is bidirectional. If duplex is False then the pipe is unidirectional: conn1 can only be used for receiving messages and conn2 can only be used for sending messages.

Pipe() returns a pair of Connection objects connected by a pipe, which by default is duplex (two-way).

If duplex is True (the default) the pipe is bidirectional; if duplex is False it is unidirectional: conn1 can only be used for receiving messages and conn2 only for sending them.

4.3.2 Connection object methods

Connection objects allow the sending and receiving of picklable objects or strings. They can be thought of as message oriented connected sockets.

Connection objects allow sending and receiving picklable objects or strings; they can be thought of as message-oriented connected sockets.

(1)send(obj)

Send an object to the other end of the connection which should be read using recv().
The object must be picklable. Very large pickles (approximately 32 MiB+, though it depends on the OS) may raise a ValueError exception.

Sends an object to the other end of the connection, to be read with recv(); the object must be picklable. Very large pickles (roughly 32 MiB+, depending on the OS) may raise ValueError.

(2)recv()

Return an object sent from the other end of the connection using send(). Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed.

Returns an object sent from the other end with send(), blocking until there is something to receive; raises EOFError if there is nothing left to receive and the other end was closed.

(3)poll([timeout])

Return whether there is any data available to be read.
If timeout is not specified then it will return immediately. If timeout is a number then this specifies the maximum time in seconds to block. If timeout is None then an infinite timeout is used.

Returns whether any data is available to be read. If timeout is not specified it returns immediately; a number specifies the maximum blocking time in seconds; None means an infinite timeout.

(4)send_bytes(buffer[, offset[, size]])

Send byte data from a bytes-like object as a complete message.
If offset is given then data is read from that position in buffer. If size is given then that many bytes will be read from buffer. Very large buffers (approximately 32 MiB+, though it depends on the OS) may raise a ValueError exception

Sends byte data from a bytes-like object as a complete message. If offset is given, data is read from that position in the buffer; if size is given, that many bytes are read. Very large buffers (roughly 32 MiB+) may raise ValueError.

(5)recv_bytes([maxlength])

Return a complete message of byte data sent from the other end of the connection as a string. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end has closed.
If maxlength is specified and the message is longer than maxlength then OSError is raised and the connection will no longer be readable.

Returns, as bytes, a complete message sent from the other end, blocking until there is something to receive; raises EOFError if there is nothing left to receive and the other end has closed.
If maxlength is specified and the message is longer, OSError is raised and the connection becomes unreadable.

(6) The remaining three methods:

fileno()
Return the file descriptor or handle used by the connection.

close()
Close the connection.
This is called automatically when the connection is garbage collected.

recv_bytes_into(buffer[, offset])
1)Read into buffer a complete message of byte data sent from the other end of the connection and return the number of bytes in the message. Blocks until there is something to receive. Raises EOFError if there is nothing left to receive and the other end was closed.
2)buffer must be a writable bytes-like object. If offset is given then the message will be written into the buffer from that position. Offset must be a non-negative integer less than the length of buffer (in bytes).
3)If the buffer is too short then a BufferTooShort exception is raised and the complete message is available as e.args[0] where e is the exception instance.

(7) Warnings:

Warning:
The Connection.recv() method automatically unpickles the data it receives, which can be a security risk unless you can trust the process which sent the message.
Therefore, unless the connection object was produced using Pipe() you should only use the recv() and send() methods after performing some sort of authentication. See Authentication keys.

Connection.recv() automatically unpickles the data it receives, which can be a security risk unless you trust the process that sent the message;
therefore, unless the connection object was produced by Pipe(), you should only use recv() and send() after performing some sort of authentication (see Authentication keys).

Warning:
If a process is killed while it is trying to read or write to a pipe then the data in the pipe is likely to become corrupted, because it may become impossible to be sure where the message boundaries lie.

If a process is killed while reading from or writing to a pipe, the data in the pipe is liable to become corrupted, because it may become impossible to tell where message boundaries lie.

Official example 1:

from multiprocessing import Process, Pipe

def f(conn):
    conn.send([42, None, 'hello'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print(parent_conn.recv())   # prints "[42, None, 'hello']"
    p.join()

Official example 2:

>>> from multiprocessing import Pipe
>>> a, b = Pipe()
>>> a.send([1, 'hello', None])
>>> b.recv()
[1, 'hello', None]
>>> b.send_bytes(b'thank you')
>>> a.recv_bytes()
b'thank you'
>>> import array
>>> arr1 = array.array('i', range(5))
>>> arr2 = array.array('i', [0] * 10)
>>> a.send_bytes(arr1)
>>> count = b.recv_bytes_into(arr2)
>>> assert count == len(arr1) * arr1.itemsize
>>> arr2
array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])

The two connection objects returned by Pipe() represent the two ends of the pipe. Each connection object has send() and recv() methods (among others). Note that data in a pipe may become corrupted if two processes (or threads) try to read from or write to the same end of the pipe at the same time. Of course there is no risk of corruption from processes using different ends of the pipe at the same time.

The two connection objects returned by Pipe() represent the two ends of the pipe; each has send() and recv() methods (among others). Note that data may become corrupted if two processes (or threads) read from or write to the same end of the pipe at the same time; there is no risk of corruption when they use different ends simultaneously.

Personal example:

import multiprocessing
import time


def test1(con):
    print('test1 starting')
    time.sleep(3)
    con.send([1, 2, 3])  # anything picklable can be sent
    print(con.recv_bytes())
    print('test1 ending')

def test2(con):
    print('test2 starting')
    print(con.poll(None))  # whether a message is available, True/False; None means wait indefinitely
    data = con.recv()
    print(data)
    con.send_bytes(b'shit')  # send raw bytes
    print('test2 ending')


if __name__ == '__main__':  # required on Windows
    t = time.time()
    c1, c2 = multiprocessing.Pipe()  # duplex pipe
    a = multiprocessing.Process(target=test1, args=(c1,))
    b = multiprocessing.Process(target=test2, name='shit2', args=(c2,))

    a.start()
    b.start()

    a.join()

    print('main ending')
    
#result
test2 starting
test1 starting
True
[1, 2, 3]
test2 ending
b'shit'
test1 ending
main ending

5. Synchronization between processes

multiprocessing contains equivalents of all the synchronization primitives from threading. For instance one can use a lock to ensure that only one process prints to standard output at a time

multiprocessing contains equivalents of all the synchronization primitives in threading; for instance, a lock can ensure that only one process prints to standard output at a time.

Generally synchronization primitives are not as necessary in a multiprocess program as they are in a multithreaded program. See the documentation for threading module.
Note that one can also create synchronization primitives by using a manager object – see Managers.

Generally, synchronization primitives are not as necessary in a multiprocess program as in a multithreaded one; their usage is essentially the same as in the threading module, so it is not repeated here.
Synchronization primitives can likewise be created with a manager object.
Note: only the Lock objects from this module can be used; a threading Lock does not work across processes.

Official example:

from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()
    try:
        print('hello world', i)
    finally:
        l.release()

if __name__ == '__main__':
    lock = Lock()

    for num in range(10):
        Process(target=f, args=(lock, num)).start()

Without using the lock output from the different processes is liable to get all mixed up.

Without the lock, output from the different processes is liable to get all mixed up.

6. Sharing state between processes

As mentioned above, when doing concurrent programming it is usually best to avoid using shared state as far as possible. This is particularly true when using multiple processes.
However, if you really do need to use some shared data then multiprocessing provides a couple of ways of doing so.

When doing concurrent programming it is usually best to avoid shared state as far as possible; this is particularly true with multiple processes.
However, if you really do need shared data, multiprocessing provides two ways:
6.1 Shared memory (not explored in depth here)

Shared memory
Data can be stored in a shared memory map using Value or Array. For example, the following code
from multiprocessing import Process, Value, Array

def f(n, a):
    n.value = 3.1415927
    for i in range(len(a)):
        a[i] = -a[i]

if __name__ == '__main__':
    num = Value('d', 0.0)
    arr = Array('i', range(10))

    p = Process(target=f, args=(num, arr))
    p.start()
    p.join()

    print(num.value)
    print(arr[:])

#will print
3.1415927
[0, -1, -2, -3, -4, -5, -6, -7, -8, -9]

The ‘d’ and ‘i’ arguments used when creating num and arr are typecodes of the kind used by the array module: ‘d’ indicates a double precision float and ‘i’ indicates a signed integer. These shared objects will be process and thread-safe.

For more flexibility in using shared memory one can use the multiprocessing.sharedctypes module which supports the creation of arbitrary ctypes objects allocated from shared memory.

6.2 Server process

A manager object returned by Manager() controls a server process which holds Python objects and allows other processes to manipulate them using proxies.
A manager returned by Manager() will support types list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Barrier, Queue, Value and Array.

A manager object returned by Manager() controls a server process that holds Python objects and allows other processes to manipulate them through proxies;
manager objects support list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Barrier, Queue, Value and Array.
(1)multiprocessing.Manager()

Returns a started SyncManager object which can be used for sharing objects between processes. The returned manager object corresponds to a spawned child process and has methods which will create shared objects and return corresponding proxies.

Returns a started SyncManager object usable for sharing objects between processes. The returned manager corresponds to a spawned child process and has methods for creating shared objects and returning the corresponding proxies.

(2)class multiprocessing.managers.BaseManager([address[, authkey]])
The base class for manager objects; not explored further here.

(3)class multiprocessing.managers.SyncManager

A subclass of BaseManager which can be used for the synchronization of processes. Objects of this type are returned by multiprocessing.Manager().
Its methods create and return Proxy Objects for a number of commonly used data types to be synchronized across processes. This notably includes shared lists and dictionaries.

A subclass of BaseManager used for synchronizing processes; objects of this type are what multiprocessing.Manager() actually returns. Its methods create and return proxy objects for a number of commonly used data types to be synchronized across processes, notably shared lists and dictionaries (plus queues, locks, etc.).

(4) Advanced usage such as customized managers and remote managers, to be expanded; a placeholder sketch follows.
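
Until then, a minimal sketch of a customized manager following the pattern from the official documentation; MathsClass and the 'Maths' typeid are illustrative names:

from multiprocessing.managers import BaseManager

class MathsClass:
    def add(self, x, y):
        return x + y

class MyManager(BaseManager):
    pass

# register the type so the manager's server process can create instances
# and hand out proxies for them
MyManager.register('Maths', MathsClass)

if __name__ == '__main__':
    with MyManager() as manager:
        maths = manager.Maths()
        print(maths.add(4, 3))  # prints 7, computed in the server process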

Official example:

from multiprocessing import Process, Manager

def f(d, l):
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()

if __name__ == '__main__':
    with Manager() as manager:
        d = manager.dict()
        l = manager.list(range(10))

        p = Process(target=f, args=(d, l))
        p.start()
        p.join()

        print(d)
        print(l)
#will print
{0.25: None, 1: '1', '2': 2}
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

Server process managers are more flexible than using shared memory objects because they can be made to support arbitrary object types. Also, a single manager can be shared by processes on different computers over a network. They are, however, slower than using shared memory.

Server-process managers are more flexible than shared memory objects because they can support arbitrary object types; moreover, a single manager can be shared by processes on different computers over a network. They are, however, slower than shared memory.

6.3 Proxy objects
(1)

A proxy is an object which refers to a shared object which lives (presumably) in a different process. The shared object is said to be the referent of the proxy. Multiple proxy objects may have the same referent.
A proxy object has methods which invoke corresponding methods of its referent (although not every method of the referent will necessarily be available through the proxy). In this way, a proxy can be used just like its referent can.

A proxy is an object that refers to a shared object living (presumably) in a different process; the shared object is called the referent of the proxy, and multiple proxies may have the same referent.

A proxy object has methods that invoke the corresponding methods of its referent (although not every method of the referent is necessarily available through the proxy); in this way, a proxy can be used much like its referent.

(2)

Notice that applying str() to a proxy will return the representation of the referent, whereas applying repr() will return the representation of the proxy

Note that applying str() to a proxy returns the representation of the referent, whereas applying repr() returns the representation of the proxy.

(3)

An important feature of proxy objects is that they are picklable so they can be passed between processes. As such, a referent can contain Proxy Objects. This permits nesting of these managed lists, dicts, and other Proxy Objects.

An important feature of proxy objects is that they are picklable, so they can be passed between processes; a referent can therefore contain proxy objects, which permits nesting of managed lists, dicts and other proxies. See the example below:

>>> a = manager.list()
>>> b = manager.list()
>>> a.append(b)         # referent of a now contains referent of b
>>> print(a, b)
[<ListProxy object, typeid 'list' at ...>] []
>>> b.append('hello')
>>> print(a[0], b)
['hello'] ['hello']

(4)

If standard (non-proxy) list or dict objects are contained in a referent, modifications to those mutable values will not be propagated through the manager because the proxy has no way of knowing when the values contained within are modified. However, storing a value in a container proxy (which triggers a __setitem__ on the proxy object) does propagate through the manager and so to effectively modify such an item, one could re-assign the modified value to the container proxy.

If the referent contains standard (non-proxy) mutable objects such as lists or dicts, changes to those values are not propagated through the manager, because the proxy cannot detect when their contents change. However, re-assigning the modified value to the container proxy (which triggers __setitem__ on it) does propagate the change.
See the example below:

# create a list proxy and append a mutable object (a dictionary)
lproxy = manager.list()
lproxy.append({})
# now mutate the dictionary
d = lproxy[0]
d['a'] = 1
d['b'] = 2
# at this point, the changes to d are not yet synced, but by
# updating the dictionary, the proxy is notified of the change
lproxy[0] = d

This approach is probably less convenient than nesting proxy objects.

(5) Another note:

Note:
The proxy types in multiprocessing do nothing to support comparisons by value. So, for instance, we have:
manager.list([1,2,3]) == [1,2,3]
False
One should just use a copy of the referent instead when making comparisons.

Proxy types do not support comparison by value; when making comparisons, use a copy of the referent instead.

A comprehensive example:

import multiprocessing
import time

def test1(l):
    print('test1 starting')
    l.append('shit')
    l[0] = 666
    l[1][0] = 0  # not propagated: mutates a plain list inside the referent
    l[2][0] = 'aaa'
    print(l)  # __str__ shows the referent's representation
    print('test1 ending')

def test2(l):

    print('test2 starting')
    time.sleep(2)
    l.append('hahahahahah')  # note: concurrent modification of a shared object by several processes can go wrong; synchronize when needed
    print(l)  
    print('test2 ending')


if __name__ == '__main__':  # required on Windows
    t = time.time()
    c = multiprocessing.Manager()
    f = c.list([4,5])
    d = c.list([1, [7, 8, 9], f])  # nesting proxy objects works well
    a = multiprocessing.Process(target=test1, args=(d,))
    b = multiprocessing.Process(target=test2, name='shit2', args=(d,))

    a.start()
    b.start()
    time.sleep(3)
    a.join()
    d[1] = [7, 888, 9]  # propagated: re-assignment triggers __setitem__ on the proxy
    d[1][1] = 'hhh'  # not propagated: mutates a plain nested list
    d[2][0] = 'aaa'  # propagated: d[2] is itself a proxy
    print(d[2])
    print(str(d), repr(d))  # two different representations: referent vs proxy
    print('main ending')

#result
test1 starting
[666, [7, 8, 9], <ListProxy object, typeid 'list' at 0x2390b595358>, 'shit']
test1 ending
test2 starting
[666, [7, 8, 9], <ListProxy object, typeid 'list' at 0x2390b595358>, 'shit', 'hahahahahah']
test2 ending
['aaa', 5]
[666, [7, 888, 9], <ListProxy object, typeid 'list' at 0x2390b595358>, 'shit', 'hahahahahah'] <ListProxy object, typeid 'list' at 0x12b6403f240>
main ending

7. Process pools (Pool)
7.1 Introduction

The Pool class represents a pool of worker processes. It has methods which allows tasks to be offloaded to the worker processes in a few different ways.

An instance of Pool represents a pool of worker processes; it has methods that allow tasks to be offloaded to the workers in a few different ways.
Official example:

from multiprocessing import Pool, TimeoutError
import time
import os

def f(x):
    return x*x

if __name__ == '__main__':
    # start 4 worker processes
    with Pool(processes=4) as pool:

        # print "[0, 1, 4,..., 81]"
        print(pool.map(f, range(10)))

        # print same numbers in arbitrary order
        for i in pool.imap_unordered(f, range(10)):
            print(i)

        # evaluate "f(20)" asynchronously
        res = pool.apply_async(f, (20,))      # runs in *only* one process
        print(res.get(timeout=1))             # prints "400"

        # evaluate "os.getpid()" asynchronously
        res = pool.apply_async(os.getpid, ()) # runs in *only* one process
        print(res.get(timeout=1))             # prints the PID of that process

        # launching multiple evaluations asynchronously *may* use more processes
        multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
        print([res.get(timeout=1) for res in multiple_results])

        # make a single worker sleep for 10 secs
        res = pool.apply_async(time.sleep, (10,))
        try:
            print(res.get(timeout=1))
        except TimeoutError:
            print("We lacked patience and got a multiprocessing.TimeoutError")

        print("For the moment, the pool remains available for more work")

    # exiting the 'with'-block has stopped the pool
    print("Now the pool is closed and no longer available

Note that the methods of a pool should only ever be used by the process which created it.

Note that the methods of a pool should only ever be used by the process that created it.

Functionality within this package requires that the __main__ module be importable by the children. This is covered in Programming guidelines however it is worth pointing out here. This means that some examples, such as the multiprocessing.pool.Pool examples will not work in the interactive interpreter

This functionality requires that the __main__ module be importable by the children, so examples such as the multiprocessing.pool.Pool ones will not work in the interactive interpreter.

7.2 The Pool class and its methods
(1)class multiprocessing.pool.Pool([processes[, initializer[, initargs[, maxtasksperchild[, context]]]]])

1)A process pool object which controls a pool of worker processes to which jobs can be submitted. It supports asynchronous results with timeouts and callbacks and has a parallel map implementation.
2)processes is the number of worker processes to use. If processes is None then the number returned by os.cpu_count() is used.
3)If initializer is not None then each worker process will call initializer(*initargs) when it starts.
4)maxtasksperchild is the number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process, to enable unused resources to be freed. The default maxtasksperchild is None, which means worker processes will live as long as the pool.
5)context can be used to specify the context used for starting the worker processes. Usually a pool is created using the function multiprocessing.Pool() or the Pool() method of a context object. In both cases context is set appropriately.
6)Note that the methods of the pool object should only be called by the process which created the pool.

1. An object of this class represents a pool of worker processes to which jobs can be submitted; it supports asynchronous results with timeouts and callbacks, and has a parallel map implementation.
2. processes is the number of worker processes to use; if it is None, the number returned by os.cpu_count() is used.
3. If initializer is not None, each worker process calls initializer(*initargs) when it starts (see the sketch after this list).
4. maxtasksperchild is the number of tasks a worker process can complete before it exits and is replaced with a fresh worker, so that unused resources are freed; the default None means workers live as long as the pool.
5. context can specify the context used for starting the workers; usually a pool is created with multiprocessing.Pool() or a context object's Pool() method, and in both cases context is set appropriately.
6. Note that the pool's methods should only be called by the process that created the pool.
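
A small sketch of initializer and maxtasksperchild (init_worker and square are made-up names):

import multiprocessing
import os

def init_worker(tag):
    # runs once in every worker process as it starts
    print('worker', os.getpid(), 'initialized with', tag)

def square(x):
    return x * x

if __name__ == '__main__':
    # each worker handles at most two tasks before being replaced by a fresh one
    with multiprocessing.Pool(processes=2, initializer=init_worker,
                              initargs=('demo',), maxtasksperchild=2) as pool:
        print(pool.map(square, range(8)))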

(2)apply(func[, args[, kwds]])

Call func with arguments args and keyword arguments kwds. It blocks until the result is ready. Given this blocks, apply_async() is better suited for performing work in parallel. Additionally, func is only executed in one of the workers of the pool

Calls func with arguments args and keyword arguments kwds, blocking until the result is ready; given this blocking, apply_async() is better suited to performing work in parallel. Additionally, func is executed in only one of the pool's workers.

(3)apply_async(func[, args[, kwds[, callback[, error_callback]]]])

1)A variant of the apply() method which returns a result object.
If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it, that is unless the call failed, in which case the error_callback is applied instead.
2)If error_callback is specified then it should be a callable which accepts a single argument. If the target function fails, then the error_callback is called with the exception instance.
3)Callbacks should complete immediately since otherwise the thread which handles the results will get blocked.

A variant of apply() that runs asynchronously and returns a result object.
If callback is specified, it should be a callable accepting a single argument; when the result becomes ready, callback is applied to it, unless the call failed, in which case error_callback is applied instead.
If error_callback is specified, it should be a callable accepting a single argument; if the target function fails, error_callback is called with the exception instance.
Callbacks should complete immediately, since otherwise the thread that handles the results will be blocked.

(4)map(func, iterable[, chunksize])

A parallel equivalent of the map() built-in function (it supports only one iterable argument though). It blocks until the result is ready.
This method chops the iterable into a number of chunks which it submits to the process pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer.

A parallel equivalent of the built-in map() (though it supports only one iterable argument), blocking until the result is ready. It chops the iterable into a number of chunks that it submits to the pool as separate tasks; the (approximate) chunk size can be set by passing a positive integer chunksize. Returns a list.

(5)map_async(func, iterable[, chunksize[, callback[, error_callback]]])

1)A variant of the map() method which returns a result object.
If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it, that is unless the call failed, in which case the error_callback is applied instead.
2)If error_callback is specified then it should be a callable which accepts a single argument. If the target function fails, then the error_callback is called with the exception instance.
3)Callbacks should complete immediately since otherwise the thread which handles the results will get blocked.

A variant of map() that runs asynchronously and returns a result object; callback and error_callback work the same as in apply_async().

(6)imap(func, iterable[, chunksize])

1)A lazier version of map().
The chunksize argument is the same as the one used by the map() method. For very long iterables using a large value for chunksize can make the job complete much faster than using the default value of 1.
2)Also if chunksize is 1 then the next() method of the iterator returned by the imap() method has an optional timeout parameter: next(timeout) will raise multiprocessing.TimeoutError if the result cannot be returned within timeout seconds.

A lazier version of map() that returns an iterator.
The chunksize argument works as in map(); for very long iterables, a large chunksize can make the job complete much faster than the default of 1.
Also, if chunksize is 1, the next() method of the iterator returned by imap() has an optional timeout parameter: next(timeout) raises multiprocessing.TimeoutError if the result is not available within timeout seconds.

(7)imap_unordered(func, iterable[, chunksize])

The same as imap() except that the ordering of the results from the returned iterator should be considered arbitrary. (Only when there is only one worker process is the order guaranteed to be “correct”.)

Same as imap(), except that the ordering of the results from the returned iterator should be considered arbitrary (the order is only guaranteed when the pool has a single worker process).

(8)starmap(func, iterable[, chunksize])

Like map() except that the elements of the iterable are expected to be iterables that are unpacked as arguments.
Hence an iterable of [(1,2), (3, 4)] results in [func(1,2), func(3,4)].

Like map(), except that each element of the iterable is itself an iterable that is unpacked as the function's arguments; hence [(1, 2), (3, 4)] results in [func(1, 2), func(3, 4)].
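
A minimal sketch (add is a made-up function):

from multiprocessing import Pool

def add(x, y):
    return x + y

if __name__ == '__main__':
    with Pool(2) as pool:
        # each inner tuple is unpacked into add()'s arguments
        print(pool.starmap(add, [(1, 2), (3, 4)]))  # prints [3, 7]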

(9)starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])

A combination of starmap() and map_async() that iterates over iterable of iterables and calls func with the iterables unpacked. Returns a result object

A combination of starmap() and map_async(): it iterates over an iterable of iterables, calls func with each of them unpacked, and returns a result object.

(10)close()

Prevents any more tasks from being submitted to the pool. Once all the tasks have been completed the worker processes will exit.

Prevents any more tasks from being submitted to the pool; once all submitted tasks are complete, the worker processes exit.

(11)terminate()

Stops the worker processes immediately without completing outstanding work. When the pool object is garbage collected terminate() will be called immediately.

Stops the worker processes immediately without completing outstanding work; called automatically when the pool object is garbage-collected.

(12)join()

Wait for the worker processes to exit. One must call close() or terminate() before using join().

Waits for the worker processes to exit; close() or terminate() must be called before join().

(13) Pool objects support the context management protocol.

7.3 The AsyncResult class

The class of the result returned by Pool.apply_async() and Pool.map_async().

The class of the result objects returned by Pool.apply_async() and Pool.map_async().

(1)get([timeout])

Return the result when it arrives. If timeout is not None and the result does not arrive within timeout seconds then multiprocessing.TimeoutError is raised. If the remote call raised an exception then that exception will be reraised by get().

Returns the result when it arrives, blocking until then. If timeout is not None and the result does not arrive within timeout seconds, multiprocessing.TimeoutError is raised; if the remote call raised an exception, get() re-raises it.

(2)wait([timeout])

Wait until the result is available or until timeout seconds pass.

Blocks until the result is available or until timeout seconds pass; returns None.

(3)ready()

Return whether the call has completed.

Returns whether the call has completed (True/False), i.e. whether a result or exception can be fetched.

(4)successful()

Return whether the call completed without raising an exception. Will raise AssertionError if the result is not ready.

Returns whether the call completed without raising an exception (True/False); raises AssertionError if the result is not ready.

Classic official example:

from multiprocessing import Pool
import time

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(processes=4) as pool:         # start 4 worker processes
        result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
        print(result.get(timeout=1))        # prints "100" unless your computer is *very* slow

        print(pool.map(f, range(10)))       # prints "[0, 1, 4,..., 81]"

        it = pool.imap(f, range(10))
        print(next(it))                     # prints "0"
        print(next(it))                     # prints "1"
        print(it.next(timeout=1))           # prints "4" unless your computer is *very* slow

        result = pool.apply_async(time.sleep, (10,))
        print(result.get(timeout=1))        # raises multiprocessing.TimeoutError

Personal example, final version:

import multiprocessing
import time

def test1(l):
    time.sleep(1)
    print('test{} starting'.format(l))
    # raise TypeError('fuck')
    return l

def test2(l):
    print('callback{} starting'.format(l))

def test3(l):
    print('error_callback_test{} starting'.format(l))

def test4(x):
    return x+1

if __name__ == '__main__':  # required on Windows
    p = multiprocessing.Pool(3)
    for i in range(5):
        # p.apply(test1, (i,))  # the synchronous variant
        # callback runs in the parent process, which saves resources; error_callback handles exceptions
        res = p.apply_async(func=test1, args=(i,), callback=test2, error_callback=test3)
        # print(res, res.get())  # res is a result object; get() returns the function's result, blocking until it is ready
        print(res, res.ready())  # ready() is False until the result arrives
        # print(res.wait())  # blocks until the result arrives; wait() returns None
        # print(res.successful())  # True if the call completed without an exception

    # res2 = p.map_async(lambda x: x+1, (7, 8, 9))  # a lambda cannot be used here: it cannot be pickled
    res2 = p.map_async(test4, (7, 8, 9), 3)
    print(res2, res2.get())

    res3 = p.imap(test4, (111, 222, 333))
    print(res3, res3.__next__(timeout=1))  # returns an iterator; with chunksize 1, next() takes a timeout parameter

    p.close()  # no more tasks may be submitted; workers exit once all tasks are done
    p.join()  # only usable after close() or terminate(); without join() the main process would exit without waiting
    print('main ending')

#result
<multiprocessing.pool.ApplyResult object at 0x0000020FBA104128> False
<multiprocessing.pool.ApplyResult object at 0x0000020FBA104208> False
<multiprocessing.pool.ApplyResult object at 0x0000020FBA1042B0> False
<multiprocessing.pool.ApplyResult object at 0x0000020FBA104390> False
<multiprocessing.pool.ApplyResult object at 0x0000020FBA104470> False
test0 starting
callback0 starting
test1 starting
callback1 starting
test2 starting
callback2 starting
<multiprocessing.pool.MapResult object at 0x0000020FBA104550> [8, 9, 10]
<multiprocessing.pool.IMapIterator object at 0x0000020FBA104278> 112
test3 starting
callback3 starting
test4 starting
callback4 starting
main ending