Comparing the efficiency of multithreading, multiprocessing, and RxPy in Python

Because of Python's GIL, compute-bound code generally ranks by elapsed time as: multiprocessing < single-threaded loop < multithreading. Multiprocessing is fastest; multithreading, because of context-switch overhead, actually ends up slower than a plain loop.
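To make that GIL effect concrete, here is a minimal self-contained sketch (not from the original post) that times the same pure-Python CPU-bound loop sequentially and across four threads; on CPython the threaded version is typically no faster:

```python
import threading
import time

def cpu_bound(n=200_000):
    # pure-Python arithmetic: the thread holds the GIL for the whole loop
    total = 0.0
    for i in range(1, n):
        total += (i ** 2 + i ** 0.5) / (i ** 1.2 + 0.05)
    return total

# four calls back to back on a single thread
t0 = time.time()
seq_results = [cpu_bound() for _ in range(4)]
seq_time = time.time() - t0

# the same four calls on four threads: the GIL serializes the bytecode,
# so expect no speedup (often a slight slowdown from context switching)
thr_results = []
threads = [threading.Thread(target=lambda: thr_results.append(cpu_bound()))
           for _ in range(4)]
t0 = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
thr_time = time.time() - t0

print("sequential: %.2fs, threaded: %.2fs" % (seq_time, thr_time))
```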

As a die-hard reactive-programming fan, before even finishing the threading API docs I started wondering: what about implementing this with RxPy? It turns out the official site has this to say:

Keep in mind Python's GIL has the potential to undermine your concurrency performance, as it prevents multiple threads from accessing the same line of code simultaneously. Libraries like NumPy can mitigate this for parallel intensive computations as they free the GIL. RxPy may also minimize thread overlap to some degree. Just be sure to test your application with concurrency and ensure there is a performance gain.

Roughly: RxPy may mitigate some of the problems the GIL brings, but you should always test your code to confirm you actually got a performance gain.

As a library consumer who doesn't plan to go deep into Python for now, I won't dig into the GIL far enough to know from first principles which cases benefit, but a simple benchmark is still worthwhile. OK, let's try!

First, the test code:

#!/usr/bin/python
# -*- coding: UTF-8 -*-
import multiprocessing as mp
import sys
import threading as td
import time
from threading import current_thread

from rx import Observable
from rx.concurrency import ThreadPoolScheduler


def intense_calculation(s):
    temp = 0.8
    for i in range(100000):
        temp += (i ** 2 + i ** 0.3 + i ** 0.5) / (i ** 1.2 + i * 0.02 + 0.05)
    return s


def do_20_calculation(name):
    for i in range(20):
        print("PROCESS {0} {1} {2} @ {3}".format(name, current_thread().name, intense_calculation(i),
                                                 time.time() - start_time))


def proc_process(name):
    mp.Process(target=do_20_calculation, args=(name,)).start()


def proc_thread(name):
    td.Thread(target=do_20_calculation, args=(name,)).start()


def proc_rx2(name, pool_scheduler):
    # Observable.from_(["Alpha", "Beta", "Gamma", "Delta", "Epsilon"]) \
    #     .do_action(lambda s: print("begin PROCESS {0} {1} {2}".format(name, current_thread().name, s)))
    for i in range(20):
        Observable.just(i) \
            .subscribe_on(pool_scheduler) \
            .map(lambda s: intense_calculation(s)) \
            .subscribe(on_next=lambda s: print("PROCESS {0} {1} {2}".format(name, current_thread().name, s)),
                       on_error=lambda e: print(e),
                       on_completed=lambda: print("PROCESS %s done! @ %.3f" % (name, time.time() - start_time)))


def proc_normal(name):
    for i in range(20):
        s = intense_calculation(i)
        used_time = time.time() - start_time
        print("PROCESS? {0} thread {1} task {2} done @ {3}".format(name, current_thread().name, s, used_time))


def main(type, proc_num, add_to_core_num):
    global start_time
    # count the CPU cores, then create a ThreadPoolScheduler with that many threads
    optimal_thread_count = mp.cpu_count()
    print("has %d cpu cores" % optimal_thread_count)
    pool_scheduler = ThreadPoolScheduler(optimal_thread_count + add_to_core_num)
    start_time = time.time()
    if type == "thread":
        for i in range(proc_num):
            proc_thread("%d" % i)
    elif type == "process":
        for i in range(proc_num):
            proc_process("%d" % i)
    elif type == "rx":
        for i in range(proc_num):
            proc_rx2("%d" % i, pool_scheduler)
    else:
        for i in range(proc_num):
            proc_normal("%d" % i)
    print("end @ %.2f" % (time.time() - start_time))
    input("Press any key to exit\n")


start_time = 0

if __name__ == "__main__":
    main(sys.argv[1], int(sys.argv[2]), int(sys.argv[3]))

A quick explanation: the type argument selects which invocation style to test. With 8 passed as the iteration count, single-threaded means an 8*20 sequential loop over intense_calculation; process starts 8 processes, each calling it 20 times; thread starts 8 threads, each running it 20 times; rx is special: it schedules the 8 batches onto threads drawn from a thread pool of size 8.

Why 8? Because the Ubuntu machine I tested on happens to have 8 (logical) cores.

With 8 passed in for each mode, the results are:

Invocation mode       Iterations    Time (s)
Single-thread loop    8             21.92
rx                    8             45.35
threads               8             45.23
processes             8             3.1
rx                    16            90.854
threads               16            92.62
processes             16            5.91

The table shows that under this test (CPU-bound, with the same code called from every thread), RxPy basically cannot avoid the inefficiency of threads, which matches the earlier conclusion. Its thread pool presumably only gives a slight edge when there are many more threads and scheduling is frequent; still, since it doesn't noticeably degrade performance either, it remains worth using.
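As a quick sanity check on the ratios implied by the table (timings copied from above):

```python
# (mode, iterations) -> seconds, taken from the results table
timings = {
    ("single", 8): 21.92,
    ("rx", 8): 45.35,
    ("thread", 8): 45.23,
    ("process", 8): 3.1,
    ("rx", 16): 90.854,
    ("thread", 16): 92.62,
    ("process", 16): 5.91,
}

# processes beat threads by roughly 14.6x at 8 iterations...
print(round(timings[("thread", 8)] / timings[("process", 8)], 1))
# ...and threads are roughly 2.1x slower than the plain single-threaded loop
print(round(timings[("thread", 8)] / timings[("single", 8)], 1))
```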

Conclusion

Multiprocessing crushes multithreading; multiple processes genuinely let you exploit the advantage of multiple cores.

Next, testing how fast multiprocessing can run guetzli conversions

PS: this was my original motivation for looking into Python multithreading & multiprocessing in the first place.

Test code attached. In short, it starts as many processes as there are CPU cores; each process takes jobs off a queue and compresses them. On an Ubuntu 16.04 machine with 8 cores (Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz, 2 physical CPUs), compressing 23 images of 260*364 pixels (really three rounds of the set, so 24 images would presumably take about the same; different images can also take different amounts of time) took 74.70 seconds, with guetzli's parameters left at their defaults.

#!/usr/bin/python
# -*- coding: UTF-8 -*-
import multiprocessing as mp
import os
import sys
import time
from threading import current_thread

from rx import Observable
from rx.concurrency import ThreadPoolScheduler


def do_encode(path, out_path):
    print("will process %s %s" % (path, os.getpid()))
    os.system('guetzli ' + path + " " + out_path)
    print("process done: %s" % path)
    return path


def do_process(queue):
    while True:
        info = queue.get()
        if info["type"] == "quit":
            # print("{0} get quit command".format(os.getpid()))
            break
        elif info["type"] == "zipJpg":
            print("{0} get zip jpeg".format(os.getpid()))
            do_encode(info["path"], info["out_path"])
        else:
            print("{0} get wrong command".format(os.getpid()))
    print("{0} quits".format(os.getpid()))


def main(path):
    global start_time
    # count the CPU cores, then create a ThreadPoolScheduler with that many threads
    optimal_thread_count = mp.cpu_count()
    print("has %d cpu cores" % optimal_thread_count)
    pool_scheduler = ThreadPoolScheduler(optimal_thread_count)

    # start the worker processes
    work_process = []
    queue = mp.Queue(optimal_thread_count)
    for i in range(optimal_thread_count):
        p = mp.Process(target=do_process, args=(queue,))
        work_process.append(p)
        p.start()

    pictures = []
    out_pictures = []
    out_dir = os.path.join(path, "out")
    os.system("rm -rf %s" % out_dir)
    os.system("mkdir %s" % out_dir)
    for root, dirs, files in os.walk(path):
        for file in files:
            if file.endswith(".jpg") or file.endswith(".png"):
                pictures.append(os.path.join(root, file))
                out_pictures.append(os.path.join(out_dir, file))
    # for f in pictures:
    #     print(f)

    def on_complete():
        # tell every worker process to stop
        for ii in range(optimal_thread_count):
            print("send {0} to quit".format(ii))
            queue.put({"type": "quit"})
        print("onComplete")

    def send_message(paths):
        queue.put({"type": "zipJpg", "path": paths[0], "out_path": paths[1]})
        return paths[0]

    # send the jobs to the worker processes
    Observable.from_(zip(pictures, out_pictures)).map(
        lambda paths: send_message(paths)).subscribe_on(pool_scheduler).subscribe(
        on_next=lambda s: print("send {0} on thread {1} done.".format(s, current_thread().name)),
        on_error=lambda e: print("onError", e),
        on_completed=on_complete)

    start_time = time.time()
    for p in work_process:
        p.join()
    print("complete all zip @ {0} secs".format(time.time() - start_time))
    # exit(0)  # 615 secs for 8 runs = 76.875 per run; 74.70 for a single run.
    # input("Press any key to exit\n")


start_time = 0

if __name__ == "__main__":
    main(sys.argv[1])
