Background
Python multithreading is not real parallelism. Under the Global Interpreter Lock (GIL), at most one thread can execute Python code at any moment; so-called multithreading is really multiple threads sharing the CPU and switching between each other.
In IO-bound scenarios, the CPU works far faster than the IO it waits on, so the small cost of thread switching is negligible, and Python multithreading can noticeably improve overall running efficiency.
In CPU-bound scenarios, both thread switching and the actual computation consume the same CPU time, so multithreading does not necessarily improve overall running efficiency.
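A quick way to see both effects is to time the same set of threads on a CPU-bound loop versus a sleep-based stand-in for IO. This is a minimal sketch I'm adding for illustration; the task bodies and sizes are arbitrary:

import threading
import time

def cpu_task(n=5_000_000):
    # Pure-Python counting loop: it holds the GIL, so threads
    # cannot actually run it in parallel.
    while n:
        n -= 1

def io_task(sec=1.0):
    # time.sleep releases the GIL, just like a real IO wait does.
    time.sleep(sec)

def timed(task, workers=4):
    threads = [threading.Thread(target=task) for _ in range(workers)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

if __name__ == '__main__':
    # CPU-bound: roughly 4x a single task's time despite 4 threads,
    # because the GIL serializes them.
    print('cpu-bound, 4 threads:', timed(cpu_task))
    # IO-bound: roughly 1 second total, since the sleeps overlap
    # while the GIL is released.
    print('io-bound, 4 threads:', timed(io_task))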
Solution
Use Python multiprocessing. Code first:
from sklearn.metrics import r2_score
import joblib
from multiprocessing import Process, Queue
import argparse
import time

FIND_DIR = './FileList/filelist_find_m_20210426.txt'
BASE_DIR = './FileList/filelist_base_m_20210426.txt'
R2_THRES = 0.75
RET_FILE = './FileList/R2_result_20210426.txt'


def make_mdata_path(path):
    # Strip the trailing newline left over from readlines().
    return path.strip()


def find_pic_inbase(path, q):
    # Compare one file against every file in the base list and push
    # every match above the R2 threshold onto the result queue.
    path = make_mdata_path(path)
    file_m_data = joblib.load(path)
    with open(BASE_DIR, 'r') as f:
        base_file_list = f.readlines()
    for bf in base_file_list:
        base_file = make_mdata_path(bf)
        base_m_data = joblib.load(base_file)
        r2_ret = r2_score(file_m_data[0], base_m_data[0])
        if r2_ret > R2_THRES:
            ret_end = ' '.join([path, base_file, str(r2_ret)])
            print(ret_end)
            q.put(ret_end)


def write_result(q):
    # Single writer process: drain the queue until the 'stop' sentinel,
    # so the concurrent workers never touch the result file directly.
    while True:
        ret_data = q.get()  # blocks until an item arrives
        if ret_data == 'stop':
            break
        with open(RET_FILE, 'a') as f:
            f.write(''.join([ret_data, '\n']))


def main(processors, q):
    # Truncate the result file before starting.
    with open(RET_FILE, 'w') as f:
        pass
    with open(FIND_DIR, 'r') as f:
        find_list = f.readlines()
    w_p = Process(target=write_result, args=(q,))
    w_p.start()
    while len(find_list) > 0:
        # Launch up to `processors` workers, then wait for the whole
        # batch to finish before starting the next one.
        batch = []
        for _ in range(processors):
            if not find_list:
                break
            file_data = find_list.pop()
            p = Process(target=find_pic_inbase, args=(file_data, q))
            p.start()
            batch.append(p)
        for p in batch:
            p.join()
    print('Out!')
    q.put('stop')  # tell the writer process to exit
    w_p.join()


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--processors", default=10, type=int,
                        help="number of worker processes per batch")
    args = parser.parse_args()
    q = Queue(10000)
    start_time = time.time()
    main(args.processors, q)
    print('elapsed', time.time() - start_time)
This code was picked from a real project; I didn't feel like trimming it down for this post, so I copied it over as-is. Note the Queue and the Process class in there. The overall idea: workers put results into a queue, and another process takes them out of the queue and handles them.
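Stripped of the project details, that put-then-take pattern looks like the sketch below. The producer/consumer names and the payloads are mine, chosen just for illustration; the 'stop' sentinel plays the same role as in the code above:

from multiprocessing import Process, Queue

def producer(q):
    for i in range(5):
        q.put(f'item-{i}')  # workers push results into the queue
    q.put('stop')           # sentinel tells the consumer to exit

def consumer(q):
    while True:
        item = q.get()      # blocks until a result arrives
        if item == 'stop':
            break
        print('handled', item)

if __name__ == '__main__':
    q = Queue()
    c = Process(target=consumer, args=(q,))
    c.start()
    producer(q)
    c.join()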
When I wrote that project, this pattern showed up in many places; later I tuned it bit by bit, choosing a reasonable number of processes according to CPU load.
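As a starting point for that tuning, one common heuristic (my assumption here, not something the original project necessarily did) is to derive the worker count from os.cpu_count() and let multiprocessing.Pool manage the batching:

import os
from multiprocessing import Pool

def heavy_task(x):
    # Hypothetical stand-in for the real per-file work.
    return x * x

if __name__ == '__main__':
    # One process per core is a reasonable default; go higher for
    # IO-heavy work, lower if each worker uses a lot of memory.
    n_proc = os.cpu_count() or 4
    with Pool(processes=n_proc) as pool:
        results = pool.map(heavy_task, range(100))
    print(len(results), 'results')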