While working on my end-of-term paper, I ran into a problem processing an array: the loop involves far too much computation. Part of my original code is below.
The dataset is available from the Heywhale (Kesci) community: https://www.kesci.com/mw/dataset/58e893c49957300141f973dd
import numpy as np
import pandas as pd
from copy import deepcopy

LC = pd.read_csv('D:/data_analysis/schoolwork/LC.csv', encoding='utf-8')
LP = pd.read_csv('D:/data_analysis/schoolwork/LP.csv', encoding='utf-8')

# Compute y-hat per ListingId
LP1 = LP[['ListingId', '还款状态']]
count = LP1.value_counts(sort=False)  # MultiIndex: (ListingId, status) -> count
ymlist = []
ymindex = []
count0 = deepcopy(count)
for i in list(LP['ListingId'].drop_duplicates()):
    # Zero out all five statuses for this ListingId...
    for j in range(5):
        count[(i, j)] = 0
    # ...then restore the observed counts, so missing statuses stay 0
    for j in range(5):
        try:
            count[(i, j)] = count0[(i, j)]
        except KeyError:
            continue
    ym = (count[(i, 0)] + 0.5*(count[(i, 2)] + count[(i, 4)])) / np.array(count[i, ]).sum()
    ymlist.append(ym)
    ymindex.append(i)
ymdata = pd.Series(data=ymlist, index=ymindex)
The program runs, but the computation is too heavy, and it raises a warning: `PerformanceWarning: indexing past lexsort depth may impact performance. more = interpreter.add_exec(code_fragment)`
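Incidentally, that warning usually just means the MultiIndex is unsorted, so every tuple lookup falls back to a slow scan; a one-time `sort_index()` before the loop lets pandas use binary search. A minimal sketch with toy data (the real `count` comes from `value_counts` as above):

```python
import pandas as pd

# Hypothetical MultiIndex Series shaped like `count` in the post
count = pd.Series(
    [3, 1, 2],
    index=pd.MultiIndex.from_tuples(
        [(2, 0), (1, 4), (1, 0)], names=['ListingId', '还款状态']),
)

# Sorting the MultiIndex once up front makes (i, j) tuple lookups
# cheap and silences the lexsort PerformanceWarning
count = count.sort_index()
```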
Based on the warning, my reading was that I wasn't taking advantage of the MultiIndex, so every `count[(i, j)]` lookup had to locate the key against the whole dataset, producing a lot of wasted overhead. I therefore introduced an intermediate variable `box`, so that each inner loop only traverses `box`. The improved program:
LP1 = LP[['ListingId', '还款状态']]
count = LP1.value_counts(sort=False)
ymlist = []
ymindex = []
for i in list(LP1['ListingId'].drop_duplicates()):
    box = count.xs((i,))  # slice out this ListingId's status counts
    box0 = deepcopy(box)
    # Zero all five statuses, then restore the observed counts
    for j in range(5):
        box[j] = 0
    for j in range(5):
        try:
            box[j] = box0[j]
        except KeyError:
            continue
    ym = (box[0] + 0.5*(box[2] + box[4])) / box.sum()
    ymlist.append(ym)
    ymindex.append(i)
ymdata = pd.Series(data=ymlist, index=ymindex)
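As a side note, the two inner loops only exist to fill the missing statuses with 0, and `Series.reindex` does that in one call. A minimal sketch on a toy per-ListingId slice (the data here is made up; in the program above, `box` comes from `count.xs((i,))`):

```python
import pandas as pd

# Toy stand-in for one ListingId's slice: only statuses 0 and 2 observed
box = pd.Series({0: 2, 2: 1})

# Fill statuses 1, 3, 4 with 0 in a single call (replaces both inner loops)
box = box.reindex(range(5), fill_value=0)

ym = (box[0] + 0.5 * (box[2] + box[4])) / box.sum()
```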
It is still slow, but at least it now runs to completion.
In the end, all this program really does is make the indices with a count of 0 explicit in the `value_counts` result. If any expert can do this in one step, I'd appreciate a pointer.
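For what it's worth, the whole loop can probably be collapsed into a few vectorized lines with `pd.crosstab` plus `reindex`. A sketch under the assumption that the columns are named as in the post (toy data stands in for the real `LP` from `LP.csv`):

```python
import pandas as pd

# Toy stand-in for LP; the real data comes from LP.csv
LP = pd.DataFrame({
    'ListingId': [1, 1, 1, 2, 2],
    '还款状态':   [0, 2, 0, 4, 1],
})

# One row per ListingId, one column per status; the column reindex makes
# the zero-count (ListingId, status) combinations explicit in one step
tab = (pd.crosstab(LP['ListingId'], LP['还款状态'])
         .reindex(columns=range(5), fill_value=0))

# Vectorized y-hat, no Python-level loop
ymdata = (tab[0] + 0.5 * (tab[2] + tab[4])) / tab.sum(axis=1)
```

The same `reindex(..., fill_value=0)` idea also works on the `value_counts` Series directly via a `MultiIndex.from_product` of all (ListingId, status) pairs, but the crosstab form is the most compact.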