Python dictionary lookup efficiency: improving the performance of a very large dictionary in Python

I find that if I initialize an empty dictionary at the beginning and then add elements to it in a for loop (about 110,000 keys; the value for each key is a list that also grows inside the loop), the speed goes down as the loop progresses.
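
The pattern looks roughly like this (a hypothetical reconstruction; the real code and data are not shown, so the records list below is made up):

# Hypothetical reconstruction of the loop described above.
records = [('key%d' % (i % 110_000), i) for i in range(1_000_000)]

d = {}
for key, value in records:
    if key not in d:
        d[key] = []          # about 110,000 keys, each value a growing list
    d[key].append(value)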

I suspect the problem is that the dictionary does not know the number of keys at initialization time and is not doing anything very smart about it, so perhaps hash collisions become frequent and slow it down.

If I know the number of keys and exactly what those keys are, is there any way in Python to make a dict (or a hash table) work more efficiently? I vaguely remember that if you know the keys, you can design the hash function cleverly (a perfect hash?) and allocate the space beforehand.

Solution

> If I know the number of keys and exactly what those keys are, is there any way in Python to make a dict (or a hash table) work more efficiently? I vaguely remember that if you know the keys, you can design the hash function cleverly (a perfect hash?) and allocate the space beforehand.

Python doesn't expose a pre-sizing option to speed up the "growth phase" of a dictionary, nor does it provide any direct controls over "placement" in the dictionary.

That said, if the keys are always known in advance, you can store them in a set and build your dictionaries from the set using dict.fromkeys(). That classmethod is optimized to pre-size the dictionary based on the set size and it can populate the dictionary without any new calls to __hash__():

>>> keys = {'red', 'green', 'blue', 'yellow', 'orange', 'pink', 'black'}
>>> d = dict.fromkeys(keys)     # dict is pre-sized to 32 empty slots
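
One caveat if the values are lists, as in the question: passing a mutable default such as [] to dict.fromkeys() makes every key share the same single list object. A minimal sketch (the items data is made up) is to pre-size first and then attach a fresh list to each key:

keys = {'red', 'green', 'blue', 'yellow', 'orange', 'pink', 'black'}

# dict.fromkeys(keys, []) would reuse ONE list object for every key,
# so pre-size with the default value None and attach fresh lists afterwards.
d = dict.fromkeys(keys)
for k in keys:
    d[k] = []                        # replacing a value never resizes the table

items = [('red', 1), ('blue', 2), ('red', 3)]   # made-up data
for k, v in items:
    d[k].append(v)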

If reducing collisions is your goal, you can run experiments on the insertion order in the dictionary to minimize pile-ups. (Take a look at Brent's variation on Algorithm D in Knuth's TAOCP to get an idea of how this is done).

By instrumenting a pure Python model for dictionaries (such as this one), it is possible to count the weighted-average number of probes for an alternative insertion order. For example, inserting dict.fromkeys([11100, 22200, 44400, 33300]) averages 1.75 probes per lookup. That beats the 2.25 average probes per lookup for dict.fromkeys([33300, 22200, 11100, 44400]).
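
A minimal sketch of such an instrumented model is shown below. It is not the pure Python recipe referred to above; it only approximates CPython's open-addressing probe sequence (start at hash & mask, then repeatedly set i = 5*i + perturb + 1 with perturb shifted right by 5), so the exact counts it reports may differ from the 1.75 and 2.25 figures quoted here:

PERTURB_SHIFT = 5

def probe_counts(keys, table_size=8):
    """Insert keys into a simulated open-addressing table; return {key: probes}."""
    assert len(keys) < table_size, "table would overflow"
    mask = table_size - 1
    slots = [None] * table_size
    counts = {}
    for key in keys:
        h = hash(key) & 0xFFFFFFFFFFFFFFFF   # emulate CPython's unsigned hash
        perturb = h
        i = h & mask
        probes = 1
        while slots[i] is not None:          # keep probing until a free slot
            perturb >>= PERTURB_SHIFT
            i = (5 * i + perturb + 1) & mask
            probes += 1
        slots[i] = key
        counts[key] = probes                 # a later lookup retraces this path
    return counts

for order in ([11100, 22200, 44400, 33300],
              [33300, 22200, 11100, 44400]):
    counts = probe_counts(order)
    print(order, counts, sum(counts.values()) / len(counts))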

Another "trick" is to increase spareness in a fully populated dictionary by fooling it into increasing its size without adding new keys:

d = dict.fromkeys(['red', 'green', 'blue', 'yellow', 'orange'])
d.update(dict(d))    # This makes room for additional keys
                     # and makes the set collision-free.
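
Whether this trick still enlarges the table depends on the CPython version (newer dict implementations may rebuild the copy at the same size). A rough way to check on your own interpreter is to compare sys.getsizeof() before and after:

import sys

d = dict.fromkeys(['red', 'green', 'blue', 'yellow', 'orange'])
before = sys.getsizeof(d)

d.update(dict(d))        # no new keys, but may force a larger table
after = sys.getsizeof(d)

print(before, after)     # after > before means the table gained spare slots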

Lastly, you can introduce your own custom __hash__() for your keys with the goal of eliminating all collisions (perhaps using a perfect hash generator such as gperf).
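
As a sketch of that idea (without gperf): if the key set is fixed, each key can be wrapped in a small class whose __hash__() returns a precomputed index in 0..n-1. Because CPython places an entry at hash & mask and keeps the table larger than the number of entries, consecutive hash values land in distinct slots, so a lookup finds its key at the first slot it checks. The key set below is hypothetical:

class PerfectKey:
    """Wrap a key from a known, fixed set with a collision-free hash."""
    __slots__ = ('name', '_hash')

    # Hypothetical fixed key set; the position in this table is the hash.
    _INDEX = {name: i for i, name in enumerate(
        ['red', 'green', 'blue', 'yellow', 'orange', 'pink', 'black'])}

    def __init__(self, name):
        self.name = name
        self._hash = self._INDEX[name]    # KeyError for unknown keys

    def __hash__(self):
        return self._hash                 # unique small integer per key

    def __eq__(self, other):
        return isinstance(other, PerfectKey) and self.name == other.name

    def __repr__(self):
        return f'PerfectKey({self.name!r})'

d = {PerfectKey(name): [] for name in PerfectKey._INDEX}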
