Python generators vs. callback functions

This post examines the performance difference between callback functions and generators in a class that solves an exact cover problem with a recursive backtracking algorithm. The author found that although the generator version is more concise, it ran 5 times slower than the callback version, probably because of the extra overhead of saving and restoring generator state during recursion. The conclusion: for deeply recursive code, avoid generators and prefer callbacks for efficiency.

I have a class that solves an exact cover problem using a recursive, backtracking algorithm. Originally, I implemented the class with a callback function I passed to the object during initialization. This callback is invoked whenever a solution is found. In looking at someone else's implementation of the same problem, I saw that they were using yield statements to pass a solution out; in other words, their code was a Python generator. I thought this was an interesting idea, so I made a new version of my class that uses yields. I then ran comparison tests between the two versions and, to my surprise, found that the generator version ran 5 times slower than the callback version. Note that, except for switching in a yield for a callback, the code is identical.

What is going on here? I'm speculating that, because a generator needs to save state information before yielding and then restore that state when restarting at the next call, it is this save/restore that is what makes the generator version run so much slower. If this is the case, how much state information is the generator having to save and restore?
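The suspend/resume cost per yield is real but small; one way to see where the time actually goes is to compare the two styles on a toy recursive traversal. This is a hypothetical benchmark (the walk_cb/walk_gen functions below are stand-ins, not the solver from the question):

```python
import time

def walk_cb(depth, breadth, callback):
    # Callback style: each leaf triggers exactly one function call.
    if depth == 0:
        callback()
        return
    for _ in range(breadth):
        walk_cb(depth - 1, breadth, callback)

def walk_gen(depth, breadth):
    # Generator style: each leaf is re-yielded once per recursion
    # level on its way up to the top-level caller.
    if depth == 0:
        yield True
        return
    for _ in range(breadth):
        for leaf in walk_gen(depth - 1, breadth):
            yield leaf

hits = []
t0 = time.time()
walk_cb(10, 3, lambda: hits.append(1))
cb_time = time.time() - t0

t0 = time.time()
gen_count = sum(1 for _ in walk_gen(10, 3))
gen_time = time.time() - t0

print(len(hits), gen_count)   # both visit 3**10 = 59049 leaves
print(gen_time > cb_time)     # generator version is typically slower
```

On CPython, the generator run spends most of its time in the inner for loop re-yielding values from the nested generators, not in saving and restoring frame state.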

Any ideas from the python experts?

--Edited 7:40 PDT

Here is the solver code which uses yield. To recover the original version, replace the first yield below with a call to the callback function, and replace the loop containing the second yield with a plain recursive call to solve.

def solve(self):
    for tp in self.pieces:
        if self.inuse[tp.name]: continue
        self.inuse[tp.name] = True
        while tp.next_orientation() is not None:
            if tp.insert_piece():
                self.n_trials += 1
                self.pieces_in += 1
                self.free_cells -= tp.size
                if self.pieces_in == len(self.pieces) or self.free_cells == 0:
                    self.solutions += 1
                    self.haveSolution = True
                    yield True
                    self.haveSolution = False
                else:
                    self.table.next_base_square()
                    for tf in self.solve():
                        yield tf
                tp.remove_piece()
                self.pieces_in -= 1
                self.table.set_base_square(tp.base_square)
                self.free_cells += tp.size
        self.inuse[tp.name] = False
        tp.reset_orientation()

The main loop which invokes the solver (after initialization, of course) is

start_time = time.time()
for tf in s.solve():
    printit(s)
end_time = time.time()
delta_time = end_time - start_time

In the callback version, the loop is gone, replaced by a single call to solve.
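To make the comparison concrete, here is a minimal, self-contained sketch of both variants side by side. TinySolver is hypothetical (it enumerates orderings of n items rather than solving exact cover), but the only differences between its two methods are exactly the two edits described above: the yield becomes a callback call, and the re-yield loop becomes a plain recursive call.

```python
class TinySolver:
    # Hypothetical stand-in for the question's solver: places "pieces"
    # 0..n-1 in every possible order and reports each arrangement.
    def __init__(self, n, callback=None):
        self.n = n
        self.callback = callback
        self.inuse = [False] * n
        self.placed = []

    def solve_cb(self):
        if len(self.placed) == self.n:
            self.callback(tuple(self.placed))  # was: yield tuple(...)
            return
        for p in range(self.n):
            if self.inuse[p]:
                continue
            self.inuse[p] = True
            self.placed.append(p)
            self.solve_cb()                    # was: for s in ...: yield s
            self.placed.pop()
            self.inuse[p] = False

    def solve_gen(self):
        if len(self.placed) == self.n:
            yield tuple(self.placed)
            return
        for p in range(self.n):
            if self.inuse[p]:
                continue
            self.inuse[p] = True
            self.placed.append(p)
            for sol in self.solve_gen():
                yield sol
            self.placed.pop()
            self.inuse[p] = False

found = []
TinySolver(3, callback=found.append).solve_cb()
gen_solutions = list(TinySolver(3).solve_gen())
print(found == gen_solutions, len(found))  # True 6 -- all 3! orderings
```

Both variants enumerate the same solutions in the same order; only the mechanism for delivering them to the caller differs.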

Solution

What I meant in my comment ("yielding from a recursive function sounds like it requires extra for loops to pass the results down to the caller") is this line:

for tf in self.solve():
    yield tf

These lines loop over the results coming up from the deeper recursion stages. That means every single result is re-yielded once at each level of the recursion above it, which adds up to a lot of unnecessary looping.

Let me illustrate with this example:

n = 0

def rekurse(z):
    global n
    if z:
        yield z
        for x in rekurse(z - 1):
            n += 1
            yield x

print(list(rekurse(10)))
print(n)

As you can see, this simply counts down from 10, so you'd expect a linear number of iterations. What you can see, though, is that n grows quadratically: rekurse(10) loops over 9 items, rekurse(9) over 8 items, and so on.

The more items you have, the more time Python spends on these simple lines. Callbacks completely avoid that problem, so I'd suspect that is the problem with your code.

An optimized implementation of PEP 380 could fix this (see this paragraph). In the meantime, I don't think it's a good idea to yield from recursive functions (at least if they recurse deeply); they just don't work well together.
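For what it's worth, PEP 380 has since landed in Python 3.3 as yield from, which replaces the explicit re-yield loop; CPython handles the delegation in the interpreter rather than in Python-level loops, which reduces (though does not eliminate) the per-value cost. A sketch of the counting example rewritten with it:

```python
def rekurse(z):
    # Same countdown as above, but delegating to the inner generator
    # via "yield from" (PEP 380) instead of an explicit for/yield loop.
    if z:
        yield z
        yield from rekurse(z - 1)

print(list(rekurse(10)))  # [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
```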
