Lazy binomial heap: Python implementation

Preface

The complete source files and test code have been uploaded.

For the background theory on lazy binomial heaps, see the notes Lazy Binomial Heaps. The heap here is built on circular doubly linked lists; the API is merge (the lazy merge), insert, extractMin, find_min, coalesce_step, and updateMin. The priority queue implemented here does not provide decreaseKey or delete; the open question about them is discussed at the end.

functions

The heap structure used here is the one shown below; it differs from the usual representation only in that each parent points to its leftmost child. It is also largely the same as the previous post, Binomial heap in Python (eager binomial heap), except that here the circular doubly linked lists are fully used and maintained, and the number of nodes is stored as well.
[figure: tricky solution, the heap structure used in this implementation]
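
The post does not show the class definitions themselves. The following is a minimal sketch, inferred from how the methods below use the fields; the class names Binomial_Trees and Lazy_Binomial_Heap come from the code in this post, while the constructor bodies and default values are assumptions.

    import math

    class Binomial_Trees:
        # one node of a binomial tree (sketch): parent points to the parent,
        # Lchild to the leftmost child; children and roots are kept in
        # circular doubly linked lists via sibling (next) and pre (previous)
        def __init__(self, x):
            self.data = x          # key stored in this node
            self.order = 0         # order k of the binomial tree B_k rooted here
            self.parent = None
            self.Lchild = None
            self.sibling = None    # insert() links the node into a singleton circle
            self.pre = None

    class Lazy_Binomial_Heap:
        # the root list is a circular doubly linked list; min caches the
        # minimum root, len stores the total number of nodes
        def __init__(self):
            self.head = None
            self.min = None
            self.len = 0

The methods shown in the rest of the post (merge, insert, extractMin, coalesce_step, updateMin) are presumably defined inside Lazy_Binomial_Heap, which also explains the four-space indentation of the listings below.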

To make the difference clear, the figure below shows the heap structure of a typical lazy binomial heap.
[figure: representation, the heap structure of a typical lazy binomial heap]

lazy merge

A simple doubly linked list splice, with no maintenance work at all: the lazy merge runs in O(1).

    def merge(self,H2):
        self.len += H2.len
        if not H2.head:                      # nothing to merge in
            return
        if not self.head:                    # this heap is empty: just adopt H2
            self.head = H2.head
            self.min = H2.min
            return
        # splice the two circular root lists together in O(1)
        temp = self.head.pre
        self.head.pre.sibling = H2.head
        H2.head.pre.sibling = self.head
        self.head.pre = H2.head.pre
        H2.head.pre = temp
        if self.min.data > H2.min.data:      # keep the cached min pointer correct
            self.min = H2.min
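
A quick usage sketch (relying on the class sketch above and on insert from the next section; the concrete values are made up for illustration): merging only splices the two circular root lists and compares the two min pointers, nothing is consolidated.

    h1 = Lazy_Binomial_Heap()
    h2 = Lazy_Binomial_Heap()
    for v in (5, 9, 2):
        h1.insert(v)            # three B_0 roots in h1
    for v in (7, 1):
        h2.insert(v)            # two B_0 roots in h2
    h1.merge(h2)                # O(1): splice root lists, compare min pointers
    print(h1.len, h1.min.data)  # 5 1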

insert

insert is implemented via merge: the new element is first wrapped into a one-node heap (make_heap) and then merged in, O(1).

    def insert(self,x):
        node = Binomial_Trees(x)
        node.sibling = node                  # singleton circular root list
        node.pre = node
        heap = Lazy_Binomial_Heap()          # wrap the node as a one-node heap
        heap.head = node
        heap.min = node
        heap.len = 1
        self.merge(heap)                     # lazy merge, O(1)
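
A short sketch of the resulting behaviour (values made up, assumes the class sketch above): after n inserts the root list simply contains n B_0 trees, since nothing is consolidated until extractMin is called.

    h = Lazy_Binomial_Heap()
    for v in (4, 8, 1, 6):
        h.insert(v)             # each insert is an O(1) lazy merge of a B_0
    print(h.len)                # 4
    print(h.min.data)           # 1, maintained by the comparison inside merge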

extractMin

extractMin can be broken into three steps:

  • First remove the root of the binomial tree that the min pointer refers to, which splits a B_k into B_{k-1}, ..., B_0, and lazily meld those trees back in. Since the parent pointers of B_{k-1}, ..., B_0 have to be cleared, this step is O(log n).
  • Then run coalesce_step, O(t + log n), where t is the number of trees currently in the heap.
  • Finally update the min pointer, O(log n) (the orders are already unique after the previous step).

    def extractMin(self):
        if not self.head: # empty heap exception
            raise Exception('empty heap')
        cur = self.min.Lchild
        x = self.min.data
        if cur: # order of min > 0
            while cur.parent:  # clear parents of new heap
                cur.parent = None
                # if self.min.data > cur.data:
                #     self.min = cur
                cur = cur.sibling
            if self.min.pre == self.min: # single heap, remove min, naturally unique order
                self.head = cur.pre
                # self.min = cur # unnecessary
                self.updateMin()
                self.len -= 1
                return x
            else: # multiple Bs. remove, merge, then coalesce and updateMin
                new_heap = Lazy_Binomial_Heap() # create new heap, children of min
                new_heap.head = cur
                new_heap.min = cur
                new_heap.len = 2**self.min.order - 1 # new heap created
                # new_heap.updateMin() # unnecessary
                if self.head == self.min: # to remove min. in case losing head
                    self.head = self.head.pre
                self.min.pre.sibling = self.min.sibling
                self.min.sibling.pre = self.min.pre
                self.len -= 2**self.min.order
                self.merge(new_heap)
                self.coalesce_step()
                self.updateMin()
                return x
        else: # order of min = 0
            if self.min.pre == self.min:  # single node, len = 1, empty after extract
                self.head = None
                self.min = None
                self.len = 0
                return x
            else: # multiple Bs
                if self.head == self.min: # to remove min. in case losing head
                    self.head = self.head.pre
                self.min.pre.sibling = self.min.sibling
                self.min.sibling.pre = self.min.pre
                self.len -= 1
                self.coalesce_step()
                self.updateMin()
                return x
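
Putting the operations together, repeatedly extracting the minimum should return the keys in ascending order. A small hedged self-check (assumes the class sketch above; the data is just random test input):

    import random

    h = Lazy_Binomial_Heap()
    data = [random.randrange(100) for _ in range(50)]
    for v in data:
        h.insert(v)
    out = [h.extractMin() for _ in range(len(data))]
    print(out == sorted(data))   # expected: True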

coalesce_step

For the details of coalesce_step, see Lazy Binomial Heaps. Roughly speaking, trees of the same order are merged bucket by bucket, in the spirit of radix sort. The implementation here is more direct: instead of following those exact phases, it simply scans the root list and, whenever two trees of the same order meet, links them and carries the result to the next order, so at no point do more than two trees of the same order coexist. One pass over all trees plus a pass over the log n buckets gives O(t + log n).

    def coalesce_step(self):
        n = int(math.log(self.len,2))+1      # floor(log2(len)) + 1 buckets, one per possible order
        L = [None] * n                       # L[i] holds at most one tree of order i
        cur = self.head
        while cur:
            # detach cur from the root list and turn it into a singleton circle
            if cur.pre != cur:
                cur.pre.sibling = cur.sibling
                cur.sibling.pre = cur.pre
                self.head = cur.sibling
            else:
                self.head = None
            cur.pre = cur
            cur.sibling = cur
            # carry: keep linking while the bucket for this order is occupied
            while L[cur.order]:
                if cur.data < L[cur.order].data:
                    # cur wins, the bucketed tree becomes cur's new leftmost child
                    if cur.Lchild:
                        cur.Lchild.pre.sibling = L[cur.order]
                        L[cur.order].pre = cur.Lchild.pre
                        L[cur.order].sibling = cur.Lchild
                        cur.Lchild.pre = L[cur.order]
                    cur.Lchild = L[cur.order]
                    L[cur.order].parent = cur
                    L[cur.order] = None
                else:
                    # the bucketed tree wins, cur becomes its new leftmost child
                    if L[cur.order].Lchild:
                        L[cur.order].Lchild.pre.sibling = cur
                        cur.pre = L[cur.order].Lchild.pre
                        cur.sibling = L[cur.order].Lchild
                        L[cur.order].Lchild.pre = cur
                    L[cur.order].Lchild = cur
                    cur.parent = L[cur.order]
                    cur = L[cur.order]
                    L[cur.order] = None
                cur.order += 1
            L[cur.order] = cur
            cur = self.head
        # relink the non-empty buckets into a new circular root list
        k = []
        for i in range(n):
            if L[i]:
                k.append(i)
        j = len(k)
        for i in range(j):
            L[k[i]].sibling = L[k[(i+1)%j]]
            L[k[i]].pre = L[k[(i-1)%j]]
        self.head = L[k[0]]
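
To see the consolidation at work, a small hedged check (assumes the class sketch above; the eleven keys are chosen only for illustration): eleven lazily inserted keys are eleven B_0 roots, and after one coalesce_step the root list should hold exactly one tree per set bit of 11, i.e. orders 0, 1 and 3.

    h = Lazy_Binomial_Heap()
    for v in range(11):
        h.insert(v)              # 11 lazy B_0 roots, no linking yet
    h.coalesce_step()            # 11 = 8 + 2 + 1  ->  B_3, B_1, B_0
    orders, cur = [], h.head
    while True:                  # walk the circular root list once
        orders.append(cur.order)
        cur = cur.sibling
        if cur is h.head:
            break
    print(sorted(orders))        # expected: [0, 1, 3]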

updateMin

Thanks to the previous step, all trees have been consolidated and the orders are unique again. All that is left is to update the min pointer, which keeps find_min at O(1).

    def updateMin(self):
        cur = self.head
        self.min = self.head.pre             # initial candidate: the root just before head
        self.head = self.head.pre            # shift head so the walk below stops after one full lap
        while cur != self.head:              # visit every remaining root once
            if cur.data < self.min.data:
                self.min = cur
            cur = cur.sibling
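
find_min is listed in the API at the top of the post but not shown. Given the min pointer maintained above it is a one-liner; the body below is an assumed sketch, only the name comes from the post.

    def find_min(self):
        if not self.min:                     # empty heap
            raise Exception('empty heap')
        return self.min.data                 # O(1) thanks to the cached min pointer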

The decreaseKey problem

My current understanding is that neither of these two representations, the lazy binomial heap or the eager binomial heap, can guarantee an O(log n) decreaseKey(v, k).

The reason is that an O(log n) decreaseKey relies on swapping only along the path from node v up to the root, layer by layer; since a binomial tree B_k on 2^k nodes has height k, that path has length O(log n).

In both representations, however, moving node v involves maintaining the parent pointers of v's siblings: every time v is swapped up one level, the parent pointers of all of its siblings at that level have to be fixed as well. The worst case is then a single tree with v at a leaf and k smaller than the current min, i.e. smaller than the root: swapping layer by layer amounts to walking over essentially all the nodes from the leaf up to the root, which is clearly O(n).

So although many slides do write out the implementation details of binomial heaps and analyze decreaseKey(v, k) and delete(v), there are also course slides that never mention decreaseKey for binomial heaps at all, nor delete(v), which depends on it.
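
For reference, the decreaseKey that such slides usually analyze does not move nodes at all: it only swaps the stored keys along the path from v to the root, so no sibling parent pointers are ever touched and the cost is the tree height, O(log n). The sketch below is written against the fields assumed above and is not part of this post's implementation; its obvious drawback is that any outside handle to node v afterwards refers to a different key.

    def decreaseKey(self, v, k):
        # sketch of the key-swapping variant (not the post's method): bubble the
        # key up by swapping data fields with the parent, CLRS-style
        if k > v.data:
            raise Exception('new key is larger than current key')
        v.data = k
        while v.parent and v.data < v.parent.data:
            v.data, v.parent.data = v.parent.data, v.data   # swap keys only
            v = v.parent
        if self.min and v.data < self.min.data:             # keep the cached min valid
            self.min = v

Whether this counts as a real decreaseKey(v, k), given that the node v no longer identifies the decreased key, is presumably exactly the question raised here.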

So I will just leave the question here; if anyone can answer it, please drop a comment or send me a private message. Much appreciated!
