NO.66 - AI Learning: Implementing Uniform Cost Search in Python

Purpose:

Uniform cost search evolves from breadth-first search: at each iteration it expands the node n on the frontier with the lowest current path cost g(n).

Source code:

Data structures:

  • frontier: the frontier, holding the nodes not yet expanded. It is maintained as a priority queue ordered by path cost (a minimal heapq-based sketch follows this list).
  • explored: the explored set, holding the states that have already been visited.
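
As an illustration of the frontier, here is a minimal sketch (not part of the original program) that stores it in a priority queue using Python's standard-library heapq module, instead of the hand-sorted list used in the full code below. It assumes the Node class defined later in this article; the counter only breaks ties so Node objects never get compared directly.

import heapq

# Minimal heapq-based frontier: each entry is (path_cost, counter, node).
# heapq always pops the entry with the smallest path_cost first.
# Assumes the Node class from the full listing below.
_frontier_heap = []
_counter = 0

def frontier_push(node):
    # Insert a node; the heap keeps the lowest path cost at the front
    global _counter
    heapq.heappush(_frontier_heap, (node.path_cost, _counter, node))
    _counter += 1

def frontier_pop():
    # Remove and return the node with the smallest path cost
    return heapq.heappop(_frontier_heap)[2]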

Algorithm flow (a condensed code sketch follows the list):

  • If the frontier is empty, return failure. Operation: EMPTY?(frontier)
  • Otherwise remove a leaf node from the frontier. Operation: POP(frontier)
  • Goal test: if the node passes, return it; otherwise add the leaf node's state to the explored set
  • Iterate over every action available from the leaf node:

        each action generates a child node;

        if the child's state is in neither the explored set nor the frontier, insert it into the frontier. Operation: INSERT(child, frontier)

        otherwise, if the frontier already contains this state with a higher path cost, replace that frontier node with the child
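
To make these steps concrete, the following condensed, self-contained sketch implements exactly this loop. The toy_graph dict and the function name are made up for illustration; the full program later in the article runs the same logic on the Romania road map stored in a DataFrame.

def uniform_cost_search_sketch(graph, start, goal):
    frontier = [(0, start, [start])]            # (path_cost, state, path)
    explored = set()
    while frontier:                             # EMPTY?(frontier)
        frontier.sort(key=lambda t: t[0])       # cheapest node comes first
        cost, state, path = frontier.pop(0)     # POP(frontier)
        if state == goal:                       # goal test on expansion
            return cost, path
        explored.add(state)
        for neighbor, step_cost in graph.get(state, {}).items():
            new_cost = cost + step_cost
            in_frontier = next((t for t in frontier if t[1] == neighbor), None)
            if neighbor not in explored and in_frontier is None:
                frontier.append((new_cost, neighbor, path + [neighbor]))   # INSERT(child, frontier)
            elif in_frontier is not None and in_frontier[0] > new_cost:
                frontier.remove(in_frontier)     # replace the costlier frontier entry
                frontier.append((new_cost, neighbor, path + [neighbor]))
    return None                                  # failure

# Hypothetical toy graph: directed edges with step costs
toy_graph = {'A': {'B': 1, 'C': 4}, 'B': {'C': 2, 'D': 5}, 'C': {'D': 1}, 'D': {}}
print(uniform_cost_search_sketch(toy_graph, 'A', 'D'))   # -> (4, ['A', 'B', 'C', 'D'])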

Algorithm performance analysis:

When all step costs are equal, uniform cost search behaves much like breadth-first search. The difference is the termination condition: breadth-first search stops as soon as it finds a solution, whereas uniform cost search still examines all nodes at the goal's depth to see which one has the lowest cost. In that case, uniform cost search needlessly does more work at depth d.
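
For reference (standard analysis, not from the original post): if every step cost is at least ε and the optimal solution cost is C*, uniform cost search expands O(b^(1 + ⌊C*/ε⌋)) nodes in the worst case, which can be much larger than the O(b^d) of breadth-first search.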

Example code (adapted from http://blog.csdn.net/jdh99):

import pandas as pd
from pandas import Series, DataFrame

# City information: city1 city2 path_cost
_city_info = None

# List of frontier nodes kept sorted by path cost, lowest first (a simple priority queue)
_frontier_priority = []


# Node data structure
class Node:
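    """Search-tree node: the state, a link to the parent node, the generating action, and the accumulated path cost g(n)."""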
    def __init__(self, state, parent, action, path_cost):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost


def main():
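    """Read a source and destination city in a loop and run uniform cost search between them."""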
    global _city_info
    import_city_info()
    
    while True:
        src_city = input('input src city\n')
        dst_city = input('input dst city\n')
        # result = breadth_first_search(src_city, dst_city)
        result = uniform_cost_search(src_city, dst_city)
        if not result:
            print('from city: %s to city %s search failure' % (src_city, dst_city))
        else:
            print('from city: %s to city %s search success' % (src_city, dst_city))
            path = []
            while True:
                path.append(result.state)
                if result.parent is None:
                    break
                result = result.parent
            size = len(path)
            for i in range(size):
                if i < size - 1:
                    print('%s->' % path.pop(), end='')
                else:
                    print(path.pop())


def import_city_info():
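    """Build the road-map table: each row is an undirected edge (city1, city2) with its path cost."""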
    global _city_info
    data = [{'city1': 'Oradea', 'city2': 'Zerind', 'path_cost': 71},
            {'city1': 'Oradea', 'city2': 'Sibiu', 'path_cost': 151},
            {'city1': 'Zerind', 'city2': 'Arad', 'path_cost': 75},
            {'city1': 'Arad', 'city2': 'Sibiu', 'path_cost': 140},
            {'city1': 'Arad', 'city2': 'Timisoara', 'path_cost': 118},
            {'city1': 'Timisoara', 'city2': 'Lugoj', 'path_cost': 111},
            {'city1': 'Lugoj', 'city2': 'Mehadia', 'path_cost': 70},
            {'city1': 'Mehadia', 'city2': 'Drobeta', 'path_cost': 75},
            {'city1': 'Drobeta', 'city2': 'Craiova', 'path_cost': 120},
            {'city1': 'Sibiu', 'city2': 'Fagaras', 'path_cost': 99},
            {'city1': 'Sibiu', 'city2': 'Rimnicu Vilcea', 'path_cost': 80},
            {'city1': 'Rimnicu Vilcea', 'city2': 'Craiova', 'path_cost': 146},
            {'city1': 'Rimnicu Vilcea', 'city2': 'Pitesti', 'path_cost': 97},
            {'city1': 'Craiova', 'city2': 'Pitesti', 'path_cost': 138},
            {'city1': 'Fagaras', 'city2': 'Bucharest', 'path_cost': 211},
            {'city1': 'Pitesti', 'city2': 'Bucharest', 'path_cost': 101},
            {'city1': 'Bucharest', 'city2': 'Giurgiu', 'path_cost': 90},
            {'city1': 'Bucharest', 'city2': 'Urziceni', 'path_cost': 85},
            {'city1': 'Urziceni', 'city2': 'Vaslui', 'path_cost': 142},
            {'city1': 'Urziceni', 'city2': 'Hirsova', 'path_cost': 98},
            {'city1': 'Neamt', 'city2': 'Iasi', 'path_cost': 87},
            {'city1': 'Iasi', 'city2': 'Vaslui', 'path_cost': 92},
            {'city1': 'Hirsova', 'city2': 'Eforie', 'path_cost': 86}]
            
    _city_info = DataFrame(data, columns=['city1', 'city2', 'path_cost'])
# print(_city_info)

'''
def breadth_first_search(src_state, dst_state):
    global _city_info
    
    node = Node(src_state, None, None, 0)
    frontier = [node]
    explored = []

    while True:
        if len(frontier) == 0:
            return False
        node = frontier.pop(0)
        explored.append(node.state)
        # Goal test
        if node.state == dst_state:
            return node
        if node.parent is not None:
            print('deal node:state:%s\tparent state:%s\tpath cost:%d' % (node.state, node.parent.state, node.path_cost))
        else:
            print('deal node:state:%s\tparent state:%s\tpath cost:%d' % (node.state, None, node.path_cost))
        
        # Iterate over child nodes
        for i in range(len(_city_info)):
            dst_city = ''
            if _city_info['city1'][i] == node.state:
                dst_city = _city_info['city2'][i]
            elif _city_info['city2'][i] == node.state:
                dst_city = _city_info['city1'][i]
            if dst_city == '':
                continue
            child = Node(dst_city, node, 'go', node.path_cost + _city_info['path_cost'][i])
            print('\tchild node:state:%s path cost:%d' % (child.state, child.path_cost))
            if child.state not in explored and not is_node_in_frontier(frontier, child):
                frontier.append(child)
                print('\t\t add child to frontier')
'''

def is_node_in_frontier(frontier, node):
    for x in frontier:
        if node.state == x.state:
            return True
    return False


def uniform_cost_search(src_state, dst_state):
    """Uniform cost search from src_state to dst_state over the road map in _city_info."""
    global _city_info, _frontier_priority

    _frontier_priority = []  # reset the frontier so repeated searches start from a clean queue
    node = Node(src_state, None, None, 0)
    frontier_priority_add(node)
    explored = []
    
    while True:
        if len(_frontier_priority) == 0:
            return False
        node = _frontier_priority.pop(0)
        explored.append(node.state)
        # Goal test
        if node.state == dst_state:
            print('\t this node is goal!')
            return node
        if node.parent is not None:
            print('deal node:state:%s\tparent state:%s\tpath cost:%d' % (node.state, node.parent.state, node.path_cost))
        else:
            print('deal node:state:%s\tparent state:%s\tpath cost:%d' % (node.state, None, node.path_cost))
    
        
        # Iterate over child nodes
        for i in range(len(_city_info)):
            dst_city = ''
            if _city_info['city1'][i] == node.state:
                dst_city = _city_info['city2'][i]
            elif _city_info['city2'][i] == node.state:
                dst_city = _city_info['city1'][i]
            if dst_city == '':
                continue
            child = Node(dst_city, node, 'go', node.path_cost + _city_info['path_cost'][i])
            print('\tchild node:state:%s path cost:%d' % (child.state, child.path_cost))
            
            if child.state not in explored and not is_node_in_frontier(_frontier_priority, child):
                frontier_priority_add(child)
                print('\t\t add child to frontier')
            elif is_node_in_frontier(_frontier_priority, child):
                # Replace with the node that has the lower path cost
                frontier_priority_replace_by_priority(child)


def frontier_priority_add(node):
    """
    Insert the node into the frontier, keeping the list sorted by path cost (lowest first).
    :param Node node:
    :return:
    """
    global _frontier_priority
    size = len(_frontier_priority)
    for i in range(size):
        # If the new node has a lower path cost than an existing node, insert it before that node
        if node.path_cost < _frontier_priority[i].path_cost:
            _frontier_priority.insert(i, node)
            return
    # Otherwise the new node's path cost is no lower than any node in the queue; append it to the end
    _frontier_priority.append(node)


def frontier_priority_replace_by_priority(node):
    """
    If the frontier already holds this state at a higher path cost, replace it with the cheaper node.
    :param Node node:
    :return:
    """
    global _frontier_priority
    size = len(_frontier_priority)
    for i in range(size):
        if _frontier_priority[i].state == node.state and _frontier_priority[i].path_cost > node.path_cost:
            print('\t\t replace state: %s old cost:%d new cost:%d' % (node.state,_frontier_priority[i].path_cost,node.path_cost))
            _frontier_priority[i] = node
            return


if __name__ == '__main__':
    main()
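
As a quick sanity check (expected behavior, not a captured run): entering Arad as the source city and Bucharest as the destination should make the search return the route Arad->Sibiu->Rimnicu Vilcea->Pitesti->Bucharest, whose total cost 140 + 80 + 97 + 101 = 418 is optimal on this map.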

 
