L3.1 Solving Problems by Searching (Part One): Key Points

  1. Bucket Problem

    • Use state-space search to measure 7 liters using 3, 5, and 9-liter buckets.
    • Actions: Fill, empty, or transfer water.
  2. Problem Formulation

    • Reflex agents map percepts directly to actions; this breaks down in large environments.
    • Goal-based agents consider future actions and their outcomes.
  3. Driving Problem

    • State: Agent's current position.
    • Actions: Possible routes.
    • Goal: Reach destination.
    • Path Cost: Measures solution efficiency.
  4. Problem-Solving Agent

    • Finds action sequences from initial state (S) to goal state (G).
    • Needs goal formulation and problem formulation.
  5. Driving from Beijing to Shanghai

    • Classic graph search problem.
    • Use Dijkstra's Algorithm or A* for optimal pathfinding.

 

Midterm Review Notes: Problem-Solving Agents

(Fundamentals of Artificial Intelligence - Lecture 3 Summary)


1. Introduction to Problem-Solving Agents

  • A Problem-Solving Agent finds a solution by searching for a sequence of actions that leads from an initial state to a goal state.
  • Example: Driving from Beijing to Shanghai, choosing the best route based on time, cost, and constraints.

2. Goal Formulation

  • The agent must formulate a goal based on:
    • Current Situation (where the agent is).
    • Performance Measure (time, cost, safety, etc.).
  • The goal should be well-defined (e.g., “reaching Shanghai”).

📝 Exam Tip: Be able to define a goal in a given problem scenario.


3. Problem Formulation

  • Defining the problem means specifying:
    1. Initial State (e.g., being in Beijing).
    2. Actions (e.g., traveling to Tianjin, Jinan, or Nanjing).
    3. Transition Model (what happens when an action is taken).
    4. Goal Test (determining if the goal is reached).
    5. Path Cost (measuring how "good" a solution is).
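
The five components above can be captured in one small structure. Below is a minimal, hypothetical Python sketch for the Beijing-to-Shanghai route problem; the road map and step costs are illustrative only (chosen so the totals match the path-cost example in Section 6), not real distances.

# Illustrative road map: ROADS[city] maps each neighboring city to a step cost.
ROADS = {
    "Beijing":  {"Tianjin": 10, "Nanjing": 15},
    "Tianjin":  {"Jinan": 15},
    "Jinan":    {"Shanghai": 25},
    "Nanjing":  {"Shanghai": 25},
    "Shanghai": {},
}

class RouteProblem:
    """The five components of a well-defined problem, bundled together."""
    def __init__(self, initial, goal):
        self.initial = initial                        # 1. Initial State
        self.goal = goal

    def actions(self, state):                         # 2. Actions: cities reachable in one step
        return list(ROADS[state])

    def result(self, state, action):                  # 3. Transition Model: Go(city) puts you in that city
        return action

    def goal_test(self, state):                       # 4. Goal Test
        return state == self.goal

    def step_cost(self, state, action, next_state):   # 5. Path Cost, accumulated one step at a time
        return ROADS[state][next_state]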

📝 Exam Tip: Be able to describe these five components for a problem.


4. State Space & Search Trees

  • State Space: All possible states the agent can be in.
  • Search Tree: A tree structure where:
    • Nodes represent states.
    • Edges represent actions leading to new states.
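
In code, a search-tree node usually stores more than the bare state: it also records its parent, the action that produced it, and the accumulated path cost. A minimal sketch, reusing the RouteProblem interface from Section 3 (the field names are conventional, not prescribed by the lecture):

class Node:
    """One node of the search tree."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # the state this node stands for
        self.parent = parent          # the node it was generated from
        self.action = action          # the action (edge) applied to the parent
        self.path_cost = path_cost    # total cost of the path from the root

def expand(problem, node):
    """Apply every legal action to a node's state, yielding the child nodes."""
    for action in problem.actions(node.state):
        nxt = problem.result(node.state, action)
        cost = node.path_cost + problem.step_cost(node.state, action, nxt)
        yield Node(nxt, parent=node, action=action, path_cost=cost)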

📝 Exam Tip: Draw a search tree for a given problem.


5. Actions & Transitions

  • Actions are the choices available to the agent (e.g., moving from one city to another).
  • Transitions occur when actions change the state.

Example:

RESULT(In(Beijing), Go(Tianjin)) → In(Tianjin)
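
Using the hypothetical RouteProblem sketch from Section 3, the same transition can be evaluated directly:

problem = RouteProblem(initial="Beijing", goal="Shanghai")
print(problem.result("Beijing", "Tianjin"))   # "Tianjin", i.e. RESULT(In(Beijing), Go(Tianjin)) = In(Tianjin)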

📝 Exam Tip: Given an action and state, determine the new state.


6. Solution & Execution

  • Search Algorithms find a solution (a sequence of actions).
  • Execution: The agent follows the found solution.
  • Solution Quality is measured by path cost (lower is better).

Example:

  • Path 1: Beijing → Tianjin → Jinan → Shanghai (cost = 50)
  • Path 2: Beijing → Nanjing → Shanghai (cost = 40)
  • The second path is optimal (least cost).
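
Using the illustrative step costs from the Section 3 sketch (chosen to reproduce exactly these totals), the comparison can be checked in a few lines:

def path_cost(path):
    """Sum the step costs along a list of cities, using the illustrative ROADS map."""
    return sum(ROADS[a][b] for a, b in zip(path, path[1:]))

print(path_cost(["Beijing", "Tianjin", "Jinan", "Shanghai"]))  # 50
print(path_cost(["Beijing", "Nanjing", "Shanghai"]))           # 40 -> optimal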

📝 Exam Tip: Compare paths and choose the best one.


7. Problem-Solving Algorithm (Pseudo-Code)

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
    persistent: seq, an action sequence, initially empty
                state, some description of the current world state
                goal, a goal, initially null
                problem, a problem formulation
    state ← UPDATE-STATE(state, percept)
    if seq is empty then
        goal ← FORMULATE-GOAL(state)
        problem ← FORMULATE-PROBLEM(state, goal)
        seq ← SEARCH(problem)
        if seq = failure then return a null action
    action ← FIRST(seq)
    seq ← REST(seq)
    return action
  • Formulate → Search → Execute
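
As a runnable illustration of Formulate → Search → Execute, here is a minimal Python sketch that plugs in the hypothetical RouteProblem from Section 3 and a tiny breadth-first SEARCH (it minimizes the number of actions, not the path cost); the dict `memory` plays the role of the pseudo-code's persistent variables.

from collections import deque

def breadth_first_search(problem):
    """Tiny BFS: returns a list of actions reaching the goal, or None on failure."""
    frontier = deque([(problem.initial, [])])      # (state, actions taken so far)
    explored = set()
    while frontier:
        state, actions = frontier.popleft()
        if problem.goal_test(state):
            return actions
        if state in explored:
            continue
        explored.add(state)
        for action in problem.actions(state):
            frontier.append((problem.result(state, action), actions + [action]))
    return None

def simple_problem_solving_agent(percept, memory):
    """One Formulate -> Search -> Execute step; `memory` persists between calls."""
    memory["state"] = percept                                 # UPDATE-STATE
    if not memory.get("seq"):                                 # no plan left: formulate and search
        problem = RouteProblem(memory["state"], "Shanghai")   # FORMULATE-GOAL / FORMULATE-PROBLEM
        memory["seq"] = breadth_first_search(problem) or []   # SEARCH
        if not memory["seq"]:
            return None                                       # null action on failure
    return memory["seq"].pop(0)                               # FIRST(seq) / REST(seq)

Kept across calls, the same memory dict hands out the planned actions one at a time; the first call, e.g. simple_problem_solving_agent("Beijing", memory), plans the route and returns its first leg.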

📝 Exam Tip: Understand the three steps and apply them to a new scenario.


8. Well-Defined Problems & Solutions

A problem must have five components:

  1. Initial State
  2. Actions
  3. Transition Model
  4. Goal Test
  5. Path Cost

Example:

  • Chess: Goal = Checkmate
  • Robot Vacuum: Goal = Clean all rooms

📝 Exam Tip: Given a problem, identify these five components.


9. Optimal Solutions

  • A solution is optimal if it has the lowest path cost among all solutions.
  • Example: Choosing the shortest or fastest route to Shanghai.

📝 Exam Tip: Evaluate paths based on cost.


Midterm Exam Preparation Checklist ✅

  • Define and explain a problem-solving agent.
  • Identify the five components of a well-defined problem.
  • Construct a search tree for a given problem.
  • Apply state transitions based on actions.
  • Choose the best path based on path cost.
  • Understand the Formulate → Search → Execute model.

🔥 Quick Recap: Key Formulas & Concepts

  1. State Transition: RESULT(s, a) = s'
  2. Path Cost Calculation: c(s, a, s')
  3. Search Algorithm Steps: Formulate → Search → Execute
  4. Evaluation Metrics: Performance Measure, Path Cost, Goal Test

Below, we analyze three classic problem-solving cases—Vacuum World, 8-Puzzle, and 8-Queens—from multiple perspectives, including problem definition, state space, action design, and search methods. Bilingual notes (English-Chinese) are also provided.


1. Vacuum World

Problem Definition

  • Goal: Clean all rooms.
  • Initial State: The vacuum cleaner is in a certain room, and some rooms may contain dirt.
  • Actions: Move Left, Move Right, Suck Dirt.
  • Path Cost: Each movement or sucking operation incurs a cost of 1 unit.

Search Methods

  • BFS (Breadth-First Search): Guarantees an optimal solution (fewest steps) but has high space complexity.
  • DFS (Depth-First Search): May fall into infinite loops (e.g., repeatedly moving left and right); applicable only to finite state spaces.
  • A* (A-star search): Uses a heuristic function (e.g., the number of uncleaned rooms) to speed up the search while remaining admissible.

State Space Example

State 1: (Loc=A, A=Dirty, B=Dirty) → Suck → (Loc=A, A=Clean, B=Dirty)
State 2: (Loc=A, A=Clean, B=Dirty) → Move Right → (Loc=B, A=Clean, B=Dirty) → Suck → (Loc=B, A=Clean, B=Clean) = Goal
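
A minimal breadth-first search over this two-room world, with each state encoded as (location, status of A, status of B); this is just one possible encoding, sketched for illustration:

from collections import deque

def vacuum_bfs(start=("A", "Dirty", "Dirty")):
    """BFS over the two-room vacuum world; returns a shortest action sequence."""
    def successors(state):
        loc, a, b = state
        cleaned = (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")
        yield "Suck", cleaned             # sucking removes the dirt in the current room
        yield "Move Left", ("A", a, b)    # moving never changes the dirt
        yield "Move Right", ("B", a, b)

    frontier = deque([(start, [])])
    explored = set()
    while frontier:
        state, plan = frontier.popleft()
        if state[1] == "Clean" and state[2] == "Clean":   # goal test: both rooms clean
            return plan
        if state in explored:
            continue
        explored.add(state)
        for action, nxt in successors(state):
            frontier.append((nxt, plan + [action]))
    return None

print(vacuum_bfs())   # ['Suck', 'Move Right', 'Suck']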

2. 8-Puzzle

Problem Definition

  • Goal: Arrange the 3×3 tiles into the target state (e.g., tiles 1-8 in order with the blank in the bottom-right corner).
  • Initial State: Any solvable initial arrangement (half of all random arrangements are unsolvable).
  • Actions: Move the blank left, right, up, or down (by swapping it with the adjacent tile).
  • Path Cost: Each move counts as 1 unit of cost.

Search Methods

  • A* Algorithm: Uses Manhattan Distance as the heuristic function (the sum of each tile's horizontal and vertical distances from its goal position), guaranteeing an optimal solution.
  • IDA* (Iterative Deepening A*): Combines depth-first search with the heuristic to reduce memory usage; suitable for large state spaces.

Key Features

  • Solvability Check: Determined by inversion parity (for the standard goal, the number of inversions must be even).
  • Total Number of States: 9! = 362,880 arrangements (counting the blank as a tile); only half of them are reachable from any given start.
  • Upper Bound for the Optimal Solution: Any solvable state can be solved in at most 31 moves.
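
Two of the facts above, the parity-based solvability check and the Manhattan-distance heuristic, are easy to express in code. A minimal sketch with the board stored as a tuple of nine numbers, 0 standing for the blank:

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # blank (0) in the bottom-right corner

def is_solvable(board):
    """8-Puzzle solvability for this goal: the number of inversions (blank excluded) must be even."""
    tiles = [t for t in board if t != 0]
    inversions = sum(1 for i in range(len(tiles))
                       for j in range(i + 1, len(tiles))
                       if tiles[i] > tiles[j])
    return inversions % 2 == 0

def manhattan(board, goal=GOAL):
    """A* heuristic: sum of each tile's horizontal + vertical distance from its goal cell."""
    total = 0
    for idx, tile in enumerate(board):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total

print(is_solvable((1, 2, 3, 4, 5, 0, 7, 8, 6)))  # True
print(manhattan((1, 2, 3, 4, 5, 0, 7, 8, 6)))    # 1 (only tile 6 is one move away)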


3. 8-Queens

Problem Definition

  • Goal: Place 8 queens on an 8×8 chessboard so that no two queens attack each other (i.e., no two share a row, column, or diagonal).
  • Initial State: An empty chessboard.
  • Actions: Place one queen per row while ensuring no conflicts.

Solution Methods

  • Backtracking: Places queens row by row and backtracks when a conflict arises. Worst-case time complexity is O(n!).
  • Min-Conflicts Heuristic: Randomly initializes the queen positions and iteratively adjusts them to minimize conflicts; commonly used in local search.

Solution Statistics

  • Total Solutions: 92 (only 12 essentially distinct solutions once symmetry is taken into account).
  • Backtracking Optimization: Store each queen's column in a one-dimensional array (one entry per row); with additional occupancy flags for columns and diagonals, every new placement can be conflict-checked in O(1) time.
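
A compact backtracking solver along these lines, placing one queen per row and using occupancy flags for columns and both diagonal directions so each conflict check is O(1) (a sketch, not the only way to write it):

def solve_n_queens(n=8):
    """Backtracking: cols_of[r] is the column of the queen placed in row r."""
    solutions = []
    cols, diag1, diag2 = set(), set(), set()   # occupied columns and diagonals
    cols_of = []

    def place(row):
        if row == n:                            # all rows filled: record one solution
            solutions.append(tuple(cols_of))
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                        # conflict: same column or diagonal
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            cols_of.append(col)
            place(row + 1)                      # try the next row
            cols_of.pop()                       # backtrack
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)

    place(0)
    return solutions

print(len(solve_n_queens(8)))   # 92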


Bilingual Notes (中英对照笔记)

| Concept (中文) | English Term | Key Description |
| --- | --- | --- |
| 吸尘器世界 | Vacuum World | The goal is to clean all rooms; BFS guarantees an optimal solution. |
| 曼哈顿距离 | Manhattan Distance | Heuristic for 8-Puzzle search; sums each tile's distance from its goal position. |
| 回溯法 | Backtracking | Places queens row by row and backtracks when conflicts occur. |
| 最小冲突启发式 | Min-Conflicts Heuristic | Iteratively adjusts queen positions to minimize conflicts. |
| 逆序数奇偶性 | Inversion Parity | Determines whether an 8-Puzzle initial state is solvable. |

Summary

  • Vacuum World: Highlights the comparison of fundamental search strategies (BFS vs. DFS vs. A*).
  • 8-Puzzle: Focuses on heuristic function design and state-space optimization (A* and IDA*).
  • 8-Queens: Demonstrates how brute-force backtracking and heuristic local search complement each other.

These three cases provide deep insights into state-space modeling, search algorithm selection, and heuristic design in problem-solving.




1. Vacuum World (Plain-Language Explanation)

Imagine a simple robot vacuum that shuttles between two rooms and sucks up the dust on the floor. The problem is to have the vacuum figure out how to get every room clean in as few steps as possible.

What can the vacuum do?

  • Move Left (go to the other room)
  • Move Right (go to the other room)
  • Suck (remove the dirt in the current room)

How can we solve it?

  1. The naive way (wander at random): walk around randomly and suck whenever dirt is found; this may repeat a lot of unnecessary moves.
  2. The systematic way (Breadth-First Search, BFS): try every possible path level by level, which guarantees the shortest cleaning route but requires remembering many states and uses a lot of memory.
  3. The clever way (A* search): estimate which rooms are still dirty and head for them first, trying to finish the cleaning along the shortest possible route.

An example

Suppose your home has two rooms:

(Loc=A, A=Dirty, B=Dirty)  (room A is dirty, and so is room B)

  1. The robot starts in room A and sucks up the dirt there:
    (Loc=A, A=Clean, B=Dirty)

  2. It then moves to room B:
    (Loc=B, A=Clean, B=Dirty)

  3. Finally it sucks again; every room is now clean and the task is done:
    (Loc=B, A=Clean, B=Clean) ✅

Summary: this problem is like asking a child to tidy the rooms; we want to teach them to finish the job in the fewest possible steps.


2. 8-Puzzle (Plain-Language Explanation)

Imagine a 3×3 sliding puzzle with eight numbered tiles (1-8) and one blank. You slide the tiles around, and the goal is to put them into the correct order.

How is it played?

  • You can move the blank left, right, up, or down, sliding the neighboring tile into its place.
  • Your goal is to arrange all the numbers, for example:
    1 2 3
    4 5 6
    7 8 _

How can we solve it?

  1. Trial and error (slide at random): with luck you may succeed, but it can take a very large number of attempts.
  2. The clever way (A* search):
    • Work out how far each tile is from where it belongs (e.g., tile 1 belongs in the top-left corner; if it sits in the bottom-right, it is far away).
    • Prefer moves that bring the whole board closer to the goal state.
    • This avoids unnecessary moves and solves the puzzle faster.

An example

Suppose the puzzle looks like this:

1 2 3
4 5 _
7 8 6

  • Tile 6 is not yet in its correct position; slide it up one step into the blank:
1 2 3
4 5 6
7 8 _
  • Done!

Summary: this is just like the sliding puzzles we played as children; we want to finish the puzzle in as few moves as possible.


3. 8-Queens (Plain-Language Explanation)

Imagine an 8×8 chessboard on which you must place 8 queens so that they cannot attack one another.

How does a queen attack?

  • Horizontally (no two queens may share a row)
  • Vertically (no two queens may share a column)
  • Diagonally (no two queens may share a diagonal)

How can we solve it?

  1. Brute force (backtracking)

    • Place queens row by row; whenever a conflict appears, back up to the previous row, try a different square, and continue until a placement is found in which no queens attack each other.
    • This method is guaranteed to find every correct solution, but it may try a great many placements and is not very efficient.
  2. The cleverer way (min-conflicts heuristic)

    • First place all 8 queens at random (ignoring conflicts).
    • Then adjust the queens one at a time to reduce conflicts (e.g., move a queen to the square with the fewest conflicts).
    • This is faster than brute force and usually finds a correct solution quickly.

An example

Suppose 8 queens have already been placed:

. . Q . . . . .
Q . . . . . . .
. . . . Q . . .
. . . . . . . Q
. . . Q . . . .
. . . . . Q . .
. Q . . . . . .
. . . . . . Q .

  • If a queen conflicts with another, move it to a position with fewer conflicts, and repeat until every queen is safe.

Summary: this problem is like a "line-up" game: everyone must stand where they do not interfere with anyone else.


Summary

| Problem | Goal | Method | Intuition |
| --- | --- | --- | --- |
| Vacuum World | Clean all rooms in the fewest steps | Have the robot vacuum with the fewest actions | A robot sweeping the floor |
| 8-Puzzle | Complete the puzzle in the fewest moves | Compute the shortest path and avoid unnecessary slides | A sliding-tile puzzle |
| 8-Queens | Keep the 8 queens from attacking each other | Adjust the queens one by one to reduce conflicts | A "line-up" game |

At heart, these problems all ask how to accomplish a specific task in the fewest possible steps, and search algorithms are the "strategies" that help us find the optimal solution.

Slide 31: Searching for Solutions

This slide is a section header: the slides that follow discuss how to solve search problems.


Slides 32-33: The Search Tree and the Search Process

Slide content

The process of searching for a solution:

  1. Searching amounts to looking for a goal state in the state space.
  2. Every problem can be turned into a search tree,
    • generated from the initial state and the transition model.
  3. Initial State
    • Serves as the root node of the search tree.
    • Also called a search node.
  4. Expanding
    • Applies the transition model to generate new states.
    • Example:
      • After expanding Beijing, the possible next states are Taiyuan, Shijiazhuang, and Tianjin.
  5. Leaf Nodes
    • Goal states (which have no successors),
    • or frontier nodes that have not yet been expanded.

Example: building the search tree

  • Initial state: Beijing.
  • Expanding Beijing generates Taiyuan, Shijiazhuang, and Tianjin.
  • Keep expanding, generating new child nodes until the goal is found.

Explanation

The search tree is the core structure of AI search problems. It can be explored with:

  • Breadth-First Search (BFS): expands level by level; suited to shortest-path problems.
  • Depth-First Search (DFS): dives deep recursively, but may get stuck in loops.
  • Heuristic search (A*): guides the search with an estimated cost.

Slides 34-37: Avoiding Repeated States

Slide content

Repeated States

  • What is a repeated state?
    • A state that has already been visited or expanded during the search.
  • Why avoid them?
    • They can cause infinite loops, for example:
      • Beijing -> Taiyuan -> Beijing -> Taiyuan...
    • Even if the state space is finite, they can greatly increase the amount of computation.

Ways to avoid repeated states

  1. Do not return to the state just visited
    • Prevents the search from going back to the parent state.
  2. Do not generate paths that contain cycles
    • Prevents a state from returning to an ancestor state.
  3. Do not generate any previously visited state
    • Requires checking against all expanded states.

Using a data structure to record visited states

  • Use a Closed List
    • It stores every node that has been expanded.
    • If the current node is already in the list, discard it.
  • The GRAPH-SEARCH algorithm:
    function GRAPH-SEARCH(problem) returns a solution, or failure
        initialize the frontier using the initial state of problem
        initialize the explored set to be empty
        loop do
            if the frontier is empty then return failure
            choose a leaf node and remove it from the frontier
            if the node contains a goal state then return the corresponding solution
            add the node to the explored set
            expand the chosen node, adding the resulting nodes to the frontier
                only if not in the frontier or explored set
    
Explanation
  • Avoiding repeated states is crucial in search problems; otherwise the search can become very inefficient or even fail to terminate.
  • Common approaches:
    • Depth-First Search (DFS) may run into cycles and therefore needs backtracking.
    • Breadth-First Search (BFS) can use a queue for the frontier plus a set for the states already visited.
    • A* search is typically optimized with a priority queue together with a closed list.

Condensed Review Notes

  1. Search Tree

    • Initial State = Root Node.
    • Expanding applies Transition Model to generate new states.
    • Leaf Nodes: No successors (goal states or unexpanded nodes).
    • Example: Expanding Beijing → Generates Taiyuan, Shijiazhuang, Tianjin.
  2. Avoiding Redundant Paths

    • Avoid paths like Beijing → Taiyuan → Beijing (Loop).
    • Do not revisit parent state or ancestor state.
  3. Handling Repeated States

    • Use Closed List to store expanded nodes.
    • Graph Search Algorithm:
      • Expand only if not in frontier or explored set.
      • Prevents infinite loops.

Intuitive Explanation: Search Trees and Avoiding Repeated States


1. Search Trees: Finding a Path Like GPS Navigation

Core Concept:

Think of problem-solving as a tree where:

  • The root node is the starting point (e.g., Beijing).
  • The branches represent possible next steps (e.g., going to Taiyuan, Shijiazhuang, or Tianjin).
  • The leaf nodes are the possible destinations (e.g., Shanghai) or unexplored paths.
Example: Finding a Route from Beijing to Shanghai
  • Root node: Beijing (start point).
  • Expanding: From Beijing → Taiyuan, Beijing → Shijiazhuang, Beijing → Tianjin.
  • Further expanding: From Taiyuan → Zhengzhou, Shijiazhuang → Jinan... until reaching Shanghai (goal node).
Different Search Strategies:
  • BFS (Breadth-First Search): Like systematically checking every street, ensuring the shortest route.
  • DFS (Depth-First Search): Like an adventurer exploring one path deeply; may take a long way or get stuck.
  • A* Algorithm: Like an intelligent GPS that prioritizes paths that seem closer to the destination.

2. Avoiding Repeated States: Why Not Take a U-Turn?

The Problem:

If we allow revisiting the same locations, the search may enter an infinite loop or waste resources.

  • Example: Beijing → Taiyuan → Beijing → Taiyuan... (looping without reaching Shanghai).
Solutions:
  1. Closed List (Visited List): Keep track of places already visited; if a place appears again, ignore it.
    • Similar to marking places on a travel map to avoid revisiting the same attractions.
  2. Graph Search Algorithm: Before expanding a location, check two places:
    • Frontier (Queue of Unexplored Locations): Is this place already planned for exploration?
    • Explored Set (Visited Locations): Have we already visited this place?
    • If neither, add it to the queue for further exploration.
Simplified Code Logic:
# Runnable sketch; the neighbor map below is illustrative, not real geography.
neighbors = {
    "Beijing": ["Taiyuan", "Shijiazhuang", "Tianjin"],
    "Taiyuan": ["Zhengzhou"], "Shijiazhuang": ["Jinan"], "Tianjin": ["Jinan"],
    "Zhengzhou": ["Nanjing"], "Jinan": ["Nanjing"], "Nanjing": ["Shanghai"],
    "Shanghai": [],
}
closed_list = set()        # cities already expanded
frontier = ["Beijing"]     # FIFO queue of cities to explore

while frontier:
    current_city = frontier.pop(0)          # take the first city from the frontier
    if current_city == "Shanghai":
        print("Success!")
        break
    if current_city not in closed_list:
        frontier.extend(neighbors[current_city])   # add its neighbors to the frontier
        closed_list.add(current_city)              # mark it as visited

3. Practical Applications: General Strategies for Avoiding Repetition

The idea of "avoiding repetition" appears in many real-world scenarios:

  • Preventing duplicate form submissions (on websites):
    • Disable the button after clicking (similar to marking “processed” in a closed list).
    • Use a debounce mechanism (e.g., only accept one click every 0.5 seconds).
  • Plagiarism detection (in documents):
    • The system stores submitted content and highlights repeated sections in red.
  • GPS navigation:
    • A navigation app will not route you in circles; it keeps track of where you have been and finds one best route.

Summary

  • Search trees represent problem-solving paths, where the root is the start, and leaves are the goal or unexplored paths.
  • Avoiding repeated states prevents infinite loops and wasted effort by keeping track of visited locations.
  • This idea is widely used in practical applications like preventing duplicate submissions and optimizing navigation.

