AI Final Exam Review (PolyU)

The full document is available via the link in the comments.

Tutorial 2-Search Algorithm
Breadth-first search (BFS)
[图片]
The open list is implemented as a queue (FIFO), so newly generated nodes are added at the tail.
[图片]
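A minimal BFS sketch in Python (an illustrative example, not from the tutorial), assuming the graph is an adjacency dict mapping each node to its successors:

    from collections import deque

    def bfs(graph, start, goal):
        open_list = deque([start])          # FIFO queue: new nodes go to the tail
        closed = {start}
        while open_list:
            node = open_list.popleft()      # expand the node at the head
            if node == goal:
                return True
            for successor in graph[node]:
                if successor not in closed:
                    closed.add(successor)
                    open_list.append(successor)   # append at the tail
        return False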
Depth-first search (DFS)
[图片]
The open list is implemented as a stack (LIFO), so newly generated nodes are added at the head.
[图片]
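Treating the open list as a stack instead of a queue gives DFS; a sketch under the same assumptions as the BFS example:

    def dfs(graph, start, goal):
        open_list = [start]                 # used as a LIFO stack
        closed = {start}
        while open_list:
            node = open_list.pop()          # expand the most recently added node
            if node == goal:
                return True
            for successor in graph[node]:
                if successor not in closed:
                    closed.add(successor)
                    open_list.append(successor)   # pushed onto the top of the stack
        return False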
Hill climbing (a heuristic search algorithm)
[图片]
The hill-climbing algorithm moves uphill along the steepest possible path until it can climb no further, so it may return a state that is only a local maximum (see the sketch after the list below).
Advantages

  • Avoids traversing the entire solution space.
  • Improves search efficiency.

Disadvantages

  • Does not necessarily find the global maximum; it may converge on a local maximum.
  • In plateau cases, the hill climber may not be able to determine in which direction it should step, and may wander in a direction that never leads to improvement.
  • Ridges problem: if the target function creates a narrow ridge that ascends in a non-axis-aligned direction, the hill climber can only ascend the ridge by zig-zagging.
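A minimal steepest-ascent hill-climbing sketch, assuming hypothetical objective() and neighbours() functions supplied by the caller:

    def hill_climbing(start, objective, neighbours):
        current = start
        while True:
            # steepest ascent: inspect all neighbours and pick the best one
            best = max(neighbours(current), key=objective, default=None)
            if best is None or objective(best) <= objective(current):
                return current          # no uphill move left: a local (possibly global) maximum
            current = best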

Best-first Search (Greedy Search)
The node with the lowest evaluation is expanded first, i.e., argmin_n f(n)
f(n) = h(n) = estimated cost of the cheapest path from the state at node n to a goal state
If n is a goal node, then h(n) = 0.
[图片]
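A greedy best-first search sketch under the same graph assumptions as the BFS/DFS examples, with a heuristic h; the open list is ordered by h(n) alone (node labels are assumed comparable for tie-breaking):

    import heapq

    def greedy_best_first(graph, start, goal, h):
        open_list = [(h(start), start)]          # priority queue ordered by h(n) only
        closed = set()
        while open_list:
            _, node = heapq.heappop(open_list)   # expand the node with the lowest h(n)
            if node == goal:
                return True
            closed.add(node)
            for successor in graph[node]:
                if successor not in closed:
                    heapq.heappush(open_list, (h(successor), successor))
        return False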
Limitations of Greedy Search

  1. Not optimal
     [图片]

A* Search
[图片]

Exercise
[图片]
Breadth-first search: A B C D E F
[图片]
Depth-first search: A D F
[图片]
    # A* search: expand the node with the lowest f(n) = g(n) + h(n).
    # Assumes each node exposes .neighbours, a dict mapping neighbour -> step cost.
    def a_star(start, end, h):
        open_list, close_list = [start], []
        g, f = {start: 0}, {start: h(start)}
        while open_list:
            current = min(open_list, key=lambda n: f[n])   # lowest f cost in open_list
            if current == end:
                return g[current]                          # cost of the path found
            open_list.remove(current)
            close_list.append(current)
            for neighbour, step in current.neighbours.items():
                if neighbour in close_list:
                    continue
                new_f = g[current] + step + h(neighbour)   # f = g + h
                if neighbour not in open_list or new_f <= f[neighbour]:
                    g[neighbour], f[neighbour] = g[current] + step, new_f
                    if neighbour not in open_list:
                        open_list.append(neighbour)
        return None                                        # no path found
[图片]

Tutorial 3-Genetic Algorithm
[图片]

Tutorial 4-Multi-objective Optimization
[图片]
Exercise 1 is the same as Exercise 4 in Tutorial 3.
[图片]

Tutorial 5-Regression and Gradient Descent
[图片]

Tutorial 6-Scaling, Overfitting and Kmeans
[图片]

Tutorial 7-Building a Perceptron
[图片]

Tutorial 8-Building a Neural Network
[图片]

Tutorial 9-Attention and Transformer
[图片]

Tutorial 10-Ensemble Learning
[图片]

Tutorial 11-Fuzzy
[图片]

Tutorial 12-Fuzzy2
[图片]
Column by column, take the pairwise minimum, then take the maximum over the whole column.
[图片]
Take the element-wise minimum of each row of P with each column of Q, then select the maximum among those minima.
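A minimal sketch of the max-min composition described above, with hypothetical example matrices P and Q (membership grades in [0, 1]):

    def max_min_composition(P, Q):
        # R[i][j] = max over k of min(P[i][k], Q[k][j])
        rows, inner, cols = len(P), len(Q), len(Q[0])
        return [[max(min(P[i][k], Q[k][j]) for k in range(inner))
                 for j in range(cols)]
                for i in range(rows)]

    P = [[0.7, 0.5], [0.8, 0.4]]
    Q = [[0.9, 0.6], [0.1, 0.7]]
    print(max_min_composition(P, Q))   # [[0.7, 0.6], [0.8, 0.6]]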
[图片]

Midterm exam:

  1. Graph search * 2
  2. Boolean algebra * 2
  3. BFS / DFS
  4. A*
  5. Bayes' theorem
  6. K-means
  7. Backpropagation