Autonomous Driving Study Notes -- Path Planning (1) [Path Finding -- Hybrid A*]

Hybrid A* Algorithm

  • It uses a continuous search space.
  • It always finds a solution if one exists (completeness).
  • The solution it finds is drivable (it respects the maneuvers the vehicle can actually execute).

Compared with classic A*, Hybrid A* can be understood as building on the ideas of A* while replacing the discrete search space with a continuous one, planning according to the vehicle's kinematics.

A comparison of several path-planning approaches:

  1. A*: associates costs with the centers of cells and only visits states that correspond to grid-cell centers. Because the world is decomposed into discrete cells and plans only pass through cell centers, the resulting path may not be fully executable by a real vehicle.
  2. Field D* (Ferguson and Stentz, 2005): associates costs with cell corners and allows arbitrary linear paths from cell to cell.
  3. Hybrid A*: associates a continuous state with each cell; the score of a cell is the cost of its associated continuous state.

Vehicle Motion Equations

Hybrid A*'s expansion step samples successors according to the vehicle's motion equations. Hybrid A* represents the vehicle state on a 4-D discrete grid <x, y, θ, is_forward>: two dimensions for the x-y position of the vehicle center (in smooth map coordinates), a third for the vehicle's heading θ in the world frame, and a fourth for the direction of motion (forward or reverse).

$x(t+\Delta t) = x(t) + v \cdot \Delta t \cdot \cos(\theta)$
$y(t+\Delta t) = y(t) + v \cdot \Delta t \cdot \sin(\theta)$
$\theta(t+\Delta t) = \theta(t) + \omega \cdot \Delta t$

where, from the bicycle model, the yaw rate is
$\omega = \frac{v}{L} \tan(\delta(t))$
with $L$ the wheelbase and $\delta(t)$ the steering angle.
For most vehicles, the wheels' maximum steering angle is within about 35 degrees.
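The motion equations above can be sketched directly in Python (a minimal illustration; the function name and the specific values for speed, wheelbase, and time step are mine, not from the original text):

```python
from math import cos, sin, tan, pi

def bicycle_step(x, y, theta, v, delta, L, dt):
    """Advance the bicycle model by one time step.

    x, y  : vehicle position         theta : heading in radians
    v     : speed                    delta : steering angle in radians
    L     : wheelbase                dt    : time step
    """
    omega = v / L * tan(delta)                    # yaw rate from the bicycle model
    x_next = x + v * dt * cos(theta)
    y_next = y + v * dt * sin(theta)
    theta_next = (theta + omega * dt) % (2 * pi)  # keep heading in [0, 2*pi)
    return x_next, y_next, theta_next

# With zero steering, the vehicle moves straight along its heading:
x, y, theta = bicycle_step(0.0, 0.0, 0.0, v=1.0, delta=0.0, L=2.0, dt=1.0)
```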

Hybrid A* Heuristics

In the Urban Challenge, the Stanford team used two heuristics:

  1. Non-holonomic-without-obstacles heuristic: ignores surrounding obstacles but accounts for the vehicle's own kinematic constraints. It can be precomputed over the entire 4D space (vehicle position, heading, and direction of motion), capturing the cost of reaching the goal position at the goal heading.
  2. Holonomic-with-obstacles heuristic: computes the vehicle's shortest distance to the goal via 2D dynamic programming, ignoring heading and direction of motion.

The two heuristics can be combined in any of these ways: (h1+h2)/2, min(h1, h2), or max(h1, h2). Note that max(h1, h2) dominates the other two combinations while remaining admissible whenever both components are.
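The holonomic-with-obstacles heuristic can be sketched as a breadth-first search over the 2D grid (a minimal illustration; the function name and the unit-cost-per-cell, 4-connected-grid assumptions are mine):

```python
from collections import deque

def holonomic_heuristic(grid, goal):
    """Shortest 2D grid distance to `goal` for every free cell,
    ignoring the vehicle's heading and direction of motion.

    grid: 2D list, 0 = free, 1 = obstacle
    goal: (gx, gy) cell indices
    Returns a 2D list of distances (inf where unreachable).
    """
    rows, cols = len(grid), len(grid[0])
    dist = [[float('inf')] * cols for _ in range(rows)]
    gx, gy = goal
    dist[gx][gy] = 0
    queue = deque([(gx, gy)])
    while queue:
        x, y = queue.popleft()
        # Expand to 4-connected neighbors at unit cost per step.
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < rows and 0 <= ny < cols
                    and grid[nx][ny] == 0
                    and dist[nx][ny] == float('inf')):
                dist[nx][ny] = dist[x][y] + 1
                queue.append((nx, ny))
    return dist
```

Because this relaxes the vehicle's kinematic constraints, the returned distances never overestimate the true cost, so the heuristic stays admissible.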

Hybrid A* Pseudocode Example

  • State(x, y, theta, g, f): the current vehicle state (x, y position, heading, and current g and f values)
  • grid: the 2D search space; cells take values 0 or 1, where 1 marks an obstacle and 0 a drivable cell
  • SPEED: vehicle speed
  • LENGTH: distance between the front and rear axles (the wheelbase)
  • NUM_THETA_CELLS: the number of cells the heading circle is divided into; used to keep track of which states have already been visited
def expand(state, goal):
    next_states = []
    for delta in range(-35, 40, 5): 
        # Create a trajectory with delta as the steering angle using 
        # the bicycle model:

        # ---Begin bicycle model---
        delta_rad = deg_to_rad(delta)
        omega = SPEED/LENGTH * tan(delta_rad)
        next_x = state.x + SPEED * cos(state.theta)
        next_y = state.y + SPEED * sin(state.theta)
        next_theta = normalize(state.theta + omega)
        # ---End bicycle model-----

        next_g = state.g + 1
        next_f = next_g + heuristic(next_x, next_y, goal)

        # Create a new State object with all of the "next" values.
        next_state = State(next_x, next_y, next_theta, next_g, next_f)
        next_states.append(next_state)

    return next_states

def search(grid, start, goal):
    # The opened array keeps track of the stack of States objects we are 
    # searching through.
    opened = []
    # 3D array of zeros with dimensions:
    # (NUM_THETA_CELLS, grid x size, grid y size).
    closed = [[[0 for x in range(len(grid[0]))] for y in range(len(grid))] 
        for cell in range(NUM_THETA_CELLS)]
    # 3D array with same dimensions. Will be filled with State() objects 
    # to keep track of the path through the grid. 
    came_from = [[[0 for x in range(len(grid[0]))] for y in range(len(grid))] 
        for cell in range(NUM_THETA_CELLS)]

    # Create new state object to start the search with.
    x = start.x
    y = start.y
    theta = start.theta
    g = 0
    f = heuristic(start.x, start.y, goal)
    state = State(x, y, theta, 0, f)
    opened.append(state)

    # The range from 0 to 2pi has been discretized into NUM_THETA_CELLS cells. 
    # Here, theta_to_stack_number returns the cell that theta belongs to. 
    # Smaller thetas (close to 0 when normalized  into the range from 0 to 
    # 2pi) have lower stack numbers, and larger thetas (close to 2pi when 
    # normalized) have larger stack numbers.
    stack_num = theta_to_stack_number(state.theta)
    closed[stack_num][idx(state.x)][idx(state.y)] = 1

    # Store our starting state. For other states, we will store the previous 
    # state in the path, but the starting state has no previous.
    came_from[stack_num][idx(state.x)][idx(state.y)] = state

    # While there are still states to explore:
    while opened:
        # Sort the states by f-value and start search using the state with the 
        # lowest f-value. This is crucial to the A* algorithm; the f-value 
        # improves search efficiency by indicating where to look first.
        opened.sort(key=lambda state:state.f)
        current = opened.pop(0)

        # Check if the x and y coordinates are in the same grid cell 
        # as the goal. (Note: The idx function returns the grid index for 
        # a given coordinate.)
        if (idx(current.x) == goal[0]) and (idx(current.y) == goal[1]):
            # If so, the trajectory has reached the goal. The path can be
            # recovered by walking the came_from pointers back from current.
            return current

        # Otherwise, expand the current state to get a list of possible 
        # next states.
        next_states = expand(current, goal)
        for next_s in next_states:
            # If we have expanded outside the grid, skip this next_s.
            if not (0 <= idx(next_s.x) < len(grid) and
                    0 <= idx(next_s.y) < len(grid[0])):
                continue
            # Otherwise, check that we haven't already visited this cell and
            # that there is not an obstacle in the grid there.
            stack_num = theta_to_stack_number(next_s.theta)
            if closed[stack_num][idx(next_s.x)][idx(next_s.y)] == 0 \
                    and grid[idx(next_s.x)][idx(next_s.y)] == 0:
                # The state can be added to the opened stack.
                opened.append(next_s)
                # The stack_number, idx(next_s.x), idx(next_s.y) tuple 
                # has now been visited, so it can be closed.
                closed[stack_num][idx(next_s.x)][idx(next_s.y)] = 1
                # The next_s came from the current state, and is recorded.
                came_from[stack_num][idx(next_s.x)][idx(next_s.y)] = current
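The helper functions theta_to_stack_number and idx are left undefined in the pseudocode above. A minimal sketch, assuming headings are normalized into [0, 2π) and grid cells are unit-sized (the NUM_THETA_CELLS value here is illustrative):

```python
from math import pi, floor

NUM_THETA_CELLS = 90  # illustrative value; the pseudocode leaves it unspecified

def theta_to_stack_number(theta):
    """Map a heading in [0, 2*pi) to one of NUM_THETA_CELLS bins.
    Smaller thetas get lower stack numbers, larger ones higher."""
    fraction = (theta % (2 * pi)) / (2 * pi)
    return min(int(fraction * NUM_THETA_CELLS), NUM_THETA_CELLS - 1)

def idx(coordinate):
    """Return the grid index for a continuous coordinate,
    assuming unit-sized grid cells."""
    return int(floor(coordinate))
```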