Reinforcement Learning (1): The Grid World Example

An example of an MDP.

1. Grid World

Reference: https://www.jianshu.com/p/b392405115bb

Grid World

Rules: every cell of the grid corresponds to a state of the environment. In each cell there are 4 possible actions: north, south, east, and west, each of which deterministically moves the agent one cell in the corresponding direction. An action that would take the agent off the grid leaves its position unchanged and yields a reward of -1. All other actions yield a reward of 0, except those taken in the special states A and B: in state A, every action yields a reward of +10 and moves the agent to A'; in state B, every action yields a reward of +5 and moves the agent to B'.
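Both solution methods in this post compute a state-value function, i.e. the expected discounted return when starting from state s (with discount factor \gamma = 0.9 below):

v_\pi(s) = \mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t+1} \,\middle|\, s_0 = s\right]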

1.1 Solution procedure

1. Import modules and initialize data
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.table import Table

WORLD_SIZE = 5       # the grid is 5x5
A_POS = [0, 1]       # position of A
A_PRIME_POS = [4, 1] # position of A'
B_POS = [0, 3]       # position of B
B_PRIME_POS = [2, 3] # position of B'
discount = 0.9       # discount factor
world = np.zeros((WORLD_SIZE, WORLD_SIZE))  # the grid world starts as a 5x5 all-zero matrix
2. Action space: in every cell the four actions (left, up, right, down) have equal probability
# left, up, right, down
actions = ['L', 'U', 'R', 'D']
actionProb = []    # a 5x5 list whose cells each hold a dict of the four action probabilities
for i in range(0, WORLD_SIZE):
    actionProb.append([])
    for j in range(0, WORLD_SIZE): # loop over all 25 cells
        actionProb[i].append(dict({'L':0.25, 'U':0.25, 'R':0.25, 'D':0.25}))
3. Environment (state space): determine, for each cell and each action, the next state and the reward
nextState = []      # next state for every position and action
actionReward = []   # reward function, defined according to the rules
for i in range(0, WORLD_SIZE):
    nextState.append([])
    actionReward.append([])
    for j in range(0, WORLD_SIZE):
        next = dict()
        reward = dict()
        if i == 0: # top row: moving up from [0, j] keeps the agent at [0, j]
            next['U'] = [i, j]
            reward['U'] = -1.0
        else:     # not the top row: moving up goes one cell up
            next['U'] = [i - 1, j]
            reward['U'] = 0.0
        if i == WORLD_SIZE - 1:  # bottom row: moving down keeps the state unchanged
            next['D'] = [i, j]
            reward['D'] = -1.0
        else:                   # not the bottom row: moving down goes one cell down
            next['D'] = [i + 1, j]
            reward['D'] = 0.0
        if j == 0: # leftmost column: moving left keeps the state unchanged
            next['L'] = [i, j]
            reward['L'] = -1.0
        else:     # not the leftmost column: moving left goes one cell left
            next['L'] = [i, j - 1]
            reward['L'] = 0.0
        if j == WORLD_SIZE - 1:  # rightmost column: moving right keeps the state unchanged
            next['R'] = [i, j]  
            reward['R'] = -1.0
        else:                    # not the rightmost column: moving right goes one cell right
            next['R'] = [i, j + 1]
            reward['R'] = 0.0
        if [i, j] == A_POS:      # at A, every action leads to A'
            next['L'] = next['R'] = next['D'] = next['U'] = A_PRIME_POS
            reward['L'] = reward['R'] = reward['D'] = reward['U'] = 10.0
        if [i, j] == B_POS:     # at B, every action leads to B'
            next['L'] = next['R'] = next['D'] = next['U'] = B_PRIME_POS
            reward['L'] = reward['R'] = reward['D'] = reward['U'] = 5.0

        nextState[i].append(next)
        actionReward[i].append(reward)
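As an optional sanity check (not part of the original post), printing the table entries for A and B should show that every action leads to A' or B' with the corresponding reward:

print(nextState[0][1], actionReward[0][1])  # at A = [0, 1]: all four actions lead to [4, 1] (A') with reward 10.0
print(nextState[0][3], actionReward[0][3])  # at B = [0, 3]: all four actions lead to [2, 3] (B') with reward 5.0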
4. Evaluate the random policy, i.e. in every cell the four actions are chosen uniformly at random and no improvement step is taken
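Each sweep of the loop below backs up the Bellman expectation equation for this policy, using the previous value table world on the right-hand side (the transitions are deterministic, so s' is the single next state stored in nextState):

v_{k+1}(s) = \sum_{a} \pi(a \mid s)\,\bigl[r(s,a) + \gamma\, v_k(s')\bigr], \qquad \pi(a \mid s) = 0.25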
while True:
    # keep iterating until convergence
    newWorld = np.zeros((WORLD_SIZE, WORLD_SIZE))
    for i in range(0, WORLD_SIZE):
        for j in range(0, WORLD_SIZE):
            # apply each of the actions ['L', 'U', 'R', 'D'] to cell (i, j)
            for action in actions:
                newPosition = nextState[i][j][action]
                # Bellman expectation equation:
                # value of (i, j) += probability of the action at (i, j) * [reward of ((i, j), action) + discount * value of the next state]
                newWorld[i, j] += actionProb[i][j][action] * (actionReward[i][j][action] + discount * world[newPosition[0], newPosition[1]])
    if np.sum(np.abs(world - newWorld)) < 1e-4: # the new and old value tables have converged
        print('Random Policy')
        draw_image(np.round(newWorld, decimals=2)) # call draw_image (defined in step 6) to draw the value table
        plt.savefig('random_policy.png')           # optionally save the figure
        break                                      # exit the loop
    world = newWorld  

Wrapping it up as a function.
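The wrapped functions below (figure_3_2 here, and figure_3_5, draw_policy, and figure_3_2_linear_system later) rely on the constants ACTIONS, ACTION_PROB, DISCOUNT, ACTIONS_FIGS and a step() helper that are never defined in this post. The following is only a minimal sketch, assuming names and behavior from how they are called here and from the reference implementation linked above; treat the details as assumptions rather than the original code:

import numpy as np

WORLD_SIZE = 5
A_POS = [0, 1]
A_PRIME_POS = [4, 1]
B_POS = [0, 3]
B_PRIME_POS = [2, 3]
DISCOUNT = 0.9
ACTION_PROB = 0.25  # equiprobable policy: each of the four actions has probability 1/4

# displacement vectors in (row, column) order: left, up, right, down
ACTIONS = [np.array([0, -1]), np.array([-1, 0]), np.array([0, 1]), np.array([1, 0])]
ACTIONS_FIGS = ['←', '↑', '→', '↓']  # arrow symbols used by draw_policy, in the same order as ACTIONS

def step(state, action):
    # return (next_state, reward) for taking `action` in `state`
    if state == A_POS:
        return A_PRIME_POS, 10
    if state == B_POS:
        return B_PRIME_POS, 5
    next_state = (np.array(state) + action).tolist()
    x, y = next_state
    if x < 0 or x >= WORLD_SIZE or y < 0 or y >= WORLD_SIZE:
        return state, -1.0  # the move would leave the grid: stay put, reward -1
    return next_state, 0.0

With these definitions in place, the random-policy evaluation wrapped as a function reads: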

def figure_3_2():
    # evaluate the random policy and plot its value function
    value = np.zeros((WORLD_SIZE, WORLD_SIZE))
    while True:
        # keep iterating until convergence
        new_value = np.zeros_like(value)
        # np.zeros_like(x) builds an all-zero array with the same shape as x
        for i in range(WORLD_SIZE):
            for j in range(WORLD_SIZE):
                for action in ACTIONS:  # next state and reward for the current (state, action) pair
                    (next_i, next_j), reward = step([i, j], action)
                    # Bellman expectation equation for the equiprobable random policy
                    new_value[i, j] += ACTION_PROB * (reward + DISCOUNT * value[next_i, next_j]) # ACTION_PROB = 0.25
        if np.sum(np.abs(value - new_value)) < 1e-4:
            draw_image(np.round(new_value, decimals=2))
            plt.savefig('./images/figure_3_2.png')
            plt.close()
            break
        value = new_value
5. Value iteration: in every sweep each state takes the maximum backed-up value over the four actions; iterate until convergence
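Each sweep now applies the Bellman optimality backup, which replaces the expectation over actions with a maximum:

v_{k+1}(s) = \max_{a}\,\bigl[r(s,a) + \gamma\, v_k(s')\bigr]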
world = np.zeros((WORLD_SIZE, WORLD_SIZE))
while True:
    # keep iterating until convergence
    newWorld = np.zeros((WORLD_SIZE, WORLD_SIZE))
    for i in range(0, WORLD_SIZE):
        for j in range(0, WORLD_SIZE):
            values = []
            # for (i, j), choose the action with the maximum backed-up value; the next state is the one reached by that action
            for action in actions:
                newPosition = nextState[i][j][action]
                # value iteration backup
                values.append(actionReward[i][j][action] + discount * world[newPosition[0], newPosition[1]])
            newWorld[i][j] = np.max(values)
    if np.sum(np.abs(world - newWorld)) < 1e-4: # the new and old value tables have converged
        print('Optimal Policy')                   # print "Optimal Policy"
        draw_image(np.round(newWorld, decimals=2))# call draw_image() to draw the value table
        '''
        np.round(data, decimals=number of decimal places to keep)
        Values exactly halfway between two candidates are rounded to the nearest even value (banker's rounding).
        '''
        # plt.savefig('optimal_policy.png')
        break
    world = newWorld

Wrapped as a function:

def figure_3_5():
    value = np.zeros((WORLD_SIZE, WORLD_SIZE))
    while True:
        # keep iterating until convergence
        new_value = np.zeros_like(value)
        for i in range(WORLD_SIZE):
            for j in range(WORLD_SIZE):
                values = []
                for action in ACTIONS:
                    (next_i, next_j), reward = step([i, j], action)
                    # value iteration
                    values.append(reward + DISCOUNT * value[next_i, next_j])
                new_value[i, j] = np.max(values)
        if np.sum(np.abs(new_value - value)) < 1e-4:
            draw_image(np.round(new_value, decimals=2))
            plt.savefig('./images/figure_3_5.png')
            plt.close()
            draw_policy(new_value)
            plt.savefig('./images/figure_3_5_policy.png')
            plt.close()
            break
        value = new_value
6. Draw the value-function table for a given policy
def draw_image(image):
    fig, ax = plt.subplots() # create the figure and axes
    ax.set_axis_off()        # hide the axes
    tb = Table(ax, bbox=[0, 0, 1, 1])
    '''
    matplotlib.table.Table(ax, loc=None, bbox=None, **kwargs)
    A table made up of cells indexed by (row, column), running from (0, 0) at the top left
    to (num_rows - 1, num_cols - 1).
    bbox: the bounding box to draw the table into; if not None, it overrides loc.
    '''

    nrows, ncols = image.shape # (5, 5) in this example
    width, height = 1.0 / ncols, 1.0 / nrows

    # Add cells
    # np.ndenumerate() works like enumerate but iterates over multi-dimensional arrays, yielding ((i, j), value)
    for (i, j), val in np.ndenumerate(image):
        
        # add state labels
        if [i, j] == A_POS:
            val = str(val) + " (A)"
        if [i, j] == A_PRIME_POS:
            val = str(val) + " (A')"
        if [i, j] == B_POS:
            val = str(val) + " (B)"
        if [i, j] == B_PRIME_POS:
            val = str(val) + " (B')"
        # add each cell with (row, column, width, height, text, location, face color)
        tb.add_cell(i, j, width, height, text=val,
                    loc='center', facecolor='white')
        

    # Row and column labels...
    for i in range(len(image)): # len(image) = 5 in this example
        tb.add_cell(i, -1, width, height, text=i+1, loc='right',
                    edgecolor='none', facecolor='none')
        tb.add_cell(-1, i, width, height/2, text=i+1, loc='center',
                    edgecolor='none', facecolor='none')

    ax.add_table(tb)
7. Draw the optimal policy as arrow symbols
def draw_policy(optimal_values):
    # draw the greedy policy as arrows in the corresponding cells
    fig, ax = plt.subplots()
    ax.set_axis_off()
    tb = Table(ax, bbox=[0, 0, 1, 1])

    nrows, ncols = optimal_values.shape
    width, height = 1.0 / ncols, 1.0 / nrows

    # Add cells: find the best action(s) in each cell, build the arrow label, and create the cell
    for (i, j), val in np.ndenumerate(optimal_values):
        next_vals=[]
        for action in ACTIONS:
            next_state, _ = step([i, j], action) # next state reached from [i, j] by this action
            next_vals.append(optimal_values[next_state[0],next_state[1]]) # store the value of that next state

        best_actions=np.where(next_vals == np.max(next_vals))[0] # indices of the actions whose next-state value is maximal
        # np.where(next_vals == np.max(next_vals)) returns the indices of the maximum; note the maximum can be attained by several actions
        val=''
        for ba in best_actions:
            val+=ACTIONS_FIGS[ba]
        
        # add state labels
        if [i, j] == A_POS:
            val = str(val) + " (A)"
        if [i, j] == A_PRIME_POS:
            val = str(val) + " (A')"
        if [i, j] == B_POS:
            val = str(val) + " (B)"
        if [i, j] == B_PRIME_POS:
            val = str(val) + " (B')"
        
        tb.add_cell(i, j, width, height, text=val,
                loc='center', facecolor='white')
    
    # Row and column labels
    for i in range(len(optimal_values)):
        tb.add_cell(i, -1, width, height, text=i+1, loc='right',
                    edgecolor='none', facecolor='none')
        tb.add_cell(-1, i, width, height/2, text=i+1, loc='center',
                   edgecolor='none', facecolor='none')

    ax.add_table(tb)
8. Solve for the exact value function with a linear system
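For the equiprobable policy the Bellman equation is linear in the 25 unknown state values, so it can be solved exactly. Rearranging it into the form A x = b that the code below builds:

-v(s) + \gamma \sum_{a} \pi(a \mid s)\, v(s') = -\sum_{a} \pi(a \mid s)\, r(s,a) \qquad \text{for every state } s

which is why A starts as minus the identity, each successor state's column gains + ACTION_PROB * DISCOUNT, and b accumulates -ACTION_PROB * r.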
def figure_3_2_linear_system():
    '''
    Here we solve the linear system of equations to find the exact solution.
    We do this by filling in, for each state, the coefficients of its equation and the corresponding right-hand-side constant.
    '''
    A = -1 * np.eye(WORLD_SIZE * WORLD_SIZE) # minus the 25x25 identity matrix
    b = np.zeros(WORLD_SIZE * WORLD_SIZE)    # all-zero vector of length 25
    for i in range(WORLD_SIZE):
        for j in range(WORLD_SIZE):
            s = [i, j]  # current state
            index_s = np.ravel_multi_index(s, (WORLD_SIZE, WORLD_SIZE)) # convert the 2-D index (i, j) of the (5, 5) grid into a flat 1-D index
            for a in ACTIONS:
                s_, r = step(s, a)
                index_s_ = np.ravel_multi_index(s_, (WORLD_SIZE, WORLD_SIZE)) # likewise, the flat index of the next state

                A[index_s, index_s_] += ACTION_PROB * DISCOUNT  # ACTION_PROB = 0.25, DISCOUNT = 0.9; coefficient of v(s') in the equation for s
                b[index_s] -= ACTION_PROB * r # r is the immediate reward of (state, action)

    x = np.linalg.solve(A, b) # solve the linear system A x = b
    draw_image(np.round(x.reshape(WORLD_SIZE, WORLD_SIZE), decimals=2))
    plt.savefig('./images/figure_3_2_linear_system.png')
    plt.close()
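A possible driver to reproduce all the figures (a sketch: it assumes an ./images directory already exists, since the wrapped functions save their plots there):

if __name__ == '__main__':
    figure_3_2_linear_system()
    figure_3_2()
    figure_3_5()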