Reinforcement Learning in Practice: Solving the Cliff Walking Problem with Q-learning

Introduction to Q-learning

  • Like Sarsa, Q-learning stores Q-values (state-action values) in a Q-table, and its decision part is the same as Sarsa's: an ε-greedy policy adds exploration.
  • Where Q-learning differs from Sarsa is in how the Q-table is updated.
  • Sarsa is an on-policy method: the next action is actually chosen (by the same ε-greedy policy) before the update, and that action is used in the TD target.
  • Q-learning is an off-policy method: learn() does not need the next action actually taken (next_action); it assumes the next action is the one with the maximum Q-value. A minimal sketch of the two TD targets follows this list.
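
The difference is easiest to see in code. The snippet below is a minimal sketch, not part of the program further down; the Q-table shape, gamma, and the variables reward, next_obs and next_act are made-up placeholders.

import numpy as np

Q = np.zeros((48, 4))                     # hypothetical Q-table: 48 states, 4 actions
gamma = 0.9                               # hypothetical discount factor
reward, next_obs, next_act = -1.0, 25, 1  # hypothetical transition data

# Sarsa (on-policy): bootstrap from the action that will actually be taken next
sarsa_target = reward + gamma * Q[next_obs, next_act]

# Q-learning (off-policy): bootstrap from the greedy action, regardless of which
# action the behavior policy actually takes next
qlearning_target = reward + gamma * np.max(Q[next_obs])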

The Q-learning Update Rule

Q(S_t, A_t) ← Q(S_t, A_t) + α [ R_{t+1} + γ max_a Q(S_{t+1}, a) − Q(S_t, A_t) ]

where α is the learning rate and γ is the discount factor. The TD target uses the maximum Q-value over all actions in the next state, not the Q-value of the action that will actually be taken there.
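
As a quick numeric check with made-up numbers (α = 0.1, γ = 0.9, a step reward of -1, Q(s, a) = -3.0, max_a' Q(s', a') = -5.0):

alpha, gamma = 0.1, 0.9
q_sa, reward, max_q_next = -3.0, -1.0, -5.0
target = reward + gamma * max_q_next    # -1 + 0.9 * (-5.0) = -5.5
q_sa = q_sa + alpha * (target - q_sa)   # -3.0 + 0.1 * (-2.5) = -3.25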

The Cliff Walking Problem

Find the shortest path around the cliff to the goal (i.e. reach it as quickly as possible). Every step gives a reward of -1, falling into the cliff gives -100 (and the agent is dragged back to the start), and the episode ends when the goal is reached, as shown in the figure below.
(Figure: the CliffWalking grid; the start is at the bottom-left corner, the goal at the bottom-right, and the cells between them form the cliff.)
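
A quick way to get a feel for the environment is to inspect it directly. The sketch below assumes the classic gym API (the same one the program below uses: reset() returns an integer observation and step() returns a 4-tuple); newer gymnasium versions differ slightly.

import gym

env = gym.make("CliffWalking-v0")
print(env.observation_space)  # Discrete(48): a 4 x 12 grid, one integer state per cell
print(env.action_space)       # Discrete(4): 0 up, 1 right, 2 down, 3 left
obs = env.reset()
print(obs)                    # 36: the start cell in the bottom-left corner
env.render()                  # text rendering of the grid; the bottom row between start and goal is the cliff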

Source Code

# Step 1: import dependencies
import gym
import numpy as np
import time
import matplotlib.pyplot as plt


# Step 2: define the agent
class QLearningAgent(object):
    def __init__(self, obs_n, act_n, lr, gamma, epsilon):
        self.obs_n = obs_n      # number of states
        self.act_n = act_n      # number of actions
        self.lr = lr            # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability for the epsilon-greedy policy
        self.Q_table = np.zeros((obs_n, act_n))

    def sample(self, obs):
        """
        Sample an action for the given observation, with exploration (epsilon-greedy).
        :param obs: current state
        :return: the action to take
        """
        if np.random.uniform(0, 1) < (1.0 - self.epsilon):  # exploit: pick the action with the highest Q-value
            action = self.predict(obs)
        else:
            action = np.random.choice(self.act_n)  # explore: pick a random action with probability epsilon
        return action

    def predict(self, obs):
        """
        Predict the greedy action for the given observation (no exploration).
        :param obs: current state
        :return: the predicted action
        """
        Q_list = self.Q_table[obs, :]
        maxQ = np.max(Q_list)
        action_list = np.where(Q_list == maxQ)[0]  # maxQ may be attained by several actions
        action = np.random.choice(action_list)     # break ties randomly
        return action

    def learn(self, obs, act, reward, next_obs, done):
        """
        Off-policy Q-learning update.
        :param obs: observation before the interaction, s_t
        :param act: action chosen in this interaction, a_t
        :param reward: reward received for this action, r
        :param next_obs: observation after the interaction, s_t+1
        :param done: whether the episode has ended
        :return: None
        """
        predict_Q = self.Q_table[obs, act]
        if done:
            target_Q = reward  # there is no next state
        else:
            target_Q = reward + self.gamma * np.max(self.Q_table[next_obs, :])  # Q-learning: bootstrap from the greedy action
        self.Q_table[obs, act] += self.lr * (target_Q - predict_Q)  # move Q towards the TD target

    # Save the Q-table to a file
    def save(self):
        npy_file = './q_table.npy'
        np.save(npy_file, self.Q_table)
        print(npy_file + ' saved.')

    # Load Q-values from a file into the Q-table
    def restore(self, npy_file='./q_table.npy'):
        self.Q_table = np.load(npy_file)
        print(npy_file + ' loaded.')


# Step 3: training and testing
def train_episode(env, agent, render=False):
    total_reward = 0
    total_steps = 0  # number of steps taken in this episode

    obs = env.reset()
    act = agent.sample(obs)

    while True:
        next_obs, reward, done, _ = env.step(act)  # take one step in the environment
        next_act = agent.sample(next_obs)  # choose the next action with the behavior policy
        # Q-learning update: learn() does not need next_act
        agent.learn(obs, act, reward, next_obs, done)

        act = next_act
        obs = next_obs  # move on to the next observation
        total_reward += reward
        total_steps += 1
        if render:
            env.render()  # render a new frame
        if done:
            break
    return total_reward, total_steps


def test_episode(env, agent):
    total_reward = 0
    total_steps = 0  # number of steps taken in this episode
    obs = env.reset()

    while True:
        action = agent.predict(obs)  # greedy
        next_obs, reward, done, _ = env.step(action)
        total_reward += reward
        total_steps += 1
        obs = next_obs
        # time.sleep(0.5)
        # env.render()
        if done:
            break
    return total_reward, total_steps


# Step 4: create the environment and the agent, then start training

# Create the cliff walking environment with gym
env = gym.make("CliffWalking-v0")  # 0 up, 1 right, 2 down, 3 left

# Create an agent instance with the hyper-parameters
agent = QLearningAgent(
    obs_n=env.observation_space.n,
    act_n=env.action_space.n,
    lr=0.1,
    gamma=0.9,
    epsilon=0.1
)

# Train for 1000 episodes; print steps and reward every 50 episodes
total_reward_list = []
for episode in range(1000):
    ep_reward, ep_steps = train_episode(env, agent, False)
    total_reward_list.append(ep_reward)
    if episode % 50 == 0:
        print('Episode %s: steps = %s , reward = %.1f' % (episode, ep_steps, ep_reward))

print("Train end.")


def show_reward(total_reward):
    N = len(total_reward)
    x = np.arange(N)  # one point per episode
    plt.plot(x, total_reward, 'b-', lw=1, ms=5)
    plt.xlabel('episode')
    plt.ylabel('episode reward')
    plt.show()


show_reward(total_reward_list)

# Training finished: evaluate the greedy policy
test_reward, test_steps = test_episode(env, agent)
print('test steps = %.1f , reward = %.1f' % (test_steps, test_reward))
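
As an optional check, the learned greedy policy can be read directly off the Q-table. The helper below is not part of the original program; it is a small sketch that assumes the standard 4 x 12 CliffWalking layout and the action order 0 up, 1 right, 2 down, 3 left, and prints one arrow per grid cell.

def print_policy(Q_table, rows=4, cols=12):
    arrows = ['^', '>', 'v', '<']  # 0 up, 1 right, 2 down, 3 left
    for r in range(rows):
        line = ''
        for c in range(cols):
            s = r * cols + c                        # state index of cell (r, c)
            line += arrows[int(np.argmax(Q_table[s]))]
        print(line)

print_policy(agent.Q_table)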

Experimental Results

The episode rewards over 1000 training episodes are shown in the figure below:
(Figure: episode reward curve over 1000 training episodes.)
Steps and rewards during training and testing:

Episode 0: steps = 765 , reward = -1458.0
Episode 50: steps = 33 , reward = -33.0
Episode 100: steps = 17 , reward = -17.0
Episode 150: steps = 31 , reward = -130.0
Episode 200: steps = 21 , reward = -120.0
Episode 250: steps = 26 , reward = -26.0
Episode 300: steps = 20 , reward = -20.0
Episode 350: steps = 24 , reward = -123.0
Episode 400: steps = 32 , reward = -32.0
Episode 450: steps = 22 , reward = -22.0
Episode 500: steps = 23 , reward = -23.0
Episode 550: steps = 31 , reward = -31.0
Episode 600: steps = 27 , reward = -126.0
Episode 650: steps = 17 , reward = -17.0
Episode 700: steps = 34 , reward = -34.0
Episode 750: steps = 24 , reward = -24.0
Episode 800: steps = 29 , reward = -29.0
Episode 850: steps = 19 , reward = -19.0
Episode 900: steps = 26 , reward = -26.0
Episode 950: steps = 21 , reward = -21.0
Train end.
test steps = 19.0 , reward = -19.0

Process finished with exit code 0