Reinforcement Learning MountainCar (Q-learning) Code Walkthrough

I have just started learning reinforcement learning. On Bilibili I came across a simple Q-learning implementation of the MountainCar task by an uploader; my study notes on the code are below, with the source listed at the end.

The uploader's own explanation is more detailed; these notes are mostly for my own reference ^^

import gym
import numpy as np

env = gym.make("MountainCar-v0")

# Q-Learning settings
LEARNING_RATE = 0.1
DISCOUNT = 0.95
EPISODES = 25000

SHOW_EVERY = 1000

^ Use the MountainCar-v0 environment from gym.

Set the learning rate, the discount factor, the number of episodes (one episode is one attempt at climbing the hill, which ends either when the car reaches the goal or when the step limit is hit), and the display frequency (render once every 1000 episodes).
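As a quick illustration of what DISCOUNT does (my own sketch, not part of the original code): a reward received k steps in the future is weighted by DISCOUNT ** k, so with 0.95 the agent effectively looks a few dozen steps ahead.

# Illustration only: how strongly future rewards are weighted under DISCOUNT = 0.95
for k in (1, 10, 50, 100):
    print(k, round(0.95 ** k, 3))   # 0.95, 0.599, 0.077, 0.006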

# Exploration settings
epsilon = 1  # not a constant, going to be decayed
START_EPSILON_DECAYING = 1
END_EPSILON_DECAYING = EPISODES//2
epsilon_decay_value = epsilon/(END_EPSILON_DECAYING - START_EPSILON_DECAYING)

^ epsilon controls how random the action selection is (it is used further below).

epsilon_decay_value is the amount by which epsilon shrinks in each outer-loop iteration (a constant; you can change it yourself).
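A minimal sketch (illustration only, using the constants above) of how fast this linear schedule drives epsilon down: it reaches roughly 0 around the halfway point of training.

# Illustration only: epsilon shrinks by a fixed amount per episode until
# END_EPSILON_DECAYING, i.e. until roughly the halfway point of training.
decay_per_episode = 1 / (25000 // 2 - 1)       # same value as epsilon_decay_value
print(round(decay_per_episode, 6))             # ~8e-05
print(round(1 - 6250 * decay_per_episode, 3))  # ~0.5 after a quarter of the episodes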

DISCRETE_OS_SIZE = [20, 20]
discrete_os_win_size = (env.observation_space.high - env.observation_space.low)/DISCRETE_OS_SIZE

def get_discrete_state(state):
    discrete_state = (state - env.observation_space.low)/discrete_os_win_size
    return tuple(discrete_state.astype(np.int64))  # we use this tuple to look up the 3 Q values for the available actions in the q-table

^ Discretize the continuous state: each of the two observation dimensions (car position and velocity) is split into 20 bins.
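As a quick check (illustration only; the exact bin indices depend on the bounds gym reports for the observation space), a raw observation near the bottom of the valley maps to a pair of bin indices like this:

# Illustration only: a raw [position, velocity] observation becomes a pair of
# bin indices that can be used to index the Q table.
example_state = np.array([-0.5, 0.01])     # roughly the start of an episode
print(get_discrete_state(example_state))   # e.g. (7, 11) with the default MountainCar-v0 bounds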

q_table = np.random.uniform(low=-2, high=0, size=(DISCRETE_OS_SIZE + [env.action_space.n]))

for episode in range(EPISODES):
    state = env.reset()
    discrete_state = get_discrete_state(state)

    if episode % SHOW_EVERY == 0:
        render = True
        print(episode)
    else:
        render = False

    done = False

^ Initialize the Q table.

At the start of each episode, reset the environment and convert the state to its discrete form.

Render once every SHOW_EVERY episodes.
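A short sketch (illustration only) of what the table looks like: MountainCar-v0 has 3 discrete actions (push left, do nothing, push right), so there is one Q value per (position bin, velocity bin, action). Initializing in [-2, 0) roughly matches the reward scale, since every step yields a reward of -1 until the flag is reached.

# Illustration only: inspect the freshly initialized table.
print(q_table.shape)    # (20, 20, 3): 20 position bins x 20 velocity bins x 3 actions
print(q_table[7, 10])   # three random negative Q values for one discrete state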

    while not done:
        if np.random.random() > epsilon:
            # Get action from Q table
            action = np.argmax(q_table[discrete_state])
        else:
            # Get random action
            action = np.random.randint(0, env.action_space.n)

        new_state, reward, done, _ = env.step(action)
        new_discrete_state = get_discrete_state(new_state)

        # If simulation did not end yet after last step - update Q table 

^ Within an episode:

Draw a random number and compare it with epsilon to control how much exploration happens. At the start epsilon = 1, so actions are generated completely at random; as epsilon gradually shrinks, the agent's action choices rely more and more on the Q table.
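The same epsilon-greedy choice written as a small standalone helper (my own sketch, not part of the original post; q_row would be q_table[discrete_state]):

# Illustration only: epsilon-greedy action selection as a helper function.
def choose_action(q_row, eps, n_actions):
    """With probability 1 - eps pick the greedy action, otherwise a random one."""
    if np.random.random() > eps:
        return int(np.argmax(q_row))
    return np.random.randint(0, n_actions)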

        if not done:

            # Maximum possible Q value in next step (for new state)
            max_future_q = np.max(q_table[new_discrete_state])

            # Current Q value (for current state and performed action)
            current_q = q_table[discrete_state + (action,)]

            # And here's our equation for a new Q value for current state and action
            new_q = (1 - LEARNING_RATE) * current_q + LEARNING_RATE * (reward + DISCOUNT * max_future_q)

            # Update Q table with new Q value
            q_table[discrete_state + (action,)] = new_q

        # Simulation ended (for any reason) - if goal position is achieved - update Q value with reward directly
        elif new_state[0] >= env.goal_position:
            q_table[discrete_state + (action,)] = 0
            print("we made it on episode {}".format(episode))

^ If the episode has not ended yet (done is still False), update the Q table according to the formula (α is LEARNING_RATE, γ is DISCOUNT):

Q(s, a) ← (1 - α) · Q(s, a) + α · (r + γ · max_a' Q(s', a'))

In words:

new Q value = old Q value + learning rate × (immediate reward + discount factor × maximum Q value of the successor state - old Q value)

The two forms are algebraically the same; the code implements the first one in the new_q line.

If the car reaches the goal, the Q value of the current state and action is set directly to 0: every ordinary step yields a reward of -1 and the table was initialized with negative values, so 0 plays the role of the terminal reward for reaching the flag, and there is no successor state left to bootstrap from.
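A tiny worked example of the update with made-up numbers (illustration only):

# Illustration only: one update step with LEARNING_RATE = 0.1 and DISCOUNT = 0.95.
current_q, max_future_q, reward = -1.0, -0.8, -1
new_q = (1 - 0.1) * current_q + 0.1 * (reward + 0.95 * max_future_q)
print(round(new_q, 3))   # -0.9 + 0.1 * (-1.76) = -1.076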

        discrete_state = new_discrete_state

        if render:
            env.render()

    # Decaying is being done every episode if episode number is within decaying range
    if END_EPSILON_DECAYING >= episode >= START_EPSILON_DECAYING:
        epsilon -= epsilon_decay_value

env.close()

^ Assign the new state to the current state.

If this is a display episode, render this attempt.

Decay the epsilon value.

Close the environment.
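The full listing below ends with a commented-out np.save call for persisting the learned table. A minimal sketch of how it could be used (the file name here is just an example I made up):

# Illustration only: save the trained table, then reload it later to act
# greedily without retraining.
np.save("q_table_mountaincar.npy", q_table)
q_table = np.load("q_table_mountaincar.npy")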

Full code:

import gym
import numpy as np

env = gym.make("MountainCar-v0")

# Q-Learning settings
LEARNING_RATE = 0.1
DISCOUNT = 0.95
EPISODES = 25000

SHOW_EVERY = 1000

# Exploration settings
epsilon = 1  # not a constant, going to be decayed
START_EPSILON_DECAYING = 1
END_EPSILON_DECAYING = EPISODES//2
epsilon_decay_value = epsilon/(END_EPSILON_DECAYING - START_EPSILON_DECAYING)

DISCRETE_OS_SIZE = [20, 20]
discrete_os_win_size = (env.observation_space.high - env.observation_space.low)/DISCRETE_OS_SIZE

def get_discrete_state(state):
    discrete_state = (state - env.observation_space.low)/discrete_os_win_size
    return tuple(discrete_state.astype(np.int64))  # we use this tuple to look up the 3 Q values for the available actions in the q-table

q_table = np.random.uniform(low=-2, high=0, size=(DISCRETE_OS_SIZE + [env.action_space.n]))

for episode in range(EPISODES):
    state = env.reset()
    discrete_state = get_discrete_state(state)

    if episode % SHOW_EVERY == 0:
        render = True
        print(episode)
    else:
        render = False

    done = False
    while not done:
        if np.random.random() > epsilon:
            # Get action from Q table
            action = np.argmax(q_table[discrete_state])
        else:
            # Get random action
            action = np.random.randint(0, env.action_space.n)

        new_state, reward, done, _ = env.step(action)
        new_discrete_state = get_discrete_state(new_state)

        # If simulation did not end yet after last step - update Q table
        if not done:

            # Maximum possible Q value in next step (for new state)
            max_future_q = np.max(q_table[new_discrete_state])

            # Current Q value (for current state and performed action)
            current_q = q_table[discrete_state + (action,)]

            # And here's our equation for a new Q value for current state and action
            new_q = (1 - LEARNING_RATE) * current_q + LEARNING_RATE * (reward + DISCOUNT * max_future_q)

            # Update Q table with new Q value
            q_table[discrete_state + (action,)] = new_q

        # Simulation ended (for any reason) - if goal position is achieved - update Q value with reward directly
        elif new_state[0] >= env.goal_position:
            q_table[discrete_state + (action,)] = 0
            print("we made it on episode {}".format(episode))

        discrete_state = new_discrete_state

        if render:
            env.render()

    # Decaying is being done every episode if episode number is within decaying range
    if END_EPSILON_DECAYING >= episode >= START_EPSILON_DECAYING:
        epsilon -= epsilon_decay_value

env.close()
# np.save(path, q_table)  # specify the path yourself

Code author: Leon小草办, https://www.bilibili.com/read/cv17082506/ (source: bilibili)

The author's hands-on video (Bilibili): BV1TZ4y1q728
