Model-based value iteration and policy iteration pseudocode

Note that the symbols used in the pseudocode below have the following meanings:

  • MDP: Markov Decision Process;
  • V(s): Value function, the expected (discounted) return starting from state s;
  • π(s): Policy; for a given state s, π(s) is the action the agent takes in that state according to the policy. A policy can be deterministic or stochastic;
  • R(s,a): Immediate reward when taking action a in state s;
  • P(s'|s,a): Transition probability from state s to state s' under an action a;
  • γ: Discount factor for future rewards.
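
As a concrete illustration, these components can be stored in plain Python dictionaries for a small tabular MDP. The two-state MDP below is a made-up example for illustration only (the state names, actions, rewards, and probabilities are assumptions, not taken from the course):

# A tiny two-state MDP represented with plain dictionaries
states = ["s0", "s1"]
actions = ["stay", "move"]
gamma = 0.9  # discount factor for future rewards

# R[(s, a)]: immediate reward when taking action a in state s
R = {
    ("s0", "stay"): 0.0, ("s0", "move"): 1.0,
    ("s1", "stay"): 2.0, ("s1", "move"): 0.0,
}

# P[(s, a)]: mapping from next state s' to P(s' | s, a)
P = {
    ("s0", "stay"): {"s0": 1.0},
    ("s0", "move"): {"s1": 0.9, "s0": 0.1},
    ("s1", "stay"): {"s1": 1.0},
    ("s1", "move"): {"s0": 0.9, "s1": 0.1},
}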

Value iteration:

function ValueIteration(MDP):
    // MDP is a Markov Decision Process
    V(s) = 0 for all states s  // Initialization

    repeat until convergence (i.e., until delta falls below a small threshold θ):
        delta = 0
        for each state s:
            v = V(s)
            V(s) = max over all actions a of [ R(s, a) + γ * Σ P(s' | s, a) * V(s') ]
            delta = max(delta, |v - V(s)|)

    return V  // Optimal value function

function ExtractOptimalPolicy(MDP, V):
    // MDP is a Markov Decision Process, V is the optimal value function
    for each state s:
        π(s) = argmax over all actions a of [ R(s, a) + γ * Σ P(s' | s, a) * V(s') ]

    return π  // Optimal policy
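
Below is a minimal runnable Python sketch of ValueIteration and ExtractOptimalPolicy, assuming the dictionary-based MDP representation shown above (states, actions, P, R, gamma). The stopping threshold theta is an added assumption, since the pseudocode only says "repeat until convergence".

def value_iteration(states, actions, P, R, gamma, theta=1e-8):
    V = {s: 0.0 for s in states}  # initialization: V(s) = 0 for all states
    while True:
        delta = 0.0
        for s in states:
            v = V[s]
            # Bellman optimality update: max over actions of immediate reward
            # plus discounted expected value of the next state
            V[s] = max(
                R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items())
                for a in actions
            )
            delta = max(delta, abs(v - V[s]))
        if delta < theta:  # converged
            return V

def extract_optimal_policy(states, actions, P, R, gamma, V):
    # Greedy (argmax) policy with respect to the optimal value function V
    return {
        s: max(
            actions,
            key=lambda a: R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items()),
        )
        for s in states
    }

# Usage with the toy MDP above:
# V_star = value_iteration(states, actions, P, R, gamma)
# pi_star = extract_optimal_policy(states, actions, P, R, gamma, V_star)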

Policy iteration:

function PolicyIteration(MDP):
    // MDP is a Markov Decision Process
    Initialize a policy π arbitrarily

    repeat until policy converges:
        // Policy Evaluation
        V = EvaluatePolicy(MDP, π)

        // Policy Improvement
        π' = GreedyPolicyImprovement(MDP, V)

        if π' = π:
            break  // Policy has converged

        π = π'

    return π  // Optimal policy

function EvaluatePolicy(MDP, π):
    // MDP is a Markov Decision Process, π is a policy
    V(s) = 0 for all states s  // Initialization

    repeat until convergence (i.e., until delta falls below a small threshold θ):
        delta = 0
        for each state s:
            v = V(s)
            V(s) = Σ P(s' | s, π(s)) * [ R(s, π(s)) + γ * V(s') ]
            delta = max(delta, |v - V(s)|)

    return V  // Value function under the given policy

function GreedyPolicyImprovement(MDP, V):
    // MDP is a Markov Decision Process, V is a value function
    for each state s:
        π(s) = argmax over all actions a of [ R(s, a) + γ * Σ P(s' | s, a) * V(s') ]

    return π  // Improved policy
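
The same dictionary-based representation allows a minimal Python sketch of policy iteration. As before, the evaluation threshold theta is an added assumption, and greedy_policy_improvement is the same greedy step used for policy extraction above.

def evaluate_policy(states, P, R, gamma, pi, theta=1e-8):
    V = {s: 0.0 for s in states}  # initialization
    while True:
        delta = 0.0
        for s in states:
            v = V[s]
            a = pi[s]
            # Bellman expectation update under the fixed policy pi
            V[s] = R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items())
            delta = max(delta, abs(v - V[s]))
        if delta < theta:  # converged
            return V

def greedy_policy_improvement(states, actions, P, R, gamma, V):
    # Greedy (argmax) policy with respect to the current value function V
    return {
        s: max(
            actions,
            key=lambda a: R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items()),
        )
        for s in states
    }

def policy_iteration(states, actions, P, R, gamma):
    pi = {s: actions[0] for s in states}  # arbitrary initial policy
    while True:
        V = evaluate_policy(states, P, R, gamma, pi)                          # policy evaluation
        new_pi = greedy_policy_improvement(states, actions, P, R, gamma, V)   # policy improvement
        if new_pi == pi:  # policy has converged
            return pi
        pi = new_pi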

The pseudocode above is based on the slides from Shiyu Zhao's course [1].

References:

[1] https://www.bilibili.com/video/BV1sd4y167NS

[2] https://chat.openai.com/
