References
Bolei Zhou, Lecture 2: https://github.com/zhoubolei/introRL/blob/master/lecture2.pdf
David Silver, Planning by Dynamic Programming: https://www.davidsilver.uk/wp-content/uploads/2020/03/DP.pdf
Decision problems in an MDP
There are two kinds: Prediction (evaluating how good a given policy is) and Control (finding the optimal policy).
Policy Evaluation
Apply the Bellman expectation equation iteratively until $v_\pi(s)$ converges.
demo: https://cs.stanford.edu/people/karpathy/reinforcejs/gridworld_dp.html
In the demo, $\gamma = 0.9$, and each sweep applies the Bellman expectation backup:
$$V^\pi(s) = \sum_{a} \pi(s,a) \sum_{s'} \mathcal{P}_{ss'}^{a} \left[ \mathcal{R}_{ss'}^{a} + \gamma V^\pi(s') \right]$$
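As a concrete illustration, here is a minimal Python sketch of iterative policy evaluation. It assumes a hypothetical tabular representation, not fixed by the slides: `P[s][a]` is a list of `(prob, next_state, reward)` triples, and `policy[s][a]` gives $\pi(s,a)$.

```python
def policy_evaluation(P, policy, gamma=0.9, theta=1e-8):
    """Iterative policy evaluation for a small tabular MDP.

    P[s][a]      -- list of (prob, next_state, reward) triples
    policy[s][a] -- the probability pi(s, a) of taking action a in state s
    """
    V = [0.0] * len(P)
    while True:
        delta = 0.0
        for s in range(len(P)):
            # Bellman expectation backup for state s
            v_new = sum(policy[s][a] *
                        sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                        for a in range(len(P[s])))
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:   # values changed less than theta -> converged
            return V
```

Updating `V[s]` in place during the sweep (rather than from a frozen copy) still converges and usually does so faster.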
Policy Improvement
Acting greedily with respect to the current value function, $\pi'(s) = \arg\max_a q_\pi(s,a)$, yields a policy at least as good as $\pi$. A sketch follows below.
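A minimal sketch of greedy improvement under the same hypothetical `P[s][a]` representation as above: compute the one-step lookahead $q(s,a)$ for each action and put all probability on the best one.

```python
def greedy_policy(P, V, gamma=0.9):
    """Deterministic policy that acts greedily with respect to V."""
    policy = []
    for s in range(len(P)):
        # q(s, a) = sum_{s'} P^a_{ss'} [ R^a_{ss'} + gamma * V(s') ]
        q = [sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
             for a in range(len(P[s]))]
        best = q.index(max(q))  # ties broken by lowest action index
        policy.append([1.0 if a == best else 0.0 for a in range(len(P[s]))])
    return policy
```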
Policy Iteration
Alternate between Policy Evaluation and Policy Improvement until the policy no longer changes; the resulting policy is optimal. A sketch follows below.
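Combining the two steps, here is a sketch of policy iteration that reuses the `policy_evaluation` and `greedy_policy` functions defined above; starting from the uniform random policy is an arbitrary choice.

```python
def policy_iteration(P, gamma=0.9):
    """Alternate full evaluation and greedy improvement until stable."""
    n_actions = len(P[0])
    # start from the uniform random policy
    policy = [[1.0 / n_actions] * n_actions for _ in range(len(P))]
    while True:
        V = policy_evaluation(P, policy, gamma)   # prediction step
        improved = greedy_policy(P, V, gamma)     # control step
        if improved == policy:  # greedy policy stopped changing -> optimal
            return policy, V
        policy = improved
```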
Value Iteration
Skip the explicit policy evaluation step and apply the Bellman optimality backup directly on every sweep, keeping only the value of the best action in each state:
$$v(s) \leftarrow \max_a \sum_{s'} \mathcal{P}_{ss'}^{a} \left[ \mathcal{R}_{ss'}^{a} + \gamma v(s') \right]$$
Once the values converge, the optimal policy is read off greedily from them.
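A sketch of value iteration under the same assumed `P[s][a]` representation; unlike policy iteration, no explicit policy appears inside the loop, only the max over actions.

```python
def value_iteration(P, gamma=0.9, theta=1e-8):
    """Repeated Bellman optimality backups; no explicit policy in the loop."""
    V = [0.0] * len(P)
    while True:
        delta = 0.0
        for s in range(len(P)):
            # keep only the value of the best action in each state
            v_new = max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                        for a in range(len(P[s])))
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            return V
```

The converged `V` can then be turned into an optimal policy with the `greedy_policy` sketch above.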