【Example: Coin Toss】

Let $X=+1$ for heads and $X=-1$ for tails.

- Method 1 (model-based):

$$p(X=1)=0.5,\qquad p(X=-1)=0.5$$

$$\mathbb{E}[X]=\sum_x x\,p(x)=1\times 0.5+(-1)\times 0.5=0$$

Problem: in practice it is hard to obtain such exact probabilities.

- Method 2 (model-free): toss the coin many times to collect samples $\{x_1, x_2, \ldots, x_N\}$, then compute their average:

$$\mathbb{E}[X]\approx \bar{x}=\frac{1}{N}\sum_{j=1}^N x_j$$

Problem: the estimate is inaccurate when $N$ is small, but it becomes more accurate as $N$ grows (law of large numbers):

$$\mathbb{E}[\bar{x}]=\mathbb{E}[X],\qquad \operatorname{Var}[\bar{x}]=\frac{1}{N}\operatorname{Var}[X]$$
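The model-free estimate above is easy to simulate. A minimal sketch (the coin model and sample sizes are illustrative, not from the source):

```python
import random

def mc_coin_mean(n, seed=0):
    """Estimate E[X] for a fair coin (X = +1 or -1) by sample averaging."""
    rng = random.Random(seed)
    samples = [rng.choice([1, -1]) for _ in range(n)]
    return sum(samples) / n

# The estimate approaches the true mean 0 as N grows (law of large numbers):
# the standard deviation of the sample mean shrinks like 1/sqrt(N).
small = mc_coin_mean(10)
large = mc_coin_mean(100_000)
print(small, large)
```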
Question 1: Why do we need Monte Carlo methods?
Answer: Because they do not require a model.

Question 2: Why do we focus on the mean?
Answer: Because state values and action values are expectations of random variables.
【MC Basic】
Main question: how to turn policy iteration into a model-free algorithm.
Policy iteration:

$$\left\{\begin{array}{l}\text{Policy evaluation: } v_{\pi_k}=r_{\pi_k}+\gamma P_{\pi_k} v_{\pi_k}\\ \text{Policy improvement: } \pi_{k+1}=\arg\max_\pi\left(r_\pi+\gamma P_\pi v_{\pi_k}\right)\end{array}\right.$$
The policy improvement step, written elementwise, is

$$\begin{aligned}\pi_{k+1}(s)&=\arg\max_\pi \sum_a \pi(a\mid s)\left[\sum_r p(r\mid s,a)\,r+\gamma\sum_{s'}p(s'\mid s,a)\,v_{\pi_k}(s')\right]\\&=\arg\max_\pi \sum_a \pi(a\mid s)\,q_{\pi_k}(s,a),\qquad s\in\mathcal{S}\end{aligned}$$
The key is how to obtain $q_{\pi_k}(s,a)$:

- Expression 1 (requires a model), not used here:

$$q_{\pi_k}(s,a)=\sum_r p(r\mid s,a)\,r+\gamma\sum_{s'}p(s'\mid s,a)\,v_{\pi_k}(s')$$

- Expression 2 (model-free), used here:

$$q_{\pi_k}(s,a)=\mathbb{E}\left[G_t\mid S_t=s, A_t=a\right]$$
✨Monte Carlo estimation of an action value:

- Starting from $(s,a)$, follow policy $\pi_k$ to generate an episode.
- The discounted return of that episode is denoted $g(s,a)$.
- $g(s,a)$ is a sample of $G_t$ in $q_{\pi_k}(s,a)=\mathbb{E}[G_t\mid S_t=s, A_t=a]$.
- Suppose we have a set of samples $\{g^{(j)}(s,a)\}$.
- Then the expectation can be estimated by the sample average: $q_{\pi_k}(s,a)=\mathbb{E}[G_t\mid S_t=s, A_t=a]\approx\frac{1}{N}\sum_{i=1}^N g^{(i)}(s,a)$
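The estimation procedure above can be sketched in a few lines. The environment here is a hypothetical stand-in (every step yields reward +1 for 50 steps), chosen so the result can be checked against the geometric series:

```python
def discounted_return(rewards, gamma):
    """g = r_1 + gamma*r_2 + gamma^2*r_3 + ... (computed backwards, Horner-style)."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def mc_action_value(sample_episode, s, a, gamma, n_episodes=1000):
    """Average the returns of n_episodes episodes starting from (s, a)."""
    returns = [discounted_return(sample_episode(s, a), gamma) for _ in range(n_episodes)]
    return sum(returns) / len(returns)

# Hypothetical environment: from any (s, a), reward is +1 at every step for 50 steps.
def sample_episode(s, a):
    return [1.0] * 50

q = mc_action_value(sample_episode, s=0, a=0, gamma=0.9)
# Here every sample is identical, so q equals the geometric sum (1 - 0.9^50) / 0.1.
print(q)
```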
When there is no model, you need data; when there is no data, you need a model.
✨MC Basic algorithm:

- Given an initial policy $\pi_0$, run iterations $k=0,1,2,\ldots$:
- Policy evaluation: compute $q_{\pi_k}(s,a)$ for all $(s,a)$, using the Monte Carlo estimate described above.
- Policy improvement: solve the optimization problem $\pi_{k+1}(s)=\arg\max_\pi \sum_a \pi(a\mid s)\,q_{\pi_k}(s,a)$.

The steps are the same as in policy iteration; the only difference is how $q_{\pi_k}(s,a)$ is computed.
✨MC Basic pseudocode:

- Initialization: $\pi_0$
- Goal: find an optimal policy
- While not converged, at the $k$-th iteration:
  - For every state $s \in \mathcal{S}$:
    - For every action $a \in \mathcal{A}(s)$: collect sufficiently many episodes starting from $(s,a)$, and use the average of their returns as $q_{\pi_k}(s,a)$.
    - $a_k^*(s)=\arg\max_a q_{\pi_k}(s,a)$
    - $\pi_{k+1}(a\mid s)=1$ if $a=a_k^*$, and $\pi_{k+1}(a\mid s)=0$ otherwise.
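The pseudocode above can be sketched as follows. The 3-cell corridor environment is purely illustrative (it is not the grid world from the source), and since it is deterministic, a single truncated episode per state-action pair already gives the exact return; in general many episodes would be averaged:

```python
# A minimal MC Basic sketch on a hypothetical 3-cell corridor (states 0, 1, 2).
# Actions: 0=left, 1=stay, 2=right. Being at the target cell 2 gives +1,
# bumping into a wall gives -1, other transitions give 0.
GAMMA, STATES, ACTIONS = 0.9, [0, 1, 2], [0, 1, 2]

def step(s, a):
    s2 = s + {0: -1, 1: 0, 2: 1}[a]
    if s2 < 0 or s2 > 2:               # bumped into the boundary
        return s, -1.0
    return s2, 1.0 if s2 == 2 else 0.0

def episode_return(s, a, policy, length=50):
    """Return of one (truncated) episode starting from (s, a), then following policy."""
    g, discount = 0.0, 1.0
    for _ in range(length):
        s, r = step(s, a)
        g += discount * r
        discount *= GAMMA
        a = policy[s]
    return g

policy = {s: 1 for s in STATES}        # initial policy: always stay
for _ in range(3):                     # a few MC Basic iterations
    # Policy evaluation: estimate q for every (s, a) from episodes.
    q = {(s, a): episode_return(s, a, policy) for s in STATES for a in ACTIONS}
    # Policy improvement: greedy with respect to q.
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}

print(policy)  # the policy moves right toward the target and stays there
```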
✨MC Basic summary:

- MC Basic is a variant of the policy iteration algorithm.
- MC Basic is a model-free algorithm built on top of a model-based one, so it helps to first understand the original model-based algorithm.
- MC Basic is too inefficient to be used in practice, but it exposes the core idea clearly.
- MC Basic estimates action values directly rather than state values, because without a model, state values alone cannot be used to improve the policy.
- Since the underlying policy iteration converges, MC Basic also converges (given sufficiently many episodes).
✨MC Basic example (grid world):

- $r_{\text{boundary}}=-1$, $r_{\text{forbidden}}=-1$, $r_{\text{target}}=1$, $\gamma=0.9$
- Step 1: policy evaluation, compute $q_{\pi_k}(s,a)$ (9 states $\times$ 5 actions $=45$ state-action pairs).
  - Starting from $(s_1,a_1)$, the episode is $s_1 \xrightarrow{a_1} s_1 \xrightarrow{a_1} s_1 \xrightarrow{a_1} \ldots$, so
    $q_{\pi_0}(s_1,a_1)=-1+\gamma(-1)+\gamma^2(-1)+\ldots$
  - Starting from $(s_1,a_2)$, the episode is $s_1 \xrightarrow{a_2} s_2 \xrightarrow{a_3} s_5 \xrightarrow{a_3} \ldots$, so
    $q_{\pi_0}(s_1,a_2)=0+\gamma 0+\gamma^2 0+\gamma^3(1)+\gamma^4(1)+\ldots$
  - Starting from $(s_1,a_3)$, the episode is $s_1 \xrightarrow{a_3} s_4 \xrightarrow{a_2} s_5 \xrightarrow{a_3} \ldots$, so
    $q_{\pi_0}(s_1,a_3)=0+\gamma 0+\gamma^2 0+\gamma^3(1)+\gamma^4(1)+\ldots$
  - Starting from $(s_1,a_4)$, the episode is $s_1 \xrightarrow{a_4} s_1 \xrightarrow{a_1} s_1 \xrightarrow{a_1} \ldots$, so
    $q_{\pi_0}(s_1,a_4)=-1+\gamma(-1)+\gamma^2(-1)+\ldots$
  - Starting from $(s_1,a_5)$, the episode is $s_1 \xrightarrow{a_5} s_1 \xrightarrow{a_1} s_1 \xrightarrow{a_1} \ldots$, so
    $q_{\pi_0}(s_1,a_5)=0+\gamma(-1)+\gamma^2(-1)+\ldots$
- Step 2: policy improvement, choose the greedy action $a^*(s)=\arg\max_{a_i} q_{\pi_k}(s,a_i)$.
  - The maximum: $q_{\pi_0}(s_1,a_2)=q_{\pi_0}(s_1,a_3)$.
  - Policy update: $\pi_1(a_2\mid s_1)=1$ or $\pi_1(a_3\mid s_1)=1$.
【MC Exploring Starts】

- Visit: each occurrence of a state-action pair in an episode is called a visit.
- MC Basic uses the initial-visit method: for an episode starting from $(s_1,a_2)$, the return of the whole episode is used only to estimate the action value of $(s_1,a_2)$.
  This does not use the episode fully: the same episode also passes through other pairs, e.g. $(s_2,a_4)$, and its tail from that point onward is itself an episode that could estimate $(s_2,a_4)$; discarding it wastes data and forces redundant computation.

Using the data more efficiently:

- first-visit: for each state-action pair, only the return following its first occurrence in the episode is used; later occurrences are ignored.
- every-visit: the return following every occurrence is used.
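A small sketch of the difference, using a hypothetical 4-step episode in which the pairs $(s_1,a_2)$ and $(s_2,a_4)$ each occur twice; all names are illustrative:

```python
# Episode as a list of (state, action, reward-received-after-the-action).
episode = [("s1", "a2", 0.0), ("s2", "a4", 0.0), ("s1", "a2", 1.0), ("s2", "a4", 1.0)]
gamma = 0.9

def visit_returns(episode, gamma, first_visit):
    """Map each (s, a) pair to the list of returns used to estimate it."""
    used, g = [], 0.0
    for (s, a, r) in reversed(episode):   # backward pass: g <- r + gamma*g
        g = r + gamma * g
        used.append(((s, a), g))
    used.reverse()                        # used[t] = ((s_t, a_t), return from step t)
    returns, seen = {}, set()
    for (pair, g) in used:
        if first_visit and pair in seen:
            continue                      # first-visit: only the earliest occurrence counts
        seen.add(pair)
        returns.setdefault(pair, []).append(g)
    return returns

fv = visit_returns(episode, gamma, first_visit=True)
ev = visit_returns(episode, gamma, first_visit=False)
# first-visit keeps one sample per pair; every-visit keeps one per occurrence
print(fv[("s1", "a2")], ev[("s1", "a2")])
```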
When to update the policy:

- Method 1: collect all episodes starting from a state-action pair, then average their returns to estimate the action value.
  - Problem: the agent must wait until all episodes are collected, which wastes time.
- Method 2: use the return of a single episode to estimate the action value and improve the policy immediately, cycling episode by episode; this is more efficient.

Generalized policy iteration (GPI): the general idea or framework of switching between the policy-evaluation and policy-improvement processes.
✨MC Exploring Starts pseudocode:

- Initialization: $\pi_0$
- Goal: find an optimal policy
- Procedure: randomly select a starting state-action pair $(s_0,a_0)$ and, following the current policy, generate an episode of length $T$: $s_0, a_0, r_1, \ldots, s_{T-1}, a_{T-1}, r_T$.
  - Initialize $g \leftarrow 0$
  - For each step of the episode, $t = T-1, T-2, \ldots, 0$:
    - $g \leftarrow \gamma g + r_{t+1}$
    - If $(s_t,a_t)$ does not appear in $(s_0,a_0,s_1,a_1,\ldots,s_{t-1},a_{t-1})$ (first-visit):
      - $\operatorname{Returns}(s_t,a_t) \leftarrow \operatorname{Returns}(s_t,a_t) + g$
      - $q(s_t,a_t) = \operatorname{average}(\operatorname{Returns}(s_t,a_t))$
      - $\pi(a\mid s_t)=1$ if $a=\arg\max_a q(s_t,a)$, else $\pi(a\mid s_t)=0$
✨Why Exploring Starts is needed:

- Exploring: for every $(s,a)$ we need an episode passing through it, so that the subsequent rewards can be used to estimate its return and hence its action value.
  Since the estimate depends on those subsequent rewards, we must ensure every pair can be visited; if even one pair is never visited, it might be part of the optimal policy, and missing it can make the resulting policy suboptimal.
- Starts: the data for a pair $(s,a)$ can come in two ways. One is to start an episode directly from $(s,a)$ (a "start"); the other is to start from some other pair and pass through $(s,a)$, using the episode's tail from that point (a "visit"). Since visits cannot be guaranteed for every pair, exploring starts requires starting episodes from every $(s,a)$.
【MC $\varepsilon$-Greedy】

$$\pi(a\mid s)=\begin{cases}1-\dfrac{\varepsilon}{|\mathcal{A}(s)|}\left(|\mathcal{A}(s)|-1\right), & \text{for the greedy action,}\\ \dfrac{\varepsilon}{|\mathcal{A}(s)|}, & \text{for the other } |\mathcal{A}(s)|-1 \text{ actions,}\end{cases}$$

where $\varepsilon \in [0,1]$ and $|\mathcal{A}(s)|$ is the number of actions available at $s$.

Although the other actions are given some probability of being chosen, the greedy action always has at least as much probability as any other:

$$1-\frac{\varepsilon}{|\mathcal{A}(s)|}\left(|\mathcal{A}(s)|-1\right)=1-\varepsilon+\frac{\varepsilon}{|\mathcal{A}(s)|} \geq \frac{\varepsilon}{|\mathcal{A}(s)|}$$
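A minimal sketch of this policy (the action set and $\varepsilon$ value are illustrative):

```python
import random

def epsilon_greedy_probs(greedy_action, actions, eps):
    """Action probabilities of the epsilon-greedy policy defined above."""
    n = len(actions)
    probs = {a: eps / n for a in actions}           # eps/|A(s)| for every action
    probs[greedy_action] = 1 - eps / n * (n - 1)    # remaining mass to the greedy one
    return probs

def sample_action(probs, rng):
    return rng.choices(list(probs), weights=list(probs.values()))[0]

probs = epsilon_greedy_probs(greedy_action=0, actions=[0, 1, 2, 3, 4], eps=0.5)
rng = random.Random(0)
draws = [sample_action(probs, rng) for _ in range(10_000)]
# With 5 actions and eps=0.5, the greedy action has probability 0.6, others 0.1,
# so action 0 should be drawn most often.
print(probs[0], probs[1])
```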
✨Why use $\varepsilon$-Greedy:

It balances exploitation and exploration.

- Exploitation: if, in some state, one action currently has the largest estimated action value, we should take that action next, expecting more reward in the future.
- Exploration: the current information may be incomplete; even if one action currently looks best, other actions should still be tried, since their action values may also turn out to be good.
- $\varepsilon=0$: the policy becomes greedy and exploration vanishes.
- $\varepsilon=1$: exploration is maximal; the policy becomes uniform, with every action (including the greedy one) having probability $\frac{1}{|\mathcal{A}(s)|}$.
✨Combining $\varepsilon$-Greedy with MC-based RL algorithms:

Originally, policy improvement solves

$$\pi_{k+1}(s)=\arg\max_{\pi \in \Pi} \sum_a \pi(a\mid s)\,q_{\pi_k}(s,a),$$

whose solution is the greedy policy

$$\pi_{k+1}(a\mid s)=\begin{cases}1, & a=a_k^*\\ 0, & a\neq a_k^*\end{cases}\qquad a_k^*=\arg\max_a q_{\pi_k}(s,a).$$
Now, policy improvement instead solves

$$\pi_{k+1}(s)=\arg\max_{\pi \in \Pi_\varepsilon} \sum_a \pi(a\mid s)\,q_{\pi_k}(s,a),$$

where $\Pi_\varepsilon$ is the set of all $\varepsilon$-greedy policies. The solution is

$$\pi_{k+1}(a\mid s)=\begin{cases}1-\dfrac{|\mathcal{A}(s)|-1}{|\mathcal{A}(s)|}\varepsilon, & a=a_k^*\\ \dfrac{1}{|\mathcal{A}(s)|}\varepsilon, & a\neq a_k^*\end{cases}$$

With this policy, the exploring-starts condition is no longer required.
✨MC $\varepsilon$-Greedy pseudocode:

- Initialization: $\pi_0$, $\epsilon \in [0,1]$
- Goal: find an optimal policy
- Procedure: randomly select a starting state-action pair $(s_0,a_0)$ and, following the current policy, generate an episode of length $T$: $s_0, a_0, r_1, \ldots, s_{T-1}, a_{T-1}, r_T$.
  - Initialize $g \leftarrow 0$
  - For each step of the episode, $t = T-1, T-2, \ldots, 0$:
    - $g \leftarrow \gamma g + r_{t+1}$
    - Using the every-visit method, for every occurrence of $(s_t,a_t)$:
      - $\operatorname{Returns}(s_t,a_t) \leftarrow \operatorname{Returns}(s_t,a_t) + g$
      - $q(s_t,a_t) = \operatorname{average}(\operatorname{Returns}(s_t,a_t))$
      - Let $a^*=\arg\max_a q(s_t,a)$ and

$$\pi(a\mid s_t)=\begin{cases}1-\dfrac{|\mathcal{A}(s_t)|-1}{|\mathcal{A}(s_t)|}\epsilon, & a=a^*\\ \dfrac{1}{|\mathcal{A}(s_t)|}\epsilon, & a\neq a^*\end{cases}$$
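The pseudocode above can be sketched end to end. The 3-cell corridor environment (states 0–2, actions left/stay/right, cell 2 the target with reward +1, walls giving −1) is purely illustrative, not the source's grid world:

```python
import random

GAMMA, EPS, STATES, ACTIONS = 0.9, 0.2, [0, 1, 2], [0, 1, 2]

def step(s, a):
    s2 = s + {0: -1, 1: 0, 2: 1}[a]
    if s2 < 0 or s2 > 2:
        return s, -1.0                  # bumped into the boundary
    return s2, 1.0 if s2 == 2 else 0.0

def sample(policy, s, rng):
    acts = list(policy[s])
    return rng.choices(acts, weights=[policy[s][a] for a in acts])[0]

rng = random.Random(1)
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
returns = {(s, a): [] for s in STATES for a in ACTIONS}
policy = {s: {a: 1 / len(ACTIONS) for a in ACTIONS} for s in STATES}  # uniform start

for _ in range(500):
    # Generate an episode from a random (s0, a0) under the current policy.
    s, a = rng.choice(STATES), rng.choice(ACTIONS)
    traj = []
    for _ in range(30):
        s2, r = step(s, a)
        traj.append((s, a, r))
        s, a = s2, sample(policy, s2, rng)
    # Backward pass: g <- gamma*g + r, with every-visit updates.
    g = 0.0
    for (s_t, a_t, r) in reversed(traj):
        g = GAMMA * g + r
        returns[(s_t, a_t)].append(g)
        q[(s_t, a_t)] = sum(returns[(s_t, a_t)]) / len(returns[(s_t, a_t)])
        best = max(ACTIONS, key=lambda a: q[(s_t, a)])
        n = len(ACTIONS)
        policy[s_t] = {a: (1 - (n - 1) / n * EPS) if a == best else EPS / n
                       for a in ACTIONS}

greedy = {s: max(ACTIONS, key=lambda a: policy[s][a]) for s in STATES}
print(greedy)
```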
✨MC $\varepsilon$-Greedy example:

- With $\varepsilon=1$, the policy has the strongest exploration ability.
- As $\epsilon$ decreases, the exploration ability decreases accordingly.