Reinforcement Learning 2: Policy Iteration and Value Iteration (Part 2)

Policy Evaluation and Policy Improvement

The Policy Evaluation Algorithm

Let us first consider the following question: given a policy $\pi$, how do we compute the value function of every state? We showed earlier that the Bellman equation is a system of linear equations in $V^\pi=(V^\pi(s_1),\cdots,V^\pi(s_n))$, so in principle all the state values can be obtained by matrix inversion. However, solving a linear system directly costs $O(n^3)$, where $n$ is the cardinality of the state space, i.e. the number of states. We therefore usually do not solve the Bellman equation directly, but instead solve it iteratively.

Recall the Bellman equation:
$$V^\pi(s)=E^\pi[R(s,a_0,s_1)\mid s_0=s]+\gamma E^\pi[V^\pi(s_1)\mid s_0=s]$$
We evaluate the first term:
$$\begin{aligned} E^\pi[R(s,a_0,s_1)\mid s_0=s]&=\sum_{a\in\mathcal{A}}\sum_{s^\prime\in\mathcal{S}}p^\pi(a_0=a,s_1=s^\prime\mid s_0=s)R(s,a,s^\prime)\\ &=\sum_{a\in\mathcal{A}}\sum_{s^\prime\in\mathcal{S}}p(s^\prime\mid a,s)\pi(a\mid s)R(s,a,s^\prime) \end{aligned}$$
Computing this term for one state takes $O(|\mathcal{A}|\cdot|\mathcal{S}|)$ time. Next we evaluate the second term:
$$\begin{aligned} E^\pi[V^\pi(s_1)\mid s_0=s]&=\sum_{s^\prime\in\mathcal{S}}P^\pi(s_1=s^\prime\mid s_0=s)V^\pi(s^\prime)\\ &=\sum_{s^\prime\in\mathcal{S}}\sum_{a\in\mathcal{A}}p(s^\prime\mid a,s)\pi(a\mid s)V^\pi(s^\prime) \end{aligned}$$
Let the state space be $\mathcal{S}=\{0,1,2,\cdots,n-1\}$ and the action space be $\mathcal{A}=\{0,1,\cdots,m-1\}$. Write the first term as $R^\pi_{s}=E^\pi[R(s,a_0,s_1)\mid s_0=s]$ and the coefficients of the second term as $p_{s,s^\prime}^\pi=P^\pi(s_1=s^\prime\mid s_0=s)=\sum_{a\in\mathcal{A}}p(s^\prime\mid a,s)\pi(a\mid s)$. Each coefficient takes $O(|\mathcal{A}|)$ time and there are $|\mathcal{S}|^2$ of them, so determining all parameters of the linear system takes $O(|\mathcal{A}||\mathcal{S}|^2)$ time. Collect them into
$$P^\pi=\begin{bmatrix} p^\pi_{0,0}&p_{0,1}^\pi&\cdots&p_{0,n-1}^\pi\\ p^\pi_{1,0}&p_{1,1}^\pi&\cdots&p_{1,n-1}^\pi\\ \vdots&\vdots&\ddots&\vdots\\ p^\pi_{n-1,0}&p_{n-1,1}^\pi&\cdots&p_{n-1,n-1}^\pi \end{bmatrix},\qquad R^\pi=\begin{bmatrix} R^\pi_0\\ R^\pi_1\\ \vdots\\ R^\pi_{n-1} \end{bmatrix},\qquad V^\pi=\begin{bmatrix} V^\pi(0)\\ V^\pi(1)\\ \vdots\\ V^\pi(n-1) \end{bmatrix}$$
The Bellman equation can then be written as
$$V^\pi=R^\pi+\gamma P^\pi V^\pi$$
This is a linear system in $V^\pi$. Solving it directly with Gaussian elimination or matrix inversion costs $O(|\mathcal{S}|^3)$, so instead of eliminating directly we usually solve it iteratively:
$$V_{k+1}=R^\pi+\gamma P^\pi V_k$$
The sequence $\{V_k\}$ converges to the solution of the system; the justification is the contraction mapping theorem from functional analysis.
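As a side remark (this follows directly from the linear system above: since $\gamma\in(0,1)$ and $P^\pi$ is row-stochastic, $I-\gamma P^\pi$ is invertible), the exact solution can also be written in closed form:
$$V^\pi=(I-\gamma P^\pi)^{-1}R^\pi$$
The iteration recovers this same fixed point without ever forming the inverse explicitly.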

Metric: let $S$ be a set on which a binary function $d(x,y)$ is defined, satisfying:

  1. $d(x,y)\ge 0$, and $d(x,y)=0\Leftrightarrow x=y$
  2. $d(x,y)=d(y,x)$
  3. $d(x,z)\le d(x,y)+d(y,z)$

Contraction mapping: let $g:S\to S$ be a map on $S$. If there exists $\alpha\in(0,1)$ such that
$$d(g(x),g(y))\le \alpha\, d(x,y)$$
then $g$ is called a contraction mapping.

Contraction mapping theorem: a contraction mapping on a complete metric space has a unique fixed point,

and this fixed point can be obtained by the iteration $x_{k+1}=g(x_k)$.
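As a minimal illustration of the theorem (a one-state special case, not part of the general argument), consider the scalar equation $v=r+\gamma v$ with $\gamma\in(0,1)$. The map $g(v)=r+\gamma v$ is a contraction with constant $\gamma$, and the iteration gives
$$v_{k+1}=r+\gamma v_k\;\Longrightarrow\; v_k=\gamma^k v_0+r\,\frac{1-\gamma^k}{1-\gamma}\;\xrightarrow{k\to\infty}\;\frac{r}{1-\gamma},$$
which is exactly the unique fixed point $v^\star=r/(1-\gamma)$.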

Returning to the solution of this linear system, subtracting consecutive iterates gives
$$V_{k+2}-V_{k+1}=\gamma P^\pi(V_{k+1}-V_k)$$
Take the infinity norm on $\mathbb{R}^n$, i.e. $||x||=\max_{i=1,\cdots,n}|x_i|$. Then
$$||V_{k+2}-V_{k+1}||=\gamma\, ||P^\pi(V_{k+1}-V_k)||$$
Let $V_{k+1}-V_k=(x_1,\cdots,x_n)^T$ and $P^\pi(V_{k+1}-V_k)=(y_1,\cdots,y_n)^T$; then
$$y_i=\sum_{j=1}^np^\pi_{i,j}x_j$$
Since
$$p^\pi_{i,j}\in[0,1],\qquad\sum_{j=1}^np^\pi_{i,j}=1,$$
we have
$$|y_i|\le\sum_{j=1}^np_{i,j}^\pi|x_j|\le \sum_{j=1}^n p_{i,j}^\pi\,||x||=||x||,$$
hence
$$||y||\le ||x||,$$
and therefore
$$||V_{k+2}-V_{k+1}||\le\gamma\, ||V_{k+1}-V_k||$$
So the update map is a contraction and the iteration above converges. Given an error tolerance $\epsilon$, at most about $\lceil\log(\epsilon)/\log(\gamma)\rceil+1$ iterations are needed. This iteration count is independent of the sizes of the state and action spaces, so the iterative solve costs $O(|\mathcal{S}|^2)$: each sweep is $O(|\mathcal{S}|^2)$ and the number of sweeps depends only on $\epsilon$ and $\gamma$.
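A short derivation of this iteration count, using only the contraction property just shown: writing $V^\pi$ for the true solution (so that $V^\pi=R^\pi+\gamma P^\pi V^\pi$) and subtracting the two update equations,
$$||V_k-V^\pi||\le\gamma\,||V_{k-1}-V^\pi||\le\cdots\le\gamma^k\,||V_0-V^\pi||,$$
so driving the error below a tolerance $\epsilon$ (relative to the initial error) requires $\gamma^k\le\epsilon$, i.e. roughly $k\ge\log(\epsilon)/\log(\gamma)$ sweeps, which is the count quoted above.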

Input: error tolerance e=1e-5, policy policy
Algorithm:
1. Compute the parameters of the linear system
Initialize R2[i]=0 for i=0,1,...,s-1 and P2[i][j]=0 for i,j=0,1,...,s-1
for(int i=0;i<s;i++){
    for(int j=0;j<a;j++){
        for(int k=0;k<s;k++){
            P2[i][k]+=P[i][j][k]*policy[i][j];
            R2[i]+=R[i][j][k]*P[i][j][k]*policy[i][j];
        }
    }
}//time complexity O(a*s^2)
2. Iterate
Initialize v={0,0,...,0}, e0=1
while(e0>=e){
      v2={0,0,...,0};
      e0=0;
      for(int i=0;i<s;i++){
          for(int j=0;j<s;j++){
              v2[i]+=P2[i][j]*v[j];
          }
          v2[i]*=gamma;
          v2[i]+=R2[i];
          e0=max(e0,abs(v2[i]-v[i]));
      }
      for(int i=0;i<s;i++){
          v[i]=v2[i];
      }
}//time complexity O(s^2) per sweep
Return: v

The above is pseudocode; we can turn it into standard C++. The code below assumes the policy is deterministic and that different states may have different action spaces.

void policy_evaluation(const vector<int>& policy,vector<double>& state_value,double e=1e-5){
            vector<double> R2(s,0);
            vector<vector<double>> P2(s,vector<double>(s,0));
            //compute the coefficients of the linear system for the chosen deterministic policy
            for(int i=0;i<s;i++){
                for(int k=0;k<s;k++){
                    P2[i][k]=P[i][policy[i]][k];
                    R2[i]+=P[i][policy[i]][k]*R[i][policy[i]][k];
                }
            }
            //solve the value function iteratively until the largest update falls below e
            double e0=1;
            while(e0>=e){
                vector<double> v2(s,0);
                e0=0;
                for(int i=0;i<s;i++){
                    for(int j=0;j<s;j++) v2[i]+=P2[i][j]*state_value[j];
                    v2[i]*=gamma;
                    v2[i]+=R2[i];
                    e0=max(e0,abs(v2[i]-state_value[i]));
                    state_value[i]=v2[i];
                }
            }
        }
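A minimal usage sketch, to be placed inside a main function (it assumes, as in the complete program at the end of this post, that s, gamma, P and R are already in scope with the same meanings; the particular policy chosen here is only an illustration):

// hypothetical call site: evaluate the deterministic policy "always take action 0"
vector<int> policy(s,0);          // one action index per state
vector<double> state_value(s,0);  // initial guess V_0 = 0
policy_evaluation(policy,state_value,1e-5);
for(int i=0;i<s;i++) cout<<"V("<<i<<")="<<state_value[i]<<' ';
cout<<endl;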
Policy Improvement

Since $T=\infty$, there is no final stage from which to work backwards, so the classical backward-induction DP approach is not applicable here. Consider instead the following question: if we already have a policy $\pi$, can we improve $\pi$ to obtain a better policy $\pi^\prime$? The answer is yes, and again the state-action value function is the tool.

We can obtain a new policy $\pi^\prime$ simply by maximizing $Q^\pi(a,s)$, i.e. by solving
$$\mathop{\mathrm{argmax}}_{\pi^\prime}\sum_{a\in\mathcal{A}}\pi^\prime(a|s)Q^\pi(a,s)\qquad\forall s\in\mathcal{S}$$
Clearly this is achieved by simply picking, in each state, an action that maximizes $Q^\pi$, which yields a new deterministic policy
$$\pi^\prime(s)=\mathop{\mathrm{argmax}}_{a}Q^\pi(a,s)$$
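Here $Q^\pi$ is computed directly from the state values produced by policy evaluation, using the same one-step expansion as in the Bellman equation:
$$Q^\pi(a,s)=\sum_{s^\prime\in\mathcal{S}}p(s^\prime|a,s)\left[R(s,a,s^\prime)+\gamma V^\pi(s^\prime)\right],$$
so the maximization over actions costs $O(|\mathcal{A}||\mathcal{S}|)$ per state.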
This is a greedy idea: it is as if we inserted one extra step before $t=0$ in which we act according to $\pi^\prime$ and then follow $\pi$ forever after. Nevertheless, we can assert that after switching entirely to $\pi^\prime$ the value of every state improves or at least does not decrease; the justification is the policy improvement theorem:

Policy improvement theorem: for two policies $\pi^\prime$ and $\pi$, if
$$\sum_{a\in\mathcal{A}}\pi^\prime(a|s)Q^\pi(a,s)\geq V^\pi(s) \qquad \forall s\in\mathcal{S}$$
then $\pi^\prime\geq \pi$.

Proof:
In what follows we abbreviate $p(s_{t+1}=s_{t+1}^\prime,a_t=a_t^\prime,s_t=s_t^\prime,\cdots,a_0=a_0^\prime,s_0=s_0^\prime\mid s_0=s_0^\prime)$ as $p(s_{t+1}^\prime,a_t^\prime,s_t^\prime,\cdots,a_0^\prime,s_0^\prime\mid s_0^\prime)$.
$$\begin{aligned} V^\pi(s)\le& \sum_{a_0^\prime}\pi^\prime(a_0^\prime|s)Q^\pi(a_0^\prime,s)\\ =&\sum_{a_0^\prime}\pi^\prime(a_0^\prime|s)\left[\sum_{s_1^\prime}p(s_1^\prime|a_0^\prime,s)\big[R(s,a_0^\prime,s_1^\prime)+\gamma V^\pi(s_1^\prime)\big]\right]\\ =&\sum_{a_0^\prime,s_1^\prime}p^{\pi^\prime}(s_1^\prime,a_0^\prime,s|s)R(s,a_0^\prime,s_1^\prime)+\gamma \sum_{a_0^\prime,s_1^\prime}p^{\pi^\prime}(s_1^\prime,a_0^\prime,s|s)V^\pi(s_1^\prime) \end{aligned}$$
Define
$$R^{\pi^\prime}_t(s)=\sum_{s_t^\prime,a_{t-1}^\prime,s_{t-1}^\prime,\cdots,a_0^\prime}p^{\pi^\prime}(s_t^\prime,a_{t-1}^\prime,s_{t-1}^\prime,\cdots,a_0^\prime,s|s)\,R(s_{t-1}^\prime,a_{t-1}^\prime,s_t^\prime)$$
Then
$$\begin{aligned} V^\pi(s)\le& R^{\pi^\prime}_1(s)+\gamma \sum_{a_0^\prime,s_1^\prime}p^{\pi^\prime}(s_1^\prime,a_0^\prime,s|s)V^\pi(s_1^\prime)\\ \le & R^{\pi^\prime}_1(s)+\gamma \sum_{a_0^\prime,s_1^\prime}p^{\pi^\prime}(s_1^\prime,a_0^\prime,s|s)\left[R_1^{\pi^\prime}(s_1^\prime)+\gamma\sum_{a_1^\prime,s_2^\prime}p^{\pi^\prime}(s_2^\prime,a_1^\prime,s_1^\prime|s_1^\prime)V^\pi(s_2^\prime)\right] \end{aligned}$$
Next we prove the following recursion:
$$\sum_{s_t^\prime,a_{t-1}^\prime,\cdots,a_0^\prime}p^{\pi^\prime}(s_t^\prime,a_{t-1}^\prime,\cdots,a_0^\prime,s|s)R_1^{\pi^\prime}(s_t^\prime)=R_{t+1}^{\pi^\prime}(s)$$
Expanding the left-hand side,
$$\begin{aligned} &\sum_{s_t^\prime,a_{t-1}^\prime,\cdots,a_0^\prime}p^{\pi^\prime}(s_t^\prime,a_{t-1}^\prime,\cdots,a_0^\prime,s|s)R_1^{\pi^\prime}(s_t^\prime)\\ =&\sum_{s_t^\prime,a_{t-1}^\prime,\cdots,a_0^\prime}p^{\pi^\prime}(s_t^\prime,a_{t-1}^\prime,\cdots,a_0^\prime,s|s)\sum_{s_{t+1}^\prime,a_{t}^\prime}p^{\pi^\prime}(s^{\prime}_{t+1},a^\prime_t|s_t^\prime)R(s_t^\prime,a^\prime_t,s^{\prime}_{t+1})\\ =&\sum_{s_{t+1}^\prime,a_{t}^\prime,s_t^\prime,a_{t-1}^\prime,\cdots,a_0^\prime}p^{\pi^\prime}(s_t^\prime,a_{t-1}^\prime,\cdots,a_0^\prime,s|s)\,p^{\pi^\prime}(s^{\prime}_{t+1},a^\prime_t|s_t^\prime)R(s_t^\prime,a^\prime_t,s^{\prime}_{t+1}) \end{aligned}$$
Now examine the probability appearing above:
$$\begin{aligned} p^{\pi^\prime}(s^{\prime}_{t+1},a^\prime_t|s_t^\prime)=&\,p^{\pi^\prime}(s_1=s_{t+1}^\prime,a_0=a_t^\prime|s_0=s_t^\prime)\\ =&\,p^{\pi^\prime}(s_{t+1}=s_{t+1}^\prime,a_t=a_t^\prime|s_t=s_t^\prime)\\ =&\,p^{\pi^\prime}(s_{t+1}=s_{t+1}^\prime,a_t=a_t^\prime|s_t=s_t^\prime,a_{t-1}=a_{t-1}^\prime,s_{t-1}=s_{t-1}^\prime,\cdots,s_0=s) \end{aligned}$$
where the step from the $(s_1,a_0\mid s_0)$ form to the $(s_{t+1},a_t\mid s_t)$ form uses time homogeneity, and the last equality uses the Markov property. Consequently
$$\begin{aligned} &p^{\pi^\prime}(s_t^\prime,a_{t-1}^\prime,\cdots,a_0^\prime,s|s)\,p^{\pi^\prime}(s^{\prime}_{t+1},a^\prime_t|s_t^\prime)\\ =&\,p^{\pi^\prime}(s_{t+1}=s_{t+1}^\prime,a_t=a_t^\prime|s_t=s_t^\prime,a_{t-1}=a_{t-1}^\prime,s_{t-1}=s_{t-1}^\prime,\cdots,s_0=s)\cdot p^{\pi^\prime}(s_t=s_t^\prime,a_{t-1}=a_{t-1}^\prime,s_{t-1}=s_{t-1}^\prime,\cdots,a_0=a_0^\prime|s_0=s)\\ =&\,p^{\pi^\prime}(s_{t+1}=s_{t+1}^\prime,a_t=a_t^\prime,s_t=s_t^\prime,a_{t-1}=a_{t-1}^\prime,s_{t-1}=s_{t-1}^\prime,\cdots,a_0=a_0^\prime|s_0=s) \end{aligned}$$
Substituting this back proves the recursion. Therefore
$$\begin{aligned} V^\pi(s)\le& R^{\pi^\prime}_1(s)+\gamma \sum_{a_0^\prime,s_1^\prime}p^{\pi^\prime}(s_1^\prime,a_0^\prime,s|s)\left[R_1^{\pi^\prime}(s_1^\prime)+\gamma\sum_{a_1^\prime,s_2^\prime}p^{\pi^\prime}(s_2^\prime,a_1^\prime,s_1^\prime|s_1^\prime)V^\pi(s_2^\prime)\right]\\ \le &R_1^{\pi^\prime}(s)+\gamma R_2^{\pi^\prime}(s)+\gamma^2 \sum_{s_2^\prime,a_1^\prime,s_1^\prime,a_0^\prime} p^{\pi^\prime}(s_2^\prime,a_1^\prime,s_1^\prime,a_0^\prime,s|s)\,V^\pi(s_2^\prime)\\ \le & R_1^{\pi^\prime}(s)+\gamma R_2^{\pi^\prime}(s)+\gamma^2 R_3^{\pi^\prime}(s)+\gamma^3 \sum_{s_3^\prime,a_2^\prime,\cdots,s_1^\prime,a_0^\prime} p^{\pi^\prime}(s_3^\prime,a_2^\prime,\cdots,a_0^\prime,s|s)\,V^\pi(s_3^\prime)\\ \le & \cdots \end{aligned}$$
Continuing in this way, for any $n$
$$V^\pi(s)\le \sum_{i=1}^n \gamma^{i-1}R_i^{\pi^\prime}(s)+\gamma^{n} \sum_{s_{n}^\prime,a_{n-1}^\prime,\cdots,s_1^\prime,a_0^\prime} p^{\pi^\prime}(s_{n}^\prime,a_{n-1}^\prime,\cdots,a_0^\prime,s|s)\,V^\pi(s_{n}^\prime)$$
Suppose $|R(s,a,s^\prime)|\le M$ for all $s,s^\prime\in\mathcal{S}$ and $a\in \mathcal{A}$. Then $|V^\pi(s)|\le M/(1-\gamma)$ for every $\pi$ and every $s\in\mathcal{S}$, so the remainder term is bounded by $\gamma^n M/(1-\gamma)$ and vanishes as $n\to\infty$. Taking limits on both sides gives
$$V^\pi(s)\le \sum_{n=1}^\infty \gamma^{n-1}R_n^{\pi^\prime}(s)$$
On the other hand, directly from the definition of the state value function,
$$V^{\pi^\prime}(s)=\sum_{n=1}^\infty \gamma^{n-1}R_n^{\pi^\prime}(s)$$
Therefore $V^\pi(s)\le V^{\pi^\prime}(s)$ for all $s\in\mathcal{S}$, which completes the proof.

So we can obtain a new policy $\pi^\prime$ simply by maximizing $Q^\pi(a,s)$, and this new policy is moreover deterministic. The iteration stops when the policy can no longer be improved by maximizing $Q^\pi(a,s)$. Since both the state space and the action space are finite, there are only finitely many deterministic policies (at most $|\mathcal{A}|^{|\mathcal{S}|}$), so the policy improvement procedure is guaranteed to terminate in finitely many steps.

We can split the whole policy iteration algorithm into two alternating steps and loop until convergence:

  1. Step 1: given the policy from the previous round, run policy evaluation to obtain all state values.
  2. Step 2: compute the action value function from the state values and maximize it to obtain a new policy.
  3. If the policy changed, keep iterating; if it no longer changes, stop.
Policy iteration algorithm:
Input: error tolerance e=1e-5
Algorithm:
1 Start from the initial policy policy={0,0,...,0}
2 loop:
	3 Run policy evaluation on policy to obtain the value function v
	4 Compute the action value function Q; for each state s, maximize Q to obtain policy2
	5 If policy2 == policy, exit the loop; otherwise set policy=policy2
Output: policy

The complete C++ code for the policy iteration algorithm is as follows:

void policy_evaluation(const vector<int>& policy,vector<double>& state_value,double e=1e-5){
            vector<double> R2(s,0);
            vector<vector<double>> P2(s,vector<double>(s,0));
            //compute the coefficients of the linear system for the chosen deterministic policy
            for(int i=0;i<s;i++){
                for(int k=0;k<s;k++){
                    P2[i][k]=P[i][policy[i]][k];
                    R2[i]+=P[i][policy[i]][k]*R[i][policy[i]][k];
                }
            }
            //solve the value function iteratively until the largest update falls below e
            double e0=1;
            while(e0>=e){
                vector<double> v2(s,0);
                e0=0;
                for(int i=0;i<s;i++){
                    for(int j=0;j<s;j++) v2[i]+=P2[i][j]*state_value[j];
                    v2[i]*=gamma;
                    v2[i]+=R2[i];
                    e0=max(e0,abs(v2[i]-state_value[i]));
                    state_value[i]=v2[i];
                }
            }
        }
bool policy_improvement(vector<int>& old_policy,const vector<double>& state_value){
            bool i0=false;
            for(int i=0;i<s;i++){
                int new_policy=0;
                double maxQ=0;
                for(int k=0;k<s;k++) maxQ+=P[i][0][k]*(R[i][0][k]+gamma*state_value[k]);
                for(int j=1;j<P[i].size();j++){
                    double Q=0;
                    for(int k=0;k<s;k++) Q+=P[i][j][k]*(R[i][j][k]+gamma*state_value[k]);
                    if(Q>maxQ){
                        new_policy=j;
                        maxQ=Q;
                    }
                }
                if(new_policy!=old_policy[i]) i0=true;
                old_policy[i]=new_policy;
            }
            return i0;
        }

void policy_iteration(double e=1e-5){//here policy is a private data member of the class
            vector<double> sv(s,0);
            cout<<"policy iteration algorithm:"<<endl;
            policy_evaluation(policy,sv,e);
            cout<<"policy:";
            for(auto d:policy){
                cout<<d<<' ';
            }
            cout<<endl<<"state value:";
            for(auto v:sv){
                cout<<v<<' ';
            }
            cout<<endl;
            while(policy_improvement(policy,sv)) {
                    cout<<"policy:";
                    for(auto d:policy){
                        cout<<d<<' ';
                    }
                    policy_evaluation(policy,sv,e);
                    cout<<endl<<"state value:";
                    for(auto v:sv){
                        cout<<v<<' ';
                    }
                    cout<<endl;
            }
        }

Existence and Uniqueness of the Optimal Policy

Starting from some initial policy, policy iteration converges to a deterministic policy that can no longer be improved. Logically, however, "cannot be improved further" is not the same as "optimal": different initial policies might, after policy iteration, lead to different deterministic policies, much like the relation between a local extremum and the global maximum.

A policy $\pi^\star$ obtained by policy iteration must satisfy
$$V^{\pi^\star}(s)=\max_{a\in\mathcal{A}}\left[\sum_{s^\prime}p(s^\prime|a,s)R(s,a,s^\prime)+\gamma\sum_{s^\prime}p(s^\prime|a,s)V^{\pi^\star}(s^\prime)\right]$$
This equation is called the Bellman Optimality Equation. A policy whose state value function satisfies the Bellman optimality equation can no longer be improved by policy iteration: for such a policy $\max_{a}Q^{\pi^\star}(a,s)=V^{\pi^\star}(s)$ in every state, so the greedy step yields no strict increase. Since policy iteration is guaranteed to converge, the Bellman optimality equation must have a solution. Below we prove that this solution is unique, which also establishes that the resulting policy is optimal.

Proof (by contradiction):
Suppose there are two solutions $V_1(s)$ and $V_2(s)$ with $V_1(s)\neq V_2(s)$, i.e. there exists some $s^\star\in\mathcal{S}$ with $V_1(s^\star)\neq V_2(s^\star)$.

e ( s ) = V 2 ( s ) − V 1 ( s ) e(s)=V_2(s)-V_1(s) e(s)=V2(s)V1(s) e ( s ⋆ ) = max ⁡ s ∈ S e ( s ) > 0 e(s^\star)=\max_{s\in\mathcal{S}}e(s)>0 e(s)=maxsSe(s)>0,则
$$\begin{aligned} V_2(s^\star)=&\max_{a\in\mathcal{A}}\left[\sum_{s^\prime}p(s^\prime|a,s^\star)R(s^\star,a,s^\prime)+\gamma\sum_{s^\prime}p(s^\prime|a,s^\star)V_2(s^\prime)\right]\\ =&\max_{a\in\mathcal{A}}\left[\sum_{s^\prime}p(s^\prime|a,s^\star)R(s^\star,a,s^\prime)+\gamma\sum_{s^\prime}p(s^\prime|a,s^\star)V_1(s^\prime)+\gamma\sum_{s^\prime}p(s^\prime|a,s^\star)e(s^\prime)\right]\\ \le&\max_{a\in\mathcal{A}}\left[\sum_{s^\prime}p(s^\prime|a,s^\star)R(s^\star,a,s^\prime)+\gamma\sum_{s^\prime}p(s^\prime|a,s^\star)V_1(s^\prime)\right]+\gamma\max_{a\in\mathcal{A}}\left[\sum_{s^\prime}p(s^\prime|a,s^\star)e(s^\prime)\right]\\ \le&V_1(s^\star)+\gamma e(s^\star) < V_1(s^\star)+e(s^\star) \end{aligned}$$

e ( s ⋆ ) = V 2 ( s ⋆ ) − V 1 ( s ⋆ ) e(s^\star)=V_2(s^\star)-V_1(s^\star) e(s)=V2(s)V1(s)矛盾,所以Bellman最优方程有唯一解

Since the Bellman optimality equation has only one solution, the value function of the policy obtained by policy iteration is the same no matter which initial policy we start from. In other words, all of these "local maxima" have the same value function, and each of them is an optimal policy.

The Value Iteration Algorithm

From the discussion above, a policy is optimal if and only if its state value function satisfies the Bellman optimality equation, so we can obtain an optimal policy by solving the Bellman optimality equation.

We can choose an initial value function $V_0(s)$, substitute it into the right-hand side of the Bellman optimality equation, and iterate:
$$V_{k+1}(s)=\max_{a\in\mathcal{A}}\left[\sum_{s^\prime}p(s^\prime|a,s)R(s,a,s^\prime)+\gamma\sum_{s^\prime}p(s^\prime|a,s)V_k(s^\prime)\right]$$
The fixed point of this iteration is exactly the solution of the Bellman optimality equation, and an optimal policy is then obtained by choosing, in each state, an action that maximizes $Q(a,s)$. We now show that the iteration indeed converges.

Proof:
For two value functions $V_1(s),V_2(s)$, let $e(s)=V_1(s)-V_2(s)$ and $||e||=\max(|\min_{s}e(s)|,|\max_s e(s)|)$ (the infinity norm of $e$). Define $g=(g_1,\cdots,g_n)^T$ componentwise by
$$g_s(x)=\max_{a\in\mathcal{A}}\left[\sum_{s^\prime}p(s^\prime|a,s)R(s,a,s^\prime)+\gamma\sum_{s^\prime}p(s^\prime|a,s)x_{s^\prime}\right]$$
Then
$$\begin{aligned} g_s(V_1)=&\max_{a\in\mathcal{A}}\left[\sum_{s^\prime}p(s^\prime|a,s)R(s,a,s^\prime)+\gamma\sum_{s^\prime}p(s^\prime|a,s)V_1(s^\prime)\right]\\ \le&\,g_s(V_2)+\gamma\max_{a\in\mathcal{A}}\left[\sum_{s^\prime}p(s^\prime|a,s)e(s^\prime)\right]\le g_s(V_2)+\gamma \max_{s^\prime}e(s^\prime) \end{aligned}$$
Hence
$$g_s(V_1)-g_s(V_2)\le \gamma \max_{s^\prime}e(s^\prime)\qquad\forall s\in\mathcal{S}$$
and by symmetry
$$g_s(V_2)-g_s(V_1)\le \gamma \max_{s^\prime}[-e(s^\prime)]\qquad\forall s\in\mathcal{S}$$
Together these give
$$\gamma \min_{s^\prime}e(s^\prime)\le g_s(V_1)-g_s(V_2)\le \gamma\max_{s^\prime}e(s^\prime)\qquad\forall s\in\mathcal{S}$$
so
$$|g_s(V_1)-g_s(V_2)|\le \max\left(\gamma \left|\min_{s^\prime}e(s^\prime)\right|,\gamma \left|\max_{s^\prime}e(s^\prime)\right|\right)=\gamma\,||e||$$
This shows that $g$ is a contraction mapping, so it has a unique fixed point, and the iteration above is guaranteed to converge to the unique solution of the Bellman optimality equation.
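As with policy evaluation, the contraction property immediately gives a convergence rate (this is just the bound above applied repeatedly): if $V^\star$ denotes the fixed point, then
$$||V_k-V^\star||\le\gamma\,||V_{k-1}-V^\star||\le\cdots\le\gamma^k\,||V_0-V^\star||,$$
so value iteration also needs only on the order of $\log(\epsilon)/\log(\gamma)$ sweeps to reach a tolerance $\epsilon$.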

Pseudocode for the value iteration algorithm:

Value iteration algorithm:
Input: error tolerance e=1e-5
1. Compute all the coefficients
2. loop: while the largest absolute update of the value function exceeds e
    compute all Q(a,s)
    V(s) <- max(Q(a,s))
3. Maximize Q(a,s) to obtain the optimal policy
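Step 3 is the same greedy extraction used in policy improvement, now applied to the converged values $V^\star$:
$$\pi^\star(s)=\mathop{\mathrm{argmax}}_{a\in\mathcal{A}}\sum_{s^\prime}p(s^\prime|a,s)\left[R(s,a,s^\prime)+\gamma V^\star(s^\prime)\right]$$
In the code below this extraction is folded into the main loop, so the policy is updated alongside the values.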

The C++ code for value iteration is as follows:

void value_iteration(double e=1e-5){
            cout<<"value iteration algorithm:"<<endl;
            vector<vector<double>> R2;
            //precompute expected immediate rewards R2[i][j] = sum_k P[i][j][k]*R[i][j][k]
            for(int i=0;i<s;i++){
                R2.push_back(vector<double>(P[i].size(),0));
                for(int j=0;j<P[i].size();j++){
                    for(int k=0;k<s;k++){
                        R2[i][j]+=R[i][j][k]*P[i][j][k];
                    }
                }
            }
            //value iteration sweeps
            double e0=1;
            vector<double> value(s,0);
            while(e0>e){
                cout<<"value:";
                for(auto val:value) cout<<val<<' ';
                cout<<endl;
                e0=0;
                for(int i=0;i<s;i++){
                    double v=0;
                    for(int k=0;k<s;k++){
                        v+=P[i][0][k]*value[k];
                    }
                    v*=gamma;
                    v+=R2[i][0];
                    policy[i]=0;
                    for(int j=1;j<P[i].size();j++){
                        double q=0;
                        for(int k=0;k<s;k++){
                            q+=P[i][j][k]*value[k];
                        }
                        q*=gamma;
                        q+=R2[i][j];
                        if(q>v){
                            policy[i]=j;
                            v=q;
                        }
                    }
                    e0=max(e0,abs(v-value[i]));
                    value[i]=v;
                }
                cout<<"policy:";
                for(auto d:policy) cout<<d<<' ';
            }
        }

Computing the coefficients takes $O(|\mathcal{A}||\mathcal{S}|^2)$ time, and the value iteration phase also takes $O(|\mathcal{A}||\mathcal{S}|^2)$: each sweep costs $O(|\mathcal{A}||\mathcal{S}|^2)$, and the number of sweeps is independent of $|\mathcal{S}|$ and $|\mathcal{A}|$.
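Combining the two observations (and using the sweep count for a contraction derived earlier, treated as a constant depending only on $\epsilon$ and $\gamma$), the total cost of value iteration is on the order of
$$O\!\left(|\mathcal{A}|\,|\mathcal{S}|^2\cdot\frac{\log\epsilon}{\log\gamma}\right)=O\!\left(|\mathcal{A}|\,|\mathcal{S}|^2\right).$$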

Value iteration can be seen as merging policy evaluation and policy improvement into a single update, whereas policy iteration follows each greedy application of the Bellman optimality update with a full policy evaluation; policy iteration therefore typically needs fewer outer iterations and converges in fewer rounds. When the state space is small, policy iteration is usually the better choice; when the state space is large, value iteration involves less computation per round.

The Island Merchant Problem

Let us return to the island merchant example from earlier and look at the policy iteration and value iteration processes in detail, together with their final results. The test code is as follows:

#include <iostream>
#include <vector>
#include <string>
#include <math.h>
using std::vector;
using std::string;
using std::cout;
using std::endl;
using std::max;
struct MDP_exception{
    int exception_id;
    string exception_info;
    MDP_exception(int id,const string& info):exception_id(id),exception_info(info){}
};
class MDP2{
    private:
        vector<vector<vector<double>>> P,R;
        double gamma;
        vector<double> V;
        vector<int> policy;
        int s;
        void policy_evaluation(const vector<int>& policy,vector<double>& state_value,double e=1e-5){
            vector<double> R2(s,0);
            vector<vector<double>> P2(s,vector<double>(s,0));
            //compute the coefficients of the linear system for the chosen deterministic policy
            for(int i=0;i<s;i++){
                for(int k=0;k<s;k++){
                    P2[i][k]=P[i][policy[i]][k];
                    R2[i]+=P[i][policy[i]][k]*R[i][policy[i]][k];
                }
            }
            //solve the value function iteratively until the largest update falls below e
            double e0=1;
            while(e0>=e){
                vector<double> v2(s,0);
                e0=0;
                for(int i=0;i<s;i++){
                    for(int j=0;j<s;j++) v2[i]+=P2[i][j]*state_value[j];
                    v2[i]*=gamma;
                    v2[i]+=R2[i];
                    e0=max(e0,abs(v2[i]-state_value[i]));
                    state_value[i]=v2[i];
                }
            }
        }
        bool policy_improvement(vector<int>& old_policy,const vector<double>& state_value){
            bool i0=false;
            for(int i=0;i<s;i++){
                int new_policy=0;
                double maxQ=0;
                for(int k=0;k<s;k++) maxQ+=P[i][0][k]*(R[i][0][k]+gamma*state_value[k]);
                for(int j=1;j<P[i].size();j++){
                    double Q=0;
                    for(int k=0;k<s;k++) Q+=P[i][j][k]*(R[i][j][k]+gamma*state_value[k]);
                    if(Q>maxQ){
                        new_policy=j;
                        maxQ=Q;
                    }
                }
                if(new_policy!=old_policy[i]) i0=true;
                old_policy[i]=new_policy;
            }
            return i0;
        }
    public:
        //Constructor
        MDP2(int nums_states=0,
        double discount_factor=0.5,
        const vector<vector<vector<double>>>& transition_probability=vector<vector<vector<double>>>(),
        const vector<vector<vector<double>>>& rewards=vector<vector<vector<double>>>()){
            /*
            Parameters:
            nums_states: number of states; the state space is represented as [0,1,...,nums_states-1]
            discount_factor: the discount factor, stored internally as gamma; it must lie strictly between 0 and 1, otherwise an exception is thrown
            transition_probability: the transition probability matrices; the constructor validates them and throws an exception if they are illegal. They are legal when:
                1. the first dimension equals the number of states, i.e. transition_probability.size()==nums_states
                2. every row of every matrix sums to 1 and every entry lies in [0,1]
                3. every row has length equal to the number of states, i.e. transition_probability[i][j].size()==nums_states
                Apart from this, the number of rows of each state's matrix is unrestricted: it is the number of actions available in that state
            rewards: the reward matrices, which must have exactly the same shape as the transition probability matrices
            */
            if(discount_factor<=0||discount_factor>=1) throw MDP_exception(1,"Discount factor <=0 || >=1");//validate the discount factor
            else if(nums_states<0) throw MDP_exception(3,"illegal states number");
            else if(transition_probability.size()!=nums_states) throw MDP_exception(5,"transition_probability.size() != nums_states");
            else if(rewards.size()!=nums_states) throw MDP_exception(6,"rewards.size()!=nums_states");
            else{
                //check the transition probability and reward matrices row by row
                for(int i=0;i<nums_states;i++){
                    //check that the numbers of rows (actions) match
                    if(transition_probability[i].size()!=rewards[i].size()) throw MDP_exception(7,"shape of transition probability != shape of rewards");
                    int actions = transition_probability[i].size();
                    if(actions<=0) throw MDP_exception(8,"actions<=0");
                    for(int j=0;j<actions;j++){
                        if(transition_probability[i][j].size()!=nums_states) throw MDP_exception(8,"transition_probability[i][j].size()!=nums_states");
                        if(rewards[i][j].size()!=nums_states) throw MDP_exception(7,"shape of transition probability != shape of rewards");
                        double sum=0;
                        for(int k=0;k<nums_states;k++){
                            if(transition_probability[i][j][k]<0||transition_probability[i][j][k]>1) throw MDP_exception(9,"transition_probability[i][j][k]<0||transition_probability[i][j][k]>1");
                            sum+=transition_probability[i][j][k];
                        }
                        if(sum-1<-10e-8||sum-1>10e-8) throw MDP_exception(10,"sum(transition_probability[i][j])!=1");//floating-point equality is unreliable, so the row-sum check uses a small tolerance
                    }
                }
                //all validity checks passed, so the MDP object is legal and we can assign the members
                s=nums_states;
                gamma=discount_factor;
                P=transition_probability;
                R=rewards;
                //create the initial policy vector
                policy=vector<int>(s,0);
                //create the initial value function vector
                V=vector<double>(s,0);
            }
        }
        void policy_iteration(double e=1e-5){//here policy is a private data member of the class
            vector<double> sv(s,0);
            cout<<"policy iteration algorithm:"<<endl;
            policy_evaluation(policy,sv,e);
            cout<<"policy:";
            for(auto d:policy){
                cout<<d<<' ';
            }
            cout<<endl<<"state value:";
            for(auto v:sv){
                cout<<v<<' ';
            }
            cout<<endl;
            while(policy_improvement(policy,sv)) {
                    cout<<"policy:";
                    for(auto d:policy){
                        cout<<d<<' ';
                    }
                    policy_evaluation(policy,sv,e);
                    cout<<endl<<"state value:";
                    for(auto v:sv){
                        cout<<v<<' ';
                    }
                    cout<<endl;
            }
        }
        void value_iteration(double e=1e-5){
            cout<<"value iteration algorithm:"<<endl;
            vector<vector<double>> R2;
            //precompute expected immediate rewards R2[i][j] = sum_k P[i][j][k]*R[i][j][k]
            for(int i=0;i<s;i++){
                R2.push_back(vector<double>(P[i].size(),0));
                for(int j=0;j<P[i].size();j++){
                    for(int k=0;k<s;k++){
                        R2[i][j]+=R[i][j][k]*P[i][j][k];
                    }
                }
            }
            //value iteration sweeps
            double e0=1;
            vector<double> value(s,0);
            while(e0>e){
                cout<<"value:";
                for(auto val:value) cout<<val<<' ';
                cout<<endl;
                e0=0;
                for(int i=0;i<s;i++){
                    double v=0;
                    for(int k=0;k<s;k++){
                        v+=P[i][0][k]*value[k];
                    }
                    v*=gamma;
                    v+=R2[i][0];
                    policy[i]=0;
                    for(int j=1;j<P[i].size();j++){
                        double q=0;
                        for(int k=0;k<s;k++){
                            q+=P[i][j][k]*value[k];
                        }
                        q*=gamma;
                        q+=R2[i][j];
                        if(q>v){
                            policy[i]=j;
                            v=q;
                        }
                    }
                    e0=max(e0,abs(v-value[i]));
                    value[i]=v;
                }
                cout<<"policy:";
                for(auto d:policy) cout<<d<<' ';
            }
        }
};
int main(){
    vector<vector<vector<double>>> P={
    {{0.2,0.3,0.5},{0.3,0.3,0.4}},
    {{0.1,0.2,0.7},{0.2,0.1,0.7}},
    {{0.2,0.4,0.4},{0.5,0.3,0.2}}
    };
    vector<vector<vector<double>>> R={
    {{0,2,3},{0,2,3}},
    {{3,0,4},{3,0,4}},
    {{5,3,0},{5,3,0}}
    };
   try{
        MDP2 process(3,0.5,P,R);
        process.policy_iteration();
        process.value_iteration();
   }catch(const MDP_exception& e){
        cout<<e.exception_id;
   }
}

With the discount factor set to 0.5, the program prints the successive policies and value functions; the key observations are:

Both policy iteration and value iteration select the optimal policy 0,1,1, but policy iteration converges to the solution of the Bellman optimality equation in only two rounds, whereas value iteration needs 11 rounds; in both cases the algorithms converge to the unique solution of the Bellman optimality equation. We can also look at the solution with a discount factor of 0.33 to check the correctness of the program.

Again policy iteration needs only two rounds to reach the optimal policy 0,1,1 while value iteration needs 9 rounds, but the final solutions coincide.
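As an extra, optional sanity check (not part of the original program; it assumes the same includes and using-declarations as the test code above, and the P, R and value vectors would have to be passed in or exposed), one could measure how well a computed value function satisfies the Bellman optimality equation:

// Hypothetical helper: returns the largest Bellman-optimality residual of a value function V.
// P and R have the same layout as in MDP2 (P[state][action][next_state]); gamma is the discount factor.
double bellman_residual(const vector<vector<vector<double>>>& P,
                        const vector<vector<vector<double>>>& R,
                        double gamma,const vector<double>& V){
    double worst=0;
    for(int i=0;i<(int)P.size();i++){
        double best=-1e100;                 // max over actions of Q(a,i)
        for(int j=0;j<(int)P[i].size();j++){
            double q=0;
            for(int k=0;k<(int)P[i][j].size();k++)
                q+=P[i][j][k]*(R[i][j][k]+gamma*V[k]);
            best=max(best,q);
        }
        worst=max(worst,abs(best-V[i]));    // |max_a Q(a,i) - V(i)|
    }
    return worst;                           // should be close to 0 (on the order of the tolerance e)
}

A residual close to zero for the value function returned by either algorithm is consistent with the uniqueness argument above.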
