
Chapter 12: The PPO Algorithm

12.1 Introduction

The TRPO algorithm introduced in the previous chapter has been applied successfully in many settings, but its update procedure is complicated and each step requires a large amount of computation. In 2017, an improved version of TRPO, the PPO algorithm, was proposed: PPO is built on the same idea as TRPO but is much simpler to implement. Moreover, extensive experiments show that PPO learns at least as well as TRPO (and often faster), which has made it one of the most popular reinforcement learning algorithms. If you want to try a reinforcement learning algorithm in a new environment, PPO is a good first choice.

The optimization objective of TRPO is:

$$\max_{\theta}\ \mathbb{E}_{s\sim\nu^{\pi_{\theta_k}}}\,\mathbb{E}_{a\sim\pi_{\theta_k}(\cdot|s)}\left[\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)}\,A^{\pi_{\theta_k}}(s,a)\right]$$

$$\text{s.t.}\quad \mathbb{E}_{s\sim\nu^{\pi_{\theta_k}}}\left[D_{KL}\bigl(\pi_{\theta_k}(\cdot|s),\,\pi_{\theta}(\cdot|s)\bigr)\right]\le\delta$$

TRPO solves this constrained problem directly, using a Taylor approximation, conjugate gradients, and a line search. PPO optimizes the same objective but with much simpler machinery. Concretely, PPO comes in two variants, PPO-Penalty and PPO-Clip, introduced below.

12.2 PPO-Penalty

PPO-Penalty uses the Lagrange multiplier idea to move the KL-divergence constraint into the objective, turning the problem into an unconstrained one, and adapts the coefficient in front of the KL term during training. That is:

$$\arg\max_{\theta}\ \mathbb{E}_{s\sim\nu^{\pi_{\theta_k}}}\,\mathbb{E}_{a\sim\pi_{\theta_k}(\cdot|s)}\left[\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)}\,A^{\pi_{\theta_k}}(s,a)-\beta\,D_{KL}\bigl[\pi_{\theta_k}(\cdot|s),\,\pi_{\theta}(\cdot|s)\bigr]\right]$$

Let $d_k = D_{KL}^{\nu^{\pi_{\theta_k}}}(\pi_{\theta_k},\pi_{\theta})$. The coefficient $\beta$ is updated as follows:

  1. If $d_k < \delta/1.5$, then $\beta_{k+1} = \beta_k/2$.
  2. If $d_k > \delta \times 1.5$, then $\beta_{k+1} = \beta_k \times 2$.
  3. Otherwise, $\beta_{k+1} = \beta_k$.

Here $\delta$ is a hyperparameter set in advance that limits how far the learned policy is allowed to move from the previous round's policy.
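
As a quick illustration (a minimal sketch, not taken from the original text; the function name and arguments are made up for this example), the adaptive penalty coefficient can be updated in a few lines:

def update_beta(beta, d_k, delta):
    # Adapt the KL penalty coefficient based on the measured KL divergence d_k
    if d_k < delta / 1.5:    # policies are closer than required: relax the penalty
        return beta / 2
    elif d_k > delta * 1.5:  # policies drifted too far apart: strengthen the penalty
        return beta * 2
    return beta              # otherwise keep beta unchanged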

12.3 PPO-Clip

The other variant, PPO-Clip, is even more direct: it builds the restriction into the objective itself so that the new parameters cannot move too far from the old ones:

$$\arg\max_{\theta}\ \mathbb{E}_{s\sim\nu^{\pi_{\theta_k}}}\,\mathbb{E}_{a\sim\pi_{\theta_k}(\cdot|s)}\left[\min\!\left(\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)}\,A^{\pi_{\theta_k}}(s,a),\ \operatorname{clip}\!\left(\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)},\,1-\epsilon,\,1+\epsilon\right)A^{\pi_{\theta_k}}(s,a)\right)\right]$$

Here $\operatorname{clip}(x,l,r) := \max(\min(x,r),l)$, i.e., $x$ is restricted to the interval $[l,r]$. The hyperparameter $\epsilon$ controls the clipping range.

If $A^{\pi_{\theta_k}}(s,a) > 0$, the action is better than average, and maximizing the objective increases the ratio $\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)}$, but never beyond $1+\epsilon$. Conversely, if $A^{\pi_{\theta_k}}(s,a) < 0$, maximizing the objective decreases the ratio, but never below $1-\epsilon$. This is illustrated in Figure 12-1:

[Figure 12-1: illustration of the PPO-Clip objective]
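
To see the effect of the min and clip operations numerically, here is a small sketch (not part of the original code; the ratio values are arbitrary):

import torch

eps = 0.2
ratio = torch.tensor([0.5, 0.9, 1.0, 1.1, 1.5])  # pi_theta / pi_theta_k
for A in (1.0, -1.0):  # positive and negative advantage
    surr1 = ratio * A
    surr2 = torch.clamp(ratio, 1 - eps, 1 + eps) * A
    print(A, torch.min(surr1, surr2))
# For A > 0 the objective saturates at (1 + eps) * A once ratio > 1 + eps;
# for A < 0 it stays at (1 - eps) * A once ratio < 1 - eps,
# so there is no incentive to push the ratio outside [1 - eps, 1 + eps].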

12.4 PPO Code in Practice

As with TRPO, we test PPO on the CartPole and Pendulum environments. Extensive experiments show that PPO-Clip consistently performs better than PPO-Penalty, so below we focus on the PPO-Clip implementation.

12.4.1 CartPole Environment

import gym
import torch
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
import rl_utils


class PolicyNet(torch.nn.Module):
    def __init__(self, state_dim, hidden_dim, action_dim):
        super(PolicyNet, self).__init__()
        self.fc1 = torch.nn.Linear(state_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, action_dim)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)


class ValueNet(torch.nn.Module):
    def __init__(self, state_dim, hidden_dim):
        super(ValueNet, self).__init__()
        self.fc1 = torch.nn.Linear(state_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return self.fc2(x)


class PPO:
    ''' PPO algorithm with the clipped (PPO-Clip) objective '''
    def __init__(self, state_dim, hidden_dim, action_dim, actor_lr, critic_lr, lmbda, epochs, eps, gamma, device):
        self.actor = PolicyNet(state_dim, hidden_dim, action_dim).to(device)
        self.critic = ValueNet(state_dim, hidden_dim).to(device)
        self.actor_optimizer = torch.optim.Adam(self.actor.parameters(), lr=actor_lr)
        self.critic_optimizer = torch.optim.Adam(self.critic.parameters(), lr=critic_lr)
        self.gamma = gamma
        self.lmbda = lmbda
        self.epochs = epochs  # number of training epochs run on each batch of trajectory data
        self.eps = eps  # clipping range parameter (epsilon) in PPO-Clip
        self.device = device

    def take_action(self, state):  # action selection is the same as in TRPO and vanilla policy gradient
        state = torch.tensor([state], dtype=torch.float).to(self.device)
        probs = self.actor(state)
        action_dist = torch.distributions.Categorical(probs)
        action = action_dist.sample()
        return action.item()

    def update(self, transition_dict):
        states = torch.tensor(transition_dict['states'], dtype=torch.float).to(self.device)
        actions = torch.tensor(transition_dict['actions']).view(-1, 1).to(self.device)
        rewards = torch.tensor(transition_dict['rewards'], dtype=torch.float).view(-1, 1).to(self.device)
        next_states = torch.tensor(transition_dict['next_states'], dtype=torch.float).to(self.device)
        dones = torch.tensor(transition_dict['dones'], dtype=torch.float).view(-1, 1).to(self.device)
        td_target = rewards + self.gamma * self.critic(next_states) * (1 - dones)
        td_delta = td_target - self.critic(states)
        advantage = rl_utils.compute_advantage(self.gamma, self.lmbda, td_delta.cpu()).to(self.device)
        old_log_probs = torch.log(self.actor(states).gather(1, actions)).detach()

        for _ in range(self.epochs):  # several policy and value update epochs on the same batch
            log_probs = torch.log(self.actor(states).gather(1, actions))
            ratio = torch.exp(log_probs - old_log_probs)
            surr1 = ratio * advantage  # unclipped surrogate objective
            surr2 = torch.clamp(ratio, 1 - self.eps, 1 + self.eps) * advantage  # clipped surrogate
            actor_loss = torch.mean(-torch.min(surr1, surr2))  # PPO-Clip loss
            critic_loss = torch.mean(F.mse_loss(self.critic(states), td_target.detach()))
            self.actor_optimizer.zero_grad()
            self.critic_optimizer.zero_grad()
            actor_loss.backward()
            critic_loss.backward()
            self.actor_optimizer.step()
            self.critic_optimizer.step()


actor_lr = 1e-3
critic_lr = 1e-2
num_episodes = 500
hidden_dim = 128
gamma = 0.98
lmbda = 0.95
epochs = 10
eps = 0.2
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

env_name = 'CartPole-v0'
env = gym.make(env_name)
env.seed(0)
torch.manual_seed(0)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.n
agent = PPO(state_dim, hidden_dim, action_dim, actor_lr, critic_lr, lmbda, epochs, eps, gamma, device)

return_list = rl_utils.train_on_policy_agent(env, agent, num_episodes)

episodes_list = list(range(len(return_list)))
plt.plot(episodes_list, return_list)
plt.xlabel('Episodes')
plt.ylabel('Returns')
plt.title('PPO on {}'.format(env_name))
plt.show()

mv_return = rl_utils.moving_average(return_list, 9)
plt.plot(episodes_list, mv_return)
plt.xlabel('Episodes')
plt.ylabel('Returns')
plt.title('PPO on {}'.format(env_name))
plt.show()
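
The code above uses rl_utils.compute_advantage from the book's utility module. If that module is not at hand, a minimal GAE (generalized advantage estimation) helper with the same interface might look like the sketch below (an assumption about the helper's behavior, not the exact library code):

import torch

def compute_advantage(gamma, lmbda, td_delta):
    # Accumulate TD errors backwards in time: A_t = delta_t + gamma * lmbda * A_{t+1}
    td_delta = td_delta.detach().numpy()
    advantage_list = []
    advantage = 0.0
    for delta in td_delta[::-1]:
        advantage = gamma * lmbda * advantage + delta
        advantage_list.append(advantage)
    advantage_list.reverse()
    return torch.tensor(advantage_list, dtype=torch.float)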

Personal take (discussion welcome if anything here is off):

The biggest difference between PPO and TRPO is how they search for the improved policy: PPO uses the clipping trick, which is simpler and easier to implement, whereas TRPO relies on conjugate gradients and a line search, which is considerably more complex.

Both PPO and TRPO build on the Actor-Critic framework. TRPO finds a trust region through constrained optimization, while PPO replaces that trust-region search with the clipped objective. In essence, both still use a value function to guide the policy toward choosing the optimal actions.

Iteration 0:   0%|          | 0/50 [00:00<?, ?it/s]/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:43: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at  ../torch/csrc/utils/tensor_new.cpp:201.)
Iteration 0: 100%|██████████| 50/50 [00:02<00:00, 19.41it/s, episode=50, return=183.200]
Iteration 1: 100%|██████████| 50/50 [00:03<00:00, 13.49it/s, episode=100, return=184.900]
Iteration 2: 100%|██████████| 50/50 [00:03<00:00, 12.64it/s, episode=150, return=200.000]
Iteration 3: 100%|██████████| 50/50 [00:03<00:00, 12.64it/s, episode=200, return=200.000]
Iteration 4: 100%|██████████| 50/50 [00:03<00:00, 12.81it/s, episode=250, return=200.000]
Iteration 5: 100%|██████████| 50/50 [00:03<00:00, 12.63it/s, episode=300, return=200.000]
Iteration 6: 100%|██████████| 50/50 [00:03<00:00, 12.83it/s, episode=350, return=200.000]
Iteration 7: 100%|██████████| 50/50 [00:03<00:00, 12.58it/s, episode=400, return=200.000]
Iteration 8: 100%|██████████| 50/50 [00:03<00:00, 12.78it/s, episode=450, return=200.000]
Iteration 9: 100%|██████████| 50/50 [00:03<00:00, 12.59it/s, episode=500, return=187.200]

[Figure: episode returns of PPO on CartPole-v0]
[Figure: moving average of the returns]

12.4.2 Pendulum Environment

The Pendulum environment has a continuous action space. The main difference from CartPole lies in how actions are sampled: in CartPole the policy outputs a categorical distribution over the two discrete actions (effectively a Bernoulli distribution), whereas in Pendulum the action is continuous, so the policy outputs a Gaussian distribution.
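
As a quick standalone illustration of the torch.distributions.Normal API that the Gaussian policy below relies on (the numbers here are arbitrary):

import torch

mu = torch.tensor([0.5])   # mean predicted by the policy network
std = torch.tensor([0.1])  # standard deviation predicted by the policy network
dist = torch.distributions.Normal(mu, std)
action = dist.sample()            # sample a continuous action
log_prob = dist.log_prob(action)  # log-density, used to form the PPO ratio
print(action.item(), log_prob.item())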

import gym
import torch
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
import rl_utils


class PolicyNetContinuous(torch.nn.Module):
    def __init__(self, state_dim, hidden_dim, action_dim):
        super(PolicyNetContinuous, self).__init__()
        self.fc1 = torch.nn.Linear(state_dim, hidden_dim)
        self.fc_mu = torch.nn.Linear(hidden_dim, action_dim)
        self.fc_std = torch.nn.Linear(hidden_dim, action_dim)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        mu = 2.0 * torch.tanh(self.fc_mu(x))
        std = F.softplus(self.fc_std(x))
        return mu, std


class ValueNet(torch.nn.Module):
    def __init__(self, state_dim, hidden_dim):
        super(ValueNet, self).__init__()
        self.fc1 = torch.nn.Linear(state_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return self.fc2(x)

class PPOContinuous:
    ''' PPO algorithm for continuous action spaces '''
    def __init__(self, state_dim, hidden_dim, action_dim, actor_lr, critic_lr,
                 lmbda, epochs, eps, gamma, device):
        self.actor = PolicyNetContinuous(state_dim, hidden_dim,
                                         action_dim).to(device)
        self.critic = ValueNet(state_dim, hidden_dim).to(device)
        self.actor_optimizer = torch.optim.Adam(self.actor.parameters(),
                                                lr=actor_lr)
        self.critic_optimizer = torch.optim.Adam(self.critic.parameters(),
                                                 lr=critic_lr)
        self.gamma = gamma
        self.lmbda = lmbda
        self.epochs = epochs
        self.eps = eps
        self.device = device

    def take_action(self, state):
        state = torch.tensor([state], dtype=torch.float).to(self.device)
        mu, sigma = self.actor(state)
        action_dist = torch.distributions.Normal(mu, sigma)
        action = action_dist.sample()
        return [action.item()]

    def update(self, transition_dict):
        states = torch.tensor(transition_dict['states'],
                              dtype=torch.float).to(self.device)
        actions = torch.tensor(transition_dict['actions'],
                               dtype=torch.float).view(-1, 1).to(self.device)
        rewards = torch.tensor(transition_dict['rewards'],
                               dtype=torch.float).view(-1, 1).to(self.device)
        next_states = torch.tensor(transition_dict['next_states'],
                                   dtype=torch.float).to(self.device)
        dones = torch.tensor(transition_dict['dones'],
                             dtype=torch.float).view(-1, 1).to(self.device)
        rewards = (rewards + 8.0) / 8.0  # rescale rewards, as in the TRPO chapter, to ease training
        td_target = rewards + self.gamma * self.critic(next_states) * (1 -
                                                                       dones)
        td_delta = td_target - self.critic(states)
        advantage = rl_utils.compute_advantage(self.gamma, self.lmbda,
                                               td_delta.cpu()).to(self.device)
        mu, std = self.actor(states)
        action_dists = torch.distributions.Normal(mu.detach(), std.detach())
        # the old policy's action distribution is a Gaussian (parameters are detached)
        old_log_probs = action_dists.log_prob(actions)

        for _ in range(self.epochs):
            mu, std = self.actor(states)
            action_dists = torch.distributions.Normal(mu, std)
            log_probs = action_dists.log_prob(actions)
            ratio = torch.exp(log_probs - old_log_probs)
            surr1 = ratio * advantage
            surr2 = torch.clamp(ratio, 1 - self.eps, 1 + self.eps) * advantage
            actor_loss = torch.mean(-torch.min(surr1, surr2))
            critic_loss = torch.mean(
                F.mse_loss(self.critic(states), td_target.detach()))
            self.actor_optimizer.zero_grad()
            self.critic_optimizer.zero_grad()
            actor_loss.backward()
            critic_loss.backward()
            self.actor_optimizer.step()
            self.critic_optimizer.step()

actor_lr = 1e-4
critic_lr = 5e-3
num_episodes = 2000
hidden_dim = 128
gamma = 0.9
lmbda = 0.9
epochs = 10
eps = 0.2
device = torch.device("cuda") if torch.cuda.is_available() else torch.device(
    "cpu")

env_name = 'Pendulum-v0'
env = gym.make(env_name)
env.seed(0)
torch.manual_seed(0)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]  # dimension of the continuous action space
agent = PPOContinuous(state_dim, hidden_dim, action_dim, actor_lr, critic_lr,
                      lmbda, epochs, eps, gamma, device)

return_list = rl_utils.train_on_policy_agent(env, agent, num_episodes)

episodes_list = list(range(len(return_list)))
plt.plot(episodes_list, return_list)
plt.xlabel('Episodes')
plt.ylabel('Returns')
plt.title('PPO on {}'.format(env_name))
plt.show()

mv_return = rl_utils.moving_average(return_list, 21)
plt.plot(episodes_list, mv_return)
plt.xlabel('Episodes')
plt.ylabel('Returns')
plt.title('PPO on {}'.format(env_name))
plt.show()
Iteration 0: 100%|██████████| 200/200 [00:22<00:00,  9.02it/s, episode=200, return=-1000.354]
Iteration 1: 100%|██████████| 200/200 [00:22<00:00,  8.78it/s, episode=400, return=-922.780]
Iteration 2: 100%|██████████| 200/200 [00:20<00:00,  9.63it/s, episode=600, return=-483.957]
Iteration 3: 100%|██████████| 200/200 [00:20<00:00,  9.80it/s, episode=800, return=-472.933]
Iteration 4: 100%|██████████| 200/200 [00:20<00:00,  9.54it/s, episode=1000, return=-327.589]
Iteration 5: 100%|██████████| 200/200 [00:20<00:00,  9.63it/s, episode=1200, return=-426.262]
Iteration 6: 100%|██████████| 200/200 [00:20<00:00,  9.73it/s, episode=1400, return=-224.806]
Iteration 7: 100%|██████████| 200/200 [00:21<00:00,  9.49it/s, episode=1600, return=-279.722]
Iteration 8: 100%|██████████| 200/200 [00:20<00:00,  9.62it/s, episode=1800, return=-428.538]
Iteration 9: 100%|██████████| 200/200 [00:20<00:00,  9.81it/s, episode=2000, return=-235.771]

[Figure: episode returns of PPO on Pendulum-v0]
[Figure: moving average of the returns]

PPO is an improved version of TRPO: it simplifies TRPO's complicated computations, and in experiments it usually performs at least as well as, and often better than, TRPO, so it is now commonly used as a baseline algorithm. Note that both TRPO and PPO are on-policy algorithms: even though the objective involves importance sampling, they only use data collected by the previous round's policy, not data from all past policies.

Python

torch.clamp()

Function signature

torch.clamp(input, min=None, max=None, *, out=None)

The word "clamp" means to fasten or pinch, so this function clamps the elements of input to the range [min, max]: values between min and max are returned unchanged, values smaller than min become min, and values larger than max become max.

import torch
a = torch.randn(4)
b = torch.clamp(a, min=-0.5, max=0.5)
print(a, '\n', b)

The output is:

tensor([-1.7120,  0.1734, -0.0478, -0.0922])
tensor([-0.5000,  0.1734, -0.0478, -0.0922])
