Deep Reinforcement Learning (6): Actor-Critic & DDPG Algorithms

6.1 The Actor-Critic Algorithm

Basic concepts

Deep Q-Network (DQN) is a value-based method: it learns only a value function. REINFORCE is a policy-based method: it learns a policy function that directly outputs a probability distribution over actions. The Actor-Critic algorithm combines DQN and REINFORCE: it learns both a value function and a policy function.

In the Actor-Critic algorithm, the Actor is the policy-gradient part, which can select suitable actions even in continuous action spaces; the Critic is the value-learning part (Q-learning, as in DQN), which estimates the expected return and allows single-step updates.

In the policy gradient, we introduce a baseline $b$ and replace $Q_{\pi}(s,a)$ with $Q_{\pi}(s,a)-b$. The baseline shifts $Q_{\pi}$ as a whole: the relative sizes are unchanged, so the expected gradient is unaffected, while the variance is reduced (which speeds up convergence).
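The baseline leaves the gradient unbiased as long as $b$ does not depend on the action; a one-line check (notation as above, written for a discrete action space):
$$\Bbb E_{\pi_{\theta}}\left[b\,\nabla_{\theta}\log\pi_{\theta}(s,a)\right]=b\sum_a\pi_{\theta}(a|s)\frac{\nabla_{\theta}\pi_{\theta}(a|s)}{\pi_{\theta}(a|s)}=b\,\nabla_{\theta}\sum_a\pi_{\theta}(a|s)=b\,\nabla_{\theta}1=0$$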
With the baseline, the policy gradient becomes
$$\nabla_{\theta}J(\theta)=\Bbb E_{\pi_{\theta}}\left[\left(\sum^T_{t'=t}\gamma^{t'-t}r_{t'}-b\right)\nabla_{\theta}\log\pi_{\theta}(s,a)\right]$$
where $\sum^T_{t'=t}\gamma^{t'-t}r_{t'}$ is exactly $Q^{\pi_{\theta}}(s_t,a_t)$ and $b$ is the baseline.

In Actor-Critic, the state-value function $V^{\pi_{\theta}}(s_t)$ is used as the baseline $b$, so the AC policy gradient can be written as:
$$\nabla_{\theta}J(\theta)=\Bbb E_{\pi_{\theta}}\left[\left(Q^{\pi_{\theta}}(s_t,a_t)-V^{\pi_{\theta}}(s_t)\right)\nabla_{\theta}\log\pi_{\theta}(s,a)\right]$$
Here $Q$ and $V$ are value-estimation networks, and $\pi_{\theta}(s,a)$ is the policy network.
In this formula there are now three networks to learn, which increases the risk of inaccurate estimates. The Bellman equation provides a direct link between $Q$ and $V$:
$$Q^{\pi_{\theta}}(s_t,a_t)=r_t+\gamma V^{\pi_{\theta}}(s_{t+1})$$
so the AC policy gradient can be rewritten as:
$$\nabla_{\theta}J(\theta)=\Bbb E_{\pi_{\theta}}\left[\left(r_t+\gamma V^{\pi_{\theta}}(s_{t+1})-V^{\pi_{\theta}}(s_t)\right)\nabla_{\theta}\log\pi_{\theta}(s,a)\right]$$
This saves one network. The quantity $A^{\pi_{\theta}}(s_t,a_t)=r_t+\gamma V^{\pi_{\theta}}(s_{t+1})-V^{\pi_{\theta}}(s_t)$ is the advantage function, and the resulting method is the Advantage Actor-Critic (A2C) algorithm.

The Actor is updated with the policy-gradient rule above. Writing the Critic value network as $V_{\omega}$ with parameters $\omega$, it can be trained with the temporal-difference (TD) residual. For a single transition, define the value-function loss:
$$\mathcal L(\omega)=\frac 12\left(r+\gamma V_{\omega}(s_{t+1})-V_{\omega}(s_t)\right)^2$$
As with the target network in DQN, the TD target $r+\gamma V_{\omega}(s_{t+1})$ is treated as a constant and does not contribute gradients to the value function. The gradient of the value loss is therefore:
$$\nabla_{\omega}\mathcal L(\omega)=-\left(r+\gamma V_{\omega}(s_{t+1})-V_{\omega}(s_t)\right)\nabla_{\omega}V_{\omega}(s_t)$$
The Critic value-network parameters are then updated by gradient descent.
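
A minimal PyTorch sketch of this semi-gradient update for a single transition (the names value_net, optimizer, gamma and the tensors s, r, s_next are illustrative, not taken from the full code below):

# one transition (s, r, s_next); value_net is a state-value network like ValueNet below
td_target = r + gamma * value_net(s_next)        # TD target r + gamma * V(s')
td_error = td_target.detach() - value_net(s)     # detach(): no gradient flows through the target
loss = 0.5 * td_error.pow(2).mean()
optimizer.zero_grad()
loss.backward()                                  # gradient is -(td_error) * grad V(s), as in the formula above
optimizer.step()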

Actor-Critic algorithm:

  • Initialize the policy network parameters $\theta$ and the value network parameters $\omega$
  • for each episode $e=1\to E$ do
    • Sample a trajectory $\{s_1,a_1,r_1,s_2,a_2,r_2,\cdots\}$ with the current policy $\pi_{\theta}$
    • For every step compute $\delta_t=r_t+\gamma V_{\omega}(s_{t+1})-V_{\omega}(s_t)$
    • Update the value parameters: $\omega=\omega+\alpha_{\omega}\sum_t\delta_t\nabla_{\omega}V_{\omega}(s_t)$
    • Update the policy parameters: $\theta=\theta+\alpha_{\theta}\sum_t\delta_t\nabla_{\theta}\log\pi_{\theta}(a_t|s_t)$
Code implementation
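
The snippets below assume the usual imports plus the rl_utils helper module from the Hands-on RL codebase (which provides the training loops, replay buffer and moving_average); roughly:

import random
import gym
import numpy as np
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
import rl_utils                              # Hands-on RL helpers (assumed available on the path)
from rl_utils import train_on_policy_agent   # used without the module prefix below (assumption about how it was imported)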

Define the policy network: its input is a state and its output is the probability distribution over actions in that state.

class PolicyNet(torch.nn.Module):
    def __init__(self, state_dim, hidden_dim, action_dim):
        super(PolicyNet, self).__init__()
        self.fc1 = torch.nn.Linear(state_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, action_dim)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

Define the value network: its input is a state and its output is the value of that state.

class ValueNet(torch.nn.Module):
    def __init__(self, state_dim, hidden_dim):
        super(ValueNet, self).__init__()
        self.fc1 = torch.nn.Linear(state_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return self.fc2(x)

Define the ActorCritic agent, which mainly contains two methods: take_action() for selecting an action and update() for updating the network parameters.

class ActorCritic:
    def __init__(self, state_dim, hidden_dim, action_dim, actor_lr, critic_lr, gamma, device):
        # policy network
        self.actor = PolicyNet(state_dim, hidden_dim, action_dim).to(device)
        self.critic = ValueNet(state_dim, hidden_dim).to(device)  # value network
        # optimizer for the policy network
        self.actor_optimizer = torch.optim.Adam(self.actor.parameters(), lr=actor_lr)
        self.critic_optimizer = torch.optim.Adam(self.critic.parameters(), lr=critic_lr)  # optimizer for the value network
        self.gamma = gamma
        self.device = device

    def take_action(self, state):
        state = torch.tensor([state], dtype=torch.float).to(self.device)
        probs = self.actor(state)
        action_dist = torch.distributions.Categorical(probs)
        action = action_dist.sample()
        return action.item()

    def update(self, transition_dict):
        states = torch.tensor(transition_dict['states'], dtype=torch.float).to(self.device)
        actions = torch.tensor(transition_dict['actions']).view(-1, 1).to(self.device)
        rewards = torch.tensor(transition_dict['rewards'], dtype=torch.float).view(-1, 1).to(self.device)
        next_states = torch.tensor(transition_dict['next_states'], dtype=torch.float).to(self.device)
        dones = torch.tensor(transition_dict['dones'], dtype=torch.float).view(-1, 1).to(self.device)
        # TD target
        td_target = rewards + self.gamma * self.critic(next_states) * (1 - dones)
        td_delta = td_target - self.critic(states)  # TD error
        log_probs = torch.log(self.actor(states).gather(1, actions))
        actor_loss = torch.mean(-log_probs * td_delta.detach())
        # mean-squared-error loss for the value network
        critic_loss = torch.mean(F.mse_loss(self.critic(states), td_target.detach()))
        self.actor_optimizer.zero_grad()
        self.critic_optimizer.zero_grad()
        actor_loss.backward()  # compute gradients for the policy network
        critic_loss.backward()  # compute gradients for the value network
        self.actor_optimizer.step()  # update the policy network parameters
        self.critic_optimizer.step()  # update the value network parameters

Training

actor_lr = 1e-3
critic_lr = 1e-2
num_episodes = 1000
hidden_dim = 128
gamma = 0.98
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

env_name = 'CartPole-v0'
env = gym.make(env_name)
env.seed(0)
torch.manual_seed(0)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.n
agent = ActorCritic(state_dim, hidden_dim, action_dim, actor_lr, critic_lr, gamma, device)

return_list = train_on_policy_agent(env, agent, num_episodes)

episodes_list = list(range(len(return_list)))
plt.plot(episodes_list, return_list)
plt.xlabel('Episodes')
plt.ylabel('Returns')
plt.title('Actor-Critic on {}'.format(env_name))
plt.show()

def moving_average(a, window_size):
    # Sliding-window mean with the same output length as the input:
    # the middle uses the full window, the two ends use smaller (odd-sized) windows.
    cumulative_sum = np.cumsum(np.insert(a, 0, 0))
    middle = (cumulative_sum[window_size:] - cumulative_sum[:-window_size]) / window_size
    r = np.arange(1, window_size - 1, 2)
    begin = np.cumsum(a[:window_size - 1])[::2] / r
    end = (np.cumsum(a[:-window_size:-1])[::2] / r)[::-1]
    return np.concatenate((begin, middle, end))

mv_return = moving_average(return_list, 9)
plt.plot(episodes_list, mv_return)
plt.xlabel('Episodes')
plt.ylabel('Returns')
plt.title('Actor-Critic on {}'.format(env_name))
plt.show()
Iteration 0: 100%|██████████| 100/100 [00:03<00:00, 26.18it/s, episode=100, return=20.200]
Iteration 1: 100%|██████████| 100/100 [00:02<00:00, 43.91it/s, episode=200, return=39.100]
Iteration 2: 100%|██████████| 100/100 [00:05<00:00, 17.48it/s, episode=300, return=126.100]
Iteration 3: 100%|██████████| 100/100 [00:10<00:00,  9.40it/s, episode=400, return=195.800]
Iteration 4: 100%|██████████| 100/100 [00:14<00:00,  7.01it/s, episode=500, return=198.000]
Iteration 5: 100%|██████████| 100/100 [00:14<00:00,  6.87it/s, episode=600, return=195.800]
Iteration 6: 100%|██████████| 100/100 [00:13<00:00,  7.16it/s, episode=700, return=183.100]
Iteration 7: 100%|██████████| 100/100 [00:14<00:00,  6.71it/s, episode=800, return=189.700]
Iteration 8: 100%|██████████| 100/100 [00:14<00:00,  7.14it/s, episode=900, return=200.000]
Iteration 9: 100%|██████████| 100/100 [00:14<00:00,  7.05it/s, episode=1000, return=200.000]
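
The helper train_on_policy_agent comes from the rl_utils module of the Hands-on RL codebase and is not listed above. A minimal sketch of such an on-policy training loop (progress-bar bookkeeping omitted) looks roughly like this:

def train_on_policy_agent(env, agent, num_episodes):
    return_list = []
    for i_episode in range(num_episodes):
        # on-policy: collect a fresh trajectory with the current policy, then update once
        transition_dict = {'states': [], 'actions': [], 'next_states': [], 'rewards': [], 'dones': []}
        state = env.reset()
        done = False
        episode_return = 0
        while not done:
            action = agent.take_action(state)
            next_state, reward, done, _ = env.step(action)
            transition_dict['states'].append(state)
            transition_dict['actions'].append(action)
            transition_dict['next_states'].append(next_state)
            transition_dict['rewards'].append(reward)
            transition_dict['dones'].append(done)
            state = next_state
            episode_return += reward
        return_list.append(episode_return)
        agent.update(transition_dict)
    return return_list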


6.2 The DDPG Algorithm

Among the methods covered so far, REINFORCE and Actor-Critic are on-policy algorithms and therefore use samples inefficiently. DQN can learn off-policy, but it only handles finite (discrete) action spaces. Deep Deterministic Policy Gradient (DDPG) combines Actor-Critic with the experience replay (off-policy learning) and target networks of DQN, and learns a deterministic policy.


Experience replay

It consists of two key steps, storing and replaying:

  • Store: the agent puts each experience tuple $(s,a,r,s',\text{done})$ into the ReplayBuffer
  • Replay: one or more experience tuples are sampled from the buffer according to some rule and used to update the network parameters
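
The rl_utils.ReplayBuffer used in the training code below is not defined in this post; a minimal deque-based sketch, with the same field order as the tuples above, could look like this:

import collections
import random
import numpy as np

class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = collections.deque(maxlen=capacity)  # oldest experiences are discarded automatically

    def add(self, state, action, reward, next_state, done):  # store
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):  # replay: uniform random sampling
        transitions = random.sample(self.buffer, batch_size)
        state, action, reward, next_state, done = zip(*transitions)
        return np.array(state), action, reward, np.array(next_state), done

    def size(self):
        return len(self.buffer)
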
Target networks

There are four neural networks in total: Actor, Critic, Target Actor, and Target Critic.

  • The Critic is updated by minimizing the error between its estimate and the target value:
    $$\mathcal L_c=(y-q)^2$$
    where $y$ is the target value from the Target Critic and $q$ is the Critic's estimate:
    $$y=r+\gamma(1-\text{done})\,Q'(s',a'|\theta^{Q'}),\qquad q=Q(s,a|\theta^Q)$$
    In the expression for $y$, $a'$ is the action from the Target Actor, while $a$ in $q$ is the action from the Actor:
    $$a'=\mu'(s'|\theta^{\mu'}),\qquad a=\mu(s|\theta^{\mu})$$

  • The Actor is updated by maximizing the expected return, i.e. the Q value $Q(s,a|\theta^Q)$, where $a$ is the Actor's action, $a=\mu(s|\theta^{\mu})$.

  • The target networks (Target Actor and Target Critic) are updated softly: define a hyperparameter $\tau\in(0,1)$, typically small, e.g. 0.005. The Target Actor and Target Critic updates are:
    $$\theta^{\mu'}=\tau\theta^{\mu}+(1-\tau)\theta^{\mu'},\qquad \theta^{Q'}=\tau\theta^{Q}+(1-\tau)\theta^{Q'}$$

Exploration noise

A deterministic policy outputs a deterministic action and therefore explores the environment poorly. Adding noise to the Actor's output gives the agent the ability to explore.

  • Ornstein-Uhlenbeck (OU) noise (a small implementation sketch follows this list):
    $$dN_t=\theta(\mu-N_t)\,dt+\sigma\,dB_t$$
    For $t\neq s$ we always have $|t-s|<t+s$, so the covariance $\mathrm{Cov}(N_t,N_s)$ is always positive. OU noise therefore makes successive perturbations positively correlated, so consecutive actions drift in similar directions.

  • Normally distributed (Gaussian) noise

    Dropping the more elaborate OU noise and simply using normally distributed noise also works well in practice.
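
The code below uses only Gaussian noise; for reference, a minimal discrete-time OU process (an Euler discretization of the SDE above; the parameter values are illustrative defaults, not taken from this post) could be written as:

class OUNoise:
    # Discretized Ornstein-Uhlenbeck process: N <- N + theta * (mu - N) * dt + sigma * sqrt(dt) * eps
    def __init__(self, action_dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.n = np.full(action_dim, mu)

    def reset(self):
        self.n = np.full_like(self.n, self.mu)

    def sample(self):
        eps = np.random.randn(*self.n.shape)
        self.n = self.n + self.theta * (self.mu - self.n) * self.dt + self.sigma * np.sqrt(self.dt) * eps
        return self.n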

Code implementation

Algorithm

  • With exploration noise $\mathcal N$, initialize the Critic network $Q_{\omega}(s,a)$ and the Actor network $\mu_{\theta}(s)$ with random parameters $\omega$ and $\theta$

  • Copy the parameters $\omega^-\leftarrow\omega$ and $\theta^-\leftarrow\theta$ to initialize the target networks $Q_{\omega^-}$ and $\mu_{\theta^-}$

  • Initialize the replay buffer $R$

  • for $t=1\to T$ do

    • Select an action according to the current policy plus noise: $a_t=\mu_{\theta}(s_t)+\mathcal N$

    • Execute $a_t$, obtain the reward $r_t$, and let the environment move to state $s_{t+1}$

    • Store $(s_t,a_t,r_t,s_{t+1})$ in the replay buffer $R$

    • Sample $N$ tuples $\{(s_i,a_i,r_i,s_{i+1})\}_{i=1,\cdots,N}$ from $R$

    • For each tuple, compute the target with the target networks: $y_i=r_i+\gamma Q_{\omega^-}(s_{i+1},\mu_{\theta^-}(s_{i+1}))$

    • Minimize the loss $L=\frac 1N\sum^N_{i=1}(y_i-Q_{\omega}(s_i,a_i))^2$ to update the current Critic network

    • Compute the sampled policy gradient $\nabla_{\theta}J\approx\frac 1N\sum^N_{i=1}\nabla_{\theta}\mu_{\theta}(s_i)\,\nabla_aQ_{\omega}(s_i,a)\big|_{a=\mu_{\theta}(s_i)}$ to update the current Actor network

    • Update the target networks:
      $$\omega^-\leftarrow\tau\omega+(1-\tau)\omega^-,\qquad \theta^-\leftarrow\tau\theta+(1-\tau)\theta^-$$

Actor network

class PolicyNet(torch.nn.Module):
    def __init__(self, state_dim, hidden_dim, action_dim, action_bound):
        super(PolicyNet, self).__init__()
        self.fc1 = torch.nn.Linear(state_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, action_dim)
        self.action_bound = action_bound  # action_bound is the maximum action value the environment accepts

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return torch.tanh(self.fc2(x)) * self.action_bound

Critic network

class QValueNet(torch.nn.Module):
    def __init__(self, state_dim, hidden_dim, action_dim):
        super(QValueNet, self).__init__()
        self.fc1 = torch.nn.Linear(state_dim + action_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, hidden_dim)
        self.fc_out = torch.nn.Linear(hidden_dim, 1)

    def forward(self, x, a):
        cat = torch.cat([x, a], dim=1)  # concatenate the state and the action
        x = F.relu(self.fc1(cat))
        x = F.relu(self.fc2(x))
        return self.fc_out(x)

The DDPG algorithm

class DDPG:
    def __init__(self, state_dim, hidden_dim, action_dim, action_bound, sigma,
                 actor_lr, critic_lr, tau, gamma, device):
        self.actor = PolicyNet(state_dim, hidden_dim, action_dim, action_bound).to(device)
        self.critic = QValueNet(state_dim, hidden_dim, action_dim).to(device)
        self.target_actor = PolicyNet(state_dim, hidden_dim, action_dim, action_bound).to(device)
        self.target_critic = QValueNet(state_dim, hidden_dim, action_dim).to(device)
        # initialize the target value network and the target policy network with the same parameters as the value and policy networks
        self.target_critic.load_state_dict(self.critic.state_dict())
        self.target_actor.load_state_dict(self.actor.state_dict())
        self.actor_optimizer = torch.optim.Adam(self.actor.parameters(), actor_lr)
        self.critic_optimizer = torch.optim.Adam(self.critic.parameters(), critic_lr)
        self.gamma = gamma
        self.sigma = sigma  # standard deviation of the Gaussian noise; the mean is set to 0
        self.tau = tau  # soft-update coefficient for the target networks
        self.action_dim = action_dim
        self.device = device

    def take_action(self, state):
        state = torch.tensor([state], dtype=torch.float).to(self.device)
        action = self.actor(state).item()
        # add exploration noise
        action = action + self.sigma * np.random.randn(self.action_dim)
        return action

    def soft_update(self, net, target_net):  # soft update
        for param_target, param in zip(target_net.parameters(), net.parameters()):
            param_target.data.copy_(param_target.data * (1.0 - self.tau) + param.data * self.tau)

    def update(self, transition_dict):
        states = torch.tensor(transition_dict['states'], dtype=torch.float).to(self.device)
        actions = torch.tensor(transition_dict['actions'], dtype=torch.float).view(-1, 1).to(self.device)
        rewards = torch.tensor(transition_dict['rewards'], dtype=torch.float).view(-1, 1).to(self.device)
        next_states = torch.tensor(transition_dict['next_states'], dtype=torch.float).to(self.device)
        dones = torch.tensor(transition_dict['dones'], dtype=torch.float).view(-1, 1).to(self.device)

        next_q_value = self.target_critic(next_states, self.target_actor(next_states))
        q_targets = rewards + self.gamma * next_q_value * (1 - dones)
        critic_loss = torch.mean(F.mse_loss(self.critic(states, actions), q_targets))
        self.critic_optimizer.zero_grad()
        critic_loss.backward()
        self.critic_optimizer.step()

        actor_loss = -torch.mean(self.critic(states, self.actor(states)))
        self.actor_optimizer.zero_grad()
        actor_loss.backward()
        self.actor_optimizer.step()

        self.soft_update(self.actor, self.target_actor)  # soft-update the target policy network
        self.soft_update(self.critic, self.target_critic)  # soft-update the target value network

Training DDPG on the inverted-pendulum (Pendulum) environment

actor_lr = 3e-4
critic_lr = 3e-3
num_episodes = 200
hidden_dim = 64
gamma = 0.98
tau = 0.005  # soft-update coefficient
buffer_size = 10000
minimal_size = 1000
batch_size = 64
sigma = 0.01  # standard deviation of the Gaussian noise
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

env_name = 'Pendulum-v1'
env = gym.make(env_name)
random.seed(0)
np.random.seed(0)
env.seed(0)
torch.manual_seed(0)
replay_buffer = rl_utils.ReplayBuffer(buffer_size)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
action_bound = env.action_space.high[0]  # maximum action value
agent = DDPG(state_dim, hidden_dim, action_dim, action_bound, sigma, actor_lr, critic_lr, tau, gamma, device)

return_list = rl_utils.train_off_policy_agent(env, agent, num_episodes, replay_buffer, minimal_size, batch_size)

episodes_list = list(range(len(return_list)))
plt.plot(episodes_list, return_list)
plt.xlabel('Episodes')
plt.ylabel('Returns')
plt.title('DDPG on {}'.format(env_name))
plt.show()

mv_return = rl_utils.moving_average(return_list, 9)
plt.plot(episodes_list, mv_return)
plt.xlabel('Episodes')
plt.ylabel('Returns')
plt.title('DDPG on {}'.format(env_name))
plt.show()
Iteration 0: 100%|██████████| 20/20 [00:16<00:00,  1.21it/s, episode=20, return=-1342.045]
Iteration 1: 100%|██████████| 20/20 [00:21<00:00,  1.06s/it, episode=40, return=-1130.349]
Iteration 2: 100%|██████████| 20/20 [00:21<00:00,  1.06s/it, episode=60, return=-924.681]
Iteration 3: 100%|██████████| 20/20 [00:20<00:00,  1.04s/it, episode=80, return=-852.636]
Iteration 4: 100%|██████████| 20/20 [00:20<00:00,  1.02s/it, episode=100, return=-209.287]
Iteration 5: 100%|██████████| 20/20 [00:20<00:00,  1.02s/it, episode=120, return=-243.581]
Iteration 6: 100%|██████████| 20/20 [00:20<00:00,  1.04s/it, episode=140, return=-152.858]
Iteration 7: 100%|██████████| 20/20 [00:20<00:00,  1.05s/it, episode=160, return=-244.480]
Iteration 8: 100%|██████████| 20/20 [00:20<00:00,  1.02s/it, episode=180, return=-172.151]
Iteration 9: 100%|██████████| 20/20 [00:20<00:00,  1.05s/it, episode=200, return=-168.120]
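
As with the on-policy case, rl_utils.train_off_policy_agent is provided by the Hands-on RL helper module and is not listed in this post. A minimal sketch of such an off-policy training loop (progress-bar bookkeeping omitted) looks roughly like this:

def train_off_policy_agent(env, agent, num_episodes, replay_buffer, minimal_size, batch_size):
    return_list = []
    for i_episode in range(num_episodes):
        state = env.reset()
        done = False
        episode_return = 0
        while not done:
            action = agent.take_action(state)
            next_state, reward, done, _ = env.step(action)
            replay_buffer.add(state, action, reward, next_state, done)
            state = next_state
            episode_return += reward
            # start updating only once the buffer holds enough experience
            if replay_buffer.size() > minimal_size:
                b_s, b_a, b_r, b_ns, b_d = replay_buffer.sample(batch_size)
                transition_dict = {'states': b_s, 'actions': b_a, 'rewards': b_r,
                                   'next_states': b_ns, 'dones': b_d}
                agent.update(transition_dict)
        return_list.append(episode_return)
    return return_list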

