Policy Gradients & Actor Critic

Policy Gradients

Directly outputting probabilities

Policy Gradient is another big family in RL. Unlike the value-based methods (Q-learning, Sarsa), which it resembles in that it also takes in environment observations, it does not output a value for each action; it outputs the action itself, so Policy Gradient skips the value stage entirely. Arguably its biggest advantage is that the output action can be a continuous value: the value-based methods we covered earlier output a discrete set of values and then pick the action with the largest one, whereas Policy Gradient can select an action from a continuous distribution.
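
To make this concrete, here is a minimal sketch (plain NumPy; names such as select_action are illustrative, not from any library) of a policy that samples a continuous action from a Gaussian distribution and nudges its parameters with the score-function gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian policy for a 1-D continuous action: pi(a|s) = N(mu(s), sigma^2),
# where mu(s) = theta . s is linear in the state and sigma is a fixed width.
theta = np.zeros(3)   # policy parameters
sigma = 0.5           # fixed standard deviation (exploration)

def select_action(state, theta):
    mu = state @ theta                  # mean of the action distribution
    return rng.normal(mu, sigma), mu    # sample a *continuous* action

state = np.array([0.1, -0.2, 0.3])
action, mu = select_action(state, theta)

# Score function of the Gaussian: d log pi(a|s) / d theta
grad_log_pi = (action - mu) / sigma**2 * state

# One REINFORCE-style ascent step, scaled by a (toy) episode return G
G = 1.0
theta = theta + 0.01 * G * grad_log_pi
```

Because the policy is itself a distribution, sampling already provides exploration, and no argmax over a discrete set of action values is ever needed.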

Updating on a per-episode basis

The vanilla Policy Gradient waits until an episode (round) finishes, then uses the rewards collected over the whole episode to update the policy, so it learns round by round rather than step by step.
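
A minimal sketch of what the per-episode update computes, assuming the usual discounted return-to-go G_t (the reward list and gamma here are illustrative):

```python
def discounted_returns(rewards, gamma=0.99):
    """Turn one episode's reward list into returns-to-go G_t, computed backwards."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g      # G_t = r_t + gamma * G_{t+1}
        returns.append(g)
    return list(reversed(returns))

# Example: a 3-step episode with rewards [1, 0, 1] and gamma = 0.5:
# G_2 = 1, G_1 = 0 + 0.5*1 = 0.5, G_0 = 1 + 0.5*0.5 = 1.25
print(discounted_returns([1, 0, 1], gamma=0.5))  # [1.25, 0.5, 1.0]
```

Only after the episode ends are these G_t values available, which is exactly why the traditional Policy Gradient cannot update step by step.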

 

Actor Critic

Combining probabilities and values

Actor-Critic combines Policy Gradient (the Actor) with value function approximation (the Critic). The Actor picks actions based on probabilities; the Critic scores the Actor's actions; the Actor then adjusts its action probabilities according to the Critic's scores.

Advantage of Actor-Critic: it can update on every single step, which is faster than the traditional Policy Gradient.

Disadvantage of Actor-Critic: everything hinges on the Critic's value judgments, and the Critic is hard to get to converge; add the Actor's updates on top and convergence becomes even harder. To solve this, Google DeepMind proposed an upgraded Actor-Critic called Deep Deterministic Policy Gradient, which folds in the strengths of DQN and tackles the convergence problem. We will cover Deep Deterministic Policy Gradient later; it builds on Actor-Critic, so once Actor-Critic is clear, DDPG is easy to follow.
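
The single-step update described above can be sketched as follows (a toy tabular version; the sizes, learning rates, and names are illustrative). The Critic's TD error plays the role of the score by which the Actor shifts its action probabilities:

```python
import numpy as np

n_states, n_actions = 4, 2
actor_logits = np.zeros((n_states, n_actions))   # Actor: action preferences per state
V = np.zeros(n_states)                           # Critic: state-value estimates
alpha_a, alpha_c, gamma = 0.1, 0.2, 0.9

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step_update(s, a, r, s_next):
    # Critic evaluates the transition: TD error = r + gamma*V(s') - V(s)
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha_c * td_error
    # Actor shifts probabilities in the direction the Critic scored
    probs = softmax(actor_logits[s])
    grad = -probs
    grad[a] += 1.0                               # d log pi(a|s) / d logits
    actor_logits[s] += alpha_a * td_error * grad

# One single-step update, no need to wait for the episode to end
step_update(s=0, a=1, r=1.0, s_next=2)
```

After this one transition the Critic has already raised V[0] and the Actor has already made action 1 more likely in state 0, which is the "update every step" advantage in miniature.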

 

Deep Deterministic Policy Gradient (DDPG)

DDPG absorbs the essence of Actor-Critic's single-step Policy Gradient update, and also the essence of DQN, the algorithm that taught computers to play games, merging the two into a new method: Deep Deterministic Policy Gradient. So what kind of algorithm is DDPG? Let's take it apart: split DDPG into 'Deep' and 'Deterministic Policy Gradient', and split 'Deterministic Policy Gradient' further into 'Deterministic' and 'Policy Gradient'.

The neural networks used in DDPG are much like the Actor-Critic setup above: there is a policy-based (Policy) network and a value-based (Value) network. But to embody the DQN idea, each of the two is further split into two copies. On the Policy Gradient side we have an evaluation network and a target network: the evaluation network outputs real-time actions for the actor to execute, while the target network is used to update the value system. On the value side we likewise have a target network and an evaluation network. Both output the value of a state, but their inputs differ: the state target network takes the action coming from the action target network together with the state observation, while the state evaluation network takes the action the Actor actually applied as its input. In practice, this arrangement of DDPG does make the learning process noticeably more effective.
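
A minimal sketch of the "two copies per network" bookkeeping, assuming the usual soft target update with rate tau (the NumPy arrays here stand in for real network parameters):

```python
import numpy as np

# Four parameter sets: online (evaluation) and target copies of Actor and Critic.
# The shapes are illustrative placeholders for real network weights.
actor, actor_target = np.ones(5), np.zeros(5)
critic, critic_target = np.ones(5), np.zeros(5)
tau = 0.01  # soft-update rate

def soft_update(target, online, tau):
    """Let the target slowly track the online network: target <- tau*online + (1-tau)*target."""
    target[:] = tau * online + (1 - tau) * target

# Called once per learning step; the targets drift slowly toward the online nets,
# which keeps the training targets stable (the DQN-flavored trick DDPG borrows).
soft_update(actor_target, actor, tau)
soft_update(critic_target, critic, tau)
```

Because the target copies change slowly, the values they produce for the update do not chase a moving target, which is what eases the convergence problem mentioned above.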

Below is Python code for the SAC reinforcement learning algorithm, a TensorFlow implementation in the style of "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor":

```python
import tensorflow as tf
import tensorflow_probability as tfp  # added: tfp.distributions is used below but was not imported
import numpy as np
import gym

# Create actor network
class Actor(tf.keras.Model):
    def __init__(self, state_dim, action_dim, max_action):
        super(Actor, self).__init__()
        self.layer1 = tf.keras.layers.Dense(256, activation='relu')
        self.layer2 = tf.keras.layers.Dense(256, activation='relu')
        self.mu_layer = tf.keras.layers.Dense(action_dim, activation='tanh')
        self.sigma_layer = tf.keras.layers.Dense(action_dim, activation='softplus')
        self.max_action = max_action

    def call(self, state):
        x = self.layer1(state)
        x = self.layer2(x)
        mu = self.mu_layer(x) * self.max_action
        sigma = self.sigma_layer(x) + 1e-4
        return mu, sigma

# Create critic network (two instances are used, per SAC)
class Critic(tf.keras.Model):
    def __init__(self, state_dim, action_dim):
        super(Critic, self).__init__()
        self.layer1 = tf.keras.layers.Dense(256, activation='relu')
        self.layer2 = tf.keras.layers.Dense(256, activation='relu')
        self.layer3 = tf.keras.layers.Dense(1, activation=None)

    def call(self, state, action):
        x = tf.concat([state, action], axis=1)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        return x

# Create Soft Actor-Critic (SAC) agent
class SACAgent:
    def __init__(self, state_dim, action_dim, max_action):
        self.actor = Actor(state_dim, action_dim, max_action)
        self.critic1 = Critic(state_dim, action_dim)
        self.critic2 = Critic(state_dim, action_dim)
        self.target_critic1 = Critic(state_dim, action_dim)
        self.target_critic2 = Critic(state_dim, action_dim)
        self.max_action = max_action
        self.alpha = tf.Variable(0.1, dtype=tf.float32, name='alpha')
        self.target_entropy = -float(action_dim)  # added: referenced in update() but never set
        self.gamma = 0.99
        self.tau = 0.005
        self.optimizer_actor = tf.keras.optimizers.Adam(learning_rate=3e-4)
        self.optimizer_critic1 = tf.keras.optimizers.Adam(learning_rate=3e-4)
        self.optimizer_critic2 = tf.keras.optimizers.Adam(learning_rate=3e-4)

    def get_action(self, state):
        state = np.expand_dims(state, axis=0)
        mu, sigma = self.actor(state)
        dist = tfp.distributions.Normal(mu, sigma)
        action = tf.squeeze(dist.sample(1), axis=0)
        return action.numpy()

    def update(self, replay_buffer, batch_size):
        states, actions, rewards, next_states, dones = replay_buffer.sample(batch_size)
        with tf.GradientTape(persistent=True) as tape:
            # Compute actor loss
            mu, sigma = self.actor(states)
            dist = tfp.distributions.Normal(mu, sigma)
            log_pi = dist.log_prob(actions)
            q1 = self.critic1(states, actions)
            q2 = self.critic2(states, actions)
            q_min = tf.minimum(q1, q2)
            alpha_loss = -tf.reduce_mean(self.alpha * (log_pi + self.target_entropy))
            actor_loss = -tf.reduce_mean(tf.exp(self.alpha) * log_pi * q_min)
            # Compute critic loss
            next_mu, next_sigma = self.actor(next_states)
            next_dist = tfp.distributions.Normal(next_mu, next_sigma)
            next_actions = tf.clip_by_value(next_dist.sample(1), -self.max_action, self.max_action)
            target_q1 = self.target_critic1(next_states, next_actions)
            target_q2 = self.target_critic2(next_states, next_actions)
            target_q = tf.minimum(target_q1, target_q2)
            target_q = rewards + self.gamma * (1.0 - dones) * (target_q - tf.exp(self.alpha) * next_dist.entropy())
            q1_loss = tf.reduce_mean(tf.square(q1 - target_q))
            q2_loss = tf.reduce_mean(tf.square(q2 - target_q))
            critic_loss = q1_loss + q2_loss + alpha_loss
        # Compute gradients and update weights
        actor_grads = tape.gradient(actor_loss, self.actor.trainable_variables)
        critic1_grads = tape.gradient(critic_loss, self.critic1.trainable_variables)
        critic2_grads = tape.gradient(critic_loss, self.critic2.trainable_variables)
        self.optimizer_actor.apply_gradients(zip(actor_grads, self.actor.trainable_variables))
        self.optimizer_critic1.apply_gradients(zip(critic1_grads, self.critic1.trainable_variables))
        self.optimizer_critic2.apply_gradients(zip(critic2_grads, self.critic2.trainable_variables))
        # Soft-update target networks
        for w, w_target in zip(self.critic1.weights, self.target_critic1.weights):
            w_target.assign(self.tau * w + (1 - self.tau) * w_target)
        for w, w_target in zip(self.critic2.weights, self.target_critic2.weights):
            w_target.assign(self.tau * w + (1 - self.tau) * w_target)
        # Update alpha
        alpha_grad = tape.gradient(alpha_loss, self.alpha)
        self.alpha.assign_add(1e-4 * alpha_grad)
        del tape  # release the persistent tape

    def save(self, filename):
        self.actor.save_weights(filename + '_actor')
        self.critic1.save_weights(filename + '_critic1')
        self.critic2.save_weights(filename + '_critic2')

    def load(self, filename):
        self.actor.load_weights(filename + '_actor')
        self.critic1.load_weights(filename + '_critic1')
        self.critic2.load_weights(filename + '_critic2')

# Create replay buffer
class ReplayBuffer:
    def __init__(self, max_size):
        self.max_size = max_size
        self.buffer = []
        self.position = 0

    def add(self, state, action, reward, next_state, done):
        if len(self.buffer) < self.max_size:
            self.buffer.append(None)
        self.buffer[self.position] = (state, action, reward, next_state, done)
        self.position = (self.position + 1) % self.max_size

    def sample(self, batch_size):
        indices = np.random.choice(len(self.buffer), batch_size, replace=False)
        states, actions, rewards, next_states, dones = [], [], [], [], []
        for idx in indices:
            state, action, reward, next_state, done = self.buffer[idx]
            states.append(np.array(state, copy=False))
            actions.append(np.array(action, copy=False))
            rewards.append(reward)
            next_states.append(np.array(next_state, copy=False))
            dones.append(done)
        return (np.array(states), np.array(actions), np.array(rewards, dtype=np.float32),
                np.array(next_states), np.array(dones, dtype=np.uint8))

# Create environment and agent
env = gym.make('Pendulum-v0')
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
max_action = float(env.action_space.high[0])
agent = SACAgent(state_dim, action_dim, max_action)
replay_buffer = ReplayBuffer(1000000)

# Train agent
max_episodes = 1000
max_steps = 500
batch_size = 256
update_interval = 1

for episode in range(max_episodes):
    state = env.reset()
    total_reward = 0
    for step in range(max_steps):
        action = agent.get_action(state)
        next_state, reward, done, _ = env.step(action)
        replay_buffer.add(state, action, reward, next_state, done)
        if len(replay_buffer.buffer) > batch_size:
            agent.update(replay_buffer, batch_size)
        state = next_state
        total_reward += reward
        if done:
            break
    print("Episode:", episode, "Total Reward:", total_reward)
```

Note: the code above is for reference only; it needs adjusting and tuning for your specific environment and hyperparameters.