[Reinforcement Learning] SAC Algorithm in the Discrete Acrobot Environment

This article introduces SAC (Soft Actor-Critic), a stable off-policy reinforcement learning algorithm originally proposed for continuous action spaces. It covers the background and working principles of SAC and demonstrates its application in a discrete environment, Acrobot. It also provides a PaddlePaddle implementation of SAC, including the policy network, value networks, and experience replay, as well as the training and validation procedures.



1. The SAC Algorithm

1.1 Introduction to SAC

We previously studied several on-policy algorithms such as A2C, REINFORCE, and PPO, but their sample efficiency is relatively low. For this reason we often prefer off-policy algorithms such as DQN, DDPG, and TD3. However, off-policy training is usually unstable: convergence is poor, the algorithms are sensitive to hyperparameters, and they have difficulty adapting to diverse, complex environments. In 2018, a more stable off-policy algorithm, Soft Actor-Critic (SAC), was proposed. SAC's predecessor is Soft Q-learning; both belong to the family of maximum-entropy reinforcement learning. Soft Q-learning has no explicit policy function; instead the policy is the Boltzmann distribution of a Q function, which is cumbersome to handle in continuous action spaces. SAC therefore introduces an explicit actor to represent the policy, which resolves this problem. Among model-free reinforcement learning algorithms, SAC is currently highly sample-efficient; it learns a stochastic policy and achieves leading results on many standard benchmarks.
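For context, maximum-entropy RL augments the expected return with an entropy bonus weighted by a temperature coefficient α. The LaTeX below is an added summary of this standard objective (written with the discrete-action form of the entropy used later in this project), not taken from the original post:

J(\pi) = \sum_t \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\big[\, r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \,\big],
\qquad
\mathcal{H}\big(\pi(\cdot \mid s)\big) = -\sum_a \pi(a \mid s)\,\log \pi(a \mid s)

The larger α is, the more the agent is rewarded for keeping its policy stochastic and exploratory.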

1.2 The SAC Algorithm

In SAC we model two action-value functions (critics) and one policy function (actor). Based on the idea of Double DQN, SAC maintains two critic networks and, whenever a critic value is needed, takes the smaller of the two, which mitigates overestimation of Q-values. The full algorithm is given in the paper "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor".
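For the discrete-action variant implemented below, the critic target can be computed exactly as an expectation over the policy's action probabilities using the smaller of the two target critics; this is the quantity computed by the calc_target method in Section 6 (the LaTeX recap here is added for readability):

V(s') = \sum_a \pi(a \mid s')\,\min\big(Q_{\bar\theta_1}(s', a),\, Q_{\bar\theta_2}(s', a)\big) + \alpha\,\mathcal{H}\big(\pi(\cdot \mid s')\big),
\qquad
y = r + \gamma\,(1 - d)\,V(s')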

SAC was originally proposed for environments with continuous actions, but it can also be applied to discrete ones. This project uses the Acrobot environment to demonstrate SAC in a discrete setting.

2. The Acrobot Environment

2.1 Acrobot Schematic

2.2 Acrobot Overview

The system consists of two links connected linearly to form a chain, with one end of the chain fixed. The joint between the two links is actuated. The goal is to apply torques on the actuated joint to swing the free end of the linear chain above a given height while starting from the initial state of hanging downwards.

As seen in the Gif: two blue links connected by two green joints. The joint in between the two links is actuated. The goal is to swing the free end of the outer-link to reach the target height (black horizontal line above system) by applying torque on the actuator.


2.3 Acrobot Basics

For more information about Acrobot, see the official Gym documentation: Acrobot.
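As a quick sanity check before training (a small added snippet, not part of the original project), the spaces of Acrobot-v1 can be inspected directly; the observation is a 6-dimensional vector and there are 3 discrete actions:

import gym

env = gym.make('Acrobot-v1')
print(env.observation_space)  # Box with shape (6,): cos/sin of both joint angles plus the two angular velocities
print(env.action_space)       # Discrete(3): apply -1, 0, or +1 torque to the actuated joint
env.close()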

3. Import Dependencies

import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from paddle.distribution import Normal
from visualdl import LogWriter
import random
import collections
import gym
import matplotlib.pyplot as plt
from matplotlib import animation
from tqdm import tqdm
import numpy as np
import copy

4. Define the Networks

4.1 Policy network

class PolicyNet(paddle.nn.Layer):
    def __init__(self, state_dim, hidden_dim, action_dim):
        super(PolicyNet, self).__init__()
        self.fc1 = paddle.nn.Linear(state_dim, hidden_dim)
        self.fc2 = paddle.nn.Linear(hidden_dim, action_dim)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), axis=1)

4.2 Value network

class QValueNet(paddle.nn.Layer):
    ''' Q-network with a single hidden layer '''
    def __init__(self, state_dim, hidden_dim, action_dim):
        super(QValueNet, self).__init__()
        self.fc1 = paddle.nn.Linear(state_dim, hidden_dim)
        self.fc2 = paddle.nn.Linear(hidden_dim, action_dim)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return self.fc2(x)

5. Experience Replay

Define an experience replay class that stores the agent's interactions with the environment (transitions) so that they can be reused repeatedly later to train the agent.

Experience replay has two benefits:

  • It breaks the temporal correlation between consecutive samples.
  • It reuses collected experience instead of discarding it after a single use, so the same performance can be reached with fewer samples.
class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = collections.deque(maxlen=capacity) 

    def add(self, state, action, reward, next_state, done): 
        self.buffer.append((state, action, reward, next_state, done)) 

    def sample(self, batch_size): 
        transitions = random.sample(self.buffer, batch_size)
        state, action, reward, next_state, done = zip(*transitions)
        return np.array(state), action, reward, np.array(next_state), done 

    def size(self): 
        return len(self.buffer)
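A small usage sketch of the buffer with dummy transitions (added for illustration; it relies on the numpy import above, and the real training loop in Section 7 stores actual environment transitions instead):

buffer = ReplayBuffer(capacity=100)
dummy_state = np.zeros(6, dtype=np.float32)   # Acrobot observations are 6-dimensional
for _ in range(5):
    buffer.add(dummy_state, 0, -1.0, dummy_state, False)

print(buffer.size())                          # 5
b_s, b_a, b_r, b_ns, b_d = buffer.sample(batch_size=3)
print(b_s.shape, len(b_a))                    # (3, 6) 3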

6. Define the SAC Algorithm Class

# Gather values along the specified axis; in this project it is used to select only the Q-values of the chosen actions
def paddle_gather(x, axis, index):
    index_shape = index.shape
    index_flatten = index.flatten()
    if axis < 0:
        axis = len(x.shape) + axis
    nd_index = []
    for k in range(len(x.shape)):
        if k == axis:
            nd_index.append(index_flatten)
        else:
            reshape_shape = [1] * len(x.shape)
            reshape_shape[k] = x.shape[k]
            x_arange = paddle.arange(x.shape[k], dtype=index.dtype)
            x_arange = x_arange.reshape(reshape_shape)
            axis_index = paddle.expand(x_arange, index_shape).flatten()
            nd_index.append(axis_index)
    ind2 = paddle.transpose(paddle.stack(nd_index), [1, 0]).astype("int64")
    paddle_out = paddle.gather_nd(x, ind2).reshape(index_shape)
    return paddle_out
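# Quick illustration of paddle_gather (an added example, not part of the original
# pipeline): select one entry per row by column index, which is exactly how the
# Q-value of each sampled action is picked out in SAC.update() below.
_q = paddle.to_tensor([[1.0, 2.0, 3.0],
                       [4.0, 5.0, 6.0]])
_idx = paddle.to_tensor([[2], [0]], dtype="int64")
print(paddle_gather(_q, 1, _idx))  # expected values: [[3.0], [4.0]]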
class SAC:
    ''' SAC for discrete action spaces '''
    def __init__(self, state_dim, hidden_dim, action_dim, actor_lr, critic_lr,
                 alpha_lr, target_entropy, tau, gamma):
        # Policy network
        self.actor = PolicyNet(state_dim, hidden_dim, action_dim)
        # First Q-network
        self.critic_1 = QValueNet(state_dim, hidden_dim, action_dim)
        # Second Q-network
        self.critic_2 = QValueNet(state_dim, hidden_dim, action_dim)
        self.target_critic_1 = QValueNet(state_dim, hidden_dim,
                                         action_dim)  # First target Q-network
        self.target_critic_2 = QValueNet(state_dim, hidden_dim,
                                         action_dim)  # Second target Q-network
        # Initialize the target Q-networks with the same parameters as the Q-networks
        self.target_critic_1.set_state_dict(self.critic_1.state_dict())
        self.target_critic_2.set_state_dict(self.critic_2.state_dict())

        self.actor_optimizer = paddle.optimizer.Adam(parameters = self.actor.parameters(),
                                                learning_rate=actor_lr)
        self.critic_1_optimizer = paddle.optimizer.Adam(parameters = self.critic_1.parameters(),
                                                   learning_rate=critic_lr)
        self.critic_2_optimizer = paddle.optimizer.Adam(parameters = self.critic_2.parameters(),
                                                   learning_rate=critic_lr)

        # Optimize log(alpha) instead of alpha itself for more stable training
        self.log_alpha = paddle.to_tensor(np.log(0.01), dtype="float32")
        self.log_alpha.stop_gradient = False  # allow gradients w.r.t. log_alpha
        self.log_alpha_optimizer = paddle.optimizer.Adam(parameters = [self.log_alpha],
                                                    learning_rate=alpha_lr)

        self.target_entropy = target_entropy  # target entropy
        self.gamma = gamma
        self.tau = tau
    
    def save(self):
        paddle.save(self.actor.state_dict(),'net.pdparams')


    def take_action(self, state):
        state = paddle.to_tensor([state], dtype="float32")
        probs = self.actor(state)
        action_dist = paddle.distribution.Categorical(probs)
        action = action_dist.sample([1])
        return action.numpy()[0][0]

    # Compute the TD target; with discrete actions the expectation is taken directly over the policy's output probabilities
    def calc_target(self, rewards, next_states, dones):
        next_probs = self.actor(next_states)
        next_log_probs = paddle.log(next_probs + 1e-8)
        entropy = -paddle.sum(next_probs * next_log_probs, axis=1, keepdim=True)
        q1_value = self.target_critic_1(next_states)
        q2_value = self.target_critic_2(next_states)
        min_qvalue = paddle.sum(next_probs * paddle.minimum(q1_value, q2_value),
                               axis=1,
                               keepdim=True)
        next_value = min_qvalue + self.log_alpha.exp() * entropy
        td_target = rewards + self.gamma * next_value * (1 - dones)
        return td_target

    def soft_update(self, net, target_net):
        for param_target, param in zip(target_net.parameters(),
                                       net.parameters()):
            param_target.set_value(param_target * (1.0 - self.tau) + param * self.tau)

    def update(self, transition_dict):
        states = paddle.to_tensor(transition_dict['states'],dtype="float32")
        actions = paddle.to_tensor(transition_dict['actions']).reshape([-1, 1])  
        rewards = paddle.to_tensor(transition_dict['rewards'],dtype="float32").reshape([-1, 1])
        next_states = paddle.to_tensor(transition_dict['next_states'],dtype="float32")
        dones = paddle.to_tensor(transition_dict['dones'],dtype="float32").reshape([-1, 1])

        # Update both Q-networks
        td_target = self.calc_target(rewards, next_states, dones)
        critic_1_q_values = paddle_gather(self.critic_1(states), 1, actions)

        critic_1_loss = paddle.mean(F.mse_loss(critic_1_q_values, td_target.detach()))

        critic_2_q_values = paddle_gather(self.critic_2(states), 1, actions)

        critic_2_loss = paddle.mean( F.mse_loss(critic_2_q_values, td_target.detach()))

        self.critic_1_optimizer.clear_grad()
        critic_1_loss.backward()
        self.critic_1_optimizer.step()
        self.critic_2_optimizer.clear_grad()
        critic_2_loss.backward()
        self.critic_2_optimizer.step()

        # Update the policy network
        probs = self.actor(states)
        log_probs = paddle.log(probs + 1e-8)
        # Compute the entropy directly from the action probabilities
        entropy = -paddle.sum(probs * log_probs, axis=1, keepdim=True)
        q1_value = self.critic_1(states)
        q2_value = self.critic_2(states)
        min_qvalue = paddle.sum(probs * paddle.minimum(q1_value, q2_value), axis=1, keepdim=True)  # expectation taken directly over the probabilities
        actor_loss = paddle.mean(-self.log_alpha.exp() * entropy - min_qvalue)
        self.actor_optimizer.clear_grad()
        actor_loss.backward()
        self.actor_optimizer.step()

        # Update alpha
        alpha_loss = paddle.mean((entropy - self.target_entropy).detach() * self.log_alpha.exp())
        self.log_alpha_optimizer.clear_grad()
        alpha_loss.backward()
        self.log_alpha_optimizer.step()

        self.soft_update(self.critic_1, self.target_critic_1)
        self.soft_update(self.critic_2, self.target_critic_2)

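To summarize what update() above optimizes (an added recap in the discrete form the code uses): the two critics regress onto the target y defined in Section 1.2, while the actor and the temperature minimize

J_\pi = \mathbb{E}_s\Big[ -\alpha\,\mathcal{H}\big(\pi(\cdot \mid s)\big) - \sum_a \pi(a \mid s)\,\min\big(Q_{\theta_1}(s,a),\, Q_{\theta_2}(s,a)\big) \Big],
\qquad
J(\alpha) = \mathbb{E}_s\Big[ \alpha\,\big(\mathcal{H}\big(\pi(\cdot \mid s)\big) - \bar{\mathcal{H}}\big) \Big]

where \bar{\mathcal{H}} is the target entropy; in J(\alpha) the entropy term is detached, exactly as in the implementation, so that gradients flow only into α.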



7. Training and Validation

7.1 Define the hyperparameters

actor_lr = 1e-3
critic_lr = 1e-2
alpha_lr = 1e-2
num_episodes = 200
hidden_dim = 128
gamma = 0.98
tau = 0.005  # soft-update coefficient
buffer_size = 10000
minimal_size = 500
batch_size = 64
target_entropy = -1

writer=LogWriter('./logs')

env_name = 'Acrobot-v1'
env = gym.make(env_name)
replay_buffer = ReplayBuffer(buffer_size)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.n

7.2 Define the training function

  • SAC is off-policy, so past experience from the replay buffer is reused repeatedly to update the networks.
  • tqdm is used to display training progress.
  • The policy network parameters are saved whenever a new maximum episode return is reached, and are used later for validation.
def train_off_policy_agent(env, agent, num_episodes, replay_buffer, minimal_size, batch_size):
    return_list = []
    maxre=-100000
    episode=0
    for i in range(10):
        with tqdm(total=int(num_episodes/10), desc='Iteration %d' % i) as pbar:
            for i_episode in range(int(num_episodes/10)):
                episode_return = 0
                state = env.reset()
                done = False
                while not done:
                    action = agent.take_action(state)
                    next_state, reward, done, _ = env.step(action)
                    replay_buffer.add(state, action, reward, next_state, done)
                    state = next_state
                    episode_return += reward
                    if replay_buffer.size() > minimal_size:
                        b_s, b_a, b_r, b_ns, b_d = replay_buffer.sample(batch_size)
                        transition_dict = {'states': b_s, 'actions': b_a, 'next_states': b_ns, 'rewards': b_r, 'dones': b_d}
                        agent.update(transition_dict)

                writer.add_scalar('Reward',episode_return,episode)
                
                # Save the parameters whenever a new maximum episode return is reached
                if maxre<episode_return:
                    maxre=episode_return
                    agent.save()
                return_list.append(episode_return)
                if (i_episode+1) % 10 == 0:
                    pbar.set_postfix({'episode': '%d' % (num_episodes/10 * i + i_episode+1), 'return': '%.3f' % np.mean(return_list[-10:])})
                pbar.update(1)
    return return_list
agent = SAC(state_dim, hidden_dim, action_dim, actor_lr, critic_lr, alpha_lr,target_entropy, tau, gamma)

return_list = train_off_policy_agent(env, agent, num_episodes,replay_buffer, minimal_size,batch_size)
W0124 19:13:17.584054  6126 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 11.2
W0124 19:13:17.588883  6126 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.
Iteration 0: 100%|██████████| 20/20 [00:44<00:00,  2.24s/it, episode=20, return=-267.300]
Iteration 1: 100%|██████████| 20/20 [00:37<00:00,  1.89s/it, episode=40, return=-232.600]
Iteration 2: 100%|██████████| 20/20 [00:33<00:00,  1.70s/it, episode=60, return=-214.100]
Iteration 3: 100%|██████████| 20/20 [00:32<00:00,  1.62s/it, episode=80, return=-196.900]
Iteration 4: 100%|██████████| 20/20 [00:30<00:00,  1.55s/it, episode=100, return=-174.900]
Iteration 5: 100%|██████████| 20/20 [00:34<00:00,  1.71s/it, episode=120, return=-215.600]
Iteration 6: 100%|██████████| 20/20 [00:32<00:00,  1.61s/it, episode=140, return=-182.800]
Iteration 7: 100%|██████████| 20/20 [00:31<00:00,  1.58s/it, episode=160, return=-197.000]
Iteration 8: 100%|██████████| 20/20 [00:32<00:00,  1.64s/it, episode=180, return=-204.000]
Iteration 9: 100%|██████████| 20/20 [00:33<00:00,  1.66s/it, episode=200, return=-202.000]

Reward curve over training
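A minimal sketch for reproducing such a curve from return_list (added here for completeness; matplotlib and numpy are already imported above, and the smoothing window size is an arbitrary choice):

plt.plot(range(len(return_list)), return_list, label='episode return')
window = 9  # moving average to smooth the noisy per-episode returns
smoothed = np.convolve(return_list, np.ones(window) / window, mode='valid')
plt.plot(range(window - 1, len(return_list)), smoothed, label='moving average')
plt.xlabel('Episode')
plt.ylabel('Return')
plt.title('SAC on Acrobot-v1')
plt.legend()
plt.show()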

7.3 Validation

The Acrobot-v1 environment has two termination conditions:

  1. The free end reaches the target height (the black line).
  2. The episode length exceeds 500 steps.

The environment terminates as soon as either condition is triggered.

actor = PolicyNet(state_dim,hidden_dim,action_dim)
actor.set_state_dict(paddle.load("net.pdparams"))

success_count=0
for j in range(10):
    state=env.reset()
    i=0
    done=0
    while not done:
    
        state = paddle.to_tensor([state],dtype='float32')
        probs = actor(state)
        action = np.argmax(probs.numpy()[0])
        next_state,reward,done,_ = env.step(action)
        i+=1
        state=next_state
    if i<500:
        success_count+=1
print("在10次重复实验中,有{}次均是自由端到达指定高度进而终止环境".format(success_count))
env.close()
In 10 out of 10 evaluation episodes, the environment terminated because the free end reached the target height


These 10 evaluation episodes show that the policy network trained with SAC solves the Acrobot task within the allowed number of time steps.