(17-6-02) An Autonomous Driving System Based on Reinforcement Learning: The Deep Reinforcement Learning Agent

17.7.3  Deep Reinforcement Learning Agent

Write the file reinforcement/agent.py, which implements a deep reinforcement learning agent with an actor-critic architecture, trained with the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. The file contains the actor model, the critic model, an environment model, and the associated training and update methods. The implementation of reinforcement/agent.py proceeds as follows.

(1) Write the class OUNoise, which implements an Ornstein-Uhlenbeck process: a stochastic process commonly used to model noise in physical systems with persistent randomness. Originally introduced to describe the velocity of a particle undergoing Brownian motion, it is now also widely used in finance, control systems, and deep reinforcement learning. Here it generates temporally correlated action noise that encourages exploratory behavior. The class OUNoise contains the following functions:

  1. __init__(self, size, mu=0., theta=0.6, sigma=0.2): initialize the parameters of the noise generator.
  2. reset(self): reset the internal state (the noise) to the mean (mu).
  3. sample(self): update the internal state and return a noise sample.

The implementation of class OUNoise, together with the module-level imports that the code in this file relies on, is shown below.

import copy
import pickle

import numpy as np
import torch
import torch.nn.functional as F

# The Actor, TwinCritic, Environment, and ReplayBuffer classes used below
# are defined elsewhere in this project.

class OUNoise:
    def __init__(self, size, mu=0., theta=0.6, sigma=0.2):
        """# 初始化智能体参数和模型"""
        self.mu = mu * np.ones(size)
        self.theta = theta
        self.sigma = sigma
        self.reset()

    def reset(self):
        """Reset the internal state (= noise) to mean (mu)."""
        self.state = copy.copy(self.mu)

    def sample(self):
        """Update the internal state with one Ornstein-Uhlenbeck step and return it as a noise sample."""
        x = self.state
        # dx = theta * (mu - x) + sigma * N(0, 1): mean-reverting drift plus Gaussian diffusion
        dx = self.theta * (self.mu - x) + self.sigma * np.random.randn(len(x))
        self.state = x + dx
        return self.state
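
As a quick illustration of how this noise generator is typically used, the minimal sketch below (not part of reinforcement/agent.py) resets the noise at the start of each episode and adds one sample to every action before clipping; the action values and clipping range are placeholders.

import numpy as np

# Minimal usage sketch, assuming the OUNoise class above is available.
noise = OUNoise(size=2, sigma=0.1)

for episode in range(3):
    noise.reset()                          # restart from the mean mu at every episode
    for step in range(5):
        action = np.array([0.0, 0.5])      # placeholder action from some policy
        noisy_action = np.clip(action + noise.sample(), -1.0, 1.0)
        print(episode, step, noisy_action)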

(2) Write the class StepLR, a custom learning-rate scheduler used to adjust the optimizers' learning rates. The class StepLR includes the following functions:

  1. __init__(self, optimizer, step_size, gamma=0.9, last_epoch=-1, min_lr=1e-6, verbose=False): initialize the parameters of the learning-rate scheduler.
  2. get_lr(self): get the learning rate of each parameter group; every step_size epochs the rate is multiplied by gamma but never drops below min_lr.

The implementation of class StepLR is shown below.

class StepLR(torch.optim.lr_scheduler._LRScheduler):
    def __init__(self, optimizer, step_size, gamma=0.9, last_epoch=-1, min_lr=1e-6, verbose=False):
        self.step_size = step_size
        self.gamma = gamma
        self.min_lr = min_lr
        super().__init__(optimizer, last_epoch, verbose)

    def get_lr(self):
        # Between decay steps, keep the current learning rate unchanged
        if (self.last_epoch == 0) or (self.last_epoch % self.step_size != 0):
            return [group['lr'] for group in self.optimizer.param_groups]
        # Every step_size epochs, multiply the learning rate by gamma, but never go below min_lr
        return [max(self.min_lr, group['lr'] * self.gamma)
                for group in self.optimizer.param_groups]
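
The scheduler can be attached to any PyTorch optimizer. The short sketch below (not part of reinforcement/agent.py) uses a toy linear model as a placeholder to show how the learning rate decays by gamma every step_size calls to step() while never dropping below min_lr.

import torch

# Minimal usage sketch, assuming the StepLR class above is available.
model = torch.nn.Linear(4, 2)                       # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = StepLR(optimizer, step_size=350, gamma=0.9, min_lr=1e-6)

for it in range(1000):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()   # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step()                                # decay the lr every 350 steps

print(optimizer.param_groups[0]['lr'])              # roughly 1e-3 * 0.9**2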

(3) Define the class TD3ColDeductiveAgent, which implements a deep reinforcement learning agent trained with the TD3 algorithm. It mainly consists of the following functions:

  1. __init__(self, ...): initialize the agent's parameters and models.
  2. select_action(self, obs, prev_action, eval=False): select an action, optionally adding action noise.
  3. reset_noise(self): reset the action noise.
  4. store_transition(self, p_act, obs, act, rew, next_obs, done): store an experience tuple.
  5. update_step(self, it, is_pretraining): perform one update of the agent's models.
  6. _compute_bc_loss(self, obs, act, p_act): compute the behavior-cloning loss.
  7. _compute_critic_loss(self, obs, act, rew, next_obs, done): compute the critic loss.
  8. _compute_actor_loss(self, obs, p_act): compute the actor loss.
  9. _compute_env_loss(self, obs, p_act): compute the environment-model loss.
  10. _update_env_model(self, obs, act, rew, next_obs): update the environment model.
  11. change_opt_lr(self, actor_lr, critic_lr): change the optimizers' learning rates.
  12. load_exp_buffer(self, data): load the expert experience buffer.
  13. save(self, save_path): save the agent model.
The implementation of class TD3ColDeductiveAgent is shown below.

class TD3ColDeductiveAgent:
    def __init__(self, obs_size=256, device='cpu', actor_lr=1e-3, critic_lr=1e-3,
                 pol_freq_update=2, policy_noise=0.2, noise_clip=0.5, act_noise=0.1, gamma=0.99,
                 tau=0.005, l2_reg=1e-5, env_steps=8, env_w=0.2, lambda_bc=0.1, lambda_a=0.9, lambda_q=1.0,
                 exp_buff_size=20000, actor_buffer_size=20000, exp_prop=0.25, batch_size=64,
                 scheduler_step_size=350, scheduler_gamma=0.9):
        assert device in ['cpu', 'cuda'], "device must be either 'cpu' or 'cuda'"

        self.actor = Actor(obs_size).to(device)
        self.actor_target = Actor(obs_size).to(device)
        self.actor_target.load_state_dict(self.actor.state_dict())
        self.actor_optimizer = torch.optim.Adam(self.actor.parameters(), lr=actor_lr, weight_decay=l2_reg)
        self.actor_scheduler = StepLR(self.actor_optimizer,
                                        step_size=scheduler_step_size,
                                        gamma=scheduler_gamma)

        self.critic = TwinCritic(obs_size).to(device)
        self.critic_target = TwinCritic(obs_size).to(device)
        self.critic_target.load_state_dict(self.critic.state_dict())
        self.critic_optimizer = torch.optim.Adam(self.critic.parameters(), lr=critic_lr, weight_decay=l2_reg)
        self.critic_scheduler = StepLR(self.critic_optimizer,
                                        step_size=scheduler_step_size,
                                        gamma=scheduler_gamma)

        self.env_model = Environment(obs_size).to(device)
        self.env_model_optimizer = torch.optim.Adam(self.env_model.parameters(), lr=1e-3)

        self.pol_freq_update = pol_freq_update
        self.policy_noise = policy_noise
        self.noise_clip = noise_clip
        self.act_noise = act_noise
        self.gamma = gamma
        self.tau = tau
        self.env_steps = env_steps
        self.env_w = env_w
        self.lambda_a = lambda_a
        self.lambda_bc = lambda_bc
        self.lambda_q = lambda_q

        self.policy_clip_min = torch.tensor([-0.75, -1.0]).to(device)
        self.policy_clip_max = torch.tensor([0.75, 1.0]).to(device)

        self.policy_clip_max_np = self.policy_clip_max.cpu().numpy()
        self.policy_clip_min_np = self.policy_clip_min.cpu().numpy()

        self.expert_buffer = ReplayBuffer(exp_buff_size, device)
        self.actor_buffer = ReplayBuffer(actor_buffer_size, device)
        self.batch_size = batch_size
        self.exp_batch_size = int(exp_prop*batch_size)
        self.actual_batch_size = batch_size - self.exp_batch_size

        self.device = device

        self.ou_noise = OUNoise(2, sigma=act_noise)

        # Training variables
        self.pre_tr_step = 0
        self.change_lr = True
        self.tr_step = 0
        self.tr_steps_vec = []
        self.avg_reward_vec = []
        self.std_reward_vec = []
        self.success_rate_vec = []
        self.episode_nb = 0

    def select_action(self, obs, prev_action, eval=False):
        emb, command = obs
        emb = torch.FloatTensor(emb.reshape(1, -1)).to(self.device)
        command = torch.tensor(command).reshape(1, 1).to(self.device)
        prev_action = torch.FloatTensor(prev_action).reshape(1, -1).to(self.device)

        with torch.no_grad():
            action = self.actor(emb, command, prev_action).cpu().numpy().flatten()
        if not eval:
            # Add exploration noise (Ornstein-Uhlenbeck) and clip to the valid action range
            #noise = np.random.normal(0, self.act_noise, size=action.shape)
            noise = self.ou_noise.sample()
            action = (action + noise).clip(self.policy_clip_min_np, self.policy_clip_max_np)
        return action.tolist()
    
    def reset_noise(self):
        self.ou_noise.reset()
    
    def store_transition(self, p_act, obs, act, rew, next_obs, done):
        self.actor_buffer.store_transition((p_act, obs, act, rew, next_obs, done))

    def update_step(self, it, is_pretraining):
        self.actor_optimizer.zero_grad()
        self.critic_optimizer.zero_grad()

        if is_pretraining:
            # During pretraining, only expert demonstrations are sampled
            p_act_exp, obs_exp, act_exp, rew_exp, next_obs_exp, done_exp = self.expert_buffer.sample(self.batch_size)
            obs = obs_exp
            next_obs = next_obs_exp
            act = act_exp
            rew = rew_exp

            critic_loss = self._compute_critic_loss(obs_exp, act_exp, rew_exp, next_obs_exp, done_exp)
        else:
            # Mix expert demonstrations and the agent's own experience in one batch
            p_act_exp, obs_exp, act_exp, rew_exp, next_obs_exp, done_exp = self.expert_buffer.sample(self.exp_batch_size)
            p_act_act, obs_act, act_act, rew_act, next_obs_act, done_act = self.actor_buffer.sample(self.actual_batch_size)

            emb_exp, command_exp = obs_exp
            emb_act, command_act = obs_act
            obs = (torch.cat((emb_exp, emb_act), dim=0), torch.cat((command_exp, command_act), dim=0))
            p_act = torch.cat((p_act_exp, p_act_act), dim=0)
            act = torch.cat((act_exp, act_act), dim=0)
            rew = torch.cat((rew_exp, rew_act), dim=0)
            next_emb_exp, next_command_exp = next_obs_exp
            next_emb_act, next_command_act = next_obs_act
            next_obs = (torch.cat((next_emb_exp, next_emb_act), dim=0), torch.cat((next_command_exp, next_command_act), dim=0))
            done = torch.cat((done_exp, done_act), dim=0)

            critic_loss = self.lambda_q*self._compute_critic_loss(obs, act, rew, next_obs, done)

        critic_loss.backward()
        torch.nn.utils.clip_grad_norm_(self.critic.parameters(), 2.0)
        self.critic_optimizer.step()
        self.critic_scheduler.step()
        
        # Delayed policy update (TD3): update the actor and the target networks
        # only every pol_freq_update critic updates
        if it%self.pol_freq_update==0:
            if is_pretraining:
                actor_loss = self._compute_bc_loss(obs_exp, act_exp, p_act_exp)
            else:
                actor_loss = self.lambda_bc*self._compute_bc_loss(obs_exp, act_exp, p_act_exp)
                actor_loss += self.lambda_a*self._compute_actor_loss(obs, p_act)
                actor_loss += self.env_w*self._compute_env_loss(obs, p_act)

            actor_loss.backward()
            torch.nn.utils.clip_grad_norm_(self.actor.parameters(), 2.0)
            self.actor_optimizer.step()
            # Update the frozen target models
            for param, target_param in zip(self.critic.parameters(), self.critic_target.parameters()):
                target_param.data.copy_(self.tau * param.data + (1 - self.tau) * target_param.data)

            for param, target_param in zip(self.actor.parameters(), self.actor_target.parameters()):
                target_param.data.copy_(self.tau * param.data + (1 - self.tau) * target_param.data)
            
            self.actor_scheduler.step()

        self._update_env_model(obs, act, rew, next_obs)

    
    def _compute_bc_loss(self, obs, act, p_act):
        emb, command = obs
        pi_s = self.actor(emb, command, p_act)
        return torch.mean((pi_s-act)**2)
    
    def _compute_critic_loss(self, obs, act, rew, next_obs, done):
        emb, command = obs
        emb_, command_ = next_obs
        # Target policy smoothing: add clipped noise to the target action
        noise = torch.randn_like(act)*self.policy_noise
        noise = noise.clamp(-self.noise_clip, self.noise_clip).to(self.device)
        next_act = (self.actor_target(emb_, command_, act)+noise).clamp(self.policy_clip_min, self.policy_clip_max)

        # Clipped double-Q learning: take the minimum of the two target critics
        q1, q2 = self.critic_target(emb_, command_, next_act)
        q = torch.min(q1, q2)
        q_target = rew + (self.gamma*(1-done)*q).detach()

        current_q1, current_q2 = self.critic(emb, command, act)

        critic_loss = F.mse_loss(current_q1, q_target) + F.mse_loss(current_q2, q_target)

        return critic_loss
    
    def _compute_actor_loss(self, obs, p_act):
        emb, command = obs
        loss = -self.critic.critic1(emb, command, self.actor(emb, command, p_act)).mean()
        return loss
    
    def _compute_env_loss(self, obs, p_act):
        emb, command = obs
        action = self.actor(emb, command, p_act)
        loss = 0
        # Roll the policy forward through the learned environment model and
        # accumulate the discounted predicted rewards of the imagined trajectory
        for i in range(self.env_steps):
            emb, rew_pred = self.env_model(emb, action)
            loss += self.gamma**i*rew_pred
            action = self.actor(emb, command, action)
        return -loss.mean()
    
    def _update_env_model(self, obs, act, rew, next_obs):
        emb, _ = obs
        next_emb, _ = next_obs

        self.env_model_optimizer.zero_grad()
        transition = self.env_model.transition_model(emb, act)
        reward = self.env_model.reward_model(emb, act, next_emb)
        t_loss = F.mse_loss(transition, next_emb)
        r_loss = F.mse_loss(reward, rew)
        loss = t_loss + r_loss
        loss.backward()
        torch.nn.utils.clip_grad_norm_(self.env_model.parameters(), 2.0)
        self.env_model_optimizer.step()

    def change_opt_lr(self, actor_lr, critic_lr):
        for param_group in self.actor_optimizer.param_groups:
            param_group['lr'] = actor_lr
        for param_group in self.critic_optimizer.param_groups:
            param_group['lr'] = critic_lr

    def load_exp_buffer(self, data):
        for d in data:
            self.expert_buffer.store_transition(d)

    def save(self, save_path):
        with open(save_path, 'wb') as f:
            pickle.dump(self, f)

In the code above, the class TD3ColDeductiveAgent is a reinforcement learning agent that implements a variant of the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. Its functionality is summarized below:

  1. Network and optimizer initialization: during initialization, the actor and critic networks are created together with their target networks (actor_target and critic_target), and the optimizers used to train them are set up, including actor_optimizer, critic_optimizer, and env_model_optimizer.
  2. Action selection: the select_action method chooses the next action from the current observation and the previous action; during training, noise can be added to encourage exploration.
  3. Experience storage: the agent stores experience tuples in expert_buffer and actor_buffer. expert_buffer holds experience from the expert policy, while actor_buffer holds experience generated by the agent's own policy.
  4. Training: the update_step method performs one training step. Depending on the mode (pretraining or regular training), it computes different combinations of loss terms and applies the corresponding gradient updates.
  5. Loss functions: these include the actor loss, the critic loss, the behavior-cloning loss, and the environment-model loss, which are combined during training with the weights lambda_a, lambda_q, lambda_bc, and env_w.
  6. Environment-model update: the agent updates the environment model from the current observation, action, reward, and next observation.
  7. Learning-rate change: the change_opt_lr method changes the learning rates of the actor and critic optimizers.
  8. Expert-buffer loading: the load_exp_buffer method loads expert experience into the expert replay buffer.
  9. Model saving: the save method pickles the entire agent, including its networks and optimizer states.

In summary, TD3ColDeductiveAgent is a general-purpose agent class for training and evaluating deep reinforcement learning policies: it implements actor-critic learning, experience replay, and action-noise injection, and can be applied to a variety of environments and tasks.
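
To make the interplay of these methods concrete, the sketch below shows how the agent could be driven by a training loop. It is only an illustration: env, embed_observation, and expert_transitions are hypothetical placeholders for the driving environment, the perception encoder, and a list of previously recorded expert transitions; observations are the (embedding, command) pairs that select_action expects.

# Hypothetical training-loop sketch; not part of reinforcement/agent.py.
num_episodes = 100

agent = TD3ColDeductiveAgent(obs_size=256, device='cpu')
agent.load_exp_buffer(expert_transitions)        # placeholder: recorded expert transitions

for episode in range(num_episodes):
    obs = embed_observation(env.reset())         # (embedding, command) pair
    prev_action = [0.0, 0.0]
    agent.reset_noise()                          # fresh OU noise for every episode
    done, it = False, 0
    while not done:
        action = agent.select_action(obs, prev_action)               # noisy action
        raw_next_obs, reward, done = env.step(action)
        next_obs = embed_observation(raw_next_obs)
        agent.store_transition(prev_action, obs, action, reward, next_obs, done)
        agent.update_step(it, is_pretraining=False)                  # one gradient update
        obs, prev_action = next_obs, action
        it += 1

agent.save('td3_agent.pkl')                      # pickle the whole agent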

To be continued.
