MAPPO Algorithm Flow and Code Walkthrough

  This post records my understanding of the MAPPO algorithm flow and an analysis of what the code does.

Reference blog: 多智能体强化学习(二) MAPPO算法详解 (CSDN; Multi-Agent Reinforcement Learning (II): MAPPO Explained)

Contents

1. Classification of Multi-Agent Algorithms
2. MAPPO Theory
    MAPPO Paper and Code References
    2.1 The IPPO Algorithm
    2.2 The MAPPO Algorithm
    2.3 MAPPO Algorithm Flow
        Overall Process
        2.3.1 Collect Trajectories
        2.3.2 Compute Advantage Estimates
        2.3.3 Compute Policy Loss
        2.3.4 Update Actor Parameters
        2.3.5 Compute Value Loss
        2.3.6 Update Critic Parameters
        2.3.7 Repeat
3. MAPPO Code Walkthrough
    3.1 Overall Code Flow
    3.2 Environment Setup
    3.3 Network Definitions
        3.3.1 R_Actor
        3.3.2 R_Critic
    3.4 Sampling Flow
        3.4.1 Environment Initialization
        3.4.2 Data Collection (Collect Trajectories)
        3.4.3 Actor Network Inputs and Outputs
        3.4.4 Critic Network Inputs and Outputs
        3.4.5 Reward and Next State
    3.5 Computing Advantage Estimates
        3.5.1 Computing next_value
        3.5.2 Computing Discounted Returns
            3.5.2.1 Advantages of GAE
            3.5.2.2 How GAE Is Computed
            3.5.2.3 Computing returns: discounted sum of rewards or GAE
    3.6 Training
        3.6.1 Sampling and Computing the Advantage Function
        3.6.2 ppo_update: obtaining values, action_log_probs, dist_entropy
        3.6.3 ppo_update: computing the actor loss and updating the actor
        3.6.4 ppo_update: computing the critic loss and updating the critic

1. Classification of Multi-Agent Algorithms

Multi-agent reinforcement learning algorithms can roughly be divided into two families: centralized and decentralized.

The centralized approach assumes a cooperative setting and directly extends a single-agent algorithm to learn a joint action output; however, it does not naturally tell each individual agent how to act.
In the decentralized approach, every agent learns its own reward function independently; the other agents are simply part of the environment, so the non-stationarity of that environment has to be dealt with, and what is learned is not a global policy.
Recent work proposes two frameworks that bridge these two extremes and offer a compromise: centralized training with decentralized execution (CTDE)
and value decomposition (VD).

CTDE reduces the variance of the value function by learning a global critic; representative methods are MADDPG and COMA. VD combines the local Q functions of the agents into a joint Q function.

MAPPO uses a centralized value function to take global information into account, so it belongs to the CTDE family: a global value function lets the individual PPO agents coordinate with each other. Its predecessor is IPPO, a fully decentralized PPO algorithm.

In MAPPO, each agent i generates an action a_i from its local observation o_i and a shared policy \pi_\theta(a_i|o_i) to maximize the discounted cumulative reward. (The shared policy applies when the agents are of the same type; with heterogeneous agents, each agent has its own actor and critic network.)
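As a minimal illustration of the shared vs. per-agent setup (the tiny PolicyNet below is a hypothetical stand-in, not the repository's R_Actor), parameter sharing means one network object serves every same-type agent, while the non-shared setup keeps one network per agent, which is what the self.trainer[agent_id] indexing later in this walkthrough does:

import torch
import torch.nn as nn

class PolicyNet(nn.Module):              # hypothetical stand-in for R_Actor
    def __init__(self, obs_dim=14, act_dim=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))

    def forward(self, obs):              # action logits for one agent's local observation
        return self.net(obs)

num_agents = 2
obs = torch.randn(num_agents, 14)        # one local observation per agent

# Shared policy: every (same-type) agent reuses the same parameters.
shared_actor = PolicyNet()
logits_shared = [shared_actor(obs[i]) for i in range(num_agents)]

# Non-shared policies: each agent owns its actor (and critic), as in the separated runner.
actors = [PolicyNet() for _ in range(num_agents)]
logits_separate = [actors[i](obs[i]) for i in range(num_agents)]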

2. MAPPO Theory

MAPPO Paper and Code References

The MAPPO paper is titled: The Surprising Effectiveness of MAPPO in Cooperative, Multi-Agent Games.

This part draws on 多智能体强化学习2021论文(一)MAPPO & IPPO (Zhihu).

2.1 The IPPO Algorithm

Paper: Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?

Paper link: https://arxiv.org/abs/2011.09533

The paper shows that PPO can be applied directly to SMAC tasks, building an IPPO (independent PPO) algorithm in the spirit of IQL; even this simple algorithm beats QMIX on some tasks.

IPPO extends PPO to multi-agent tasks by letting each agent run its own independent copy of PPO. The idea is simple but effective: the experiments show that IPPO reaches SOTA on some tasks, which demonstrates that applying PPO to multi-agent systems works well.

The PPO algorithm:

PPO is a further refinement of TRPO whose main ingredients are importance sampling and clipped policy updates. The clipped variant of PPO is used here; the clip prevents the policy update from becoming too large or too small.

The PPO used in the paper also employs Generalized Advantage Estimation (GAE), built on the one-step TD residual

\delta_t^{V} = r_t+\gamma V(s_{t+1})-V(s_t)

The rough idea of GAE is to weight the advantage estimates computed from the 1-step up to the n-step return; the weighted mean serves as the final advantage, which reduces variance. Written out, this gives the combination below.
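The k-step advantage estimates and their exponentially weighted combination (standard GAE, using the \delta_t^{V} defined above):

A_t^{(k)}=\sum_{l=0}^{k-1}\gamma^l\delta_{t+l}^{V}=-V(s_t)+r_t+\gamma r_{t+1}+\dots+\gamma^{k-1}r_{t+k-1}+\gamma^k V(s_{t+k})

A_t^{GAE(\gamma,\lambda)}=(1-\lambda)\left(A_t^{(1)}+\lambda A_t^{(2)}+\lambda^2 A_t^{(3)}+\dots\right)=\sum_{l=0}^{\infty}(\gamma\lambda)^l\delta_{t+l}^{V}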

The resulting actor objective is the clipped surrogate plus an entropy bonus:

L^{actor}(\theta)=\mathbb{E}_t\left[\min\left(r_t(\theta)A_t,\ \mathrm{clip}(r_t(\theta),1-\epsilon,1+\epsilon)A_t\right)\right]+\text{entropy term},\quad r_t(\theta)=\frac{\pi_\theta(a_t|o_t)}{\pi_{\theta_{old}}(a_t|o_t)}

The critic objective is the (clipped) squared error between the predicted value V_\phi(s_t) and the computed return R_t; the concrete value-clipping form appears in the cal_value_loss code in Section 3.6.4.

The overall objective sums the actor and critic terms over all agents,

where a is the number of agents.

The paper argues that PPO's policy-clipping mechanism is very effective in SMAC tasks. PPO is more stable, so in the non-stationary environment of a multi-agent system, IPPO learns more stably than IAC (independent actor-critic) and IQL (independent Q-learning) and therefore performs better.

Moreover, in some SMAC tasks the agents are fully cooperative, and introducing a centralized mechanism does not necessarily improve performance.

For centralized algorithms such as QMIX, it is still unclear through which mechanism the global state information actually helps.

2.2 The MAPPO Algorithm

The Surprising Effectiveness of MAPPO in Cooperative, Multi-Agent Games goes one step further and extends IPPO to MAPPO. The difference is that the PPO critic takes the global state (state) rather than the local observation as input, and the critic's reward becomes the global reward; in effect, the critic sees the global state s and the joint actions.

The paper also gives five practical suggestions:

1. Value normalization: normalize the value targets with PopArt. PopArt comes from multi-task reinforcement learning, where the rewards of different tasks are rescaled so that a single agent can solve several tasks at once. The paper finds that using PopArt helps.

2. Agent-specific global state: use an agent-specific global state. Simply concatenating all agents' observations can make the input dimension very high, while using the environment-provided state may lose information. The paper uses a special state that contains more information. The authors argue that SMAC's original global state omits information and can even contain less than an agent's local observation, which is an important reason why naively applying MAPPO to StarCraft II performs poorly.

3. Training data usage: avoid reusing the same data too many times. For simple tasks 15 training epochs are recommended; for harder tasks try 10 or 5. In addition, prefer training on the whole batch of data rather than splitting it into many small mini-batches.

4. Action masking: in multi-agent tasks agents often cannot execute certain actions. Such invalid actions should be masked out in both the forward pass and backpropagation so that they do not take part in the action-probability computation (a minimal sketch follows this list).

5. Death masking: agents also frequently die in multi-agent tasks. Once an agent is dead, keep only its agent id and mask out all of its other information.
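A minimal sketch of action masking for suggestion 4 (illustrative only; in the repositories this lives inside the ACTLayer / action-distribution classes): invalid actions get a very large negative logit before the categorical distribution is built, so they receive (near-)zero probability in the forward pass and no gradient signal in the backward pass.

import torch
from torch.distributions import Categorical

logits = torch.randn(1, 5)                            # raw logits for 5 discrete actions
available_actions = torch.tensor([[1, 1, 0, 1, 0]])   # 1 = available, 0 = invalid

# Give unavailable actions a huge negative logit so their probability is ~0.
masked_logits = torch.where(available_actions.bool(), logits, torch.full_like(logits, -1e10))

dist = Categorical(logits=masked_logits)
action = dist.sample()                                # an invalid action is never sampled
log_prob = dist.log_prob(action)                      # the log-probability used in the PPO ratio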

The official open-source code is: https://github.com/marlbenchmark/on-policy
The official code has fairly heavy environment requirements. A lighter version with no environment dependencies, which is easier to port into your own project, is: https://github.com/tinyzqh/light_mappo

RNN-MAPPO algorithm flow (the recurrent-MAPPO pseudocode figure from the paper).

2.3 MAPPO Algorithm Flow

Overall Process:

  1. Collect trajectories by interacting with the environment.
  2. Compute advantage estimates
  3. Compute policy loss for the Actor network.
  4. Update Actor parameters using gradient ascent.
  5. Compute value loss for the Critic network.
  6. Update Critic parameters using gradient descent.
  7. Repeat the process for multiple iterations or until convergence.

2.3.1. Collect Trajectories:

  • Process:
    • Agent interacts with the environment over multiple episodes, collecting trajectories of state-action pairs.
    • At each time step t the agent selects an action a_t based on the current policy \pi_\theta(a|s) and observes the resulting state s_{t+1} and reward r_t.
    • Trajectories are stored as sequences of tuples (s_t, a_t, r_t).

2.3.2. Compute Advantage Estimates:

  • Process:
    • Compute the returns R_t for each time step t in the trajectory using the collected rewards and the discount factor \gamma: R_t=\sum_{i=t}^{T}\gamma^{i-t}r_i
    • Estimate the value function V(s_t) for each state s_t using the Critic network.
    • Calculate advantage estimates A(s_t,a_t) for each state-action pair: A(s_t,a_t)=R_t-V(s_t) (a small numeric sketch follows this list).
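A minimal numeric sketch of these two formulas (plain NumPy; the repository's buffer implementation, with masks and GAE, appears in Section 3.5):

import numpy as np

gamma = 0.99
rewards = np.array([1.0, 0.0, 0.5, 1.0])   # r_t collected along one trajectory
values = np.array([1.2, 0.9, 1.1, 0.8])    # V(s_t) predicted by the critic

# R_t = sum_{i >= t} gamma^(i-t) * r_i, computed backwards in a single pass
returns = np.zeros_like(rewards)
running = 0.0
for t in reversed(range(len(rewards))):
    running = rewards[t] + gamma * running
    returns[t] = running

advantages = returns - values               # A(s_t, a_t) = R_t - V(s_t)
print(returns, advantages)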

2.3.3. Compute Policy Loss:

  • Process:
    • Use advantage estimates to compute the surrogate objective function for each state-action pair:

L^{ppo}(\theta)=\min\left(r_{\theta,i}A_i,\ \mathrm{clip}(r_{\theta,i},1-\epsilon,1+\epsilon)A_i\right),\quad \text{where } r_{\theta,i}=\frac{\pi_\theta(a_i|o_i)}{\pi_{\theta_{old}}(a_i|o_i)}

Here \pi_\theta(a_i|o_i) is the current policy, \pi_{\theta_{old}}(a_i|o_i) is the old policy, and \epsilon is a hyperparameter controlling the clipping.

The full MAPPO actor objective averages this over a batch of size B and n agents and adds a policy-entropy term S:

L^{mappo}(\theta)=\frac{1}{Bn}\sum_{i=1}^{B}\sum_{k=1}^{n}\min\left(r_{\theta,i}^{k}A_i^{k},\ \mathrm{clip}(r_{\theta,i}^{k},1-\epsilon,1+\epsilon)A_i^{k}\right)+\frac{1}{Bn}\sum_{i=1}^{B}\sum_{k=1}^{n}S\left[\pi_\theta(o_i^{k})\right],\quad r_{\theta,i}^{k}=\frac{\pi_\theta(a_i^{k}|o_i^{k})}{\pi_{\theta_{old}}(a_i^{k}|o_i^{k})}

2.3.4. Update Actor Parameters:

  • Process:
    • Form the actor loss from the clipped surrogate objective (plus the entropy bonus) and update the actor parameters \theta by gradient ascent (in practice, gradient descent on the negative loss).

2.3.5. Compute Value Loss:

  • Process:
    • Compute the value loss as the (optionally clipped) squared error between the critic's predictions V(s_t) and the computed returns R_t.

2.3.6. Update Critic Parameters:

  • Process:
    • Update the critic parameters by gradient descent on the value loss.

2.3.7. Repeat:

  • Process:
    • Iterate through steps 1 to 6 for multiple episodes or until convergence.
    • Continuously update the policy and value function estimates based on collected trajectories and their respective advantages.

3. MAPPO Code Walkthrough

3.1 Overall Code Flow

  • Overall picture

  • Each local agent receives a local observation obs and outputs action probabilities. All agents (of the same type) use one actor network. The critic receives the observations of all agents, cent_obs_space = n x obs, and outputs V. V is used to compute the advantage, which then drives the actor update. The actor loss is similar to PPO's, accumulated over the agents, with a policy-entropy term S added.

  • The critic loss compares the (optionally normalized) value predictions V against the computed returns.

  • 1) Environment setup: set the number of agents, the action-space dimension, and the observation-space dimension.
    2) Initialize the environment; feed obs into the actor network to generate actions and cent_obs into the critic network to generate values.
    3) Compute the discounted returns.
    4) Train: sample data from the buffer and compute the actor loss and the critic loss.
    5) Save the model and compute the average episode rewards. (A condensed sketch of this loop follows below.)
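A condensed sketch of that loop, written as pseudocode with simplified signatures (the real runner passes more arguments and handles RNN states, masks, and per-agent buffers; collect_fn stands in for the collect() method shown later):

def run(envs, trainer, buffer, collect_fn, episodes, episode_length):
    obs = envs.reset()                                       # (n_threads, n_agents, obs_dim)
    for episode in range(episodes):
        for step in range(episode_length):                   # 1) + 2) collect trajectories
            values, actions, action_log_probs, rnn_states, rnn_states_critic = collect_fn(step)
            obs, rewards, dones, infos = envs.step(actions)
            buffer.insert(obs, rewards, dones, values, actions, action_log_probs,
                          rnn_states, rnn_states_critic)
        next_value = trainer.policy.get_values(buffer.share_obs[-1])   # value of the last state
        buffer.compute_returns(next_value, trainer.value_normalizer)   # 3) discounted returns / GAE
        train_info = trainer.train(buffer)                   # 4) PPO update of actor and critic
        buffer.after_update()
        # 5) periodically save the model and log the average episode rewards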

3.2 Environment Setup

Open train.py in the train folder; running this file starts model training.

  • The train file gathers the config settings and then calls run, which enters env_runner.

  • env_core.py contains the environment settings: 2 agents, an observation of obs = 14 per agent, and an action dimension of 5.

3.3 Network Definitions

The code is in rMAPPOPolicy.py:

        # From env_core, each agent has 14 observation features (obs) and 5 actions; there are 2 agents in total.
        self.actor = R_Actor(args, self.obs_space, self.act_space, self.device)  # the actor's input is a single agent's observation, i.e. dim 14
        self.critic = R_Critic(args, self.share_obs_space, self.device)  # the critic's input is share_obs_space, i.e. 14*2=28

See r_actor_critic.py for the definitions of R_Actor and R_Critic.

3.3.1 R_Actor

class R_Actor(nn.Module):
    """
    Actor network class for MAPPO. Outputs actions given observations.
    :param args: (argparse.Namespace) arguments containing relevant model information.
    :param obs_space: (gym.Space) observation space.
    :param action_space: (gym.Space) action space.
    :param device: (torch.device) specifies the device to run on (cpu/gpu).
    """
    def __init__(self, args, obs_space, action_space, device=torch.device("cpu")):
        super(R_Actor, self).__init__()
        self.hidden_size = args.hidden_size

        self._gain = args.gain
        self._use_orthogonal = args.use_orthogonal
        self._use_policy_active_masks = args.use_policy_active_masks
        self._use_naive_recurrent_policy = args.use_naive_recurrent_policy
        self._use_recurrent_policy = args.use_recurrent_policy
        self._recurrent_N = args.recurrent_N
        self.tpdv = dict(dtype=torch.float32, device=device)

        obs_shape = get_shape_from_obs_space(obs_space)
        base = CNNBase if len(obs_shape) == 3 else MLPBase
        self.base = base(args, obs_shape)

        if self._use_naive_recurrent_policy or self._use_recurrent_policy:
            self.rnn = RNNLayer(self.hidden_size, self.hidden_size, self._recurrent_N, self._use_orthogonal)  # the recurrent layer

        self.act = ACTLayer(action_space, self.hidden_size, self._use_orthogonal, self._gain)

        self.to(device)

3.3.2 R_Critic

class R_Critic(nn.Module):
    """
    Critic network class for MAPPO. Outputs value function predictions given centralized input (MAPPO) or
                            local observations (IPPO).
    :param args: (argparse.Namespace) arguments containing relevant model information.
    :param cent_obs_space: (gym.Space) (centralized) observation space.
    :param device: (torch.device) specifies the device to run on (cpu/gpu).
    """
    def __init__(self, args, cent_obs_space, device=torch.device("cpu")):
        super(R_Critic, self).__init__()
        self.hidden_size = args.hidden_size
        self._use_orthogonal = args.use_orthogonal
        self._use_naive_recurrent_policy = args.use_naive_recurrent_policy
        self._use_recurrent_policy = args.use_recurrent_policy
        self._recurrent_N = args.recurrent_N
        self._use_popart = args.use_popart
        self.tpdv = dict(dtype=torch.float32, device=device)
        init_method = [nn.init.xavier_uniform_, nn.init.orthogonal_][self._use_orthogonal]

        cent_obs_shape = get_shape_from_obs_space(cent_obs_space)
        base = CNNBase if len(cent_obs_shape) == 3 else MLPBase
        self.base = base(args, cent_obs_shape)

        if self._use_naive_recurrent_policy or self._use_recurrent_policy:
            self.rnn = RNNLayer(self.hidden_size, self.hidden_size, self._recurrent_N, self._use_orthogonal)

        def init_(m):
            return init(m, init_method, lambda x: nn.init.constant_(x, 0))

        if self._use_popart:
            self.v_out = init_(PopArt(self.hidden_size, 1, device=device))
        else:
            self.v_out = init_(nn.Linear(self.hidden_size, 1))

        self.to(device)

3.4 Sampling Flow

3.4.1 Environment Initialization

Five environments (rollout threads) are instantiated:

def make_train_env(all_args):
    # get_env_fn is a nested factory; rank identifies the environment instance.
    def get_env_fn(rank):
        # init_env builds and seeds a single environment.
        def init_env():
            # TODO Important: choose a continuous or discrete action space by
            # commenting/uncommenting the corresponding pair of lines below.

            from envs.env_continuous import ContinuousActionEnv
            env = ContinuousActionEnv()

            # from envs.env_discrete import DiscreteActionEnv
            # env = DiscreteActionEnv()

            # Seed every environment differently so the parallel instances are decorrelated.
            env.seed(all_args.seed + rank * 1000)
            return env

        return init_env  # get_env_fn returns the init_env function (nothing is executed yet)

    # The list comprehension builds one environment-initialization function per rollout thread and
    # wraps them in a DummyVecEnv. n_rollout_threads (int, default 5) controls how many environments
    # are sampled in parallel: more threads speed up sampling at the cost of extra compute.
    return DummyVecEnv([get_env_fn(i) for i in range(all_args.n_rollout_threads)])
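To make the wrapper less opaque, here is a minimal, self-contained sketch of what a DummyVecEnv-style wrapper does conceptually (not the repository's class, which additionally handles spaces, automatic resets, and seeding); the init_env functions are only executed inside the wrapper, which is why make_train_env itself only hands over functions:

import numpy as np

class MiniVecEnv:
    # Steps several environments sequentially behind a single reset()/step() interface.
    def __init__(self, env_fns):
        self.envs = [fn() for fn in env_fns]     # the environment factories are executed here

    def reset(self):
        return np.stack([env.reset() for env in self.envs])

    def step(self, actions):                     # actions: one action set per environment
        results = [env.step(a) for env, a in zip(self.envs, actions)]
        obs, rewards, dones, infos = map(list, zip(*results))
        return np.stack(obs), np.stack(rewards), np.stack(dones), infos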

If the centralized-V option is used, initialization builds a share_obs for every agent, i.e. n x obs (the concatenation of all agents' observations).

    def __init__(self):
        self.env = EnvCore()
        self.num_agent = self.env.agent_num

        self.signal_obs_dim = self.env.obs_dim
        self.signal_action_dim = self.env.action_dim

        # if true, action is a number 0...N, otherwise action is a one-hot N-dimensional vector
        self.discrete_action_input = False

        self.movable = True

        # configure spaces
        self.action_space = []
        self.observation_space = []
        self.share_observation_space = []

        share_obs_dim = 0
        total_action_space = []
        for agent_idx in range(self.num_agent):
            # physical action space
            u_action_space = spaces.Discrete(self.signal_action_dim)  # 5 discrete actions

            # if self.movable:
            total_action_space.append(u_action_space)

            # total action space
            # if len(total_action_space) > 1:
            #     # all action spaces are discrete, so simplify to MultiDiscrete action space
            #     if all(
            #         [
            #             isinstance(act_space, spaces.Discrete)
            #             for act_space in total_action_space
            #         ]
            #     ):
            #         act_space = MultiDiscrete(
            #             [[0, act_space.n - 1] for act_space in total_action_space]
            #         )
            #     else:
            #         act_space = spaces.Tuple(total_action_space)
            #     self.action_space.append(act_space)
            # else:
            self.action_space.append(total_action_space[agent_idx])

            # observation space
            share_obs_dim += self.signal_obs_dim
            self.observation_space.append(
                spaces.Box(
                    low=-np.inf,
                    high=+np.inf,
                    shape=(self.signal_obs_dim,),
                    dtype=np.float32,
                )
            )  # [-inf,inf]

        self.share_observation_space = [
            spaces.Box(low=-np.inf, high=+np.inf, shape=(share_obs_dim,), dtype=np.float32)
            for _ in range(self.num_agent)
        ]

3.4.2 Data Collection (Collect Trajectories)

Data is collected with the collect() method in runner/env_runner.py; at every step, every agent is sampled. self.trainer.prep_rollout() puts the actor and critic into eval mode.

    def collect(self, step):  # an episode has episode_length (200) steps; collect() runs once per step
        values = []
        actions = []
        temp_actions_env = []
        action_log_probs = []
        rnn_states = []
        rnn_states_critic = []

        for agent_id in range(self.num_agents):  # every agent does this, i.e. calls get_actions once
            # Rollout (eval) mode: no gradient computation or parameter updates are needed during inference.
            self.trainer[agent_id].prep_rollout()
            value, action, action_log_prob, rnn_state, rnn_state_critic = self.trainer[
                agent_id
            ].policy.get_actions(
                self.buffer[agent_id].share_obs[step],
                self.buffer[agent_id].obs[step],  # this agent's observation at this step
                # rnn_states is passed to get_actions as rnn_states_actor;
                # shape: (episode_length + 1, n_rollout_threads, recurrent_N, rnn_hidden_size)
                self.buffer[agent_id].rnn_states[step],
                self.buffer[agent_id].rnn_states_critic[step],
                self.buffer[agent_id].masks[step],
            )  # this is R_MAPPOPolicy's get_actions, which runs both the actor and the critic

The code above feeds one time step of data into the MAPPO policy R_MAPPOPolicy(). Inside get_actions(), the actor produces the action and its log-probability, and the critic produces the value estimate for cent_obs.

3.4.3 Actor Network Inputs and Outputs

        # The actor here is R_Actor from r_actor_critic.py: compute actions from the given inputs.
        # The actor passes obs through an (optional) RNN, then uses obs and available_actions to compute the action.
        actions, action_log_probs, rnn_states_actor = self.actor(obs,                # local agent inputs to the actor
                                                                 rnn_states_actor,   # RNN states for the actor, if it is recurrent
                                                                 masks,
                                                                 available_actions,  # None means all actions are available
                                                                 deterministic)
        # R_Actor subclasses nn.Module, so calling self.actor(...) invokes its forward method.
        # The critic is R_Critic from r_actor_critic.py; its input is cent_obs.
        values, rnn_states_critic = self.critic(cent_obs, rnn_states_critic, masks)  # rnn_states_actor and rnn_states_critic have the same shape
        return values, actions, action_log_probs, rnn_states_actor, rnn_states_critic  # the returned RNN states are zeros when no RNN is used

3.4.4 Critic Network Inputs and Outputs

values, rnn_states_critic = self.critic(cent_obs, rnn_states_critic, masks)  # rnn_states_actor and rnn_states_critic have the same shape

3.4.5 Reward and Next State

The step() flow in runner/env_runner.py chooses the appropriate environment step (for example the step function in env_continuous.py), runs one step to obtain obs, and then calls insert() to add the resulting data to the buffer:

                # Observe reward and next obs
                obs, rewards, dones, infos = self.envs.step(actions_env)  # feed the actions to the environment and observe the next step

                data = (
                    obs,
                    rewards,
                    dones,
                    infos,
                    values,
                    actions,
                    action_log_probs,
                    rnn_states,
                    rnn_states_critic,
                )

                # insert data into buffer
                self.insert(data)  # put the data into the buffer

The obs returned has shape (5, 14): 5 threads, 14 features.

cent_obs has shape (5, 28): 28 = 2 agents x 14 features.

The obs coming back from the environment is still (5, 14). The insert method adds the data (including the reward) to the buffer, and inside insert a global share_obs of shape (5, 28) is built, as sketched below.
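A small sketch of how such a share_obs can be built from the per-agent observations (illustrative only; the repository does this inside the runner's insert() with its own array layout):

import numpy as np

n_threads, n_agents, obs_dim = 5, 2, 14
obs = np.random.rand(n_threads, n_agents, obs_dim)                        # (5, 2, 14) local observations

# Concatenate all agents' observations into one centralized vector per thread ...
share_obs = obs.reshape(n_threads, -1)                                    # (5, 28)
# ... and give every agent the same centralized observation as critic input.
share_obs_per_agent = np.repeat(share_obs[:, None, :], n_agents, axis=1)  # (5, 2, 28)
print(share_obs.shape, share_obs_per_agent.shape)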

Reward function: the environment's reward function R(s_t, a_t, s_{t+1}) provides the immediate reward r_t for taking action a_t in state s_t and transitioning to state s_{t+1}.

In the lightweight code, however, the next state s and reward r returned by env_core.py are random; for a real application you have to rewrite this step yourself (a hedged example follows the env_core code below).

    def reset(self):
        """
        # When self.agent_num is set to 2 agents, the return value is a list; each element is an observation array of shape (self.obs_dim,)
        """
        sub_agent_obs = []
        for i in range(self.agent_num):
            sub_obs = np.random.random(size=(14,))  # the observations are random too
            sub_agent_obs.append(sub_obs)
        return sub_agent_obs

    def step(self, actions):
        """
        # When self.agent_num is set to 2 agents, `actions` is a list with 2 elements; each element is an action array of shape (self.action_dim,)
        # With the default parameters the action dimension is 5, so each element has shape (5,)
        """
        sub_agent_obs = []
        sub_agent_reward = []
        sub_agent_done = []
        sub_agent_info = []
        for i in range(self.agent_num):
            sub_agent_obs.append(np.random.random(size=(14,)))
            sub_agent_reward.append([np.random.rand()])  # the reward here is random
            sub_agent_done.append(False)
            sub_agent_info.append({})

        return [sub_agent_obs, sub_agent_reward, sub_agent_done, sub_agent_info]
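For a real task, reset() and step() must return genuine observations and rewards instead of random numbers. Below is a hedged example of what that could look like for a hypothetical target-reaching task; the class name, state layout, and reward shaping are invented for illustration and are not part of light_mappo:

import numpy as np

class MyTaskEnvCore:
    # Drop-in style sketch of an EnvCore replacement with a task-specific reward (hypothetical).
    def __init__(self):
        self.agent_num = 2
        self.obs_dim = 14
        self.action_dim = 5
        self.agent_pos = np.zeros((self.agent_num, 2))
        self.target = np.array([5.0, 5.0])

    def reset(self):
        self.agent_pos = np.random.uniform(-1, 1, size=(self.agent_num, 2))
        return [self._obs(i) for i in range(self.agent_num)]

    def step(self, actions):
        sub_agent_obs, sub_agent_reward, sub_agent_done, sub_agent_info = [], [], [], []
        moves = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
        for i, a in enumerate(actions):
            self.agent_pos[i] += 0.1 * moves[int(np.argmax(a))]    # treat the one-hot action as a move
            dist = np.linalg.norm(self.agent_pos[i] - self.target)
            sub_agent_obs.append(self._obs(i))
            sub_agent_reward.append([-dist])                       # closer to the target = higher reward
            sub_agent_done.append(bool(dist < 0.1))
            sub_agent_info.append({})
        return [sub_agent_obs, sub_agent_reward, sub_agent_done, sub_agent_info]

    def _obs(self, i):
        # Pad position + target into the expected 14-dimensional observation.
        vec = np.concatenate([self.agent_pos[i], self.target])
        return np.concatenate([vec, np.zeros(self.obs_dim - vec.size)])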

In the official open-source code (envs/mpe/environment.py), the reward is global: in the shared-reward (cooperative) case every agent receives the same reward, namely the sum of all agents' rewards.

# record observation for each agent
        for i, agent in enumerate(self.agents):
            obs_n.append(self._get_obs(agent))
            reward_n.append([self._get_reward(agent)])
            done_n.append(self._get_done(agent))
            info = {'individual_reward': self._get_reward(agent)}
            env_info = self._get_info(agent)
            if 'fail' in env_info.keys():
                info['fail'] = env_info['fail']
            info_n.append(info)

        # all agents get total reward in cooperative case, if shared reward, all agents have the same reward, and reward is sum
        reward = np.sum(reward_n)
        if self.shared_reward:
            reward_n = [[reward]] * self.n

        if self.post_step_callback is not None:
            self.post_step_callback(self.world)

        return obs_n, reward_n, done_n, info_n  # cooperative: shared reward, i.e. the sum of the individual rewards

After the loop over agents finishes for a step, the outer episode loop continues; an episode is 200 steps long, i.e. the step loop runs 200 times.

3.5 Computing Advantage Estimates (Compute Advantage Estimates)

Before training starts, self.compute() is called to compute the discounted cumulative returns of this episode (200 steps).

It calls each agent's prep_rollout() method, which puts the agent into inference mode so that the evaluation does not update any training gradients.

            # compute return and update network
            self.compute()  # compute the advantage estimate A via GAE over the trajectory, using PopArt


3.5.1 Computing next_value (the value of the last state)

next_value: each agent's policy.get_values() is called to estimate the value of the state after the last episode step, i.e. the value V of the episode's final state. The arguments are the agent's shared observations, the critic's RNN states, and the environment masks.
The result is then converted to a NumPy array with _t2n().

The compute function in runner/shared_runner/base_runner.py:

    def compute(self):
        for agent_id in range(self.num_agents):
            self.trainer[agent_id].prep_rollout()  # evaluation mode: no training gradients are updated
            # every agent runs get_values
            next_value = self.trainer[agent_id].policy.get_values(
                self.buffer[agent_id].share_obs[-1],
                self.buffer[agent_id].rnn_states_critic[-1],
                self.buffer[agent_id].masks[-1],
            )
            next_value = _t2n(next_value)
            self.buffer[agent_id].compute_returns(next_value, self.trainer[agent_id].value_normalizer)

The forward function of the R_Critic class in r_actor_critic.py:

    def forward(self, cent_obs, rnn_states, masks):
        """
        Compute value function predictions from the given inputs.
        :param cent_obs: (np.ndarray / torch.Tensor) observation inputs into network.
        :param rnn_states: (np.ndarray / torch.Tensor) if RNN network, hidden states for RNN.
        :param masks: (np.ndarray / torch.Tensor) mask tensor denoting if RNN states should be reinitialized to zeros.

        :return values: (torch.Tensor) value function predictions.
        :return rnn_states: (torch.Tensor) updated RNN hidden states.
        """
        cent_obs = check(cent_obs).to(**self.tpdv)
        rnn_states = check(rnn_states).to(**self.tpdv)
        masks = check(masks).to(**self.tpdv)

        critic_features = self.base(cent_obs)  # (5, 28) becomes (5, 64)
        if self._use_naive_recurrent_policy or self._use_recurrent_policy:
            critic_features, rnn_states = self.rnn(critic_features, rnn_states, masks)  # the features are also passed through the RNN
        values = self.v_out(critic_features)  # v_out maps the features to values of shape (5, 1)

        return values, rnn_states

3.5.2 Computing Discounted Returns

compute() then calls compute_returns().

GAE reference paper: High-Dimensional Continuous Control Using Generalized Advantage Estimation, http://arxiv.org/abs/1506.02438

GAE is used here to compute the discounted returns, and the values are (optionally) normalized. The compute_returns function is in shared_buffer.py.

  • advantage: A^\pi(s_t,a_t)=Q^\pi(s_t,a_t)-V^\pi(s_t)

  • A^\pi(s_t,a_t) measures how much better taking action a_t in state s_t is expected to be relative to the baseline (which is why the baseline V is subtracted); in essence it is an estimate, made at time t, of the future return.

  • Q^\pi(s_t,a_t) can be written in residual form:

  • \delta_t^{V} = r_t+\gamma V(s_{t+1})-V(s_t)

  • Q^\pi(s_t,a_t) is the expected future return when taking action a_t in state s_t. V(s_t) and V(s_{t+1}) are the expected future returns from the current and next state, i.e. the environment-based estimates of future return, and r_t is the reward actually observed at the current step. The residual form uses the current observation to move the estimate one small step toward the true return, so \delta_t indicates how good the current action a_t is.

  • GAE is used because it is a weighted average of k-step returns, which balances bias and variance: a large k gives low bias but high variance, while a small k gives high bias and low variance.

  • The rough idea of GAE is to weight the advantage estimates computed from the 1-step up to the n-step return; the weighted mean is the final advantage, which reduces variance.

3.5.2.1 Advantages of GAE
  • Bias Reduction: GAE reduces bias in advantage estimation compared to using just the discounted sum of rewards. By incorporating information from multiple time steps and considering the difference between predicted values, GAE provides a more accurate estimate of the advantages.

  • Variance Reduction: GAE also helps in reducing variance in advantage estimation. It achieves this by incorporating bootstrapping, which allows for more stable updates to the policy network.

  • Better Handling of Variable-Length Episodes: GAE can handle variable-length episodes more effectively compared to methods relying solely on discounted rewards. It does so by incorporating masks to account for episode terminations and ensuring that the advantage estimates are appropriately truncated.

3.5.2.2 How GAE Is Computed

  • Define \delta_t^{V} = r_t+\gamma V(s_{t+1})-V(s_t).

A_t^{GAE}=\sum_{l=0}^{\infty}(\gamma \lambda )^l\delta _{t+l}^V

  • In practice the sum cannot run to infinity; it is truncated at the fixed sampling length of n steps (a simplified stand-alone sketch follows below).
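A simplified, stand-alone version of that truncated backward recursion (plain NumPy, without the masks, bad_masks, and PopArt denormalization that the buffer code below adds):

import numpy as np

def compute_gae(rewards, values, next_value, gamma=0.99, gae_lambda=0.95):
    # rewards: (T,), values: (T,) critic predictions V(s_t), next_value: bootstrap value of the last state.
    values = np.append(values, next_value)
    advantages = np.zeros_like(rewards)
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # delta_t^V
        gae = delta + gamma * gae_lambda * gae                   # A_t = delta_t + gamma*lambda*A_{t+1}
        advantages[t] = gae
    returns = advantages + values[:-1]                           # R_t = A_t + V(s_t), what the buffer stores as returns
    return advantages, returns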

This code snippet is a method called compute_returns within a class. It's used to compute returns for each step in a trajectory, which are either discounted sums of rewards or computed using Generalized Advantage Estimation (GAE). Here's a detailed explanation of what it does:

  • The method takes two main parameters:

    • next_value: This represents the value predictions for the step after the last episode step. It's the estimate of the value function for the next state.
    • value_normalizer: This parameter is an optional value normalizer instance, typically used for techniques like PopArt normalization.
  • If GAE is used (self._use_gae is True), it iterates backward through the trajectory to calculate the advantages and returns.

  • If GAE is not used, it directly calculates the returns using discounted sum of rewards.

  • The computed returns are stored in the self.returns array.

3.5.2.3 Computing returns: discounted sum of rewards or GAE

  • self.buffer: the buffer's compute_returns() method is called with the value estimate of the next state and the agent's value normalizer. It uses that value estimate to compute each agent's return R and stores it in the buffer.

  • To avoid repeatedly recomputing the accumulated sums, the recursion runs backwards: first \delta for the last step, then the second-to-last step, and so on back to the very first step of the rollout.

  • The compute_returns function in util/shared_buffer.py takes next_value and the value normalizer as inputs and reads the rewards internally:

        def compute_returns(self, next_value, value_normalizer=None):
            """
            Compute returns either as discounted sum of rewards, or using GAE.
            :param next_value: (np.ndarray) value predictions for the step after the last episode step.
            :param value_normalizer: (PopArt) If not None, PopArt value normalizer instance.
            """
            if self._use_proper_time_limits:
                if self._use_gae:
                    self.value_preds[-1] = next_value
                    gae = 0
                    for step in reversed(range(self.rewards.shape[0])):
                        if self._use_popart or self._use_valuenorm:
                            # step + 1
                            delta = self.rewards[step] + self.gamma * value_normalizer.denormalize(
                                self.value_preds[step + 1]) * self.masks[step + 1] \
                                    - value_normalizer.denormalize(self.value_preds[step])
                            gae = delta + self.gamma * self.gae_lambda * gae * self.masks[step + 1]
                            gae = gae * self.bad_masks[step + 1]
                            self.returns[step] = gae + value_normalizer.denormalize(self.value_preds[step])
                        else:
                            delta = self.rewards[step] + self.gamma * self.value_preds[step + 1] * self.masks[step + 1] - \
                                    self.value_preds[step]
                            gae = delta + self.gamma * self.gae_lambda * self.masks[step + 1] * gae
                            gae = gae * self.bad_masks[step + 1]
                            self.returns[step] = gae + self.value_preds[step]
                else:
                    self.returns[-1] = next_value
                    for step in reversed(range(self.rewards.shape[0])):
                        if self._use_popart or self._use_valuenorm:
                            self.returns[step] = (self.returns[step + 1] * self.gamma * self.masks[step + 1] + self.rewards[
                                step]) * self.bad_masks[step + 1] \
                                                 + (1 - self.bad_masks[step + 1]) * value_normalizer.denormalize(
                                self.value_preds[step])
                        else:
                            self.returns[step] = (self.returns[step + 1] * self.gamma * self.masks[step + 1] + self.rewards[
                                step]) * self.bad_masks[step + 1] \
                                                 + (1 - self.bad_masks[step + 1]) * self.value_preds[step]
            else:
                if self._use_gae:
                    self.value_preds[-1] = next_value
                    gae = 0
                    for step in reversed(range(self.rewards.shape[0])):
                        if self._use_popart or self._use_valuenorm:
                            delta = self.rewards[step] + self.gamma * value_normalizer.denormalize(
                                self.value_preds[step + 1]) * self.masks[step + 1] \
                                    - value_normalizer.denormalize(self.value_preds[step])
                            gae = delta + self.gamma * self.gae_lambda * self.masks[step + 1] * gae
                            self.returns[step] = gae + value_normalizer.denormalize(self.value_preds[step])
                        else:
                            delta = self.rewards[step] + self.gamma * self.value_preds[step + 1] * self.masks[step + 1] - \
                                    self.value_preds[step]
                            gae = delta + self.gamma * self.gae_lambda * self.masks[step + 1] * gae
                            self.returns[step] = gae + self.value_preds[step]
                else:
                    self.returns[-1] = next_value
                    for step in reversed(range(self.rewards.shape[0])):
                        self.returns[step] = self.returns[step + 1] * self.gamma * self.masks[step + 1] + self.rewards[step]

3.6 Training

After the discounted returns have been computed, training starts. train_infos = self.train() in env_runner() calls the train function in base_runner.py:

        def train(self):
            train_infos = []
            for agent_id in range(self.num_agents):
                self.trainer[agent_id].prep_training()
                train_info = self.trainer[agent_id].train(self.buffer[agent_id])
                train_infos.append(train_info)
                self.buffer[agent_id].after_update()

            return train_infos

That calls the train function in r_mappo.py, which passes the sampled data on to ppo_update() in the same file.

The overall flow of ppo_update is:
1) sample from the buffer to build sample;
2) pass the sampled data to evaluate_actions in rMAPPOPolicy.py to obtain values, action_log_probs, dist_entropy;
3) compute the actor loss;
4) compute the critic loss.

3.6.1 Sampling and Computing the Advantage Function

Step into train_infos = self.trainer.train(self.buffer). The train function first computes the advantage function from the data (the advantage is computed from the global observations and rewards).

The train function in r_mappo.py:

    def train(self, buffer, update_actor=True):
        """
        Perform a training update using minibatch GD.
        :param buffer: (SharedReplayBuffer) buffer containing training data.
        :param update_actor: (bool) whether to update actor network.

        :return train_info: (dict) contains information regarding training update (e.g. loss, grad norms, etc).
        """
        # Compute the GAE advantage. compute_returns stored next_value (obtained from share_obs) in value_preds.
        if self._use_popart or self._use_valuenorm:
            advantages = buffer.returns[:-1] - self.value_normalizer.denormalize(buffer.value_preds[:-1])  # returns - values = GAE, shape (200, 5, 1)
        else:
            advantages = buffer.returns[:-1] - buffer.value_preds[:-1]  # excluding the last time step: returns has 201 entries, advantages has 200
        advantages_copy = advantages.copy()
        advantages_copy[buffer.active_masks[:-1] == 0.0] = np.nan
        mean_advantages = np.nanmean(advantages_copy)  # mean
        std_advantages = np.nanstd(advantages_copy)    # standard deviation
        advantages = (advantages - mean_advantages) / (std_advantages + 1e-5)  # shape (200, 5, 1)

Then the data is sampled from the buffer, collapsing the thread and agent dimensions (a small reshape sketch follows below).
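The flattening itself is just a reshape that merges the time and rollout-thread axes so the generator can hand out mini-batches of individual transitions (a sketch of what the feed-forward generator effectively does to each stored array):

import numpy as np

episode_length, n_rollout_threads, dim = 200, 5, 14
obs_batch = np.random.rand(episode_length, n_rollout_threads, dim)   # (200, 5, 14)

# Merge the first two axes: 200 steps x 5 threads -> 1000 independent samples.
flat = obs_batch.reshape(-1, dim)                                    # (1000, 14)
print(flat.shape)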

3.6.2 ppo_update: obtaining values, action_log_probs, dist_entropy

After sampling, the train function in r_mappo.py passes the sampled data to ppo_update() in r_mappo.py:

        for _ in range(self.ppo_epoch):#Loop Over PPO Epochs: In each epoch, the policy and value networks are updated using the PPO algorithm.
            #Depending on the type of policy network used (self._use_recurrent_policy and self._use_naive_recurrent flags), different data generators are selected from the buffer. These generators yield batches of data used for training.
            if self._use_recurrent_policy:
                data_generator = buffer.recurrent_generator(advantages, self.num_mini_batch, self.data_chunk_length)
            elif self._use_naive_recurrent:
                data_generator = buffer.naive_recurrent_generator(advantages, self.num_mini_batch)
            else:
                data_generator = buffer.feed_forward_generator(advantages, self.num_mini_batch)  # num_mini_batch is 1

            for sample in data_generator:
                value_loss, critic_grad_norm, policy_loss, dist_entropy, actor_grad_norm, imp_weights \
                    = self.ppo_update(sample, update_actor)
                # After each PPO update, the training statistics are aggregated.
                train_info['value_loss'] += value_loss.item()
                train_info['policy_loss'] += policy_loss.item()
                train_info['dist_entropy'] += dist_entropy.item()
                train_info['actor_grad_norm'] += actor_grad_norm
                train_info['critic_grad_norm'] += critic_grad_norm
                train_info['ratio'] += imp_weights.mean()

        num_updates = self.ppo_epoch * self.num_mini_batch

The ppo_update function in r_mappo.py mainly performs the actor update and the critic update; it returns value_loss, critic_grad_norm, policy_loss, dist_entropy, actor_grad_norm, imp_weights.

values, action_log_probs, dist_entropy = self.policy.evaluate_actions(share_obs_batch,
                                                                              obs_batch,
                                                                              rnn_states_batch,
                                                                              rnn_states_critic_batch,
                                                                              actions_batch,
                                                                              masks_batch,
                                                                              available_actions_batch,
                                                                              active_masks_batch)

ppo_update in r_mappo.py calls the evaluate_actions() function in rMAPPOPolicy.py:

  • obs is fed to the actor network to obtain action_log_probs and dist_entropy.

  • cent_obs is fed to the critic to obtain the new values.

Concretely, evaluate_actions receives the local observations (obs), the centralized observations (cent_obs), the actions, the RNN states, and so on. It then calls the actor's evaluate_actions method to obtain the log-probabilities of the actions (action_log_probs) and the entropy of the action distribution (dist_entropy).

The evaluate_actions function in rMAPPOPolicy.py:
    def evaluate_actions(self, cent_obs, obs, rnn_states_actor, rnn_states_critic, action, masks,
                         available_actions=None, active_masks=None):
        """
        Get action logprobs / entropy and value function predictions for actor update.
        :param cent_obs (np.ndarray): centralized input to the critic.
        :param obs (np.ndarray): local agent inputs to the actor.
        :param rnn_states_actor: (np.ndarray) if actor is RNN, RNN states for actor.
        :param rnn_states_critic: (np.ndarray) if critic is RNN, RNN states for critic.
        :param action: (np.ndarray) actions whose log probabilites and entropy to compute.
        :param masks: (np.ndarray) denotes points at which RNN states should be reset.
        :param available_actions: (np.ndarray) denotes which actions are available to agent
                                  (if None, all actions available)
        :param active_masks: (torch.Tensor) denotes whether an agent is active or dead.

        :return values: (torch.Tensor) value function predictions.
        :return action_log_probs: (torch.Tensor) log probabilities of the input actions.
        :return dist_entropy: (torch.Tensor) action distribution entropy for the given inputs.
        """
        action_log_probs, dist_entropy = self.actor.evaluate_actions(obs,
                                                                     rnn_states_actor,
                                                                     action,
                                                                     masks,
                                                                     available_actions,
                                                                     active_masks)

        values, _ = self.critic(cent_obs, rnn_states_critic, masks)
        return values, action_log_probs, dist_entropy
evaluate_actions in rMAPPOPolicy.py in turn calls the evaluate_actions function of the R_Actor class in r_actor_critic.py:
    def evaluate_actions(self, obs, rnn_states, action, masks, available_actions=None, active_masks=None):
        """
        Compute log probability and entropy of given actions.
        :param obs: (torch.Tensor) observation inputs into network.
        :param action: (torch.Tensor) actions whose entropy and log probability to evaluate.
        :param rnn_states: (torch.Tensor) if RNN network, hidden states for RNN.
        :param masks: (torch.Tensor) mask tensor denoting if hidden states should be reinitialized to zeros.
        :param available_actions: (torch.Tensor) denotes which actions are available to agent
                                                              (if None, all actions available)
        :param active_masks: (torch.Tensor) denotes whether an agent is active or dead.

        :return action_log_probs: (torch.Tensor) log probabilities of the input actions.
        :return dist_entropy: (torch.Tensor) action distribution entropy for the given inputs.
        """
        obs = check(obs).to(**self.tpdv)
        rnn_states = check(rnn_states).to(**self.tpdv)
        action = check(action).to(**self.tpdv)
        masks = check(masks).to(**self.tpdv)
        if available_actions is not None:
            available_actions = check(available_actions).to(**self.tpdv)

        if active_masks is not None:
            active_masks = check(active_masks).to(**self.tpdv)

        actor_features = self.base(obs)

        if self._use_naive_recurrent_policy or self._use_recurrent_policy:
            actor_features, rnn_states = self.rnn(actor_features, rnn_states, masks)

        action_log_probs, dist_entropy = self.act.evaluate_actions(actor_features,
                                                                   action, available_actions,
                                                                   active_masks=
                                                                   active_masks if self._use_policy_active_masks
                                                                   else None)

        return action_log_probs, dist_entropy

values, _ = self.critic(cent_obs, rnn_states_critic, masks) in rMAPPOPolicy.py calls the forward(self, cent_obs, rnn_states, masks) function of the R_Critic class in r_actor_critic.py to obtain values:

    def forward(self, cent_obs, rnn_states, masks):
        """
        Compute value function predictions from the given inputs.
        :param cent_obs: (np.ndarray / torch.Tensor) observation inputs into network.
        :param rnn_states: (np.ndarray / torch.Tensor) if RNN network, hidden states for RNN.
        :param masks: (np.ndarray / torch.Tensor) mask tensor denoting if RNN states should be reinitialized to zeros.

        :return values: (torch.Tensor) value function predictions.
        :return rnn_states: (torch.Tensor) updated RNN hidden states.
        """
        cent_obs = check(cent_obs).to(**self.tpdv)
        rnn_states = check(rnn_states).to(**self.tpdv)
        masks = check(masks).to(**self.tpdv)

        critic_features = self.base(cent_obs)  # (5, 28) becomes (5, 64)
        if self._use_naive_recurrent_policy or self._use_recurrent_policy:
            critic_features, rnn_states = self.rnn(critic_features, rnn_states, masks)  # the features are also passed through the RNN
        values = self.v_out(critic_features)  # v_out maps the features to values of shape (5, 1)

        return values, rnn_states

3.6.3 ppo_update: computing the actor loss and updating the actor

train in r_mappo.py calls ppo_update. With the new and old action probabilities and the advantage function A(s_t, a_t) (computed from all agents' states and actions), the actor network can be updated.

In ppo_update (r_mappo.py), the actor loss is defined as the negative of the expected clipped surrogate. Optimization algorithms update parameters by minimizing a loss, and since we want to maximize the expected objective, we minimize its negative.

# actor update
        imp_weights = torch.exp(action_log_probs - old_action_log_probs_batch)

        surr1 = imp_weights * adv_targ
        surr2 = torch.clamp(imp_weights, 1.0 - self.clip_param, 1.0 + self.clip_param) * adv_targ

        if self._use_policy_active_masks:
            policy_action_loss = (-torch.sum(torch.min(surr1, surr2),
                                             dim=-1,
                                             keepdim=True) * active_masks_batch).sum() / active_masks_batch.sum()
        else:
            policy_action_loss = -torch.sum(torch.min(surr1, surr2), dim=-1, keepdim=True).mean()

        policy_loss = policy_action_loss

        self.policy.actor_optimizer.zero_grad()

        if update_actor:
            (policy_loss - dist_entropy * self.entropy_coef).backward()  # backpropagation: this computes the gradients for the update

        if self._use_max_grad_norm:
            actor_grad_norm = nn.utils.clip_grad_norm_(self.policy.actor.parameters(), self.max_grad_norm)
        else:
            actor_grad_norm = get_gard_norm(self.policy.actor.parameters())

        self.policy.actor_optimizer.step()

3.6.4 ppo_update: computing the critic loss and updating the critic

The new values, the old value_preds_batch, the computed return_batch, and active_masks_batch are passed to the cal_value_loss function in r_mappo.py to compute the critic loss:

value_loss = self.cal_value_loss(values, value_preds_batch, return_batch, active_masks_batch)

cal_value_loss first clips the value prediction, then forms both the clipped and the unclipped error, computes the loss from them, and the resulting loss is backpropagated.

  def cal_value_loss(self, values, value_preds_batch, return_batch, active_masks_batch):
        """
        Calculate value function loss.
        :param values: (torch.Tensor) value function predictions.
        :param value_preds_batch: (torch.Tensor) "old" value  predictions from data batch (used for value clip loss)
        :param return_batch: (torch.Tensor) reward to go returns.
        :param active_masks_batch: (torch.Tensor) denotes if agent is active or dead at a given timestep.

        :return value_loss: (torch.Tensor) value function loss.
        """
        value_pred_clipped = value_preds_batch + (values - value_preds_batch).clamp(-self.clip_param,
                                                                                    self.clip_param)
        if self._use_popart or self._use_valuenorm:
            self.value_normalizer.update(return_batch)
            error_clipped = self.value_normalizer.normalize(return_batch) - value_pred_clipped
            error_original = self.value_normalizer.normalize(return_batch) - values
        else:
            error_clipped = return_batch - value_pred_clipped
            error_original = return_batch - values

        if self._use_huber_loss:
            value_loss_clipped = huber_loss(error_clipped, self.huber_delta)
            value_loss_original = huber_loss(error_original, self.huber_delta)
        else:
            value_loss_clipped = mse_loss(error_clipped)
            value_loss_original = mse_loss(error_original)

        if self._use_clipped_value_loss:
            value_loss = torch.max(value_loss_original, value_loss_clipped)
        else:
            value_loss = value_loss_original

        if self._use_value_active_masks:
            value_loss = (value_loss * active_masks_batch).sum() / active_masks_batch.sum()
        else:
            value_loss = value_loss.mean()

        return value_loss

Update the critic:

# critic update
        value_loss = self.cal_value_loss(values, value_preds_batch, return_batch, active_masks_batch)

        self.policy.critic_optimizer.zero_grad()

        (value_loss * self.value_loss_coef).backward()

        if self._use_max_grad_norm:
            critic_grad_norm = nn.utils.clip_grad_norm_(self.policy.critic.parameters(), self.max_grad_norm)
        else:
            critic_grad_norm = get_gard_norm(self.policy.critic.parameters())

        self.policy.critic_optimizer.step()

        return value_loss, critic_grad_norm, policy_loss, dist_entropy, actor_grad_norm, imp_weights

MAPPO (Multi-Agent Proximal Policy Optimization) is a reinforcement learning algorithm for multi-agent systems. It is based on Proximal Policy Optimization (PPO) and parallelizes policy updates across agents and environments, which makes it suitable for complex, high-dimensional multi-agent problems. A basic description of its structure:

Overview: MAPPO updates the agents' policies in parallel at each training step, which keeps computation efficient while limiting the communication required between agents.

Main components:
1. Policy network: each agent has a policy network that produces its action distribution, typically a deep neural network whose input is the agent's observation (possibly together with representations of the other agents).
2. Value function: estimates the value of the current state and guides policy optimization by predicting the expected long-term reward of taking an action.
3. Parallel training: agents are trained in parallel, which reduces the time spent waiting for other agents to finish their updates.
4. Policy update: policy-gradient updates adjust the policy-network weights to maximize the expected long-term cumulative reward, subject to a proximity constraint that keeps the new policy close to the old one and stabilizes learning.
5. Globally shared model (optional): in some configurations all agents share one global policy network while only parts of the population are updated at each step; this promotes learning consistency across the group without requiring global synchronization.

Typical structure:
- Each agent receives an observation from the environment as input.
- The observation is fed into the agent's policy network, which outputs an action distribution from which the agent samples an action.
- The action is executed and produces a new state and a reward.
- The feedback is collected and used to update the value function and the policy networks.
- Following the parallelized update scheme, subsets of agents receive their feedback and optimize their policies at the same time.

Implementation details: a practical MAPPO implementation contains further components, such as experience buffers and entropy regularization, to improve stability and convergence speed. Distributed deployments additionally require efficient communication protocols and synchronization strategies.

Applications: MAPPO is widely used in tasks requiring multi-agent cooperation, such as games, cooperative robotics, and autonomous-vehicle fleet management.