Communication-Efficient Learning of Deep Networks from Decentralized Data

| Type | Description |
| --- | --- |
| Paper | Communication-Efficient Learning of Deep Networks from Decentralized Data |
| Authors | H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Agüera y Arcas |
| Problem studied | Communication-efficient federated learning (distributed averaging) over decentralized data held on mobile devices |
| Algorithm | FederatedAveraging algorithm (FedAvg) |

Background
Finite-sum form:

$$\min_{\omega\in\mathbb{R}^{d}} f(\omega) \quad \text{where} \quad f(\omega) \stackrel{\text{def}}{=} \frac{1}{n}\sum_{i=1}^{n} f_{i}(\omega).$$

For a machine learning problem, $f_{i}(\omega) = \ell(x_{i}, y_{i}; \omega)$, i.e., the loss of the prediction on example $(x_{i}, y_{i})$ under model parameters $\omega$.
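Concretely, the finite-sum objective is just the average per-example loss. A minimal sketch, assuming a linear model with squared loss (both illustrative choices, not fixed by the paper):

```python
import numpy as np

def f(w, X, y):
    """Empirical risk f(w) = (1/n) * sum_i f_i(w), where f_i is the squared
    loss of a linear model on example (x_i, y_i); model and loss are
    illustrative assumptions."""
    residuals = X @ w - y          # per-example prediction errors
    return 0.5 * np.mean(residuals ** 2)
```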
Distributed (federated) form. With the data partitioned across $K$ clients, where $\mathcal{P}_{k}$ is the set of indices of examples held by client $k$ and $n_{k} = |\mathcal{P}_{k}|$:

$$f(\omega) = \sum_{k=1}^{K} \frac{n_{k}}{n} F_{k}(\omega) \quad \text{where} \quad F_{k}(\omega) = \frac{1}{n_{k}} \sum_{i\in\mathcal{P}_{k}} f_{i}(\omega).$$
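This distributed objective is exactly what FedAvg optimizes: each round the server sends the current model to a sampled fraction of clients, each client runs a few epochs of local minibatch SGD, and the server averages the returned weights in proportion to $n_k$. Below is a minimal NumPy sketch of one round; the linear model, squared loss, hyperparameter values, and normalization over the sampled clients (a common practical variant) are illustrative assumptions, and only the overall control flow follows the paper.

```python
import numpy as np

def client_update(w, X, y, epochs=5, batch_size=10, lr=0.01):
    """ClientUpdate: E epochs of minibatch SGD from the current global weights.
    The squared-loss/linear-model gradient and the hyperparameter defaults are
    illustrative assumptions."""
    w = w.copy()
    n = len(y)
    for _ in range(epochs):
        order = np.random.permutation(n)          # reshuffle local data
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            grad = X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
            w -= lr * grad
    return w

def fedavg_round(w, clients, frac=0.1):
    """One server round: sample max(C*K, 1) clients, run local updates, and
    average the returned weights with weights n_k (normalized over the
    sampled clients)."""
    m = max(int(frac * len(clients)), 1)
    chosen = np.random.choice(len(clients), size=m, replace=False)
    n_total = sum(len(clients[k][1]) for k in chosen)
    new_w = np.zeros_like(w)
    for k in chosen:
        X_k, y_k = clients[k]                     # client k's local data
        new_w += (len(y_k) / n_total) * client_update(w, X_k, y_k)
    return new_w
```

Iterating `w = fedavg_round(w, clients)` over many rounds reproduces the trade-off the paper studies: more local computation per round (larger `epochs`, smaller `batch_size`) against fewer communication rounds.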
Multi-agent reinforcement learning (MARL) is a subfield of reinforcement learning (RL) in which multiple agents learn simultaneously in a shared environment. MARL has been studied for several decades, but recent advances in deep learning and computational power have led to significant progress. Its development can be divided into several key stages:

1. Early approaches: early MARL algorithms were based on game theory and heuristic methods, and were limited in their ability to handle complex environments or large numbers of agents.
2. Independent Learners: the Independent Learners (IL) approach, proposed in the 1990s, lets each agent learn on its own while interacting with a shared environment. It works in simple settings but often suffers from convergence issues in more complex scenarios.
3. Decentralized Partially Observable Markov Decision Process (Dec-POMDP): the Dec-POMDP framework was introduced to address the challenge of coordinating multiple agents in a decentralized manner. It models the environment as a partially observable Markov decision process, which allows agents to reason about the beliefs and actions of other agents.
4. Deep MARL: deep learning techniques, such as deep neural networks, have enabled MARL in far more complex environments. Deep MARL algorithms such as Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG) have achieved state-of-the-art performance in many applications.
5. Multi-Agent Actor-Critic (MAAC): MAAC is a recent algorithm that combines the advantages of policy-based and value-based methods. It uses an actor-critic architecture to learn decentralized policies and value functions for each agent, while a centralized critic estimates the global value function.

Overall, the development of MARL has been driven by the need to coordinate multiple agents in complex environments. While much remains to be learned, recent advances in deep learning and reinforcement learning have opened up new possibilities for more effective MARL algorithms.
