- Decentralized Learning Algorithms. Decentralized learning algorithms are necessary for decentralized model training, local aggregation, model sharing, and local or collaborative inference. Reinventing these algorithms may not be necessary: several decentralized learning paradigms [2, 4, 10, 11, 15] have already been studied and improved [11, 12, 27] (the improvements all concern communication). Selecting among these algorithms per application is a well-acknowledged problem in EI generally [6, 39], and community-structured decentralized learning specifically must account for more consistent exposure to fewer data sources than general decentralized learning. Furthermore, existing algorithms must be adapted to the specific needs of edge applications, especially to facilitate personalization and localization. For example, many works on consensus strategies in decentralized paradigms [13, 17] aim for a single global convergence across the entire network, which suits some applications but not all. Many applications favor differentiated and more localized outcomes, such as predictive text in the NLP domain [14], so decentralized learning algorithms should be adapted to accommodate differentiated behavior among communities and produce usefully differentiated models.
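To make the contrast concrete, here is a minimal sketch (the node count, scalar stand-in "models", synchronous rounds, and community split are all illustrative assumptions, not from the paper) of network-wide gossip averaging, which drives every node to one global consensus, versus community-restricted averaging, which lets each community converge to its own localized model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 6 nodes in two communities, each node holding a
# scalar "model" (a stand-in for a full parameter vector).
models = rng.normal(size=6)
communities = {0: [0, 1, 2], 1: [3, 4, 5]}

def gossip_round(models, neighbor_sets):
    """One synchronous gossip round: each node replaces its model with
    the mean over its neighborhood (itself included)."""
    new = models.copy()
    for i, nbrs in neighbor_sets.items():
        new[i] = models[nbrs].mean()
    return new

# Global gossip: every node averages over the whole network.
global_nbrs = {i: list(range(6)) for i in range(6)}
# Community-structured gossip: averaging stays inside each community,
# so the two communities keep two different (localized) models.
comm_nbrs = {i: communities[0 if i < 3 else 1] for i in range(6)}

g, c = models, models
for _ in range(20):
    g = gossip_round(g, global_nbrs)   # -> single global consensus
    c = gossip_round(c, comm_nbrs)     # -> one consensus per community
```

After the rounds, `g` holds one network-wide value while `c` holds a distinct value per community, which is the kind of differentiated outcome the paragraph above argues for.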
- Decentralized learning paradigms [2, 4, 10, 11, 15]:
- [2] Michael Blot et al. "Gossip training for deep learning." arXiv preprint, 2016.
- [4] Jeff Daily et al. "GossipGraD: Scalable Deep Learning using Gossip Communication based Asynchronous Gradient Descent." arXiv preprint, 2018.
- [10] István Hegedűs et al. "Gossip Learning as a Decentralized Alternative to Federated Learning." Distributed Applications and Interoperable Systems (DAIS), 2019.
- [11] Jiawen Kang et al. "Scalable and Communication-efficient Decentralized Federated Edge Learning with Multi-blockchain Framework." International Conference on Blockchain and Trustworthy Systems, 2020.
- [15] Anusha Lalitha et al. "Peer-to-Peer Federated Learning on Graphs." 2022.
- Studied and improved [11, 12, 27]:
- [11] Jiawen Kang et al. "Scalable and Communication-efficient Decentralized Federated Edge Learning with Multi-blockchain Framework." International Conference on Blockchain and Trustworthy Systems, 2020.
- [12] (read) Jakub Konečný et al. "Federated Learning: Strategies for Improving Communication Efficiency." arXiv preprint, 2016.
- [27] Yuanming Shi et al. "Communication-Efficient Edge AI: Algorithms and Systems." IEEE Communications Surveys & Tutorials, 2020.
- Per-application selection of these algorithms is a well-acknowledged problem in EI [6, 39]:
- [6] (read) Shuiguang Deng et al. "Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence." IEEE Internet of Things Journal, 2019.
- [39] (read) Zhi Zhou et al. "Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge Computing." arXiv preprint, 2019.
- Many applications favor differentiated and more localized outcomes, e.g. predictive text in the NLP domain [14]:
- [14] (read) Viraj Kulkarni et al. "Survey of Personalization Techniques for Federated Learning." 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), 2020.
- This process is illustrated in Figure 2 and explained in Section 2. Structure and selective collaboration in distributed learning are not new; they have been implemented in centralized federated learning [12, 14, 18] and in gossip training protocols [4, 21, 22, 27].
- First, communication efficiency through selective collaboration has been introduced in centralized distributed learning schemes [12, 14] and in gossip learning [13], but it has not been adequately considered in fully decentralized learning. Second, although application-specific personalization methods have been introduced in other distributed learning schemes [14, 18], existing work on fully decentralized learning does not consider the potential of using data and feature affinity to localize personalized application models.
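A minimal sketch of affinity-driven selective collaboration (the cosine affinity measure, threshold, and model shapes below are illustrative assumptions, not the paper's method): a node averages only with peers whose models are sufficiently similar, skipping dissimilar peers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 nodes, each holding an 8-dimensional model vector.
# Model similarity stands in for the data/feature affinity discussed above.
models = rng.normal(size=(4, 8))

def cosine(a, b):
    """Cosine similarity between two model vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def selective_average(models, i, threshold=0.0):
    """Average node i's model only with peers whose affinity to node i
    exceeds `threshold`; dissimilar peers are simply excluded."""
    peers = [j for j in range(len(models))
             if j != i and cosine(models[i], models[j]) > threshold]
    group = [i] + peers
    return models[group].mean(axis=0), peers
```

Raising the threshold shrinks the collaboration group toward the node's own community of like peers, trading global mixing for localization.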
Community-Structured Decentralized Learning for Resilient EI