Federated learning starter experiments compiled from the paper "A Field Guide to Federated Optimization"

A Field Guide to Federated Optimization


0. Author Information

Jianyu Wang, Carnegie Mellon University. The paper was completed during his internship at Google, and arguably represents Google's latest thinking on federated learning.

Abstract

Federated learning and federated analytics are distributed approaches for collaboratively learning models (or simple statistics) from decentralized data, designed with privacy protection in mind. This distributed learning process can be formulated as solving a federated optimization problem, which emphasizes communication efficiency, data heterogeneity, compatibility with privacy and system requirements, and other constraints that are not primary considerations in other problem settings (I take these to be issues that arise when moving from centralized training to a distributed environment). Through concrete examples and practical implementations, the paper offers recommendations and guidelines for formulating, designing, evaluating, and analyzing federated optimization algorithms, with an emphasis on conducting effective simulations to infer real-world performance. The goal of this work is not to survey the current literature, but to inspire researchers and practitioners to design federated learning algorithms that can be used in a variety of practical applications.

Personal Experiments

This section records the experiments I found in the paper that are suitable for the beginner stage of federated learning, so as to get into the research process as quickly as possible.

Experiment 1: Client Update Rule

Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. In Advances in Neural Information Processing Systems (NeurIPS), 2020.

Jianyu Wang, Zheng Xu, Zachary Garrett, Zachary Charles, Luyang Liu, and Gauri Joshi. Local adaptivity in federated learning: Convergence and consistency. arXiv preprint arXiv:2106.02305, 2021.

Honglin Yuan and Tengyu Ma. Federated accelerated stochastic gradient descent. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
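To make the client-update papers above concrete, here is a minimal FedAvg-style local update sketched in NumPy. The linear-regression model, the `client_update` name, and the hyperparameters are illustrative assumptions, not taken from any of the cited papers; FedNova (Wang et al., 2020) additionally normalizes the returned delta by the amount of local work to tackle objective inconsistency.

```python
import numpy as np

def client_update(global_weights, data, lr=0.1, local_steps=5):
    """FedAvg-style client update: run a few SGD steps from the global model.

    The 'model' here is plain linear regression with MSE loss, used only
    as a toy stand-in. Returns the model delta and the number of local
    steps, which normalization-based methods (e.g. FedNova) would use
    to rescale heterogeneous amounts of local work.
    """
    w = global_weights.copy()
    X, y = data
    for _ in range(local_steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w - global_weights, local_steps
```

Reporting the delta (rather than the raw weights) is the common convention, since it lets the server treat the aggregate as a pseudo-gradient.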

实验二:Global Update Rule

Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335, 2019.

Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and Hugh Brendan McMahan. Adaptive federated optimization. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=LkFG3lB13U5.

Jianyu Wang, Vinayak Tantia, Nicolas Ballas, and Michael Rabbat. SlowMo: Improving communication-efficient distributed SGD with slow momentum. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SkxJ8REYPH.
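The global-update papers above all share one idea: the server treats the averaged client delta as a pseudo-gradient fed into a server-side optimizer. Below is a hedged sketch of server momentum in the spirit of FedAvgM (Hsu et al.) and SlowMo; the `ServerOpt` class and its parameters are my own illustrative names, and adaptive variants such as FedAdam (Reddi et al., 2021) would replace the momentum rule with an Adam-style update.

```python
import numpy as np

class ServerOpt:
    """Server update with momentum: treat the mean client delta as a
    pseudo-gradient (sign convention: deltas point downhill, so we add)."""

    def __init__(self, dim, server_lr=1.0, momentum=0.9):
        self.v = np.zeros(dim)      # momentum buffer
        self.lr = server_lr
        self.beta = momentum

    def step(self, weights, client_deltas):
        avg_delta = np.mean(client_deltas, axis=0)
        self.v = self.beta * self.v + avg_delta
        return weights + self.lr * self.v
```

With `momentum=0` and `server_lr=1` this reduces exactly to vanilla FedAvg.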

实验三:Aggregation Method

Chaoyang He, Murali Annavaram, and Salman Avestimehr. Group knowledge transfer: Federated learning of large CNNs at the edge. Advances in Neural Information Processing Systems, 33, 2020.

Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. Advances in Neural Information Processing Systems, 33, 2020.
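The distillation-based methods above are usually compared against the baseline aggregator: an example-count-weighted average of client updates, the FedAvg convention. A minimal sketch (the `aggregate` name is an assumption):

```python
import numpy as np

def aggregate(deltas, num_examples):
    """Weighted average of client deltas, with weights proportional to
    each client's local dataset size (the FedAvg convention)."""
    weights = np.asarray(num_examples, dtype=float)
    weights /= weights.sum()
    return np.average(np.asarray(deltas), axis=0, weights=weights)
```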

Experiment 4: Personalized Models

Fei Chen, Mi Luo, Zhenhua Dong, Zhenguo Li, and Xiuqiang He. Federated meta-learning with fast convergence and efficient communication. arXiv preprint arXiv:1802.07876, 2018.

Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Personalized federated learning: A meta-learning approach. In Advances in Neural Information Processing Systems, 2020.

Yihan Jiang, Jakub Konečný, Keith Rush, and Sreeram Kannan. Improving federated learning personalization via model agnostic meta learning. arXiv preprint arXiv:1909.12488, 2019.

Jeffrey Li, Mikhail Khodak, Sebastian Caldas, and Ameet Talwalkar. Differentially private meta-learning. In International Conference on Learning Representations, 2020.
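The simplest personalization baseline, against which the meta-learning methods above are typically compared, is local fine-tuning of the global model: the few fine-tuning steps play the role of the MAML inner adaptation loop. A sketch under the same toy linear-model assumption as before; `personalize` and its defaults are illustrative, not from the cited papers:

```python
import numpy as np

def personalize(global_weights, local_data, lr=0.05, steps=3):
    """Fine-tune the shared global model on one client's own data for a
    few SGD steps, producing a per-client personalized model."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # toy MSE gradient
        w -= lr * grad
    return w
```

MAML-style methods differ in that they also train the global model so that such fine-tuning works well, rather than fine-tuning as an afterthought.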

Experiment 5: Multi-Task Learning

Canh T Dinh, Nguyen H Tran, and Tuan Dung Nguyen. Personalized federated learning with moreau envelopes. In Advances in Neural Information Processing Systems, 2020.

Theodoros Evgeniou and Massimiliano Pontil. Regularized multi–task learning. In International Conference on Knowledge Discovery and Data Mining, 2004.

Filip Hanzely and Peter Richtárik. Federated learning of a mixture of global and local models. arXiv preprint arXiv:2002.05516, 2020.

Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. In International Conference on Machine Learning, 2021.

Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet S Talwalkar. Federated multi-task learning. In Advances in Neural Information Processing Systems, 2017.
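Several of the papers above (e.g. Ditto, the Moreau-envelope method, Hanzely and Richtárik's mixture formulation) personalize through a proximal term that pulls each client's local model toward the global one. Here is a minimal sketch of one such regularized local step; the names `ditto_step`, `lam`, and `grad_fn` are illustrative, and the quadratic loss in the test is a toy stand-in:

```python
import numpy as np

def ditto_step(w_local, w_global, grad_fn, lr=0.1, lam=0.5):
    """One proximally regularized local step: follow the local loss
    gradient plus a pull toward the global model. `lam` trades off
    personalization (small lam) against staying near the global
    model (large lam)."""
    g = grad_fn(w_local) + lam * (w_local - w_global)
    return w_local - lr * g
```

Iterating this to convergence lands between the client's local optimum and the global model, which is exactly the personalization/consistency trade-off these papers study.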

Other

Honestly though, research is really hard emmm
