Paper Reading Notes: Model-Contrastive Federated Learning (Abstract)


This is a CVPR 2021 paper from the National University of Singapore and the University of California, Berkeley.

First, the abstract:

Abstract

Federated learning enables multiple parties to collaboratively train a machine learning model without communicating their local data. A key challenge in federated learning is to handle the heterogeneity of local data distribution across parties. Although many studies have been proposed to address this challenge, we find that they fail to achieve high performance in image datasets with deep learning models. In this paper, we propose MOON: model-contrastive federated learning. MOON is a simple and effective federated learning framework. The key idea of MOON is to utilize the similarity between model representations to correct the local training of individual parties, i.e., conducting contrastive learning in model-level. Our extensive experiments show that MOON significantly outperforms the other state-of-the-art federated learning algorithms on various image classification tasks.
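To make the key idea concrete, here is a minimal PyTorch sketch of the model-contrastive loss the abstract describes. The names (moon_loss, z_local, z_global, z_prev) are my own, not code from the paper. During local training, the representation produced by the current local model is pulled toward the frozen global model's representation (the positive pair) and pushed away from the previous round's local model representation (the negative pair); this contrastive term is added to the usual cross-entropy loss with a weight μ and temperature τ.

```python
import torch
import torch.nn.functional as F

def moon_loss(z_local, z_global, z_prev, logits, labels, mu=1.0, tau=0.5):
    """Sketch of MOON's local objective: cross-entropy + model-contrastive term.

    z_local  -- representations from the local model being trained, shape (B, D)
    z_global -- representations from the frozen global model (positive pair)
    z_prev   -- representations from the previous-round local model (negative pair)
    logits   -- class predictions of the local model, shape (B, C)
    mu, tau  -- the paper's hyperparameters: contrastive weight and temperature
    """
    # Cosine similarity to the global model (positive) and to the
    # previous local model (negative), scaled by the temperature tau.
    pos = F.cosine_similarity(z_local, z_global, dim=-1) / tau
    neg = F.cosine_similarity(z_local, z_prev, dim=-1) / tau
    # Contrastive term: -log( exp(pos) / (exp(pos) + exp(neg)) ),
    # computed via log_softmax for numerical stability.
    l_con = -F.log_softmax(torch.stack([pos, neg], dim=-1), dim=-1)[..., 0]
    # Standard supervised loss on the party's local data.
    l_sup = F.cross_entropy(logits, labels)
    return l_sup + mu * l_con.mean()
```

Only the local model is updated; the global and previous-round models stay frozen, so in practice z_global and z_prev would be computed under torch.no_grad() (or detached) before calling this function.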
