Federated Learning: Why, and How to Get Started

This article introduces the concept of federated learning, explains why this machine learning approach is attracting attention, and offers initial guidance on getting started with an implementation.

Federated Learning

A general audience introduction to the federated learning technique and its goals, with a brief review of existing platforms and Digital Catapult’s own demonstration example.


What is federated learning?

A topic of growing interest, federated learning can be associated with data privacy, distributed systems and machine learning, but what is it?


Federated learning is a particular approach for training machine learning algorithms in a way that means data stays private. Specifically, federated learning (FL) techniques aim to train machine learning (ML) algorithms across multiple, distributed devices or servers, each holding their own local and private data.


This collaborative approach contrasts with traditional machine learning techniques, which are centralised in nature and rely on all data samples to be gathered in one unique dataset before being used. It also differs from techniques based on parallel computation, which are devised to optimise computation for ML over multiple processors, using a centralised dataset that is split into identically distributed subsets for computation.


FL hence offers a broader paradigm for implementing ML solutions, essentially providing more flexibility in how the data can be managed. FL is not restricted to specific ML algorithms and can be used in a variety of contexts. It primarily adapts how the training procedures for those algorithms are implemented, and it can be considered for both offline and online learning (for example, training once on a static dataset, or training continuously on newly arriving data). It follows that FL is not one unique method: depending on the ML technique employed, the type of data and the operational context, a different strategy will be preferable.


Some simple and intuitive FL methods have proven to be surprisingly efficient solutions in practical applications, one such example being the federated averaging algorithm. It consists of averaging, at regular intervals, the weights of the neural networks trained by the different FL participants, called workers, on their local data subsets, in order to update a global model. In turn, the local neural networks are then updated with this new global model for further training. The learnings obtained from each local dataset are progressively shared across all the workers as the global model is updated. This is the method we applied to an image classification use case in an agricultural application, presented in more detail below.

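The federated averaging idea described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the model weights are plain NumPy arrays, and `local_update` is a hypothetical stand-in for a worker's real local training step on its private data.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # Placeholder for one worker's local training on its private data.
    # Here we simply nudge every weight toward the local data mean,
    # purely for illustration.
    return [w - lr * (w - data.mean()) for w in weights]

def federated_average(worker_weights):
    # Average each layer's weights across all workers
    # to produce the updated global model.
    return [np.mean(layers, axis=0) for layers in zip(*worker_weights)]

# Three workers, each holding private local data that never leaves them.
rng = np.random.default_rng(0)
global_model = [np.zeros(4), np.zeros(2)]  # two "layers" of weights
local_datasets = [rng.normal(loc=m, size=50) for m in (0.0, 1.0, 2.0)]

for _round in range(5):  # communication rounds
    # Each worker trains a copy of the current global model locally...
    updates = [local_update([w.copy() for w in global_model], d)
               for d in local_datasets]
    # ...and only the resulting weights (not the data) are averaged.
    global_model = federated_average(updates)
```

Note that only model weights cross the network in each round; the raw local datasets stay with their workers, which is the privacy property federated learning is designed around.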

See Figure 1 below for an illustration of the federated learning principles in this case.


What makes federated learning such an appealing technique?
