What Attacks Will Federated Learning Face?

Federated learning faces problems from both privacy-preserving machine learning (PPML) and secure machine learning (SML).

  • Reconstruction Attacks

Privacy can leak not only from the data itself, but also from the gradient updates a client sends, or from models such as support vector machines and k-nearest neighbors that store explicit feature values. A reconstruction attack tries to recover private information through these channels.

Its goal is to extract the training data, or feature vectors of the training data, during ML model training. In federated learning, a gradient update from a client may leak that client's information. Therefore, secure multi-party computation (MPC) or homomorphic encryption (HE) is needed to defend against this attack. Moreover, ML models that store explicit feature values should be avoided.
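As a minimal sketch of why plain gradient updates are sensitive and how the server can be prevented from seeing them, here is the pairwise additive-masking idea behind secure aggregation. The function name `mask_pair` and the toy gradients are invented for this illustration; real secure-aggregation protocols derive masks via key agreement and operate over finite fields.

```python
import random

def mask_pair(grad_a, grad_b, seed=0):
    # Pairwise additive masking, the core idea behind secure aggregation:
    # the two clients agree on a random mask r; one adds it, the other
    # subtracts it, so the masks cancel when the server sums the updates.
    rng = random.Random(seed)
    r = [rng.uniform(-1, 1) for _ in grad_a]
    masked_a = [g + m for g, m in zip(grad_a, r)]
    masked_b = [g - m for g, m in zip(grad_b, r)]
    return masked_a, masked_b

grad_a = [0.5, -1.2, 3.0]   # client A's true gradient (hidden from server)
grad_b = [1.5, 0.2, -1.0]   # client B's true gradient (hidden from server)
ma, mb = mask_pair(grad_a, grad_b, seed=42)
aggregate = [x + y for x, y in zip(ma, mb)]
# aggregate ≈ grad_a + grad_b elementwise, i.e. ≈ [2.0, -1.0, 2.0]
```

Each masked update looks random to the server, yet the masks cancel in the sum, so only the aggregate gradient is revealed, not any individual client's update.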

  • Model Inversion Attacks

If the model can be queried many times, the adversary can reconstruct a clear-text model through an equation-solving attack. The adversary can then learn about the distribution of the training data and build a similar model.
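A minimal sketch of an equation-solving attack, assuming the target is a linear model over two features queried as a black box (the secret weights and bias here are invented for illustration; with d features, d+1 well-chosen queries fully determine the model):

```python
def target_model(x):
    # Secret linear model f(x) = w·x + b that the attacker can only query.
    w, b = [2.0, -3.0], 0.5
    return w[0] * x[0] + w[1] * x[1] + b

# Equation-solving attack: query at the origin and the basis vectors.
b_hat = target_model([0.0, 0.0])           # f(0) = b
w1_hat = target_model([1.0, 0.0]) - b_hat  # f(e1) - b = w1
w2_hat = target_model([0.0, 1.0]) - b_hat  # f(e2) - b = w2
# → recovered weights [2.0, -3.0] and bias 0.5
```

For richer model classes, the same idea generalizes: enough input-output pairs constrain the parameters until the clear-text model can be solved for.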

  • Membership-Inference Attacks

The goal of a membership-inference attack is to infer whether a given record was in the training dataset. The attacker builds shadow models trained on datasets similar to the original one. The paper mentions three ways to build such shadow models.

Intuitively, we can use the target model itself as a shadow model: if the output has high confidence values, the input is likely similar to the original training data.
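That confidence-based intuition can be sketched as a naive threshold attack. The threshold of 0.9 and the softmax vectors below are invented for illustration; the full attack in the paper instead trains an attack classifier on shadow-model outputs.

```python
def infer_membership(confidences, threshold=0.9):
    # Naive membership-inference heuristic: models tend to be more
    # confident on examples they were trained on, so a high maximum
    # confidence suggests the record was a training member.
    return max(confidences) >= threshold

# Hypothetical softmax outputs from the target model:
train_like = [0.02, 0.95, 0.03]  # over-confident → likely a member
test_like = [0.40, 0.35, 0.25]   # uncertain → likely a non-member
infer_membership(train_like)  # → True
infer_membership(test_like)   # → False
```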

  • Attribute-Inference Attacks

The adversary tries to de-anonymize data or target a record owner. "How To Break Anonymity of the Netflix Prize Dataset" shows that anonymized data can be de-anonymized using other datasets such as IMDb. In other words, you can be identified through any other public dataset, such as your blog or a rating website.

It is a quite famous and interesting paper. Here are the FAQs for that paper.

  • Model Poisoning Attacks

It is also known as a backdoor attack. As mentioned in this article:

A backdoor is a type of input that the model’s designer is not aware of, but that the attacker can leverage to get the ML system to do what they want.

In FL, using wrong labels to lower the performance of the model is also a poisoning attack.
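A label-flipping poisoning attack by a malicious client can be sketched as follows; the class indices and toy dataset are invented for this example.

```python
def flip_labels(dataset, source=1, target=7):
    # Label-flipping poisoning: a malicious FL client relabels every
    # `source`-class example as `target` before local training, so its
    # model updates degrade the global model on that class.
    return [(x, target if y == source else y) for x, y in dataset]

clean = [([0.1, 0.2], 1), ([0.3, 0.4], 0), ([0.5, 0.6], 1)]
poisoned = flip_labels(clean)
# → [([0.1, 0.2], 7), ([0.3, 0.4], 0), ([0.5, 0.6], 7)]
```

Because the server never sees the client's raw data, such mislabeled local training is hard to detect from updates alone, which is why robust aggregation rules are studied as a defense.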

Want to know more about FL?

Architectures of the Three Types of Federated Learning

(Summary) Federated Learning: Strategies for Improving Communication Efficiency

Federated Learning Aggregation Methods (1)

Translated from: https://medium.com/disassembly/what-attack-will-federated-learning-face-f58a56abc3cb
