Differential Privacy in Deep Learning

This article explores the application of differential privacy to deep learning, aiming to protect individual data privacy while still allowing models to be trained effectively. After introducing the basic principles of differential privacy and its implementation in neural networks, it explains how data security can be strengthened without sacrificing learning performance.


I would like to thank Mr. Akshay Kulkarni for guiding me on my journey in publishing my first-ever article.


INTRODUCTION

As more of our day-to-day activities move online, the amount of personal and sensitive data being recorded is also on the rise. This surge in data has fueled the growth of data-analysis tools, in the form of machine learning and deep learning, that are permeating every possible industry. These techniques are also applied to sensitive user data to derive actionable insights. The objective of these models is to discover overall patterns rather than any one individual's habits.


Photo by Kevin Ku on Unsplash

Deep learning is evolving into the industry standard for many automation procedures. But it is also infamous for memorizing the minute and fine details of its training dataset. This aggravates the privacy risk, as the model weights now encode finer user details that, under hostile inspection, could reveal user information. For example, Fredrikson et al. demonstrated a model-inversion attack that recovers images from a facial recognition system [1]. Given the abundance of freely available data, it is safe to assume that a determined adversary can obtain the auxiliary information needed to extract user information from the model weights.


WHAT IS DIFFERENTIAL PRIVACY?

Differential privacy is a framework that provides certain mathematical guarantees about the privacy of user information. It aims to bound the impact of any one individual's data on the overall result: an observer would draw essentially the same conclusions from the analysis whether or not that individual's data was present in the input. As the number of analyses performed on the data grows, so does the risk of exposing user information; the results of differentially private computations, however, are immune to a wide range of privacy attacks.

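To make the idea concrete, here is a minimal sketch of the Laplace mechanism, a classic way to achieve epsilon-differential privacy for a simple count query. The dataset and the `private_count` helper are hypothetical illustrations, not part of the original article: because adding or removing one record changes a count by at most 1 (sensitivity 1), adding Laplace noise with scale 1/epsilon masks any single individual's contribution.

```python
import numpy as np

# Hypothetical toy dataset: 1 means a user has some sensitive attribute.
records = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])

def private_count(data, epsilon):
    """Release a count query under epsilon-differential privacy
    using the Laplace mechanism. A count has sensitivity 1, so
    Laplace noise with scale 1/epsilon suffices."""
    true_count = data.sum()
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
print(private_count(records, epsilon=0.5))
print(private_count(records, epsilon=5.0))
```

Each query consumes privacy budget, which is why repeated analyses on the same data increase exposure risk: the epsilons of individual releases add up under composition.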
