Differential privacy

In cryptography, differential privacy aims to provide means to maximize the accuracy of queries from statistical databases while minimizing the chances of identifying their records.

Synopsis

Differential privacy is a mathematical definition of the privacy loss that results to individuals when their private information is used in the creation of a data product. The term was coined by Cynthia Dwork in 2006,[1] but the defining reference is actually an earlier publication by Dwork, Frank McSherry, Kobbi Nissim and Adam Smith.[2] The work is based, in part, on work by Nissim and Irit Dinur,[3] which showed that it is impossible to publish information from a private statistical database without revealing some amount of private information, and that the entire database can be revealed by publishing the results of a surprisingly small number of queries.

A result of Dinur and Nissim's "database reconstruction" work was the realization that the approach of providing privacy in statistical databases using semantic definitions of privacy (mostly dating to work of Tore Dalenius in the 1970s) was impossible, and that new techniques for limiting the increased privacy risk resulting from the inclusion of private data in a statistical database needed to be developed. This work and subsequent research led to the development of techniques that make it possible, in many cases, to provide very accurate statistics from the database while still ensuring high levels of privacy.[4][5]

Principle and illustration

Differential privacy is achieved by introducing randomness into the data.

A simple example, developed especially in the social sciences,[6] is to ask a person to answer the question "Do you have attribute A?" according to the following procedure:

  1. Toss a coin.
  2. If heads, then answer honestly.
  3. If tails, then toss the coin again and answer "Yes" if heads, "No" if tails.

The confidentiality arises from the plausible deniability of the individual responses.

In aggregate, however, these responses are statistically meaningful, since a person who does not have attribute A answers "Yes" with probability one quarter, while a person who does have it answers "Yes" with probability three quarters. Thus, if p is the true proportion of people with A, then we expect to obtain (1/4)(1 − p) + (3/4)p = 1/4 + p/2 positive responses. Hence it is possible to estimate p from the observed proportion of "Yes" answers (a simulation of this estimator is sketched at the end of this section).

In particular, if attribute A is synonymous with illegal behavior, then answering "Yes" is not incriminating, insofar as every person has some probability of a "Yes" response, whatever their true status may be.
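
The following Python sketch simulates this procedure end to end; the population size and true proportion below are illustrative values chosen here, not figures from the text.

    import random

    def randomized_response(has_attribute):
        """Answer via the coin-toss procedure described above."""
        if random.random() < 0.5:        # first toss: heads -> answer honestly
            return has_attribute
        return random.random() < 0.5     # tails: second toss decides the answer

    def estimate_p(answers):
        """Invert E[fraction of "Yes"] = 1/4 + p/2 to recover p."""
        yes_rate = sum(answers) / len(answers)
        return 2 * yes_rate - 0.5

    # Illustrative population where the true proportion is 0.3.
    population = [True] * 3000 + [False] * 7000
    answers = [randomized_response(x) for x in population]
    print(estimate_p(answers))   # close to 0.3 for large samples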

Formal definition and example application

Let $\epsilon$ be a positive real number and $\mathcal{A}$ be a randomized algorithm that takes a dataset as input (representing the actions of the trusted party holding the data). Let $\mathrm{im}\,\mathcal{A}$ denote the image of $\mathcal{A}$. The algorithm $\mathcal{A}$ is $\epsilon$-differentially private if for all datasets $D_1$ and $D_2$ that differ on a single element (i.e., the data of one person), and all subsets $S$ of $\mathrm{im}\,\mathcal{A}$,

$$\Pr[\mathcal{A}(D_1) \in S] \leq e^{\epsilon} \times \Pr[\mathcal{A}(D_2) \in S],$$

where the probability is taken over the randomness used by the algorithm.[6]

According to this definition, differential privacy is a condition on the release mechanism (i.e., the trusted party releasing information about the dataset) and not on the dataset itself. Intuitively, this means that for any two datasets that are similar, a given differentially private algorithm will behave approximately the same on both datasets. The definition gives a strong guarantee that presence or absence of an individual will not affect the final output of the algorithm significantly.
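
As a worked check of the definition (an illustration added here, not an example from the source), consider the coin-toss mechanism above applied to a single person. The two neighboring "datasets" are the person having or not having attribute A, and the probability of each output differs by at most a factor of three between the two cases:

$$\frac{\Pr[\text{Yes} \mid A]}{\Pr[\text{Yes} \mid \neg A]} = \frac{3/4}{1/4} = 3, \qquad \frac{\Pr[\text{No} \mid \neg A]}{\Pr[\text{No} \mid A]} = \frac{3/4}{1/4} = 3.$$

Every output-probability ratio is therefore at most $e^{\epsilon}$ with $\epsilon = \ln 3 \approx 1.1$, so randomized response is $(\ln 3)$-differentially private.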

For example, assume we have a database of medical records $D_1$ where each record is a pair (Name, X), where $X$ is a Boolean denoting whether a person has diabetes or not:

Name       Has Diabetes (X)
Ross       1
Monica     1
Joey       0
Phoebe     0
Chandler   1

Now suppose a malicious user (often termed an adversary) wants to find out whether Chandler has diabetes or not. Suppose he also knows in which row of the database Chandler resides. Now suppose the adversary is only allowed to use a particular form of query $Q_i$ that returns the partial sum of the first $i$ rows of column $X$ in the database. In order to find Chandler's diabetes status the adversary executes $Q_5(D_1)$ and $Q_4(D_1)$, then computes their difference. In this example, $Q_5(D_1) = 3$ and $Q_4(D_1) = 2$, so their difference is 1. This indicates that the "Has Diabetes" field in Chandler's row must be 1. This example highlights how individual information can be compromised even without explicitly querying for the information of a specific individual.

Continuing this example, if we construct $D_2$ by replacing (Chandler, 1) with (Chandler, 0) then this malicious adversary will be able to distinguish $D_2$ from $D_1$ by computing $Q_5 - Q_4$ for each dataset. If the adversary were required to receive the values $Q_i$ via an $\epsilon$-differentially private algorithm, for a sufficiently small $\epsilon$, then he or she would be unable to distinguish between the two datasets.
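
The differencing attack can be made concrete with a short Python sketch; the list and helper function below are constructions written for this example, not an API from any library.

    # Toy database mirroring the table above: 1 means "has diabetes".
    d1 = [("Ross", 1), ("Monica", 1), ("Joey", 0), ("Phoebe", 0), ("Chandler", 1)]

    def partial_sum(db, i):
        """Q_i: sum of the X column over the first i rows."""
        return sum(x for _, x in db[:i])

    # The adversary learns Chandler's value without querying it directly.
    print(partial_sum(d1, 5) - partial_sum(d1, 4))   # prints 1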

Sensitivity

Let $d$ be a positive integer, $\mathcal{D}$ be a collection of datasets, and $f \colon \mathcal{D} \rightarrow \mathbb{R}^d$ be a function. The sensitivity[7] of a function, denoted $\Delta f$, is defined by

$$\Delta f = \max \lVert f(D_1) - f(D_2) \rVert_1,$$

where the maximum is over all pairs of datasets $D_1$ and $D_2$ in $\mathcal{D}$ differing in at most one element, and $\lVert \cdot \rVert_1$ denotes the $\ell_1$ norm.

In the example of the medical database above, if we consider $f$ to be the function $Q_i$, then the sensitivity of the function is one, since changing any one of the entries in the database causes the output of the function to change by either zero or one.
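
This can be checked by brute force for small databases. The following sketch (the toy Boolean domain and database size are assumptions made here for illustration) enumerates all 5-row Boolean databases and all single-entry changes:

    from itertools import product

    def q(rows, i=5):
        """Counting query Q_i: partial sum of the first i entries."""
        return sum(rows[:i])

    def sensitivity(f, n=5):
        """Max |f(D1) - f(D2)| over Boolean databases differing in one entry."""
        worst = 0
        for rows in product([0, 1], repeat=n):
            for j in range(n):                 # flip one entry to get a neighbor
                neighbor = list(rows)
                neighbor[j] = 1 - neighbor[j]
                worst = max(worst, abs(f(rows) - f(neighbor)))
        return worst

    print(sensitivity(q))   # prints 1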

There are techniques (described below) for creating a differentially private algorithm for functions with low sensitivity.

Trade-off between utility and privacy

There is a trade-off between the accuracy of the statistics estimated in a privacy-preserving manner and the privacy parameter ε.[8][9][10][11]

Other notions of differential privacy

Since differential privacy is considered to be too strong for some applications, many weakened versions of privacy have been proposed. These include (ε, δ)-differential privacy,[12] randomised differential privacy,[13] and privacy under a metric.[14]

Differentially private mechanisms

Since differential privacy is a probabilistic concept, any differentially private mechanism is necessarily randomized. Some of these, like the Laplace mechanism described below, rely on adding controlled noise to the function that we want to compute. Others, like the exponential mechanism[15] and posterior sampling,[16] sample from a problem-dependent family of distributions instead.

The Laplace mechanism

Many differentially private methods add controlled noise to functions with low sensitivity.[7] The Laplace mechanism adds Laplace noise, i.e. noise from the Laplace distribution, which can be expressed by the probability density function $\text{noise}(y) \propto \exp(-|y|/\lambda)$ and has mean zero and scale parameter $\lambda$. In our case we define the output of $\mathcal{A}$ as a real-valued function (called the transcript output by $\mathcal{A}$), $\mathcal{T}_{\mathcal{A}}(x) = f(x) + Y$, where $Y \sim \text{Lap}(\lambda)$ and $f$ is the original real-valued query/function we planned to execute on the database. Clearly $\mathcal{T}_{\mathcal{A}}(x)$ can be considered to be a continuous random variable, where

$$\frac{\mathrm{pdf}(\mathcal{T}_{\mathcal{A},D_1}(x) = t)}{\mathrm{pdf}(\mathcal{T}_{\mathcal{A},D_2}(x) = t)} = \frac{\text{noise}(t - f(D_1))}{\text{noise}(t - f(D_2))},$$

which is at most $e^{|f(D_1) - f(D_2)|/\lambda} \leq e^{\Delta(f)/\lambda}$. We can consider $\Delta(f)/\lambda$ to be the privacy factor $\epsilon$. Thus $\mathcal{T}$ is a differentially private mechanism (as can be seen from the definition above). Applying this to our diabetes example, it follows from the fact derived above that in order for $\mathcal{A}$ to be an $\epsilon$-differentially private algorithm we need $\lambda = 1/\epsilon$. Though we have used Laplace noise here, other forms of noise, such as Gaussian noise, can be employed, but they may require a slight relaxation of the definition of differential privacy.[17]
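
A minimal sketch of the Laplace mechanism in Python using NumPy; the counting query and the value of ε below are assumptions chosen for illustration, not parameters prescribed by the text.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
        """Release true_value plus Laplace noise of scale sensitivity / epsilon."""
        rng = rng or np.random.default_rng()
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Counting queries from the diabetes example have sensitivity 1.
    rows = [1, 1, 0, 0, 1]
    eps = 0.5
    noisy_q5 = laplace_mechanism(sum(rows[:5]), sensitivity=1.0, epsilon=eps)
    noisy_q4 = laplace_mechanism(sum(rows[:4]), sensitivity=1.0, epsilon=eps)
    print(noisy_q5 - noisy_q4)   # no longer reliably reveals Chandler's row

Note that each noisy answer consumes its own ε, so releasing both $Q_5$ and $Q_4$ this way costs 2ε in total under the sequential composition property described below.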

Composability

Sequential composition

If we query an $\epsilon$-differentially private mechanism $t$ times, and the randomization of the mechanism is independent for each query, then the result would be $\epsilon t$-differentially private. In the more general case, if there are $n$ independent mechanisms $\mathcal{M}_1, \dots, \mathcal{M}_n$ whose privacy guarantees are $\epsilon_1, \dots, \epsilon_n$-differential privacy, respectively, then any function $g$ of them, $g(\mathcal{M}_1, \dots, \mathcal{M}_n)$, is $\left(\sum_{i=1}^{n} \epsilon_i\right)$-differentially private.[18]
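
In practice this is often tracked with a privacy-budget accountant. The class below is a hypothetical helper written for this article, not a library API; it simply enforces that the sum of the ε values spent never exceeds a total budget.

    import numpy as np

    class BudgetAccountant:
        """Tracks total epsilon spent under sequential composition."""

        def __init__(self, total_epsilon):
            self.remaining = total_epsilon
            self.rng = np.random.default_rng()

        def laplace_query(self, true_value, sensitivity, epsilon):
            if epsilon > self.remaining:
                raise RuntimeError("privacy budget exhausted")
            self.remaining -= epsilon            # epsilons add up across queries
            return true_value + self.rng.laplace(scale=sensitivity / epsilon)

    acct = BudgetAccountant(total_epsilon=1.0)
    a = acct.laplace_query(42, sensitivity=1.0, epsilon=0.5)
    b = acct.laplace_query(17, sensitivity=1.0, epsilon=0.5)   # total spent: 1.0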

Parallel composition

Furthermore, if the previous mechanisms are computed on disjoint subsets of the private database then the function $g$ would be $\left(\max_i \epsilon_i\right)$-differentially private instead.[18]
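
A differentially private histogram is a standard illustration of parallel composition: assuming each person contributes exactly one record, every record falls into exactly one bucket, so adding Laplace noise of scale 1/ε to every bucket count costs ε overall rather than ε times the number of buckets. A sketch under these assumptions (the categories and value of ε are illustrative):

    import numpy as np
    from collections import Counter

    def private_histogram(categories, epsilon, rng=None):
        """Add Lap(1/epsilon) noise to each disjoint bucket count."""
        rng = rng or np.random.default_rng()
        counts = Counter(categories)
        # Each person contributes to exactly one bucket, so the whole
        # histogram is epsilon-differentially private by parallel composition.
        return {k: v + rng.laplace(scale=1.0 / epsilon) for k, v in counts.items()}

    ages = ["18-30", "31-50", "18-30", "51+", "31-50", "31-50"]
    print(private_histogram(ages, epsilon=0.5))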

Group privacy

In general, $\epsilon$-differential privacy is designed to protect the privacy between neighboring databases which differ only in one row. This means that no adversary with arbitrary auxiliary information can know whether one particular participant submitted his information. However, the definition can also be extended if we want to protect databases differing in $c$ rows, which amounts to ensuring that no adversary with arbitrary auxiliary information can know whether $c$ particular participants submitted their information. This can be achieved because if $c$ items change, the probability dilation is bounded by $\exp(\epsilon c)$ instead of $\exp(\epsilon)$,[17] i.e., for $D_1$ and $D_2$ differing on $c$ items:

$$\Pr[\mathcal{A}(D_1) \in S] \leq \exp(\epsilon c) \times \Pr[\mathcal{A}(D_2) \in S].$$

Thus setting $\epsilon$ instead to $\epsilon/c$ achieves the desired result (protection of $c$ items). In other words, instead of having each item protected with $\epsilon$-differential privacy, now every group of $c$ items is protected with $\epsilon$-differential privacy (and each item is protected with $(\epsilon/c)$-differential privacy).
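
As a worked illustration (the numbers here are chosen for concreteness, not taken from the source), suppose we want to protect any group of $c = 5$ people, such as a household, at level $\epsilon = 1$. Each release must then be run with a per-item parameter of

$$\epsilon / c = 1/5 = 0.2,$$

so that the resulting bound for databases differing in 5 rows is $\exp(0.2 \times 5) = \exp(1)$, as desired.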

Stable transformations

A transformation $T$ is $c$-stable if the Hamming distance between $T(A)$ and $T(B)$ is at most $c$ times the Hamming distance between $A$ and $B$ for any two databases $A, B$. Theorem 2 in [18] asserts that if there is a mechanism $M$ that is $\epsilon$-differentially private, then the composite mechanism $M \circ T$ is $(\epsilon \times c)$-differentially private.

This could be generalized to group privacy, as the group size could be thought of as the Hamming distance $h$ between $A$ and $B$ (where $A$ contains the group and $B$ doesn't). In this case $M \circ T$ is $(\epsilon \times c \times h)$-differentially private.
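
For example (an illustration with numbers chosen here, under the theorem stated above), if $T$ is a 2-stable transformation, such as one in which each input record can affect at most two output records, and $M$ is $0.5$-differentially private, then the composite mechanism $M \circ T$ is $(0.5 \times 2) = 1$-differentially private.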

Adoption of differential privacy in real-world applications

Several uses of differential privacy in practice are known to date:

  • U.S. Census Bureau, for showing commuting patterns,[19]
  • Google's RAPPOR, for telemetry such as learning statistics about unwanted software hijacking users' settings[20] (RAPPOR's open-source implementation),
  • Google, for sharing historical traffic statistics.[21]
  • On June 13, 2016 Apple announced its intention to use differential privacy in iOS 10 to improve its intelligent assistance and suggestions technology,[22]
  • Some initial research has been done into practical implementations of differential privacy in data mining models.[23]


References

  1. Dwork, Cynthia (2006). "Differential Privacy". International Colloquium on Automata, Languages and Programming (ICALP) 2006, pp. 1–12.
  2. Dwork, Cynthia; McSherry, Frank; Nissim, Kobbi; Smith, Adam (2006). "Calibrating Noise to Sensitivity in Private Data Analysis". In Proceedings of the Third Conference on Theory of Cryptography (TCC'06), Shai Halevi and Tal Rabin (eds.), Springer-Verlag, Berlin, Heidelberg, pp. 265–284. doi:10.1007/11681878_14
  3. Dinur, Irit; Nissim, Kobbi (2003). "Revealing Information While Preserving Privacy". In Proceedings of the Twenty-Second ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS '03), ACM, New York, NY, USA, pp. 202–210. doi:10.1145/773153.773173
  4. Hilton, Michael. "Differential Privacy: A Historical Survey" (PDF).
  5. Dwork, Cynthia (2008-04-25). "Differential Privacy: A Survey of Results". In Agrawal, Manindra; Du, Dingzhu; Duan, Zhenhua; Li, Angsheng (eds.), Theory and Applications of Models of Computation, Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. 1–19. ISBN 9783540792277. doi:10.1007/978-3-540-79228-4_1
  6. Dwork, Cynthia; Roth, Aaron (2014). The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, Aug. 2014. doi:10.1561/0400000042
  7. Dwork, Cynthia; McSherry, Frank; Nissim, Kobbi; Smith, Adam (2006). "Calibrating Noise to Sensitivity in Private Data Analysis". In Theory of Cryptography Conference (TCC), Springer. doi:10.1007/11681878_14
  8. Ghosh, A.; Roughgarden, T.; Sundararajan, M. (2009). "Universally Utility-Maximizing Privacy Mechanisms". In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, pp. 351–360. ACM, New York, NY, USA.
  9. Brenner, H.; Nissim, K. (2010). "Impossibility of Differentially Private Universally Optimal Mechanisms". In Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS).
  10. Chen, R.; Mohammed, N.; Fung, B. C. M.; Desai, B. C.; Xiong, L. (2011). "Publishing Set-Valued Data via Differential Privacy". Proceedings of the VLDB Endowment (PVLDB), 4(11):1087–1098, August 2011. VLDB Endowment.
  11. Mohammed, N.; Chen, R.; Fung, B. C. M.; Yu, P. S. (2011). "Differentially Private Data Release for Data Mining". In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD), pp. 493–501, San Diego, CA: ACM Press, August 2011.
  12. Dwork, Cynthia; Kenthapadi, Krishnaram; McSherry, Frank; Mironov, Ilya; Naor, Moni (2006). "Our Data, Ourselves: Privacy via Distributed Noise Generation". In Advances in Cryptology – EUROCRYPT 2006, pp. 486–503. Springer Berlin Heidelberg.
  13. Hall, Rob; Rinaldo, Alessandro; Wasserman, Larry (2011). "Random Differential Privacy". arXiv preprint arXiv:1112.2680.
  14. Chatzikokolakis, Konstantinos; Andrés, Miguel E.; Bordenabe, Nicolás Emilio; Palamidessi, Catuscia (2013). "Broadening the Scope of Differential Privacy Using Metrics". In Privacy Enhancing Technologies, pp. 82–102. Springer Berlin Heidelberg.
  15. McSherry, F.; Talwar, K. (2007). "Mechanism Design via Differential Privacy". In Proceedings of the 48th Annual Symposium on Foundations of Computer Science.
  16. Dimitrakakis, Christos; Nelson, Blaine; Mitrokotsa, Aikaterini; Rubinstein, Benjamin (2014). "Robust and Private Bayesian Inference". Algorithmic Learning Theory 2014.
  17. Dwork, Cynthia (2006). "Differential Privacy". International Colloquium on Automata, Languages and Programming (ICALP) 2006, pp. 1–12. doi:10.1007/11787006_1
  18. McSherry, Frank D. (2009). "Privacy Integrated Queries: An Extensible Platform for Privacy-Preserving Data Analysis". In Proceedings of the 35th SIGMOD International Conference on Management of Data (SIGMOD). doi:10.1145/1559845.1559850
  19. Machanavajjhala, Ashwin; Kifer, Daniel; Abowd, John M.; Gehrke, Johannes; Vilhuber, Lars (2008). "Privacy: Theory Meets Practice on the Map". In Proceedings of the 24th International Conference on Data Engineering (ICDE) 2008.
  20. Erlingsson, Úlfar; Pihur, Vasyl; Korolova, Aleksandra (2014). "RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response". In Proceedings of the 21st ACM Conference on Computer and Communications Security (CCS).
  21. Eland, Andrew (2015). "Tackling Urban Mobility with Technology". Google Policy Europe Blog, Nov 18, 2015.
  22. "Apple - Press Info - Apple Previews iOS 10, the Biggest iOS Release Ever". Apple. Retrieved 16 June 2016.
  23. Fletcher, Sam; Islam, Md Zahidul (July 2017). "Differentially Private Random Decision Forests Using Smooth Sensitivity". Expert Systems with Applications 78: 16–31. doi:10.1016/j.eswa.2017.01.034
