Differential privacy for machine learning models can be obtained in four ways: input perturbation, output perturbation, objective perturbation, and modification of the optimization algorithm. The fourth approach alters the training procedure itself; it includes the noisy SGD methods we discuss in the next section.
The analysis of such algorithms can be broken into two parts:
- Obtain (ε′, δ′)-differential privacy for each round of SGD, by ensuring that any information from the dataset used to update the model parameters is differentially private.
- Compute the total privacy cost of all SGD iterations to obtain overall (ε, δ) parameters.
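The first step above is typically achieved by clipping each per-example gradient and adding Gaussian noise before the parameter update. A minimal sketch of one such noisy SGD step, with illustrative function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_multiplier` are assumptions, not a specific library's API):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One differentially private SGD update (illustrative sketch).

    Each per-example gradient is clipped to L2 norm at most `clip_norm`,
    the clipped gradients are averaged, and Gaussian noise with standard
    deviation `noise_multiplier * clip_norm / batch_size` is added before
    the usual gradient step.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch_size = len(clipped)
    avg_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=params.shape)
    return params - lr * (avg_grad + noise)
```

Because the clipped average has bounded sensitivity `clip_norm / batch_size`, the Gaussian noise makes each update (ε′, δ′)-differentially private for an (ε′, δ′) determined by `noise_multiplier`.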
Several methods exist to keep track of the accumulated privacy loss over multiple iterations of SGD:
- Privacy accountant
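To illustrate why accounting matters, the simplest options are the basic and advanced composition theorems; a sketch follows (function names are illustrative, and the advanced bound is the standard Dwork–Rothblum–Vadhan form):

```python
import math

def basic_composition(eps, delta, k):
    """Basic composition: (eps, delta) per round simply adds up over k rounds."""
    return k * eps, k * delta

def advanced_composition(eps, delta, k, delta_prime):
    """Advanced composition: a tighter total epsilon, roughly O(eps * sqrt(k)),
    at the cost of an extra additive slack delta_prime in the total delta."""
    eps_total = (eps * math.sqrt(2 * k * math.log(1 / delta_prime))
                 + k * eps * (math.exp(eps) - 1))
    return eps_total, k * delta + delta_prime
```

For many rounds with a small per-round ε′, advanced composition gives a much smaller total ε than basic composition; accountants such as the moments accountant tighten this further by tracking the actual privacy loss distribution of the noisy SGD mechanism.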