- Batch Normalization (BN) and its variants have delivered tremendous success in combating the internal covariate shift that arises during the training of deep neural networks; the standard BN computation is recalled below.
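As a reminder (standard BN, per Ioffe & Szegedy), each activation is standardized with mini-batch statistics and then rescaled by a learned affine transform; only the affine parameters $\gamma$ and $\beta$ are trainable, which is why normalization layers form such a small parameter subset:

$$
\hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta,
$$

where $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}^2$ are the mini-batch mean and variance and $\epsilon$ is a small constant for numerical stability.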
- We propose a simple yet effective method for scaling differential privacy up to large neural networks at reasonable privacy budgets. Our key insight is to minimize the number of trainable parameters during private finetuning by leveraging additional public data. By pre-training large models on public data, we obtain a strong representation at no privacy cost; then, by finetuning only a small subset of parameters on private data, we retain the expressiveness of large models while introducing minimal disruption during domain adaptation. A minimal sketch of this recipe appears below.
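As an illustration only, here is one way such a pipeline might look in PyTorch with Opacus providing DP-SGD. This is a hedged sketch under stated assumptions, not the paper's implementation: the backbone (torchvision ResNet-18), the 10-class head, the random tensors standing in for a private dataset, and all hyperparameters (noise multiplier, clipping norm, learning rate) are placeholder choices.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models
from opacus import PrivacyEngine
from opacus.validators import ModuleValidator

# 1) Public pre-training: reuse weights trained on public data (ImageNet here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# BatchNorm mixes statistics across examples, so per-sample gradients are
# ill-defined under DP-SGD; Opacus's ModuleValidator swaps BN for GroupNorm.
model = ModuleValidator.fix(model)

# Hypothetical 10-class private task.
model.fc = nn.Linear(model.fc.in_features, 10)

# 2) Normalization transfer: freeze everything, then unfreeze only the
# normalization affine parameters and the classifier head.
for p in model.parameters():
    p.requires_grad = False
for m in model.modules():
    if isinstance(m, (nn.GroupNorm, nn.LayerNorm, nn.BatchNorm2d)):
        for p in m.parameters():
            p.requires_grad = True
for p in model.fc.parameters():
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.1)

# Random tensors as a stand-in for the private dataset.
xs, ys = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
train_loader = DataLoader(TensorDataset(xs, ys), batch_size=16)

# 3) Private finetuning with DP-SGD (illustrative privacy hyperparameters).
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.1,   # noise added to clipped per-sample gradient sums
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

criterion = nn.CrossEntropyLoss()
model.train()
for x, y in train_loader:
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

print(f"epsilon spent: {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```

Convolution parameter transfer would follow the same pattern, unfreezing a chosen subset of convolutional layers instead of the normalization affines.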
- We note that our two proposed approaches, normalization transfer and convolution parameter transfer, while outperforming previous methods, are naive stabs in the dark; identifying precisely which parameter subset best navigates the fundamental privacy-accuracy tradeoff of differential privacy is likely to be a fruitful direction for future research.