Paper reading: Auditing differentially private machine learning: How private is private SGD?

Audit

JAGIELSKI M, ULLMAN J, OPREA A. Auditing differentially private machine learning: How private is private SGD?[C]//Advances in Neural Information Processing Systems, 2020.
Official paper link
Preprint link (more detailed)
Video link

Differential privacy gives a strong worst-case guarantee of individual privacy:

a differentially private algorithm ensures that, for any set of training examples, no attacker, no matter how powerful, can learn much more about a single training example than they could have learned had that example been excluded from the training data.
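
Formally, a randomized mechanism $M$ is $(\varepsilon, \delta)$-differentially private if, for all pairs of datasets $D_0, D_1$ differing in a single example and all sets of outcomes $O$,

$$\Pr[M(D_0) \in O] \le e^{\varepsilon}\,\Pr[M(D_1) \in O] + \delta .$$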


So, how closely can we measure the privacy loss?

Here an upper bound on ε comes from a privacy proof: the smaller the bound, the stronger the guaranteed privacy.

A lower bound on ε comes from an attack: the larger the bound, the less privacy the algorithm actually provides.

A privacy proof only gives an upper bound on the privacy loss (a smaller ε means more privacy). Improvements to the proof bring this upper bound closer to the true privacy loss. Indeed, the analysis of the algorithm is always pessimistic, and current theoretical analyses are not tight.

Moreover, differential privacy is a worst-case notion: on realistic datasets and against realistic attacks, it usually provides more privacy than the worst-case guarantee suggests. Privacy attacks can therefore only approach the worst case from below. That is exactly what the rest of the paper does: construct an efficient attack that yields a strong empirical lower bound.


DP-SGD


This paper mainly discusses auditing DP-SGD.

DP-SGD is a modification of SGD that makes two changes to the learning process to preserve privacy: clipping per-example gradients and adding noise.

At each iteration t, it takes a random sample L_t; for each i ∈ L_t it computes the per-example gradient and clips it to norm at most C, then sums the clipped gradients, adds Gaussian noise, and takes a gradient step, as sketched below.
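
A minimal numpy sketch of this loop for a logistic-regression model (the linear model used by the attack below); hyperparameter names and values (clip_norm, noise_mult, lr) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def dp_sgd(X, y, iters=1000, batch_size=50, lr=0.1, clip_norm=1.0, noise_mult=1.1, seed=0):
    """DP-SGD sketch: per-example gradients are clipped to norm clip_norm,
    summed, perturbed with Gaussian noise, and used for a gradient step."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        batch = rng.choice(n, size=batch_size, replace=False)    # random sample L_t
        gw_sum, gb_sum = np.zeros(d), 0.0
        for i in batch:
            p = 1.0 / (1.0 + np.exp(-(w @ X[i] + b)))             # logistic prediction
            gw, gb = (p - y[i]) * X[i], p - y[i]                  # per-example gradient
            norm = np.sqrt(gw @ gw + gb * gb)
            scale = min(1.0, clip_norm / (norm + 1e-12))          # clip to norm at most clip_norm
            gw_sum += gw * scale
            gb_sum += gb * scale
        sigma = noise_mult * clip_norm                            # noise scale tied to the clip norm
        gw_sum += rng.normal(0.0, sigma, size=d)                  # add Gaussian noise to the sums
        gb_sum += rng.normal(0.0, sigma)
        w -= lr * gw_sum / batch_size
        b -= lr * gb_sum / batch_size
    return w, b
```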

Poisoning Attacks

Goal: construct the pair of datasets D0, D1
Backdoor Attacks


Implement (a sketch follows the list)
  1. Xp = GETRANDOMROWS(X, k): randomly select k rows of X to poison.

  2. Pert(x): set the first 5×5 block of pixels of x to 1.

  3. yp: set the poisoned label y to 1.
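
A minimal numpy sketch of these three steps, assuming flattened 28×28 grayscale images with pixel values in [0, 1]; the image shape, target label 1, and function names are illustrative assumptions, not the paper's exact code:

```python
import numpy as np

def backdoor_poison(X, y, k, img_shape=(28, 28), seed=0):
    """Baseline backdoor poisoning sketch: pick k rows, stamp a 5x5 trigger,
    relabel them as class 1, and append them to the clean data."""
    rng = np.random.default_rng(seed)
    rows = rng.choice(len(X), size=k, replace=False)          # GETRANDOMROWS(X, k)
    Xp = X[rows].copy().reshape(-1, *img_shape)
    Xp[:, :5, :5] = 1.0                                        # Pert(x): first 5x5 pixels set to 1
    Xp = Xp.reshape(k, -1)
    yp = np.ones(k, dtype=y.dtype)                             # yp = 1
    # D1 = D0 plus the k poisoned examples; D0 is the clean (X, y)
    return np.vstack([X, Xp]), np.concatenate([y, yp])
```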

However

Clipping provides no formal privacy guarantee on its own, but many poisoning attacks perform significantly worse in the presence of clipping.

The objective of the attack is to decrease the loss on (x_p, y_p); that is, the poisoned points must contribute a large gradient at every iteration. In standard SGD the batch gradient is $g_t = \frac{1}{|L_t|}\sum_{i \in L_t} g_t(x_i)$.

$$\nabla_w \ell(w \cdot x_p + b,\ y_p) = \ell'(w \cdot x_p + b,\ y_p)\, x_p$$

Doubling the magnitude of this gradient (for example, by doubling ‖x_p‖) means half as many poisoning points are required for the same effect.

However, in the presence of clipping, this relationship breaks down.
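
The reason is the per-example clipping step. With clipping norm $C$,

$$\mathrm{clip}_C(g) = g \cdot \min\!\left(1,\ \frac{C}{\lVert g \rVert_2}\right),$$

so once the poisoned gradient $\ell'(w \cdot x_p + b,\ y_p)\, x_p$ exceeds norm $C$, scaling $x_p$ further does not increase its contribution to the update.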

Clipping-Aware Poisoning

The attack must produce not only large gradients, but also distinguishable gradients. (That is, the distribution of gradients arising from poisoned data must differ significantly from that of clean data.)

The clipping-aware attack therefore chooses $x_p$ by minimizing

$$\mathrm{Var}_{(x,y)\in D}\!\left[\ell'(w \cdot x_p + b,\ y_p)\, x_p \cdot \ell'(w \cdot x + b,\ y)\, x\right]$$
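
The paper's clipping-aware attack (ClipBKD) makes this variance small by taking the backdoor direction along the direction of least variance of the data, i.e. the right singular vector of X with the smallest singular value, so the poisoned gradient stays distinguishable from clean gradients even after clipping. A minimal numpy sketch; whether the data is centered first and how x_p is rescaled into a valid input are assumptions here:

```python
import numpy as np

def clipbkd_direction(X):
    """Sketch: return the unit direction of least variance of the data,
    used as the backdoor perturbation direction."""
    Xc = X - X.mean(axis=0)                         # centering is an assumption
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[-1]                                   # singular vector with the smallest singular value

# Hypothetical usage: x_p = scale * clipbkd_direction(X_train), with y_p set to the less likely label.
```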


Audit


  1. In the case of $\delta = 0$, the definition of differential privacy gives $\Pr[M(D_0)\in O] \le e^{\varepsilon}\,\Pr[M(D_1)\in O]$ for every outcome set $O$, so an attack that outputs 1 with probability $p_0$ on models trained on $D_0$ and $p_1$ on models trained on $D_1$ certifies the lower bound $\varepsilon \ge \ln(p_0/p_1)$ (see the sketch below).
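
A minimal sketch of that audit step, assuming the attack is run as a binary distinguisher over many training runs on $D_0$ and on $D_1$; the paper makes the bound statistically rigorous with Clopper-Pearson confidence intervals on the two success probabilities:

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson(successes, trials, alpha=0.05):
    """Two-sided Clopper-Pearson confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lo, hi

def eps_lower_bound(hits_d0, hits_d1, trials, alpha=0.05):
    """Empirical epsilon lower bound for the delta = 0 case: compare a
    conservative lower bound on p0 with a conservative upper bound on p1."""
    p0_lo, _ = clopper_pearson(hits_d0, trials, alpha)
    _, p1_hi = clopper_pearson(hits_d1, trials, alpha)
    if p0_lo <= 0.0:
        return 0.0
    if p1_hi <= 0.0:
        return float("inf")
    return max(0.0, float(np.log(p0_lo / p1_hi)))

# Hypothetical numbers: eps_lower_bound(hits_d0=480, hits_d1=60, trials=500)
```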