【Paper Notes 09】Differentially Private Hypothesis Transfer Learning (a differentially private transfer-learning model), ECML&PKDD 2018

This post walks through the paper "Differentially Private Hypothesis Transfer Learning", which studies how to perform transfer learning while protecting data privacy. The method learns a hypothesis in each source domain and perturbs it, computes an "importance weight" for each source, and passes this information to the target domain to build a differentially private Bayesian logistic regression model. The post also discusses how privacy risk accumulates as the number of iterations grows, and how the weights of the source hypotheses are determined by solving an optimization problem.

Series navigation

My paper-notes channel

【Active Learning】
【Paper Notes 01】Learning Loss for Active Learning, CVPR 2019
【Paper Notes 02】Active Learning for Convolutional Neural Networks: A Core-Set Approach, ICLR 2018
【Paper Notes 03】Variational Adversarial Active Learning, ICCV 2019
【Paper Notes 04】Ranked Batch-Mode Active Learning, ICCV 2016

【Transfer Learning】
【Paper Notes 05】Active Transfer Learning, IEEE T CIRC SYST VID 2020
【Paper Notes 06】Domain-Adversarial Training of Neural Networks, JMLR 2016
【Paper Notes 10】A unified framework of active transfer learning for cross-system recommendation, AI 2017
【Paper Notes 14】Transfer Learning via Minimizing the Performance Gap Between Domains, NIPS 2019

【Differential Privacy】
【Paper Notes 07】A Survey on Differentially Private Machine Learning, IEEE CIM 2020
【Paper Notes 09】Differentially Private Hypothesis Transfer Learning, ECML&PKDD 2018
【Paper Notes 11】Deep Domain Adaptation With Differential Privacy, IEEE TIFS 2020
【Paper Notes 12】Differential privacy based on importance weighting, Mach Learn 2013
【Paper Notes 13】Differentially Private Optimal Transport: Application to Domain Adaptation, IJCAI 2019

【Model inversion attack】
【Paper Notes 08】Model inversion attacks that exploit confidence information and basic countermeasures, SIGSAC 2015

Differentially Private Hypothesis Transfer Learning

Link to the original paper

1 Abstract

2 Background & Related Work

For more background on differential privacy, see
A Survey on Differentially Private Machine Learning (Paper Notes 07)
Model inversion attacks (Paper Notes 08)

Differential privacy essentially implies that the existence of any particular data point $s$ in a private data set $S$ cannot be determined by analyzing the output of a differentially private algorithm $\mathcal{M}$ applied to $S$.
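For concreteness (my addition, not spelled out in the post), the standard $(\epsilon, \delta)$-definition says that for all neighboring data sets $S, S'$ differing in one record and all measurable output sets $O$:

$$
\Pr[\mathcal{M}(S) \in O] \;\le\; e^{\epsilon}\,\Pr[\mathcal{M}(S') \in O] + \delta .
$$

With $\delta = 0$ this is pure $\epsilon$-differential privacy, the setting most relevant to the Laplace-noise mechanism sketched below.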

Popular approaches to achieving differential privacy:

  • output perturbation: adding carefully calibrated noise to the parameters of the learned hypothesis before releasing it (see the sketch after this list).
  • distributed privacy-preserving ML, where private data sets are collected by multiple parties.
    • One line of research involves exchanging differentially private information (e.g. gradients) among multiple parties during the iterative hypothesis training process.
    • An alternative line of work focuses on privacy-preserving model aggregation techniques.
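
A minimal sketch of output perturbation with the Laplace mechanism. The function name and the sensitivity value are my own illustration; the paper's actual mechanism and sensitivity analysis differ in the details:

```python
import numpy as np

def perturb_hypothesis(w, sensitivity, epsilon, rng=None):
    """Release a learned parameter vector w under epsilon-DP by adding
    Laplace noise with scale sensitivity / epsilon to each coordinate."""
    rng = np.random.default_rng() if rng is None else rng
    return w + rng.laplace(scale=sensitivity / epsilon, size=w.shape)

# Hypothetical example: a logistic-regression weight vector whose
# L1-sensitivity has been bounded at 0.05, released with epsilon = 1.0.
w = np.array([0.8, -1.2, 0.3])
w_private = perturb_hypothesis(w, sensitivity=0.05, epsilon=1.0)
print(w_private)
```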

The works most closely related to this paper focus on multi-task learning.

One of the key drawbacks of iterative differentially private methods is that privacy risks accumulate with each iteration. By the composition theorem of differential privacy, there is therefore a limit on how many iterations can be performed on a given private data set under a fixed total privacy budget, and this severely affects the utility-privacy trade-off of iterative solutions. In other words, utility drops because it is held back by the privacy constraint.
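A quick worked instance of basic (sequential) composition, my own illustration: if every iteration is $\epsilon_0$-DP on the same data set, then $T$ iterations are $T\epsilon_0$-DP in total, so a total budget $\epsilon$ caps the iteration count:

$$
T\,\epsilon_0 \le \epsilon \quad\Longrightarrow\quad T \le \frac{\epsilon}{\epsilon_0},
$$

e.g. with a total budget $\epsilon = 1$ and $\epsilon_0 = 0.05$ per iteration, at most $T = 20$ iterations are allowed.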

3 Setting & The Proposed Method

I think "hypothesis" here means the same as model/mapping/function.

My understanding of the setting and algorithm:
[Figure: the author's illustration of the setting and algorithm, from the original post]

  1. This is a multi-source transfer-learning setting.
  2. In each source domain $\mathcal{D}^k$, there are i.i.d. labeled samples $S_l^k = \{(x_i^k, y_i^k) : 1 \leqslant i \leqslant n_l^k\}$.
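
To make the summarized pipeline concrete (learn and perturb a hypothesis per source, weight the sources, use them on the target side), here is a toy end-to-end sketch. All concrete choices are mine: the 0.05 sensitivity bound is hypothetical, the accuracy-based weights are only a placeholder for the paper's weight-optimization problem, and centering an L2 penalty on the weighted combination is a crude proxy for the paper's differentially private Bayesian logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lam=0.1, lr=0.5, steps=500, prior_mean=None):
    """Gradient-descent logistic regression; if prior_mean is given, the
    L2 penalty is centered on it (a stand-in for an informative prior)."""
    w = np.zeros(X.shape[1])
    mu = np.zeros_like(w) if prior_mean is None else prior_mean
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y) + lam * (w - mu))
    return w

def perturb(w, sensitivity, epsilon):
    """Output perturbation with Laplace noise, as in Sect. 2."""
    return w + rng.laplace(scale=sensitivity / epsilon, size=w.shape)

def make_domain(n, w_true):
    """Toy binary-classification data with a domain-specific true weight."""
    X = rng.normal(size=(n, 2))
    y = (X @ w_true + 0.3 * rng.normal(size=n) > 0).astype(float)
    return X, y

# Two source domains and a small labeled target sample.
sources = [make_domain(500, np.array([2.0, -1.0])),
           make_domain(500, np.array([1.5, -0.8]))]
X_t, y_t = make_domain(30, np.array([1.8, -0.9]))

# 1) Learn a hypothesis in each source domain; perturb it before release.
priv = [perturb(train_logreg(X, y), sensitivity=0.05, epsilon=1.0)
        for X, y in sources]

# 2) "Importance weights" for the perturbed source hypotheses; here just
#    normalized target accuracy, a placeholder for the paper's optimization.
acc = np.array([np.mean(((X_t @ w) > 0) == y_t) for w in priv])
weights = acc / acc.sum()

# 3) The weighted combination serves as the prior mean for the target model.
w_prior = sum(a * w for a, w in zip(weights, priv))
w_target = train_logreg(X_t, y_t, prior_mean=w_prior)
print(w_target)
```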