Paper Reading Notes [TPAMI-2022] Model-Protected Multi-Task Learning


Paper search (studyai.com)

Search for the paper: Model-Protected Multi-Task Learning

Search link: http://www.studyai.com/search/whole-site/?q=Model-Protected+Multi-Task+Learning

Keywords

Task analysis; Covariance matrices; Privacy; Security; Data models; Resource management; Multi-task learning; model protection; differential privacy; covariance matrix; low-rank subspace learning

Machine Learning

Multi-Task Learning

Abstract

Multi-task learning (MTL) refers to the paradigm of learning multiple related tasks together.


In contrast, in single-task learning (STL) each individual task is learned independently.


MTL often leads to better-trained models because it can leverage the commonalities among related tasks.

However, because MTL algorithms can “leak” information from different models across different tasks, MTL poses a potential security risk.


Specifically, an adversary may participate in the MTL process through one task and thereby acquire the model information for another task.


The previously proposed privacy-preserving MTL methods protect data instances rather than models, and some of them may underperform in comparison with STL methods.


In this paper, we propose a privacy-preserving MTL framework, based on perturbing the covariance matrix of the model matrix, to prevent information from each model from leaking to the other models.
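To make the covariance-perturbation idea concrete, here is a minimal sketch of perturbing the model covariance with the standard Gaussian mechanism. The function name, the column-clipping step, and the noise calibration are assumptions for illustration; the paper's actual mechanism and sensitivity analysis may differ.

```python
import numpy as np

def perturb_model_covariance(W, epsilon, delta, clip_norm=1.0, rng=None):
    """Illustrative sketch: release a noisy version of the covariance of the
    model matrix W (d x m, one column per task) so that shared-structure
    estimation only sees a differentially private summary of the task models."""
    rng = np.random.default_rng() if rng is None else rng
    d, m = W.shape
    # Clip each task's model column so a single task has bounded influence
    # on the covariance (sensitivity control; an assumption of this sketch).
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    W_clipped = W * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    cov = W_clipped @ W_clipped.T  # d x d covariance of the model matrix
    # Gaussian-mechanism noise calibrated to an assumed L2 sensitivity of clip_norm**2.
    sigma = (clip_norm ** 2) * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(scale=sigma, size=(d, d))
    noise = (noise + noise.T) / 2.0  # symmetrize so the released matrix stays symmetric
    return cov + noise
```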

We study two popular MTL approaches for instantiation, namely, learning the low-rank and group-sparse patterns of the model matrix.

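For reference, the two patterns are usually encoded with standard regularizers on the model matrix W = [w_1, ..., w_m]; the objectives below are illustrative and may not match the paper's exact formulation.

```latex
% Low-rank pattern: trace (nuclear) norm regularization encourages the task
% models to lie in a shared low-dimensional subspace.
\min_{W} \; \sum_{t=1}^{m} \mathcal{L}_t(w_t) + \lambda \, \|W\|_{*}

% Group-sparse pattern: the row-wise \ell_{2,1} norm encourages the tasks to
% select a common subset of features.
\min_{W} \; \sum_{t=1}^{m} \mathcal{L}_t(w_t) + \lambda \, \|W\|_{2,1}
```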

Our algorithms can be guaranteed not to underperform compared with STL methods.


We build our methods upon tools from differential privacy; privacy guarantees and utility bounds are provided, and heterogeneous privacy budgets are considered.
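As a reminder (standard definition, not specific to this paper), the differential-privacy guarantee referred to here is the usual (ε, δ)-DP condition:

```latex
% A randomized mechanism M satisfies (\epsilon, \delta)-differential privacy if,
% for all neighboring inputs D, D' and all measurable output sets S,
\Pr[\, M(D) \in S \,] \;\le\; e^{\epsilon} \, \Pr[\, M(D') \in S \,] + \delta .
```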

The experiments demonstrate that our algorithms outperform the baseline methods constructed by existing privacy-preserving MTL methods on the proposed model-protection problem…


Authors

Jian Liang, Ziqi Liu, Jiayu Zhou, Xiaoqian Jiang, Changshui Zhang, Fei Wang
