Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning (IEEE S&P 2018)

Starting point of the paper
A systematic study of poisoning attacks on linear regression under different adversarial models
Starting from existing taxonomies of poisoning attacks, the paper proposes an optimization framework tailored to regression models
It designs a fast statistical attack that requires little knowledge of the learning process
It designs a new robust defense algorithm (TRIM) that largely outperforms existing robust regression methods
TRIM offers strong robustness and resilience
Its convergence is guaranteed and an upper bound on the MSE is established
The attacks and defenses are extensively evaluated on four regression models and on several datasets from different domains

Approach
Effectiveness of the attack
Success of the poisoning attack, measured by the difference in test-set MSE between the poisoned and clean models (see the sketch below)
Running time of the attack
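A minimal sketch of this effectiveness metric, assuming scikit-learn-style ridge regression and hypothetical clean/poisoned/test arrays; the attack is considered more successful the larger the MSE increase it causes on the test set:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

def attack_effectiveness(X_train, y_train, X_poison, y_poison, X_test, y_test):
    """Measure attack success as the increase in test-set MSE after poisoning."""
    # Model trained on the clean training set only.
    clean_model = Ridge(alpha=1.0).fit(X_train, y_train)
    clean_mse = mean_squared_error(y_test, clean_model.predict(X_test))

    # Model trained on the clean data plus the injected poisoning points.
    X_mix = np.vstack([X_train, X_poison])
    y_mix = np.concatenate([y_train, y_poison])
    poisoned_model = Ridge(alpha=1.0).fit(X_mix, y_mix)
    poisoned_mse = mean_squared_error(y_test, poisoned_model.predict(X_test))

    # Larger difference means a more effective poisoning attack.
    return poisoned_mse - clean_mse
```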
Goal of the attack
Corrupt the learning model
Attack scenarios
White-box attacks
Black-box attacks
Ideal world: the learning process includes a data preprocessing stage that performs data cleaning and normalization, after which the training data can be represented;
Testing phase: after preprocessing, the model is applied to new data, and the regression model learned during training is used to produce numerical predictions;
Adversarial world: in a poisoning attack, the attacker injects poisoning points into the training set before the regression model is trained
Poisoning Attack Strategy
Bilevel optimization problem
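The attack can be written as a bilevel problem: the outer level chooses the poisoning points $D_p$ to maximize the attacker's objective $\mathcal{W}$ (for example, the MSE on a validation set $\mathcal{D}'$), while the inner level is the ordinary regularized training problem on the poisoned training set $\mathcal{D}_{tr} \cup D_p$ with loss $\mathcal{L}$. A sketch of this formulation, with that notation assumed:

$$
D_p^{*} = \arg\max_{D_p} \; \mathcal{W}(\mathcal{D}', \theta_p)
\quad \text{s.t.} \quad
\theta_p \in \arg\min_{\theta} \; \mathcal{L}(\mathcal{D}_{tr} \cup D_p, \theta)
$$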
Attack Methodology
Optimization-based Poisoning Attacks
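Optimization-based attacks of this kind typically move each poisoning point by gradient ascent on the attacker's objective, retraining the model at every step. A minimal sketch, assuming a hypothetical callable attacker_loss(x_p, y_p) that retrains on the poisoned set and returns the attacker's validation loss, and a [0, 1] feasible feature box; the paper's attack also optimizes the response value and uses line search, which this sketch omits:

```python
import numpy as np

def optimize_poison_point(x_p, y_p, attacker_loss, eta=0.1, n_iters=100):
    """Iteratively move one poisoning point to increase the attacker's objective.

    attacker_loss(x_p, y_p) is assumed to retrain the model on the poisoned
    training set and return the loss on the attacker's validation set.
    """
    eps = 1e-4
    for _ in range(n_iters):
        # Numerical gradient of the attacker's objective w.r.t. the point's features.
        base = attacker_loss(x_p, y_p)
        grad = np.zeros_like(x_p)
        for j in range(len(x_p)):
            x_step = x_p.copy()
            x_step[j] += eps
            grad[j] = (attacker_loss(x_step, y_p) - base) / eps

        # Gradient ascent step, projected back into the feasible feature range.
        x_p = np.clip(x_p + eta * grad, 0.0, 1.0)
    return x_p
```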
Defense algorithms
Existing defense proposals
noise-resilient regression algorithms
adversarially resilient defenses
TRIM algorithm
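A minimal sketch of the TRIM idea, assuming scikit-learn-style estimators: alternately fit the model on a subset of n points and re-select the n points with the smallest residuals, so that high-residual (likely poisoned) points are trimmed out of the fit. The actual algorithm comes with convergence and MSE guarantees; this only illustrates the alternating structure:

```python
import numpy as np
from sklearn.linear_model import Ridge

def trim_regression(X, y, n_clean, n_iters=50):
    """Robustly fit a regression model by iteratively trimming high-residual points.

    n_clean is the assumed number of genuine (unpoisoned) training points.
    """
    n_total = len(y)
    # Start from an arbitrary subset of n_clean points.
    subset = np.random.choice(n_total, size=n_clean, replace=False)
    model = Ridge(alpha=1.0)

    for _ in range(n_iters):
        # Fit on the current subset only.
        model.fit(X[subset], y[subset])

        # Keep the n_clean points with the smallest squared residuals.
        residuals = (model.predict(X) - y) ** 2
        new_subset = np.argsort(residuals)[:n_clean]

        if set(new_subset) == set(subset):
            break  # subset stabilized, trimmed loss can no longer decrease
        subset = new_subset

    return model, subset
```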

We perform the first systematic study on poisoning attacks and their countermeasures for linear regression models.
We propose a new optimization framework for poisoning attacks and a fast statistical attack that requires minimal knowledge of the training process. We also take a principled approach in designing a new robust defense algorithm that largely outperforms existing robust regression methods.
We extensively evaluate our proposed attack and defense algorithms on several datasets from health care, loan assessment, and real estate domains.
We demonstrate the real implications of poisoning attacks in a case study on a health care application.
Finally, we believe that our work will inspire future research towards developing more secure learning algorithms against poisoning attacks.
