Traditional adversarial examples aim to make a model misclassify. The attack in this paper instead makes the model perform a specific task chosen by the attacker, and that task may be one the model was never trained on.
We introduce attacks that instead reprogram the target model to perform a task chosen by the attacker—without the attacker needing to specify or compute the desired output for each test-time input. This attack finds a single adversarial perturbation that can be added to all test-time inputs to a machine learning model in order to cause the model to perform a task chosen by the adversary—even if the model was not trained to do this task.
Formula:

$$h_g(f(h_f(\tilde{x}))) = g(\tilde{x})$$

where $f$ is the frozen target model, $h_f$ maps an input $\tilde{x}$ from the adversarial task's domain into a valid input for $f$, and $h_g$ maps $f$'s output back to an output of the adversarial task $g$.
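The composition $h_g \circ f \circ h_f$ can be sketched with a toy frozen model. This is a minimal illustration only: `W_f`, `delta`, and `label_map` are all hypothetical stand-ins (the paper attacks real ImageNet classifiers, and `delta` would be found by optimization rather than sampled at random).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the frozen target model f: a fixed linear
# classifier over 16-dim inputs with 4 output classes (hypothetical).
W_f = rng.normal(size=(4, 16))

def f(x):
    return int(np.argmax(W_f @ x))

# h_f: embed the small adversarial-task input x_tilde (4-dim) into
# the target model's 16-dim input space, then add the SINGLE
# perturbation delta shared by all test-time inputs.
delta = 0.1 * rng.normal(size=16)   # in practice, optimized by the attacker

def h_f(x_tilde):
    x = np.zeros(16)
    x[:4] = x_tilde          # embed the adversarial-task input
    return x + delta         # universal additive "program"

# h_g: fixed mapping from the target model's 4 labels to the
# adversarial task's 2 labels.
label_map = {0: 0, 1: 0, 2: 1, 3: 1}

def h_g(y):
    return label_map[y]

# The adversarial task g, realized entirely through the frozen model f:
def g(x_tilde):
    return h_g(f(h_f(x_tilde)))

print(g(np.ones(4)))
```

Note that $f$ itself is never modified: the attacker only controls the input-side mapping $h_f$ (including the perturbation) and the output-side relabeling $h_g$.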