[Original] Notes on DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks
Challenges: C1. The stealthiness of backdoors makes them hard to identify through functional testing (which typically uses test accuracy as the detection criterion). C2. During backdoor detection, only limited information about the queried model is available. In the real world, a clean training dataset or a golden reference model may not be easy to obtain: training sets contain users' personal information, so they are generally not distributed with pre-trained models. C3. The defender does not know the attacker's chosen target. In our setting, the attacker is the provider of the malicious model and the defender is the end user. Not knowing the attacker's target makes neural trojan (NT) detection more complex, because for models with a large number of output clas…
2021-09-03 16:23:25 539
[Original] Notes on Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Code: https://github.com/kangliucn/Fine-pruning-defense. Context: because the datasets needed to train a model are very large, the hardware requirements high, and the training time long, model training is often outsourced to a third party. A malicious third party can plant a backdoor in the model it trains for us, corrupting the model's predictions, so we need methods to "sanitize" the model. The authors design a new backdoor attack of their own and defend against it with their proposed fine-pruning (a combination of prun…
2021-09-03 16:20:47 584
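The pruning half of the defense (remove the neurons that stay dormant on clean inputs, then fine-tune on clean data) can be sketched roughly as follows. This is a minimal numpy sketch, not the paper's code: `prune_dormant_neurons`, the `(in_dim, n_neurons)` weight layout, and the pruning fraction are illustrative assumptions, and the fine-tuning step is omitted.

```python
import numpy as np

def prune_dormant_neurons(W, clean_activations, prune_frac=0.3):
    """Zero out the neurons that are least active on clean data.

    W                 : (in_dim, n_neurons) weight matrix of one layer
    clean_activations : (n_samples, n_neurons) activations on clean inputs
    prune_frac        : fraction of neurons to prune
    """
    mean_act = clean_activations.mean(axis=0)
    n_prune = int(prune_frac * mean_act.size)
    dormant = np.argsort(mean_act)[:n_prune]   # least-active neurons
    W_pruned = W.copy()
    W_pruned[:, dormant] = 0.0                 # disable their incoming weights
    return W_pruned, dormant

# Toy example: neuron 0 never fires on clean data, so it gets pruned.
W = np.ones((2, 3))
acts = np.array([[0.0, 1.0, 2.0],
                 [0.0, 1.0, 2.0]])
W_pruned, dormant = prune_dormant_neurons(W, acts, prune_frac=0.34)
```

The intuition is that backdoor behavior hides in neurons that clean inputs never exercise; pruning them (and then fine-tuning to recover clean accuracy) disables the trigger pathway.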
[Original] Notes on STRIP: A Defence Against Trojan Attacks on Deep Neural Networks
Code: https://github.com/garrisongys/STRIP?utm_source=catalyzex.com. The detection method proposed in the paper is illustrated in the figure below. (In the figure, input x is the sample under test; the example shown is a sample carrying a trigger, the small square in the bottom-right corner. The author takes clean images from the validation set and superimposes each one onto x as a "watermark", producing multiple perturbed inputs. For example, if 100 clean images are used as watermarks, superimposing them one by one yields 100 perturbed inputs…
2021-09-03 16:19:28 955
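The core test above (superimpose clean images on x, then measure how confident the model stays) can be sketched as below. This is a minimal numpy sketch under stated assumptions: `strip_entropy`, the additive blending, and the `predict` callback returning softmax probabilities are all illustrative, not the paper's exact implementation or thresholding.

```python
import numpy as np

def strip_entropy(x, clean_images, predict):
    """Average prediction entropy of x superimposed with clean images.

    A trojaned x keeps predicting the target class no matter what is
    blended in, so its average entropy is abnormally low; a benign x
    yields high entropy because the blends confuse the model.
    """
    entropies = []
    for c in clean_images:
        blended = np.clip(x + c, 0.0, 1.0)   # superimpose the "watermark"
        p = predict(blended)                  # softmax probability vector
        entropies.append(-np.sum(p * np.log2(p + 1e-12)))
    return float(np.mean(entropies))
```

In the paper's setting, an input whose average entropy falls below a threshold calibrated on clean data is flagged as trojaned.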
[Original] Notes on Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
Code: https://github.com/mzweilin/EvadeML-Zoo. Feature squeezing: reducing the color bit depth of each pixel, and spatial smoothing. Framework: adversarial example attacks — L_p-norm attacks: FGSM, BIM, DeepFool, JSMA, Carlini/Wagner attacks, D…
2021-09-03 16:17:58 1091
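The two squeezers named above are simple image transforms; a minimal numpy sketch of both is given below. The function names and the edge-padded median filter are illustrative assumptions (the paper's reference code uses library filters); detection then compares the model's prediction on the original versus the squeezed input and flags a large disagreement as adversarial.

```python
import numpy as np

def squeeze_bit_depth(x, bits):
    """Reduce each pixel value (in [0, 1]) to `bits` bits of color depth."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, k=3):
    """k x k median filter (spatial smoothing) over a 2-D image, edge-padded."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(xp[i:i + k, j:j + k])
    return out
```

Both squeezers barely change a natural image's prediction, but they tend to destroy the finely tuned perturbations that adversarial examples rely on, which is what makes the prediction gap a usable detection signal.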
[Original] Notes on Backdoor Attack with Sample-Specific Triggers (2021)
A backdoor attack injects attacker-specified patterns into poisoned images and replaces the corresponding labels with a pre-defined target label. Existing backdoor attacks are easily defended against because their triggers are sample-agnostic; we ge…
2021-09-03 16:15:16 522
[Original] Notes on TrojanNet: Embedding Hidden Trojan Horse Models in Neural Networks (2020)
Code: https://github.com/Billy1900/TrojanNet (self-reproduced). We prove theoretically that detecting the Trojan network is computationally infeasible, and demonstrate empirically that the transport network does not compromise its disguise. Our method ut…
2021-09-03 16:13:52 813
[Original] Notes on BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain (2017)
Code: https://github.com/Billy1900/BadNet (reproduced version on CIFAR-10 and MNIST). Property: the model has state-of-the-art performance on the training and test datasets but behaves badly on specific attacker-chosen inputs. Context: outsourced training and transfer learning. Ca…
2021-09-03 16:02:26 703
[Original] Learning With Noisy Labels
Learning with noisy labels. 1. Context: deep learning has several fundamental problems: it requires a large amount of training data; it does not transfer well; it struggles with open-world inference; its principles are not transparent; deep learning heavily…
2021-09-03 12:45:59 737
Information System Security lab report, Huazhong University of Science and Technology
2020-08-10