LLM Security
Reading notes on papers in AI security (read some time ago; no longer updated).
Aiming for CCF-A publications.
POSEIDON: Privacy-Preserving Federated Neural Network Learning
Reading notes. Original post, 2024-05-27.
Practical Blind Membership Inference Attack via Differential Comparisons
Reading notes. Original post, 2024-05-22.
Transfer Attacks Revisited: A Large-Scale Empirical Study in Real Computer Vision Settings
Reading notes. Original post, 2024-05-15.
BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Reading notes. Original post, 2024-05-08.
Bad Characters: Imperceptible NLP Attacks
Reading notes. The paper proposes a new class of attacks on NLP tasks that treats the model as a black box; the attacks are simple to mount, so they apply across a wide range of NLP tasks. Original post, 2024-04-27.
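To make the idea of imperceptible perturbations concrete, here is a minimal sketch of one such trick: inserting zero-width characters that are invisible when rendered but change the character sequence a black-box NLP model receives. The helper name and insertion strategy are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal illustration of an imperceptible-character perturbation
# (a sketch of the general idea, not the paper's method).
ZWSP = "\u200b"  # zero-width space: invisible when rendered

def perturb(text: str, positions: list[int]) -> str:
    """Insert a zero-width space before each given character index."""
    out = []
    for i, ch in enumerate(text):
        if i in positions:
            out.append(ZWSP)  # invisible to a human reader
        out.append(ch)
    return "".join(out)

clean = "transfer money to account 1234"
adv = perturb(clean, [9, 15])

print(adv == clean)           # False: the strings differ
print(len(adv) - len(clean))  # 2: two invisible characters were added
```

Rendered side by side, `clean` and `adv` look identical, yet a tokenizer sees different inputs, which is what lets the attack work against models treated purely as black boxes.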
Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems
Reading notes. Original post, 2024-05-05.