Adversarial
薄荷奶绿Yena
Master's student at a 211 university; research interests: visual question answering, visual dialogue, and multimodal adversarial attack and defense.
【Diffusion Adversarial】AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models
Original title: AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models
Code: https://github.com/lafeat/advdiffuser
Year: 2023
Venue: ICCV
Abstract (excerpt): Previous work on adversarial examples typically involves a fixed norm perturbation budget, which fails to captu…
Posted 2024-07-10
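The "fixed norm perturbation budget" the excerpt contrasts against can be sketched with a standard PGD-style attack: every step is projected back into an ε-ball around the clean input. The toy linear model, loss, and ε value below are illustrative assumptions of mine, not anything from the AdvDiffuser paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3072))  # toy linear classifier: 10 classes, 32x32x3 input

def loss_grad(x, y):
    """Gradient of cross-entropy w.r.t. the input, for the toy linear model."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.zeros(10)
    onehot[y] = 1.0
    return W.T @ (p - onehot)

def pgd_linf(x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Each step is clipped back into the fixed eps-ball: the 'norm budget'."""
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # enforce ||delta||_inf <= eps
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv

x = rng.uniform(0, 1, size=3072)
x_adv = pgd_linf(x, y=0)
```

The perturbation never exceeds the budget regardless of how many steps run, which is exactly the constraint that norm-free "natural" adversarial examples drop.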
【Multimodal Attack】Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models
Original title: Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models
Code: https://github.com/Zoky-2020/SGA
Year: 2023
Venue: ICCV
Abstract (excerpt): Vision-language pre-training (VLP) models have shown vulnerability to adversarial examp…
Posted 2024-06-21
【Image Attack Transferability】FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks
Original title: FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks
Code: not released
Year: 2024
Venue: AAAI
Abstract (excerpt): Deep neural networks are known to be vulnerable to security risks due to the inherent transferable nature of adversarial examples. De…
Posted 2024-05-27
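A rough illustration of the "frequency-aware" idea in the title: decompose a perturbation into low- and high-frequency components with an FFT mask, so each band can be handled separately. The radius and the split itself are my own illustrative assumptions; since the paper's code is unreleased, this is not the FACL-Attack method.

```python
import numpy as np

def split_frequency(delta, radius=4):
    """Split a 2-D perturbation into low/high-frequency parts via an FFT mask."""
    F = np.fft.fftshift(np.fft.fft2(delta))      # center the zero frequency
    h, w = delta.shape
    yy, xx = np.mgrid[:h, :w]
    low_mask = np.hypot(yy - h // 2, xx - w // 2) <= radius
    low = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real
    high = delta - low                           # remainder is the high band
    return low, high

delta = np.random.default_rng(0).normal(size=(32, 32))
low, high = split_frequency(delta)
```

By construction the two bands sum back to the original perturbation, so any regularization applied per band still describes the same total perturbation.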
【Multimodal Adversarial】VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models
Original title: VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models
Code: https://github.com/ericyinyzy/VQAttack
Year: 2024
Venue: AAAI
Abstract (excerpt): Visual Question Answering (VQA) is a fundamental task in computer vision and natural lang…
Posted 2024-05-27
[Physical Adversarial Attack] Adversarial Attack with Raindrops
Original title: Adversarial Attack with Raindrops
Code: not released
Year: 2023
Venue: CVPR
Abstract (excerpt): Deep neural networks (DNNs) are known to be vulnerable to adversarial examples, which are usually designed artificially to fool DNNs, but rarely exist in real-world scenarios. In this pa…
Posted 2024-03-29
【Multimodal Adversarial Attack】VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Original title: VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Code: https://github.com/ericyinyzy/VLAttack
Year: 2023
Venue: NeurIPS
Abstract (excerpt): Vision-Language (VL) pre-trained models have shown their superiority on many multimodal task…
Posted 2024-03-29
【Multimodal Adversarial】AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
Original title: AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
Code: https://github.com/CGCL-codes/AdvCLIP
Year: 2023
Venue: ACM MM
Abstract (excerpt): Multimodal contrastive learning aims to train a general-purpose feature extractor, such as CLIP, o…
Posted 2024-03-07
【Textual Adversarial Attack】Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework
Original title: Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework
Code: https://github.com/Phantivia/T-PGD
Year: 2023
Venue: ACL
Abstract (excerpt): Despite recent success on various tasks, deep learning techniques still perform poorly on adversari…
Posted 2024-03-07
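The gap between CV-style gradient attacks and discrete text can be sketched as: take a gradient step in the continuous embedding space, then project the perturbed embeddings back to the nearest vocabulary tokens. The vocabulary, embeddings, step size, and stand-in gradient below are all made up for illustration; this is not the T-PGD algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_emb = rng.normal(size=(50, 8))  # toy vocabulary: 50 tokens, 8-dim embeddings

def nearest_tokens(e):
    """Project continuous embeddings back to the closest vocabulary tokens."""
    d = np.linalg.norm(vocab_emb[None, :, :] - e[:, None, :], axis=-1)
    return d.argmin(axis=1)

tokens = np.array([3, 17, 42])
e = vocab_emb[tokens]                 # embed the discrete input
grad = rng.normal(size=e.shape)       # stand-in for a loss gradient w.r.t. e
e_adv = e + 0.5 * np.sign(grad)       # one PGD-style step in embedding space
adv_tokens = nearest_tokens(e_adv)    # back to a discrete adversarial text
```

The projection step is what makes the continuous attack usable on text: the model only ever sees valid tokens, while the gradient signal still comes from the continuous space.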
【Semantic-Perturbation Adversarial Attack】Mutual-modality Adversarial Attack with Semantic Perturbation
Original title: Mutual-modality Adversarial Attack with Semantic Perturbation
Code: not released
Year: 2024
Venue: AAAI
Abstract (excerpt): Adversarial attacks constitute a notable threat to machine learning systems, given their potential to induce erroneous predictions and classifications. Howeve…
Posted 2024-02-29
【Graph Adversarial】Local-Global Defense against Unsupervised Adversarial Attacks on Graphs
Original title: Local-Global Defense against Unsupervised Adversarial Attacks on Graphs
Code: https://github.com/jindi-tju/ULGD/blob/main
Year: 2023
Venue: AAAI
Abstract (excerpt): Unsupervised pre-training algorithms for graph representation learning are vulnerable to adversarial attac…
Posted 2023-12-18