face_adv paper reading - NI-SI-FGSM


Link: paper
Link: code


1. Paper title

NESTEROV ACCELERATED GRADIENT AND SCALE INVARIANCE FOR ADVERSARIAL ATTACKS (SI-NI-FGSM)

1.1 Abstract

Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on benign inputs. However, under the black-box setting, most existing adversaries often have a poor transferability to attack other defense models. In this work, from the perspective of regarding the adversarial example generation as an optimization process, we propose two new methods to improve the transferability of adversarial examples, namely Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM). NI-FGSM aims to adapt Nesterov accelerated gradient into the iterative attacks so as to effectively look ahead and improve the transferability of adversarial examples. While SIM is based on our discovery on the scale-invariant property of deep learning models, which we leverage to optimize the adversarial perturbations over the scale copies of the input images so as to avoid "overfitting" on the white-box model being attacked and generate more transferable adversarial examples. NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack to generate more transferable adversarial examples against the defense models. Empirical results on ImageNet dataset demonstrate that our attack methods exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks.

Summary: the paper proposes two methods, NI-FGSM and SI-FGSM. NI-FGSM uses Nesterov accelerated gradient to speed up the iterative attack. SI-FGSM exploits the scale invariance of deep models (not translation invariance, which TIM targets): it attacks several scaled copies of the input image to improve black-box transferability.

1.2 Ideas

1.3 Approach

1.4 Algorithm

1.5 Ideas: possible improvements

Jumping straight to the code implementation. I expect this to improve transferability to some degree: it should work very well in the white-box setting, and in the black-box setting models with similar architectures should transfer well, while different architectures will likely do somewhat worse.

#1. NI-FGSM

Code

NI-FGSM: the gradient is computed with respect to the look-ahead point x_nes

x_nes = x + momentum * alpha * grad  # Nesterov look-ahead point

with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
    logits_v3, end_points_v3 = inception_v3.inception_v3(
        x_nes, num_classes=num_classes, is_training=False, reuse=tf.AUTO_REUSE)

MI-FGSM: the gradient is computed with respect to the current point x

with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
    logits_v3, end_points_v3 = inception_v3.inception_v3(
        x, num_classes=num_classes, is_training=False, reuse=tf.AUTO_REUSE)

Generating the adversarial example afterwards:


noise = tf.gradients(cross_entropy, x)[0]                                 # gradient of the loss w.r.t. the input
noise = noise / tf.reduce_mean(tf.abs(noise), [1, 2, 3], keep_dims=True)  # per-example L1 normalization
noise = momentum * grad + noise                                           # momentum accumulation

Here noise is the accumulated update direction (not the loss itself).
This look-ahead is the only difference between NI and MI; NI subsumes MI, so NI can later be combined with the other generation methods.
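As a cross-check, one NI-FGSM iteration can be sketched in plain NumPy. This is a minimal sketch: `ni_fgsm_step` and `grad_fn` are hypothetical names, with `grad_fn` standing in for the model forward pass plus `tf.gradients` above.

```python
import numpy as np

def ni_fgsm_step(x, grad_accum, grad_fn, alpha, mu):
    """One NI-FGSM iteration; grad_fn(z) returns dJ/dz.

    Unlike MI-FGSM, the gradient is evaluated at the look-ahead
    point x_nes = x + mu * alpha * grad_accum instead of at x.
    """
    x_nes = x + mu * alpha * grad_accum  # Nesterov look-ahead
    g = grad_fn(x_nes)                   # gradient at the look-ahead point
    # per-example L1 normalization, then momentum, same as MI-FGSM
    g = g / np.mean(np.abs(g), axis=tuple(range(1, g.ndim)), keepdims=True)
    grad_accum = mu * grad_accum + g
    return x + alpha * np.sign(grad_accum), grad_accum
```

Setting mu = 0 recovers plain I-FGSM; evaluating `grad_fn(x)` instead of `grad_fn(x_nes)` recovers MI-FGSM.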

#2. SIM (SI-FGSM)
Besides translation invariance, the authors found that deep models also exhibit scale invariance: an image and its scaled copy (pixel values multiplied by a factor) have nearly the same loss on the same model. Treating this property as a form of model augmentation yields the objective function:
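As I recall it from the paper, with $S_i(x) = x / 2^i$ the $i$-th scale copy and $m$ the number of copies, the objective is roughly:

```latex
\arg\max_{x^{adv}} \; \frac{1}{m}\sum_{i=0}^{m-1} J\!\left(S_i(x^{adv}),\, y^{true}\right),
\quad \text{s.t.}\ \lVert x^{adv} - x \rVert_\infty \le \epsilon
```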
Code: MI + SI

    x_nes_2 = 1 / 2 * x_nes  # missing in the original snippet; follows the 1/4, 1/8, 1/16 pattern below
    with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
        logits_v3_2, end_points_v3 = inception_v3.inception_v3(
            x_nes_2, num_classes=num_classes, is_training=False, reuse=tf.AUTO_REUSE)
    cross_entropy_2 = tf.losses.softmax_cross_entropy(one_hot, logits_v3_2)
    noise += tf.gradients(cross_entropy_2, x)[0]

    x_nes_4 = 1 / 4 * x_nes
    with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
        logits_v3_4, end_points_v3 = inception_v3.inception_v3(
            x_nes_4, num_classes=num_classes, is_training=False, reuse=tf.AUTO_REUSE)
    cross_entropy_4 = tf.losses.softmax_cross_entropy(one_hot, logits_v3_4)
    noise += tf.gradients(cross_entropy_4, x)[0]

    x_nes_8 = 1 / 8 * x_nes
    with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
        logits_v3_8, end_points_v3 = inception_v3.inception_v3(
            x_nes_8, num_classes=num_classes, is_training=False, reuse=tf.AUTO_REUSE)
    cross_entropy_8 = tf.losses.softmax_cross_entropy(one_hot, logits_v3_8)
    noise += tf.gradients(cross_entropy_8, x)[0]

    x_nes_16 = 1 / 16 * x_nes
    with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
        logits_v3_16, end_points_v3 = inception_v3.inception_v3(
            x_nes_16, num_classes=num_classes, is_training=False, reuse=tf.AUTO_REUSE)
    cross_entropy_16 = tf.losses.softmax_cross_entropy(one_hot, logits_v3_16)
    noise += tf.gradients(cross_entropy_16, x)[0]

    noise = noise / tf.reduce_mean(tf.abs(noise), [1, 2, 3], keep_dims=True)

    noise = momentum * grad + noise

Implementation: apply a linear transform (pixel scaling) to the input, take one gradient per scaled copy, then accumulate them into the adversarial update.
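The unrolled blocks above amount to the following loop. A sketch only: `sim_gradient` and `grad_fn` are hypothetical names, with `grad_fn` standing in for one model forward pass plus `tf.gradients`.

```python
import numpy as np

def sim_gradient(x_nes, grad_fn, num_scales=5):
    """Sum the input-gradient over scale copies x / 2^i, i = 0..m-1.

    The TF code above unrolls this loop by hand for the
    scales 1, 1/2, 1/4, 1/8, 1/16.
    """
    noise = np.zeros_like(x_nes)
    for i in range(num_scales):
        noise += grad_fn(x_nes / 2 ** i)  # gradient on the i-th scaled copy
    return noise
```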

Given this structure, SI can be stacked with the other methods into SI-NI-DI-TI-MI-FGSM.
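Putting the two pieces together, a stacked SI + NI + MI update could look like the NumPy sketch below (hypothetical helper `si_ni_fgsm`; `grad_fn` again stands in for a model forward pass plus gradient, and the DI/TI transforms are omitted):

```python
import numpy as np

def si_ni_fgsm(x, grad_fn, eps=0.1, num_iter=10, mu=1.0, num_scales=5):
    """Sketch of a combined attack: Nesterov look-ahead (NI),
    scale-invariant gradient averaging (SI), and momentum (MI)."""
    alpha = eps / num_iter
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(num_iter):
        x_nes = x_adv + mu * alpha * g                # NI: look-ahead point
        noise = np.zeros_like(x)
        for i in range(num_scales):                   # SI: sum over scale copies
            noise += grad_fn(x_nes / 2 ** i)
        noise /= np.mean(np.abs(noise), axis=tuple(range(1, x.ndim)),
                         keepdims=True)               # per-example L1 normalization
        g = mu * g + noise                            # MI: momentum accumulation
        x_adv = np.clip(x_adv + alpha * np.sign(g),   # sign step, project to eps-ball
                        x - eps, x + eps)
    return x_adv
```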
