Notes on TrojanNet: Embedding Hidden Trojan Horse Models in Neural Networks (2020)

Code: https://github.com/Billy1900/TrojanNet (my own reproduction)

  • We prove theoretically that detecting the Trojan network is computationally infeasible, and demonstrate empirically that the transport network does not compromise its disguise.

  • Our method utilizes excess model capacity to simultaneously learn a public and a secret task in a single network; unlike multi-task learning, the two tasks do not share common features, and the secret task remains undetectable without the hidden key.

  • Framework:
    [figure: TrojanNet framework overview]

  • Permutation shuffles the layer parameters:
    [figure: key-derived permutation shuffling the flattened layer parameters]
    Loss and gradient:
    [figure: joint loss and gradient formulas]
    Selecting permutations:
    A keyed pseudo-random permutation generator H: K → Π_d maps a secret key to a permutation of the d parameters, and the associated EXISTS-PERM decision problem (roughly: does any permutation of the weights achieve loss below a threshold L?) is proven NP-complete.
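Below is a minimal sketch of this weight-sharing scheme, assuming the setup described above: a single flat parameter vector θ serves the public task as-is and the secret task after a key-derived permutation. The names (key_to_permutation, SharedLinearNet), the tiny MLP, and the hash-seeded RNG are illustrative stand-ins for exposition, not the paper's cryptographic permutation generator or the linked repository's code.

```python
# Illustrative sketch (not the authors' implementation): one flat parameter
# vector theta serves the public task directly and the secret task after a
# permutation derived from a secret key.
import hashlib

import torch
import torch.nn.functional as F


def key_to_permutation(key: str, d: int) -> torch.Tensor:
    """H: key -> permutation of d indices. A hash-seeded RNG stands in for the
    paper's pseudo-random permutation generator (an assumption for brevity)."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    g = torch.Generator().manual_seed(seed)
    return torch.randperm(d, generator=g)


class SharedLinearNet(torch.nn.Module):
    """Tiny MLP whose layers are read out of one flat parameter vector theta,
    optionally permuted by a secret key before being reshaped into weights."""

    def __init__(self, sizes=(784, 256, 10)):
        super().__init__()
        self.sizes = sizes
        d = sum(i * o + o for i, o in zip(sizes[:-1], sizes[1:]))
        self.theta = torch.nn.Parameter(0.01 * torch.randn(d))

    def forward(self, x, perm=None):
        theta = self.theta if perm is None else self.theta[perm]
        layers = list(zip(self.sizes[:-1], self.sizes[1:]))
        offset = 0
        for idx, (i, o) in enumerate(layers):
            w = theta[offset:offset + i * o].view(o, i)
            offset += i * o
            b = theta[offset:offset + o]
            offset += o
            x = F.linear(x, w, b)
            if idx < len(layers) - 1:
                x = torch.relu(x)
        return x


# Joint training: total loss = public-task loss on theta
#                            + secret-task loss on the permuted parameters pi_k(theta).
# Indexing by the permutation is differentiable, so autograd routes the
# secret-task gradient back to the matching entries of theta automatically.
net = SharedLinearNet()
perm = key_to_permutation("my-secret-key", net.theta.numel())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_pub, y_pub = torch.randn(32, 784), torch.randint(0, 10, (32,))  # public batch
x_sec, y_sec = torch.randn(32, 784), torch.randint(0, 10, (32,))  # secret batch

loss = F.cross_entropy(net(x_pub), y_pub) + F.cross_entropy(net(x_sec, perm), y_sec)
opt.zero_grad()
loss.backward()
opt.step()
```

Because both losses are computed from the same underlying vector, one optimizer step updates both tasks at once; a detector without the key would face a search over d! permutations of θ, which is the EXISTS-PERM hardness noted above.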

  • Results:

    • The TRN50 network achieves test accuracy similar to that of an RN50 trained on the single task alone, which shows that simultaneously training multiple tasks has no significant effect on classification accuracy. In addition, the paper shows that it is feasible to train a pair of classification and regression tasks simultaneously.
      [figures: test-accuracy results]

    • Using group normalization: we observe a similar trend of minimal effect on performance when network weights are shared between two tasks (rows 2 to 7 compared to row 1). The impact on accuracy is slightly more noticeable when training all four tasks simultaneously.

    • Selecting the threshold L: here L is the loss cutoff a detector would use to decide whether some permutation of the weights hides a well-performing secret model. The results suggest that selecting a tight threshold L may be very difficult and may require an intricate balance between computational efficiency and controlling the false-positive rate.

    • Comparison (a sketch of key-based ensembling follows this list)

      1. Individual TRN50 models (dark orange) have accuracy similar to that of HRN50 models (dark blue) on both datasets.
      2. Ensembling multiple TRN50 networks (light orange) provides a large accuracy boost over the individual models.
      3. The effect of ensembling TRN50 models is surprisingly strong compared to the individual models (dark orange), even though the ensemble members share the same underlying parameters and differ only in their keys.
      [figure: accuracy of individual vs. ensembled models]
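To make the ensembling result concrete, here is a minimal, self-contained sketch assuming the same shared-parameter setup: every ensemble member is obtained from one flat vector θ by a different key-derived permutation, so the ensemble adds no extra storage. The single-linear-layer model, the key strings, and the softmax averaging are illustrative assumptions, not the paper's experimental configuration.

```python
# Illustrative sketch (not the authors' code): an "ensemble" of TrojanNet-style
# models that all share one flat parameter vector theta and differ only in the
# key-derived permutation applied before the forward pass.
import hashlib

import torch
import torch.nn.functional as F

D_IN, D_OUT = 784, 10
theta = 0.01 * torch.randn(D_OUT * D_IN + D_OUT)  # shared flat vector, assumed already trained


def key_to_permutation(key: str, d: int) -> torch.Tensor:
    # Hash-seeded stand-in for the paper's pseudo-random permutation generator (assumption).
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    g = torch.Generator().manual_seed(seed)
    return torch.randperm(d, generator=g)


def model_forward(theta_view: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # A single linear classifier read out of the (possibly permuted) vector.
    w = theta_view[: D_OUT * D_IN].view(D_OUT, D_IN)
    b = theta_view[D_OUT * D_IN:]
    return F.linear(x, w, b)


def ensemble_predict(x: torch.Tensor, keys: list) -> torch.Tensor:
    # Each key induces a different model from the same theta; average their softmax outputs.
    probs = [
        F.softmax(model_forward(theta[key_to_permutation(k, theta.numel())], x), dim=-1)
        for k in keys
    ]
    return torch.stack(probs).mean(dim=0)


x = torch.randn(4, D_IN)
print(ensemble_predict(x, ["key-1", "key-2", "key-3"]).argmax(dim=-1))
```

In the experiment summarized above the members are TRN50 networks sharing one set of weights; a plain linear model is used here only to keep the example short.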
    • Model capacity: larger models have more excess capacity to share among different permutations
      [figure: effect of model capacity]
More updates: https://github.com/Billy1900/Backdoor-Learning
