Code: https://github.com/Billy1900/TrojanNet (self-produced implementation)
-
We prove theoretically that detection of the Trojan network is computationally infeasible and demonstrate empirically that the carrier network does not compromise its disguise.
-
Our method utilizes excess model capacity to simultaneously learn a public and a secret task in a single network. Unlike multi-task learning, the two tasks do not share common features, and the secret task remains undetectable without the hidden key.
-
Framework:
-
- Permutation shuffles the layer parameters
- Loss and gradient
- Select permutations
- Pseudo-random permutation generator H : K → Π_d; the associated EXISTS-PERM decision problem is NP-complete
-
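The permutation mechanism above can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual implementation: the function names (`key_to_permutation`, `permuted_view`) and the SHA-256 seeding are hypothetical, but they capture the idea of H : K → Π_d, i.e. deriving a pseudo-random permutation of a layer's flattened weights from a secret key, so the same parameters serve a second, hidden task.

```python
import hashlib
import numpy as np

def key_to_permutation(key: str, d: int) -> np.ndarray:
    """Sketch of H : K -> Pi_d — hash the secret key into a PRNG seed
    and draw a permutation of d elements. Without the key, the
    permutation is indistinguishable from a random one."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.permutation(d)

def permuted_view(weights: np.ndarray, key: str) -> np.ndarray:
    """The secret task reads the same parameters, shuffled by pi = H(key).
    No extra parameters are stored; only the ordering differs."""
    flat = weights.ravel()
    pi = key_to_permutation(key, flat.size)
    return flat[pi].reshape(weights.shape)

# Joint training would minimize L_public(theta) + L_secret(pi(theta)),
# routing the secret task's gradients back through the inverse permutation.
w = np.arange(6.0).reshape(2, 3)
w_secret = permuted_view(w, "secret-key")
assert sorted(w_secret.ravel()) == sorted(w.ravel())  # same multiset of parameters
```

Because the permutation is a bijection over the existing weights, the carrier network's parameter count and architecture are unchanged, which is what makes the hidden task hard to detect without the key.
-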
Results:
-
The TRN50 network achieves test accuracy similar to that of an RN50 trained on the single task alone, which shows that simultaneous training of multiple tasks has no significant effect on classification accuracy. In addition, it is feasible to train a pair of classification and regression tasks simultaneously.
-
Using group normalization: We observe a similar trend of minimal effect on performance when network weights are shared between two tasks (rows 2 to 7 compared to row 1). The impact on accuracy is slightly more noticeable when training all four tasks simultaneously.
-
Selecting the threshold L: This suggests that choosing a tight threshold L may be very difficult, requiring an intricate balance between computational efficiency and control of the false-positive rate.
-
Comparison
- Individual TRN50 models (dark orange) have similar accuracy to that of HRN50 models (dark blue) on both datasets
- Ensembling multiple TRN50 networks (light orange) provides a large boost of accuracy over the individual models
- The effect of ensembling TRN50 models is surprisingly strong (dark orange)
-
Model capacity: larger models have more excess capacity to share among different permutations
More updates: https://github.com/Billy1900/Backdoor-Learning
-