FGM (Fast Gradient Method) and AWP (Adversarial Weight Perturbation) are two different techniques used in adversarial attacks and defenses for deep learning models. (Note that FGM is distinct from FGSM, the Fast Gradient Sign Method, which uses only the sign of the gradient.)
FGM (Fast Gradient Method):
FGM is a simple but effective method for generating adversarial examples. It perturbs the input by adding a small step in the direction of the gradient of the loss function with respect to the input, scaled so the perturbation has a fixed norm (the closely related FGSM instead steps in the direction of the element-wise sign of the gradient). By perturbing the input in the direction that locally increases the loss most, FGM aims to push the model toward an incorrect prediction.
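The gradient step described above can be sketched with a toy example. This is a minimal NumPy illustration, not a reference implementation: it assumes a simple logistic-regression loss so the input gradient can be computed analytically, and the helper name `fgm_perturb` is made up for this sketch.

```python
import numpy as np

def fgm_perturb(x, grad, eps=0.1):
    # FGM: step along the input gradient, normalized to L2 length eps
    norm = np.linalg.norm(grad)
    if norm == 0:
        return x
    return x + eps * grad / norm

def loss(w, x, y):
    # Logistic loss L = -log sigmoid(y * w.x)
    return -np.log(1.0 / (1.0 + np.exp(-y * np.dot(w, x))))

# Toy model: fixed weights w, clean input x, true label y
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.4])
y = 1.0

# Analytic gradient of the loss w.r.t. the input:
# dL/dx = -(1 - sigmoid(y * w.x)) * y * w
sigma = 1.0 / (1.0 + np.exp(-y * np.dot(w, x)))
grad = -(1.0 - sigma) * y * w

x_adv = fgm_perturb(x, grad, eps=0.1)
# The perturbation has L2 norm eps, and the loss on x_adv is
# higher than on the clean x, as the attack intends.
```

Swapping the normalized gradient for `np.sign(grad)` (scaled by eps) would turn this into the FGSM variant mentioned above.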
The FGM technique is commonly used in adversarial attacks, where an attacker tries to craft adversarial examples to deceive a model. By applying FGM, the attacker can generate perturbed inputs that may appear nearly indistinguishable from the originals to a human observer, yet still cause the model to make incorrect predictions.