foolbox.attacks

Gradient-based attacks

GradientSignAttack Adds the sign of the gradient to the image, gradually increasing the magnitude until the image is misclassified.
IterativeGradientSignAttack Like GradientSignAttack but with several steps for each epsilon.
GradientAttack Perturbs the image with the gradient of the loss w.r.t. the image, gradually increasing the magnitude until the image is misclassified.
IterativeGradientAttack Like GradientAttack but with several steps for each epsilon.
FGSM alias of GradientSignAttack
LBFGSAttack Uses L-BFGS-B to minimize the distance between the image and the adversarial as well as the cross-entropy between the predictions for the adversarial and the one-hot encoded target class.
DeepFoolAttack Simple gradient-based attack that iteratively moves the image across the nearest decision boundary of the locally linearized model.
SLSQPAttack Uses SLSQP to minimize the distance between the image and the adversarial under the constraint that the image is adversarial.
SaliencyMapAttack Implements the Saliency Map Attack.
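
All of these attacks share the same calling convention: wrap a native model with one of the foolbox.models wrappers, construct the attack with the wrapped model (and, for targeted attacks, a criterion), and call it on an image together with its correct label. A minimal sketch, assuming the foolbox 1.x interface and a pretrained Keras ResNet50 as the model under attack (the preprocessing means and the target class 22 are just illustrative values)::

    import numpy as np
    import keras
    import foolbox
    from keras.applications.resnet50 import ResNet50

    keras.backend.set_learning_phase(0)
    kmodel = ResNet50(weights='imagenet')

    # wrap the native model; preprocessing subtracts per-channel means
    preprocessing = (np.array([104, 116, 123]), 1)
    fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 255),
                                       preprocessing=preprocessing)

    # example image and label bundled with foolbox
    image, label = foolbox.utils.imagenet_example()
    image = image[:, :, ::-1]  # this ResNet50 expects BGR channel order

    # FGSM is an alias of GradientSignAttack
    attack = foolbox.attacks.FGSM(fmodel)
    adversarial = attack(image, label)
    # `adversarial` is a numpy array, or None if no adversarial was found

    # targeted attacks such as LBFGSAttack additionally take a criterion
    target = foolbox.criteria.TargetClass(22)
    attack = foolbox.attacks.LBFGSAttack(fmodel, target)
    adversarial = attack(image, label)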

Score-based attacks

SinglePixelAttack Perturbs just a single pixel and sets it to the min or max.
LocalSearchAttack A black-box attack based on the idea of greedy local search.
ApproximateLBFGSAttack Same as LBFGSAttack with approximate_gradient set to True.
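
Score-based attacks only query the model's predicted scores, so the same calling convention works even when gradients are unavailable. A minimal sketch, reusing fmodel, image and label from the example above::

    # score-based attacks need predictions only, not gradients
    attack = foolbox.attacks.SinglePixelAttack(fmodel)
    adversarial = attack(image, label)

    # LocalSearchAttack uses the same interface
    attack = foolbox.attacks.LocalSearchAttack(fmodel)
    adversarial = attack(image, label)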

Decision-based attacks

BoundaryAttack A powerful adversarial attack that requires neither gradients nor probabilities.
GaussianBlurAttack Blurs the image until it is misclassified.
ContrastReductionAttack Reduces the contrast of the image until it is misclassified.
AdditiveUniformNoiseAttack Adds uniform noise to the image, gradually increasing the standard deviation until the image is misclassified.
AdditiveGaussianNoiseAttack Adds Gaussian noise to the image, gradually increasing the standard deviation until the image is misclassified.
BlendedUniformNoiseAttack Blends the image with a uniform noise image until it is misclassified.
SaltAndPepperNoiseAttack Increases the amount of salt and pepper noise until the image is misclassified.
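
Decision-based attacks only need the model's final decision (the predicted class), which makes them applicable to fully black-box models. A sketch in the same style, again reusing fmodel, image and label from the first example; the iterations keyword of BoundaryAttack is assumed from the foolbox 1.x interface and trades run time against perturbation size::

    # decision-based attacks only use the predicted class
    attack = foolbox.attacks.BoundaryAttack(fmodel)
    adversarial = attack(image, label, iterations=1000)

    # the simpler decision-based attacks follow the default calling convention
    attack = foolbox.attacks.SaltAndPepperNoiseAttack(fmodel)
    adversarial = attack(image, label)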

Other attacks

PrecomputedImagesAttack Attacks a model using precomputed adversarial candidates.
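
PrecomputedImagesAttack replays a fixed set of candidate adversarials instead of searching for one: it is constructed with the original images and their precomputed candidates and, when called, returns the candidate for the queried image if the model misclassifies it. A sketch under the assumption that the constructor takes the inputs and candidates as its first two arguments (foolbox 1.x style); the randomly perturbed candidates below are purely illustrative::

    import numpy as np

    # purely illustrative candidates: the original image plus small uniform noise
    inputs = image[np.newaxis]
    noise = np.random.uniform(-8, 8, size=inputs.shape).astype(inputs.dtype)
    candidates = np.clip(inputs + noise, 0, 255)

    attack = foolbox.attacks.PrecomputedImagesAttack(inputs, candidates, fmodel)
    adversarial = attack(image, label)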