foolbox.attacks

Gradient-based attacks

GradientAttack Perturbs the input with the gradient of the loss w.r.t. the input, gradually increasing the magnitude until the input is misclassified.
GradientSignAttack Adds the sign of the gradient to the input, gradually increasing the magnitude until the input is misclassified.
FGSM alias of foolbox.attacks.gradient.GradientSignAttack
LinfinityBasicIterativeAttack The Basic Iterative Method introduced in [1].
BasicIterativeMethod alias of foolbox.attacks.iterative_projected_gradient.LinfinityBasicIterativeAttack
BIM alias of foolbox.attacks.iterative_projected_gradient.LinfinityBasicIterativeAttack
L1BasicIterativeAttack Modified version of the Basic Iterative Method that minimizes the L1 distance.
L2BasicIterativeAttack Modified version of the Basic Iterative Method that minimizes the L2 distance.
ProjectedGradientDescentAttack The Projected Gradient Descent Attack introduced in [1] without random start.
ProjectedGradientDescent alias of foolbox.attacks.iterative_projected_gradient.ProjectedGradientDescentAttack
PGD alias of foolbox.attacks.iterative_projected_gradient.ProjectedGradientDescentAttack
RandomStartProjectedGradientDescentAttack The Projected Gradient Descent Attack introduced in [1] with random start.
RandomProjectedGradientDescent alias of foolbox.attacks.iterative_projected_gradient.RandomStartProjectedGradientDescentAttack
RandomPGD alias of foolbox.attacks.iterative_projected_gradient.RandomStartProjectedGradientDescentAttack
MomentumIterativeAttack The Momentum Iterative Method attack introduced in [1].
MomentumIterativeMethod alias of foolbox.attacks.iterative_projected_gradient.MomentumIterativeAttack
LBFGSAttack Uses L-BFGS-B to minimize the distance between the input and the adversarial as well as the cross-entropy between the predictions for the adversarial and the one-hot encoded target class.
DeepFoolAttack Simple and close to optimal gradient-based adversarial attack.
NewtonFoolAttack Implements the NewtonFool Attack.
DeepFoolL2Attack The L2 variant of the DeepFool attack.
DeepFoolLinfinityAttack The L-infinity variant of the DeepFool attack.
ADefAttack Adversarial attack that distorts the image, i.e., changes the locations of pixels rather than their values.
SLSQPAttack Uses SLSQP to minimize the distance between the input and the adversarial under the constraint that the input is adversarial.
SaliencyMapAttack Implements the Saliency Map Attack.
IterativeGradientAttack Like GradientAttack but with several steps for each epsilon.
IterativeGradientSignAttack Like GradientSignAttack but with several steps for each epsilon.
CarliniWagnerL2Attack The L2 version of the Carlini & Wagner attack.
EADAttack Gradient based attack which uses an elastic-net regularization [1].
DecoupledDirectionNormL2Attack The Decoupled Direction and Norm L2 adversarial attack from [1].
SparseFoolAttack A geometry-inspired and fast attack for computing sparse adversarial perturbations.
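Several of the gradient-based attacks above (e.g. GradientSignAttack / FGSM) share the same pattern: perturb along the sign of the loss gradient and grow the step size until the prediction flips. A minimal NumPy sketch of that idea is below; the toy linear model, `predict`, and `loss_gradient` are illustrative assumptions, not foolbox's API:

```python
import numpy as np

# Hypothetical toy model: logits = W @ x, predicted class = argmax.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def predict(x):
    return int(np.argmax(W @ x))

def loss_gradient(x, label):
    # Gradient of the cross-entropy loss w.r.t. the input for a linear model.
    logits = W @ x
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    one_hot = np.zeros_like(probs)
    one_hot[label] = 1.0
    return W.T @ (probs - one_hot)

def gradient_sign_attack(x, label, max_epsilon=1.0, steps=100):
    grad_sign = np.sign(loss_gradient(x, label))
    # Grow epsilon and return the first misclassified perturbation.
    for eps in np.linspace(0.0, max_epsilon, steps + 1)[1:]:
        candidate = x + eps * grad_sign
        if predict(candidate) != label:
            return candidate
    return None

x = np.array([0.6, 0.4])          # classified as 0 by the toy model
adv = gradient_sign_attack(x, label=0)
```

The iterative attacks (BIM, PGD) differ mainly in taking several small gradient steps, projecting back into an epsilon-ball after each step, rather than one scaled step.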

Score-based attacks

SinglePixelAttack Perturbs just a single pixel and sets it to the min or max.
LocalSearchAttack A black-box attack based on the idea of greedy local search.
ApproximateLBFGSAttack Same as LBFGSAttack with approximate_gradient set to True.
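Score-based attacks like SinglePixelAttack need only model outputs, not gradients. A minimal sketch of the single-pixel idea, with an assumed toy classifier (not foolbox's implementation):

```python
import numpy as np

def predict(img):
    # Hypothetical classifier: class 1 if mean brightness > 0.5, else 0.
    return int(img.mean() > 0.5)

def single_pixel_attack(img, label, bounds=(0.0, 1.0)):
    lo, hi = bounds
    for idx in np.ndindex(img.shape):      # scan candidate pixels
        for value in (lo, hi):             # try both extremes of the range
            candidate = img.copy()
            candidate[idx] = value
            if predict(candidate) != label:
                return candidate           # first misclassified candidate
    return None  # no single-pixel perturbation flipped the label

img = np.full((2, 2), 0.45)               # mean 0.45 -> class 0
adv = single_pixel_attack(img, label=0)
```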

Decision-based attacks

BoundaryAttack A powerful adversarial attack that requires neither gradients nor probabilities.
SpatialAttack Adversarially chosen rotations and translations [1].
PointwiseAttack Starts with an adversarial and performs a binary search between the adversarial and the original for each dimension of the input individually.
GaussianBlurAttack Blurs the input until it is misclassified.
ContrastReductionAttack Reduces the contrast of the input until it is misclassified.
AdditiveUniformNoiseAttack Adds uniform noise to the input, gradually increasing the standard deviation until the input is misclassified.
AdditiveGaussianNoiseAttack Adds Gaussian noise to the input, gradually increasing the standard deviation until the input is misclassified.
SaltAndPepperNoiseAttack Increases the amount of salt and pepper noise until the input is misclassified.
BlendedUniformNoiseAttack Blends the input with a uniform noise input until it is misclassified.
BoundaryAttackPlusPlus A powerful adversarial attack that requires neither gradients nor probabilities.
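The noise-based decision attacks above (e.g. AdditiveGaussianNoiseAttack) only need the model's final decision: draw noise at increasing strength until the label changes. A sketch under an assumed toy model (not foolbox's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(x):
    # Hypothetical classifier: predicted class = index of the largest component.
    return int(np.argmax(x))

def additive_gaussian_noise_attack(x, label, max_std=2.0, steps=50, trials=10):
    # Grow the noise standard deviation; several draws per noise level.
    for std in np.linspace(0.0, max_std, steps + 1)[1:]:
        for _ in range(trials):
            candidate = x + rng.normal(scale=std, size=x.shape)
            if predict(candidate) != label:
                return candidate
    return None

x = np.array([0.55, 0.45])                # classified as 0
adv = additive_gaussian_noise_attack(x, label=predict(x))
```

BoundaryAttack and PointwiseAttack refine this further: starting from an adversarial point, they walk back toward the original input while staying misclassified.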

Other attacks

BinarizationRefinementAttack For models that binarize their inputs during preprocessing, this attack can improve adversarials found by other attacks.
PrecomputedAdversarialsAttack Attacks a model using precomputed adversarial candidates.
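The lookup performed by a precomputed-adversarials attack can be sketched as follows; the function name and toy model are assumptions for illustration, not foolbox's API:

```python
import numpy as np

def predict(x):
    return int(np.argmax(x))

def precomputed_adversarials_attack(x, label, inputs, candidates):
    # Look up the candidate precomputed for this exact input and return it
    # only if the model actually misclassifies it.
    for original, candidate in zip(inputs, candidates):
        if np.array_equal(original, x):
            if predict(candidate) != label:
                return candidate
            return None                    # candidate is not adversarial
    return None                            # no precomputed candidate for x

inputs = [np.array([0.7, 0.3])]
candidates = [np.array([0.4, 0.6])]       # computed elsewhere, e.g. by PGD

x = inputs[0]
adv = precomputed_adversarials_attack(x, label=predict(x),
                                      inputs=inputs, candidates=candidates)
```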