foolbox.attacks
GradientSignAttack 
Adds the sign of the gradient to the image, gradually increasing the magnitude until the image is misclassified. 
IterativeGradientSignAttack 
Like GradientSignAttack but with several steps for each epsilon. 
GradientAttack 
Perturbs the image with the gradient of the loss w.r.t. the image, gradually increasing the magnitude until the image is misclassified. 
IterativeGradientAttack 
Like GradientAttack but with several steps for each epsilon. 
FGSM 
alias of GradientSignAttack 
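The gradient-sign idea can be sketched in a few lines. This is a minimal pure-Python illustration using a toy linear model (class 1 iff the weighted sum is positive), not foolbox's actual API; the names `fgsm_sketch`, `w`, and `epsilons` are assumptions for illustration. For a linear model the sign of the loss gradient w.r.t. the input is constant, so no autodiff is needed:

```python
def fgsm_sketch(x, label, w, epsilons=(0.01, 0.1, 0.5, 1.0)):
    """Toy gradient-sign attack on a linear model: class 1 iff sum(w_i*x_i) > 0.
    Adds eps * sign(gradient) to x, increasing eps until the prediction flips."""
    sign = lambda v: (v > 0) - (v < 0)
    predict = lambda z: int(sum(wi * zi for wi, zi in zip(w, z)) > 0)
    # Direction that pushes the score away from the true label.
    direction = [sign(wi) * (1 if label == 0 else -1) for wi in w]
    for eps in epsilons:
        candidate = [xi + eps * di for xi, di in zip(x, direction)]
        if predict(candidate) != label:
            return candidate, eps  # first epsilon that causes misclassification
    return None, None
```

The real attack works the same way, but the gradient is obtained from the model by backpropagation and the perturbed image is clipped to the valid pixel range.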
LBFGSAttack 
Uses L-BFGS-B to minimize the distance between the image and the adversarial as well as the cross-entropy between the predictions for the adversarial and the one-hot encoded target class. 
DeepFoolAttack 
Simple and close to optimal gradient-based adversarial attack. 
DeepFoolL2Attack 
Variant of DeepFoolAttack that minimizes the L2 distance. 
DeepFoolLinfinityAttack 
Variant of DeepFoolAttack that minimizes the L-infinity distance. 
SLSQPAttack 
Uses SLSQP to minimize the distance between the image and the adversarial under the constraint that the image is adversarial. 
SaliencyMapAttack 
Implements the Saliency Map Attack. 
SinglePixelAttack 
Perturbs just a single pixel and sets it to the min or max. 
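The single-pixel idea is easy to sketch: try setting each pixel to the minimum or maximum value and keep the first change that flips the prediction. This is a pure-Python illustration on a flat pixel list, not foolbox's API; `predict` stands in for any black-box model (an assumption here):

```python
def single_pixel_sketch(image, label, predict, min_val=0.0, max_val=1.0):
    """Try each pixel in turn, setting it to min_val or max_val,
    and return the first candidate the model misclassifies."""
    for i in range(len(image)):
        for value in (min_val, max_val):
            candidate = list(image)  # copy so the original stays intact
            candidate[i] = value
            if predict(candidate) != label:
                return candidate
    return None  # no single-pixel change fooled the model
```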
LocalSearchAttack 
A black-box attack based on the idea of greedy local search. 
ApproximateLBFGSAttack 
Same as LBFGSAttack with approximate_gradient set to True. 
BoundaryAttack 
A powerful adversarial attack that requires neither gradients nor probabilities. 
GaussianBlurAttack 
Blurs the image until it is misclassified. 
ContrastReductionAttack 
Reduces the contrast of the image until it is misclassified. 
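Contrast reduction can be sketched as blending the image toward its mean value in increasing amounts until the prediction flips. A minimal pure-Python illustration (the names `contrast_reduction_sketch` and `predict` are assumptions, not foolbox's API):

```python
def contrast_reduction_sketch(image, label, predict, steps=10):
    """Blend the image toward its mean pixel value, increasing the blend
    fraction alpha until the model misclassifies the result."""
    mean = sum(image) / len(image)
    for k in range(1, steps + 1):
        alpha = k / steps  # fraction of contrast removed
        candidate = [(1 - alpha) * p + alpha * mean for p in image]
        if predict(candidate) != label:
            return candidate, alpha  # smallest tested alpha that fools the model
    return None, None
```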
AdditiveUniformNoiseAttack 
Adds uniform noise to the image, gradually increasing the standard deviation until the image is misclassified. 
AdditiveGaussianNoiseAttack 
Adds Gaussian noise to the image, gradually increasing the standard deviation until the image is misclassified. 
SaltAndPepperNoiseAttack 
Increases the amount of salt and pepper noise until the image is misclassified. 
BlendedUniformNoiseAttack 
Blends the image with a uniform noise image until it is misclassified. 
PointwiseAttack 
Starts with an adversarial and performs a binary search between the adversarial and the original for each dimension of the input individually. 
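The per-dimension binary search described above can be sketched as follows. This is a pure-Python illustration, not foolbox's implementation; `predict` stands in for any black-box model and the coordinate order and step count are simplifications:

```python
def pointwise_sketch(original, adversarial, label, predict, steps=10):
    """For each input dimension, binary-search between the known-adversarial
    value and the original value, keeping the input adversarial while moving
    each coordinate as close to the original as possible."""
    x = list(adversarial)
    for i in range(len(x)):
        lo, hi = original[i], x[i]  # lo: clean value, hi: known-adversarial value
        for _ in range(steps):
            mid = (lo + hi) / 2.0
            x[i] = mid
            if predict(x) != label:
                hi = mid  # still adversarial: move closer to the original
            else:
                lo = mid  # no longer adversarial: back off
        x[i] = hi  # keep the closest value that stayed adversarial
    return x
```

Each coordinate ends at (approximately) the decision boundary, which tends to shrink the perturbation to only the dimensions that actually matter.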
PrecomputedImagesAttack 
Attacks a model using precomputed adversarial candidates. 