Gradient-based attacks

class foolbox.attacks.GradientAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Perturbs the image with the gradient of the loss w.r.t. the image, gradually increasing the magnitude until the image is misclassified.

Does not do anything if the model does not have a gradient.

__call__(self, input_or_adv, label=None, unpack=True, epsilons=1000, max_epsilon=1)[source]

Perturbs the image with the gradient of the loss w.r.t. the image, gradually increasing the magnitude until the image is misclassified.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

epsilons : int or Iterable[float]

Either Iterable of step sizes in the gradient direction or number of step sizes between 0 and max_epsilon that should be tried.

max_epsilon : float

Largest step size if epsilons is not an iterable.
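
Example (an illustrative sketch, not part of the original reference; it assumes a pretrained Keras ResNet50 wrapped for Foolbox as in the Foolbox README, and the epsilons value below is arbitrary):

>>> import keras
>>> import numpy as np
>>> import foolbox
>>> keras.backend.set_learning_phase(0)  # inference mode, as in the Foolbox README
>>> preprocessing = (np.array([104, 116, 123]), 1)
>>> kmodel = keras.applications.resnet50.ResNet50(weights='imagenet')
>>> fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 255), preprocessing=preprocessing)
>>> image, label = foolbox.utils.imagenet_example()  # example image in [0, 255], RGB
>>> image = image[:, :, ::-1]  # reorder channels to BGR to match the mean above
>>> attack = foolbox.attacks.GradientAttack(fmodel)
>>> adversarial = attack(image, label, epsilons=100, max_epsilon=1)
>>> # `adversarial` is a numpy.ndarray, or None if no adversarial was found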

class foolbox.attacks.GradientSignAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Adds the sign of the gradient to the image, gradually increasing the magnitude until the image is misclassified. This attack is often referred to as Fast Gradient Sign Method and was introduced in [R20d0064ee4c9-1].

Does not do anything if the model does not have a gradient.

References

[R20d0064ee4c9-1]Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy, “Explaining and Harnessing Adversarial Examples”, https://arxiv.org/abs/1412.6572
__call__(self, input_or_adv, label=None, unpack=True, epsilons=1000, max_epsilon=1)[source]

Adds the sign of the gradient to the image, gradually increasing the magnitude until the image is misclassified.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

epsilons : int or Iterable[float]

Either Iterable of step sizes in the direction of the sign of the gradient or number of step sizes between 0 and max_epsilon that should be tried.

max_epsilon : float

Largest step size if epsilons is not an iterable.

foolbox.attacks.FGSM[source]

alias of foolbox.attacks.gradient.GradientSignAttack
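
Since FGSM is an alias for GradientSignAttack, both names can be used interchangeably. An illustrative sketch, reusing the fmodel, image and label constructed in the GradientAttack example above and passing the documented default hyperparameters explicitly:

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> attack = foolbox.attacks.FGSM(fmodel)
>>> adversarial = attack(image, label, epsilons=1000, max_epsilon=1)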

class foolbox.attacks.LinfinityBasicIterativeAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

The Basic Iterative Method introduced in [R37dbc8f24aee-1].

This attack is also known as Projected Gradient Descent (PGD) without random start or FGSM^k.

References

[R37dbc8f24aee-1]Alexey Kurakin, Ian Goodfellow, Samy Bengio, “Adversarial examples in the physical world”, https://arxiv.org/abs/1607.02533
__call__(self, input_or_adv, label=None, unpack=True, binary_search=True, epsilon=0.3, stepsize=0.05, iterations=10, random_start=False, return_early=True)[source]

Simple iterative gradient-based attack known as Basic Iterative Method, Projected Gradient Descent or FGSM^k.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

binary_search : bool or int

Whether to perform a binary search over epsilon and stepsize, keeping their ratio constant and using their values to start the search. If False, hyperparameters are not optimized. Can also be an integer, specifying the number of binary search steps (default 20).

epsilon : float

Limit on the perturbation size; if binary_search is True, this value is only for initialization and automatically adapted.

stepsize : float

Step size for gradient descent; if binary_search is True, this value is only for initialization and automatically adapted.

iterations : int

Number of iterations for each gradient descent run.

random_start : bool

Start the attack from a random point rather than from the original input.

return_early : bool

Whether an individual gradient descent run should stop as soon as an adversarial is found.

foolbox.attacks.BasicIterativeMethod[source]

alias of foolbox.attacks.iterative_projected_gradient.LinfinityBasicIterativeAttack

foolbox.attacks.BIM[source]

alias of foolbox.attacks.iterative_projected_gradient.LinfinityBasicIterativeAttack
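
A sketch of the L-infinity Basic Iterative Method via its BIM alias (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above, pairs the attack with the Linfinity distance, and passes the documented default hyperparameters explicitly):

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> attack = foolbox.attacks.BIM(fmodel, distance=foolbox.distances.Linfinity)
>>> adversarial = attack(image, label, binary_search=True,
...                      epsilon=0.3, stepsize=0.05, iterations=10)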

class foolbox.attacks.L1BasicIterativeAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Modified version of the Basic Iterative Method that minimizes the L1 distance.

__call__(self, input_or_adv, label=None, unpack=True, binary_search=True, epsilon=0.3, stepsize=0.05, iterations=10, random_start=False, return_early=True)[source]

Simple iterative gradient-based attack known as Basic Iterative Method, Projected Gradient Descent or FGSM^k.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

binary_search : bool or int

Whether to perform a binary search over epsilon and stepsize, keeping their ratio constant and using their values to start the search. If False, hyperparameters are not optimized. Can also be an integer, specifying the number of binary search steps (default 20).

epsilon : float

Limit on the perturbation size; if binary_search is True, this value is only for initialization and automatically adapted.

stepsize : float

Step size for gradient descent; if binary_search is True, this value is only for initialization and automatically adapted.

iterations : int

Number of iterations for each gradient descent run.

random_start : bool

Start the attack from a random point rather than from the original input.

return_early : bool

Whether an individual gradient descent run should stop as soon as an adversarial is found.

class foolbox.attacks.L2BasicIterativeAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Modified version of the Basic Iterative Method that minimizes the L2 distance.

__call__(self, input_or_adv, label=None, unpack=True, binary_search=True, epsilon=0.3, stepsize=0.05, iterations=10, random_start=False, return_early=True)[source]

Simple iterative gradient-based attack known as Basic Iterative Method, Projected Gradient Descent or FGSM^k.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

binary_search : bool or int

Whether to perform a binary search over epsilon and stepsize, keeping their ratio constant and using their values to start the search. If False, hyperparameters are not optimized. Can also be an integer, specifying the number of binary search steps (default 20).

epsilon : float

Limit on the perturbation size; if binary_search is True, this value is only for initialization and automatically adapted.

stepsize : float

Step size for gradient descent; if binary_search is True, this value is only for initialization and automatically adapted.

iterations : int

Number of iterations for each gradient descent run.

random_start : bool

Start the attack from a random point rather than from the original input.

return_early : bool

Whether an individual gradient descent run should stop as soon as an adversarial is found.

class foolbox.attacks.ProjectedGradientDescentAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

The Projected Gradient Descent Attack introduced in [R367e8e10528a-1] without random start.

When used without a random start, this attack is also known as Basic Iterative Method (BIM) or FGSM^k.

References

[R367e8e10528a-1]Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu, “Towards Deep Learning Models Resistant to Adversarial Attacks”, https://arxiv.org/abs/1706.06083
__call__(self, input_or_adv, label=None, unpack=True, binary_search=True, epsilon=0.3, stepsize=0.01, iterations=40, random_start=False, return_early=True)[source]

Simple iterative gradient-based attack known as Basic Iterative Method, Projected Gradient Descent or FGSM^k.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

binary_search : bool or int

Whether to perform a binary search over epsilon and stepsize, keeping their ratio constant and using their values to start the search. If False, hyperparameters are not optimized. Can also be an integer, specifying the number of binary search steps (default 20).

epsilon : float

Limit on the perturbation size; if binary_search is True, this value is only for initialization and automatically adapted.

stepsize : float

Step size for gradient descent; if binary_search is True, this value is only for initialization and automatically adapted.

iterations : int

Number of iterations for each gradient descent run.

random_start : bool

Start the attack from a random point rather than from the original input.

return_early : bool

Whether an individual gradient descent run should stop as soon as an adversarial is found.

foolbox.attacks.ProjectedGradientDescent[source]

alias of foolbox.attacks.iterative_projected_gradient.ProjectedGradientDescentAttack
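
A sketch using the ProjectedGradientDescent alias (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above; the hyperparameters are the documented defaults):

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> attack = foolbox.attacks.ProjectedGradientDescent(fmodel,
...                                                   distance=foolbox.distances.Linfinity)
>>> adversarial = attack(image, label, epsilon=0.3, stepsize=0.01, iterations=40)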

class foolbox.attacks.RandomStartProjectedGradientDescentAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

The Projected Gradient Descent Attack introduced in [Re6066bc39e14-1] with random start.

References

[Re6066bc39e14-1]Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu, “Towards Deep Learning Models Resistant to Adversarial Attacks”, https://arxiv.org/abs/1706.06083
__call__(self, input_or_adv, label=None, unpack=True, binary_search=True, epsilon=0.3, stepsize=0.01, iterations=40, random_start=True, return_early=True)[source]

Simple iterative gradient-based attack known as Basic Iterative Method, Projected Gradient Descent or FGSM^k.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

binary_search : bool or int

Whether to perform a binary search over epsilon and stepsize, keeping their ratio constant and using their values to start the search. If False, hyperparameters are not optimized. Can also be an integer, specifying the number of binary search steps (default 20).

epsilon : float

Limit on the perturbation size; if binary_search is True, this value is only for initialization and automatically adapted.

stepsize : float

Step size for gradient descent; if binary_search is True, this value is only for initialization and automatically adapted.

iterations : int

Number of iterations for each gradient descent run.

random_start : bool

Start the attack from a random point rather than from the original input.

return_early : bool

Whether an individual gradient descent run should stop as soon as an adversarial is found.

foolbox.attacks.RandomProjectedGradientDescent[source]

alias of foolbox.attacks.iterative_projected_gradient.RandomStartProjectedGradientDescentAttack

foolbox.attacks.RandomPGD[source]

alias of foolbox.attacks.iterative_projected_gradient.RandomStartProjectedGradientDescentAttack
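
A sketch using the RandomPGD alias (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above; note that random_start defaults to True for this attack):

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> attack = foolbox.attacks.RandomPGD(fmodel, distance=foolbox.distances.Linfinity)
>>> adversarial = attack(image, label, epsilon=0.3, stepsize=0.01, iterations=40)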

class foolbox.attacks.MomentumIterativeAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

The Momentum Iterative Method attack introduced in [R86d363e1fb2f-1]. It’s like the Basic Iterative Method or Projected Gradient Descent except that it uses momentum.

References

[R86d363e1fb2f-1]Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li, “Boosting Adversarial Attacks with Momentum”, https://arxiv.org/abs/1710.06081
__call__(self, input_or_adv, label=None, unpack=True, binary_search=True, epsilon=0.3, stepsize=0.06, iterations=10, decay_factor=1.0, random_start=False, return_early=True)[source]

Momentum-based iterative gradient attack known as Momentum Iterative Method.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

binary_search : bool or int

Whether to perform a binary search over epsilon and stepsize, keeping their ratio constant and using their values to start the search. If False, hyperparameters are not optimized. Can also be an integer, specifying the number of binary search steps (default 20).

epsilon : float

Limit on the perturbation size; if binary_search is True, this value is only for initialization and automatically adapted.

stepsize : float

Step size for gradient descent; if binary_search is True, this value is only for initialization and automatically adapted.

iterations : int

Number of iterations for each gradient descent run.

decay_factor : float

Decay factor used by the momentum term.

random_start : bool

Start the attack from a random point rather than from the original input.

return_early : bool

Whether an individual gradient descent run should stop as soon as an adversarial is found.

foolbox.attacks.MomentumIterativeMethod[source]

alias of foolbox.attacks.iterative_projected_gradient.MomentumIterativeAttack
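
A sketch using the MomentumIterativeMethod alias (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above; the hyperparameters are the documented defaults):

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> attack = foolbox.attacks.MomentumIterativeMethod(fmodel,
...                                                  distance=foolbox.distances.Linfinity)
>>> adversarial = attack(image, label, epsilon=0.3, stepsize=0.06,
...                      iterations=10, decay_factor=1.0)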

class foolbox.attacks.LBFGSAttack(*args, **kwargs)[source]

Uses L-BFGS-B to minimize the distance between the image and the adversarial as well as the cross-entropy between the predictions for the adversarial and the one-hot encoded target class.

If the criterion does not have a target class, a random class is chosen from the set of all classes except the original one.

Notes

This implementation generalizes algorithm 1 in [Rf3ff9c7ff5d3-1] to support other targeted criteria and other distance measures.

References

[Rf3ff9c7ff5d3-1]https://arxiv.org/abs/1510.05328
__call__(self, input_or_adv, label=None, unpack=True, epsilon=1e-05, num_random_targets=0, maxiter=150)[source]

Uses L-BFGS-B to minimize the distance between the image and the adversarial as well as the cross-entropy between the predictions for the adversarial and the one-hot encoded target class.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

epsilon : float

Epsilon of the binary search.

num_random_targets : int

Number of random target classes if no target class is given by the criterion.

maxiter : int

Maximum number of iterations for L-BFGS-B.

__init__(self, *args, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

name(self)[source]

Returns a human readable name that uniquely identifies the attack with its hyperparameters.

Returns:
str

Human readable name that uniquely identifies the attack with its hyperparameters.

Notes

Defaults to the class name but subclasses can provide more descriptive names and must take hyperparameters into account.
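
A sketch of a targeted L-BFGS-B attack (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above; the target class 22 is arbitrary, and the hyperparameters are the documented defaults):

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> criterion = foolbox.criteria.TargetClass(22)  # arbitrary target class
>>> attack = foolbox.attacks.LBFGSAttack(fmodel, criterion=criterion)
>>> adversarial = attack(image, label, epsilon=1e-5, maxiter=150)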

class foolbox.attacks.DeepFoolAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Simple and close to optimal gradient-based adversarial attack.

Implements DeepFool, introduced in [Rb4dd02640756-1].

References

[Rb4dd02640756-1]Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Pascal Frossard, “DeepFool: a simple and accurate method to fool deep neural networks”, https://arxiv.org/abs/1511.04599
__call__(self, input_or_adv, label=None, unpack=True, steps=100, subsample=10, p=None)[source]

Simple and close to optimal gradient-based adversarial attack.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

steps : int

Maximum number of steps to perform.

subsample : int

Limit on the number of the most likely classes that should be considered. A small value is usually sufficient and much faster.

p : int or float

Lp-norm that should be minimized; must be 2 or np.inf.
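
A sketch (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above; p=2 selects the L2 variant of the update):

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> attack = foolbox.attacks.DeepFoolAttack(fmodel)
>>> adversarial = attack(image, label, steps=100, subsample=10, p=2)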

class foolbox.attacks.NewtonFoolAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Implements the NewtonFool Attack.

The attack was introduced in [R6a972939b320-1].

References

[R6a972939b320-1]Uyeong Jang et al., “Objective Metrics and Gradient Descent Algorithms for Adversarial Examples in Machine Learning”, https://dl.acm.org/citation.cfm?id=3134635
__call__(self, input_or_adv, label=None, unpack=True, max_iter=100, eta=0.01)[source]
Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

max_iter : int

The maximum number of iterations.

eta : float

The eta coefficient.
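
A sketch (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above; the hyperparameters are the documented defaults):

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> attack = foolbox.attacks.NewtonFoolAttack(fmodel)
>>> adversarial = attack(image, label, max_iter=100, eta=0.01)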

class foolbox.attacks.DeepFoolL2Attack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

DeepFool variant that minimizes the L2 distance (p is fixed to 2).

__call__(self, input_or_adv, label=None, unpack=True, steps=100, subsample=10)[source]

Simple and close to optimal gradient-based adversarial attack.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

steps : int

Maximum number of steps to perform.

subsample : int

Limit on the number of the most likely classes that should be considered. A small value is usually sufficient and much faster.

p : int or float

Lp-norm that should be minimized. For this variant, p is fixed to 2 and cannot be passed to __call__.

class foolbox.attacks.DeepFoolLinfinityAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

DeepFool variant that minimizes the L-infinity distance (p is fixed to numpy.inf).

__call__(self, input_or_adv, label=None, unpack=True, steps=100, subsample=10)[source]

Simple and close to optimal gradient-based adversarial attack.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

steps : int

Maximum number of steps to perform.

subsample : int

Limit on the number of the most likely classes that should be considered. A small value is usually sufficient and much faster.

p : int or float

Lp-norm that should be minimized. For this variant, p is fixed to numpy.inf and cannot be passed to __call__.

class foolbox.attacks.ADefAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Adversarial attack that distorts the image, i.e. changes the locations of pixels. The algorithm is described in [Rf241e6d2664d-1]; a repository with the original code can be found at [Rf241e6d2664d-2].

References

[Rf241e6d2664d-1]Rima Alaifari, Giovanni S. Alberti, Tandri Gauksson, “ADef: an Iterative Algorithm to Construct Adversarial Deformations”, https://arxiv.org/abs/1804.07729
__call__(self, input_or_adv, unpack=True, max_iter=100, max_norm=numpy.inf, label=None, smooth=1.0, subsample=10)[source]
Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

max_iter : int > 0

Maximum number of iterations (default max_iter = 100).

max_norm : float

Maximum l2 norm of vector field (default max_norm = numpy.inf).

smooth : float >= 0

Width of the Gaussian kernel used for smoothing (default smooth = 1.0; set smooth = 0 for no smoothing).

subsample : int >= 2

Limit on the number of the most likely classes that should be considered. A small value is usually sufficient and much faster. (default subsample = 10)
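
A sketch (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above). Note that label is passed as a keyword argument because it comes after unpack and max_norm in this signature:

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> attack = foolbox.attacks.ADefAttack(fmodel)
>>> adversarial = attack(image, label=label, max_iter=100, smooth=1.0, subsample=10)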

class foolbox.attacks.SLSQPAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Uses SLSQP to minimize the distance between the image and the adversarial under the constraint that the image is adversarial.

__call__(self, input_or_adv, label=None, unpack=True)[source]

Uses SLSQP to minimize the distance between the image and the adversarial under the constraint that the image is adversarial.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, correctly classified image. If image is a numpy array, label must be passed as well. If image is an Adversarial instance, label must not be passed.

label : int

The reference label of the original image. Must be passed if image is a numpy array, must not be passed if image is an Adversarial instance.

unpack : bool

If true, returns the adversarial image, otherwise returns the Adversarial object.
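
A sketch (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above; the default Misclassification criterion is used):

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> attack = foolbox.attacks.SLSQPAttack(fmodel)
>>> adversarial = attack(image, label)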

class foolbox.attacks.SaliencyMapAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Implements the Saliency Map Attack.

The attack was introduced in [R08e06ca693ba-1].

References

[R08e06ca693ba-1]Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, Ananthram Swami, “The Limitations of Deep Learning in Adversarial Settings”, https://arxiv.org/abs/1511.07528
__call__(self, input_or_adv, label=None, unpack=True, max_iter=2000, num_random_targets=0, fast=True, theta=0.1, max_perturbations_per_pixel=7)[source]

Implements the Saliency Map Attack.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

max_iter : int

The maximum number of iterations to run.

num_random_targets : int

Number of random target classes if no target class is given by the criterion.

fast : bool

Whether to use the fast saliency map calculation.

theta : float

Perturbation per pixel, relative to the [min, max] input range.

max_perturbations_per_pixel : int

Maximum number of times a pixel can be modified.
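
A sketch of a targeted saliency map attack (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above; the target class 22 is arbitrary, and the remaining hyperparameters are the documented defaults):

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> criterion = foolbox.criteria.TargetClass(22)  # arbitrary target class
>>> attack = foolbox.attacks.SaliencyMapAttack(fmodel, criterion=criterion)
>>> adversarial = attack(image, label, max_iter=2000, theta=0.1,
...                      max_perturbations_per_pixel=7)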

class foolbox.attacks.IterativeGradientAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Like GradientAttack but with several steps for each epsilon.

__call__(self, input_or_adv, label=None, unpack=True, epsilons=100, max_epsilon=1, steps=10)[source]

Like GradientAttack but with several steps for each epsilon.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

epsilons : int or Iterable[float]

Either Iterable of step sizes in the gradient direction or number of step sizes between 0 and max_epsilon that should be tried.

max_epsilon : float

Largest step size if epsilons is not an iterable.

steps : int

Number of iterations to run.

class foolbox.attacks.IterativeGradientSignAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Like GradientSignAttack but with several steps for each epsilon.

__call__(self, input_or_adv, label=None, unpack=True, epsilons=100, max_epsilon=1, steps=10)[source]

Like GradientSignAttack but with several steps for each epsilon.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

epsilons : int or Iterable[float]

Either Iterable of step sizes in the direction of the sign of the gradient or number of step sizes between 0 and max_epsilon that should be tried.

max_epsilon : float

Largest step size if epsilons is not an iterable.

steps : int

Number of iterations to run.

class foolbox.attacks.CarliniWagnerL2Attack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

The L2 version of the Carlini & Wagner attack.

This attack is described in [Rc2cb572b91c5-1]. This implementation is based on the reference implementation by Carlini [Rc2cb572b91c5-2]. For bounds ≠ (0, 1), it differs from [Rc2cb572b91c5-2] because we normalize the squared L2 loss with the bounds.

References

[Rc2cb572b91c5-1]Nicholas Carlini, David Wagner: “Towards Evaluating the Robustness of Neural Networks”, https://arxiv.org/abs/1608.04644
[Rc2cb572b91c5-2]https://github.com/carlini/nn_robust_attacks
__call__(self, input_or_adv, label=None, unpack=True, binary_search_steps=5, max_iterations=1000, confidence=0, learning_rate=0.005, initial_const=0.01, abort_early=True)[source]

The L2 version of the Carlini & Wagner attack.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray and must not be passed if it is an Adversarial instance.

unpack : bool

If true, returns the adversarial input, otherwise returns the Adversarial object.

binary_search_steps : int

The number of steps for the binary search used to find the optimal tradeoff-constant between distance and confidence.

max_iterations : int

The maximum number of iterations. Larger values are more accurate; setting it too small will require a large learning rate and will produce poor results.

confidence : int or float

Confidence of adversarial examples: a higher value produces adversarials that are further away, but more strongly classified as adversarial.

learning_rate : float

The learning rate for the attack algorithm. Smaller values produce better results but take longer to converge.

initial_const : float

The initial tradeoff-constant to use to tune the relative importance of distance and confidence. If binary_search_steps is large, the initial constant is not important.

abort_early : bool

If True, the Adam optimization is aborted early if the loss hasn’t decreased for some time (a tenth of max_iterations).
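
A sketch (illustrative only; it reuses the fmodel, image and label from the GradientAttack example above; the hyperparameters shown are the documented defaults):

>>> # fmodel, image, label: as constructed in the GradientAttack example above
>>> attack = foolbox.attacks.CarliniWagnerL2Attack(fmodel)
>>> adversarial = attack(image, label, binary_search_steps=5, max_iterations=1000,
...                      confidence=0, learning_rate=0.005, initial_const=0.01)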

static best_other_class(logits, exclude)[source]

Returns the index of the largest logit, ignoring the class that is passed as exclude.

classmethod loss_function(const, a, x, logits, reconstructed_original, confidence, min_, max_)[source]

Returns the loss and the gradient of the loss w.r.t. x, assuming that logits = model(x).