Decision-based attacks

class foolbox.attacks.BoundaryAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

A powerful adversarial attack that requires neither gradients nor probabilities.

This is the reference implementation for the attack introduced in [1].

Notes

This implementation provides several advanced features:

  • ability to continue previous attacks by passing an instance of the Adversarial class
  • ability to pass an explicit starting point; especially to initialize a targeted attack
  • ability to pass an alternative attack used for initialization
  • fine-grained control over logging
  • ability to specify the batch size
  • optional automatic batch size tuning
  • optional multithreading for random number generation
  • optional multithreading for candidate point generation

References

[1] Wieland Brendel (*), Jonas Rauber (*), Matthias Bethge, “Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models”, https://arxiv.org/abs/1712.04248
__call__(self, input_or_adv, label=None, unpack=True, iterations=5000, max_directions=25, starting_point=None, initialization_attack=None, log_every_n_steps=1, spherical_step=0.01, source_step=0.01, step_adaptation=1.5, batch_size=1, tune_batch_size=True, threaded_rnd=True, threaded_gen=True, alternative_generator=False, internal_dtype=numpy.float64, verbose=False)[source]

Applies the Boundary Attack.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, correctly classified image. If input_or_adv is a numpy.ndarray, label must be passed as well. If it is an Adversarial instance, label must not be passed.

label : int

The reference label of the original image. Must be passed if input_or_adv is a numpy.ndarray, must not be passed if it is an Adversarial instance.

unpack : bool

If True, returns the adversarial image; otherwise returns the Adversarial object.

iterations : int

Maximum number of iterations to run. Might converge and stop before that.

max_directions : int

Maximum number of trials per iteration.

starting_point : numpy.ndarray

Adversarial input to use as a starting point, in particular for targeted attacks.

initialization_attack : Attack

Attack to use to find a starting point. Defaults to BlendedUniformNoiseAttack.

log_every_n_steps : int

Determines the verbosity of the logging.

spherical_step : float

Initial step size for the orthogonal (spherical) step.

source_step : float

Initial step size for the step towards the target.

step_adaptation : float

Factor by which the step sizes are multiplied or divided.

batch_size : int

Batch size, or the initial batch size if tune_batch_size is True.

tune_batch_size : bool

Whether or not the batch size should be automatically chosen between 1 and max_directions.

threaded_rnd : bool

Whether the random number generation should be multithreaded.

threaded_gen : bool

Whether the candidate point generation should be multithreaded.

alternative_generator : bool

Whether an alternative implementation of the candidate generator should be used.

internal_dtype : np.float32 or np.float64

Higher precision might be slower but is numerically more stable.

verbose : bool

Controls verbosity of the attack.
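
A minimal usage sketch for a targeted attack follows. It assumes fmodel is an already-created Foolbox model wrapper, image and label are a correctly classified example and its label, and target_image is an example already classified as the (purely illustrative) target class 22:

    import foolbox
    from foolbox.criteria import TargetClass

    # assumed to exist: fmodel (e.g. a foolbox.models.PyTorchModel),
    # image (numpy.ndarray), label (int), target_image (numpy.ndarray)
    attack = foolbox.attacks.BoundaryAttack(fmodel, criterion=TargetClass(22))

    # start from an input of the target class and walk along the decision
    # boundary towards the original image
    adversarial = attack(image, label=label,
                         iterations=5000, max_directions=25,
                         starting_point=target_image,
                         log_every_n_steps=100)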

class foolbox.attacks.SpatialAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Adversarially chosen rotations and translations [1].

This implementation is based on the reference implementation by Madry et al.: https://github.com/MadryLab/adversarial_spatial

References

[1] Logan Engstrom*, Brandon Tran*, Dimitris Tsipras*, Ludwig Schmidt, Aleksander Mądry, “A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations”, http://arxiv.org/abs/1712.02779
__call__(self, input_or_adv, label=None, unpack=True, do_rotations=True, do_translations=True, x_shift_limits=(-5, 5), y_shift_limits=(-5, 5), angular_limits=(-5, 5), granularity=10, random_sampling=False, abort_early=True)[source]

Adversarially chosen rotations and translations.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray, must not be passed if it is an Adversarial instance.

unpack : bool

If True, returns the adversarial input; otherwise returns the Adversarial object.

do_rotations : bool

If False, no rotations will be applied to the image.

do_translations : bool

If False, no translations will be applied to the image.

x_shift_limits : int or (int, int)

Limits for horizontal translations in pixels. If one integer is provided the limits will be (-x_shift_limits, x_shift_limits).

y_shift_limits : int or (int, int)

Limits for vertical translations in pixels. If one integer is provided the limits will be (-y_shift_limits, y_shift_limits).

angular_limits : int or (int, int)

Limits for rotations in degrees. If one integer is provided the limits will be (-angular_limits, angular_limits).

granularity : int

Density of sampling within limits for each dimension.

random_sampling : bool

If True we sample translations/rotations randomly within limits, otherwise we use a regular grid.

abort_early : bool

If True, the attack stops as soon as it finds an adversarial.
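
A minimal sketch under the same assumptions as above (fmodel, image and label assumed to exist):

    import foolbox

    # assumed to exist: fmodel, image, label (see the BoundaryAttack sketch)
    attack = foolbox.attacks.SpatialAttack(fmodel)

    # exhaustive grid search over rotations of up to 5 degrees and
    # translations of up to 5 pixels in each direction
    adversarial = attack(image, label=label,
                         do_rotations=True, do_translations=True,
                         x_shift_limits=5, y_shift_limits=5,
                         angular_limits=5, granularity=10,
                         random_sampling=False, abort_early=True)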

class foolbox.attacks.PointwiseAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Starts with an adversarial and performs a binary search between the adversarial and the original for each dimension of the input individually.

__call__(self, input_or_adv, label=None, unpack=True, starting_point=None, initialization_attack=None)[source]

Starts with an adversarial and performs a binary search between the adversarial and the original for each dimension of the input individually.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray, must not be passed if it is an Adversarial instance.

unpack : bool

If True, returns the adversarial input; otherwise returns the Adversarial object.

starting_point : numpy.ndarray

Adversarial input to use as a starting point, in particular for targeted attacks.

initialization_attack : Attack

Attack to use to find a starting point. Defaults to SaltAndPepperNoiseAttack.
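
A minimal sketch (fmodel, image and label assumed to exist as in the sketches above):

    import foolbox

    # assumed to exist: fmodel, image, label
    attack = foolbox.attacks.PointwiseAttack(fmodel)

    # without an explicit starting_point, the attack initializes itself
    # with SaltAndPepperNoiseAttack and then minimizes the perturbation
    # dimension by dimension
    adversarial = attack(image, label=label)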

class foolbox.attacks.GaussianBlurAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Blurs the image until it is misclassified.

__call__(self, input_or_adv, label=None, unpack=True, epsilons=1000)[source]

Blurs the image until it is misclassified.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray, must not be passed if it is an Adversarial instance.

unpack : bool

If True, returns the adversarial input; otherwise returns the Adversarial object.

epsilons : int or Iterable[float]

Either Iterable of standard deviations of the Gaussian blur or number of standard deviations between 0 and 1 that should be tried.
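
A minimal sketch showing both forms of the epsilons argument (fmodel, image and label assumed to exist):

    import foolbox

    # assumed to exist: fmodel, image, label
    attack = foolbox.attacks.GaussianBlurAttack(fmodel)

    # try 1000 evenly spaced blur levels between 0 and 1 ...
    adversarial = attack(image, label=label, epsilons=1000)

    # ... or pass explicit standard deviations instead
    adversarial = attack(image, label=label, epsilons=[0.05, 0.1, 0.5, 1.0])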

class foolbox.attacks.ContrastReductionAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Reduces the contrast of the image until it is misclassified.

__call__(self, input_or_adv, label=None, unpack=True, epsilons=1000)[source]

Reduces the contrast of the image until it is misclassified.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray, must not be passed if it is an Adversarial instance.

unpack : bool

If True, returns the adversarial input; otherwise returns the Adversarial object.

epsilons : int or Iterable[float]

Either Iterable of contrast levels or number of contrast levels between 1 and 0 that should be tried. Epsilons are one minus the contrast level.
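
A minimal sketch (fmodel, image and label assumed to exist):

    import foolbox

    # assumed to exist: fmodel, image, label
    attack = foolbox.attacks.ContrastReductionAttack(fmodel)

    # epsilon is one minus the contrast level, so epsilons=1000 sweeps
    # 1000 contrast levels from full contrast towards none
    adversarial = attack(image, label=label, epsilons=1000)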

class foolbox.attacks.AdditiveUniformNoiseAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Adds uniform noise to the image, gradually increasing the standard deviation until the image is misclassified.

__call__(self, input_or_adv, label=None, unpack=True, epsilons=1000)[source]

Adds uniform noise to the image, gradually increasing the standard deviation until the image is misclassified.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray, must not be passed if it is an Adversarial instance.

unpack : bool

If True, returns the adversarial input; otherwise returns the Adversarial object.

epsilons : int or Iterable[float]

Either Iterable of noise levels or number of noise levels between 0 and 1 that should be tried.
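
A minimal sketch, also showing unpack=False (fmodel, image and label assumed to exist):

    import foolbox

    # assumed to exist: fmodel, image, label
    attack = foolbox.attacks.AdditiveUniformNoiseAttack(fmodel)

    # unpack=False returns the Adversarial object instead of the raw array,
    # which exposes e.g. the distance of the perturbation
    adv = attack(image, label=label, epsilons=1000, unpack=False)
    print(adv.distance)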

name(self)[source]

Returns a human-readable name that uniquely identifies the attack with its hyperparameters.

Returns:
str

Human-readable name that uniquely identifies the attack with its hyperparameters.

Notes

Defaults to the class name but subclasses can provide more descriptive names and must take hyperparameters into account.

class foolbox.attacks.AdditiveGaussianNoiseAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Adds Gaussian noise to the image, gradually increasing the standard deviation until the image is misclassified.

__call__(self, input_or_adv, label=None, unpack=True, epsilons=1000)[source]

Adds Gaussian noise to the image, gradually increasing the standard deviation until the image is misclassified.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray, must not be passed if it is an Adversarial instance.

unpack : bool

If True, returns the adversarial input; otherwise returns the Adversarial object.

epsilons : int or Iterable[float]

Either Iterable of noise levels or number of noise levels between 0 and 1 that should be tried.
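
A minimal sketch (fmodel, image and label assumed to exist):

    import foolbox

    # assumed to exist: fmodel, image, label
    attack = foolbox.attacks.AdditiveGaussianNoiseAttack(fmodel)

    # try 1000 noise levels between 0 and 1
    adversarial = attack(image, label=label, epsilons=1000)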

class foolbox.attacks.SaltAndPepperNoiseAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Increases the amount of salt and pepper noise until the image is misclassified.

__call__(self, input_or_adv, label=None, unpack=True, epsilons=100, repetitions=10)[source]

Increases the amount of salt and pepper noise until the image is misclassified.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray, must not be passed if it is an Adversarial instance.

unpack : bool

If True, returns the adversarial input; otherwise returns the Adversarial object.

epsilons : int

Number of steps to try between probability 0 and 1.

repetitions : int

Specifies how often the attack will be repeated.
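
A minimal sketch (fmodel, image and label assumed to exist):

    import foolbox

    # assumed to exist: fmodel, image, label
    attack = foolbox.attacks.SaltAndPepperNoiseAttack(fmodel)

    # try 100 noise probabilities between 0 and 1; because the noise is
    # random, the sweep is repeated 10 times
    adversarial = attack(image, label=label, epsilons=100, repetitions=10)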

class foolbox.attacks.BlendedUniformNoiseAttack(model=None, criterion=<foolbox.criteria.Misclassification object>, distance=<class 'foolbox.distances.MeanSquaredDistance'>, threshold=None)[source]

Blends the image with a uniform noise image until it is misclassified.

__call__(self, input_or_adv, label=None, unpack=True, epsilons=1000, max_directions=1000)[source]

Blends the image with a uniform noise image until it is misclassified.

Parameters:
input_or_adv : numpy.ndarray or Adversarial

The original, unperturbed input as a numpy.ndarray or an Adversarial instance.

label : int

The reference label of the original input. Must be passed if input_or_adv is a numpy.ndarray, must not be passed if it is an Adversarial instance.

unpack : bool

If True, returns the adversarial input; otherwise returns the Adversarial object.

epsilons : int or Iterable[float]

Either Iterable of blending steps or number of blending steps between 0 and 1 that should be tried.

max_directions : int

Maximum number of random images to try.
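
A minimal sketch (fmodel, image and label assumed to exist):

    import foolbox

    # assumed to exist: fmodel, image, label
    attack = foolbox.attacks.BlendedUniformNoiseAttack(fmodel)

    # up to 1000 random noise images, each blended with the original
    # at up to 1000 blending factors between 0 and 1
    adversarial = attack(image, label=label, epsilons=1000, max_directions=1000)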