Development

To install Foolbox in editable mode, see the installation instructions under Contributing to Foolbox.

Running Tests

pytest

To run the tests, you need to have pytest and pytest-cov installed. Afterwards, you can simply run pytest in the root folder of the project. Some tests require TensorFlow, PyTorch, and the other frameworks, so to run all tests, you need to have all of them installed. Note, however, that this can take quite a long time (Foolbox has many tests), and installing all frameworks with the correct versions is difficult due to conflicting dependencies. Alternatively, you can open a pull request and we will run all the tests using Travis CI.

Style Guide

We use Black to format all code in a consistent way that conforms to PEP 8. All pull requests are checked using both black and flake8. Simply install black and run black . after making your changes, or, ideally, run it on each commit using pre-commit.

New Adversarial Attacks

Foolbox makes it easy to develop new adversarial attacks that can be applied to arbitrary models.

To implement an attack, simply subclass the Attack class, implement the __call__() method, and decorate it with the call_decorator(). The call_decorator() makes sure that your __call__() implementation is called with an instance of the Adversarial class. You can use this instance to ask for model predictions and gradients, to get the original image and its label, and more. In addition, the Adversarial instance automatically keeps track of the best adversarial amongst all the inputs tested by the attack. That way, the implementation of the attack can focus on the attack logic.
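To illustrate this division of labor, here is a self-contained mock of the pattern. The names Adversarial, Attack, and call_decorator mirror Foolbox's, but the bodies below are simplified stand-ins written for this sketch, not the actual Foolbox implementation; a real attack would subclass foolbox.attacks.Attack and use the decorator from Foolbox itself.

```python
import numpy as np


class Adversarial:
    """Simplified stand-in: wraps the original input and its label,
    and keeps track of the best (closest) adversarial found so far."""

    def __init__(self, model, original, label):
        self._model = model
        self.original = original
        self.label = label
        self.perturbed = None        # best adversarial found so far
        self.best_distance = np.inf

    def predictions(self, x):
        """Ask the model for predictions; automatically record x if it
        is adversarial and closer to the original than the current best."""
        logits = self._model(x)
        is_adversarial = int(np.argmax(logits)) != self.label
        if is_adversarial:
            distance = float(np.linalg.norm(x - self.original))
            if distance < self.best_distance:
                self.best_distance = distance
                self.perturbed = np.array(x, copy=True)
        return logits, is_adversarial


def call_decorator(call_fn):
    """Wraps __call__ so the attack logic receives an Adversarial
    instance and the best adversarial is returned automatically."""
    def wrapper(self, model, input_, label):
        a = Adversarial(model, input_, label)
        call_fn(self, a)
        return a.perturbed
    return wrapper


class Attack:
    """Placeholder base class (real attacks subclass foolbox.attacks.Attack)."""


class AdditiveNoiseAttack(Attack):
    """Toy attack logic: try random perturbations of growing size.
    All bookkeeping happens inside Adversarial.predictions()."""

    @call_decorator
    def __call__(self, a):
        rng = np.random.default_rng(0)
        for epsilon in np.linspace(0.1, 1.0, 10):
            noise = rng.uniform(-epsilon, epsilon, size=a.original.shape)
            a.predictions(a.original + noise)  # bookkeeping is automatic
```

Note how the attack body never checks whether a candidate is adversarial or compares distances; that is exactly the bookkeeping the Adversarial instance takes off the attack's hands.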

To implement an attack that can make use of the batch support introduced in Foolbox 2.0, implement the as_generator() method and decorate it with the generator_decorator(). All model calls using the Adversarial object should use yield.
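The benefit of the generator style is that the attack no longer calls the model itself: it yields the inputs it wants evaluated and is sent the results back, so an outer loop can advance many attack generators in lockstep and evaluate all their pending inputs in a single batched model call. The following self-contained sketch demonstrates that mechanism with toy stand-ins; the names and the drivers are illustrative and are not the actual Foolbox generator_decorator machinery.

```python
import numpy as np


def as_generator(original, label):
    """Toy attack in generator style: yields candidate inputs and is
    sent the model's logits back; returns the first adversarial found."""
    for epsilon in (0.5, 1.0, 2.0):
        candidate = original + epsilon
        logits = yield candidate              # ask the driver for predictions
        if int(np.argmax(logits)) != label:
            return candidate                  # adversarial found
    return None                               # attack failed


def run_single(model, original, label):
    """Drive one attack generator: one model call per yield."""
    gen = as_generator(original, label)
    try:
        candidate = next(gen)
        while True:
            candidate = gen.send(model(candidate))
    except StopIteration as stop:
        return stop.value


def run_batched(model_batch, originals, labels):
    """Drive several generators in lockstep: all pending candidates
    are evaluated in one batched model call per step."""
    gens = [as_generator(x, y) for x, y in zip(originals, labels)]
    results = [None] * len(gens)
    pending = {i: next(g) for i, g in enumerate(gens)}
    while pending:
        idx = list(pending)
        logits = model_batch(np.stack([pending[i] for i in idx]))
        next_pending = {}
        for i, out in zip(idx, logits):
            try:
                next_pending[i] = gens[i].send(out)  # resume the attack
            except StopIteration as stop:
                results[i] = stop.value              # this attack finished
        pending = next_pending
    return results
```

The attack code is identical in both drivers; only the loop that answers its yields changes. This is why writing all model calls with yield is enough to make an attack batch-capable.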