Welcome to Foolbox¶
Foolbox is a Python toolbox to create adversarial examples that fool neural networks.
It supports models built with many frameworks, including:
- TensorFlow
- PyTorch
- Theano
- Keras
- Lasagne
- MXNet
and it is easy to extend to additional frameworks.
In addition, it comes with a large collection of adversarial attacks, both gradient-based and black-box. See foolbox.attacks for details.
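To illustrate the idea behind a gradient-based attack, here is a minimal sketch of a single FGSM-style step in plain NumPy. This is not Foolbox's API; the linear "model" and the function name `fgsm_step` are hypothetical, chosen only to show how the sign of the loss gradient is used to perturb an input:

```python
import numpy as np

def fgsm_step(x, grad, epsilon):
    """One FGSM-style step: shift each input component by epsilon
    in the direction that increases the loss (sign of the gradient),
    then clip back into the valid input range [0, 1]."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy linear model: score for the true class is w . x, and we take
# loss = -(w . x), so the gradient of the loss w.r.t. x is -w.
w = np.array([0.5, -0.2, 0.1])
x = np.array([0.4, 0.6, 0.5])
grad = -w

x_adv = fgsm_step(x, grad, epsilon=0.1)
print(x_adv)  # each component moved by 0.1 against the true-class score
```

Real attacks in foolbox.attacks wrap this kind of logic behind a common interface, so the same perturbation strategy can be applied to models from any supported framework.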
The source code and a minimal working example can be found on GitHub.
Robust Vision Benchmark¶
You might also want to have a look at our recently announced Robust Vision Benchmark, a benchmark for adversarial attacks and the robustness of machine learning models.