AUPIMO stands for Area Under the Per-IMage Overlap curve (pronounced a-u-pee-mo).
Official implementation of the paper AUPIMO: Redefining Visual Anomaly Detection Benchmarks with High Speed and Low Tolerance (accepted to BMVC 2024, coming up in November 2024).
Interpretation of an AUPIMO score
“An AUPIMO score is the [cross-threshold] average segmentation recall in an image given that the model (nearly) does not yield false positive regions in normal images”.
See the references at the end for details.
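In rough mathematical terms (a hedged sketch only; the log-scale integration and the low FPR bounds below are assumptions to be checked against the paper's exact definition):

$$
\mathrm{AUPIMO}_i \;=\; \frac{1}{\log f_{\max} - \log f_{\min}} \int_{\log f_{\min}}^{\log f_{\max}} \mathrm{TPR}_i\big(\theta(f)\big)\; \mathrm{d}(\log f),
\qquad [f_{\min}, f_{\max}] = [10^{-5}, 10^{-4}],
$$

where $f$ is the mean false positive rate measured on the normal images at threshold $\theta(f)$, and $\mathrm{TPR}_i$ is the pixel-wise recall (overlap with the ground-truth anomaly) in anomalous image $i$.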
AUPIMO is available in `anomalib`! Tutorials are in `anomalib/notebooks/700_metrics`.
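A minimal sketch of what using the metric in `anomalib` could look like; the import path and the torchmetrics-style `update`/`compute` calls are assumptions based on the usual `anomalib` metric API, so refer to the tutorial notebooks for the authoritative usage:

```python
# Hedged sketch: AUPIMO through anomalib's torchmetrics-style metric API.
# Import path and call signatures are assumptions; see anomalib/notebooks/700_metrics.
import torch
from anomalib.metrics import AUPIMO  # assumed import path

metric = AUPIMO()
anomaly_maps = torch.rand(8, 256, 256)             # per-pixel anomaly scores (N, H, W)
masks = (torch.rand(8, 256, 256) > 0.98).int()     # toy ground-truth masks (0/1)
metric.update(anomaly_maps, masks)
result = metric.compute()                          # per-image scores / result object, depending on the version
print(result)
```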
Warning: this version includes the following features that are not available in `anomalib`:

- `numpy`-only API (`anomalib` only has the `torch`/`torchmetrics`-based API); a hypothetical usage sketch follows below
- `numba` acceleration (without it, the computation is considerably slower)
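The snippet below is only a hypothetical illustration of the `numpy`-only API; the function name `aupimo_scores` and its signature are assumptions, not the confirmed entry points, so check the package's docstrings for the real ones:

```python
# Hypothetical sketch of the numpy-only API; the function name and signature are
# assumptions -- check the aupimo package itself for the real entry points.
import numpy as np

anomaly_maps = np.random.rand(4, 128, 128).astype(np.float32)  # per-pixel anomaly scores
masks = np.zeros((4, 128, 128), dtype=bool)                    # ground-truth masks
masks[2, 40:60, 40:60] = True                                  # two toy anomalous images
masks[3, 10:30, 80:110] = True

from aupimo import aupimo_scores  # hypothetical import
print(aupimo_scores(anomaly_maps, masks))
```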
If you want to use AUPIMO, you can clone the git repository and install it with pip:
```bash
git clone git@github.com:jpcbertoldo/aupimo.git
cd aupimo
pip install .
```
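A quick, package-agnostic way to confirm the install is visible in the active environment:

```python
# Post-install check: the package should be importable from the environment used above.
import aupimo
print(aupimo.__file__)  # location of the installed package
```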
You can add it to your `requirements.txt` file as `aupimo @ git+https://github.com/jpcbertoldo/aupimo`.
PyPI package coming up.
If you want to reproduce or extend the results of the paper, install the requirements in `requirements/aupimo-paper.txt` as well:
```bash
git clone git@github.com:jpcbertoldo/aupimo.git
cd aupimo
pip install -e .  # `-e` is for 'editable' mode
pip install -r requirements/aupimo-paper.txt
```
Important: it is recommended to use a virtual environment to install the dependencies and run the tests. We recommend using `conda`, and an environment file `dev-env.yml` is provided at the root of the repository. Install it with `conda env create -f dev-env.yml` and activate it with `conda activate aupimo-dev`.
In order to recompute the metrics reported in the paper, you can use the script `scripts/eval.py`.
You will first need to set up the data: the anomaly score maps and the images with their masks from the public datasets.
Download the anomaly score maps by running `data/experiments/download_asmaps.sh`, then unzip the downloaded zip file exactly where it is (it will match the folder structure of `data/experiments/benchmark`).
You can download MVTec AD and VisA from their respective original sources:
- MVTec AD: https://www.mvtec.com/company/research/datasets/mvtec-ad/
- VisA: https://amazon-visual-anomaly.s3.us-west-2.amazonaws.com/VisA_20220922.tar
You should unpack the data into folders respectively named `MVTec` and `VisA`; the paths to these folders will be passed to the `eval.py` script.
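As a hedged sanity check before calling `scripts/eval.py`: the benchmark folder name comes from the download step above, while the dataset root locations below are placeholders and the script's command-line flags are not listed here (check `python scripts/eval.py --help`).

```python
# Hedged sanity check of the data layout expected before running scripts/eval.py.
# The dataset roots are placeholders; the benchmark folder is produced by
# data/experiments/download_asmaps.sh after unzipping in place.
from pathlib import Path

benchmark_dir = Path("data/experiments/benchmark")   # unzipped anomaly score maps
mvtec_root = Path("/path/to/datasets/MVTec")         # placeholder: where MVTec AD was unpacked
visa_root = Path("/path/to/datasets/VisA")           # placeholder: where VisA was unpacked

for path in (benchmark_dir, mvtec_root, visa_root):
    assert path.is_dir(), f"missing folder: {path}"
print(sorted(p.name for p in benchmark_dir.iterdir()))  # quick look at what was downloaded
```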
If you want to modify the package and eventually open a Pull Request, install the requirements in `requirements/dev.txt` and install the pre-commit hooks:
```bash
git clone git@github.com:jpcbertoldo/aupimo.git
cd aupimo
pip install -e .  # `-e` is for 'editable' mode
pip install -r requirements/dev.txt
pre-commit install
```
Run the tests in `tests/` locally with `pytest` before opening a Pull Request:

```bash
pytest tests/
```
Important: it is recommended to use a virtual environment to install the dependencies and run the tests. We recommend using `conda`, and an environment file `dev-env.yml` is provided at the root of the repository. Install it with `conda env create -f dev-env.yml` and activate it with `conda activate aupimo-dev`.
@misc{bertoldo2024aupimo,
author={Joao P. C. Bertoldo and Dick Ameln and Ashwin Vaidya and Samet Akçay},
title={{AUPIMO: Redefining Visual Anomaly Detection Benchmarks with High Speed and Low Tolerance}},
year={2024},
url={https://arxiv.org/abs/2401.01984},
}
arXiv: https://arxiv.org/abs/2401.01984 (accepted to BMVC 2024)
AUPIMO was developed during Google Summer of Code 2023 (GSoC 2023) with the anomalib team from Intel's OpenVINO Toolkit.
Other media:
- BMVC 2024 poster:
- GSoC 2023 page: https://summerofcode.withgoogle.com/archive/2023/projects/SPMopugd
- Medium post: https://medium.com/p/c653ac30e802
- Slides from the presentation at Mines Paris - PSL University (October 2024, Paris, France):
- Papers With Code: https://paperswithcode.com/paper/aupimo-redefining-visual-anomaly-detection