
AUPIMO

AUPIMO stands for Area Under the Per-IMage Overlap curve (pronounced a-u-pee-mo).

Official implementation of the paper AUPIMO: Redefining Visual Anomaly Detection Benchmarks with High Speed and Low Tolerance (accepted at BMVC 2024, to be presented in November 2024).

Interpretation of an AUPIMO score

“An AUPIMO score is the [cross-threshold] average segmentation recall in an image given that the model (nearly) does not yield false positive regions in normal images”.
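To make that sentence concrete: PIMO measures, for each anomalous image, the segmentation recall (overlap) at each threshold, and only thresholds at which normal images produce (nearly) no false positives are kept; AUPIMO averages the recall over those thresholds. The following is a toy numpy sketch of those quantities on made-up data — it illustrates the idea only and is not the library's implementation:

```python
import numpy as np

def per_image_recall(asmap: np.ndarray, mask: np.ndarray, threshold: float) -> float:
    """Segmentation recall (overlap) in one anomalous image at one threshold."""
    pred = asmap >= threshold
    return np.logical_and(pred, mask).sum() / mask.sum()

def fpr(asmap: np.ndarray, threshold: float) -> float:
    """False positive rate in a normal image (no anomalous pixels)."""
    return (asmap >= threshold).mean()

# toy 4x4 anomaly score map; ground truth: the top-left 2x2 block is anomalous
asmap = np.array([
    [0.9, 0.8, 0.1, 0.0],
    [0.7, 0.6, 0.2, 0.1],
    [0.1, 0.2, 0.1, 0.0],
    [0.0, 0.1, 0.0, 0.0],
])
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True

# a normal image's score map, used to bound the false positive rate
normal = np.full((4, 4), 0.05)

# keep only thresholds where the normal image yields (nearly) no false positives
thresholds = np.linspace(0.0, 1.0, 101)
valid = [t for t in thresholds if fpr(normal, t) <= 1e-3]

# AUPIMO-like score: average recall over the valid thresholds
score = float(np.mean([per_image_recall(asmap, mask, t) for t in valid]))
```

At a threshold of 0.5 this toy model recovers the whole anomalous region (recall 1.0) with zero false positives on the normal image; the averaged score then summarizes the recall across all low-false-positive thresholds.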

AUROC vs. AUPRO vs. AUPIMO

See the references at the end for details.
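The gist of the comparison (see the references for the precise definitions): pixel-pooled metrics aggregate all images into one score, so a model that segments most images well can hide a complete failure on one image, while a per-image metric like AUPIMO exposes it. A toy illustration of that effect — made-up data, not the paper's benchmark:

```python
import numpy as np

# two anomalous images: one easy (defect scored high), one hard (defect missed)
easy = {"asmap": np.array([0.9, 0.9, 0.1, 0.1]), "mask": np.array([1, 1, 0, 0], bool)}
hard = {"asmap": np.array([0.2, 0.1, 0.1, 0.1]), "mask": np.array([1, 1, 0, 0], bool)}

threshold = 0.5

# per-image recalls: the failure on the hard image is plainly visible
recalls = []
for img in (easy, hard):
    pred = img["asmap"] >= threshold
    recalls.append(np.logical_and(pred, img["mask"]).sum() / img["mask"].sum())

# pixel-pooled recall mixes both images and averages the failure away
pooled_tp = sum((img["asmap"] >= threshold)[img["mask"]].sum() for img in (easy, hard))
pooled = pooled_tp / sum(img["mask"].sum() for img in (easy, hard))
```

Here the per-image recalls are 1.0 and 0.0 — the hard image clearly fails — while the pooled recall is a reassuring-looking 0.5.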

Integration in anomalib

AUPIMO is available in anomalib!

Tutorials in anomalib/notebooks/700_metrics.

Warning: this version includes the following features that are not available in anomalib:

  • a numpy-only API (anomalib only provides a torch- and torchmetrics-based API)
  • numba acceleration (without it, the anomalib version is considerably slower)

Installation

If you want to use AUPIMO, you can clone the git repository and install it with pip:

git clone [email protected]:jpcbertoldo/aupimo.git
cd aupimo
pip install .

You can add it to your requirements.txt file as aupimo @ git+https://github.com/jpcbertoldo/aupimo.

A PyPI package is coming soon.

Reproducing and extending paper results

If you want to reproduce or extend the results of the paper, install the requirements in requirements/aupimo-paper.txt as well:

git clone [email protected]:jpcbertoldo/aupimo.git
cd aupimo
pip install -e .  # `-e` is for 'editable' mode
pip install -r requirements/aupimo-paper.txt

Important: we recommend using a virtual environment to install the dependencies and run the tests. A conda environment file dev-env.yml is provided at the root of the repository; create the environment with conda env create -f dev-env.yml and activate it with conda activate aupimo-dev.

Data setup

To recompute the metrics reported in the paper, use the script scripts/eval.py.

You will first need to set up the data: the anomaly score maps, and the images with their masks from the public datasets.

Anomaly score maps (asmaps)

Download them by running data/experiments/download_asmaps.sh, then unzip the downloaded zip file exactly where it is (it will match the folder structure in data/experiments/benchmark).

Images and masks

You can download MVTec AD and VisA from their respective original sources.

Unpack the data into folders named MVTec and VisA, respectively; the paths to these folders will be passed to the eval.py script.

Development

If you want to modify the package and eventually open a Pull Request, install the requirements in requirements/dev.txt and install pre-commit hooks:

git clone [email protected]:jpcbertoldo/aupimo.git
cd aupimo
pip install -e .  # `-e` is for 'editable' mode
pip install -r requirements/dev.txt
pre-commit install

Run the tests in tests/ locally with pytest before opening a Pull Request:

pytest tests/

Important: as above, we recommend using the conda environment defined in dev-env.yml at the root of the repository.

Cite Us

@misc{bertoldo2024aupimo,
      author={Joao P. C. Bertoldo and Dick Ameln and Ashwin Vaidya and Samet Akçay},
      title={{AUPIMO: Redefining Visual Anomaly Detection Benchmarks with High Speed and Low Tolerance}}, 
      year={2024},
      url={https://arxiv.org/abs/2401.01984}, 
}

arXiv: https://arxiv.org/abs/2401.01984 (accepted to BMVC 2024)

AUPIMO was developed during Google Summer of Code 2023 (GSoC 2023) with the anomalib team from Intel's OpenVINO Toolkit.

Other media:
