RAMP

RAMP: Boosting Adversarial Robustness Against Multiple $l_p$ Perturbations for Universal Robustness [NeurIPS 2024]
Enyi Jiang, Gagandeep Singh

Code

Installation

We recommend first creating a conda environment using the provided environment.yml:

conda env create -f environment.yml
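
Then activate the environment. The name below is an assumption; check the name: field in environment.yml for the actual value.

conda activate ramp   # "ramp" is assumed; use the name defined in environment.yml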

Training from Scratch

  • Main Result: The files RAMP.py and RAMP_wide_resnet.py train ResNet-18 and WideResNet models with standard choices of epsilons. To reproduce the results in the paper, run RAMP_scratch_cifar10.sh in the folder scripts/cifar10 (see the sketch after this list).

  • Varying Epsilon Values: We provide the scripts run_ramp_diff_eps_scratch.sh (RAMP), run_max_diff_eps_scratch.sh (MAX), and run_eat_diff_eps_scratch.sh (E-AT) in the folder scripts/cifar10 for running the training-from-scratch experiments with different choices of epsilons.
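
A minimal sketch of running these scripts; launching them from the repository root is an assumption:

# Main result: train ResNet-18 / WideResNet from scratch on CIFAR-10
bash scripts/cifar10/RAMP_scratch_cifar10.sh

# Varying epsilons: RAMP, MAX, and E-AT trained from scratch
bash scripts/cifar10/run_ramp_diff_eps_scratch.sh
bash scripts/cifar10/run_max_diff_eps_scratch.sh
bash scripts/cifar10/run_eat_diff_eps_scratch.sh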

Robust Fine-tuning

  • To obtain pre-trained ResNet-18 models with different epsilon values, run the pretrain_diff_eps_Lp.sh scripts in the folder scripts/cifar10.

  • It is also possible to use models from the Model Zoo of RobustBench with --model_name=RB_{}, where {} is replaced by the identifier of the classifier in the Model Zoo; these models are downloaded automatically. (Credit: the E-AT paper.)

  • Main Result: To reproduce the results in the paper with different model architectures, run RAMP_finetune_cifar10.sh in the folder scripts/cifar10 and RAMP_finetune_imagenet.sh in the folder scripts/imagenet (see the sketch after this list).

  • Varying Epsilon Values: We provide the scripts run_ramp_diff_eps_finetune.sh (RAMP), run_max_diff_eps_finetune.sh (MAX), and run_eat_diff_eps_finetune.sh (E-AT) in the folder scripts/cifar10 for running the robust fine-tuning experiments with different choices of epsilons.
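
As with training from scratch, a minimal sketch of the fine-tuning runs, again assuming the scripts are launched from the repository root:

# Main result: robust fine-tuning on CIFAR-10 and ImageNet
bash scripts/cifar10/RAMP_finetune_cifar10.sh
bash scripts/imagenet/RAMP_finetune_imagenet.sh

# Varying epsilons: RAMP, MAX, and E-AT fine-tuning
bash scripts/cifar10/run_ramp_diff_eps_finetune.sh
bash scripts/cifar10/run_max_diff_eps_finetune.sh
bash scripts/cifar10/run_eat_diff_eps_finetune.sh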

Evaluation (from E-AT paper)

With --final_eval, our standard evaluation (with APGD-CE and APGD-T, for a total of 10 restarts of 100 steps) is run for all threat models at the end of training. Specifying --eval_freq=k runs a fast evaluation on test and training points every k epochs.

To evaluate a trained model, run eval.py with --model_name as above for a pre-trained model, or with --model_name=/path/to/checkpoint/ for new or fine-tuned classifiers. If the run uses the automatically generated name, the corresponding architecture is loaded automatically. More details about the evaluation options can be found in eval.py.
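
A sketch of typical invocations; the RobustBench identifier after RB_ below is only an illustrative example, not one verified against this repo:

# Evaluate a pre-trained model from the RobustBench Model Zoo
# (Carmon2019Unlabeled is an example identifier; substitute any Model Zoo name)
python eval.py --model_name=RB_Carmon2019Unlabeled

# Evaluate a newly trained or fine-tuned checkpoint
python eval.py --model_name=/path/to/checkpoint/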

Credits

Parts of the code in this repo are based on the E-AT codebase, as credited in the sections above.

Citation

Cite the paper/repo:

@article{jiang2024ramp,
  title={RAMP: Boosting Adversarial Robustness Against Multiple $l_p$ Perturbations},
  author={Jiang, Enyi and Singh, Gagandeep},
  journal={arXiv preprint arXiv:2402.06827},
  year={2024}
}
