
Automatic Discovery of Visual Circuits

Code and data for the NeurIPS ATTRIB 2023 paper: Automatic Discovery of Visual Circuits

Achyuta Rajaram*, Neil Chowdhury*,
Antonio Torralba, Jacob Andreas, Sarah Schwettmann
*Equal contribution

Teaser figure: a car circuit discovered in an Inception model.

This repo is under active development; the code and data will be released in the coming weeks. Sign up for email updates using this Google Form.

To date, most discoveries of network subcomponents that implement human-interpretable computations in deep vision models have involved close study of single units and large amounts of human labor. We explore scalable methods for extracting the subgraph of a vision model’s computational graph that underlies recognition of a specific visual concept. We introduce a new method for identifying these subgraphs: specifying a visual concept using a few examples, and then tracing the interdependence of neuron activations across layers, or their functional connectivity. We find that our approach extracts circuits that causally affect model output, and that editing these circuits can defend large pretrained models from adversarial attacks.
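
As a rough illustration of the "functional connectivity" idea (not the paper's released code), the sketch below records channel activations in two layers of a pretrained Inception model over a handful of concept example images, scores cross-layer co-activation with a simple correlation, and keeps the most strongly connected unit pairs as a candidate circuit. The model choice, the layer names (`inception4a`, `inception4e`), the correlation-based score, and the top-k selection are all illustrative assumptions, not the method described in the paper.

```python
# Minimal sketch, assuming a torchvision GoogLeNet and correlation as a
# stand-in for the paper's interdependence measure.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()
layers = ["inception4a", "inception4e"]  # assumed layers of interest

acts = {}
def make_hook(name):
    def hook(_module, _inputs, output):
        # Spatially average each feature map: one activation per channel per image.
        acts[name] = output.mean(dim=(2, 3)).detach()
    return hook

modules = dict(model.named_modules())
handles = [modules[n].register_forward_hook(make_hook(n)) for n in layers]

# Stand-in for a few example images of the target concept.
concept_images = torch.randn(8, 3, 224, 224)
with torch.no_grad():
    model(concept_images)
for h in handles:
    h.remove()

# Normalize per channel, then correlate channels across the example set
# as a crude proxy for cross-layer functional connectivity.
a = acts["inception4a"]
b = acts["inception4e"]
a = (a - a.mean(0)) / (a.std(0) + 1e-6)
b = (b - b.mean(0)) / (b.std(0) + 1e-6)
connectivity = (a.T @ b) / a.shape[0]  # shape: [units in 4a, units in 4e]

# Keep the 10 most strongly connected unit pairs as a candidate circuit.
top = connectivity.flatten().abs().topk(10).indices
pairs = [(int(i // connectivity.shape[1]), int(i % connectivity.shape[1])) for i in top]
print(pairs)
```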
