readme,docs: add poster
plutonium-239 committed Aug 29, 2024
1 parent 0327bc2 commit 0c42260
Showing 2 changed files with 27 additions and 14 deletions.
39 changes: 26 additions & 13 deletions README.md
@@ -2,20 +2,22 @@

This package offers drop-in implementations of PyTorch `nn.Module`s.
They are as fast as their built-in equivalents, but more memory-efficient whenever you want to compute gradients for a subset of parameters (i.e., some have `requires_grad=False`).
-You can convert your neural network by calling the `memsave_torch.nn.convert_to_memory_saving` function.
+You can convert your neural network by calling the [`memsave_torch.nn.convert_to_memory_saving`](https://memsave-torch.readthedocs.io/en/stable/api/nn/index.html#memsave_torch.nn.convert_to_memory_saving) function.

Take a look at the [Basic Example](#basic-example) to see how it works.
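
As a quick preview, here is a minimal sketch of the conversion (assuming `convert_to_memory_saving` takes the model as its only required argument; see the linked docs for the full signature):

```python
import torch
import torch.nn as nn
import memsave_torch.nn

# Toy model: freeze the first layer, train only the rest
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))
for p in model[0].parameters():
    p.requires_grad = False  # frozen parameters are where the memory savings apply

# Swap supported layers for their MemSave equivalents
memsave_model = memsave_torch.nn.convert_to_memory_saving(model)

X = torch.rand(8, 10)
loss = memsave_model(X).sum()
loss.backward()  # gradients only for the trainable (second) linear layer
```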

Currently it supports the following layers:
- `memsave_torch.nn.MemSaveLinear`
-- `memsave_torch.nn.MemSaveConv2d`
+- `memsave_torch.nn.MemSaveConv1d`
+- `memsave_torch.nn.MemSaveConv2d`
+- `memsave_torch.nn.MemSaveConv3d`
+- `memsave_torch.nn.MemSaveConvTranspose1d`
+- `memsave_torch.nn.MemSaveConvTranspose2d`
+- `memsave_torch.nn.MemSaveConvTranspose3d`
- `memsave_torch.nn.MemSaveReLU`
- `memsave_torch.nn.MemSaveBatchNorm2d`
- `memsave_torch.nn.MemSaveLayerNorm`
- `memsave_torch.nn.MemSaveMaxPool2d`
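
Since these are drop-in replacements, each `MemSave` layer should accept the same constructor arguments as its `torch.nn` counterpart; a short sketch under that assumption:

```python
from memsave_torch.nn import MemSaveLinear

# Assumed to mirror nn.Linear(in_features, out_features), per "drop-in" above
layer = MemSaveLinear(128, 64)
```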

-Also, each layer has a `.from_nn_<layername>(layer)` function which allows you to convert a single `torch.nn` layer into its memory-saving equivalent (e.g. `MemSaveConv2d.from_nn_Conv2d`).
+Also, each layer has a `.from_nn_<layername>(layer)` function which allows you to convert a single `torch.nn` layer into its memory-saving equivalent (e.g. [`MemSaveConv2d.from_nn_Conv2d`](https://memsave-torch.readthedocs.io/en/stable/api/nn/memsave_torch.nn.MemSaveConv2d.html)); see the sketch below.
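
For example, a sketch of converting a single layer (assuming `from_nn_Conv2d` takes the layer as its only argument, as the naming suggests):

```python
import torch.nn as nn
from memsave_torch.nn import MemSaveConv2d

conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
memsave_conv = MemSaveConv2d.from_nn_Conv2d(conv)  # memory-saving stand-in for `conv`
```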

## Installation

@@ -45,19 +47,30 @@ loss = loss_func(model(X), y)
```

## Further reading
-- [Link to documentation]()
-- [Link to more examples]()
-- [Link to paper/experiments folder]()
-- [Writeup](memsave_torch/writeup.md)

+- [Writeup](https://github.com/plutonium-239/memsave_torch/blob/llm/experiments/writeup.md)

+This explains the basic ideas around MemSave without diving into too many details.

+- [Our paper (WANT@ICML'24)](https://openreview.net/pdf?id=KsUUzxUK7N) and its [Poster](https://github.com/plutonium-239/memsave_torch/blob/main/memsave_poster.pdf)

+It is also available on [arXiv](https://arxiv.org/abs/2404.12406).

+- [Documentation](https://memsave-torch.readthedocs.io/)
+<!-- - [Link to more examples]()
+- [Link to paper/experiments folder]()-->

## How to cite

If this package has benefited you at some point, consider citing

```bibtex
-@article{
-TODO
+@inproceedings{
+bhatia2024lowering,
+title={Lowering PyTorch's Memory Consumption for Selective Differentiation},
+author={Samarth Bhatia and Felix Dangel},
+booktitle={2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024)},
+year={2024},
+url={https://openreview.net/forum?id=KsUUzxUK7N}
+}
```
2 changes: 1 addition & 1 deletion docs_src/index.rst
@@ -103,7 +103,7 @@ Further reading

This explains the basic ideas around MemSave without diving into too many details.

-* `Our paper (WANT@ICML'24) <https://openreview.net/pdf?id=KsUUzxUK7N>`_
+* `Our paper (WANT@ICML'24) <https://openreview.net/pdf?id=KsUUzxUK7N>`_ and its `Poster <https://github.com/plutonium-239/memsave_torch/blob/main/memsave_poster.pdf>`_

It is also available on `arXiv <https://arxiv.org/abs/2404.12406>`_.

