
readme,docs: add pytorch issue info and a note
plutonium-239 committed Aug 29, 2024
1 parent 0c42260 commit 154f88c
Showing 2 changed files with 18 additions and 0 deletions.
4 changes: 4 additions & 0 deletions README.md
@@ -57,6 +57,10 @@ loss = loss_func(model(X), y)
It is also available on [arXiv](https://arxiv.org/abs/2404.12406)

- [Documentation](https://memsave-torch.readthedocs.io/)

- [PyTorch repo issue](https://github.com/pytorch/pytorch/issues/133566)

A proposal to integrate what our library does into PyTorch itself, although at a lower kernel level (please read the [notes on PyTorch integration](https://memsave-torch.readthedocs.io/en/stable/index.html#pytorch-integration-note)).
<!-- - [Link to more examples]()
- [Link to paper/experiments folder]()-->

14 changes: 14 additions & 0 deletions docs_src/index.rst
@@ -107,6 +107,20 @@ Further reading

It is also available on `arXiv <https://arxiv.org/abs/2404.12406>`_.

* `PyTorch repo issue <https://github.com/pytorch/pytorch/issues/133566>`_

A proposal to integrate what our library does into PyTorch itself, although at a lower kernel level (please read the :ref:`notes on PyTorch integration <pytorch_integration_note>`).

.. _pytorch_integration_note:

.. admonition:: Notes on PyTorch integration
   :class: important

   The ideal solution to this problem lies at a lower level (i.e. in the CPU C++ functions, GPU CUDA kernels, etc.) and would involve changing the signature of ``torch.ops.aten.convolution_backward`` to handle not always receiving both saved tensors (the inputs and the weights).

   However, that would require a change in every backend, which is not realistic for us to do and would require considerable design decisions from the PyTorch team itself. We therefore implement these layers at the higher Python level, which makes them platform-independent and easier to maintain, at the cost of a slight performance hit.
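
The Python-level approach described in the note can be sketched with a custom ``torch.autograd.Function`` that saves each tensor for backward only when the gradient computation will actually use it. This is an illustrative toy under assumed defaults (stride 1, no padding, no bias, no dilation), not ``memsave_torch``'s actual implementation:

```python
import torch
from torch.nn import functional as F
from torch.nn.grad import conv2d_input, conv2d_weight


class MemSaveConv2dSketch(torch.autograd.Function):
    """Toy sketch: save each tensor only if the *other* argument
    needs a gradient (grad wrt weight needs x; grad wrt x needs weight)."""

    @staticmethod
    def forward(ctx, x, weight):
        ctx.x_shape, ctx.w_shape = x.shape, weight.shape
        ctx.save_for_backward(
            x if weight.requires_grad else None,
            weight if x.requires_grad else None,
        )
        # assumes default stride/padding and no bias
        return F.conv2d(x, weight)

    @staticmethod
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        grad_x = grad_w = None
        if weight is not None:
            grad_x = conv2d_input(ctx.x_shape, weight, grad_out)
        if x is not None:
            grad_w = conv2d_weight(x, ctx.w_shape, grad_out)
        return grad_x, grad_w
```

When the input does not require a gradient (e.g. the first layer of a network trained on fixed data), the weight tensor is never stored for backward, and vice versa; standard ``F.conv2d`` unconditionally saves both.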


How to cite
*************

