High memory consumption #3

Open
tipf opened this issue Jul 4, 2022 · 4 comments

Comments


tipf commented Jul 4, 2022

Hi Qiangqiang,
I noticed another problem with my range-only SLAM example range_only_incremental.py:
Solving the 100-pose dataset requires about 12 GB of RAM and solving a dataset with 3500 poses is not possible on a machine with 32 GB RAM.

Is the high memory consumption expected, or could something be wrong with my script?

Best Regards
Tim


doublestrong commented Jul 5, 2022

Hi Tim,

You can try reducing posterior_sample_num for lower memory consumption, although the resulting samples will look sparse. Figures will still be saved if you set show_plot to False.
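
For illustration, a minimal sketch of what these settings might look like in range_only_incremental.py; the names posterior_sample_num and show_plot come from the comment above, while their placement and the example values are assumptions:

```python
# Hypothetical settings excerpt; names from the discussion above, values for illustration only.
posterior_sample_num = 100   # fewer posterior samples -> lower memory, sparser-looking results
show_plot = False            # skip interactive plotting windows; figures are still saved to disk
```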

I didn't see issues in your script, but 12 GB does sound like a lot for a 100-pose problem in my experience. The high memory consumption is expected behavior for higher-dimensional problems since we are drawing samples from the (high-dimensional) joint posterior distribution. There could be more efficient implementation strategies to reduce the memory cost; however, we haven't applied them in the current code.


doublestrong commented Jul 5, 2022

Oh, one thing that might be helpful: you can remove the sampling steps from incremental_inference if you don't want samples for every incremental update: https://github.com/tipf/NF-iSAM/blob/035122e7556197aa3c26cb53d0c0cf47818b2327/src/slam/FactorGraphSolver.py#L388-L392. This will save you both runtime and memory and won't affect the results (the learned normalizing flows). You can call sample_posterior to draw samples when needed.
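
Roughly, the suggested pattern could look like the sketch below. The loop structure, the solver handle, and the argument-free call signatures are assumptions for illustration; only incremental_inference and sample_posterior are names from FactorGraphSolver.py:

```python
# Sketch only: after removing the per-step sampling block (FactorGraphSolver.py L388-392),
# each incremental update trains the normalizing flows without drawing samples.
for step_factors in measurements_per_step:      # hypothetical iterable over the dataset
    solver.add_factors(step_factors)            # hypothetical helper that queues new factors
    solver.incremental_inference()              # update the learned flows; no sampling here

# Draw samples from the joint posterior only once, when they are actually needed.
final_samples = solver.sample_posterior()
```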

I guess it is not necessary to run incremental_inference at every time step for your purpose. If you move the following lines into the if n % block, the incremental updates will be performed only every 10% of the dataset and at the last time step. This will reduce both the overall runtime and the memory consumption:
https://github.com/tipf/NF-iSAM/blob/035122e7556197aa3c26cb53d0c0cf47818b2327/example/slam/MUSE/range_only_incremental.py#L167-L173
If you want to get the solution for the final time step ASAP, you can even remove n % (NumTime/10) == 0 or .
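
A rough sketch of that restructuring (n and NumTime follow the naming in range_only_incremental.py; the body of the if block stands in for the linked lines L167-L173 and is a placeholder):

```python
# Sketch only: gate the expensive incremental update so it runs every 10% of the
# time steps and at the final step, instead of at every step.
for n in range(NumTime):
    add_measurements_for_step(n)                      # hypothetical: queue this step's factors
    if n % (NumTime // 10) == 0 or n == NumTime - 1:
        run_incremental_update_and_plot()             # hypothetical: the moved block (L167-L173)
```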


tipf commented Jul 6, 2022

Thanks for your hints, Qiangqiang!
I managed to reduce the memory consumption to 5.6 GB, which is still a lot but seems more reasonable.

Right now, I'm not sure what causes the high memory consumption. Using 500 samples of 100 variables with 3 dimensions each, I would expect only about 1.2 MB (assuming 64-bit double values).
I will check this further and report back if I find an answer...
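
For reference, the back-of-envelope number above works out like this (a plain arithmetic check, not code from the repository):

```python
# 500 samples of 100 variables with 3 dimensions each, stored as 64-bit doubles.
samples, variables, dims, bytes_per_double = 500, 100, 3, 8
size_mb = samples * variables * dims * bytes_per_double / 1e6
print(f"{size_mb:.1f} MB")   # -> 1.2 MB
```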


tipf commented Jul 6, 2022

I ran a memory profiler, and about 5 GB of the memory consumption comes from the line where Graph.fit_tree_density_models is called: range_only_incremental.py#L172

Here is the full profiling result:
Profile.txt

Right now, I don't have the time to go deeper, but I will come back to this in the future.
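
For anyone wanting to reproduce this kind of line-by-line attribution, one common option is the memory_profiler package; the snippet below is a generic sketch, not the exact setup used to produce Profile.txt:

```python
# pip install memory_profiler
from memory_profiler import profile

@profile                      # reports per-line memory usage when the function runs
def run_experiment():
    # ... build the factor graph and call Graph.fit_tree_density_models() here ...
    pass

if __name__ == "__main__":
    run_experiment()
```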
