High memory consumption #3
Hi Qiangqiang,

I noticed another problem with my range-only SLAM example range_only_incremental.py:
Solving the 100-pose dataset requires about 12 GB of RAM, and solving a dataset with 3500 poses is not possible on a machine with 32 GB of RAM.

Is the high memory consumption expected, or could something be wrong with my script?

Best Regards
Tim

Comments
Hi Tim,

You can try to reduce the number of samples. I didn't see issues in your script, but 12 GB does sound like a lot for a 100-pose problem in my experience. The high memory consumption is expected behavior for larger-dimensional problems, since we are drawing samples from the (high-dimensional) joint posterior distribution. There could be more efficient implementation strategies to reduce the memory cost; however, we haven't applied them in the current code.
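One such strategy, sketched below purely as an illustration (none of these names come from the library): instead of materializing all posterior samples at once, draw them in fixed-size batches and reduce each batch before drawing the next, so peak memory is bounded by the batch size rather than the total sample count.

```python
import numpy as np

def sample_in_batches(draw_batch, total_samples, batch_size=100):
    """Draw `total_samples` posterior samples in batches and accumulate
    running statistics, so peak memory is O(batch_size * dim) instead of
    O(total_samples * dim).

    `draw_batch(n)` is a hypothetical callable returning an (n, dim) array
    of posterior samples; it stands in for whatever sampler the solver uses.
    """
    running_sum = None
    count = 0
    while count < total_samples:
        n = min(batch_size, total_samples - count)
        batch = draw_batch(n)  # (n, dim) array for this batch only
        batch_sum = batch.sum(axis=0)
        running_sum = batch_sum if running_sum is None else running_sum + batch_sum
        count += n
        # `batch` goes out of scope here, so it can be freed before the next draw
    return running_sum / count

# Toy usage: a stand-in sampler for a 300-dimensional posterior
rng = np.random.default_rng(0)
posterior_mean = sample_in_batches(lambda n: rng.normal(size=(n, 300)), 500)
```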
Oh, one thing that might be helpful: you can remove the sampling steps from the iterations where you don't actually need the posterior samples. I guess it is not necessary to do the sampling at every incremental update.
Thanks for your hints, Qiangqiang! Right now, I'm not sure what causes the high memory consumption. Using 500 samples of 100 variables with 3 dimensions each, I would expect only about 1.2 MB (assuming 64-bit double values).
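For reference, the back-of-the-envelope calculation behind that 1.2 MB figure (a plain Python sketch, not code from the library):

```python
# Expected size of the final sample array: samples x variables x dims x 8 bytes
num_samples = 500
num_variables = 100
dims_per_variable = 3
bytes_per_double = 8  # 64-bit float

total_bytes = num_samples * num_variables * dims_per_variable * bytes_per_double
print(total_bytes / 1e6, "MB")  # -> 1.2 MB, orders of magnitude below 12 GB
```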
I ran a memory profiler, and about 5 GB of the memory consumption comes from the line where … Here is the full profiling result: …

Right now, I don't have the time to go deeper, but I will come back to this in the future.
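For anyone who wants to reproduce this kind of measurement, Python's standard-library tracemalloc can attribute allocations to individual source lines; a minimal sketch (the script name is the one from this issue, the rest is generic):

```python
import tracemalloc

tracemalloc.start()

# ... run the solver here, e.g. the body of range_only_incremental.py ...

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:10]:
    # Each entry shows file:line and the memory currently attributed to it
    print(stat)
```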