
Prepared Statements cache is unbounded #114

Open
asterite opened this issue Oct 17, 2019 · 6 comments

Comments

@asterite
Member

See will/crystal-pg#191

This is a problem in crystal-db: the string passed to exec is never released. Apparently crystal-db caches query strings but never evicts them, so if you keep passing different strings, memory consumption grows without bound.
This problem is avoided by using parameter placeholders, like "$1". But I still think this is an issue in crystal-db: there should be a bounded pool of prepared statements, not an unbounded cache.
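
A minimal sketch of the difference, assuming a local Postgres and an events table (names and credentials are illustrative, not from this issue):

```crystal
require "db"
require "pg"

DB.open("postgres://user@localhost/test") do |db|
  # Anti-pattern: every distinct SQL string is prepared and cached
  # separately, so the statement cache grows on every iteration.
  1000.times do |i|
    db.exec "insert into events (n) values (#{i})"
  end

  # Preferred: one SQL string with a placeholder, so crystal-db
  # prepares it once and reuses the cached statement each time.
  1000.times do |i|
    db.exec "insert into events (n) values ($1)", i
  end
end
```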

@asterite
Member Author

/cc @bcardiff

@bcardiff
Member

@haffla as a mitigation, use prepared_statements=false in both connection query strings. That should help.

There is an idea of putting a maximum life span on connections, but clearing the statement cache should also be considered.
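
Concretely, the mitigation is a flag in the connection URI's query string (credentials here are placeholders); with it, crystal-db sends plain queries and nothing accumulates in the per-connection statement cache:

```crystal
require "db"
require "pg"

# prepared_statements=false disables client-side statement
# preparation (and therefore the statement cache) for this database.
db = DB.open("postgres://user@localhost/test?prepared_statements=false")
```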

@straight-shoota
Member

It's weird, though: in my example the heap grows by about 140K per query, but the query string is only about 5K.

@haffla
Copy link

haffla commented Oct 18, 2019

Why don't I have this problem when inserting into a MySQL database, though, as I described in will/crystal-pg#191? Anyway, I will try prepared_statements=false.

@haffla

haffla commented Oct 18, 2019

Confirming that disabling prepared statements mitigates the problem. But note: I am now not using insert inside the loop; instead I use the database's own server-side prepared statements. Before the loop I create a plan

prepare fooplan (numeric, numeric) as insert into table values (...), (...), ...

and then inside the loop

execute fooplan(1, 2, 3, ...)

There is no reason to cache anything here.
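
To spell out why the cache could still grow in this setup: crystal-db appears to key its statement cache on the full SQL string, so every distinct "execute fooplan(...)" line becomes a new cache entry even though the server-side plan is created only once. A hedged sketch of the pattern (table, columns, and values are illustrative):

```crystal
require "db"
require "pg"

# With prepared_statements=false the distinct EXECUTE strings below are
# sent as plain queries and nothing is cached client-side; without it,
# each distinct string would be prepared and cached separately.
DB.open("postgres://user@localhost/test?prepared_statements=false") do |db|
  # Server-side plan, created once before the loop.
  db.exec "prepare fooplan (numeric, numeric) as insert into t values ($1, $2)"
  1000.times do |i|
    # Each iteration produces a different SQL string.
    db.exec "execute fooplan (#{i}, #{i * 2})"
  end
end
```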

@lwakefield
Contributor

lwakefield commented Aug 28, 2024

Checking in here! It appears I am running into this issue as well.

Interestingly, the queries I am running are not dynamic (need to verify there isn't anything sneaky) - so I should have a very high hit-rate in @statements_cache. Yet when I add ?prepared_statements_cache=false the problem goes away.

My current theory is that there is some interplay between the @statements_cache and Pool where some objects are not being GCed. More specifically, I think that when a connection is yielded back to the pool and triggers the max_idle_pool_size condition, that connection gets released but is not fully GCed. This is supported by inspecting the allocations and noting that the largest allocations are generally connection related, not cache related.

Either way - this should be fairly straightforward to verify, so stay tuned...

Update: I'm glad I called it a theory! This ☝️ might be related to the issue, but appears to be tangential to the issue I am personally experiencing. I'll leave the comment here in case it is of use (so long as it is clear it is closer to "field notes" than "verified assertion").
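
For reference, the bounded cache asterite asks for could look roughly like this. This is a hypothetical sketch, not crystal-db's actual implementation; it relies on Crystal's Hash preserving insertion order, which makes a simple LRU eviction possible:

```crystal
# Hypothetical bounded LRU cache keyed by SQL string: once max_size is
# reached, the least recently used statement is evicted instead of the
# cache growing without bound.
class LRUStatementCache(T)
  def initialize(@max_size : Int32)
    @entries = {} of String => T
  end

  def fetch(sql : String, & : -> T) : T
    if value = @entries.delete(sql)
      # Cache hit: re-insert to mark this entry as most recently used.
      @entries[sql] = value
    else
      # Cache miss: evict the oldest entry (the first key, since Hash
      # is insertion-ordered) if we are at capacity, then store the
      # newly prepared statement produced by the block.
      if @entries.size >= @max_size
        @entries.delete(@entries.first_key)
      end
      @entries[sql] = yield
    end
    @entries[sql]
  end
end
```

Evicted statements would additionally need to be closed (deallocated on the server) rather than just dropped, but that is the connection's responsibility and is omitted here.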


5 participants