Clean-up /tmp directory #15
Comments
Today I ran a benchmark for DuckDB: https://youtu.be/zVR77B2bDR0
Can you provide reproducible steps for when an …
I found this when I was using the …
I can confirm that disk space was never a concern and scripts generally won't be handling this kind of exception.
I noticed this issue too, actually, when getting the benchmark back up and running. I never had the issue where another solution encountered an …
@sl-solution If you still believe this would be a problem, feel free to open a PR to automatically clean the …
In …
I wouldn't delete everything from …
I guess for …
I think sorting a billion rows requires the use of temporary files. I have implemented billion-row join/filter/groupby using only 32 GB of RAM, and I have verified that no temp files are needed for those.
I think a systematic way to solve the issue is to assign a directory for temporary files and ask every solution to use solely that assigned directory for its on-disk calculations. The launcher can then clean the directory after each run, as sketched below.
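For illustration, a minimal sketch of that proposal in Python, assuming solutions honor the conventional TMPDIR/TEMP/TMP environment variables (an assumption; some engines need their own temp-path option). The function and script names are hypothetical, not part of the benchmark:

```python
import os
import shutil
import subprocess
import tempfile

def run_solution(cmd):
    # Create a dedicated temp directory for this run only.
    tmpdir = tempfile.mkdtemp(prefix="db-benchmark-tmp-")
    # Point the solution at it via the conventional environment variables
    # (assumption: the solution's engine respects these).
    env = dict(os.environ, TMPDIR=tmpdir, TEMP=tmpdir, TMP=tmpdir)
    try:
        subprocess.run(cmd, env=env, check=True)
    finally:
        # The launcher cleans the directory after each run, succeed or fail.
        shutil.rmtree(tmpdir, ignore_errors=True)

# Hypothetical invocation; the script name is illustrative.
run_solution(["./run_solution.sh"])
```

One advantage of this design is that cleanup stays in the launcher, so individual solutions need no changes beyond writing their temp files to the assigned directory.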
Since the new machine has more memory and instance storage, this has become less of an issue. Can this therefore be closed?
I guess as long as solutions keep using temp files, this will be an issue.
I notice that on-disk solutions may create large temporary files during their runs; however, they may not clean up afterward (e.g. polars creates .ipc files). This may cause the "undefined exception" error for other solutions when they run within the same session.
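As a hedged sketch of the kind of between-run cleanup this implies, assuming leftover artifacts can be matched by simple glob patterns such as *.ipc under /tmp (the patterns here are illustrative assumptions, not a list the benchmark actually maintains):

```python
import glob
import os

def clean_tmp(patterns=("/tmp/*.ipc",)):
    # Remove leftover temp artifacts from a previous solution's run,
    # e.g. the .ipc files mentioned above. Patterns are assumptions.
    for pattern in patterns:
        for path in glob.glob(pattern):
            try:
                os.remove(path)
                print(f"removed {path}")
            except OSError as exc:
                # Skip files owned by another user or still in use.
                print(f"skipped {path}: {exc}")

clean_tmp()
```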