I've wanted to improve compile times for a while. So far my attempts to do so have failed because I don't know exactly what is slow, other than program proofs.
The first problem is getting data: what's slow about compilation? We have per-file timing, but we need more fine-grained measurements. Is it Ltac? If it's Ltac, is it something about the proof mode? Is it typeclass resolution?
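For the typeclass question specifically, one cheap first check is Coq's built-in typeclass resolution tracing, wrapped around a proof we suspect is slow; the volume of search output is a rough proxy for how much work instance resolution is doing. A minimal sketch (the placement is hypothetical, not something wired into the build):

```coq
(* Trace typeclass search around a suspect proof; lots of backtracking in the
   output suggests instance resolution is a real cost here. *)
Set Typeclasses Debug.
(* ... the proof script suspected to be slow goes here ... *)
Unset Typeclasses Debug.
```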
Some things we can try:
- Figure out how much time per file goes into Ltac vs everything else (see the profiling sketch after this list).
- Attempt to get a whole-project Ltac profile.
- Look at an OCaml profile of Coq compiling the whole project.
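For the first two items, Coq's built-in Ltac profiler seems like the right starting point: it can be turned on inside a file, or for a whole file at once with `coqc -profile-ltac`, and the per-file summaries could then be summed across the project with a small script (which we'd still have to write). A minimal in-file sketch; the lemma is just a stand-in, and `"iIntros"` is only an example filter string:

```coq
(* Enable the Ltac profiler for everything compiled after this point. *)
Set Ltac Profiling.

Lemma demo n : n + 0 = n.
Proof. induction n; simpl; congruence. Qed.

(* Print aggregate time per tactic, sorted by total time. *)
Show Ltac Profile.
(* Restrict the report to entries matching a string, e.g. a proof-mode tactic: *)
Show Ltac Profile "iIntros".
```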
If it's the proof mode, one piece of useful infrastructure would be a synthetic benchmark that tests performance as some scaling factor increases, like the number of hypotheses or the size of terms in the environment. Jason's projects (like reification by parametricity) are extremely good about this: such benchmarks let them both track performance and identify asymptotic inefficiencies.
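A minimal sketch of what such a scaling benchmark could look like, using plain Coq hypotheses rather than Iris proof-mode contexts (the `add_hyps` helper and the timed step are hypothetical stand-ins for whatever we actually want to measure):

```coq
(* Build a goal with n junk hypotheses, then time a context-sensitive step at
   increasing sizes to see how the cost grows. *)
Ltac add_hyps n :=
  lazymatch n with
  | O => idtac
  | S ?m => let H := fresh "H" in pose proof (eq_refl : m = m) as H; add_hyps m
  end.

(* [assumption] scans the whole context before failing, so the timed step's
   cost should grow with the number of hypotheses. *)
Goal True. add_hyps 50.  Time first [ assumption | exact I ]. Qed.
Goal True. add_hyps 100. Time first [ assumption | exact I ]. Qed.
Goal True. add_hyps 200. Time first [ assumption | exact I ]. Qed.
```

Plotting the timings against the scaling factor would make quadratic (or worse) behavior obvious, which is the main thing these benchmarks are good for.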
I compared Perennial master with upgraded Iris and stdpp, using a recent Coq master build on OCaml 4.12. This is the difference between 4af744c and adb17d5. Before, the build took 10964.3s; after, it took 10545.2s, a 4% speedup in total work. The wall-clock time with make -j4 went from 63.12min to 60.73min, a similar 4% speedup. These numbers are from my laptop with Turbo Boost disabled for better stability.
On a desktop machine running Coq 8.13.2 (OCaml 4.12), the build went from 12222s to 11143s, a more significant 10% speedup.
All in all these are pretty good and certainly appreciated!