rlimits: Machine-Independent Resource Limits for Deterministic Execution
For a long time F* used timeouts to ask Z3 to stop looking for a proof at some point and just fail. That was very brittle, though, since it was heavily influenced by the speed of the machine, the current workload, power-saving features, etc., leading to very unpredictable outcomes.
We are now doing better by using Z3's support for machine-independent resource limits (rlimits). Instead of counting wall-clock time, rlimits count the number of certain basic operations in Z3 (like allocations). rlimits ensure fully deterministic execution irrespective of machine, workload, etc.
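For reference, the same mechanism can be exercised on Z3 directly. A minimal sketch, assuming Z3's command-line `rlimit` parameter and a hypothetical query file:

```sh
# Run Z3 with a raw resource limit instead of a wall-clock timeout.
# The limit counts Z3's internal resource units, not seconds;
# query.smt2 is a hypothetical file name.
z3 rlimit=5000000 query.smt2
```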
You can set the rlimit using the new `--z3rlimit n` F* flag, where `n` is a natural number intuitively counting the maximum number of seconds allowed for each query when running on a mythical powerful laptop (a quad-core Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz with 16GB RAM). On a strong workstation things will be much faster, on a smart watch much slower, but they will always be deterministic.
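For example, a minimal command-line sketch (the module name and the budget of 20 are just placeholders):

```sh
# Give each Z3 query a budget of roughly "20 seconds on the reference laptop".
fstar.exe --z3rlimit 20 MyModule.fst
```

The option can also be set from within a file using an options pragma such as `#set-options "--z3rlimit 20"`, if you prefer to keep the budget next to the code it applies to.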
Please note that while the correspondence between rlimit and time is quite good, it's not perfect. The following graph plots time against raw Z3 rlimit, before the normalization we do to try to emulate "seconds on a mythical powerful laptop":
In particular, if you're porting code from the old `--z3timeout n` to `--z3rlimit n`, things will work just fine most of the time, but sometimes you will need to increase the `n` quite a bit even if things don't take very long.
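As a sketch of such a port (hypothetical module name; the numbers are illustrative):

```sh
# Before: a machine-dependent 5-second wall-clock timeout per query.
fstar.exe --z3timeout 5 MyModule.fst

# After: a deterministic resource limit; starting from the same number
# is reasonable, but some queries may need a noticeably larger budget.
fstar.exe --z3rlimit 15 MyModule.fst
```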
One last thing to keep in mind is that, while rlimits make execution deterministic, they do not completely address the more general robustness problems that come from using Z3 to solve really hard problems. It is still the case that any change to the queries we feed to Z3 can cause unpredictable verification outcomes. Proofs that used to work quickly can stop working, or start requiring more time and/or a higher rlimit, even for simple changes like renaming a variable in a completely unrelated part of the code. This is an extreme example, but it does happen sometimes, probably more often than you might think.
In a sense, rlimits make this problem slightly worse: the unpredictable nature of timeouts made it clearer which proofs were flakier than others, whereas now everything is deterministic until the input file changes. So before overfitting your rlimits to the lowest possible numbers that make things work right now, you should consider running some experiments, e.g. with different Z3 random seeds, to make sure that you're not causing unnecessary flakiness that will likely be exposed by future changes.
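One such experiment, sketched below under the assumption that the `--z3seed` option is used to vary Z3's random seed (module name and values are illustrative):

```sh
# Re-verify the same file with several Z3 random seeds; if some seeds
# blow the rlimit, the proof is probably too close to its budget.
for seed in 1 2 3 4 5; do
  fstar.exe --z3seed $seed --z3rlimit 20 MyModule.fst
done
```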