Problem
While investigating #6786 I added some custom statistics to the code to see the "big picture" of what is going on, looking at aggregate numbers to guide me towards where the changes in behaviour are coming from. For example, sometimes it was the number of instructions in an SSA pass, other times the number of instructions generated due to the different ratio of signed/unsigned values used by the passes.
The example was too big to inspect the full SSA passes or the ACIR directly (e.g. hundreds of thousands of instructions, a million opcodes), but the high-level stats were helpful for comparing what is different between two commits. NB I could not use nargo info because the example was a contract and info only looks for binary packages.
I thought we might want to collect statistics using one of the metrics libraries such as this one, with an option to export them at the end of the compilation, or perhaps between SSA passes as well. Prometheus even allows pushing metrics from batch processes, in case we would want to set up something to monitor the compilation on CI.
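As a rough illustration of what that export could look like, here is a minimal sketch using the prometheus crate's text encoder; the metric name acir_opcodes_total and the counter value are made up, and a push-gateway setup would hand over the same gathered registry instead of printing it:

use prometheus::{Encoder, IntCounter, Registry, TextEncoder};

fn main() {
    // Hypothetical metric: total ACIR opcodes emitted during one compilation.
    let registry = Registry::new();
    let opcodes = IntCounter::new("acir_opcodes_total", "ACIR opcodes emitted").unwrap();
    registry.register(Box::new(opcodes.clone())).unwrap();

    // ... the compiler would bump this wherever opcodes are generated ...
    opcodes.inc_by(123);

    // At the end of the compilation, dump everything in the Prometheus text format.
    let mut buffer = Vec::new();
    TextEncoder::new().encode(&registry.gather(), &mut buffer).unwrap();
    println!("{}", String::from_utf8(buffer).unwrap());
}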
Happy Case
We could have something like a --show-metrics CLI argument that installs a backend to collect metrics (which would otherwise be ignored), and then record them anywhere we think is interesting to keep track of, e.g. the number of range constraints added for this or that reason. Then we could use some diff tool to compare how this summary of compiling the same program changed from one commit to the next.
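To sketch the overall shape (this is a hand-rolled stand-in with made-up names, not the metrics crate API), the flag could gate whether anything is recorded at all, and the summary could be printed in sorted order so that two runs are easy to diff:

use std::collections::BTreeMap;
use std::sync::{Mutex, OnceLock};

static COUNTERS: OnceLock<Mutex<BTreeMap<String, u64>>> = OnceLock::new();
static ENABLED: OnceLock<bool> = OnceLock::new();

// Called from anywhere in the compiler that wants to record something interesting.
fn increment(name: &str, delta: u64) {
    if *ENABLED.get().unwrap_or(&false) {
        let mut counters = COUNTERS.get_or_init(|| Mutex::new(BTreeMap::new())).lock().unwrap();
        *counters.entry(name.to_string()).or_default() += delta;
    }
}

// BTreeMap keeps the keys sorted, so the summary is stable and diff-friendly between commits.
fn print_summary() {
    let counters = COUNTERS.get_or_init(|| Mutex::new(BTreeMap::new())).lock().unwrap();
    for (name, value) in counters.iter() {
        println!("{name} = {value}");
    }
}

fn main() {
    let show_metrics = std::env::args().any(|arg| arg == "--show-metrics");
    ENABLED.set(show_metrics).unwrap();

    // ... run the compilation, calling increment wherever something interesting happens ...
    increment("convert_ssa_instruction.binary", 3);
    increment("range_constraints.signed_division", 1);

    if show_metrics {
        print_summary();
    }
}

A real implementation would presumably just install (or not install) a metrics recorder behind the flag; the important part is the stable, sorted summary that can be diffed between commits.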
In the code we could collect metrics like this:
fn convert_ssa_instruction(
    &mut self,
    instruction_id: InstructionId,
    dfg: &DataFlowGraph,
) -> Result<Vec<SsaReport>, RuntimeError> {
    let before = self.acir_context.acir_ir.opcodes().len();
    let mut tag = None;
    match &dfg[instruction_id] {
        Instruction::Binary(binary) => {
            tag = Some("binary");
            let result_acir_var = self.convert_ssa_binary(binary, dfg)?;
            self.define_result_var(dfg, instruction_id, result_acir_var);
        }
        Instruction::Constrain(lhs, rhs, assert_message) => {
            ...
        }
        ...
    };
    if let Some(tag) = tag {
        let after = self.acir_context.acir_ir.opcodes().len();
        counter!(format!("convert_ssa_instruction.{tag}")).increment(after - before);
    }
    ...
}
There are other ways, like using the tracing library and subscribing to do aggregations, or deriving metrics based on a struct for type safety; here's another derivation example.
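For the tracing route, a minimal sketch could be a Layer that counts events by target and prints the totals at the end; the target and event used below are made up for illustration:

use std::collections::BTreeMap;
use std::sync::{Arc, Mutex};
use tracing::Subscriber;
use tracing_subscriber::layer::{Context, Layer};
use tracing_subscriber::prelude::*;

#[derive(Clone, Default)]
struct EventCounter {
    counts: Arc<Mutex<BTreeMap<String, u64>>>,
}

impl<S: Subscriber> Layer<S> for EventCounter {
    fn on_event(&self, event: &tracing::Event<'_>, _ctx: Context<'_, S>) {
        // Aggregate by target (the module path by default); event fields could give finer keys.
        let key = event.metadata().target().to_string();
        *self.counts.lock().unwrap().entry(key).or_default() += 1;
    }
}

fn main() {
    let counter = EventCounter::default();
    tracing_subscriber::registry().with(counter.clone()).init();

    // ... the compiler would emit events like this wherever something interesting happens ...
    tracing::info!(target: "acir_gen::binary", "converted binary instruction");

    for (target, count) in counter.counts.lock().unwrap().iter() {
        println!("{target} = {count}");
    }
}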
Workaround
Yes
Workaround Description
Edit the Rust code to add ad-hoc counters and prints, then execute with a command such as this:
Additional Context
No response
Project Impact
None
Blocker Context
No response
Would you like to submit a PR for this Issue?
None
Support Needs
No response