docs: add more info about running and writing benchmarks
ankush authored Dec 19, 2024
1 parent aba3a90 commit ca89fe2
Showing 1 changed file with 14 additions and 1 deletion: README.md
@@ -11,7 +11,7 @@
Approximately, this boils down to:
- Optimize EVERYTHING. Every 0.1% on the critical path counts.
- Make deployments resource efficient by tuning various knobs.

### Running Microbenchmarks

This project uses [pyperf](https://pyperf.readthedocs.io/) to write its microbenchmarks. Follow these steps to run them:
1. Install the app as usual: `bench get-app caffeine`
@@ -46,6 +46,19 @@
This should get you roughly +/- 1% standard deviation results.

You can read this post for a long-form explanation: https://ankush.dev/p/reliable-benchmarking
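
For reference, the overall pyperf workflow looks roughly like the sketch below. The `pyperf` subcommands (`system tune`, `compare_to`) are standard pyperf CLI; the benchmark file name and the exact way scripts are launched inside a bench environment are illustrative assumptions, not this repo's documented commands.

```sh
# Quiet the system (CPU frequency scaling and other sources of jitter)
# before measuring. This is what gets you toward +/- 1% runs.
sudo python -m pyperf system tune

# Run one benchmark file and save results as JSON
# (bench_document.py is an illustrative file name).
python bench_document.py -o baseline.json

# ...apply your optimization, then run again...
python bench_document.py -o patched.json

# Compare the two runs; pyperf reports whether the difference is
# statistically significant, not just the difference in means.
python -m pyperf compare_to baseline.json patched.json
```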


### Writing Microbenchmarks

1. Find the appropriate `bench_{module}.py` file.
2. Add a new function with the `bench_` prefix; the function body is your benchmark.
3. If you need to measure something very small (<1ms), use the `NanoBenchmark` class instead of a function-based benchmark.
4. Be very careful about how you write a benchmark; ensure that it _actually_ measures what you want to measure. For example, if you want to measure the performance of `frappe.get_cached_doc` when it fetches data from Redis, you need to ensure that it's not just returning a locally cached document (see the sketch after this list).
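
To make point 4 concrete, here is a minimal sketch of such a benchmark. `frappe.get_cached_doc` is the real framework call; `_evict_local_document_cache` is a hypothetical stand-in for whatever cache-busting call your framework version actually provides.

```python
import frappe

def _evict_local_document_cache():
    # Hypothetical helper: drop process-local document copies so the next
    # frappe.get_cached_doc call has to go out to Redis. Substitute the
    # real cache-busting call for your framework version.
    frappe.local.document_cache = {}

def bench_get_cached_doc():
    # The function body is what gets timed. Note that the eviction cost is
    # measured too; keep it tiny, or measure it separately and subtract.
    _evict_local_document_cache()
    frappe.get_cached_doc("User", "Administrator")
```

Without the eviction step, every iteration after the first would be served from the process-local cache, and the benchmark would be measuring a dictionary lookup rather than the Redis round trip.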


### E2E Load Testing

`todo!()`

### Contributing

At present, this repo is not accepting any external contributions.
