
Bump pytest-benchmark from 4.0.0 to 5.1.0 in /tools/deps #5486

Open
wants to merge 1 commit into base: master
Conversation

dependabot[bot]
Contributor

@dependabot dependabot bot commented on behalf of github Dec 2, 2024

Bumps pytest-benchmark from 4.0.0 to 5.1.0.

Changelog

Sourced from pytest-benchmark's changelog.

5.1.0 (2024-10-30)

  • Fixed broken hooks handling on pytest 8.1 or later (the TypeError: import_path() missing 1 required keyword-only argument: 'consider_namespace_packages' issue). Unfortunately this sets the minimum supported pytest version to 8.1.

5.0.1 (2024-10-30)

  • Fixed bad fixture check that broke down when [nbmake](https://pypi.org/project/nbmake/) was enabled.

5.0.0 (2024-10-29)

  • Dropped support for the now-EOL Python 3.8. Also moved the test suite to only test the latest pytest versions (8.3.x).

  • Fixed errors when generating CSV reports for parametrized benchmark tests (issue [#268](https://github.com/ionelmc/pytest-benchmark/issues/268)). Contributed by Johnny Huang in [#269](https://github.com/ionelmc/pytest-benchmark/pull/269).

  • Added the --benchmark-time-unit CLI option for overriding the measurement unit used for display. Contributed by Tony Kuo in [#257](https://github.com/ionelmc/pytest-benchmark/pull/257).

  • Fixed spelling in some help texts. Contributed by Eugeniy in [#267](https://github.com/ionelmc/pytest-benchmark/pull/267).

  • Added new cprofile options:

    • --benchmark-cprofile-loops=LOOPS - previously profiling only ran the function once; this allows customization.
    • --benchmark-cprofile-top=COUNT - allows showing more rows.
    • --benchmark-cprofile-dump=[FILENAME-PREFIX] - allows saving to a file that you can load in [snakeviz](https://pypi.org/project/snakeviz/), [RunSnakeRun](https://pypi.org/project/RunSnakeRun/), or other tools.
  • Removed hidden dependency on [py.path](https://pypi.org/project/py/) (replaced with pathlib).

Commits
  • fcc60e0 Bump version: 5.0.1 → 5.1.0
  • 8dfeeec Fix broken hooks on pytest 8.1+ (and set that as the min supported version). ...
  • 41302ed Bump version: 5.0.0 → 5.0.1
  • 3f55df2 Update changelog.
  • 6318314 Fix bad fixture check. Close #271.
  • 614df44 Bump version: 4.0.0 → 5.0.0
  • 65fc262 Make it a major (since it drops py3.8).
  • 0c32498 Remove stray stuff.
  • e1dcd91 Add new cprofile options.
  • 61229eb Replace all uses of py.path with pathlib.
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Summary by Sourcery

Build:

  • Update pytest-benchmark from version 4.0.0 to 5.1.0 in the requirements file.

Bumps [pytest-benchmark](https://github.com/ionelmc/pytest-benchmark) from 4.0.0 to 5.1.0.
- [Changelog](https://github.com/ionelmc/pytest-benchmark/blob/master/CHANGELOG.rst)
- [Commits](ionelmc/pytest-benchmark@v4.0.0...v5.1.0)

---
updated-dependencies:
- dependency-name: pytest-benchmark
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <[email protected]>

sourcery-ai bot commented Dec 2, 2024

Reviewer's Guide by Sourcery

This PR updates the pytest-benchmark dependency from version 4.0.0 to 5.1.0. The update includes several new features, bug fixes, and breaking changes, notably dropping support for Python 3.8 and requiring pytest 8.1 or later.

No diagrams generated as the changes look simple and do not need a visual representation.

File-Level Changes

Change: Update pytest-benchmark dependency with breaking changes and new features

Details:
  • Drop support for Python 3.8
  • Set minimum supported pytest version to 8.1
  • Fix parametrized-test benchmark CSV report errors
  • Add new --benchmark-time-unit CLI option for overriding the displayed measurement unit
  • Add new cprofile options for customizing profiling behavior
  • Replace py.path dependency with pathlib
  • Fix broken hooks handling on pytest 8.1+
  • Fix fixture check issue affecting nbmake compatibility

Files: tools/deps/requirements-bench.txt
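The change itself is a single version-pin bump. Assuming the file uses standard `==` pins (the exact pin format in tools/deps/requirements-bench.txt is not shown in this PR), it would look roughly like:

```diff
 # tools/deps/requirements-bench.txt
-pytest-benchmark==4.0.0
+pytest-benchmark==5.1.0
```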

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time. You can also use
    this command to specify where the summary should be inserted.

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


@sourcery-ai sourcery-ai bot left a comment


We have skipped reviewing this pull request. It seems to have been created by a bot (hey, dependabot[bot]!). We assume it knows what it's doing!
