feat: check and see if we can run on python 3.13 #4663

Open
terriko opened this issue Dec 30, 2024 · 4 comments · May be fixed by #4668
Labels: enhancement (New feature or request), good first issue (Good for newcomers)

Comments

@terriko (Contributor) commented Dec 30, 2024

We're working on turning off python 3.8 support but haven't officially started testing on python 3.13. Usually there's a bit of lag while we wait for all of our dependencies to support any new version of python, but it's been a few months so it's probably safe to start trying.

So, if someone's got time, it would be helpful to install python 3.13 and see what works and what doesn't:

  • do all of our dependencies in requirements.txt install on python 3.13?
  • do all of our dependencies in dev-requirements.txt install on python 3.13?
  • do any tests fail on python 3.13?
  • does anything else fail on python 3.13?
  • do we get any new warning/error messages that we need to fix before officially declaring that we have support?

If all of those things look good, we'll need to enable 3.13 testing in some CI jobs and list it as a supported version in setup.py, but we may also need to do some (hopefully minor!) fixes before we get to that point.
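
For reference, here's a rough sketch of what listing 3.13 as supported might look like in setup.py, assuming we declare versions via classifiers and python_requires (the real file may be organized differently):

# Hypothetical setup.py fragment: advertises python 3.13 as supported.
# The actual setup.py may differ; this is just the general shape.
from setuptools import setup, find_packages

setup(
    name="cve-bin-tool",
    packages=find_packages(),
    python_requires=">=3.9",  # assumes 3.8 support has been dropped
    classifiers=[
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
        "Programming Language :: Python :: 3.12",
        "Programming Language :: Python :: 3.13",  # the new entry
    ],
)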

@terriko added the enhancement (New feature or request) and good first issue (Good for newcomers) labels on Dec 30, 2024
@Prtm2110 commented Jan 1, 2025

Hello, I have created two environments, one with python 3.11 and another with python 3.13.

  • do all of our dependencies in requirements.txt install on python 3.13?

Yes. Here are the differences:

cryptography:
  3.11: cryptography-43.0.3
  3.13: cryptography-44.0.0

gsutil:
  3.11: gsutil-5.33
  3.13: gsutil-5.31

pyOpenSSL:
  3.11: pyOpenSSL-24.2.1
  3.13: pyOpenSSL-24.3.0

  • do all of our dependencies in dev-requirements.txt install on python 3.13?

Yes, there are no version differences

  • do any tests fail on python 3.13?

For some reason, the following tests are failing on my system on both 3.11 and 3.13:

========================================================================================================= short test summary info ==========================================================================================================
FAILED test/test_csv2cve.py::TestCSV2CVE::test_csv2cve_valid_file - AssertionError: assert ('cve_bin_tool', 20, 'There are 2 products with known CVEs detected') in []
FAILED test/test_extractor.py::TestExtractFileRpm::test_bad_files - struct.error: unpack requires a buffer of 96 bytes
FAILED test/test_extractor.py::TestExtractFileRpmWithZstd::test_bad_files - struct.error: unpack requires a buffer of 96 bytes
FAILED test/test_merge.py::TestMergeReports::test_missing_fields[filepaths0-missing_fields0] - IndexError: list index out of range
FAILED test/test_merge.py::TestMergeReports::test_missing_fields[filepaths1-missing_fields1] - IndexError: list index out of range
FAILED test/test_output_engine.py::TestOutputEngine::test_output_file - AssertionError: no logs of level INFO or higher triggered on root
FAILED test/test_output_engine.py::TestOutputEngine::test_output_file_filename_already_exists - AssertionError: no logs of level INFO or higher triggered on root
FAILED test/test_output_engine.py::TestOutputEngine::test_output_file_incorrect_filename - AssertionError: no logs of level INFO or higher triggered on root
FAILED test/test_output_engine.py::TestOutputEngine::test_output_file_wrapper - AssertionError: no logs of level INFO or higher triggered on root
FAILED test/test_requirements.py::test_requirements - importlib.metadata.PackageNotFoundError: No package metadata was found for pillow
FAILED test/test_scanner.py::TestScanner::test_cannot_open_file - AssertionError: assert 0
FAILED test/test_source_redhat.py::TestSourceRedHat::test_update_cve_entries - RuntimeError: This event loop is already running
FAILED test/test_source_redhat.py::TestSourceRedHat::test_incremental_update_cve_entries - RuntimeError: This event loop is already running
FAILED test/test_source_redhat.py::TestSourceRedHat::test_format_data - RuntimeError: This event loop is already running
FAILED test/test_strings.py::TestStrings::test_curl_7_34_0 - RuntimeError: This event loop is already running
FAILED test/test_strings.py::TestStrings::test_kerberos_1_15_1 - RuntimeError: This event loop is already running
FAILED test/test_version.py::TestVersion::test_different_version[-1] - RuntimeError: This event loop is already running
FAILED test/test_version.py::TestVersion::test_different_version[1] - RuntimeError: This event loop is already running
FAILED test/test_version.py::TestVersion::test_exception - RuntimeError: This event loop is already running
ERROR test/test_html.py::TestOutputHTML::test_interactive_mode_print_mode_switching[chromium] - playwright._impl._errors.Error: BrowserType.launch: Executable doesn't exist at /home/xoein/.cache/ms-playwright/chromium_headless_shell-1148/chrome-linux/headless_shell
ERROR test/test_html.py::TestOutputHTML::test_modal_switching[chromium] - playwright._impl._errors.Error: BrowserType.launch: Executable doesn't exist at /home/xoein/.cache/ms-playwright/chromium_headless_shell-1148/chrome-linux/headless_shell
ERROR test/test_html.py::TestOutputHTML::test_product_search[chromium] - playwright._impl._errors.Error: BrowserType.launch: Executable doesn't exist at /home/xoein/.cache/ms-playwright/chromium_headless_shell-1148/chrome-linux/headless_shell
ERROR test/test_html.py::TestOutputHTML::test_product_remark_filter[chromium] - playwright._impl._errors.Error: BrowserType.launch: Executable doesn't exist at /home/xoein/.cache/ms-playwright/chromium_headless_shell-1148/chrome-linux/headless_shell
ERROR test/test_html.py::TestOutputHTML::test_cve_summary_table[chromium] - playwright._impl._errors.Error: BrowserType.launch: Executable doesn't exist at /home/xoein/.cache/ms-playwright/chromium_headless_shell-1148/chrome-linux/headless_shell
ERROR test/test_html.py::TestOutputHTML::test_cve_remarks_table[chromium] - playwright._impl._errors.Error: BrowserType.launch: Executable doesn't exist at /home/xoein/.cache/ms-playwright/chromium_headless_shell-1148/chrome-linux/headless_shell
ERROR test/test_html.py::TestOutputHTML::test_empty_cve_list[chromium] - playwright._impl._errors.Error: BrowserType.launch: Executable doesn't exist at /home/xoein/.cache/ms-playwright/chromium_headless_shell-1148/chrome-linux/headless_shell
ERROR test/test_html.py::TestOutputHTML::test_unknown_cve_number[chromium] - playwright._impl._errors.Error: BrowserType.launch: Executable doesn't exist at /home/xoein/.cache/ms-playwright/chromium_headless_shell-1148/chrome-linux/headless_shell
================================================================================== 19 failed, 278 passed, 1860 skipped, 2 warnings in 258.31s (0:04:18) ===================================================================================
  • does anything else fail on python 3.13?

No.

  • do we get any new warning/error messages that we need to fix before officially declaring that we have support?

On running python3 -m pip install -e . on both 3.11 and 3.13, I got this deprecation warning:

DEPRECATION: Legacy editable install of cve-bin-tool==3.4 from file:///home/xoein/Desktop/CVE-PSF/cve-bin-tool (setup.py develop) is deprecated. pip 25.0 will enforce this behaviour change. A possible replacement is to add a pyproject.toml or enable --use-pep517, and use setuptools >= 64. If the resulting installation is not behaving as expected, try using --config-settings editable_mode=compat. Please consult the setuptools documentation for more information. Discussion can be found at https://github.com/pypa/pip/issues/11457

@Prtm2110 commented Jan 1, 2025

I noticed that a significant number of tests are being skipped. Is this behaviour intentional, or could it indicate a configuration issue on my end?

I also resolved the above Playwright-related errors by installing Playwright as required, and those errors no longer appear. However, 19 tests are still failing.
Could you please confirm if the skipped tests are expected?

@terriko (Contributor, Author) commented Jan 2, 2025

Skipped tests are expected; we run a lot of tests only in long test mode, so there should be something like 1200 skipped tests if you don't have long tests enabled. I forget how many other skipped tests we have outside of those, but probably a dozen or so? You can check against what we're running in GitHub Actions if you want. The tests that are skipped for a non-longtest reason should be marked with explanations as to why they're being skipped if you run pytest in a more verbose mode.

You can enable long tests by setting LONG_TESTS=1. I usually do this all on one line, like so, on Linux:

LONG_TESTS=1 pytest -n auto

You can also set it in your environment if you want to run long tests all the time, but some of those tests include removing your entire database cache and running NVD updates, so I don't recommend that as a default unless you're actually working on the NVD related code.
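
If it helps to see the pattern, a gate like that typically looks something like this in a pytest suite (a generic sketch; our actual markers and fixtures may be wired up differently):

# Generic illustration of skipping a test unless LONG_TESTS=1 is set.
# cve-bin-tool's real long-test markers may be implemented differently.
import os

import pytest

LONG_TESTS_ENABLED = os.getenv("LONG_TESTS", "0") == "1"

@pytest.mark.skipif(
    not LONG_TESTS_ENABLED, reason="long tests only run when LONG_TESTS=1"
)
def test_hypothetical_long_running_case():
    # Placeholder body for a slow test (e.g. a full data download).
    assert True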

@Prtm2110 commented Jan 3, 2025

Thank you for your response!

LONG_TESTS=1 pytest -n auto

Upon trying this on both environments, I got something like 570 failures, and most of them are runtime errors:

FAILED cve-bin-tool/test/test_scanner.py::TestScanner::test_version_in_package[https://downloads.openwrt.org/releases/packages-19.07/x86_64/packages/-libzstd_1.4.5-2_x86_64.ipk-zstandard-1.4.5-other_products1200] - RuntimeError: This event loop is already running

How can I fix this? I’ll put up a PR so we can check it directly against GitHub checks for 3.13. That should make things easier.
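
From what I can tell, this error generally appears when run_until_complete() is called on a loop that is already running; here is a minimal, generic reproduction (not necessarily the code path our tests hit):

# Generic reproduction of "RuntimeError: This event loop is already running".
# This is only an illustration of the failure mode, not cve-bin-tool code.
import asyncio

async def main():
    loop = asyncio.get_event_loop()
    coro = asyncio.sleep(0)
    try:
        # Re-entering the already-running loop raises the RuntimeError above.
        loop.run_until_complete(coro)
    except RuntimeError as err:
        print(err)  # This event loop is already running
        coro.close()  # avoid a "coroutine was never awaited" warning

asyncio.run(main())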

You can also set it in your environment if you want to run long tests all the time, but some of those tests include removing your entire database cache and running NVD updates, so I don't recommend that as a default unless you're actually working on the NVD related code.

Thanks, I shall try.

@Prtm2110 linked a pull request on Jan 3, 2025 that will close this issue