Update test_clip_tokens.py #1419

Closed · wants to merge 1 commit into from
Conversation

mrT23 (Collaborator) commented Dec 25, 2024

PR Type

Tests


Description

  • Fixed a bug in the test case for the clip_tokens function, where the assertion incorrectly checked whether the clipped result equals the original text
  • The test now verifies that the clipped text differs from the original text when token limits are applied

Changes walkthrough 📝

Relevant files

Tests
  • test_clip_tokens.py (tests/unittest/test_clip_tokens.py): Fix incorrect assertion in clip tokens test
    • Modified assertion in test_clip method to expect result to be different from input text
    • +1/-1

💡 PR-Agent usage: Comment /help "your question" on any pull request to receive relevant information


    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪
    🏅 Score: 95
    🧪 PR contains tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Test Coverage

    Consider adding test cases for edge cases such as an empty-string input or max_tokens=0 to improve test coverage (a sketch of such tests follows the snippet below)

    def test_clip(self):
        text = "line1\nline2\nline3\nline4\nline5\nline6"
        max_tokens = 25
        result = clip_tokens(text, max_tokens)
        assert result != text
    
        max_tokens = 10
        result = clip_tokens(text, max_tokens)
        expected_results = 'line1\nline2\nline3\n\n...(truncated)'
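
    A sketch of what such edge-case tests might look like. The expected behaviors (an empty string coming back unchanged, max_tokens=0 not raising) are assumptions about clip_tokens rather than documented guarantees, the import path is inferred from the repository layout, and the class name TestClipTokensEdgeCases is just for illustration; verify all of these against the implementation before adopting:

    from pr_agent.algo.utils import clip_tokens  # import path is an assumption


    class TestClipTokensEdgeCases:
        def test_empty_string(self):
            # An empty input has nothing to clip; expecting it back unchanged
            # is an assumption, not a documented guarantee of clip_tokens.
            assert clip_tokens("", 10) == ""

        def test_zero_max_tokens(self):
            # The exact result for max_tokens=0 depends on the implementation
            # (empty string, a "...(truncated)" marker, or the original text),
            # so only assert that the call succeeds and returns a string.
            result = clip_tokens("line1\nline2", 0)
            assert isinstance(result, str)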


    PR Code Suggestions ✨

    Explore these optional code suggestions:

    Category: Possible issue

    Test case should verify that input text within the token limit remains unmodified rather than asserting inequality

    The assertion result != text is too broad and may miss edge cases. Instead, verify
    the specific behavior of clip_tokens() when the input text is within the token
    limit.

    tests/unittest/test_clip_tokens.py [11-14]

     text = "line1\nline2\nline3\nline4\nline5\nline6"
     max_tokens = 25
     result = clip_tokens(text, max_tokens)
    -assert result != text
    +assert result == text, "Text within token limit should remain unchanged"
    Suggestion importance[1-10]: 10

    Why: The suggestion correctly identifies a critical issue where the test case is asserting the wrong behavior. When text is within token limits, clip_tokens() should return the original text unchanged, not modify it. This is a significant bug in the test logic.

    • Author self-review: I have reviewed the PR code suggestions, and addressed the relevant ones.

    qodo-merge-pro-for-open-source bot (Contributor) commented Dec 25, 2024

    CI Failure Feedback 🧐

    (Checks updated until commit b073fb1)

    Action: build-and-test

    Failed stage: Test dev docker [❌]

    Failed test name: TestClipTokens.test_clip

    Failure summary:

    The test test_clip in TestClipTokens class failed because the clip_tokens function did not modify
    the input text as expected:

  • The function was supposed to clip/truncate the text when given a max_tokens limit of 25
  • However, it returned the exact same text as the input ("line1\nline2\nline3\nline4\nline5\nline6")
  • The test assertion assert result != text failed because the function didn't perform any clipping

  • Relevant error logs:
    1:  ##[group]Operating System
    2:  Ubuntu
    ...
    
    1057:  tests/unittest/test_find_line_number_of_relevant_line_in_file.py ......  [ 62%]
    1058:  tests/unittest/test_fix_output.py ....                                   [ 67%]
    1059:  tests/unittest/test_github_action_output.py ....                         [ 72%]
    1060:  tests/unittest/test_handle_patch_deletions.py ....                       [ 76%]
    1061:  tests/unittest/test_language_handler.py ......                           [ 83%]
    1062:  tests/unittest/test_load_yaml.py ...                                     [ 87%]
    1063:  tests/unittest/test_parse_code_suggestion.py ....                        [ 91%]
    1064:  tests/unittest/test_try_fix_yaml.py .......                              [100%]
    1065:  =================================== FAILURES ===================================
    1066:  ___________________________ TestClipTokens.test_clip ___________________________
    1067:  self = <test_clip_tokens.TestClipTokens object at 0x7f0f2d809280>
    1068:  def test_clip(self):
    1069:  text = "line1\nline2\nline3\nline4\nline5\nline6"
    1070:  max_tokens = 25
    1071:  result = clip_tokens(text, max_tokens)
    1072:  >       assert result != text
    1073:  E       AssertionError: assert 'line1\nline2\nline3\nline4\nline5\nline6' != 'line1\nline2\nline3\nline4\nline5\nline6'
    1074:  tests/unittest/test_clip_tokens.py:14: AssertionError
    ...
    
    1102:  ../usr/local/lib/python3.12/site-packages/pkg_resources/__init__.py:2317
    1103:  /usr/local/lib/python3.12/site-packages/pkg_resources/__init__.py:2317: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('azure')`.
    1104:  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    1105:  declare_namespace(parent)
    1106:  ../usr/local/lib/python3.12/site-packages/starlette/formparsers.py:12
    1107:  /usr/local/lib/python3.12/site-packages/starlette/formparsers.py:12: PendingDeprecationWarning: Please use `import python_multipart` instead.
    1108:  import multipart
    1109:  ../usr/local/lib/python3.12/site-packages/pydantic/_internal/_config.py:291
    1110:  /usr/local/lib/python3.12/site-packages/pydantic/_internal/_config.py:291: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/
    ...
    
    1182:  pr_agent/tools/pr_reviewer.py                                         235    235     0%
    1183:  pr_agent/tools/pr_similar_issue.py                                    363    363     0%
    1184:  pr_agent/tools/pr_update_changelog.py                                 110    110     0%
    1185:  pr_agent/tools/ticket_pr_compliance_check.py                           80     69    14%
    1186:  ---------------------------------------------------------------------------------------
    1187:  TOTAL                                                                8922   7386    17%
    1188:  Coverage XML written to file coverage.xml
    1189:  =========================== short test summary info ============================
    1190:  FAILED tests/unittest/test_clip_tokens.py::TestClipTokens::test_clip - Assert...
    1191:  ================== 1 failed, 85 passed, 15 warnings in 11.02s ==================
    1192:  ##[error]Process completed with exit code 1.
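
    The failing assertion can also be checked locally; a minimal sketch, assuming clip_tokens is importable from pr_agent.algo.utils (the import path is an assumption based on the repository layout):

    from pr_agent.algo.utils import clip_tokens  # import path is an assumption

    text = "line1\nline2\nline3\nline4\nline5\nline6"
    result = clip_tokens(text, 25)

    # The six short lines fit within the 25-token budget, so clip_tokens
    # returns the input unchanged and the PR's new `assert result != text`
    # assertion fails, exactly as in the CI log above.
    assert result == text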
    

    ✨ CI feedback usage guide:

    The CI feedback tool (/checks) automatically triggers when a PR has a failed check.
    The tool analyzes the failed checks and provides the following feedback:

    • Failed stage
    • Failed test name
    • Failure summary
    • Relevant error logs

    In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:

    /checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
    

    where {repo_name} is the name of the repository, {run_number} is the run number of the failed check, and {job_number} is the job number of the failed check.

    Configuration options

    • enable_auto_checks_feedback - if set to true, the tool will automatically provide feedback when a check fails. Default is true.
    • excluded_checks_list - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
    • enable_help_text - if set to true, the tool will provide a help message with the feedback. Default is true.
    • persistent_comment - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
    • final_update_message - if persistent_comment is true and a previous checks message is being updated, the tool will also post a new message: "Persistent checks updated to latest commit". Default is true.
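
    These options live in the tool's TOML configuration; a minimal sketch, assuming they sit under a [checks] section (the section name and exact file layout are assumptions; check the pr-agent configuration docs before copying):

    [checks]
    enable_auto_checks_feedback = true
    excluded_checks_list = ["check1", "check2"]
    enable_help_text = true
    persistent_comment = true
    final_update_message = true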

    See more information about the checks tool in the docs.

    mrT23 closed this Dec 25, 2024