Speed up blib2to3 tokenization using startswith with a tuple (#4541)
moogician authored Dec 30, 2024
1 parent 9431e98 commit fdabd42
Showing 2 changed files with 3 additions and 2 deletions.
1 change: 1 addition & 0 deletions CHANGES.md
@@ -43,6 +43,7 @@
### Performance

<!-- Changes that improve Black's performance. -->
+- Speed up the `is_fstring_start` function in Black's tokenizer (#4541)

### Output

4 changes: 2 additions & 2 deletions src/blib2to3/pgen2/tokenize.py
@@ -221,7 +221,7 @@ def _combinations(*l: str) -> set[str]:
| {f"{prefix}'" for prefix in _strprefixes | _fstring_prefixes}
| {f'{prefix}"' for prefix in _strprefixes | _fstring_prefixes}
)
-fstring_prefix: Final = (
+fstring_prefix: Final = tuple(
{f"{prefix}'" for prefix in _fstring_prefixes}
| {f'{prefix}"' for prefix in _fstring_prefixes}
| {f"{prefix}'''" for prefix in _fstring_prefixes}
@@ -459,7 +459,7 @@ def untokenize(iterable: Iterable[TokenInfo]) -> str:


def is_fstring_start(token: str) -> bool:
-    return builtins.any(token.startswith(prefix) for prefix in fstring_prefix)
+    return token.startswith(fstring_prefix)


def _split_fstring_start_and_middle(token: str) -> tuple[str, str]:
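The change relies on the fact that `str.startswith` accepts a tuple of prefixes, so a single C-level call replaces a Python-level generator loop over every prefix. A minimal sketch of the before/after (the prefix tuple below is abbreviated for illustration and is not blib2to3's full prefix set):

```python
# Illustrative subset of f-string prefixes; blib2to3 builds the real
# tuple from all case/order combinations of f, F, r, R plus quotes.
fstring_prefix = ("f'", 'f"', "F'", 'F"', "rf'", 'rf"', "fr'", 'fr"')


def is_fstring_start_old(token: str) -> bool:
    # Before: a Python generator expression, one startswith call per prefix.
    return any(token.startswith(prefix) for prefix in fstring_prefix)


def is_fstring_start_new(token: str) -> bool:
    # After: one startswith call; the tuple is scanned in C.
    return token.startswith(fstring_prefix)


print(is_fstring_start_new("f'hello'"))  # True
print(is_fstring_start_new("'plain'"))   # False
```

Both versions return the same results; the tuple form simply avoids iterating and calling `startswith` from Python for each prefix, which matters because the tokenizer calls this check on many tokens.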