
Commit 58eee5f

[PERF] Use a faster decode path in the tokenizer: avoid a useless list-to-list conversion (#20000)
Signed-off-by: Vadim Gimpelson <[email protected]>
1 parent 067c34a commit 58eee5f

File tree: 1 file changed (+4 −3 lines)


vllm/transformers_utils/tokenizer.py

Lines changed: 4 additions & 3 deletions
@@ -50,11 +50,12 @@ def decode_tokens(
     `skip_special_tokens=None` means to use the backend's default
     settings.
     """
+    decode_method = getattr(tokenizer, "_decode", tokenizer.decode)
     if skip_special_tokens is not None:
-        return tokenizer.decode(token_ids,
-                                skip_special_tokens=skip_special_tokens)
+        return decode_method(token_ids,
+                             skip_special_tokens=skip_special_tokens)
 
-    return tokenizer.decode(token_ids)
+    return decode_method(token_ids)
 
 
 def encode_tokens(
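
For context, below is a minimal, self-contained sketch of the pattern this commit introduces, under the assumption (true for Hugging Face tokenizers) that the public decode() wrapper first normalizes token_ids (tensor/array/list to a plain Python list) before delegating to the internal _decode(); calling _decode() directly, with decode() as a fallback, skips that conversion when the input is already a list of ints. The standalone decode_tokens helper and the "gpt2" checkpoint are illustrative only, not the exact vLLM source.

# Sketch of the change: prefer the backend's internal `_decode` and fall
# back to the public `decode` for tokenizers that do not expose `_decode`.
from typing import Optional

from transformers import AutoTokenizer, PreTrainedTokenizerBase


def decode_tokens(
    tokenizer: PreTrainedTokenizerBase,
    token_ids: list[int],
    *,
    skip_special_tokens: Optional[bool] = None,
) -> str:
    # `_decode` skips the input normalization done by the public `decode`,
    # avoiding a redundant list-to-list conversion for list inputs.
    decode_method = getattr(tokenizer, "_decode", tokenizer.decode)
    if skip_special_tokens is not None:
        return decode_method(token_ids,
                             skip_special_tokens=skip_special_tokens)
    return decode_method(token_ids)


if __name__ == "__main__":
    # Illustrative usage with an arbitrary public checkpoint.
    tok = AutoTokenizer.from_pretrained("gpt2")
    ids = tok.encode("hello world")
    print(decode_tokens(tok, ids, skip_special_tokens=True))
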
