
Commit e1bf00d

clean-up print statement (#249)
1 parent 80338f2

1 file changed


model2vec/model.py

Lines changed: 0 additions & 2 deletions
@@ -130,8 +130,6 @@ def tokenize(self, sentences: Sequence[str], max_length: int | None = None) -> l
             m = max_length * self.median_token_length
             sentences = [sentence[:m] for sentence in sentences]
 
-        max_len = max([len(sentence) for sentence in sentences])
-        # self.tokenizer.model.max_input_chars_per_word = max_len + 1
         if self._can_encode_fast:
             encodings: list[Encoding] = self.tokenizer.encode_batch_fast(sentences, add_special_tokens=False)
         else:
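
The two deleted lines were dead code: max_len was consumed only by the already commented-out max_input_chars_per_word assignment, so dropping both leaves tokenization behavior unchanged. For context, here is a minimal sketch of the surviving logic as a standalone function, assuming the hunk comes from model2vec's tokenize method; the free-function signature, the tokenize_sketch name, and the trailing return are illustrative, not the library's exact API:

from tokenizers import Encoding, Tokenizer


def tokenize_sketch(
    tokenizer: Tokenizer,
    sentences: list[str],
    median_token_length: int,
    max_length: int | None = None,
    can_encode_fast: bool = True,
) -> list[list[int]]:
    # Hypothetical stand-in for the method edited in model2vec/model.py;
    # parameter names mirror the attributes visible in the hunk.
    if max_length is not None:
        # Cheap character-level pre-truncation: max_length tokens are
        # estimated to span about max_length * median_token_length
        # characters, so longer strings are cut before tokenization.
        m = max_length * median_token_length
        sentences = [sentence[:m] for sentence in sentences]

    # The deleted lines computed max_len at this point and never used it.
    if can_encode_fast:
        encodings: list[Encoding] = tokenizer.encode_batch_fast(sentences, add_special_tokens=False)
    else:
        encodings = tokenizer.encode_batch(sentences, add_special_tokens=False)

    # Assumed return shape: one list of token ids per input sentence,
    # matching the truncated "-> l" return annotation in the hunk header.
    return [encoding.ids for encoding in encodings]

A caller would construct a tokenizers.Tokenizer (for example via Tokenizer.from_pretrained) and pass sentences through; in the library itself, median_token_length and _can_encode_fast are attributes of the model instance, as the self. references in the hunk show.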
