Commit 01a20d1

Remove max_length from tokenization (meta-llama#604)
1 parent c62ed77 commit 01a20d1

File tree

1 file changed: +0 additions, −1 deletion


recipes/quickstart/inference/local_inference/inference.py

Lines changed: 0 additions & 1 deletion

@@ -89,7 +89,6 @@ def inference(
 
     batch = tokenizer(
         user_prompt,
-        padding="max_length",
        truncation=True,
        max_length=max_padding_length,
        return_tensors="pt",
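The effect of dropping `padding="max_length"` can be sketched without downloading a real tokenizer. The toy `tokenize` function below is a hypothetical stand-in, not the Hugging Face API: with the old behavior every prompt is padded out to `max_length` (matching `max_padding_length` in the diff), while after this commit the encoded prompt keeps its natural length, which avoids wasted pad tokens at inference time.

```python
def tokenize(prompt_ids, max_length, pad_to_max_length, pad_id=0):
    """Toy stand-in for tokenizer(..., truncation=True, max_length=...)."""
    ids = prompt_ids[:max_length]          # truncation=True
    if pad_to_max_length:                  # padding="max_length" (pre-commit behavior)
        ids = ids + [pad_id] * (max_length - len(ids))
    return ids

prompt = [101, 2023, 2003, 102]            # a 4-token prompt

old = tokenize(prompt, max_length=8, pad_to_max_length=True)
new = tokenize(prompt, max_length=8, pad_to_max_length=False)

print(old)  # [101, 2023, 2003, 102, 0, 0, 0, 0]
print(new)  # [101, 2023, 2003, 102]
```

With `truncation=True` still in place, prompts longer than `max_padding_length` are cut as before; only the padding of short prompts changes.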

0 commit comments
