
Commit d6c3ff7

Mirza-Samad-Ahmed-Baig authored and Danztee committed
fix: max_tokens handling in generate.py (openai#70)
1 parent 44e4935 commit d6c3ff7

1 file changed (+2 −1 lines)


gpt_oss/generate.py
Lines changed: 2 additions & 1 deletion

@@ -28,7 +28,8 @@ def main(args):

     tokenizer = get_tokenizer()
     tokens = tokenizer.encode(args.prompt)
-    for token, logprob in generator.generate(tokens, stop_tokens=[tokenizer.eot_token], temperature=args.temperature, max_tokens=args.limit, return_logprobs=True):
+    max_tokens = None if args.limit == 0 else args.limit
+    for token, logprob in generator.generate(tokens, stop_tokens=[tokenizer.eot_token], temperature=args.temperature, max_tokens=max_tokens, return_logprobs=True):
         tokens.append(token)
         decoded_token = tokenizer.decode([token])
         print(
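
Why the change matters (a reading of the diff, not stated in the commit message): before this fix, args.limit was passed straight through as max_tokens, so a --limit of 0 would presumably cap generation at zero tokens rather than mean "no limit". The fix maps the 0 sentinel to None before calling generator.generate. Below is a minimal, self-contained sketch of that mapping; the --limit flag name comes from args.limit in the diff, but its default value and help text here are assumptions:

# Sketch of the sentinel-to-None mapping introduced by this commit;
# not the full generate.py.
import argparse

parser = argparse.ArgumentParser()
# Assumption: generate.py defines a --limit flag where 0 means "no cap".
parser.add_argument("--limit", type=int, default=0,
                    help="maximum tokens to generate (0 = unlimited)")
args = parser.parse_args(["--limit", "0"])

# The committed fix: translate the 0 sentinel into None so the generator
# runs until it emits a stop token instead of capping at zero tokens.
max_tokens = None if args.limit == 0 else args.limit
print(max_tokens)  # -> None, i.e. unlimited generation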
