Commit b586461

Author: AlexandruSmirnov
docs: clarify O3 models support for max_completion_tokens
- Added more detailed comments explaining that O3 models support the modern max_completion_tokens parameter
- Clarified that this allows O3 models to limit response length when includeMaxTokens is enabled
- Emphasized that max_tokens is deprecated in favor of max_completion_tokens
1 parent a03d6b6 commit b586461

File tree

1 file changed: +6 −4 lines

src/api/providers/openai.ts

Lines changed: 6 additions & 4 deletions
@@ -309,8 +309,9 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandl
 			temperature: this.options.modelTemperature ?? 0,
 		}

-		// O3 family models do not support max_tokens parameter
-		// but they do support max_completion_tokens
+		// O3 family models do not support the deprecated max_tokens parameter
+		// but they do support max_completion_tokens (the modern OpenAI parameter)
+		// This allows O3 models to limit response length when includeMaxTokens is enabled
 		this.addMaxTokensIfNeeded(requestOptions, modelInfo)

 		const stream = await this.client.chat.completions.create(
@@ -333,8 +334,9 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandl
 			temperature: this.options.modelTemperature ?? 0,
 		}

-		// O3 family models do not support max_tokens parameter
-		// but they do support max_completion_tokens
+		// O3 family models do not support the deprecated max_tokens parameter
+		// but they do support max_completion_tokens (the modern OpenAI parameter)
+		// This allows O3 models to limit response length when includeMaxTokens is enabled
 		this.addMaxTokensIfNeeded(requestOptions, modelInfo)

 		const response = await this.client.chat.completions.create(
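The commented behavior can be sketched as a small TypeScript function. This is a hypothetical illustration of what a helper like `addMaxTokensIfNeeded` might do, not the actual implementation in `openai.ts`; the interfaces and option names (`includeMaxTokens`, `modelMaxTokens`) are assumptions based on the commit message.

```typescript
// Minimal shapes standing in for the real types in the handler.
interface ModelInfo {
	maxTokens?: number
}

interface HandlerOptions {
	includeMaxTokens?: boolean
	modelMaxTokens?: number
}

interface RequestOptions {
	max_tokens?: number
	max_completion_tokens?: number
}

// Sketch: only add a token limit when includeMaxTokens is enabled, and
// always use the modern max_completion_tokens parameter, since the
// deprecated max_tokens is rejected by O3 family models.
function addMaxTokensIfNeeded(
	requestOptions: RequestOptions,
	modelInfo: ModelInfo,
	options: HandlerOptions,
): void {
	if (!options.includeMaxTokens) return
	const limit = options.modelMaxTokens ?? modelInfo.maxTokens
	if (limit === undefined) return
	requestOptions.max_completion_tokens = limit
}
```

Always emitting `max_completion_tokens` keeps a single code path that works for both O3 and non-O3 models, which appears to be the point of the clarified comments.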
