
Implement token management for OpenAI models by introducing methods to handle max_tokens and max_completion_tokens. Refactor API request construction to streamline parameter handling, ensuring compatibility with both older and newer model versions. Enhance readability and maintainability of the code. #189

Merged
veithly merged 5 commits into XSpoonAi:main from veithly:fix/openai-provider on Nov 22, 2025

Conversation

@veithly (Collaborator) commented Nov 22, 2025

No description provided.

veithly added 2 commits on August 1, 2025 at 11:15
…o handle max_tokens and max_completion_tokens. Refactor API request construction to streamline parameter handling, ensuring compatibility with both older and newer model versions. Enhance readability and maintainability of the code.
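The parameter handling this commit describes can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: the helper names here are assumptions (the review below mentions a `_max_token_kwargs` helper, mirrored here without the leading underscore), and the predicate deliberately mirrors the broad `startswith("o")` check visible in the diff.

```python
# Sketch of the described behavior (illustrative names, not the PR's code):
# newer OpenAI models (gpt-5, o-series) take "max_completion_tokens",
# while older chat-completions models take "max_tokens".

def uses_completion_token_param(model: str) -> bool:
    """Whether the model expects max_completion_tokens instead of max_tokens."""
    model_lower = (model or "").lower()
    # Broad prefix check, as in the diff under review.
    return model_lower.startswith("gpt-5") or model_lower.startswith("o")

def max_token_kwargs(model: str, limit: int) -> dict:
    """Build the token-limit portion of a chat-completions request body."""
    key = (
        "max_completion_tokens"
        if uses_completion_token_param(model)
        else "max_tokens"
    )
    return {key: limit}
```

With this, `max_token_kwargs("gpt-4-turbo", 256)` yields `{"max_tokens": 256}` while `max_token_kwargs("o1-mini", 256)` yields `{"max_completion_tokens": 256}`. Note that the broad `startswith("o")` check is exactly what the Codex review in this thread flags as problematic for OpenRouter model names.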

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +37 to +40:

```python
def _uses_completion_token_param(self, model: str) -> bool:
    """Whether this model expects max_completion_tokens instead of max_tokens."""
    model_lower = (model or "").lower()
    return model_lower.startswith("gpt-5") or model_lower.startswith("o")
```

**P1** Avoid treating OpenRouter models as `max_completion_tokens`-only

The new `_uses_completion_token_param` marks any model whose name starts with "o" as requiring `max_completion_tokens`. OpenRouter models are named like `openai/gpt-3.5-turbo`, so `_max_token_kwargs` will now send `max_completion_tokens` to the OpenRouter API instead of the `max_tokens` parameter its OpenAI-compatible endpoint expects, causing chat requests to be rejected under the default OpenRouter configuration. This predicate should be narrowed to the actual OpenAI `o*` models or made provider-aware.

Useful? React with 👍 / 👎.
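A narrowed predicate along the lines the review suggests might look like this. This is a sketch under assumptions, not the PR's final code: the regex and the exact set of matched model families (`gpt-5` plus `o` followed by a digit, e.g. `o1`, `o3-mini`) are illustrative choices.

```python
import re

# Match only genuine OpenAI model IDs that require max_completion_tokens:
# "gpt-5...", "o1...", "o3-mini", etc. Provider-prefixed IDs such as
# OpenRouter's "openai/gpt-3.5-turbo" no longer match merely because they
# happen to start with the letter "o".
_COMPLETION_TOKEN_MODEL = re.compile(r"^(gpt-5|o\d)", re.IGNORECASE)

def uses_completion_token_param(model: str) -> bool:
    """Whether the model expects max_completion_tokens instead of max_tokens."""
    return bool(_COMPLETION_TOKEN_MODEL.match(model or ""))
```

Under this check, `"gpt-5"` and `"o1-preview"` match, while `"openai/gpt-3.5-turbo"` and `"gpt-4o"` do not, avoiding the OpenRouter breakage described above.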

…ectly identify models using max_completion_tokens. Update documentation to clarify which models require this parameter, ensuring accurate handling of both new and legacy OpenAI models.
@veithly merged commit 1db2556 into XSpoonAi:main on Nov 22, 2025
1 check passed
