Conversation

Collaborator

@Yeuoly Yeuoly commented Dec 27, 2025

Motivation

  • Providers differ in behavior for validation pings, so the max_tokens used during credentials validation must be configurable.
  • Streaming validation historically relied on max_tokens=10 to ensure a token chunk is emitted, so that behavior should be preserved by default.
  • Allow explicit credential overrides for both non-stream and stream validation to support edge cases where the default is unsuitable.
  • Undo the earlier pyproject.toml version bump as requested to avoid changing the published SDK version.

Description

  • Read validate_credentials_max_tokens from credentials and use it as the ping max_tokens for non-stream validation in python/dify_plugin/interfaces/model/openai_compatible/llm.py, with a default fallback of 5.
  • For stream-mode validation, keep the historical default of max_tokens=10 but allow an override when validate_credentials_max_tokens is explicitly provided in credentials, and add an inline comment explaining the rationale.
  • Apply small code adjustments in llm.py so the configurable value is used in both validation flows (see the sketch after this list).
  • Revert the SDK version change in python/pyproject.toml as requested (the lockfile changes were not reverted here).
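
For reference, a minimal sketch of how the two fallbacks might be read from credentials (the helper names here are illustrative, not the actual code in llm.py):

    def _non_stream_ping_max_tokens(credentials: dict) -> int:
        # Non-stream validation: use the configured value, falling back to 5.
        return credentials.get("validate_credentials_max_tokens", 5) or 5

    def _stream_ping_max_tokens(credentials: dict) -> int:
        # Stream validation: keep the historical default of 10 unless explicitly
        # overridden, so a token chunk is still emitted during the validation ping.
        return credentials.get("validate_credentials_max_tokens") or 10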

Testing

  • No automated tests were executed for this change.
  • All modifications were limited to python/dify_plugin/interfaces/model/openai_compatible/llm.py and python/pyproject.toml (revert).
  • Basic repository operations (git commit) were used to record the change, and no runtime validation was performed.
  • Manual code inspection was done during the change process.

Codex Task

@gemini-code-assist
Contributor

Summary of Changes

Hello @Yeuoly, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the flexibility of credential validation for OpenAI-compatible LLMs by making the max_tokens parameter configurable. It introduces a new validate_credentials_max_tokens setting, allowing users to specify this value in their credentials for both non-streaming and streaming validation flows, while preserving the historical max_tokens=10 default for streaming to ensure token chunk emission. Additionally, it reverts an unintended SDK version bump in pyproject.toml.

Highlights

  • Configurable max_tokens for credential validation: The max_tokens parameter used during credential validation is now configurable, addressing varying provider behaviors.
  • Preserved stream default max_tokens=10: The historical default of max_tokens=10 for streaming validation is preserved to ensure a token chunk is emitted.
  • Explicit overrides for validation: Both non-stream and stream validation now allow explicit credential overrides for max_tokens to support edge cases.
  • Reverted pyproject.toml version bump: An earlier pyproject.toml version bump was undone to avoid changing the published SDK version.

@Yeuoly Yeuoly requested a review from Mairuis December 27, 2025 12:15
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces configurability for max_tokens during credential validation, which is a good improvement for provider compatibility. The changes correctly handle defaults for both streaming and non-streaming modes. My review focuses on improving the robustness of handling the max_tokens value from credentials. I've identified that the value is not being converted to an integer, which could lead to API errors if it's provided as a string. I've provided suggestions to enforce type conversion to int and ensure consistent behavior for default fallbacks. These changes will make the validation logic more resilient.


# prepare the payload for a simple ping to the model
data = {"model": credentials.get("endpoint_model_name", model), "max_tokens": 5}
validate_credentials_max_tokens = credentials.get("validate_credentials_max_tokens", 5) or 5

Severity: high

The value for max_tokens is read from credentials but it's not guaranteed to be an integer. If validate_credentials_max_tokens is a string (e.g., "10"), it will be passed as a string in the JSON payload, which will likely cause an API error. The value should be explicitly cast to int. The current logic also handles integer 0 and string "0" differently.

The suggested change ensures the value is an integer and that any falsy value (like 0, "0", or None) correctly falls back to the default of 5.

Note: This will raise a ValueError if a non-numeric string is provided. This is generally good for failing fast, but a try-except block could be used for more graceful error handling.

Suggested change
validate_credentials_max_tokens = credentials.get("validate_credentials_max_tokens", 5) or 5
validate_credentials_max_tokens = int(credentials.get("validate_credentials_max_tokens") or 0) or 5
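
For reference, a sketch of the more graceful fallback mentioned in the note above (illustrative only; raw_value is a hypothetical local name):

    raw_value = credentials.get("validate_credentials_max_tokens")
    try:
        # Cast to int; treat 0, "0", None, or a non-numeric string as "use the default".
        validate_credentials_max_tokens = int(raw_value) or 5
    except (TypeError, ValueError):
        validate_credentials_max_tokens = 5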

# ADD stream validate_credentials
stream_mode_auth = credentials.get("stream_mode_auth", "not_use")
if stream_mode_auth == "use":
    stream_validate_max_tokens = credentials.get("validate_credentials_max_tokens") or 10

Severity: high

Similar to the non-streaming case, the max_tokens value here should be converted to an integer to avoid potential API errors when the value in credentials is a string.

This change ensures the value is an int and that falsy values like 0, "0", or None correctly fall back to the default of 10, providing consistent behavior.

Note: This will raise a ValueError if a non-numeric string is provided. This is generally good for failing fast, but a try-except block could be used for more graceful error handling.

Suggested change
stream_validate_max_tokens = credentials.get("validate_credentials_max_tokens") or 10
stream_validate_max_tokens = int(credentials.get("validate_credentials_max_tokens") or 0) or 10
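
Similarly, a sketch of the graceful variant for the streaming case (illustrative only):

    try:
        # Fall back to the historical streaming default of 10 for falsy or non-numeric values.
        stream_validate_max_tokens = int(credentials.get("validate_credentials_max_tokens") or 0) or 10
    except (TypeError, ValueError):
        stream_validate_max_tokens = 10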

Collaborator

@Mairuis Mairuis left a comment

LGTM

@Yeuoly Yeuoly merged commit 1d5d11e into main Dec 29, 2025
3 checks passed
@Yeuoly Yeuoly deleted the codex/make-max_tokens-configurable-s3iofr branch December 29, 2025 05:52