Commit a2a5aa7

fix(genai): Fix outdated doc for max_output_tokens default value (#1209)
## Description

The current documentation incorrectly states that the default value of the `max_output_tokens` parameter is `64`. In reality, the default varies depending on the model. This update clarifies that the default is model-specific and adds a reference to the official Gemini API documentation for model-specific limits.

## Type

🐛 Bug Fix
1 parent c886868 commit a2a5aa7

File tree

1 file changed: +2 -1 lines changed

libs/genai/langchain_google_genai/_common.py

Lines changed: 2 additions & 1 deletion
@@ -52,7 +52,8 @@ class _BaseGoogleGenerativeAI(BaseModel):
 
     max_output_tokens: Optional[int] = Field(default=None, alias="max_tokens")
     """Maximum number of tokens to include in a candidate. Must be greater than zero.
-    If unset, will default to ``64``."""
+    If unset, will use the model's default value, which varies by model.
+    See https://ai.google.dev/gemini-api/docs/models for model-specific limits."""
 
     n: int = 1
     """Number of chat completions to generate for each prompt. Note that the API may

0 commit comments

Comments
 (0)