@dumko2001
…l logic

Addresses Issue #34007.
Fixes a bug where aliases like 'mistral:' were correctly inferred as a provider, but the prefix was not stripped from the model name, causing API 400 errors. Added logic to strip the prefix when inference succeeds.

Description
This PR resolves a logic error in init_chat_model where inferred provider aliases (specifically mistral:) were correctly identified but not stripped from the model string.

The Problem
When passing a string like mistral:ministral-8b-latest, the factory logic correctly inferred the provider as mistralai but failed to enter the string-splitting block because the alias mistral was not in the hardcoded _SUPPORTED_PROVIDERS list. This caused the raw string mistral:ministral-8b-latest to be passed to the ChatMistralAI constructor, resulting in a 400 API error.

The Fix
I updated _parse_model in libs/langchain/langchain/chat_models/base.py. The logic now attempts to infer the provider from the prefix before determining whether to split the string. This ensures that valid aliases trigger the stripping logic, passing only the clean model_name to the integration class.

Issue
Fixes #34007 (mistral alias to mistralai not computing the proper model name).

Dependencies
None.

Verification
Validated locally with a reproduction script:

  • Input: mistral:ministral-8b-latest
  • Result: Successfully instantiates ChatMistralAI with model="ministral-8b-latest".
  • Validated that standard inputs (e.g., gpt-4o) remain unaffected.
