Describe the bug
The OutputFixingParser isn't working with Gemini 2.5 Flash.
As described in the README, Google models expect messages in a different format:
messages = [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "What's the weather like today?" }
# Google Gemini and Google VertexAI expect messages in a different format:
# { role: "user", parts: [{ text: "why is the sky blue?" }]}
]
However, the OutputFixingParser does not take that into account: https://github.com/patterns-ai-core/langchainrb/blob/main/lib/langchain/output_parsers/output_fixing_parser.rb#L50-L55
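A possible workaround until this is fixed upstream is to translate chat-style messages into the parts-based format before they reach the Google LLM. This is a minimal sketch; `to_gemini_messages` is a hypothetical helper, not part of langchainrb:

```ruby
# Hypothetical helper (not part of langchainrb): converts
# OpenAI-style chat messages into the parts-based format that
# Google Gemini and VertexAI expect.
def to_gemini_messages(messages)
  messages.map do |msg|
    { role: msg[:role], parts: [{ text: msg[:content] }] }
  end
end

to_gemini_messages([{ role: "user", content: "why is the sky blue?" }])
# => [{ role: "user", parts: [{ text: "why is the sky blue?" }] }]
```

A proper fix would presumably apply this conversion inside `OutputFixingParser` (or the Google LLM adapters) so callers don't have to know about provider-specific formats.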
To Reproduce
Using the output fixing parser:
parser = Langchain::OutputParsers::StructuredOutputParser.from_json_schema('some_schema.json')
Langchain::OutputParsers::OutputFixingParser.from_llm(llm:, parser:)
This raises an error when using the gemini-2.5-flash model.
Expected behavior
The OutputFixingParser should work with Google Gemini models.
Desktop:
- Ruby version: 3.3
- Langchain.rb version: 0.19.5