
OutputParserException when using Ollama instead of Gemini #33016

@philippe-lavoie


Checked other resources

  • This is a bug, not a usage question.
  • I added a clear and descriptive title that summarizes this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • This is not related to the langchain-community package.
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Example Code

The following code can be used to see that the tutorial example works with Gemini but fails with Ollama:

import { ChatOllama } from "@langchain/ollama";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
//import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

// const llm = new ChatGoogleGenerativeAI({
//   model: "gemini-2.0-flash",
//   temperature: 0,
//   apiKey: "Use your own key",
// });

const llm = new ChatOllama({
  model: "llama3.2:3b",
  temperature: 0,
  baseUrl: "http://localhost:11434",
});

const classificationSchema2 = z.object({
  sentiment: z
    .enum(["happy", "neutral", "sad"])
    .describe("The sentiment of the text"),
  aggressiveness: z
    .number()
    .int()
    .describe(
      "describes how aggressive the statement is on a scale from 1 to 5. The higher the number the more aggressive"
    ),
  language: z
    .enum(["spanish", "english", "french", "german", "italian"])
    .describe("The language the text is written in"),
});

const taggingPrompt2 = ChatPromptTemplate.fromTemplate(
  `Extract the desired information from the following passage.

Passage:
{input}
`
);

const llmWithStructuredOutput2 = llm.withStructuredOutput(
  classificationSchema2,
  { name: "extractor" }
);

const prompt2 = await taggingPrompt2.invoke({
  input:
    "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"
});

const result = await llmWithStructuredOutput2.invoke(prompt2);

console.log("Result:", result);

Error Message and Stack Trace (if applicable)

 throw new OutputParserException(`Failed to parse. Text: "${text}". Error: ${e}`, text);
                  ^

OutputParserException [Error]: Failed to parse. Text: "". Error: SyntaxError: Unexpected end of JSON input

Troubleshooting URL: https://js.langchain.com/docs/troubleshooting/errors/OUTPUT_PARSING_FAILURE/
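
The error suggests the model returned an empty string where the tool-call arguments should be, which the parser then fails on. The raw response can be inspected by passing includeRaw to withStructuredOutput; a minimal diagnostic sketch, reusing the llm and classificationSchema2 from the example above:

const diagnostic = llm.withStructuredOutput(classificationSchema2, {
  name: "extractor",
  includeRaw: true,
});

// With includeRaw, a parse failure surfaces as `parsed: null` instead of
// throwing, and the raw AIMessage is returned alongside it.
const { raw, parsed } = await diagnostic.invoke(
  "Estoy increiblemente contento de haberte conocido!"
);

console.log("Tool calls:", JSON.stringify(raw.tool_calls, null, 2));
console.log("Content:", JSON.stringify(raw.content)); // "" reproduces the error
console.log("Parsed:", parsed);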

Description

I'm trying to follow the tutorials, but the example does not work with Ollama; the integration appears to have a bug. The example code above makes it easy to switch between Ollama and Gemini: just toggle the commented lines.
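
As a temporary workaround, Ollama's native JSON mode can be used instead of tool calling by setting the format field on ChatOllama and validating the result with the same zod schema manually. A minimal sketch (the prompt wording here is my own, not from the tutorial):

// Workaround sketch: skip tool calling and ask Ollama for raw JSON via the
// `format` field, then validate with the zod schema from the example above.
const jsonLlm = new ChatOllama({
  model: "llama3.2:3b",
  temperature: 0,
  baseUrl: "http://localhost:11434",
  format: "json",
});

const jsonPrompt = ChatPromptTemplate.fromTemplate(
  `Extract the desired information from the following passage.
Respond only with a JSON object with the keys "sentiment", "aggressiveness" and "language".

Passage:
{input}
`
);

const response = await jsonPrompt.pipe(jsonLlm).invoke({
  input: "Estoy increiblemente contento de haberte conocido!",
});

// Validate the raw JSON output against classificationSchema2.
const parsed = classificationSchema2.parse(
  JSON.parse(response.content as string)
);
console.log(parsed);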

System Info

I'm using JavaScript with the following dependencies:

"dependencies": {
"@langchain/community": "^0.3.54",
"@langchain/core": "^0.3.73",
"@langchain/google-genai": "^0.2.18",
"@langchain/ollama": "^0.2.4",
"ollama": "^0.5.18",
"pdf-parse": "^1.1.1",
"zod-to-json-schema": "^3.24.6"
},

Labels

bug: Related to a bug, vulnerability, unexpected error with an existing feature
