
fix: strip markdown code fences from LLM JSON responses (fixes #959)#1042

Open
willtwilson wants to merge 1 commit into ItzCrazyKns:master from willtwilson:fix/strip-markdown-json-fences

Conversation


@willtwilson willtwilson commented Mar 8, 2026

Problem

Some LLM providers (Claude, models via LiteLLM/OpenRouter) wrap JSON output in markdown code fences:

```json
{ "query": "...", "sources": [...] }
```

The `streamObject()` paths in both the OpenAI and Ollama providers pass the accumulated text directly to partial-json's `parse()`, which fails on the fence characters. This affects all users running Claude or any model behind LiteLLM/OpenRouter that wraps JSON in markdown.
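To make the failure concrete, here is a small hypothetical snippet (not code from this PR) showing that fenced text fails strict JSON parsing until the fences are removed:

```typescript
// A fenced LLM response as described above (hypothetical example).
const fenced = '```json\n{ "query": "weather" }\n```';

// Strict parsing fails on the leading backtick characters.
let failed = false;
try {
  JSON.parse(fenced);
} catch {
  failed = true;
}

// Once the fences are stripped, the inner JSON parses fine.
const inner = fenced.replace(/^```json\n/, '').replace(/\n```$/, '');
const obj = JSON.parse(inner);
```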

Solution

Added `stripMarkdownFences()` and `safeParseJson()` utilities in `src/lib/utils/parseJson.ts` that strip markdown code fences (```json ... ``` and bare ``` ... ```) before parsing. Applied to:

  • `streamObject()` in both the OpenAI and Ollama providers — strips fences from accumulated partial JSON text before passing it to `parse()`
  • `generateText()` tool-call argument parsing in the OpenAI provider

The existing `generateObject()` paths already use `repairJson()` with `extractJson: true`, which handles this case, so those are left unchanged.

Testing

Tested with Claude 3.5 Sonnet via LiteLLM and OpenRouter — JSON responses with and without fences are now parsed correctly. The fix is a no-op for models that already return clean JSON.

Fixes #959


Summary by cubic

Strip markdown code fences from LLM JSON responses to prevent parse errors in OpenAI and Ollama streaming and OpenAI tool-call paths. Fixes #959.

  • Bug Fixes
    • Added stripMarkdownFences() and safeParseJson() in src/lib/utils/parseJson.ts.
    • Applied to streamObject() for OpenAI and Ollama before calling parse from partial-json.
    • Used for OpenAI generateText() tool-call arguments; generateObject() unchanged (already handled by @toolsycc/json-repair with extractJson: true).

Written for commit 19f4057. Summary will update on new commits.

Some LLM providers (Claude, models via LiteLLM/OpenRouter) wrap JSON
output in markdown code fences (```json ... ```). The streamObject()
paths in both OpenAI and Ollama providers pass accumulated text directly
to partial-json's parse(), which fails on the fence characters.

Add stripMarkdownFences() and safeParseJson() utilities in
src/lib/utils/parseJson.ts. Applied to:
- streamObject() in OpenAI and Ollama providers (partial JSON parsing)
- generateText() tool call argument parsing in OpenAI provider

The existing generateObject() paths already use repairJson() with
extractJson: true, which handles this case.

Fixes ItzCrazyKns#959

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 3 files




Copilot AI left a comment


Pull request overview

Fixes a regression where some LLM providers return JSON wrapped in markdown code fences, causing partial-json / JSON.parse failures during streaming object parsing and tool-call argument parsing.

Changes:

  • Added stripMarkdownFences() and safeParseJson() utilities to sanitize fenced JSON before parsing.
  • Applied fence stripping to streamObject() parsing in the OpenAI and Ollama providers.
  • Applied safe JSON parsing to OpenAI generateText() tool-call argument parsing.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

| File | Description |
| --- | --- |
| src/lib/utils/parseJson.ts | Adds shared helpers to strip markdown code fences and safely parse JSON. |
| src/lib/models/providers/openai/openaiLLM.ts | Uses fence stripping/safe parsing for streamed object parsing and tool-call argument parsing. |
| src/lib/models/providers/ollama/ollamaLLM.ts | Uses fence stripping for streamed object parsing. |




Development

Successfully merging this pull request may close these issues.

[BUG] v1.12.0: JSON parse error with Claude models via OpenAI-compatible endpoints (LiteLLM/OpenRouter)
