Conversation

@zerone0x
Contributor

@zerone0x zerone0x commented Jan 9, 2026

Summary

  • Add warning log when models with context limits below 16K tokens are used with tools
  • Helps users diagnose tool calling failures with vLLM/Ollama setups
  • Warning includes context limit, recommended minimum, and actionable guidance

Root Cause Analysis

Users running vLLM or Ollama hit tool calling failures because the default context windows (2048-8192 tokens) are too small to fit the tool schemas. The recommended minimum is 16,384 tokens or more.

Fixes #7185
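A minimal sketch of the kind of check this PR describes. All names here (`ModelInfo`, `MIN_TOOL_CONTEXT`, `contextWarning`) are illustrative assumptions, not opencode's actual identifiers:

```typescript
// Recommended minimum context for tool calling, per the PR description.
const MIN_TOOL_CONTEXT = 16_384;

// Hypothetical model descriptor; opencode's real type will differ.
interface ModelInfo {
  id: string;
  contextLimit: number; // max tokens the model accepts
}

// Returns a warning message when tools are enabled but the model's
// context window is below the recommended minimum; null otherwise.
function contextWarning(model: ModelInfo, hasTools: boolean): string | null {
  if (!hasTools || model.contextLimit >= MIN_TOOL_CONTEXT) return null;
  return (
    `Model "${model.id}" has a context limit of ${model.contextLimit} tokens, ` +
    `below the recommended minimum of ${MIN_TOOL_CONTEXT} for tool calling. ` +
    `Tool schemas may not fit; increase the context window ` +
    `(e.g. vLLM --max-model-len or Ollama num_ctx).`
  );
}

// Example: a default Ollama context of 2048 triggers the warning.
const msg = contextWarning({ id: "gpt-oss-120b", contextLimit: 2048 }, true);
console.log(msg ?? "ok");
```

The key design point is that this only logs guidance rather than blocking the request, so users with models that happen to work anyway are not broken.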

@github-actions
Contributor

github-actions bot commented Jan 9, 2026

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@github-actions
Contributor

github-actions bot commented Jan 9, 2026

The following comment was generated by an LLM and may be inaccurate:

No duplicate PRs found

fwang and others added 3 commits January 9, 2026 09:49
Revert "wip: zen"

This reverts commit a41601d40ca1f2a2d6b8bcfa035be29984fc49ff.
Add a warning log when models with context limits below 16K tokens are
used with tools. This helps users diagnose tool calling failures with
vLLM/Ollama setups that have insufficient context windows.

Fixes anomalyco#7185

Co-Authored-By: Claude Opus 4.5 <[email protected]>
KeyEvent type only has preventDefault(), not stopPropagation().

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@fwang fwang force-pushed the fix/vllm-tool-calling-7185 branch from 278fbe6 to 859c17f on January 9, 2026 14:49


Development

Successfully merging this pull request may close these issues.

When using gpt-oss-120B via vLLM locally, opencode doesn't call the tools
