
Conversation

kowyo
Contributor

@kowyo kowyo commented Oct 4, 2025

Title

fix(ollama/chat): correctly map reasoning_effort to think in requests

Relevant issues

Fixes #15059

Fixes #11680 (#11680 (comment))

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have Added testing in the tests/litellm/ directory, Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

Parameter mapping improvements:
Updated map_openai_params in both chat/transformation.py and completion/transformation.py so the reasoning_effort parameter maps directly to the Ollama think param (high/medium/low) instead of always defaulting to True. This follows ollama/ollama-python@aa4b476 and ollama/ollama#11752.

Parameter mapping correction:
Pop the think parameter from optional_params so it maps correctly to the Ollama API (https://github.com/ollama/ollama/blob/main/docs/api.md). This fixes the warning `level=WARN source=types.go:737 msg="invalid option provided" option=think`.
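The two changes above can be sketched roughly as follows. This is an illustrative simplification, not LiteLLM's actual code: only `map_openai_params`, `reasoning_effort`, `optional_params`, and `think` are names from the PR, and the request-body shape is reduced to the parts relevant here.

```python
def map_openai_params(non_default_params: dict, optional_params: dict) -> dict:
    """Sketch: map OpenAI-style reasoning_effort to Ollama's think param."""
    reasoning_effort = non_default_params.get("reasoning_effort")
    if reasoning_effort in ("low", "medium", "high"):
        # Pass the effort level through instead of collapsing it to True.
        optional_params["think"] = reasoning_effort

    # Pop think out of the generic options dict so it is sent as a
    # top-level request field; leaving it inside "options" triggers
    # Ollama's 'invalid option provided' warning.
    think = optional_params.pop("think", None)
    request_body = {"options": optional_params}
    if think is not None:
        request_body["think"] = think
    return request_body
```

For example, `reasoning_effort="high"` would produce a request body with `"think": "high"` at the top level and no `think` key inside `"options"`.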

Effect

Before this PR:

reasoning_content is missing from the output, indicating that the think param is not mapped correctly:

#15059

With this PR:

reasoning_content is returned correctly.

(screenshot: response output including reasoning_content)

Testing Pass

Gemini- and Cohere-related tests should not be affected by this PR.

(screenshot: unit tests passing locally)


vercel bot commented Oct 4, 2025

@kowyo is attempting to deploy a commit to the CLERKIEAI Team on Vercel.

A member of the Team first needs to authorize it.

@krrishdholakia
Contributor

Hey @kowyo, I believe that parameter value (low/medium/high) only works for gpt-oss. Can we be more careful about when to use low/medium/high vs. True, based on the model?

(screenshot)

@kowyo
Contributor Author

kowyo commented Oct 5, 2025

Hey @krrishdholakia, you are correct, thanks for the feedback.

I have made some adjustments and will only set the thinking level for the gpt-oss model family.

Let me know if further work is needed.
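The adjustment described above (effort levels only for gpt-oss, a plain boolean for other thinking-capable models) could look roughly like this. The function name and the substring check on the model name are illustrative assumptions, not the PR's actual implementation:

```python
def map_reasoning_effort_to_think(model: str, reasoning_effort):
    """Sketch: gpt-oss accepts effort levels; other models only accept a boolean."""
    if reasoning_effort is None:
        return None
    if "gpt-oss" in model:
        # gpt-oss supports graded thinking: "low" / "medium" / "high".
        return reasoning_effort
    # Other models (e.g. deepseek-r1) only toggle thinking on or off.
    return True
```

With this, `reasoning_effort="medium"` maps to `think="medium"` for a gpt-oss model but to `think=True` for any other model.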

Development

Successfully merging this pull request may close these issues.

  • [Bug]: think parameter for Ollama models are not mapped correctly
  • [Feature]: Support think parameter for Ollama models