TextReasoningContent implementation? #290
Replies: 2 comments 3 replies
-
Older Ollama versions or older model definitions may cause the think tags to be included directly in the model's response text. Can you make sure that both the Ollama version and the model definition are up to date? When in doubt, just re-pull your models.
-
My Ollama version is 0.10.1. For me it's not about whether they are included in the response (in the Ollama CLI the thinking is also shown, but in a different colour); I was just wondering whether TextReasoningContent will be used to separate the thinking in the future. For now I can just filter it out myself, I was just curious :)
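For anyone else needing a stopgap, here is a minimal sketch of stripping `<think>...</think>` blocks from the response text. The regex approach and the `ThinkFilter`/`StripThink` names are my own assumptions, not part of OllamaSharp; it only removes a block once the closing tag has arrived.

```csharp
using System.Text.RegularExpressions;

static class ThinkFilter
{
    // Matches a complete <think>...</think> block plus trailing whitespace.
    // Singleline lets '.' span newlines inside the thinking text.
    private static readonly Regex ThinkBlock =
        new(@"<think>.*?</think>\s*", RegexOptions.Singleline);

    // Removes complete think blocks; an unterminated block is left
    // untouched so the caller can retry after more chunks arrive.
    public static string StripThink(string text) =>
        ThinkBlock.Replace(text, string.Empty);
}
```

With streaming you would typically buffer the accumulated text and strip once at the end, since a single update can split the tags across chunks.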
-
Hi,
When using the Microsoft.Extensions.AI implementation of OllamaSharp with GetStreamingResponseAsync and a thinking model like qwen3, the response I get is something like:
The <think>...</think> part is added to the response message as a TextContent. I saw that there is also a TextReasoningContent. I was wondering if the <think> part will be implemented as a TextReasoningContent in the future?
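If OllamaSharp did emit TextReasoningContent, a consumer could branch on the content type of each streamed item. A sketch assuming the Microsoft.Extensions.AI abstractions (`client` is an IChatClient and `messages` a chat history, both assumed to exist in the surrounding code); today OllamaSharp delivers everything as TextContent, so the reasoning branch would only trigger once such support exists:

```csharp
using Microsoft.Extensions.AI;

// Sketch: separating reasoning output from answer text while streaming.
await foreach (var update in client.GetStreamingResponseAsync(messages))
{
    foreach (var content in update.Contents)
    {
        switch (content)
        {
            case TextReasoningContent reasoning:
                // Hypothetical: the <think> part, if surfaced as reasoning.
                Console.Write($"[thinking] {reasoning.Text}");
                break;
            case TextContent text:
                Console.Write(text.Text);
                break;
        }
    }
}
```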