Problem
Several LLM providers (e.g., Ollama, OpenAI reasoning models) return structured reasoning data (e.g., `thinking`, `reasoning`, or similar fields).
Currently, this information is not represented in the Microsoft.Extensions.AI / Semantic Kernel abstraction model and is therefore lost or requires workarounds.
Current Limitation
`ChatMessage` only supports `Contents` (text, image, function calls, etc.). There is no standardized way to represent reasoning or thinking output.
As a result:
- Providers like Ollama expose reasoning via `RawRepresentation`
- Consumers must rely on provider-specific parsing
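As a concrete illustration of that parsing burden, a minimal sketch, assuming a hypothetical provider message type `ProviderMessage` with a `Thinking` property (real provider types and field names differ):

```csharp
using Microsoft.Extensions.AI;

// Hypothetical provider message shape; real provider SDKs differ.
public sealed class ProviderMessage
{
    public string? Content { get; set; }
    public string? Thinking { get; set; }
}

public static class ReasoningWorkaround
{
    // The consumer must know the provider-specific type behind
    // RawRepresentation to recover the reasoning text.
    public static string? TryGetReasoning(ChatMessage message)
    {
        if (message.RawRepresentation is ProviderMessage raw)
        {
            return raw.Thinking;
        }

        return null; // Lost for any provider we don't special-case.
    }
}
```

Every consumer needs one such branch per provider, which is exactly the coupling the abstraction layer is supposed to remove.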
Impact
- Loss of useful model output (reasoning/debugging)
- Inconsistent behavior across providers
- Harder to build advanced tooling (debugging, tracing, explainability)
Proposed Directions
Possible approaches:
Option 1: New Content Type
Introduce a new content type such as `ReasoningContent : AIContent`.
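A minimal sketch of what such a type could look like (the type and member names are illustrative, not a committed API):

```csharp
using Microsoft.Extensions.AI;

// Sketch only: a dedicated content type so reasoning travels
// through ChatMessage.Contents alongside text and function calls.
public sealed class ReasoningContent : AIContent
{
    public ReasoningContent(string text) => Text = text;

    // The raw reasoning/thinking text emitted by the model.
    public string Text { get; }
}
```

Consumers could then filter `message.Contents.OfType<ReasoningContent>()` without knowing which provider produced the message, and renderers that don't care about reasoning would simply ignore the type.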
Option 2: Metadata-based
Expose reasoning via standardized metadata, e.g. `content.Metadata["reasoning"]`.
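A sketch of what consumption could look like under this option. Note that the existing per-content property bag in Microsoft.Extensions.AI is `AdditionalProperties` rather than a `Metadata` dictionary, so that is used here, and the key name `"reasoning"` is only illustrative:

```csharp
using Microsoft.Extensions.AI;

public static class ReasoningMetadata
{
    // Hypothetical well-known key; the actual name would need agreement
    // so that all providers populate it consistently.
    public const string Key = "reasoning";

    public static string? GetReasoning(AIContent content) =>
        content.AdditionalProperties?.TryGetValue(Key, out var value) == true
            ? value as string
            : null;
}
```

This avoids a new type, at the cost of being stringly typed: nothing enforces the key or the value's shape across providers.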
Option 3: Extended ChatMessage
Add optional structured fields for reasoning
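For comparison, a rough sketch of this direction, using hypothetical names (`ReasoningData`, `Reasoning`) that are not part of the existing `ChatMessage`:

```csharp
using System.Collections.Generic;

// Hypothetical structured reasoning payload.
public sealed class ReasoningData
{
    public string? Text { get; set; } // Raw reasoning/thinking text.
}

// Illustration of ChatMessage gaining an optional field;
// shown as a standalone type, not the real abstraction.
public class ChatMessageSketch
{
    public IList<string> Contents { get; } = new List<string>();

    // Null when the provider returned no reasoning.
    public ReasoningData? Reasoning { get; set; }
}
```

This makes reasoning a first-class, strongly typed part of the message, but widens the `ChatMessage` surface even for providers that never emit reasoning.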
Context
A recent fix for Ollama required extracting reasoning from `RawRepresentation`, which works but highlights this abstraction gap.
Goal
Provide a consistent, provider-agnostic way to expose reasoning/thinking content across all supported LLM providers.
Happy to contribute to design/discussion if helpful.