feat: Thinking LLMs in agents #813

@jakubduda-dsai

Feature description

Introduce support for "Thinking LLMs" inside agents. The agent should be able to run an internal reasoning step (a hidden chain of thought) before producing its result. This could be implemented by wrapping the model calls so that the agent can (see the sketch after this list):

  • Generate reasoning tokens separately from the final output.
  • Optionally persist or log these reasoning traces for debugging, benchmarking, or evaluation.
  • Allow configuring whether the "thinking" step remains hidden or visible.
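
A minimal sketch of what such a wrapper could look like, not tied to any concrete framework: `generate` stands in for whatever model-call function the agent already uses, and the `ThinkingResult` type, the prompt scaffolding, and the `log_trace` hook are all illustrative assumptions, not a proposed API.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ThinkingResult:
    answer: str               # final output returned to the caller
    reasoning: Optional[str]  # hidden trace, surfaced only if requested


def think_then_answer(
    generate: Callable[[str], str],   # the agent's existing model call
    task: str,
    expose_reasoning: bool = False,   # hidden vs. visible "thinking" step
    log_trace: Optional[Callable[[str], None]] = None,
) -> ThinkingResult:
    # 1. Run the internal reasoning pass as a separate generation.
    reasoning = generate(f"Think step by step about how to solve:\n{task}")

    # 2. Produce the final answer, conditioned on the private trace.
    answer = generate(
        f"Task:\n{task}\n\nYour private notes:\n{reasoning}\n\n"
        "Give only the final answer."
    )

    # Optionally persist the trace for debugging, benchmarking, or evaluation.
    if log_trace is not None:
        log_trace(reasoning)

    # Hide or surface the trace depending on configuration.
    return ThinkingResult(
        answer=answer,
        reasoning=reasoning if expose_reasoning else None,
    )
```

For models that emit native reasoning tokens, step 1 would instead read the trace off the model response rather than issuing a second prompt; the wrapper's surface (trace logging plus a visibility switch) would stay the same.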

Motivation

By integrating Thinking LLMs:

  • We can improve the reasoning quality and consistency of agent behavior.
  • Developers gain better insight into why the agent took certain steps (debugging, error analysis).
  • It aligns with modern approaches to tool-using and planning agents, where separating internal thought from final answers improves reliability.

Additional context

No response

Labels: feature (New feature or request)
Status: In Progress