
[FEATURE] Support enabling thinking for Sonnet 4 / 3.7 through LiteLLMModel #523

@havishammah

Description

Problem Statement

I would like Strands to support enabling thinking mode for Anthropic models when using them through LiteLLMModel.

So far, none of the following configurations enables thinking mode:

model = LiteLLMModel(
    model_id="hosted_vllm/" + LITE_LLM_MODEL_ID,
    client_args=client_args,
    params={
        "temperature": 1,
        "top_p": 0.8,
        "max_tokens": 2048,
    },
    additional_request_fields={
        # Configure reasoning parameters
        "reasoning_config": {
            "type": "enabled",      # Turn on thinking
            "budget_tokens": 3000,  # Thinking token budget
        }
    },
    thinking={"type": "enabled", "budget_tokens": 2048},
)
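
For reference, recent LiteLLM releases accept a thinking parameter on litellm.completion for Anthropic models, so there is a concrete target to pass these fields through to. A minimal sketch of the direct call (the model ID and prompt are illustrative):

import litellm

response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    max_tokens=2048,
    temperature=1,  # the Anthropic API requires temperature=1 while thinking is enabled
    thinking={"type": "enabled", "budget_tokens": 1024},
)
print(response.choices[0].message)  # thinking output surfaces alongside the text content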

Proposed Solution

Make at least one of these configurations work:

thinking={"type": "enabled", "budget_tokens": 2048}

or

additional_request_fields={
    # Configure reasoning parameters
    "reasoning_config": {
        "type": "enabled",      # Turn on thinking
        "budget_tokens": 3000,  # Thinking token budget
    }
}
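
Until one of these is wired up, a possible workaround (an untested assumption, based on LiteLLMModel forwarding params to litellm.completion) would be to pass thinking in through params:

from strands.models.litellm import LiteLLMModel

# Untested assumption: if `params` is forwarded verbatim to litellm.completion,
# the `thinking` field should reach the Anthropic backend unchanged.
model = LiteLLMModel(
    model_id="anthropic/claude-3-7-sonnet-20250219",  # illustrative model ID
    params={
        "temperature": 1,
        "max_tokens": 2048,
        "thinking": {"type": "enabled", "budget_tokens": 1024},
    },
)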

Use Case

We need Anthropic models to run in thinking mode so that the response includes 'reasoningContent' / 'reasoningText' blocks.
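
A sketch of the desired outcome, assuming the Bedrock-style content-block shape that the field names above suggest (the exact block layout is an assumption):

from strands import Agent

agent = Agent(model=model)
result = agent("Plan a three-step approach to debugging a flaky test.")

# Assumed block shape: {"reasoningContent": {"reasoningText": {"text": ...}}}
for block in result.message["content"]:
    if "reasoningContent" in block:
        print("THINKING:", block["reasoningContent"]["reasoningText"]["text"])
    elif "text" in block:
        print("ANSWER:", block["text"])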

Alternative Solutions

No response

Additional Context

No response

Labels: enhancement (New feature or request)
