Description
Problem Statement
I would like Strands to support enabling thinking mode for Anthropic models when using them through LiteLLMModel.
So far, none of these configurations enables thinking mode:
```python
model = LiteLLMModel(
    model_id="hosted_vllm/" + LITE_LLM_MODEL_ID,
    client_args=client_args,
    params={
        "temperature": 1,
        "top_p": 0.8,
        "max_tokens": 2048,
    },
    additional_request_fields={
        # Configure reasoning parameters
        "reasoning_config": {
            "type": "enabled",  # Turn on thinking
            "budget_tokens": 3000,  # Thinking token budget
        }
    },
    thinking={"type": "enabled", "budget_tokens": 2048},
)
```
Proposed Solution
Make one of these configurations work:
```python
thinking={"type": "enabled", "budget_tokens": 2048}
```
or
```python
additional_request_fields={
    # Configure reasoning parameters
    "reasoning_config": {
        "type": "enabled",  # Turn on thinking
        "budget_tokens": 3000,  # Thinking token budget
    }
}
```
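As a rough illustration of what the proposed solution could look like internally, here is a minimal sketch of merging a `thinking` config into the keyword arguments forwarded to the underlying `litellm.completion()` call. The helper name `build_request_kwargs` is hypothetical, not part of Strands or LiteLLM:

```python
def build_request_kwargs(params, thinking=None):
    """Hypothetical helper: merge model params with an optional thinking config.

    Assumption: the provider layer would forward the `thinking` key
    through to the Anthropic API along with the other request kwargs.
    """
    kwargs = dict(params)
    if thinking is not None:
        kwargs["thinking"] = thinking
    return kwargs

kwargs = build_request_kwargs(
    {"temperature": 1, "max_tokens": 2048},
    thinking={"type": "enabled", "budget_tokens": 2048},
)
```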
Use Case
When we need Anthropic models to run in thinking mode so that 'reasoningContent' / 'reasoningText' can be retrieved from responses.
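To make the use case concrete, here is a hedged sketch of pulling the reasoning text out of a message whose content blocks follow the Bedrock-style `reasoningContent` / `reasoningText` shape mentioned above. The message layout and the helper name `extract_reasoning` are assumptions for illustration:

```python
def extract_reasoning(message):
    """Hypothetical helper: collect reasoning text from content blocks.

    Assumes each reasoning block looks like
    {"reasoningContent": {"reasoningText": {"text": "..."}}}.
    """
    texts = []
    for block in message.get("content", []):
        reasoning = block.get("reasoningContent")
        if reasoning:
            texts.append(reasoning.get("reasoningText", {}).get("text", ""))
    return "".join(texts)

# Example message mixing a reasoning block and a plain text block
msg = {
    "content": [
        {"reasoningContent": {"reasoningText": {"text": "step 1"}}},
        {"text": "final answer"},
    ]
}
```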
Alternative Solutions
No response
Additional Context
No response