Handling Reasoning Models and Reasoning Messages in Prompts #59
Unanswered
alexpareto asked this question in Q&A
Replies: 1 comment
@alexpareto great question! We did an AI That Works session on this topic: https://github.com/hellovai/ai-that-works/tree/main/2025-04-07-reasoning-models-vs-prompts#-reasoning-models-vs-reasoning-prompts
With the latest developments in reasoning models, we're seeing a lot more vendor-specific items in the APIs.
For example, OpenAI's Responses API requires passing back a reasoning ID with any reasoning messages, and Anthropic similarly requires passing back a reasoning signature.
When these are not included, the API throws an error - and of course, Anthropic doesn't recognize OpenAI reasoning IDs, and OpenAI doesn't recognize Anthropic reasoning signatures.
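To make the incompatibility concrete, here's a minimal sketch of the two shapes side by side. The field names are approximations of what each vendor returns (check the current API references for the exact schemas), and the IDs/signatures shown are hypothetical placeholders:

```python
# Sketch of the two vendor-specific reasoning shapes.
# Field names are approximate / illustrative, not exact API schemas.

# OpenAI Responses API style: reasoning items carry an ID that must be
# echoed back with the assistant output they belong to on the next turn.
openai_reasoning_item = {
    "type": "reasoning",
    "id": "rs_hypothetical123",  # hypothetical reasoning ID
    "summary": [{"type": "summary_text", "text": "Considered X, chose Y."}],
}

# Anthropic Messages API style: thinking blocks carry an opaque signature
# that must be passed back verbatim inside the assistant message.
anthropic_thinking_block = {
    "type": "thinking",
    "thinking": "Considered X, chose Y.",
    "signature": "hypothetical-signature",  # opaque, vendor-verified
}

def is_vendor_portable(block: dict) -> bool:
    """Neither shape is accepted by the other vendor, so any block that
    carries a reasoning 'id' or 'signature' is effectively vendor-locked."""
    return "id" not in block and "signature" not in block
```

Both shapes fail the portability check, which is exactly the lock-in problem described above.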
Reasoning looks like it will become a core primitive in these model APIs, and the claim is that passing reasoning back in the prompt as native vendor messages improves response quality, since it lets the vendor reuse the full chain of thought (which is not exposed to end users) in subsequent responses.
I'm curious what folks think the best pattern to handle this is. On one hand, we could pass just the reasoning summary into the prompt ourselves as a user message and "manage the prompts ourselves" - but we'd likely get worse performance without the full chain of thought. On the other hand, we'd have more control over the prompts we're passing in, and no vendor lock-in.
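The "manage the prompts ourselves" option could be sketched as a down-conversion step: flatten whichever vendor reasoning block comes back into a plain text message that any chat API accepts. The block shapes matched here are assumptions (same illustrative fields as above), and the trade-off is explicit in the code - the opaque chain of thought is discarded, so you get portability at the cost of fidelity:

```python
def flatten_reasoning(block: dict) -> dict:
    """Down-convert a vendor reasoning block (assumed shapes) into a plain
    assistant text message any chat API accepts. This drops the hidden
    chain of thought, trading reasoning fidelity for portability."""
    if block.get("type") == "reasoning":   # OpenAI-style item (assumed shape)
        text = " ".join(s.get("text", "") for s in block.get("summary", []))
    elif block.get("type") == "thinking":  # Anthropic-style block (assumed shape)
        text = block.get("thinking", "")
    else:
        text = str(block)
    # No vendor-specific 'id' or 'signature' survives the conversion.
    return {"role": "assistant", "content": f"(prior reasoning summary) {text}"}
```

The returned message can then be replayed against either vendor, which is the portability half of the trade-off; whether the quality loss from dropping the full chain of thought is acceptable is exactly the open question.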
Anyone find a great way / approach to handle this?