
Handling Reasoning Models and Reasoning Messages in Prompts #44

@alexpareto

Description


With the latest developments in reasoning models, we're seeing a lot more vendor-specific items in the model APIs.

For example, OpenAI's Responses API requires passing back a reasoning item ID with any reasoning messages, and Anthropic similarly requires passing back a signature alongside each thinking block.

When these are omitted, the API throws an error. And of course, Anthropic doesn't recognize OpenAI reasoning IDs, and OpenAI doesn't recognize Anthropic signatures.
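For concreteness, here is roughly what the two vendor shapes look like when replayed in a conversation history. The field names and values below are my best reading of the public docs and are illustrative only; exact schemas may differ by API version.

```python
# Illustrative payload shapes only; field names are assumptions based on
# public docs and may not match the current API versions exactly.

# OpenAI Responses API: a reasoning output item carries an "id" that must
# be replayed alongside the assistant turn it belongs to.
openai_reasoning_item = {
    "type": "reasoning",
    "id": "rs_abc123",  # hypothetical reasoning item ID
    "summary": [{"type": "summary_text", "text": "Considered edge cases..."}],
}

# Anthropic Messages API: a "thinking" content block carries an opaque
# "signature" that must be replayed verbatim.
anthropic_thinking_block = {
    "type": "thinking",
    "thinking": "Considered edge cases...",
    "signature": "EqQBCgIYAh...",  # hypothetical opaque signature
}

# The two formats are mutually unintelligible: sending one vendor's
# reasoning metadata to the other is rejected server-side.
assert "id" in openai_reasoning_item and "signature" not in openai_reasoning_item
assert "signature" in anthropic_thinking_block and "id" not in anthropic_thinking_block
```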

Reasoning looks like it will be a core primitive in these model APIs. The vendors' claim is that including reasoning messages in prompts, in their native vendor format, improves response quality: it lets the vendor feed the full chain of thought (which is not exposed to end users) back into subsequent responses.

I'm curious what folks think the best pattern for handling this is. One option is to pass just the reasoning summary into the prompt ourselves as a regular message: we'd likely get worse performance without the full chain of thought, but we'd have more control over the prompts we send and no vendor lock-in.

Has anyone found a good way to handle this?
