A system prompt isn’t “executed” like code; it’s just tokens the model reads on every forward pass. Those tokens condition the next-token distribution alongside the conversation history and any retrieved context, so the behavioral constraints in the system prompt influence every decision the model makes during that turn (and subsequent tool loops). There’s no hidden expansion step where the prompt is decomposed into a plan; the “plan” is whatever sequence the model emits under those conditions.
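To make that concrete, here is a minimal sketch in Python of how a runtime might assemble the context before decoding. The role markers, `model.sample_next`, and the `tokenizer` interface are hypothetical stand-ins, not any particular vendor's API; the point is only that the system prompt is tokens prepended to the stream the model conditions on at every step.

```python
from typing import Dict, List

def build_context(system_prompt: str, history: List[Dict[str, str]]) -> str:
    """Flatten the system prompt and conversation into one text stream.
    Nothing here is "executed" -- it is just tokens the model conditions on.
    The <|role|> markers are illustrative; real chat templates differ per model."""
    parts = [f"<|system|>\n{system_prompt}"]
    for msg in history:
        parts.append(f"<|{msg['role']}|>\n{msg['content']}")
    parts.append("<|assistant|>\n")  # the model's turn starts here
    return "\n".join(parts)

def generate_turn(model, tokenizer, system_prompt, history, max_new_tokens=256):
    """Decoding loop. `model.sample_next` and `tokenizer.eos_id` are hypothetical
    stand-ins for whatever inference stack is actually in use."""
    ids = tokenizer.encode(build_context(system_prompt, history))
    out = []
    for _ in range(max_new_tokens):
        # Every step re-reads the full context, system prompt included, so those
        # tokens shift the next-token distribution on every decision the model makes.
        next_id = model.sample_next(ids + out)
        if next_id == tokenizer.eos_id:
            break
        out.append(next_id)
    return tokenizer.decode(out)
```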

When an LLM is given access to tools, the runtime has to tell the model which tools exist and how to use them. Some APIs do this through a structured "tools" parameter (name, description, and a JSON Schema for the arguments).
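As a rough sketch of what that structured parameter looks like, here is a tool definition in the OpenAI-style function-calling format. The `get_weather` tool and its fields are illustrative, not a real endpoint; other runtimes use similar shapes, or serialize the same information into the prompt text, so from the model's side it is still just tokens to condition on.

```python
# Illustrative tool definition in the OpenAI-style "tools" parameter format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Get the current weather for a city.",
            "parameters": {  # JSON Schema describing the arguments
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                    "units": {"type": "string", "enum": ["metric", "imperial"]},
                },
                "required": ["city"],
            },
        },
    }
]

# The runtime serializes these definitions into the model's context (often alongside
# the system prompt). When the model decides to call a tool, it emits a structured
# call that the runtime parses, executes, and feeds back as another message.
```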

Answer selected by x1xhlol