Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions.
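As a sketch of what this looks like on the wire, a client fetches one of these templates with a `prompts/get` JSON-RPC request (the method name comes from the MCP specification; the prompt name and argument below are hypothetical, for illustration only):

```python
import json

# Hypothetical prompts/get request a client could send to retrieve a
# server-defined prompt template. The "summarize-notes" name and the
# "style" argument are made up; only the message shape follows the spec.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "prompts/get",
    "params": {
        "name": "summarize-notes",        # hypothetical prompt name
        "arguments": {"style": "brief"},  # hypothetical template argument
    },
}

wire = json.dumps(request)
print(wire)
```

The server would respond with the rendered prompt messages, ready to be surfaced to the user or passed to the LLM.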

<Note>
Prompts are designed to be **user-controlled**, meaning they are exposed from servers to clients with the intention of the user being able to explicitly select them for use.
</Note>
docs/concepts/resources.mdx (6 additions, 0 deletions)

@@ -5,6 +5,12 @@ description: "Expose data and content from your servers to LLMs"

Resources are a core primitive in the Model Context Protocol (MCP) that allow servers to expose data and content that can be read by clients and used as context for LLM interactions.
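A minimal sketch of how a client reads such a resource, using the `resources/read` method from the MCP specification (the file URI here is hypothetical):

```python
import json

# Hypothetical resources/read request: the client asks the server for the
# contents of a resource so they can be attached to an LLM conversation
# as context. The URI is made up; the message shape follows the spec.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "file:///project/README.md"},  # hypothetical URI
}

wire = json.dumps(request)
print(wire)
```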

<Note>
Resources are designed to be **application-controlled**, meaning that the client application can decide how and when they should be used.
For example, one application may require users to explicitly select resources, while another could automatically select them based on heuristics or even at the discretion of the AI model itself.
</Note>
## Overview
Resources represent any kind of data that an MCP server wants to make available to clients. This can include:

docs/concepts/tools.mdx (13 additions, 9 deletions)

@@ -5,6 +5,10 @@ description: "Enable LLMs to perform actions through your server"

Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world.
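A sketch of a tool invocation, using the `tools/call` method from the MCP specification (the tool name and arguments below are hypothetical):

```python
import json

# Hypothetical tools/call request: the client asks the server to execute
# a named tool with structured arguments, typically on behalf of the
# model. "get_weather" and its arguments are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool name
        "arguments": {"city": "Berlin"},  # hypothetical arguments
    },
}

wire = json.dumps(request)
print(wire)
```

Because tools are model-controlled, a client would typically show this call to the user for approval before dispatching it.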

<Note>
Tools are designed to be **model-controlled**, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval).
</Note>
## How tools work
Tools in MCP follow a request-response pattern where:
@@ -79,10 +83,10 @@ For long-running operations, tools can report progress using the progress notifi
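A sketch of such a progress notification, with field names based on the MCP specification's `notifications/progress` message (the token value is hypothetical; notifications carry no `id` since no response is expected):

```python
import json

# Hypothetical progress notification a server might emit while a
# long-running tool call executes. The progressToken echoes a token the
# client supplied in the original request; "abc123" is made up here.
notification = {
    "jsonrpc": "2.0",  # no "id": this is a notification, not a request
    "method": "notifications/progress",
    "params": {
        "progressToken": "abc123",  # hypothetical token from the request
        "progress": 50,
        "total": 100,
    },
}

wire = json.dumps(notification)
print(wire)
```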