
Commit ea2728d

core component docs
1 parent da601d9 commit ea2728d

8 files changed: +597 additions, -1906 deletions


docs/docs.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -71,7 +71,7 @@
       {
         "group": "MCP",
         "pages": [
-          "mcp-agent-sdk/mcp/supported-capabilities",
+          "mcp-agent-sdk/mcp/overview",
           "mcp-agent-sdk/mcp/agent-as-mcp-server",
           "mcp-agent-sdk/mcp/server-authentication"
         ]
```

docs/mcp-agent-sdk/core-components/agents.mdx

Lines changed: 46 additions & 48 deletions
````diff
@@ -8,55 +8,23 @@ icon: robot
 
 ## What is an Agent?
 
-In the `mcp-agent` framework, an **Agent** is your primary interface for building intelligent applications. An agent combines a Large Language Model (LLM) with specialized capabilities, allowing it to use tools, access data, and interact with external systems to accomplish tasks.
-
-Think of an agent as an intelligent assistant that can:
-
-- Understand natural language requests
-- Make decisions about which tools to use
-- Execute complex multi-step workflows
-- Maintain conversation history and context
-- Request human input when needed
-
-<Card>
-**Core Concept:** An Agent is a configured LLM enhanced with tools, memory,
-and the ability to take actions in your environment.
-</Card>
-
-## Agent Components
-
-An agent in `mcp-agent` consists of several key components working together:
-
-1. **Agent Core**: Manages the overall workflow and orchestrates interactions
-2. **LLM Integration**: Connects to various language model providers (OpenAI, Anthropic, etc.)
-3. **Tool Access**: Provides the LLM with capabilities through MCP servers
-4. **Memory System**: Maintains conversation history and context
-5. **Human Input**: Allows for interactive workflows requiring user input
-
-Here's how these components work together:
-
-```mermaid
-graph TD
-    A[User Request] --> B[Agent Core]
-    B --> C[LLM Provider]
-    C --> D{Needs Tools?}
-    D -->|Yes| E[Call Tools]
-    E --> F[Tool Response]
-    F --> C
-    D -->|No| G[Generate Response]
-    C --> H[Memory Storage]
-    G --> I[User Response]
-
-    subgraph "Agent Components"
-        B
-        C
-        H
-    end
-```
+In `mcp-agent`, an **Agent** describes what the model is allowed to do. It captures:
+
+- A name and system-level instruction
+- The MCP servers (and optional local functions) that should be available
+- Optional behaviour hooks such as human-input callbacks or whether connections persist
+
+On its own an agent is just configuration and connection management. The agent becomes actionable only after you attach an LLM implementation. Calling `agent.attach_llm(...)` (or constructing an AugmentedLLM with `agent=...`) returns an **AugmentedLLM**—an LLM with the agent’s instructions, tools, and memory bound in. You then use the AugmentedLLM to run generations, call tools, and chain workflows.
+
+Key ideas:
+
+- **Agent = policy + tool access.** It defines how the model should behave and which MCP servers or functions are reachable.
+- **AugmentedLLM = Agent + model provider.** Attaching an LLM binds a concrete provider (OpenAI, Anthropic, Google, Bedrock, etc.) and exposes generation helpers such as `generate`, `generate_str`, and `generate_structured`.
+- **Agents are reusable.** You can attach different AugmentedLLM providers to the same agent definition without rewriting instructions or server lists.
 
 ## Creating Your First Agent
 
-The simplest way to create an agent is through the `Agent` class. Here's a basic example:
+The simplest way to create an agent is through the `Agent` class. Define the instruction and servers, then attach an LLM to obtain an AugmentedLLM:
 
 ```python
 from mcp_agent.agents.agent import Agent
````
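To make the rewritten intro concrete, here is a minimal sketch of such a definition. It is not part of this commit, and the `connection_persistence` / `human_input_callback` parameter names are assumptions inferred from the bullets above:

```python
from mcp_agent.agents.agent import Agent

# Hypothetical agent definition: a name, a system-level instruction, the MCP
# servers it may reach, and the optional behaviour hooks mentioned in the intro.
support_agent = Agent(
    name="support",
    instruction="Answer questions using the connected documentation servers.",
    server_names=["fetch", "filesystem"],
    connection_persistence=True,   # assumed flag: keep MCP connections open between calls
    human_input_callback=None,     # assumed hook: supply a callback to allow escalation to a human
)
```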
````diff
@@ -83,6 +51,8 @@ async with finder_agent:
     print(result)
 ```
 
+The value returned by `attach_llm` is an `AugmentedLLM` instance. It inherits the agent’s instructions and tool access, so every call to `generate_str` (or `generate` / `generate_structured`) can transparently read files, fetch URLs, or call any other MCP tool the agent exposes.
+
 <CardGroup>
   <Card title="Tool Integration">
     Agents automatically discover and use tools from connected MCP servers,
````
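A minimal sketch of the attach-and-generate flow that paragraph describes, assuming the OpenAI provider class lives at the path below and reusing `finder_agent` from the (truncated) example above:

```python
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM  # assumed module path

async with finder_agent:
    # attach_llm binds a concrete provider; the result is an AugmentedLLM.
    llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)

    # The AugmentedLLM carries the agent's instruction and tool access, so the
    # model can read files or fetch URLs through MCP tools while answering.
    summary = await llm.generate_str("Summarise README.md in two sentences.")
    print(summary)
```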
````diff
@@ -94,9 +64,37 @@ async with finder_agent:
   </Card>
 </CardGroup>
 
+## AgentSpec and factory helpers
+
+`AgentSpec` (`mcp_agent.agents.agent_spec.AgentSpec`) is the declarative version of an agent: it captures the same fields (`name`, `instruction`, `server_names`, optional functions) and is used by workflows, config files, and factories. The helpers in [`mcp_agent.workflows.factory`](https://github.com/lastmile-ai/mcp-agent/blob/main/src/mcp_agent/workflows/factory.py) let you turn specs into agents or AugmentedLLMs with a single call.
+
+```python
+from pathlib import Path
+from mcp_agent.workflows.factory import (
+    load_agent_specs_from_file,
+    create_llm,
+    create_router_llm,
+)
+
+async with app.run() as running_app:
+    context = running_app.context
+    specs = load_agent_specs_from_file(
+        str(Path("examples/basic/agent_factory/agents.yaml")),
+        context=context,
+    )
+
+    # Create a specialist LLM from a spec
+    researcher_llm = create_llm(agent=specs[0], provider="openai", context=context)
+
+    # Or compose higher-level workflows (router, parallel, orchestrator, ...)
+    router = await create_router_llm(agents=specs, provider="openai", context=context)
+```
+
+Explore the [agent factory examples](https://github.com/lastmile-ai/mcp-agent/tree/main/examples/basic/agent_factory) to see how specs keep call sites small, how subagents can be auto-loaded from config, and how factories compose routers, orchestrators, and parallel pipelines.
+
 ## Agent Configuration
 
-Agents can be configured either programmatically or through configuration files. The framework supports both approaches:
+Agents can be configured either programmatically or through configuration files. The framework supports both approaches, and each definition ultimately resolves to an `AgentSpec`:
 
 ### Configuration File Approach
 
````
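Specs do not have to come from YAML. A sketch of an inline `AgentSpec` handed to the same factory helper, using only the fields and calls shown above (the spec values are made up, and `context` is assumed to come from a running app as in the previous block):

```python
from mcp_agent.agents.agent_spec import AgentSpec
from mcp_agent.workflows.factory import create_llm

# Hypothetical spec defined inline; the fields mirror those listed above.
spec = AgentSpec(
    name="researcher",
    instruction="Find and summarise relevant information.",
    server_names=["fetch", "filesystem"],
)

researcher_llm = create_llm(agent=spec, provider="openai", context=context)
result = await researcher_llm.generate_str("Summarise the most recent report in the workspace.")
```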

````diff
@@ -158,7 +156,7 @@ settings = Settings(
 
 ## Agent Capabilities
 
-Agents in `mcp-agent` come with several powerful built-in capabilities:
+Once an agent has an AugmentedLLM attached, it gains the following capabilities:
 
 ### Multi-LLM Provider Support
 
````
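To illustrate the reworded capability line, a rough sketch of the same agent definition attached to two different providers (not part of the commit; both provider module paths are assumptions):

```python
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM        # assumed path
from mcp_agent.workflows.llm.augmented_llm_anthropic import AnthropicAugmentedLLM  # assumed path

async with finder_agent:
    # One agent definition, two AugmentedLLMs: same instruction and tool access,
    # a different model provider behind each call.
    openai_llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)
    anthropic_llm = await finder_agent.attach_llm(AnthropicAugmentedLLM)

    answer_a = await openai_llm.generate_str("List the files in the current directory.")
    answer_b = await anthropic_llm.generate_str("List the files in the current directory.")
```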
