diff --git a/units/en/unit1/mcp-clients.mdx b/units/en/unit1/mcp-clients.mdx
index 55a596e..0095f71 100644
--- a/units/en/unit1/mcp-clients.mdx
+++ b/units/en/unit1/mcp-clients.mdx
@@ -12,6 +12,14 @@ In this section, you will:
* Discover how to use Hugging Face's MCP Client implementation
* See practical examples of MCP Client usage
+
+
+On this page, we're going to show examples of how to set up MCP Clients in a few different ways using JSON notation. For now, we will use *placeholders* like `path/to/server.py` to represent the path to an MCP Server; we'll implement these with real MCP Servers in the next unit.
+
+For now, focus on understanding the MCP Client notation itself.
+
## Understanding MCP Clients
MCP Clients are crucial components that act as the bridge between AI applications (Hosts) and external capabilities provided by MCP Servers. Think of the Host as your main application (like an AI assistant or IDE) and the Client as a specialized module within that Host responsible for handling MCP communications.
@@ -52,6 +60,8 @@ Fortunately, the configuration files are very simple, easy to understand, and co
The standard configuration file for MCP is named `mcp.json`. Here's the basic structure:
+This `mcp.json` file can be passed to applications like Claude Desktop, Cursor, or VS Code.
+
```json
{
"servers": [
@@ -114,14 +124,6 @@ Environment variables can be passed to server processes using the `env` field. H
-Let's first install the packages we need to run the examples.
-
-```bash
-pip install "mcp[cli]" "smolagents[mcp]" "fastmcp[cli]"
-# or if you are using uv
-uv add "mcp[cli]" "smolagents[mcp]" "fastmcp[cli]"
-```
-
In Python, we use the `os` module to access environment variables:
```python
@@ -170,7 +172,7 @@ The corresponding configuration in `mcp.json` would look like this:
"transport": {
"type": "stdio",
"command": "python",
- "args": ["/path/to/github_server.py"],
+ "args": ["/path/to/github_server.py"], // This is an example, we'll use a real server in the next unit
"env": {
"GITHUB_TOKEN": "your_github_token"
}
@@ -196,7 +198,7 @@ In this scenario, we have a local server that is a Python script which could be
"transport": {
"type": "stdio",
"command": "python",
- "args": ["/path/to/file_explorer_server.py"]
+ "args": ["/path/to/file_explorer_server.py"] // This is an example, we'll use a real server in the next unit
}
}
]
@@ -214,7 +216,7 @@ In this scenario, we have a remote server that is a weather API.
"name": "Weather API",
"transport": {
"type": "sse",
- "url": "https://example.com/mcp-server"
+ "url": "https://example.com/mcp-server" // This is an example, we'll use a real server in the next unit
}
}
]
@@ -225,122 +227,117 @@ Proper configuration is essential for successfully deploying MCP integrations. B
In the next section, we'll explore the ecosystem of MCP servers available on Hugging Face Hub and how to publish your own servers there.
-## Code Clients
-
-You can also use the MCP Client within code so that the tools are available to the LLM. Let's explore some examples in `smolagents`. To run these examples you will need to add `mcp[cli]`, `smolagents[toolkit]`, and `smolagents[mcp]` to your `uv` virtual environment.
+## Tiny Agents Clients
-First, let's explore our weather server from the previous page. In `smolagents`, we can use the `ToolCollection` class to automatically discover and register tools from an MCP server. This is done by passing the `StdioServerParameters` or `SSEServerParameters` to the `ToolCollection.from_mcp` method. We can then print the tools to the console.
+Now, let's explore how to use MCP Clients within code.
-```python
-from smolagents import ToolCollection
-from mcp.client.stdio import StdioServerParameters
+You can also use Tiny Agents as MCP Clients to connect directly to MCP servers from your code. Tiny Agents provide a simple way to create AI agents that can use tools from MCP servers.
-server_parameters = StdioServerParameters(command="uv", args=["run", "server.py"])
+Tiny Agents can run MCP servers from a command-line environment. To do this, we need to install `npm` and run servers with `npx`. **We'll need these for both the Python and JavaScript examples.**
-with ToolCollection.from_mcp(
- server_parameters, trust_remote_code=True
-) as tools:
- print("\n".join(f"{tool.name}: {tool.description}" for tool in tools.tools))
+Let's install `npx` with `npm`. (Recent versions of npm, 5.2 and later, already bundle `npx`, so this step may not be necessary.) If you don't have `npm` installed, check out the [npm documentation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
+```bash
+# install npx
+npm install -g npx
```
-
-
-Output
-
+
+
-```sh
-get_weather: Get the current weather for a specified location.
+First, install the latest `huggingface_hub` with the `mcp` extra, which includes the Tiny Agents CLI:
+```bash
+pip install "huggingface_hub[mcp]>=0.32.0"
```
-
+Next, let's log in to the Hugging Face Hub. You will need a [login token](https://huggingface.co/docs/huggingface_hub/v0.32.3/en/quick-start#authentication) to do this.
-We can also connect to an MCP server that is hosted on a remote machine. In this case, we need to pass the `SSEServerParameters` to the `MCPClient` class.
-
-```python
-from smolagents.mcp_client import MCPClient
-
-with MCPClient(
- {"url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse"}
-) as tools:
- # Tools from the remote server are available
- print("\n".join(f"{t.name}: {t.description}" for t in tools))
+```bash
+huggingface-cli login
```
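Alternatively, you can skip the interactive login by exporting a Hugging Face access token as an environment variable. The value below is a placeholder, not a real token:

```shell
# Make a Hugging Face access token available to tools in this shell session.
# "hf_xxx" is a placeholder; use your own token from your account settings.
export HF_TOKEN="hf_xxx"
```

Most Hugging Face libraries, including `huggingface_hub`, pick up `HF_TOKEN` automatically.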
-
-
-Output
-
+Now, let's create an agent configuration file `agent.json`.
-```sh
-prime_factors: Compute the prime factorization of a positive integer.
-generate_cheetah_image: Generate a cheetah image.
-image_orientation: Returns whether image is portrait or landscape.
-sepia: Apply a sepia filter to the input image.
+```json
+{
+ "model": "Qwen/Qwen2.5-72B-Instruct",
+ "provider": "nebius",
+ "servers": [
+ {
+ "type": "stdio",
+ "config": {
+ "command": "npx",
+ "args": ["@playwright/mcp@latest"]
+ }
+ }
+ ]
+}
```
-
+In this configuration, we are using the `@playwright/mcp` MCP server, an MCP server that can control a browser with Playwright.
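As a quick sanity check, you can round-trip the configuration through the `json` module to confirm it is valid JSON before handing it to the CLI. This sketch builds the same configuration inline rather than reading `agent.json` from disk:

```python
import json

# The same configuration as agent.json above, built inline for illustration.
config = {
    "model": "Qwen/Qwen2.5-72B-Instruct",
    "provider": "nebius",
    "servers": [
        {
            "type": "stdio",
            "config": {"command": "npx", "args": ["@playwright/mcp@latest"]},
        }
    ],
}

# Round-trip through JSON: this is what tiny-agents will parse from the file.
parsed = json.loads(json.dumps(config, indent=2))
for server in parsed["servers"]:
    cmd = " ".join([server["config"]["command"], *server["config"]["args"]])
    print(f'{server["type"]}: {cmd}')
```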
-Now, let's see how we can use the MCP Client in a code agent.
+Now you can run the agent:
-```python
-from smolagents import InferenceClientModel, CodeAgent, ToolCollection
-from mcp.client.stdio import StdioServerParameters
+```bash
+tiny-agents run agent.json
+```
-model = InferenceClientModel()
+
-server_parameters = StdioServerParameters(command="uv", args=["run", "server.py"])
+First, install the tiny agents package with [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
-with ToolCollection.from_mcp(
- server_parameters, trust_remote_code=True
-) as tool_collection:
- agent = CodeAgent(tools=[*tool_collection.tools], model=model)
- agent.run("What's the weather in Tokyo?")
+```bash
+npm install @huggingface/tiny-agents
+```
+Make an agent project directory and create an `agent.json` file.
+
+```bash
+mkdir my-agent
+touch my-agent/agent.json
```
-
-
-Output
-
+Add the following agent configuration to `my-agent/agent.json`:
-```sh
-The weather in Tokyo is sunny with a temperature of 20 degrees Celsius.
+```json
+{
+ "model": "Qwen/Qwen2.5-72B-Instruct",
+ "provider": "nebius",
+ "servers": [
+ {
+ "type": "stdio",
+ "config": {
+ "command": "npx",
+ "args": ["@playwright/mcp@latest"]
+ }
+ }
+ ]
+}
```
-
+Now you can run the agent:
-We can also connect to an MCP package. Here's an example of connecting to the `pubmedmcp` package.
+```bash
+npx @huggingface/tiny-agents run ./my-agent
+```
-```python
-import os
-from smolagents import ToolCollection, CodeAgent, InferenceClientModel
-from mcp.client.stdio import StdioServerParameters
+
+
-model = InferenceClientModel()
+In the video below, we run the agent and ask it to open a new tab in the browser.
-server_parameters = StdioServerParameters(
- command="uvx",
- args=["--quiet", "pubmedmcp@0.1.3"],
- env={"UV_PYTHON": "3.12", **os.environ},
-)
+The following example shows a web-browsing agent configured to use the [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) model via the Nebius inference provider. It comes equipped with a Playwright MCP server, which lets it use a web browser. The agent config is loaded by specifying [its path in the `tiny-agents/tiny-agents`](https://huggingface.co/datasets/tiny-agents/tiny-agents/tree/main/celinah/web-browser) Hugging Face dataset.
-with ToolCollection.from_mcp(server_parameters, trust_remote_code=True) as tool_collection:
- agent = CodeAgent(tools=[*tool_collection.tools], add_base_tools=True, model=model)
- agent.run("Please find a remedy for hangover.")
-```
+
-
-
-Output
-
+When you run the agent, you'll see it load, listing the tools it has discovered from its connected MCP servers. Then, it's ready for your prompts!
-```sh
-The remedy for hangover is to drink water.
-```
+Prompt used in this demo:
-
+> do a Web Search for HF inference providers on Brave Search and open the first result and then give me the list of the inference providers supported on Hugging Face
## Next Steps
diff --git a/units/en/unit2/gradio-client.mdx b/units/en/unit2/gradio-client.mdx
index 976d6e6..f77eb25 100644
--- a/units/en/unit2/gradio-client.mdx
+++ b/units/en/unit2/gradio-client.mdx
@@ -15,7 +15,7 @@ We'll connect to the MCP server we created in the previous section and use it to
First, we need to install the `smolagents`, gradio and mcp-client libraries, if we haven't already:
```bash
-pip install "smolagents[mcp]" "gradio[mcp]" mcp
+pip install "smolagents[mcp]" "gradio[mcp]" mcp fastmcp
```
Now, we can import the necessary libraries and create a simple Gradio interface that uses the MCP Client to connect to the MCP Server.
@@ -24,16 +24,15 @@ Now, we can import the necessary libraries and create a simple Gradio interface
```python
import gradio as gr
-from mcp.client.stdio import StdioServerParameters
-from smolagents import InferenceClientModel, CodeAgent, ToolCollection
-from smolagents.mcp_client import MCPClient
+from mcp import StdioServerParameters
+from smolagents import InferenceClientModel, CodeAgent, ToolCollection, MCPClient
```
Next, we'll connect to the MCP Server and get the tools that we can use to answer questions.
```python
mcp_client = MCPClient(
- {"url": "http://localhost:7860/gradio_api/mcp/sse"}
+ {"url": "http://localhost:7860/gradio_api/mcp/sse"} # This is the MCP Server we created in the previous section
)
tools = mcp_client.get_tools()
```
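To see what the server exposes, you can print the discovered tools. Since the real printout depends on a running server, here is a self-contained sketch with stand-in tool objects that mirror the shape returned by `mcp_client.get_tools()` (objects with `name` and `description` attributes; the tool name below is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    description: str

# Stand-ins for the objects returned by mcp_client.get_tools();
# the real tool names depend on the server from the previous section.
tools = [
    Tool("sentiment_analysis", "Analyze the sentiment of the given text."),
]

listing = "\n".join(f"{t.name}: {t.description}" for t in tools)
print(listing)
```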
@@ -77,16 +76,13 @@ Here's the complete example of the MCP Client in Gradio:
```python
import gradio as gr
-from mcp.client.stdio import StdioServerParameters
-from smolagents import InferenceClientModel, CodeAgent, ToolCollection
-from smolagents.mcp_client import MCPClient
+from mcp import StdioServerParameters
+from smolagents import InferenceClientModel, CodeAgent, ToolCollection, MCPClient
try:
mcp_client = MCPClient(
- ## Try this working example on the hub:
- # {"url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse"}
- {"url": "http://localhost:7860/gradio_api/mcp/sse"}
+ {"url": "http://localhost:7860/gradio_api/mcp/sse"} # This is the MCP Server we created in the previous section
)
tools = mcp_client.get_tools()
@@ -119,13 +115,21 @@ To deploy your Gradio MCP client to Hugging Face Spaces:
- Choose "Gradio" as the SDK
- Name your space (e.g., "mcp-client")
-2. Create a `requirements.txt` file:
+2. Update the MCP Server URL in the code so it points to your deployed MCP Server (a `localhost` URL won't be reachable from the Space):
+
+```python
+mcp_client = MCPClient(
+    {"url": "http://localhost:7860/gradio_api/mcp/sse"}  # Replace with the URL of your deployed MCP Server
+)
+```
+
+3. Create a `requirements.txt` file:
```txt
gradio[mcp]
smolagents[mcp]
```
-3. Push your code to the Space:
+4. Push your code to the Space:
```bash
git init
git add server.py requirements.txt
diff --git a/units/en/unit2/tiny-agents.mdx b/units/en/unit2/tiny-agents.mdx
index 1182fd9..45548fe 100644
--- a/units/en/unit2/tiny-agents.mdx
+++ b/units/en/unit2/tiny-agents.mdx
@@ -17,44 +17,37 @@ Some MCP Clients, notably Claude Desktop, do not yet support SSE-based MCP Serve
-
-
+Tiny Agents can run MCP servers from a command-line environment. To do this, we need to install `npm` and run servers with `npx`. **We'll need these for both the Python and JavaScript examples.**
-First, we need to install the `tiny-agents` package.
+Let's install `npx` with `npm`. (Recent versions of npm, 5.2 and later, already bundle `npx`, so this step may not be necessary.) If you don't have `npm` installed, check out the [npm documentation](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm).
```bash
-npm install @huggingface/tiny-agents
-# or
-pnpm add @huggingface/tiny-agents
+# install npx
+npm install -g npx
```
Then, we need to install the `mcp-remote` package.
```bash
npm i mcp-remote
-# or
-pnpm add mcp-remote
```
-
-
+
+
-First, you need to install the latest version of `huggingface_hub` with the `mcp` extra to get all the necessary components.
+For JavaScript, we need to install the `tiny-agents` package.
```bash
-pip install "huggingface_hub[mcp]>=0.32.0"
+npm install @huggingface/tiny-agents
```
-Then, we need to install the `mcp-remote` package.
-
-```bash
-npm install mcp-remote
-```
+
+
-And we'll need to install `npx` to run the `mcp-remote` command.
+For Python, you need to install the latest version of `huggingface_hub` with the `mcp` extra to get all the necessary components.
```bash
-npm install -g npx
+pip install "huggingface_hub[mcp]>=0.32.0"
```
@@ -62,7 +55,7 @@ npm install -g npx
## Tiny Agents MCP Client in the Command Line
-Tiny Agents can create MCP clients from the command line based on JSON configuration files.
+Let's repeat the example from [Unit 1](../unit1/mcp-clients.mdx) to create a basic Tiny Agent. Tiny Agents can create MCP clients from the command line based on JSON configuration files.
@@ -87,7 +80,7 @@ The JSON file will look like this:
"command": "npx",
"args": [
"mcp-remote",
- "http://localhost:7860/gradio_api/mcp/sse"
+ "http://localhost:7860/gradio_api/mcp/sse" // This is the MCP Server we created in the previous section
]
}
}
@@ -109,6 +102,7 @@ Let's setup a project with a basic Tiny Agent.
```bash
mkdir my-agent
touch my-agent/agent.json
+cd my-agent
```
The JSON file will look like this:
@@ -135,7 +129,7 @@ The JSON file will look like this:
We can then run the agent with the following command:
```bash
-tiny-agents run ./my-agent
+tiny-agents run agent.json
```
@@ -149,10 +143,9 @@ Here we have a basic Tiny Agent that can connect to our Gradio MCP server. It in
| `provider` | The inference provider to use for the agent |
| `servers` | The servers to use for the agent. We'll use the `mcp-remote` server for our Gradio MCP server. |
-We could also use an open source model running locally with Tiny Agents.
+
-
-
+We could also use an open source model running locally with Tiny Agents. If we start a local inference server that exposes an OpenAI-compatible endpoint, we can point the agent at it with a configuration like this:
```json
{
@@ -173,33 +166,11 @@ We could also use an open source model running locally with Tiny Agents.
}
```
-
-
-
-```json
-{
- "model": "Qwen/Qwen3-32B",
- "endpoint_url": "http://localhost:1234/v1",
- "servers": [
- {
- "type": "stdio",
- "config": {
- "command": "npx",
- "args": [
- "mcp-remote",
- "http://localhost:1234/v1/mcp/sse"
- ]
- }
- }
- ]
-}
-```
-
-
-
Here we have a Tiny Agent that can connect to a local model. It includes a model, endpoint URL (`http://localhost:1234/v1`), and a server configuration. The endpoint should be an OpenAI-compatible endpoint.
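Concretely, "OpenAI-compatible" means the server serves the standard OpenAI REST routes under the configured base URL. This sketch spells out the paths such a server is expected to expose (the port matches the example config; adjust for your own server):

```python
# Standard routes an OpenAI-compatible server exposes under its base URL.
base_url = "http://localhost:1234/v1"

chat_completions = f"{base_url}/chat/completions"  # used for each agent turn
models = f"{base_url}/models"                      # handy for a quick health check

print(chat_completions)
print(models)
```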
+
+
## Custom Tiny Agents MCP Client
Now that we understand both Tiny Agents and Gradio MCP servers, let's see how they work together! The beauty of MCP is that it provides a standardized way for agents to interact with any MCP-compatible server, including our Gradio-based sentiment analysis server from earlier sections.