diff --git a/units/en/unit2/tiny-agents.mdx b/units/en/unit2/tiny-agents.mdx
index 0180808..8310e9a 100644
--- a/units/en/unit2/tiny-agents.mdx
+++ b/units/en/unit2/tiny-agents.mdx
@@ -1,14 +1,25 @@
-# Tiny Agents: an MCP-powered agent in 50 lines of code
+# Building Tiny Agents with MCP and the Hugging Face Hub
-Now that we've built MCP servers in Gradio and learned about creating MCP clients, let's complete our end-to-end application by building a TypeScript agent that can seamlessly interact with our sentiment analysis tool. This section builds on the project [Tiny Agents](https://huggingface.co/blog/tiny-agents), which demonstrates a super simple way of deploying MCP clients that can connect to services like our Gradio sentiment analysis server.
+Now that we've built MCP servers in Gradio and learned about creating MCP clients, let's complete our end-to-end application by building an agent that can seamlessly interact with our sentiment analysis tool. This section builds on the project [Tiny Agents](https://huggingface.co/blog/tiny-agents), which demonstrates a super simple way of deploying MCP clients that can connect to services like our Gradio sentiment analysis server.
-In this final exercise of Unit 2, we will walk you through how to implement a TypeScript (JS) MCP client that can communicate with any MCP server, including the Gradio-based sentiment analysis server we built in the previous sections. This completes our end-to-end MCP application flow: from building a Gradio MCP server exposing a sentiment analysis tool, to creating a flexible agent that can use this tool alongside other capabilities.
+In this final exercise of Unit 2, we will walk you through how to implement both TypeScript (JS) and Python MCP clients that can communicate with any MCP server, including the Gradio-based sentiment analysis server we built in the previous sections. This completes our end-to-end MCP application flow: from building a Gradio MCP server exposing a sentiment analysis tool, to creating a flexible agent that can use this tool alongside other capabilities.

Image credit https://x.com/adamdotdev
## Installation
+Let's install the necessary packages to build our Tiny Agents.
+
+
+
+Some MCP Clients, notably Claude Desktop, do not yet support SSE-based MCP Servers. In those cases, you can use a tool such as [mcp-remote](https://github.com/geelen/mcp-remote). First install Node.js. Then, add the following to your own MCP Client config:
+
+
+
+
+
+
First, we need to install the `tiny-agents` package.
```bash
@@ -25,10 +36,37 @@ npm i mcp-remote
pnpm add mcp-remote
```
+
+
+
+First, you need to install the latest version of `huggingface_hub` with the `mcp` extra to get all the necessary components.
+
+```bash
+pip install "huggingface_hub[mcp]>=0.32.0"
+```
+
+Then, we need to install the `mcp-remote` package.
+
+```bash
+npm install mcp-remote
+```
+
+And we'll need `npx` to run the `mcp-remote` command. `npx` ships with npm (version 5.2 and later), so if you installed Node.js you should already have it. You can check with:
+
+```bash
+npx --version
+```
+
+
+
+
## Tiny Agents MCP Client in the Command Line
Tiny Agents can create MCP clients from the command line based on JSON configuration files.
+
+
+
Let's setup a project with a basic Tiny Agent.
```bash
@@ -57,6 +95,52 @@ The JSON file will look like this:
}
```
+We can then run the agent with the following command:
+
+```bash
+npx @huggingface/tiny-agents run ./my-agent
+```
+
+
+
+
+Let's setup a project with a basic Tiny Agent.
+
+```bash
+mkdir my-agent
+touch my-agent/agent.json
+```
+
+The JSON file will look like this:
+
+```json
+{
+ "model": "Qwen/Qwen2.5-72B-Instruct",
+ "provider": "nebius",
+ "servers": [
+ {
+ "type": "stdio",
+ "config": {
+ "command": "npx",
+ "args": [
+ "mcp-remote",
+ "http://localhost:7860/gradio_api/mcp/sse"
+ ]
+ }
+ }
+ ]
+}
+```
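If you prefer to generate this file from code, the same configuration can be written out with a short Python sketch (the paths and model name simply mirror the example above):

```python
import json
import pathlib

# Build the agent.json configuration shown above programmatically,
# so it is guaranteed to be valid JSON.
config = {
    "model": "Qwen/Qwen2.5-72B-Instruct",
    "provider": "nebius",
    "servers": [
        {
            "type": "stdio",
            "config": {
                "command": "npx",
                "args": [
                    "mcp-remote",
                    "http://localhost:7860/gradio_api/mcp/sse",
                ],
            },
        }
    ],
}

path = pathlib.Path("my-agent/agent.json")
path.parent.mkdir(exist_ok=True)
path.write_text(json.dumps(config, indent=2))
print(json.loads(path.read_text())["model"])  # Qwen/Qwen2.5-72B-Instruct
```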
+
+We can then run the agent with the following command:
+
+```bash
+tiny-agents run ./my-agent
+```
+
+
+
+
Here we have a basic Tiny Agent that can connect to our Gradio MCP server. It includes a model, provider, and a server configuration.
| Field | Description |
@@ -67,6 +151,9 @@ Here we have a basic Tiny Agent that can connect to our Gradio MCP server. It in
We could also use an open source model running locally with Tiny Agents.
+
+
+
```json
{
"model": "Qwen/Qwen3-32B",
@@ -86,14 +173,33 @@ We could also use an open source model running locally with Tiny Agents.
}
```
-Here we have a Tiny Agent that can connect to a local model. It includes a model, endpoint URL (`http://localhost:1234/v1`), and a server configuration. The endpoint should be an OpenAI-compatible endpoint.
-
-We can then run the agent with the following command:
+
+
-```bash
-npx @huggingface/tiny-agents run ./my-agent
+```json
+{
+ "model": "Qwen/Qwen3-32B",
+ "endpoint_url": "http://localhost:1234/v1",
+ "servers": [
+ {
+ "type": "stdio",
+ "config": {
+ "command": "npx",
+ "args": [
+ "mcp-remote",
+ "http://localhost:7860/gradio_api/mcp/sse"
+ ]
+ }
+ }
+ ]
+}
```
+
+
+
+Here we have a Tiny Agent that can connect to a local model. It includes a model, an endpoint URL (`http://localhost:1234/v1`), and a server configuration. The endpoint must expose an OpenAI-compatible API, such as the one served by llama.cpp or LM Studio.
+
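Before pointing Tiny Agents at a local endpoint, it can help to see what an OpenAI-compatible chat request looks like. The sketch below only builds the request; the commented line would send it, assuming a local server (for example LM Studio or llama.cpp) is listening on `localhost:1234`:

```python
import json
import urllib.request

# Build (but don't send) a chat completion request against the local
# OpenAI-compatible endpoint from the config above.
payload = {
    "model": "Qwen/Qwen3-32B",
    "messages": [{"role": "user", "content": "Hello!"}],
}
request = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(request)  # requires a running local server
print(request.full_url)
```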
## Custom Tiny Agents MCP Client
Now that we understand both Tiny Agents and Gradio MCP servers, let's see how they work together! The beauty of MCP is that it provides a standardized way for agents to interact with any MCP-compatible server, including our Gradio-based sentiment analysis server from earlier sections.
@@ -102,6 +208,9 @@ Now that we understand both Tiny Agents and Gradio MCP servers, let's see how th
To connect our Tiny Agent to the Gradio sentiment analysis server we built earlier in this unit, we just need to add it to our list of servers. Here's how we can modify our agent configuration:
+
+
+
```ts
const agent = new Agent({
provider: process.env.PROVIDER ?? "nebius",
@@ -120,32 +229,43 @@ const agent = new Agent({
});
```
-Now our agent can use the sentiment analysis tool alongside other tools! For example, it could:
-1. Read text from a file using the filesystem server
-2. Analyze its sentiment using our Gradio server
-3. Write the results back to a file
+
+
-### Example Interaction
+```python
-Here's what a conversation with our agent might look like:
+from huggingface_hub import Agent
+
+agent = Agent(
+ model="Qwen/Qwen2.5-72B-Instruct",
+ provider="nebius",
+ servers=[
+ {
+ "command": "npx",
+ "args": [
+ "mcp-remote",
+ "http://localhost:7860/gradio_api/mcp/sse" # Your Gradio MCP server
+ ]
+ }
+ ],
+)
```
-User: Read the file "feedback.txt" from my Desktop and analyze its sentiment
-
-Agent: I'll help you analyze the sentiment of the feedback file. Let me break this down into steps:
-1. First, I'll read the file using the filesystem tool
-2. Then, I'll analyze its sentiment using the sentiment analysis tool
-3. Finally, I'll write the results to a new file
+
+
-[Agent proceeds to use the tools and provide the analysis]
-```
+Now our agent can use the sentiment analysis tool alongside other tools! For example, it could:
+1. Read text from a file using the filesystem server
+2. Analyze its sentiment using our Gradio server
+3. Write the results back to a file
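
The three steps above can be sketched as plain Python. Note that `sentiment_stub` below is a hypothetical stand-in for the Gradio sentiment tool, used here only to illustrate the read-analyze-write flow the agent orchestrates:

```python
import pathlib

def sentiment_stub(text: str) -> str:
    """Hypothetical stand-in for the MCP sentiment analysis tool."""
    positive = sum(word in text.lower() for word in ("great", "good", "love"))
    negative = sum(word in text.lower() for word in ("bad", "poor", "hate"))
    return "positive" if positive >= negative else "negative"

src = pathlib.Path("feedback.txt")
src.write_text("The product is great, I love it!")        # 1. text on disk
result = sentiment_stub(src.read_text())                  # 2. analyze sentiment
pathlib.Path("feedback_sentiment.txt").write_text(result) # 3. write results back
print(result)  # positive
```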
### Deployment Considerations
When deploying your Gradio MCP server to Hugging Face Spaces, you'll need to update the server URL in your agent configuration to point to your deployed space:
-```ts
+
+```json
{
-  command: "npx",
-  args: [
+  "command": "npx",
+  "args": [
@@ -155,6 +275,7 @@ When deploying your Gradio MCP server to Hugging Face Spaces, you'll need to upd
}
```
+
This allows your agent to use the sentiment analysis tool from anywhere, not just locally!
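
As a rough sketch (assuming the standard `*.hf.space` subdomain pattern and a hypothetical `user/mcp-sentiment` Space id), the deployed endpoint can be derived from the Space id:

```python
def mcp_sse_url(space_id: str) -> str:
    """Derive the Gradio MCP SSE endpoint for a deployed Space.

    Space subdomains replace "/" and "_" with "-" and are lowercased.
    """
    subdomain = space_id.replace("/", "-").replace("_", "-").lower()
    return f"https://{subdomain}.hf.space/gradio_api/mcp/sse"

# Hypothetical Space id -- substitute your own username/space-name.
print(mcp_sse_url("user/mcp-sentiment"))
# https://user-mcp-sentiment.hf.space/gradio_api/mcp/sse
```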
## Conclusion: Our Complete End-to-End MCP Application
@@ -163,7 +284,7 @@ In this unit, we've gone from understanding MCP basics to building a complete en
1. We created a Gradio MCP server that exposes a sentiment analysis tool
2. We learned how to connect to this server using MCP clients
-3. We built a tiny agent in TypeScript that can interact with our tool
+3. We built a tiny agent in TypeScript and Python that can interact with our tool
This demonstrates the power of the Model Context Protocol - we can create specialized tools using frameworks we're familiar with (like Gradio), expose them through a standardized interface (MCP), and then have agents seamlessly use these tools alongside other capabilities.
@@ -187,6 +308,9 @@ This section is based on the [Tiny Agents blog post](https://huggingface.co/blog
In this section, we'll show you how to build an agent that can perform web automation tasks like searching, clicking, and extracting information from websites.
+
+
+
```ts
// playwright-agent.ts
import { Agent } from "@huggingface/tiny-agents";
@@ -206,6 +330,32 @@ const agent = new Agent({
await agent.run();
```
+
+
+
+```python
+# playwright_agent.py
+import os
+from huggingface_hub import Agent
+
+agent = Agent(
+ provider=os.environ.get("PROVIDER", "nebius"),
+ model=os.environ.get("MODEL_ID", "Qwen/Qwen2.5-72B-Instruct"),
+ api_key=os.environ.get("HF_TOKEN"),
+ servers=[
+ {
+ "command": "npx",
+ "args": ["@playwright/mcp@latest"]
+ }
+ ],
+)
+
+agent.run()
+```
+
+
+
+
The Playwright MCP server exposes tools that allow your agent to:
1. Open browser tabs
@@ -233,6 +383,9 @@ This browser automation capability can be combined with other MCP servers to cre
## How to run the complete demo
+
+
+
If you have NodeJS (with `pnpm` or `npm`), just run this in a terminal:
```bash
@@ -247,6 +400,21 @@ pnpx @huggingface/mcp-client
This installs the package into a temporary folder then executes its command.
+
+
+
+If you have Python installed, you can run this in a terminal:
+
+```bash
+pip install "huggingface_hub[mcp]>=0.32.0"
+tiny-agents run
+```
+
+This installs the package and runs the tiny agents client.
+
+
+
+
You'll see your simple Agent connect to multiple MCP servers (running locally), loading their tools (similar to how it would load your Gradio sentiment analysis tool), then prompting you for a conversation.