# Tiny Agents: an MCP-powered agent in 50 lines of code

Now that we've built MCP servers in Gradio, let's explore MCP clients further. This section is based on the experimental project [Tiny Agents](https://huggingface.co/blog/tiny-agents), a super simple way of deploying MCP clients that use Hugging Face Inference Providers.

It is fairly simple to extend an Inference Client – at HF, we have two official client SDKs: [`@huggingface/inference`](https://github.com/huggingface/huggingface.js) in JS, and [`huggingface_hub`](https://github.com/huggingface/huggingface_hub/) in Python – to also act as an MCP client and hook the available tools from MCP servers into the LLM inference.

<Tip>

Once you have an MCP Client, an Agent is literally just a while loop on top of it.

</Tip>

In this short exercise, we will walk you through how to implement a TypeScript (JS) MCP client, how you can adopt MCP yourself, and how it is going to make agentic AI much simpler going forward.

<figcaption>Image credit https://x.com/adamdotdev</figcaption>

We will also show you how to connect your tiny agent to the Gradio-based MCP server from the previous section.

## How to run the complete demo

If you have NodeJS (with `pnpm` or `npm`), just run this in a terminal:

```bash
npx @huggingface/mcp-client
```

or if using `pnpm`:

```bash
pnpx @huggingface/mcp-client
```

This installs the package into a temporary folder and then executes its command.

You'll see your simple Agent connect to two distinct MCP servers (running locally), load their tools, and then prompt you for a conversation.

<video controls autoplay loop>
  <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/tiny-agents/use-filesystem.mp4" type="video/mp4">
</video>

By default, our example Agent connects to the following two MCP servers:

- the "canonical" [file system server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem), which gets access to your Desktop,
- and the [Playwright MCP](https://github.com/microsoft/playwright-mcp) server, which knows how to use a sandboxed Chromium browser for you (see the configuration sketch below).
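
Both of these run as local processes spawned over stdio. As a rough sketch (assuming the `StdioServerParameters` shape from the MCP SDK; the exact list shipped in the package may differ), such a server list could be declared like this:

```ts
import { homedir } from "node:os";
import { join } from "node:path";
import type { StdioServerParameters } from "@modelcontextprotocol/sdk/client/stdio.js";

// Illustrative server list: each entry spawns a local MCP server over stdio.
const SERVERS: StdioServerParameters[] = [
  {
    // Filesystem server, scoped to the Desktop folder
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", join(homedir(), "Desktop")],
  },
  {
    // Playwright MCP server, driving a sandboxed Chromium browser
    command: "npx",
    args: ["@playwright/mcp@latest"],
  },
];
```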

> [!NOTE]
> This is a bit counter-intuitive, but currently all MCP servers are actually local processes (though remote servers are coming soon).

Our input for this first video was:

> write a haiku about the Hugging Face community and write it to a file named "hf.txt" on my Desktop

Now let us try this prompt that involves some Web browsing:

> do a Web Search for HF inference providers on Brave Search and open the first 3 results

<video controls autoplay loop>
  <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/tiny-agents/brave-search.mp4" type="video/mp4">
</video>

### Default model and provider

In terms of model/provider pair, our example Agent uses by default:
- ["Qwen/Qwen2.5-72B-Instruct"](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
- running on [Nebius](https://huggingface.co/docs/inference-providers/providers/nebius)

This is all configurable through env variables! See:

```ts
const agent = new Agent({
  provider: process.env.PROVIDER ?? "nebius",
  model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
  apiKey: process.env.HF_TOKEN,
  servers: SERVERS,
});
```

## Where does the code live

The Tiny Agent code lives in the `mcp-client` sub-package of the `huggingface.js` mono-repo, the GitHub repository in which all our JS libraries reside.

https://github.com/huggingface/huggingface.js/tree/main/packages/mcp-client

> [!TIP]
> The codebase uses modern JS features (notably, async generators) which make things way easier to implement, especially for asynchronous events like the LLM responses.
> You might need to ask an LLM about those JS features if you're not yet familiar with them.
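
If async generators are new to you, here is a tiny standalone illustration of the pattern (not taken from the codebase): a producer that `yield`s values as asynchronous events arrive, and a consumer that handles each one with `for await`:

```ts
// A producer that yields values as asynchronous events arrive,
// much like streaming LLM tokens or tool-call events.
async function* streamWords(sentence: string): AsyncGenerator<string> {
  for (const word of sentence.split(" ")) {
    await new Promise((resolve) => setTimeout(resolve, 100)); // simulate latency
    yield word;
  }
}

// The consumer handles each value as soon as it is produced.
for await (const word of streamWords("an Agent is just a while loop")) {
  process.stdout.write(word + " ");
}
```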

## The foundation for this: native tool calling support in LLMs

What makes this whole approach so easy is that the recent crop of LLMs (both closed and open) have been trained for function calling, a.k.a. tool use.

A tool is defined by its name, a description, and a JSONSchema representation of its parameters.
In some sense, it is an opaque representation of any function's interface, as seen from the outside (meaning, the LLM does not care how the function is actually implemented).

```ts
const weatherTool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Get current temperature for a given location.",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "City and country e.g. Bogotá, Colombia",
        },
      },
    },
  },
};
```

The canonical documentation to link here is [OpenAI's function calling doc](https://platform.openai.com/docs/guides/function-calling?api-mode=chat). (Yes... OpenAI pretty much defines the LLM standards for the whole community 😅).

Inference engines let you pass a list of tools when calling the LLM, and the LLM is free to call zero, one or more of those tools.
As a developer, you run the tools and feed their results back into the LLM to continue the generation.
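
To make that round trip concrete, here is a minimal sketch using HF's `InferenceClient` and the `weatherTool` above. `getWeather` is a hypothetical stand-in for your own implementation, and the exact response types may differ slightly from what is shown:

```ts
import { InferenceClient } from "@huggingface/inference";

// Hypothetical tool implementation: in real code this would call a weather API.
async function getWeather(location: string): Promise<string> {
  return `It is 25°C and sunny in ${location}.`;
}

const client = new InferenceClient(process.env.HF_TOKEN);
const messages: any[] = [{ role: "user", content: "What's the weather in Bogotá right now?" }];

// First call: the LLM may decide to call our tool instead of answering directly.
const response = await client.chatCompletion({
  provider: "nebius",
  model: "Qwen/Qwen2.5-72B-Instruct",
  messages,
  tools: [weatherTool],
  tool_choice: "auto",
});

const assistantMessage = response.choices[0].message;
const toolCall = assistantMessage.tool_calls?.[0];
if (toolCall) {
  // The arguments arrive JSON-encoded; run the tool ourselves...
  const args =
    typeof toolCall.function.arguments === "string"
      ? JSON.parse(toolCall.function.arguments)
      : toolCall.function.arguments;
  const result = await getWeather(args.location);

  // ...then feed the result back so the LLM can finish its answer in natural language.
  messages.push(assistantMessage);
  messages.push({ role: "tool", tool_call_id: toolCall.id, name: toolCall.function.name, content: result });
  const followUp = await client.chatCompletion({
    provider: "nebius",
    model: "Qwen/Qwen2.5-72B-Instruct",
    messages,
    tools: [weatherTool],
  });
  console.log(followUp.choices[0].message.content);
}
```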

> [!NOTE]
> Note that in the backend (at the inference engine level), the tools are simply passed to the model in a specially-formatted `chat_template`, like any other message, and then parsed out of the response (using model-specific special tokens) to expose them as tool calls.

## Implementing an MCP client on top of InferenceClient

Now that we know what a tool is in recent LLMs, let us implement the actual MCP client.

The official doc at https://modelcontextprotocol.io/quickstart/client is fairly well-written. You only have to replace any mention of the Anthropic client SDK with any other OpenAI-compatible client SDK. (There is also a [llms.txt](https://modelcontextprotocol.io/llms-full.txt) you can feed into your LLM of choice to help you code along.)

As a reminder, we use HF's `InferenceClient` for our inference client.

> [!TIP]
> The complete `McpClient.ts` code file is [here](https://github.com/huggingface/huggingface.js/blob/main/packages/mcp-client/src/McpClient.ts) if you want to follow along using the actual code 🤓

Our `McpClient` class has:
- an Inference Client (works with any Inference Provider, and `huggingface/inference` supports both remote and local endpoints)
- a set of MCP client sessions, one for each connected MCP server (yes, we want to support multiple servers)
- and a list of available tools that is going to be filled from the connected servers and just slightly re-formatted.

```ts
export class McpClient {
  protected client: InferenceClient;
  protected provider: string;
  protected model: string;
  private clients: Map<ToolName, Client> = new Map();
  public readonly availableTools: ChatCompletionInputTool[] = [];

  constructor({ provider, model, apiKey }: { provider: InferenceProvider; model: string; apiKey: string }) {
    this.client = new InferenceClient(apiKey);
    this.provider = provider;
    this.model = model;
  }

  // [...]
}
```

To connect to an MCP server, the official `@modelcontextprotocol/sdk/client` TypeScript SDK provides a `Client` class with a `listTools()` method:

```ts
async addMcpServer(server: StdioServerParameters): Promise<void> {
  const transport = new StdioClientTransport({
    ...server,
    env: { ...server.env, PATH: process.env.PATH ?? "" },
  });
  const mcp = new Client({ name: "@huggingface/mcp-client", version: packageVersion });
  await mcp.connect(transport);

  const toolsResult = await mcp.listTools();
  debug(
    "Connected to server with tools:",
    toolsResult.tools.map(({ name }) => name)
  );

  for (const tool of toolsResult.tools) {
    this.clients.set(tool.name, mcp);
  }

  this.availableTools.push(
    ...toolsResult.tools.map((tool) => {
      return {
        type: "function",
        function: {
          name: tool.name,
          description: tool.description,
          parameters: tool.inputSchema,
        },
      } satisfies ChatCompletionInputTool;
    })
  );
}
```

`StdioServerParameters` is an interface from the MCP SDK that lets you easily spawn a local process: as we mentioned earlier, currently, all MCP servers are actually local processes.

For each MCP server we connect to, we slightly re-format its list of tools and add them to `this.availableTools`.
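
As a hypothetical usage sketch (the server entry is illustrative, not taken from the package), connecting a `McpClient` to the filesystem server and listing the collected tools could look like this:

```ts
import { homedir } from "node:os";
import { join } from "node:path";

// Illustrative: connect to the filesystem server and inspect the tools it exposes.
const mcpClient = new McpClient({
  provider: "nebius",
  model: "Qwen/Qwen2.5-72B-Instruct",
  apiKey: process.env.HF_TOKEN!,
});

await mcpClient.addMcpServer({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", join(homedir(), "Desktop")],
});

// Each entry is already in the shape the chat-completion API expects.
console.log(mcpClient.availableTools.map((tool) => tool.function.name));
```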

### How to use the tools

Easy, you just pass `this.availableTools` to your LLM chat-completion, in addition to your usual array of messages:

```ts
const stream = this.client.chatCompletionStream({
  provider: this.provider,
  model: this.model,
  messages,
  tools: this.availableTools,
  tool_choice: "auto",
});
```

`tool_choice: "auto"` is the parameter you pass for the LLM to generate zero, one, or multiple tool calls.
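
Since we stream the completion, tool calls arrive as incremental deltas. Below is a minimal sketch of accumulating them, assuming an OpenAI-style delta shape in the streamed chunks (the actual `McpClient.ts` handles this more completely):

```ts
// Sketch: merge streamed tool-call deltas into complete tool calls.
// Assumes chunks shaped like choices[0].delta.{content, tool_calls[]}.
const finalToolCalls: Record<number, { id?: string; function: { name: string; arguments: string } }> = {};
let assistantText = "";

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta;
  if (delta?.content) {
    assistantText += delta.content; // regular text tokens
  }
  for (const toolCall of delta?.tool_calls ?? []) {
    const acc = (finalToolCalls[toolCall.index] ??= {
      id: toolCall.id,
      function: { name: "", arguments: "" },
    });
    if (toolCall.function?.name) {
      acc.function.name = toolCall.function.name; // the function name arrives once
    }
    if (toolCall.function?.arguments) {
      acc.function.arguments += toolCall.function.arguments; // JSON arguments arrive in fragments
    }
  }
}
```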

When parsing or streaming the output, the LLM will generate some tool calls (i.e. a function name, and some JSON-encoded arguments), which you (as a developer) need to execute. The MCP client SDK once again makes that very easy; it has a `client.callTool()` method:

```ts
const toolName = toolCall.function.name;
const toolArgs = JSON.parse(toolCall.function.arguments);

const toolMessage: ChatCompletionInputMessageTool = {
  role: "tool",
  tool_call_id: toolCall.id,
  content: "",
  name: toolName,
};

// Get the appropriate session for this tool
const client = this.clients.get(toolName);
if (client) {
  const result = await client.callTool({ name: toolName, arguments: toolArgs });
  toolMessage.content = result.content[0].text;
} else {
  toolMessage.content = `Error: No session found for tool: ${toolName}`;
}
```

Finally, you add the resulting tool message to your `messages` array and feed it back into the LLM.
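
In sketch form, reusing the names from the snippets above, that continuation step is simply:

```ts
// Sketch: append the tool result and ask the LLM to continue from there.
messages.push(toolMessage);
const nextStream = this.client.chatCompletionStream({
  provider: this.provider,
  model: this.model,
  messages,
  tools: this.availableTools,
  tool_choice: "auto",
});
```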

## Our 50-lines-of-code Agent 🤯

Now that we have an MCP client capable of connecting to arbitrary MCP servers to get lists of tools, and capable of injecting them into and parsing them from the LLM inference, well... what is an Agent?

> Once you have an inference client with a set of tools, then an Agent is just a while loop on top of it.

In more detail, an Agent is simply a combination of:
- a system prompt
- an LLM Inference client
- an MCP client to hook a set of Tools into it from a bunch of MCP servers
- some basic control flow (see below for the while loop)

> [!TIP]
> The complete `Agent.ts` code file is [here](https://github.com/huggingface/huggingface.js/blob/main/packages/mcp-client/src/Agent.ts).

Our `Agent` class simply extends `McpClient`:

```ts
export class Agent extends McpClient {
  private readonly servers: StdioServerParameters[];
  protected messages: ChatCompletionInputMessage[];

  constructor({
    provider,
    model,
    apiKey,
    servers,
    prompt,
  }: {
    provider: InferenceProvider;
    model: string;
    apiKey: string;
    servers: StdioServerParameters[];
    prompt?: string;
  }) {
    super({ provider, model, apiKey });
    this.servers = servers;
    this.messages = [
      {
        role: "system",
        content: prompt ?? DEFAULT_SYSTEM_PROMPT,
      },
    ];
  }
}
```

By default, we use a very simple system prompt inspired by the one shared in the [GPT-4.1 prompting guide](https://cookbook.openai.com/examples/gpt4-1_prompting_guide).

Even though this comes from OpenAI 😈, this sentence in particular applies to more and more models, both closed and open:

> We encourage developers to exclusively use the tools field to pass tools, rather than manually injecting tool descriptions into your prompt and writing a separate parser for tool calls, as some have reported doing in the past.

Which is to say, we don't need to provide painstakingly formatted lists of tool use examples in the prompt. The `tools: this.availableTools` param is enough.

Loading the tools on the Agent is literally just connecting to the MCP servers we want (in parallel because it's so easy to do in JS):

```ts
async loadTools(): Promise<void> {
  await Promise.all(this.servers.map((s) => this.addMcpServer(s)));
}
```

We add two extra tools (outside of MCP) that can be used by the LLM for our Agent's control flow:

```ts
const taskCompletionTool: ChatCompletionInputTool = {
  type: "function",
  function: {
    name: "task_complete",
    description: "Call this tool when the task given by the user is complete",
    parameters: {
      type: "object",
      properties: {},
    },
  },
};
const askQuestionTool: ChatCompletionInputTool = {
  type: "function",
  function: {
    name: "ask_question",
    description: "Ask a question to the user to get more info required to solve or clarify their problem.",
    parameters: {
      type: "object",
      properties: {},
    },
  },
};
const exitLoopTools = [taskCompletionTool, askQuestionTool];
```

When calling any of these tools, the Agent will break its loop and give control back to the user for new input.

### The complete while loop

Behold our complete while loop. 🎉

The gist of our Agent's main while loop is that we simply iterate with the LLM, alternating between tool calling and feeding it the tool results, and we do so **until the LLM starts to respond with two non-tool messages in a row**.

This is the complete while loop:

```ts
let numOfTurns = 0;
let nextTurnShouldCallTools = true;
while (true) {
  try {
    yield* this.processSingleTurnWithTools(this.messages, {
      exitLoopTools,
      exitIfFirstChunkNoTool: numOfTurns > 0 && nextTurnShouldCallTools,
      abortSignal: opts.abortSignal,
    });
  } catch (err) {
    if (err instanceof Error && err.message === "AbortError") {
      return;
    }
    throw err;
  }
  numOfTurns++;
  const currentLast = this.messages.at(-1)!;
  if (
    currentLast.role === "tool" &&
    currentLast.name &&
    exitLoopTools.map((t) => t.function.name).includes(currentLast.name)
  ) {
    return;
  }
  if (currentLast.role !== "tool" && numOfTurns > MAX_NUM_TURNS) {
    return;
  }
  if (currentLast.role !== "tool" && nextTurnShouldCallTools) {
    return;
  }
  if (currentLast.role === "tool") {
    nextTurnShouldCallTools = false;
  } else {
    nextTurnShouldCallTools = true;
  }
}
```
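
To tie it all together, here is a hypothetical usage sketch. It assumes the `Agent` exposes a `run()` async generator that wraps this loop and yields both stream chunks and tool results (as in the actual `Agent.ts`, though details may differ), and it reuses the illustrative `SERVERS` list from earlier:

```ts
// Illustrative: load the tools, then stream one conversation turn to the console.
const agent = new Agent({
  provider: "nebius",
  model: "Qwen/Qwen2.5-72B-Instruct",
  apiKey: process.env.HF_TOKEN!,
  servers: SERVERS,
});

await agent.loadTools();

for await (const chunk of agent.run("Write a haiku about the Hugging Face community and save it to hf.txt on my Desktop")) {
  if ("choices" in chunk) {
    // Regular streamed LLM output
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  } else {
    // A tool-call result coming back from an MCP server
    console.log(`\n[tool result] ${JSON.stringify(chunk)}`);
  }
}
```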

## Next steps

There are many cool potential next steps once you have a running MCP Client and a simple way to build Agents 🔥

- Experiment with **other models**
  - [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) is optimized for function calling
  - Gemma 3 27B, i.e. the [Gemma 3 QAT](https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b) models, is a popular choice for function calling, though it would require us to implement tool parsing as it doesn't use native `tools` (a PR would be welcome!)
- Experiment with all the **[Inference Providers](https://huggingface.co/docs/inference-providers/index)**:
  - Cerebras, Cohere, Fal, Fireworks, Hyperbolic, Nebius, Novita, Replicate, SambaNova, Together, etc.
  - each of them has different optimizations for function calling (also depending on the model), so performance may vary!
- Hook up **local LLMs** using llama.cpp or LM Studio

Pull requests and contributions are welcome!
Again, everything here is [open source](https://github.com/huggingface/huggingface.js)! 💎❤️
