# Agents on the Hub

This page compiles all the libraries and tools Hugging Face offers for agentic workflows: huggingface.js tiny-agents, huggingface_hub (Python) tiny-agents, Gradio MCP Server, and smolagents.

## smolagents

Learn more [in the documentation](https://huggingface.co/docs/smolagents/tutorials/tools#use-mcp-tools-with-mcpclient-directly).

## @huggingface/tiny-agents (JS)

`@huggingface/tiny-agents` offers a lightweight toolkit for running and building MCP-powered agents on top of the Hugging Face Inference Client + Model Context Protocol (MCP).

**Getting Started**

```bash
npx @huggingface/tiny-agents [command] "agent/id"
```

```
Usage:
  tiny-agents [flags]
  tiny-agents run "agent/id"
  tiny-agents serve "agent/id"

Available Commands:
  run     Run the Agent in command-line
  serve   Run the Agent as an OpenAI-compatible HTTP server
```

You can load agents directly from the Hugging Face Hub [tiny-agents](https://huggingface.co/datasets/tiny-agents/tiny-agents) Dataset, or specify a path to your own local agent configuration.

**Define Custom Agents**

To create your own agent, set up a folder (e.g., `my-agent/`) containing an `agent.json` file. The following example shows a web-browsing agent configured to use the [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) model via the Nebius inference provider, equipped with a Playwright MCP server that lets it use a web browser.

```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "servers": [
    {
      "type": "stdio",
      "config": {
        "command": "npx",
        "args": ["@playwright/mcp@latest"]
      }
    }
  ]
}
```

To use a local LLM (such as [llama.cpp](https://github.com/ggerganov/llama.cpp) or [LM Studio](https://lmstudio.ai/)), just provide an `endpointUrl`:

```json
{
  "model": "Qwen/Qwen3-32B",
  "endpointUrl": "http://localhost:1234/v1",
  "servers": [
    {
      "type": "stdio",
      "config": {
        "command": "npx",
        "args": ["@playwright/mcp@latest"]
      }
    }
  ]
}
```

Optionally, add a `PROMPT.md` to customize the system prompt.
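
The resulting folder layout is just the config plus the optional prompt file (the `my-agent/` name comes from the example above; only `agent.json` is required):

```
my-agent/
├── agent.json
└── PROMPT.md   # optional custom system prompt
```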

**Advanced Usage**

In addition to the CLI, you can use the `Agent` class for more fine-grained control. For lower-level interactions, use the `MCPClient` from the `@huggingface/mcp-client` package to connect directly to MCP servers and manage tool calls.

Learn more about tiny-agents in the [huggingface.js documentation](https://huggingface.co/docs/huggingface.js/en/tiny-agents/README).

## huggingface_hub (Python)

The `huggingface_hub` library is the easiest way to run MCP-powered agents in Python. It includes a high-level `tiny-agents` CLI as well as programmatic access via the `Agent` and `MCPClient` classes — all built to work with [Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers/index), local LLMs, or any inference endpoint compatible with OpenAI's API specs.

**Getting started**

Install the latest version with MCP support:
```bash
pip install "huggingface_hub[mcp]>=0.32.2"
```
Then, you can run your agent:
```bash
> tiny-agents run --help

 Usage: tiny-agents run [OPTIONS] [PATH] COMMAND [ARGS]...

 Run the Agent in the CLI


╭─ Arguments ──────────────────────────────────────────────────────────────────────────────────────╮
│   path      [PATH]  Path to a local folder containing an agent.json file or a built-in agent      │
│                     stored in the 'tiny-agents/tiny-agents' Hugging Face dataset                  │
│                     (https://huggingface.co/datasets/tiny-agents/tiny-agents)                     │
╰────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────────────────────────╮
│ --help          Show this message and exit.                                                       │
╰────────────────────────────────────────────────────────────────────────────────────────────────────╯

```

The CLI pulls the config, connects to its MCP servers, prints the available tools, and waits for your prompt.
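
If you prefer to stay in Python rather than go through the CLI, the same configuration can be driven programmatically with the `Agent` class. The snippet below is a minimal sketch rather than a verbatim excerpt from the reference: it assumes the `Agent` constructor accepts the same fields as `agent.json` (`model`, `provider`, `servers`) and that `load_tools()` / `run()` are used to connect and stream a turn; double-check the MCP documentation linked below for the exact signatures.

```python
import asyncio

from huggingface_hub import Agent


async def main():
    # Assumed to mirror the agent.json fields shown earlier (model, provider, servers).
    agent = Agent(
        model="Qwen/Qwen2.5-72B-Instruct",
        provider="nebius",
        servers=[
            {
                "type": "stdio",
                "config": {"command": "npx", "args": ["@playwright/mcp@latest"]},
            }
        ],
    )
    await agent.load_tools()  # connect to the MCP servers and register their tools

    # Stream the agent's answer; chunks include text deltas and tool-call updates.
    async for chunk in agent.run("Which models are trending on huggingface.co right now?"):
        choices = getattr(chunk, "choices", None)
        if choices and choices[0].delta.content:
            print(choices[0].delta.content, end="", flush=True)


asyncio.run(main())
```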

**Advanced Usage**

For more fine-grained control, use the `MCPClient` directly. This low-level interface extends `AsyncInferenceClient` and allows LLMs to call tools via the Model Context Protocol (MCP). It supports both local (`stdio`) and remote (`http`/`sse`) MCP servers, handles tool registration and execution, and streams results back to the model in real time.
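
As a rough sketch of what that looks like in practice (method names like `add_mcp_server` and `process_single_turn_with_tools` follow the MCP reference linked below; treat the exact arguments here as assumptions and verify them against the docs):

```python
import asyncio
import os

from huggingface_hub import MCPClient


async def main():
    # MCPClient extends AsyncInferenceClient, so it accepts the usual model/provider/api_key arguments.
    client = MCPClient(
        model="Qwen/Qwen2.5-72B-Instruct",
        provider="nebius",
        api_key=os.environ.get("HF_TOKEN"),
    )

    # Register a local (stdio) MCP server; the arguments are assumed to mirror
    # the stdio config used in agent.json above.
    await client.add_mcp_server(type="stdio", command="npx", args=["@playwright/mcp@latest"])

    # Run one conversational turn: the client exposes the server's tools to the model,
    # executes any tool calls the model makes, and streams the chunks back.
    messages = [{"role": "user", "content": "Open huggingface.co and list the trending models"}]
    async for chunk in client.process_single_turn_with_tools(messages):
        print(chunk)


asyncio.run(main())
```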

Learn more in the [`huggingface_hub` MCP documentation](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/mcp).

<Tip>

Don't hesitate to contribute your agent to the community by opening a Pull Request in the [tiny-agents](https://huggingface.co/datasets/tiny-agents/tiny-agents) Hugging Face dataset.

</Tip>

## Gradio MCP Server / Tools
