
Commit 010edb4

merveenoyan, aymeric-roucher, abidlabs, pcuenca, and Vaibhavs10 authored
Agents landing page (#1728)
---------

Co-authored-by: Aymeric Roucher <[email protected]>
Co-authored-by: Abubakar Abid <[email protected]>
Co-authored-by: Pedro Cuenca <[email protected]>
Co-authored-by: vb <[email protected]>
Co-authored-by: Julien Chaumond <[email protected]>
1 parent bf3dfa4 commit 010edb4

File tree

2 files changed: +115 -0 lines


docs/hub/_toctree.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -398,6 +398,8 @@
       title: "Protect AI"
     - local: security-jfrog
       title: "JFrog"
+    - local: agents
+      title: Agents on Hub
     - local: moderation
       title: Moderation
     - local: paper-pages
```

docs/hub/agents.md

Lines changed: 113 additions & 0 deletions
# Agents on the Hub

This page compiles all the libraries and tools Hugging Face offers for agentic workflows: huggingface.js mcp-client, Gradio MCP Server, and smolagents.

## smolagents

[smolagents](https://github.com/huggingface/smolagents) is a lightweight library that covers all agentic use cases, from code-writing agents to computer use, in a few lines of code. It is model-agnostic, supporting local models served with Hugging Face Transformers, models offered through [Inference Providers](../inference-providers/index.md), and proprietary model providers.
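
As a quick illustration of that model-agnosticism, here is a minimal sketch of swapping between the model classes smolagents exposes. It assumes a recent smolagents release and valid credentials for the remote options; the model ids are just examples:

```python
# pip install "smolagents[transformers,litellm]"
from smolagents import InferenceClientModel, LiteLLMModel, TransformersModel

# A model served through Inference Providers
model = InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")

# ...or a local model loaded with Hugging Face Transformers
# model = TransformersModel(model_id="HuggingFaceTB/SmolLM2-1.7B-Instruct")

# ...or a proprietary provider routed through LiteLLM
# model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest")
```

Whichever class you pick, the resulting `model` object plugs into the same agent constructors.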

smolagents offers a unique kind of agent: `CodeAgent`, an agent that writes its actions in Python code. It also supports the standard agent that writes actions in JSON blobs, as most other agentic frameworks do, called `ToolCallingAgent`. To learn more about writing actions in code vs. JSON, check out our [short course on DeepLearning.AI](https://www.deeplearning.ai/short-courses/building-code-agents-with-hugging-face-smolagents/).
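
Both agent types share the same interface, so switching between code actions and JSON tool calls is a one-line change. A minimal sketch, assuming an Inference Providers model as above:

```python
from smolagents import CodeAgent, InferenceClientModel, ToolCallingAgent

model = InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")

# Writes its actions as Python code snippets
code_agent = CodeAgent(tools=[], model=model, add_base_tools=True)

# Writes its actions as JSON tool calls, like most other frameworks
tool_agent = ToolCallingAgent(tools=[], model=model, add_base_tools=True)

code_agent.run("How many seconds are there in a leap year?")
```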

If you want to avoid defining agents yourself, the easiest way to start an agent is through the CLI, using the `smolagent` command.

```bash
smolagent "Plan a trip to Tokyo, Kyoto and Osaka between Mar 28 and Apr 7." \
    --model-type "InferenceClientModel" \
    --model-id "Qwen/Qwen2.5-Coder-32B-Instruct" \
    --imports "pandas numpy" \
    --tools "web_search"
```

Agents can be pushed to the Hugging Face Hub as Spaces. Check out all the cool agents people have built [here](https://huggingface.co/spaces?filter=smolagents&sort=likes).
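
As a sketch of what sharing looks like from code (assuming you are authenticated with `huggingface-cli login`; the repository id below is hypothetical):

```python
from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(tools=[], model=InferenceClientModel())

# Uploads the agent's definition to a Hub repository you can write to
agent.push_to_hub("your-username/my-agent")
```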

smolagents also supports MCP servers as tools, as follows:

```python
# pip install --upgrade smolagents mcp
import os

from mcp import StdioServerParameters
from smolagents import CodeAgent, InferenceClientModel, MCPClient

# The default Inference Providers model; any smolagents model class works here
model = InferenceClientModel()

server_parameters = StdioServerParameters(
    command="uvx",  # using uvx ensures the server's dependencies are available
    args=["--quiet", "[email protected]"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

with MCPClient(server_parameters) as tools:
    agent = CodeAgent(tools=tools, model=model, add_base_tools=True)
    agent.run("Please find the latest research on COVID-19 treatment.")
```

Learn more [in the documentation](https://huggingface.co/docs/smolagents/tutorials/tools#use-mcp-tools-with-mcpclient-directly).

## huggingface.js mcp-client

Huggingface.js offers an MCP client whose backing LLM can be served through [Inference Providers](https://huggingface.co/docs/inference-providers/en/index) or run locally. Getting started with it is as simple as running `pnpm agent`. You can plug and play different models and providers by setting the `PROVIDER` and `MODEL_ID` environment variables.

```bash
export HF_TOKEN="hf_..."
export MODEL_ID="Qwen/Qwen2.5-72B-Instruct"
export PROVIDER="nebius"
npx @huggingface/mcp-client
```

Alternatively, you can use any local LLM, for example one served via LM Studio:

```bash
ENDPOINT_URL=http://localhost:1234/v1 \
MODEL_ID=lmstudio-community/Qwen3-14B-GGUF \
npx @huggingface/mcp-client
```

You can get more information about mcp-client [here](https://huggingface.co/docs/huggingface.js/en/mcp-client/README).

## Gradio MCP Server / Tools

You can build an MCP server in just a few lines of Python with Gradio. If you have an existing Gradio app or Space you'd like to use as an MCP server / tool, it's just a single-line change.

To make a Gradio application an MCP server, simply pass `mcp_server=True` when launching your demo, as follows.

```python
# pip install "gradio[mcp]"

import gradio as gr
import numpy as np

def generate_image(prompt: str):
    """
    Generate an image based on a text prompt

    Args:
        prompt: a text string describing the image to generate
    """
    # Placeholder so the demo runs end to end; a real app would
    # call an image-generation model here.
    return np.zeros((256, 256, 3), dtype=np.uint8)

demo = gr.Interface(
    fn=generate_image,
    inputs="text",
    outputs="image",
    title="Image Generator",
)

demo.launch(mcp_server=True)
```

The MCP server will be available at `http://your-space-id.hf.space/gradio_api/mcp/sse`, where your application is served. It will have a tool corresponding to each function in your Gradio app, with the tool description automatically generated from the function's docstring.

Lastly, add this to the settings of the MCP client of your choice (e.g. Cursor):

```json
{
  "mcpServers": {
    "gradio": {
      "url": "http://your-server:port/gradio_api/mcp/sse"
    }
  }
}
```

This is very powerful because it lets the LLM use any Gradio application as a tool. You can find thousands of them on [Spaces](https://huggingface.co/spaces). Learn more [here](https://www.gradio.app/guides/building-mcp-server-with-gradio).
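
To tie the sections together, here is a minimal sketch of the other direction: consuming a Gradio Space's MCP endpoint from a smolagents agent. It assumes smolagents' `MCPClient` accepts an SSE server description as a `{"url": ...}` dict, and the Space URL below is a placeholder:

```python
from smolagents import CodeAgent, InferenceClientModel, MCPClient

# Placeholder URL; point this at a real Space that serves an MCP endpoint
server = {"url": "https://your-space-id.hf.space/gradio_api/mcp/sse"}

with MCPClient(server) as tools:
    agent = CodeAgent(tools=tools, model=InferenceClientModel())
    agent.run("Generate an image of a capybara wearing a hat.")
```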
