diff --git a/docs/hub/agents.md b/docs/hub/agents.md
index ed4ed827f..e49aaaffd 100644
--- a/docs/hub/agents.md
+++ b/docs/hub/agents.md
@@ -24,49 +24,10 @@ This feature is experimental ⚗️ and will continue to evolve.
 
-## smolagents
-
-[smolagents](https://github.com/huggingface/smolagents) is a lightweight library to cover all agentic use cases, from code-writing agents to computer use, in few lines of code. It is model agnostic, supporting local models served with Hugging Face Transformers, as well as models offered with [Inference Providers](../inference-providers/index.md), and proprietary model providers.
-
-It offers a unique kind of agent :`CodeAgent`, an agent that writes its actions in Python code.
-It also supports the standard agent that writes actions in JSON blobs as most other agentic frameworks do, called `ToolCallingAgent`.
-To learn more about write actions in code vs JSON, check out our [new short course on DeepLearning.AI](https://www.deeplearning.ai/short-courses/building-code-agents-with-hugging-face-smolagents/).
-
-If you want to avoid defining agents yourself, the easiest way to start an agent is through the CLI, using the `smolagent` command.
-
-```bash
-smolagent "Plan a trip to Tokyo, Kyoto and Osaka between Mar 28 and Apr 7." \
---model-type "InferenceClientModel" \
---model-id "Qwen/Qwen2.5-Coder-32B-Instruct" \
---imports "pandas numpy" \
---tools "web_search"
-```
-
-Agents can be pushed to Hugging Face Hub as Spaces. Check out all the cool agents people have built [here](https://huggingface.co/spaces?filter=smolagents&sort=likes).
-
-smolagents also supports MCP servers as tools, as follows:
-
-```python
-# pip install --upgrade smolagents mcp
-from smolagents import MCPClient, CodeAgent
-from mcp import StdioServerParameters
-import os
-
-server_parameters = StdioServerParameters(
-    command="uvx",  # Using uvx ensures dependencies are available
-    args=["--quiet", "pubmedmcp@0.1.3"],
-    env={"UV_PYTHON": "3.12", **os.environ},
-)
-
-with MCPClient(server_parameters) as tools:
-    agent = CodeAgent(tools=tools, model=model, add_base_tools=True)
-    agent.run("Please find the latest research on COVID-19 treatment.")
-```
-
-Learn more [in the documentation](https://huggingface.co/docs/smolagents/tutorials/tools#use-mcp-tools-with-mcpclient-directly).
-
 ## tiny-agents (JS and Python)
 
+NEW: tiny-agents now supports the [AGENTS.md](https://agents.md/) standard. 🥳
+
 `tiny-agents` is a lightweight toolkit for running and building MCP-powered agents on top of the Hugging Face Inference Client + Model Context Protocol (MCP). It is available as a JS package `@huggingface/tiny-agents` and in the `huggingface_hub` Python package.
 
@@ -234,3 +195,43 @@ Lastly, add this to the settings of the MCP Client of your choice (e.g. Cursor).
 This is very powerful because it lets the LLM use any Gradio application as a tool. You can find thousands of them on [Spaces](https://huggingface.co/spaces).
 
 Learn more [here](https://www.gradio.app/guides/building-mcp-server-with-gradio).
+
+## smolagents
+
+[smolagents](https://github.com/huggingface/smolagents) is a lightweight library that covers all agentic use cases, from code-writing agents to computer use, in a few lines of code. It is model-agnostic, supporting local models served with Hugging Face Transformers as well as models offered through [Inference Providers](../inference-providers/index.md) and proprietary model providers.
+
+It offers a unique kind of agent: `CodeAgent`, an agent that writes its actions in Python code.
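+
+For example, a minimal `CodeAgent` setup might look like the following sketch (the model ID is reused from the CLI example below; any model available through Inference Providers should work):
+
+```python
+# pip install --upgrade smolagents
+from smolagents import CodeAgent, InferenceClientModel
+
+# Example model ID; swap in any model served through Inference Providers.
+model = InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")
+
+# add_base_tools=True equips the agent with a small default toolbox (e.g. web search).
+agent = CodeAgent(tools=[], model=model, add_base_tools=True)
+agent.run("Plan a day trip to Kyoto.")
+```
+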
+It also supports `ToolCallingAgent`, the standard kind of agent that writes its actions as JSON blobs, as most other agentic frameworks do.
+To learn more about writing actions in code vs. JSON, check out our [short course on DeepLearning.AI](https://www.deeplearning.ai/short-courses/building-code-agents-with-hugging-face-smolagents/).
+
+If you want to avoid defining agents yourself, the easiest way to start an agent is through the CLI, using the `smolagent` command.
+
+```bash
+smolagent "Plan a trip to Tokyo, Kyoto and Osaka between Mar 28 and Apr 7." \
+--model-type "InferenceClientModel" \
+--model-id "Qwen/Qwen2.5-Coder-32B-Instruct" \
+--imports "pandas numpy" \
+--tools "web_search"
+```
+
+Agents can be pushed to the Hugging Face Hub as Spaces. Check out all the cool agents people have built [here](https://huggingface.co/spaces?filter=smolagents&sort=likes).
+
+smolagents also supports MCP servers as tools, as follows:
+
+```python
+# pip install --upgrade smolagents mcp
+from smolagents import CodeAgent, InferenceClientModel, MCPClient
+from mcp import StdioServerParameters
+import os
+
+# Any smolagents model object works here; this reuses the Inference Providers model from above.
+model = InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")
+
+server_parameters = StdioServerParameters(
+    command="uvx",  # Using uvx ensures dependencies are available
+    args=["--quiet", "pubmedmcp@0.1.3"],
+    env={"UV_PYTHON": "3.12", **os.environ},
+)
+
+with MCPClient(server_parameters) as tools:
+    agent = CodeAgent(tools=tools, model=model, add_base_tools=True)
+    agent.run("Please find the latest research on COVID-19 treatment.")
+```
+
+Learn more [in the documentation](https://huggingface.co/docs/smolagents/tutorials/tools#use-mcp-tools-with-mcpclient-directly).
\ No newline at end of file