Merged
Changes from all commits
51 commits
3481098
refactor: update model type to ChatMessage in agent classes (#263)
jank Jan 18, 2025
5aa0f2b
Fixes bug no attribute 'logger' (#259)
joaopauloschuler Jan 18, 2025
6e1373a
Add linter rules + apply make style (#255)
Wauplin Jan 18, 2025
06a8c54
fix additional_args and num_ctx examples also in zh docs (#260)
RolandJAAI Jan 18, 2025
89a6350
Fix quality (#272)
albertvillanova Jan 20, 2025
0abd91c
Improve tool call argument parsing (#267)
aymeric-roucher Jan 20, 2025
3178b18
Remove unused and undocumented parameter (#273)
Wauplin Jan 20, 2025
35f7191
Fix Bug in from_langchain in tools.py (#268)
RolandJAAI Jan 20, 2025
a2ca951
Bump version to 1.5.0.dev (#256)
aymeric-roucher Jan 20, 2025
d19ebc7
Make import time faster (optional deps + delay imports) (#253)
Wauplin Jan 20, 2025
3c18d4d
Python interpreter: improve suggestions for possible mappings (#266)
aymeric-roucher Jan 20, 2025
7a91123
Improve python executor's error logging (#275)
aymeric-roucher Jan 20, 2025
a4612c9
Fix (interpreter security): builtins functions passed as tools enable…
tandiapa Jan 20, 2025
bd08d64
Added Hindi docs for smolagents (#279)
keetrap Jan 20, 2025
a2b37ca
Fix CI quality (#286)
albertvillanova Jan 21, 2025
0e0d73b
Try first dunder method in evaluate_augassign (#285)
albertvillanova Jan 21, 2025
fb23e91
Add huggingface-hub as required dependency (#288)
albertvillanova Jan 21, 2025
1e745c7
Update building_good_agents.md (#283)
derekalia Jan 21, 2025
257c1fe
Update guided_tour.md (#287)
sanjeed5 Jan 21, 2025
16f7910
Make e2b optional dependency (#292)
albertvillanova Jan 21, 2025
cfbd527
Add cool GIF of agent run inspection using Phoenix (#277)
aymeric-roucher Jan 21, 2025
5f5aec3
Remove pickle5 package from E2BExecutor (#295)
albertvillanova Jan 21, 2025
428aedd
Update README and documentation to clarify Hub integrations with Grad…
davidberenstein1957 Jan 22, 2025
2c43546
Fuse stream and direct run calls (#296)
clefourrier Jan 22, 2025
ec45d67
minor fix for console in AgentLogger (#303)
nbroad1881 Jan 22, 2025
a721837
Add Azure OpenAI support (#282)
vladiliescu Jan 22, 2025
117014d
Fix arg passing to AgentExecutionError (#309)
albertvillanova Jan 22, 2025
398c932
refactor(models): restructure model parameter handling (#227)
kingdomad Jan 22, 2025
83ecd57
fix(interpreter security): functions from the builtins module must be…
tandiapa Jan 22, 2025
43904f3
Support any and none tool types (#280)
aymeric-roucher Jan 22, 2025
5d6502a
Minor fix: adding a 60 seconds timeout to the visit webpage tool (#308)
Killian-pit Jan 22, 2025
6196958
Fix: source code inspection in interactive shells (#281)
antoinejeannot Jan 22, 2025
0ead477
Refactor evaluate_augassign and test all operators (#313)
albertvillanova Jan 22, 2025
ce11c7e
Remove package json files (#314)
albertvillanova Jan 22, 2025
ffaa945
Unset temperature in models (#315)
aymeric-roucher Jan 22, 2025
fe2f4e7
Fix tool calls with LiteLLM and tool optional types (#318)
aymeric-roucher Jan 22, 2025
a806f50
Multiple tool example (#293)
touseefahmed96 Jan 23, 2025
7e9f6e5
RAG on your huggingface_doc data using chromadb and groq api (#235)
touseefahmed96 Jan 23, 2025
b351a8c
Improve static tools initialization safety (#324)
kingdomad Jan 23, 2025
bc82d1c
Update README.md fix quick demo code import bug (#327)
Deng-Xian-Sheng Jan 23, 2025
115c8ae
Rename tool_response_message to error_message and append it (#325)
albertvillanova Jan 23, 2025
696b885
Add args to MultiStepAgent docstring (#332)
albertvillanova Jan 23, 2025
b333e08
Update README instructions to run tests (#328)
albertvillanova Jan 23, 2025
0217d3f
Fix MultiStepAgent docstring (#336)
albertvillanova Jan 23, 2025
2a2b764
Fix docstrings of models (#344)
albertvillanova Jan 24, 2025
73621c9
Move torchvision to the torch extra (#297)
nonsleepr Jan 24, 2025
b5b55a5
docstring args for ToolCallingAgent, CodeAgent and ManagedAgent (#335)
touseefahmed96 Jan 24, 2025
0196dc7
fixed tool examples in prompts (#341)
RolandJAAI Jan 24, 2025
de7b0ee
Improve inference choice examples (#311)
aymeric-roucher Jan 24, 2025
408b52a
Add VLM support (#220)
merveenoyan Jan 24, 2025
ce763ff
Bump version to 1.6.0.dev (#348)
albertvillanova Jan 24, 2025
5 changes: 5 additions & 0 deletions .github/workflows/tests.yml
@@ -87,3 +87,8 @@ jobs:
run: |
uv run pytest ./tests/test_utils.py
if: ${{ success() || failure() }}

- name: Function type hints utils tests
run: |
uv run pytest ./tests/test_function_type_hints_utils.py
if: ${{ success() || failure() }}
10 changes: 5 additions & 5 deletions README.md
@@ -37,13 +37,13 @@ limitations under the License.
🧑‍💻 **First-class support for Code Agents**, i.e. agents that write their actions in code (as opposed to "agents being used to write code"). To make it secure, we support executing in sandboxed environments via [E2B](https://e2b.dev/).
- On top of this [`CodeAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.CodeAgent) class, we still support the standard [`ToolCallingAgent`](https://huggingface.co/docs/smolagents/reference/agents#smolagents.ToolCallingAgent) that writes actions as JSON/text blobs.

🤗 **Hub integrations**: you can share and load tools to/from the Hub, and more is to come!
🤗 **Hub integrations**: you can share and load Gradio Spaces as tools to/from the Hub, and more is to come!

🌐 **Support for any LLM**: it supports models hosted on the Hub loaded in their `transformers` version or through our inference API, but also supports models from OpenAI, Anthropic and many others via our [LiteLLM](https://www.litellm.ai/) integration.

Full documentation can be found [here](https://huggingface.co/docs/smolagents/index).

> [!NOTE]
> Check out our [launch blog post](https://huggingface.co/blog/smolagents) to learn more about `smolagents`!

## Quick demo
@@ -118,7 +118,7 @@ And commit the changes.

To run tests locally, run this command:
```bash
pytest .
make test
```

## Citing smolagents
@@ -127,8 +127,8 @@ If you use `smolagents` in your publication, please cite it by using the following

```bibtex
@Misc{smolagents,
title = {`smolagents`: The easiest way to build efficient agentic systems.},
author = {Aymeric Roucher and Thomas Wolf and Leandro von Werra and Erik Kaunismäki},
title = {`smolagents`: a smol library to build great agentic systems.},
author = {Aymeric Roucher and Albert Villanova del Moral and Thomas Wolf and Leandro von Werra and Erik Kaunismäki},
howpublished = {\url{https://github.com/huggingface/smolagents}},
year = {2025}
}
40 changes: 33 additions & 7 deletions docs/source/en/conceptual_guides/react.md
@@ -19,10 +19,33 @@ The ReAct framework ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)

The name is based on the concatenation of two words, "Reason" and "Act." Indeed, agents following this architecture solve their task in as many steps as needed, each step consisting of a Reasoning step followed by an Action step, in which the agent formulates tool calls that bring it closer to solving the task at hand.

The ReAct process involves keeping a memory of past steps.
All agents in `smolagents` are based on the single `MultiStepAgent` class, which is an abstraction of the ReAct framework.

> [!TIP]
> Read [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about multi-step agents.
At a basic level, this class performs actions in a cycle of the following steps, where existing variables and knowledge are incorporated into the agent's logs as shown below:

Initialization: the system prompt is stored in a `SystemPromptStep`, and the user query is logged into a `TaskStep`.

While loop (ReAct loop):

- Use `agent.write_inner_memory_from_logs()` to write the agent logs into a list of LLM-readable [chat messages](https://huggingface.co/docs/transformers/en/chat_templating).
- Send these messages to a `Model` object to get its completion. Parse the completion to get the action (a JSON blob for `ToolCallingAgent`, a code snippet for `CodeAgent`).
- Execute the action and log the result into memory (an `ActionStep`).
- At the end of each step, we run all callback functions defined in `agent.step_callbacks`.

Optionally, when planning is activated, a plan can be periodically revised and stored in a `PlanningStep`. This includes feeding facts about the task at hand to the memory.
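
For illustration, here is a simplified, self-contained sketch of that cycle. This is not the actual `MultiStepAgent` implementation: the `model`, `parse_action`, and `execute_action` callables are illustrative stand-ins, and the real class uses structured step objects rather than tuples.

```python
def react_loop(task, model, parse_action, execute_action, step_callbacks=(), max_steps=6):
    """Illustrative stand-in for MultiStepAgent's ReAct cycle (not the real implementation)."""
    # Initialization: equivalents of SystemPromptStep and TaskStep
    memory = [("system", "You are an agent that solves tasks using tools."),
              ("user", task)]
    for _ in range(max_steps):
        # 1. write_inner_memory_from_logs(): turn the logs into LLM-readable chat messages
        messages = [{"role": role, "content": content} for role, content in memory]
        # 2. Get the model's completion and parse the action
        #    (a JSON blob for ToolCallingAgent, a code snippet for CodeAgent)
        action = parse_action(model(messages))
        # 3. Execute the action and log the result (the ActionStep)
        observation, is_final = execute_action(action)
        memory.append(("assistant", f"{action} -> observation: {observation}"))
        # 4. Run every callback in step_callbacks at the end of the step
        for callback in step_callbacks:
            callback(memory[-1])
        if is_final:
            return observation
    return None
```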

For a `CodeAgent`, it looks like the figure below.

<div class="flex justify-center">
<img
class="block dark:hidden"
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/codeagent_docs.png"
/>
<img
class="hidden dark:block"
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/codeagent_docs.png"
/>
</div>

Here is a video overview of how that works:

@@ -39,9 +62,12 @@ Here is a video overview of how that works:

![Framework of a React Agent](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/open-source-llms-as-agents/ReAct.png)

We implement two versions of ToolCallingAgent:
- [`ToolCallingAgent`] generates tool calls as a JSON in its output.
- [`CodeAgent`] is a new type of ToolCallingAgent that generates its tool calls as blobs of code, which works really well for LLMs that have strong coding performance.
We implement two types of agents:
- [`CodeAgent`] is the preferred type of agent: it generates its tool calls as blobs of code.
- [`ToolCallingAgent`] generates tool calls as JSON in its output, as is commonly done in agentic frameworks. We include this option because it can be useful in narrow cases where a single tool call per step suffices: for instance, in web browsing, you need to wait after each action on the page to observe how the page changes.

> [!TIP]
> We also provide an option to run agents in one-shot: just pass `single_step=True` when launching the agent, like `agent.run(your_task, single_step=True)`.
> Read the [Open-source LLMs as LangChain Agents](https://huggingface.co/blog/open-source-llms-as-agents) blog post to learn more about multi-step agents.
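
As a quick illustration of both agent types and the one-shot option, here is a minimal sketch; the empty tool list and the default model choice are illustrative assumptions:

```python
from smolagents import CodeAgent, HfApiModel, ToolCallingAgent

model = HfApiModel()  # defaults to a free Hub inference model

code_agent = CodeAgent(tools=[], model=model)         # writes its actions as Python code snippets
json_agent = ToolCallingAgent(tools=[], model=model)  # writes its actions as JSON tool calls

code_agent.run("What is the 20th Fibonacci number?")                    # full ReAct loop
code_agent.run("What is the 20th Fibonacci number?", single_step=True)  # one-shot run
```
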
55 changes: 51 additions & 4 deletions docs/source/en/guided_tour.md
@@ -27,24 +27,25 @@ To initialize a minimal agent, you need at least these two arguments:
- [`TransformersModel`] takes a pre-initialized `transformers` pipeline to run inference on your local machine using `transformers`.
- [`HfApiModel`] leverages a `huggingface_hub.InferenceClient` under the hood.
- [`LiteLLMModel`] lets you call 100+ different models through [LiteLLM](https://docs.litellm.ai/)!
- [`AzureOpenAIServerModel`] allows you to use OpenAI models deployed in [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service).

- `tools`, a list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.

Once you have these two arguments, `tools` and `model`, you can create an agent and run it. You can use any LLM you'd like, either through [Hugging Face API](https://huggingface.co/docs/api-inference/en/index), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), or [LiteLLM](https://www.litellm.ai/).
Once you have these two arguments, `tools` and `model`, you can create an agent and run it. You can use any LLM you'd like, either through [Hugging Face API](https://huggingface.co/docs/api-inference/en/index), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), [LiteLLM](https://www.litellm.ai/), or [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service).

<hfoptions id="Pick a LLM">
<hfoption id="Hugging Face API">

The Hugging Face API is free to use without a token, but then it is rate-limited.

To access gated models or raise your rate limits with a PRO account, you need to set the environment variable `HF_TOKEN` or pass the `token` variable upon initialization of `HfApiModel`.
To access gated models or raise your rate limits with a PRO account, you need to set the environment variable `HF_TOKEN` or pass the `token` variable upon initialization of `HfApiModel`. You can get your token from your [settings page](https://huggingface.co/settings/tokens).

```python
from smolagents import CodeAgent, HfApiModel

model_id = "meta-llama/Llama-3.3-70B-Instruct"

model = HfApiModel(model_id=model_id, token="<YOUR_HUGGINGFACEHUB_API_TOKEN>")
model = HfApiModel(model_id=model_id, token="<YOUR_HUGGINGFACEHUB_API_TOKEN>") # You can choose to not pass any model_id to HfApiModel to use a default free model
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
@@ -55,6 +56,7 @@ agent.run(
<hfoption id="Local Transformers Model">

```python
# !pip install smolagents[transformers]
from smolagents import CodeAgent, TransformersModel

model_id = "meta-llama/Llama-3.2-3B-Instruct"
@@ -72,6 +74,7 @@ agent.run(
To use `LiteLLMModel`, you need to set the environment variable `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`, or pass the `api_key` variable upon initialization.

```python
# !pip install smolagents[litellm]
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", api_key="YOUR_ANTHROPIC_API_KEY") # Could use 'gpt-4o'
@@ -85,6 +88,7 @@ agent.run(
<hfoption id="Ollama">

```python
# !pip install smolagents[litellm]
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(
@@ -100,6 +104,49 @@ agent.run(
"Could you give me the 118th number in the Fibonacci sequence?",
)
```
</hfoption>
<hfoption id="Azure OpenAI">

To connect to Azure OpenAI, you can either use `AzureOpenAIServerModel` directly, or use `LiteLLMModel` and configure it accordingly.

To initialize an instance of `AzureOpenAIServerModel`, you need to pass your model deployment name and then either pass the `azure_endpoint`, `api_key`, and `api_version` arguments, or set the environment variables `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.

```python
# !pip install smolagents[openai]
from smolagents import CodeAgent, AzureOpenAIServerModel

model = AzureOpenAIServerModel(model_id="gpt-4o-mini")
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
"Could you give me the 118th number in the Fibonacci sequence?",
)
```

Similarly, you can configure `LiteLLMModel` to connect to Azure OpenAI as follows:

- pass your model deployment name as `model_id`, and make sure to prefix it with `azure/`
- make sure to set the environment variable `AZURE_API_VERSION`
- either pass the `api_base` and `api_key` arguments, or set the environment variables `AZURE_API_KEY` and `AZURE_API_BASE`

```python
import os
from smolagents import CodeAgent, LiteLLMModel

AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="gpt-35-turbo-16k-deployment" # example of deployment name

os.environ["AZURE_API_KEY"] = "" # api_key
os.environ["AZURE_API_BASE"] = "" # "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "" # "2024-10-01-preview"

model = LiteLLMModel(model_id="azure/" + AZURE_OPENAI_CHAT_DEPLOYMENT_NAME)
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
"Could you give me the 118th number in the Fibonacci sequence?",
)
```

</hfoption>
</hfoptions>

2 changes: 1 addition & 1 deletion docs/source/en/index.md
@@ -29,7 +29,7 @@ This library offers:

🧑‍💻 **First-class support for Code Agents**, i.e. agents that write their actions in code (as opposed to "agents being used to write code"), [read more here](tutorials/secure_code_execution).

🤗 **Hub integrations**: you can share and load tools to/from the Hub, and more is to come!
🤗 **Hub integrations**: you can share and load Gradio Spaces as tools to/from the Hub, and more is to come!

<div class="mt-10">
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
37 changes: 33 additions & 4 deletions docs/source/en/reference/agents.md
@@ -35,7 +35,6 @@ We provide two types of agents, based on the main [`Agent`] class.

Both require arguments `model` and list of tools `tools` at initialization.


### Classes of agents

[[autodoc]] MultiStepAgent
Expand All @@ -44,7 +43,6 @@ Both require arguments `model` and list of tools `tools` at initialization.

[[autodoc]] ToolCallingAgent


### ManagedAgent

[[autodoc]] ManagedAgent
Expand All @@ -55,6 +53,9 @@ Both require arguments `model` and list of tools `tools` at initialization.

### GradioUI

> [!TIP]
> You must have `gradio` installed to use the UI. Run `pip install smolagents[gradio]` if you haven't already.
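
For instance, here is a minimal sketch of serving an agent through the UI; the agent definition itself is an illustrative assumption:

```python
from smolagents import CodeAgent, GradioUI, HfApiModel

agent = CodeAgent(tools=[], model=HfApiModel(), add_base_tools=True)
GradioUI(agent).launch()  # serves a local chat interface for the agent
```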

[[autodoc]] GradioUI

## Models
@@ -99,6 +100,9 @@ print(model([{"role": "user", "content": "Ok!"}], stop_sequences=["great"]))
>>> What a
```

> [!TIP]
> You must have `transformers` and `torch` installed on your machine. Run `pip install smolagents[transformers]` if you haven't already.

[[autodoc]] TransformersModel

### HfApiModel
@@ -142,7 +146,7 @@ print(model(messages))

[[autodoc]] LiteLLMModel

### OpenAiServerModel
### OpenAIServerModel

This class lets you call any model served behind an OpenAI-compatible API.
Here's how you can set it up (you can customise the `api_base` URL to point to another server):
@@ -154,4 +158,29 @@ model = OpenAIServerModel(
api_base="https://api.openai.com/v1",
api_key=os.environ["OPENAI_API_KEY"],
)
```

[[autodoc]] OpenAIServerModel

### AzureOpenAIServerModel

`AzureOpenAIServerModel` allows you to connect to any Azure OpenAI deployment.

Below is an example of how to set it up. Note that you can omit the `azure_endpoint`, `api_key`, and `api_version` arguments, provided you've set the corresponding environment variables: `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.

Pay attention to the lack of an `AZURE_` prefix for `OPENAI_API_VERSION`: this is due to the way the underlying [openai](https://github.com/openai/openai-python) package is designed.

```py
import os

from smolagents import AzureOpenAIServerModel

model = AzureOpenAIServerModel(
    model_id=os.environ.get("AZURE_OPENAI_MODEL"),
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    api_version=os.environ.get("OPENAI_API_VERSION"),
)
```

[[autodoc]] AzureOpenAIServerModel
2 changes: 1 addition & 1 deletion docs/source/en/tutorials/building_good_agents.md
@@ -273,7 +273,7 @@ image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)
search_tool = DuckDuckGoSearchTool()

agent = CodeAgent(
tools=[search_tool],
tools=[search_tool, image_generation_tool],
model=HfApiModel("Qwen/Qwen2.5-72B-Instruct"),
planning_interval=3 # This is where you activate planning!
)
12 changes: 10 additions & 2 deletions docs/source/en/tutorials/inspect_runs.md
@@ -34,7 +34,14 @@ We've adopted the [OpenTelemetry](https://opentelemetry.io/) standard for instrumenting

This means that you can just run some instrumentation code, then run your agents normally, and everything gets logged into your platform.

Here's how it goes:
Here's how it then looks on the platform:

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/inspect_run_phoenix.gif"/>
</div>


### Setting up telemetry with Arize AI Phoenix
First install the required packages. Here we install [Phoenix by Arize AI](https://github.com/Arize-ai/phoenix) because it's a good solution to collect and inspect the logs, but there are other OpenTelemetry-compatible platforms that you could use for this collection and inspection.

```shell
@@ -97,7 +104,8 @@ manager_agent.run(
"If the US keeps its 2024 growth rate, how many years will it take for the GDP to double?"
)
```
And you can then navigate to `http://0.0.0.0:6006/projects/` to inspect your run!
Voilà!
You can then navigate to `http://0.0.0.0:6006/projects/` to inspect your run!
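
For reference, the instrumentation code elided from this diff typically looks like the sketch below. The package names (`openinference-instrumentation-smolagents` plus the OpenTelemetry SDK and OTLP exporter) and the local endpoint are assumptions based on the Phoenix setup described above:

```python
# Sketch: register an OTLP exporter pointing at the local Phoenix collector,
# then instrument smolagents so that every agent run gets traced.
from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

endpoint = "http://0.0.0.0:6006/v1/traces"  # assumed default local Phoenix endpoint
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)
```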

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolagents/inspect_run_phoenix.png">

2 changes: 1 addition & 1 deletion docs/source/en/tutorials/secure_code_execution.md
@@ -60,7 +60,7 @@ For maximum security, you can use our integration with E2B to run code in a sandboxed environment.

For this, you will need to set up your E2B account and set your `E2B_API_KEY` in your environment variables. Head to [E2B's quickstart documentation](https://e2b.dev/docs/quickstart) for more information.

Then you can install it with `pip install e2b-code-interpreter python-dotenv`.
Then you can install it with `pip install "smolagents[e2b]"`.

Now you're set!
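
A minimal sketch of running an agent in the sandbox, assuming `E2B_API_KEY` is set and that your smolagents version exposes the `use_e2b_executor` flag:

```python
from smolagents import CodeAgent, HfApiModel

# Assumes E2B_API_KEY is set in the environment; generated code then runs
# in an E2B sandbox instead of the local Python interpreter.
agent = CodeAgent(tools=[], model=HfApiModel(), use_e2b_executor=True)
agent.run("What is the least common multiple of 12 and 18?")
```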

14 changes: 14 additions & 0 deletions docs/source/hi/_config.py
@@ -0,0 +1,14 @@
# docstyle-ignore
INSTALL_CONTENT = """
# Installation
! pip install smolagents
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/smolagents.git
"""

notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
"{processor_class}": "FakeProcessorClass",
"{model_class}": "FakeModelClass",
"{object_class}": "FakeObjectClass",
}
36 changes: 36 additions & 0 deletions docs/source/hi/_toctree.yml
@@ -0,0 +1,36 @@
- title: Get started
sections:
- local: index
title: 🤗 Agents
- local: guided_tour
title: गाइडेड टूर
- title: Tutorials
sections:
- local: tutorials/building_good_agents
title: ✨ अच्छे Agents का निर्माण
- local: tutorials/inspect_runs
title: 📊 OpenTelemetry के साथ runs का निरीक्षण
- local: tutorials/tools
title: 🛠️ Tools - in-depth guide
- local: tutorials/secure_code_execution
title: 🛡️ E2B के साथ अपने कोड एक्जीक्यूशन को सुरक्षित करें
- title: Conceptual guides
sections:
- local: conceptual_guides/intro_agents
title: 🤖 Agentic सिस्टम का परिचय
- local: conceptual_guides/react
title: 🤔 मल्टी-स्टेप एजेंट कैसे काम करते हैं?
- title: Examples
sections:
- local: examples/text_to_sql
title: सेल्फ करेक्टिंग Text-to-SQL
- local: examples/rag
title: एजेंटिक RAG के साथ अपनी ज्ञान आधारित को मास्टर करें
- local: examples/multiagents
title: एक बहु-एजेंट प्रणाली का आयोजन करें
- title: Reference
sections:
- local: reference/agents
title: एजेंट से संबंधित ऑब्जेक्ट्स
- local: reference/tools
title: टूल्स से संबंधित ऑब्जेक्ट्स