It decides **which Actions to take based on the situation**.
This part represents **everything the Agent is equipped to do**.
The **scope of possible actions** depends on what the agent **has been equipped with**. For example, because humans lack wings, they can't perform the "fly" **Action**, but they can execute **Actions** like "walk", "run", "jump", "grab", and so on.
### The spectrum of "Agency"
Table from [smolagents conceptual guide](https://huggingface.co/docs/smolagents/conceptual_guides/intro_agents).
The most common AI model found in Agents is an LLM (Large Language Model), which takes **Text** as an input and outputs **Text** as well.
Well-known examples include **GPT-4** from **OpenAI**, **Llama** from **Meta**, and **Gemini** from **Google**. These models have been trained on a vast amount of text and are able to generalize well. We will learn more about LLMs in the [next section](what-are-llms).
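Conceptually, an LLM can be pictured as a single text-in, text-out function. The sketch below stubs the model call with a canned reply so it runs without any API access; the `llm` function name and its behavior are illustrative, not a real library API.

```python
def llm(prompt: str) -> str:
    """Conceptual stand-in for an LLM: takes text in, returns text out.

    A real implementation would call a model such as GPT-4 or Llama
    through an inference API; here we return a canned reply so the
    sketch is self-contained and runnable.
    """
    canned_replies = {
        "What is the capital of France?": "Paris",
    }
    return canned_replies.get(prompt, "I'm not sure.")


print(llm("What is the capital of France?"))  # → Paris
```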
> [!TIP]
> It's also possible to use models that accept other inputs as the Agent's core model. For example, a Vision Language Model (VLM), which is like an LLM but also understands images as input. We'll focus on LLMs for now and will discuss other options later.
The LLM, as we'll see, will generate code to run the tool when it needs to, and the Agent will execute it:

```python
send_message_to("Manager", "Can we postpone today's meeting?")
```
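A Tool like `send_message_to` has to be defined and described somewhere before the LLM can call it. The sketch below shows one plausible shape for that; the `tool` decorator and the stubbed messaging behavior are hypothetical, stand-ins for what a real agent framework (such as smolagents) provides.

```python
def tool(fn):
    """Hypothetical decorator marking a function as a Tool the agent
    can expose to its LLM (real frameworks supply their own version)."""
    fn.is_tool = True
    return fn


@tool
def send_message_to(recipient: str, message: str) -> str:
    """Send a message to the given recipient.

    Stubbed for illustration: a real implementation would call an
    email or chat API instead of returning a confirmation string.
    """
    return f"Sent to {recipient}: {message}"


print(send_message_to("Manager", "Can we postpone today's meeting?"))
```

The docstring matters as much as the code: it is typically what gets injected into the LLM's prompt so the model knows when and how to call the tool.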
The **design of the Tools is very important and has a great impact on the quality of your Agent**. Some tasks will require very specific Tools to be crafted, while others may be solved with general-purpose tools like "web_search".
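The general-purpose vs. task-specific distinction can be sketched as two tool stubs; both function bodies here are hypothetical placeholders, since real versions would call a search API and an internal ticketing system respectively.

```python
def web_search(query: str) -> str:
    """General-purpose Tool: useful across many different tasks.

    Stubbed for illustration; a real version would call a search API.
    """
    return f"Top results for '{query}' ..."


def get_ticket_status(ticket_id: str) -> str:
    """Task-specific Tool crafted for one particular workflow.

    Stubbed for illustration; a real version would query a ticketing
    system that only makes sense for this agent's task.
    """
    return f"Ticket {ticket_id}: open"
```

A general-purpose tool keeps the agent flexible, while a task-specific tool usually gives more reliable results on the one task it was designed for.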
> Note that **Actions are not the same as Tools**. An Action, for instance, can involve the use of multiple Tools to complete.
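To illustrate that note, a single "reschedule the meeting" Action might chain two Tools, one to read the calendar and one to send a message. All the function names and bodies below are hypothetical stubs, sketched only to show the Action-vs-Tool distinction.

```python
def check_calendar(day: str) -> list:
    """Tool stub: a real version would query a calendar API."""
    return ["10:00 meeting with Manager"]


def send_message_to(recipient: str, message: str) -> str:
    """Tool stub: a real version would send an actual message."""
    return f"Sent to {recipient}: {message}"


def reschedule_meeting_action(day: str) -> str:
    """One Action that composes two Tools: first inspect the calendar,
    then message the other participant if there is a meeting to move."""
    events = check_calendar(day)
    if events:
        return send_message_to("Manager", "Can we postpone today's meeting?")
    return "No meeting to move."


print(reschedule_meeting_action("today"))
```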