Commit c8d6ab1

Merge pull request #5 from huggingface/main
Update from HF.
2 parents 899324e + e5d879f

File tree

11 files changed: +19 additions, −21 deletions


.github/workflows/build_documentation.yml

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ on:
       - 'docs/source/**'
       - 'assets/**'
       - '.github/workflows/doc-build.yml'
+      - 'pyproject.toml'

 jobs:
   build:

docs/source/en/examples/multiagents.md

Lines changed: 1 addition & 1 deletion
@@ -169,7 +169,7 @@ manager_agent = CodeAgent(
 That's all! Now let's run our system! We select a question that requires both some calculation and research:

 ```py
-answer = manager_agent.run("If LLM trainings continue to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What does that correspond to, compared to some contries? Please provide a source for any number used.")
+answer = manager_agent.run("If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.")
 ```

 We get this report as the answer:

docs/source/en/examples/rag.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ This agent will: ✅ Formulate the query itself and ✅ Critique to re-retrieve

 So it should naively recover some advanced RAG techniques!
 - Instead of directly using the user query as the reference in semantic search, the agent formulates itself a reference sentence that can be closer to the targeted documents, as in [HyDE](https://huggingface.co/papers/2212.10496).
-The agent can the generated snippets and re-retrieve if needed, as in [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/).
+The agent can use the generated snippets and re-retrieve if needed, as in [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/).

 Let's build this system. 🛠️

docs/source/en/tutorials/building_good_agents.md

Lines changed: 2 additions & 2 deletions
@@ -233,9 +233,9 @@ Here are the rules you should always follow to solve your task:
 Now Begin! If you solve the task correctly, you will receive a reward of $1,000,000.
 ```

-As yo can see, there are placeholders like `"{{tool_descriptions}}"`: these will be used upon agent initialization to insert certain automatically generated descriptions of tools or managed agents.
+As you can see, there are placeholders like `"{{tool_descriptions}}"`: these will be used upon agent initialization to insert certain automatically generated descriptions of tools or managed agents.

-So while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system promptmust contain the following placeholders:
+So while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system prompt must contain the following placeholders:
 - `"{{tool_descriptions}}"` to insert tool descriptions.
 - `"{{managed_agents_description}}"` to insert the description for managed agents if there are any.
 - For `CodeAgent` only: `"{{authorized_imports}}"` to insert the list of authorized imports.
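The placeholder names above come straight from the docs, but the substitution mechanism is easy to picture. A minimal sketch of how such templating can work (the `fill_prompt` helper and its sample values are invented here; smolagents' actual implementation may differ):

```python
# Illustrative sketch of filling "{{...}}" placeholders at agent init time.
# The placeholder names are real; this helper is hypothetical.

def fill_prompt(template: str, values: dict[str, str]) -> str:
    for key, val in values.items():
        template = template.replace("{{" + key + "}}", val)
    return template

template = (
    "You can use these tools:\n{{tool_descriptions}}\n"
    "Authorized imports: {{authorized_imports}}"
)
prompt = fill_prompt(template, {
    "tool_descriptions": "- web_search: search the web for a query",
    "authorized_imports": "math, json",
})
assert "web_search" in prompt
assert "{{" not in prompt  # every placeholder was substituted
```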

docs/source/en/tutorials/secure_code_execution.md

Lines changed: 2 additions & 2 deletions
@@ -47,7 +47,7 @@ This interpreter is designed for security by:
 - Capping the number of operations to prevent infinite loops and resource bloating.
 - Will not perform any operation that's not pre-defined.

-Wev'e used this on many use cases, without ever observing any damage to the environment.
+We've used this on many use cases, without ever observing any damage to the environment.

 However this solution is not watertight: one could imagine occasions where LLMs fine-tuned for malignant actions could still hurt your environment. For instance if you've allowed an innocuous package like `Pillow` to process images, the LLM could generate thousands of saves of images to bloat your hard drive.
 It's certainly not likely if you've chosen the LLM engine yourself, but it could happen.

@@ -79,4 +79,4 @@ agent = CodeAgent(
 agent.run("What was Abraham Lincoln's preferred pet?")
 ```

-E2B code execution is not compatible with multi-agents at the moment - because having an agent call in a code blob that should be executed remotely is a mess. But we're working on adding it!
+E2B code execution is not compatible with multi-agents at the moment - because having an agent call in a code blob that should be executed remotely is a mess. But we're working on adding it!

docs/source/en/tutorials/tools.md

Lines changed: 4 additions & 4 deletions
@@ -89,8 +89,8 @@ model_downloads_tool.push_to_hub("{your_username}/hf-model-downloads", token="<Y
 ```

 For the push to Hub to work, your tool will need to respect some rules:
-- All method are self-contained, e.g. use variables that come either from their args.
-- As per the above point, **all imports should be defined directky within the tool's functions**, else you will get an error when trying to call [`~Tool.save`] or [`~Tool.push_to_hub`] with your custom tool.
+- All methods are self-contained, e.g. use variables that come either from their args.
+- As per the above point, **all imports should be defined directly within the tool's functions**, else you will get an error when trying to call [`~Tool.save`] or [`~Tool.push_to_hub`] with your custom tool.
 - If you subclass the `__init__` method, you can give it no other argument than `self`. This is because arguments set during a specific tool instance's initialization are hard to track, which prevents from sharing them properly to the hub. And anyway, the idea of making a specific class is that you can already set class attributes for anything you need to hard-code (just set `your_variable=(...)` directly under the `class YourTool(Tool):` line). And of course you can still create a class attribute anywhere in your code by assigning stuff to `self.your_variable`.

@@ -210,7 +210,7 @@ Just make sure the new tool follows the same API as the replaced tool or adapt t
 ### Use a collection of tools

 You can leverage tool collections by using the ToolCollection object, with the slug of the collection you want to use.
-Then pass them as a list to initialize you agent, and start using them!
+Then pass them as a list to initialize your agent, and start using them!

 ```py
 from smolagents import ToolCollection, CodeAgent

@@ -224,4 +224,4 @@ agent = CodeAgent(tools=[*image_tool_collection.tools], model=model, add_base_to
 agent.run("Please draw me a picture of rivers and lakes.")
 ```

-To speed up the start, tools are loaded only if called by the agent.
+To speed up the start, tools are loaded only if called by the agent.
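The last line of this diff notes that "tools are loaded only if called by the agent". A minimal, self-contained sketch of that lazy-loading pattern (the `LazyTool` wrapper and the factory below are invented for illustration and are not smolagents' actual implementation):

```python
# Sketch of lazy tool loading: the expensive setup (e.g. a model download)
# is deferred from registration time to the tool's first call.

class LazyTool:
    """Wraps a tool factory and builds the real tool on first use."""

    def __init__(self, factory):
        self._factory = factory
        self._tool = None

    def __call__(self, *args, **kwargs):
        if self._tool is None:          # first call: pay the load cost now
            self._tool = self._factory()
        return self._tool(*args, **kwargs)


loads = []

def make_echo_tool():
    loads.append("loaded")              # stand-in for slow initialization
    return lambda text: f"echo: {text}"

tool = LazyTool(make_echo_tool)
assert loads == []                      # nothing loaded at registration time
assert tool("hi") == "echo: hi"         # first call triggers the load
assert loads == ["loaded"]
```

Later calls reuse the cached tool, so the factory runs at most once.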

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

 [project]
 name = "smolagents"
-version = "1.1.0.dev0"
+version = "1.2.0.dev0"
 description = "🤗 smolagents: a barebones library for agents. Agents write python code to call tools or orchestrate other agents."
 authors = [
 { name="Aymeric Roucher", email="[email protected]" }, { name="Thomas Wolf"},

src/smolagents/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-__version__ = "1.1.0.dev0"
+__version__ = "1.2.0.dev0"

 from typing import TYPE_CHECKING
src/smolagents/default_tools.py

Lines changed: 2 additions & 3 deletions
@@ -146,12 +146,11 @@ def forward(self, question):

 class DuckDuckGoSearchTool(Tool):
     name = "web_search"
-    description = """Performs a duckduckgo web search based on your query (think a Google search) then returns the top search results as a list of dict elements.
-    Each result has keys 'title', 'href' and 'body'."""
+    description = """Performs a duckduckgo web search based on your query (think a Google search) then returns the top search results."""
     inputs = {
         "query": {"type": "string", "description": "The search query to perform."}
     }
-    output_type = "any"
+    output_type = "string"

     def __init__(self, *args, max_results=10, **kwargs):
         super().__init__(*args, **kwargs)
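Changing `output_type` from `"any"` to `"string"` implies the tool now hands the agent a single string rather than a list of dicts. A hypothetical sketch of what that flattening could look like (`format_results` and the sample data are invented here; the 'title'/'href'/'body' keys come from the removed description above):

```python
# Hypothetical flattening step: turn search-result dicts into one string,
# matching the new output_type = "string". Not the library's actual code.

def format_results(results: list[dict]) -> str:
    return "\n\n".join(
        f"[{r['title']}]({r['href']})\n{r['body']}" for r in results
    )

sample = [
    {"title": "Page A", "href": "https://a.example", "body": "First hit."},
    {"title": "Page B", "href": "https://b.example", "body": "Second hit."},
]
out = format_results(sample)
assert isinstance(out, str)
assert "Page A" in out and "Second hit." in out
```

A single string is also simpler for a tool-calling agent to consume verbatim as an observation.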

src/smolagents/gradio_ui.py

Lines changed: 3 additions & 5 deletions
@@ -23,7 +23,7 @@
 def pull_messages_from_step(step_log: AgentStep, test_mode: bool = True):
     """Extract ChatMessage objects from agent steps"""
     if isinstance(step_log, ActionStep):
-        yield gr.ChatMessage(role="assistant", content=step_log.llm_output)
+        yield gr.ChatMessage(role="assistant", content=step_log.llm_output or "")
         if step_log.tool_call is not None:
             used_code = step_log.tool_call.name == "code interpreter"
             content = step_log.tool_call.arguments

@@ -35,9 +35,7 @@ def pull_messages_from_step(step_log: AgentStep, test_mode: bool = True):
                 content=str(content),
             )
         if step_log.observations is not None:
-            yield gr.ChatMessage(
-                role="assistant", content=f"```\n{step_log.observations}\n```"
-            )
+            yield gr.ChatMessage(role="assistant", content=step_log.observations)
         if step_log.error is not None:
             yield gr.ChatMessage(
                 role="assistant",

@@ -65,7 +63,7 @@ def stream_to_gradio(
     if isinstance(final_answer, AgentText):
         yield gr.ChatMessage(
             role="assistant",
-            content=f"**Final answer:**\n```\n{final_answer.to_string()}\n```",
+            content=f"**Final answer:**\n{final_answer.to_string()}\n",
         )
     elif isinstance(final_answer, AgentImage):
         yield gr.ChatMessage(
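The first hunk in this file guards a possibly missing `llm_output` with `or ""`. In plain Python the pattern behaves like this (a standalone sketch, not the repository's code):

```python
# The `x or ""` idiom: substitute an empty string when the value is None,
# so the chat message content is never None. Note it also maps any other
# falsy value (e.g. "" itself) to "", which is fine for display purposes.

def safe_content(llm_output):
    return llm_output or ""

assert safe_content(None) == ""
assert safe_content("") == ""
assert safe_content("Thought: search the web") == "Thought: search the web"
```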
