diff --git a/.github/workflows/build_documentation.yml b/.github/workflows/build_documentation.yml
index 78f70b869..ab96066c3 100644
--- a/.github/workflows/build_documentation.yml
+++ b/.github/workflows/build_documentation.yml
@@ -11,6 +11,7 @@ on:
       - 'docs/source/**'
       - 'assets/**'
       - '.github/workflows/doc-build.yml'
+      - 'pyproject.toml'
 
 jobs:
   build:
diff --git a/docs/source/en/examples/multiagents.md b/docs/source/en/examples/multiagents.md
index 63cc04db4..4ea4e51b2 100644
--- a/docs/source/en/examples/multiagents.md
+++ b/docs/source/en/examples/multiagents.md
@@ -169,7 +169,7 @@ manager_agent = CodeAgent(
 That's all! Now let's run our system! We select a question that requires both some calculation and research:
 
 ```py
-answer = manager_agent.run("If LLM trainings continue to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What does that correspond to, compared to some contries? Please provide a source for any number used.")
+answer = manager_agent.run("If LLM training continues to scale up at the current rhythm until 2030, what would be the electric power in GW required to power the biggest training runs by 2030? What would that correspond to, compared to some countries? Please provide a source for any numbers used.")
 ```
 
 We get this report as the answer:
diff --git a/docs/source/en/examples/rag.md b/docs/source/en/examples/rag.md
index ca3550260..acbdf14f6 100644
--- a/docs/source/en/examples/rag.md
+++ b/docs/source/en/examples/rag.md
@@ -29,7 +29,7 @@ This agent will: ✅ Formulate the query itself and ✅ Critique to re-retrieve
 
 So it should naively recover some advanced RAG techniques!
 - Instead of directly using the user query as the reference in semantic search, the agent formulates itself a reference sentence that can be closer to the targeted documents, as in [HyDE](https://huggingface.co/papers/2212.10496).
-The agent can the generated snippets and re-retrieve if needed, as in [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/).
+The agent can use the generated snippets and re-retrieve if needed, as in [Self-Query](https://docs.llamaindex.ai/en/stable/examples/evaluation/RetryQuery/).
 
 Let's build this system. 🛠️
diff --git a/docs/source/en/tutorials/building_good_agents.md b/docs/source/en/tutorials/building_good_agents.md
index 57e5ea4f7..de8cd3ad8 100644
--- a/docs/source/en/tutorials/building_good_agents.md
+++ b/docs/source/en/tutorials/building_good_agents.md
@@ -233,9 +233,9 @@ Here are the rules you should always follow to solve your task:
 
 Now Begin! If you solve the task correctly, you will receive a reward of $1,000,000.
 ```
-As yo can see, there are placeholders like `"{{tool_descriptions}}"`: these will be used upon agent initialization to insert certain automatically generated descriptions of tools or managed agents.
+As you can see, there are placeholders like `"{{tool_descriptions}}"`: these will be used upon agent initialization to insert certain automatically generated descriptions of tools or managed agents.
 
-So while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system promptmust contain the following placeholders:
+So while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system prompt must contain the following placeholders:
 - `"{{tool_descriptions}}"` to insert tool descriptions.
 - `"{{managed_agents_description}}"` to insert the description for managed agents if there are any.
 - For `CodeAgent` only: `"{{authorized_imports}}"` to insert the list of authorized imports.
diff --git a/docs/source/en/tutorials/secure_code_execution.md b/docs/source/en/tutorials/secure_code_execution.md
index 2189c5bbc..d8a6109ae 100644
--- a/docs/source/en/tutorials/secure_code_execution.md
+++ b/docs/source/en/tutorials/secure_code_execution.md
@@ -47,7 +47,7 @@ This interpreter is designed for security by:
 - Capping the number of operations to prevent infinite loops and resource bloating.
 - Will not perform any operation that's not pre-defined.
 
-Wev'e used this on many use cases, without ever observing any damage to the environment.
+We've used this on many use cases, without ever observing any damage to the environment.
 
 However this solution is not watertight: one could imagine occasions where LLMs fine-tuned for malignant actions could still hurt your environment. For instance if you've allowed an innocuous package like `Pillow` to process images, the LLM could generate thousands of saves of images to bloat your hard drive. It's certainly not likely if you've chosen the LLM engine yourself, but it could happen.
@@ -79,4 +79,4 @@ agent = CodeAgent(
 
 agent.run("What was Abraham Lincoln's preferred pet?")
 ```
-E2B code execution is not compatible with multi-agents at the moment - because having an agent call in a code blob that should be executed remotely is a mess. But we're working on adding it!
\ No newline at end of file
+E2B code execution is not compatible with multi-agents at the moment - because having an agent call in a code blob that should be executed remotely is a mess. But we're working on adding it!
diff --git a/docs/source/en/tutorials/tools.md b/docs/source/en/tutorials/tools.md
index be69b83cc..c86da5736 100644
--- a/docs/source/en/tutorials/tools.md
+++ b/docs/source/en/tutorials/tools.md
@@ -89,8 +89,8 @@ model_downloads_tool.push_to_hub("{your_username}/hf-model-downloads", token="