
Commit 7a4359f

Author: Rémi Sahl
Commit message: small function calling guide corrections
1 parent dbc0432 commit 7a4359f

File tree

1 file changed (+10, -10 lines)
  • pages/public_cloud/ai_machine_learning/endpoints_guide_06_function_calling


pages/public_cloud/ai_machine_learning/endpoints_guide_06_function_calling/guide.en-gb.md

Lines changed: 10 additions & 10 deletions
@@ -13,10 +13,10 @@ updated: 2025-08-01
 
 [AI Endpoints](https://endpoints.ai.cloud.ovh.net/) is a serverless platform provided by OVHcloud that offers easy access to a selection of world-renowned, pre-trained AI models. The platform is designed to be simple, secure, and intuitive, making it an ideal solution for developers who want to enhance their applications with AI capabilities without extensive AI expertise or concerns about data privacy.
 
-**Function Calling**, also named tool calling, is a feature that enables a large language model (LLM) to trigger user-defined functions (also named tools). These tools are defined by the developer and implement specific behaviors such as calling an API, fetching data or calculating values, which extends the capabilities of the LLM.
+**Function Calling**, also known as tool calling, is a feature that enables a large language model (LLM) to trigger user-defined functions (also named tools). These tools are defined by the developer and implement specific behaviors such as calling an API, fetching data or calculating values, which extends the capabilities of the LLM.
 
 The LLM will identify which tool(s) to call and the arguments to use.
-This feature can be used to develop assistants or agents for instance.
+This feature can be used to develop assistants or agents, for instance.
 
 ## Objective
 
@@ -197,21 +197,21 @@ Output:
 }
 ```
 
-We see that the model understood correctly that it needed to call the `log_work` tool, by looking at the `assistant` message generated.
+We see that the model correctly identified that it needed to call the `log_work` tool, by looking at the `assistant` message generated.
 
 The `tool_calls` list contains the tool calls the model generated in response to our user message.
-The `name` and `arguments` fields tells us which tool to call and which parameters to pass to the function.
-The `id` is an unique identifier for this tool call, that we will need later on.
+The `name` and `arguments` fields specify which tool to call and which parameters to pass to the function.
+The `id` is a unique identifier for this tool call, that we will need later on.
 You can have multiple tool calls in this list.
 
-Under the hood, the model has recognized that the user intent was related to the set of tools given, and generated a sequence of specific tokens that were post-processed to create a tool call object.
+Under the hood, the model has recognized that the user's intent was related to the set of tools provided, and generated a sequence of specific tokens that were post-processed to create a tool call object.
 
-We add this message to the conversation so that the model can have knowledge about this tool call in the next rounds of our multi-turn conversation.
+We add this message to the conversation so that the model is aware of this tool call in subsequent rounds of our multi-turn conversation.
 
 ### Process tools calls
 
 Now that we see that the model is able to generate tool calls, we need to code the Python implementation of the tools, so that we can process the tool calls the LLM will generate and actually start to log time!
-Each task is stored in a dict, with the name as key.
+Each task is stored in a dict, with the name as the key.
 Categories are a fixed list.
 
 We define the two functions, `log_work` and `time_report`, in the Python code below:
@@ -289,7 +289,7 @@ FUNCTION_MAP = {
     "time_report": time_report
 }
 
-# if there are tool calls in the assistant generated response
+# if there are tool calls in the assistant-generated response
 if assistant_response.tool_calls:
     print(f"<\t{len(assistant_response.tool_calls)} tool(s) to call")
     # we can have several tool calls
@@ -319,7 +319,7 @@ We see that we successfully created a task called "team meeting", in the "Meetin
 
 Now that we have executed our tool calls, we have to send the result back to the model, so that it can generate a new response that takes this new information into account, to tell the user the task has been created successfully or to give the time report for instance.
 
-All we have to do, is to add the tool results as new `tool` messages into the conversation, so we'll update our code:
+All we have to do, is add the tool results as new `tool` messages into the conversation, so we'll update our code:
 ```python
 if assistant_response.tool_calls:
     print(f"<\t{len(assistant_response.tool_calls)} tool(s) to call")
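The passages touched by this commit describe the full tool-calling round trip: read the `name` and `arguments` fields from each entry in `tool_calls`, dispatch to the matching Python function via `FUNCTION_MAP`, and append the result to the conversation as a `tool` message carrying the call's `id`. A minimal standalone sketch of that loop is below. The tool implementations, the dict-shaped tool-call structure, and the `process_tool_calls` helper are illustrative assumptions, not the guide's actual code (a real client library returns response objects rather than plain dicts):

```python
import json

# Hypothetical in-memory task store: a dict keyed by task name,
# mirroring the guide's description (field names are assumptions).
TASKS = {}

def log_work(name, category, duration):
    # Record (or update) a task entry under its name.
    TASKS[name] = {"category": category, "duration": duration}
    return f"Task '{name}' logged under '{category}' for {duration} minutes."

def time_report():
    # Summarize total logged minutes per category as a JSON string.
    report = {}
    for task in TASKS.values():
        report[task["category"]] = report.get(task["category"], 0) + task["duration"]
    return json.dumps(report)

# Map tool names (as the LLM emits them) to their Python implementations.
FUNCTION_MAP = {"log_work": log_work, "time_report": time_report}

def process_tool_calls(tool_calls, messages):
    """Execute each tool call and append a `tool` message with its result."""
    for tool_call in tool_calls:
        func = FUNCTION_MAP[tool_call["function"]["name"]]
        # The arguments arrive as a JSON string and must be parsed first.
        args = json.loads(tool_call["function"]["arguments"])
        result = func(**args)
        # `tool_call_id` links this result back to the call the model made.
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call["id"],
            "content": str(result),
        })
    return messages

# Simulated tool call, shaped like an OpenAI-compatible response entry.
messages = process_tool_calls(
    [{"id": "call_0", "function": {
        "name": "log_work",
        "arguments": '{"name": "team meeting", "category": "Meetings", "duration": 30}'}}],
    [],
)
print(messages[0]["role"], "->", messages[0]["tool_call_id"])  # prints: tool -> call_0
```

After this loop runs, `messages` would be appended to the conversation and sent back to the model so it can phrase a final answer, exactly as the edited paragraphs describe.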

0 commit comments
