pages/public_cloud/ai_machine_learning/endpoints_guide_06_function_calling/guide.en-gb.md (10 additions, 10 deletions)
@@ -13,10 +13,10 @@ updated: 2025-08-01
 [AI Endpoints](https://endpoints.ai.cloud.ovh.net/) is a serverless platform provided by OVHcloud that offers easy access to a selection of world-renowned, pre-trained AI models. The platform is designed to be simple, secure, and intuitive, making it an ideal solution for developers who want to enhance their applications with AI capabilities without extensive AI expertise or concerns about data privacy.
 
-**Function Calling**, also named tool calling, is a feature that enables a large language model (LLM) to trigger user-defined functions (also named tools). These tools are defined by the developer and implement specific behaviors such as calling an API, fetching data or calculating values, which extends the capabilities of the LLM.
+**Function Calling**, also known as tool calling, is a feature that enables a large language model (LLM) to trigger user-defined functions (also named tools). These tools are defined by the developer and implement specific behaviors such as calling an API, fetching data or calculating values, which extends the capabilities of the LLM.
 
 The LLM will identify which tool(s) to call and the arguments to use.
-This feature can be used to develop assistants or agents for instance.
+This feature can be used to develop assistants or agents, for instance.
 
 ## Objective
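The hunk above describes tools the developer declares to the model. As a hedged sketch of what such a declaration can look like for the guide's `log_work` tool (field names follow the common chat-completions JSON-schema convention; the descriptions and parameter details are assumptions, not the guide's exact schema):

```python
# Hedged sketch of a tool declaration for a function like the guide's
# `log_work`. The outer "type"/"function" structure and JSON-schema
# "parameters" block follow common chat-completions conventions; the
# parameter names and descriptions here are illustrative assumptions.
tools = [
    {
        "type": "function",
        "function": {
            "name": "log_work",
            "description": "Log time spent on a task.",
            "parameters": {
                "type": "object",
                "properties": {
                    "task": {"type": "string", "description": "Name of the task"},
                    "category": {"type": "string", "description": "Task category"},
                    "hours": {"type": "number", "description": "Hours spent"},
                },
                "required": ["task", "category", "hours"],
            },
        },
    }
]
```

The list would typically be passed alongside the conversation messages so the model knows which functions it may request.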
@@ -197,21 +197,21 @@ Output:
 }
 ```
 
-We see that the model understood correctly that it needed to call the `log_work` tool, by looking at the `assistant` message generated.
+We see that the model correctly identified that it needed to call the `log_work` tool, by looking at the `assistant` message generated.
 
 The `tool_calls` list contains the tool calls the model generated in response to our user message.
-The `name` and `arguments` fields tells us which tool to call and which parameters to pass to the function.
-The `id` is an unique identifier for this tool call, that we will need later on.
+The `name` and `arguments` fields specify which tool to call and which parameters to pass to the function.
+The `id` is a unique identifier for this tool call, that we will need later on.
 You can have multiple tool calls in this list.
 
-Under the hood, the model has recognized that the user intent was related to the set of tools given, and generated a sequence of specific tokens that were post-processed to create a tool call object.
+Under the hood, the model has recognized that the user's intent was related to the set of tools provided, and generated a sequence of specific tokens that were post-processed to create a tool call object.
 
-We add this message to the conversation so that the model can have knowledge about this tool call in the next rounds of our multi-turn conversation.
+We add this message to the conversation so that the model is aware of this tool call in subsequent rounds of our multi-turn conversation.
 
 ### Process tools calls
 
 Now that we see that the model is able to generate tool calls, we need to code the Python implementation of the tools, so that we can process the tool calls the LLM will generate and actually start to log time!
-Each task is stored in a dict, with the name as key.
+Each task is stored in a dict, with the name as the key.
 Categories are a fixed list.
 
 We define the two functions, `log_work` and `time_report`, in the Python code below:
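The `name`/`arguments`/`id` fields described in the hunk above can be read out of the assistant message like this (a hedged sketch: the dict mimics the shape of a chat-completions assistant message, and the `call_0` id and argument values are invented for illustration):

```python
import json

# Hedged sketch: this dict mimics the shape of an assistant message that
# contains a tool call. The id "call_0" and the argument values are
# invented; in practice this object comes back from the API.
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_0",
            "type": "function",
            "function": {
                "name": "log_work",
                "arguments": '{"task": "team meeting", "category": "Meetings", "hours": 1.0}',
            },
        }
    ],
}

for tool_call in assistant_message["tool_calls"]:
    name = tool_call["function"]["name"]
    # the arguments arrive as a JSON string and must be parsed
    arguments = json.loads(tool_call["function"]["arguments"])
    call_id = tool_call["id"]  # kept for matching the result back later
    print(call_id, name, arguments)
```

The loop handles the "multiple tool calls" case the guide mentions, since `tool_calls` is a list.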
@@ -289,7 +289,7 @@ FUNCTION_MAP = {
     "time_report": time_report
 }
 
-# if there are tool calls in the assistantgenerated response
+# if there are tool calls in the assistant-generated response
 if assistant_response.tool_calls:
     print(f"<\t{len(assistant_response.tool_calls)} tool(s) to call")
     # we can have several tool calls
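The `FUNCTION_MAP` pattern shown in the hunk above dispatches each tool call by name. A minimal self-contained sketch of that pattern (the `log_work`/`time_report` bodies here are stand-in stubs, not the guide's actual implementations):

```python
import json

TASKS = {}  # each task stored in a dict, keyed by name (as in the guide)

# Stand-in implementations for illustration; the guide defines its own.
def log_work(task, category, hours):
    TASKS[task] = {"category": category, "hours": hours}
    return f"Logged {hours}h on '{task}'"

def time_report():
    return TASKS

# map tool names (as declared to the model) to Python functions
FUNCTION_MAP = {"log_work": log_work, "time_report": time_report}

def process_tool_call(tool_call):
    """Look up the requested function and call it with the parsed arguments."""
    func = FUNCTION_MAP[tool_call["function"]["name"]]
    arguments = json.loads(tool_call["function"]["arguments"])
    return func(**arguments)
```

Because `arguments` is parsed into a dict, `func(**arguments)` passes each field as a keyword argument, so the tool's parameter names must match the declared schema.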
@@ -319,7 +319,7 @@ We see that we successfully created a task called "team meeting", in the "Meetin
 
 Now that we have executed our tool calls, we have to send the result back to the model, so that it can generate a new response that takes this new information into account, to tell the user the task has been created successfully or to give the time report for instance.
 
-All we have to do, is to add the tool results as new `tool` messages into the conversation, so we'll update our code:
+All we have to do, is add the tool results as new `tool` messages into the conversation, so we'll update our code:
 ```python
 if assistant_response.tool_calls:
     print(f"<\t{len(assistant_response.tool_calls)} tool(s) to call")
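The shape of those `tool` messages can be sketched as follows (a hedged illustration using common chat-completions field names; the `call_0` id and result string are invented):

```python
# Hedged sketch: a tool result is appended to the conversation as a
# `tool` message whose `tool_call_id` echoes the id from the assistant's
# tool call, so the model can match each result to its request.
messages = [
    {"role": "user", "content": "Log 1 hour for the team meeting"},
]

def append_tool_result(messages, tool_call_id, result):
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call_id,  # the id taken from the tool call
        "content": str(result),        # tool output is sent back as text
    })

# illustrative values, not real API output
append_tool_result(messages, "call_0", "Logged 1.0h on 'team meeting'")
```

The updated `messages` list is then sent back to the model for the next completion.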