---
title: AI Endpoints - Function calling
excerpt: Learn how to use function calling with OVHcloud AI Endpoints
updated: 2025-08-06
---

> [!primary]
[AI Endpoints](https://endpoints.ai.cloud.ovh.net/) is a serverless platform provided by OVHcloud that offers easy access to a selection of world-renowned, pre-trained AI models. The platform is designed to be simple, secure, and intuitive, making it an ideal solution for developers who want to enhance their applications with AI capabilities without extensive AI expertise or concerns about data privacy.

**Function Calling**, also known as tool calling, is a feature that enables a large language model (LLM) to trigger user-defined functions (also called tools). These tools are defined by the developer and implement specific behaviors such as calling an API, fetching data or computing values, which extends the capabilities of the LLM.

The LLM will identify which tool(s) to call and the arguments to use. This feature can be used to develop assistants or agents, for instance.

## Objective

Visit our [Catalog](https://endpoints.ai.cloud.ovh.net/catalog) to find out which models support function calling.

## Requirements

We use Python for the examples provided in this guide.

Make sure you have a [Python](https://www.python.org/) environment configured, and install the [openai client](https://pypi.org/project/openai/).
```sh
pip install openai
```
### Authentication & rate limiting

All the examples provided in this guide use anonymous authentication, which is simpler but may cause rate limiting issues. If you wish to enable authentication using your own token, simply specify your API key within the requests.

Follow the instructions in the [AI Endpoints - Getting Started](/pages/public_cloud/ai_machine_learning/endpoints_guide_01_getting_started) guide for more information on authentication.
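
As an illustration, a client configured with a token could look like the sketch below. The base URL shown is the OpenAI-compatible endpoint URL at the time of writing, and the environment variable name is an assumption: check your model's page in the catalog and the Getting Started guide for the exact values.

```python
import os
from openai import OpenAI

# Assumed base URL and environment variable name; adjust to your setup.
client = OpenAI(
    base_url="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
    api_key=os.environ.get("OVH_AI_ENDPOINTS_ACCESS_TOKEN", "anonymous"),
)
```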

## Function Calling overview

The workflow to use function calling is described below:

1. **Define tools**: tell the model what tools it can use, with a JSON schema for each tool.
2. **Call the model with tools**: pass tools along with your system and user messages to the model, which will eventually generate tool calls.
3. **Process tool calls**: for each tool call returned by the model, execute the actual implementation of the tool in your code.
4. **Call the model with tool responses**: send a new request to the model, with the conversation updated with the tool call results.
5. **Final response**: process the final generated answer, which takes the tool results into account.

To illustrate the use of function calling and progressively introduce the important notions related to this feature, we are going to develop a time-tracking assistant, step-by-step.

The assistant will be able to:

* log time spent on a task
* generate a time report

Each task has a name, a category and a total duration in minutes. Categories are a fixed list of strings, for example "Code" or "Meetings". A time report can be generated for a category of tasks.
The user will be able to interact with the assistant to log time and get information about how time was spent.

### Define tools

Our time-tracking assistant will use two tools:

* `log_work`: log time spent on a task. Takes the name of the task, the category, the duration and the unit (minutes or hours). For example, to log 2 hours on documentation writing, you would call `log_work("User guide", "Documentation", 2, "hours")`.
* `time_report`: get JSON data about all tasks of a given category, and the total duration in a given time unit (minutes or hours). For example, to get the breakdown of time spent on coding tasks, in hours, you would call `time_report("Code", "hours")`.

To get the model to use those tools, first we have to declare them with JSON schemas, in a `tools` list that we will pass to the Chat Completion API.
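
The full `TOOLS` declaration is not reproduced in this excerpt; a minimal sketch following the OpenAI function schema could look like this (the descriptions and the exact category list are illustrative assumptions):

```python
# Fixed categories used by the assistant (assumed list for this sketch).
CATEGORIES = ["Code", "Meetings", "Documentation"]

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "log_work",
            "description": "Log time spent on a task.",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {"type": "string", "description": "Name of the task."},
                    "category": {"type": "string", "enum": CATEGORIES},
                    "duration": {"type": "number", "description": "Time spent on the task."},
                    "unit": {"type": "string", "enum": ["minutes", "hours"]},
                },
                "required": ["name", "category", "duration", "unit"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "time_report",
            "description": "Get all tasks of a category and their total duration.",
            "parameters": {
                "type": "object",
                "properties": {
                    "category": {"type": "string", "enum": CATEGORIES},
                    "unit": {"type": "string", "enum": ["minutes", "hours"]},
                },
                "required": ["category", "unit"],
            },
        },
    },
]
```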

### Generate tool calls

With our tools ready, we can now try to call the model and see if it understands our tool definitions. We use the OpenAI Python SDK to call the ``/v1/chat/completions`` route on the endpoint, passing the tool definitions in the `tools` parameter.

Let's send a simple user message: `log 1 hour team meeting` and see what the model answers.
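
Such a request can be sketched as follows; the `client` setup, the `TOOLS` list from the previous sections and the model name are assumptions (pick a model from the catalog that supports function calling):

```python
# Conversation to send: a system prompt plus the user request.
messages = [
    {"role": "system", "content": "You are a time-tracking assistant."},
    {"role": "user", "content": "log 1 hour team meeting"},
]

def generate_tool_calls(client, model, messages, tools):
    """Send the conversation together with the tool definitions
    and return the generated assistant message."""
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tools,
    )
    return response.choices[0].message
```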

Looking at the generated `assistant` message, we see that the model correctly identified that it needed to call the `log_work` tool.

The `tool_calls` list contains the tool calls the model generated in response to our user message.
The `name` and `arguments` fields specify which tool to call and which parameters to pass to the function. The `id` is a unique identifier for this tool call, which we will need later on.

You can have multiple tool calls in this list.

Under the hood, the model has recognized that the user's intent was related to the set of tools provided, and generated a sequence of specific tokens that were post-processed to create a tool call object.

We add this message to the conversation so that the model is aware of this tool call in subsequent rounds of our multi-turn conversation.

### Process tool calls

Now that we see that the model is able to generate tool calls, we need to write the Python implementation of the tools, so that we can process the tool calls generated by the LLM and actually start logging time!

Each task is stored in a dict, with the task name as the key; categories are a fixed list.
We define the two functions, `log_work` and `time_report`, in the Python code below:
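
The full listing is not reproduced in this excerpt; a possible implementation sketch (the storage layout, return values and category list are assumptions) is:

```python
# Fixed list of categories (assumed for this sketch).
CATEGORIES = ["Code", "Meetings", "Documentation"]

# Task storage: task name -> {"category": ..., "duration": minutes}.
tasks = {}

def log_work(name, category, duration, unit):
    """Log time spent on a task, converting hours to minutes if needed."""
    if category not in CATEGORIES:
        return f"Unknown category: {category}"
    minutes = duration * 60 if unit == "hours" else duration
    task = tasks.setdefault(name, {"category": category, "duration": 0})
    task["duration"] += minutes
    return f"Logged {minutes} minutes on task '{name}'"

def time_report(category, unit):
    """Return all tasks of a category and their total duration in the given unit."""
    selected = {k: v for k, v in tasks.items() if v["category"] == category}
    total = sum(v["duration"] for v in selected.values())
    if unit == "hours":
        total = total / 60
    return {"tasks": selected, "total_duration": total, "unit": unit}
```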

We see that we successfully created a task called "team meeting", in the "Meetings" category.

### Send tool call results and get the final response

Now that we have executed our tool calls, we need to send the results back to the model so that it can generate a new response that takes this new information into account, for example to tell the user that the task has been successfully created, or to give the time report.

All we have to do is add the tool results as new `tool` messages in the conversation, so we'll update our code:

```python
if assistant_response.tool_calls:
    print(f"<\t{len(assistant_response.tool_calls)} tool(s) to call")
```

### Parallel tool calls

Some models are able to generate multiple tool calls in one round (see the time-tracking tutorial above for an example). To control this behavior, the OpenAI specification allows you to pass a `parallel_tool_calls` boolean parameter.

If `false`, the model can generate at most one tool call. This case is currently not supported by AI Endpoints.
If you need your system to process only one tool call at a time, or if the model you are using doesn't support multiple tool calls, we suggest you pick the first one, process it, and call the model again.

Please note that LLaMa models do not support multiple tool calls between user and assistant messages.

### Prompting & additional parameters

Some additional considerations regarding prompts and model parameters:

- Most models tend to perform better when using a lower temperature for function calling.
- The use of a system prompt is recommended, to ground the model into using the tools at its disposal. Whether a system prompt is defined or not, a description of the tools will usually be included in the tokens sent to the model (see the model chat template for more details).
- If you know in advance that your model needs to call tools, use the `tool_choice=required` parameter to make sure it generates at least one tool call.
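
Combining these considerations, a request could be sketched as below; the helper name is hypothetical, and `client`, `messages` and `tools` are assumed to come from the previous sections:

```python
def call_with_function_calling_params(client, model, messages, tools):
    """Sketch of a request combining the parameters discussed above."""
    return client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tools,
        temperature=0.2,         # lower temperature tends to improve tool selection
        tool_choice="required",  # force the model to generate at least one tool call
    )
```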

If you need training or technical assistance to implement our solutions, contact your sales representative.

Please send us your questions, feedback and suggestions to improve the service:
- On the OVHcloud [Discord server](https://discord.gg/ovhcloud)