
Commit 3529969

feat(ifr): added reference page
1 parent 417793b commit 3529969

File tree

2 files changed: +62 -5 lines

Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@
---
meta:
  title: Support for function calling in Scaleway Managed Inference
  description: Function calling allows models to connect to external tools.
content:
  h1: Support for function calling in Scaleway Managed Inference
  paragraph: Function calling allows models to connect to external tools.
tags:
categories:
  - ai-data
---

## What is function calling?

Function calling allows a large language model (LLM) to interact with external tools or APIs, executing specific tasks based on user requests. The LLM identifies the appropriate function, extracts the required parameters, and returns the results as structured data, typically in JSON format. While errors can occur, custom parsers or frameworks such as LlamaIndex and LangChain can help ensure valid results.
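As a purely illustrative sketch (not part of the committed page), the structured data an OpenAI-compatible model returns for a function call carries the selected function's name plus its arguments serialized as a JSON string. The flight-lookup name and argument values below are hypothetical:

```python
import json

# Illustrative shape of one entry in the model's "tool_calls" output.
# The function name and argument values are hypothetical.
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "get_flight_schedule",
        "arguments": '{"departure_airport": "CDG", "date": "2024-11-02"}',
    },
}

# Arguments arrive as a JSON string and must be parsed before the function is run.
params = json.loads(tool_call["function"]["arguments"])
print(params["departure_airport"])  # CDG
```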

## How to implement function calling in Scaleway Managed Inference?

[This tutorial](/tutorials/building-ai-application-function-calling/) will guide you through the steps of creating a simple flight schedule assistant that can understand natural language queries about flights and return structured information.

## Which models have function calling capabilities?

The following models in Scaleway's Managed Inference library can call tools following the OpenAI method:

* meta/llama-3.1-8b-instruct
* meta/llama-3.1-70b-instruct
* mistral/mistral-7b-instruct-v0.3
* mistral/mistral-nemo-instruct-2407

## Understanding function calling

Function calling consists of three main components (a sample tool definition follows this list):
- **Tool definitions**: JSON schemas that describe the available functions and their parameters
- **Tool selection**: Automatic or manual selection of the appropriate functions based on user queries
- **Tool execution**: Processing function calls and handling their responses
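For example, a tool definition in the OpenAI-compatible format pairs a function name and description with a JSON schema for its parameters. The sketch below uses a hypothetical flight-lookup function; the parameter names are illustrative, not the tutorial's exact specification:

```python
# Hypothetical tool definition in the OpenAI-compatible "tools" format:
# a function name, a description, and a JSON schema describing the parameters.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_flight_schedule",
            "description": "Return scheduled flights for a departure airport on a given date.",
            "parameters": {
                "type": "object",
                "properties": {
                    "departure_airport": {
                        "type": "string",
                        "description": "IATA code of the departure airport, e.g. CDG",
                    },
                    "date": {
                        "type": "string",
                        "description": "Departure date in YYYY-MM-DD format",
                    },
                },
                "required": ["departure_airport", "date"],
            },
        },
    }
]
```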

The workflow typically follows these steps (a condensed sketch follows the list):
1. Define the available tools using JSON schema
2. Send the system and user query along with the tool definitions
3. Process the model's function selection
4. Execute the selected functions
5. Return the results to the model for the final response
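The sketch below condenses these five steps into one script using the OpenAI Python client pointed at a Scaleway endpoint. The model name, environment variables, and the get_flight_schedule helper mirror the tutorial referenced above, but the helper body here is a stub and its exact parameters are assumptions:

```python
import json
import os

from openai import OpenAI

# Client configuration mirroring the tutorial; both environment variables are assumptions.
client = OpenAI(
    base_url=os.environ.get("SCALEWAY_INFERENCE_ENDPOINT_URL"),
    api_key=os.environ.get("SCALEWAY_API_KEY"),
)
MODEL = "meta/llama-3.1-70b-instruct:fp8"  # adjust to your deployment or Generative APIs model

# 1. Define the available tools using JSON schema (hypothetical flight-lookup tool).
tools = [{
    "type": "function",
    "function": {
        "name": "get_flight_schedule",
        "description": "Return scheduled flights for a departure airport on a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "departure_airport": {"type": "string"},
                "date": {"type": "string"},
            },
            "required": ["departure_airport", "date"],
        },
    },
}]

def get_flight_schedule(departure_airport: str, date: str) -> dict:
    # Stub standing in for a real flight lookup.
    return {"departure_airport": departure_airport, "date": date, "flights": []}

# 2. Send the system and user query along with the tool definitions.
messages = [
    {"role": "system", "content": "You are a flight schedule assistant."},
    {"role": "user", "content": "Which flights leave CDG on 2024-11-02?"},
]
response = client.chat.completions.create(
    model=MODEL, messages=messages, tools=tools, tool_choice="auto"
)

# 3. Process the model's function selection and 4. execute the selected functions.
reply = response.choices[0].message
messages.append(reply)
for tool_call in reply.tool_calls or []:
    arguments = json.loads(tool_call.function.arguments)
    result = get_flight_schedule(**arguments)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps(result),
    })

# 5. Return the results to the model for the final, natural-language response.
final_response = client.chat.completions.create(model=MODEL, messages=messages)
print(final_response.choices[0].message.content)
```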

## Further resources

For more information about function calling and advanced implementations, refer to these resources:

- [OpenAI Function Calling Guide](https://platform.openai.com/docs/guides/function-calling)
- [JSON Schema Specification](https://json-schema.org/specification)

tutorials/building-ai-application-function-calling/index.mdx

Lines changed: 13 additions & 5 deletions
@@ -24,7 +24,7 @@ This tutorial will guide you through creating a simple flight schedule assistant
 - A Scaleway account logged into the [console](https://console.scaleway.com)
 - Python 3.7 or higher
 - An API key from Scaleway [Identity and Access Management](https://www.scaleway.com/en/docs/identity-and-access-management/iam/)
-- Access to Scaleway [Generative APIs](/ai-data/generative-apis/quickstart/)
+- Access to Scaleway [Generative APIs](/ai-data/generative-apis/quickstart/) or to Scaleway [Managed Inference](/ai-data/managed-inference/quickstart/)
 - The `openai` Python library installed

 ## Understanding function calling
@@ -106,9 +106,17 @@ import json
 from flight_schedule import get_flight_schedule

 # Initialize the OpenAI client with Scaleway configuration
+
+MODEL="meta/llama-3.1-70b-instruct:fp8"
+# use the right name according to your Managed Inference deployment or Generative APIs model
+
+API_KEY = os.environ.get("SCALEWAY_API_KEY")
+BASE_URL = os.environ.get("SCALEWAY_INFERENCE_ENDPOINT_URL")
+# use https://api.scaleway.ai/v1 for Scaleway Generative APIs
+
 client = OpenAI(
-    base_url="https://api.scaleway.com/v1",
-    api_key=os.environ.get("SCALEWAY_API_KEY")
+    base_url=BASE_URL,
+    api_key=API_KEY
 )

 # Define the tool specification
@@ -155,7 +163,7 @@ def process_query(user_query: str) -> str:

     # Get the model's response
     response = client.chat.completions.create(
-        model="llama-3.1-70b-instruct", # Use appropriate model name
+        model=MODEL,
         messages=messages,
         tools=tools,
         tool_choice="auto"
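The diff skips the middle of process_query, where the model's tool calls are executed between this hunk and the next. A hedged sketch of that typical handling (not the tutorial's exact code; the helper name and message bookkeeping are assumptions) could look like this:

```python
import json

def handle_tool_calls(client, model, messages, response, available_functions):
    """Run any tool calls from `response`, then ask the model for a final answer.

    Illustrative only: the tutorial's real logic lives in the lines this diff omits.
    `available_functions` maps tool names to callables,
    e.g. {"get_flight_schedule": get_flight_schedule}.
    """
    reply = response.choices[0].message
    messages.append(reply)
    for tool_call in reply.tool_calls or []:
        arguments = json.loads(tool_call.function.arguments)
        result = available_functions[tool_call.function.name](**arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result),
        })
    return client.chat.completions.create(model=model, messages=messages)
```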
@@ -184,7 +192,7 @@ def process_query(user_query: str) -> str:

     # Get final response
     final_response = client.chat.completions.create(
-        model="llama-3.1-70b-instruct",
+        model=MODEL,
         messages=messages
     )
