diff --git a/pages/generative-apis/reference-content/integrating-generative-apis-with-popular-tools.mdx b/pages/generative-apis/reference-content/integrating-generative-apis-with-popular-tools.mdx
index fad22b3f8f..b4a887b40c 100644
--- a/pages/generative-apis/reference-content/integrating-generative-apis-with-popular-tools.mdx
+++ b/pages/generative-apis/reference-content/integrating-generative-apis-with-popular-tools.mdx
@@ -71,13 +71,72 @@ print(response.choices[0].message.content)
LangChain is a popular library for building AI applications. Scaleway's Generative APIs support LangChain for both inference and embeddings.
### Python
+
+#### Function calling
+
+1. Run the following commands to install LangChain and its dependencies:
+ ```bash
+ $ pip install 'langchain>=0.3.24'
+ $ pip install 'langchain-core>=0.3.55'
+ $ pip install 'langchain-openai>=0.3.14'
+ $ pip install 'langchain-text-splitters>=0.3.8'
+ ```
+2. Create a file named `tools.py` and paste the code below into it to define the example tools:
+    ```python
+ from langchain_core.messages import HumanMessage
+ from langchain.chat_models import init_chat_model
+ from langchain_core.tools import tool
+
+
+ @tool
+ def add(a: int, b: int) -> int:
+ """Adds a and b."""
+ return a + b
+
+
+ @tool
+ def multiply(a: int, b: int) -> int:
+ """Multiplies a and b."""
+ return a * b
+
+
+ tools = [add, multiply]
+ ```
+3. Configure the `init_chat_model` function to use Scaleway's Generative APIs. The OpenAI-compatible client reads your Scaleway secret key from the `OPENAI_API_KEY` environment variable.
+    ```python
+    llm = init_chat_model("mistral-small-3.1-24b-instruct-2503", model_provider="openai", base_url="https://api.scaleway.ai/v1")
+    ```
+4. Use the `llm` object and the `tools` list to generate a response to your query with the following code:
+    ```python
+    llm_with_tools = llm.bind_tools(tools)  # Attach the tools so the model can request them.
+
+    query = "What is 3 * 12?"
+    # You can also try the following query:
+    # query = "What is 42 + 4?"
+
+    messages = [HumanMessage(query)]  # Initialize the conversation with the user's query.
+
+    ai_msg = llm_with_tools.invoke(messages)  # Generate a response; the model may request tool calls.
+    messages.append(ai_msg)  # Append the response to the messages list.
+
+    for tool_call in ai_msg.tool_calls:
+        selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]  # Select the tool matching the requested name.
+        tool_msg = selected_tool.invoke(tool_call)  # Run the tool; this returns a ToolMessage with the result.
+        messages.append(tool_msg)  # Append the tool's result to the messages list.
+
+    print(llm_with_tools.invoke(messages).content)  # Print the final response, which incorporates the tool results.
+    ```
+5. Run `tools.py`:
+ ```bash
+ $ python tools.py
+ The result of 3 * 12 is 36.
+ ```
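The dispatch step in step 4 (routing a tool call's name back to a local function) can be sketched in plain Python, with no model in the loop. This is a minimal illustration only: the `tool_call` dict below is a hand-written stand-in whose shape mirrors the entries of LangChain's `ai_msg.tool_calls`, and `TOOLS` is an illustrative name for the lookup table.

```python
# Minimal sketch of the tool-dispatch step, assuming LangChain-style
# tool_call dicts of the shape {"name": ..., "args": {...}}.
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b

TOOLS = {"add": add, "multiply": multiply}

# A tool call as the model might emit it for the query "What is 3 * 12?".
tool_call = {"name": "multiply", "args": {"a": 3, "b": 12}}

# Route the call to the matching local function, as in step 4.
result = TOOLS[tool_call["name"].lower()](**tool_call["args"])
print(result)
```

In the real flow, the model chooses `name` and `args`, and the tool's return value is sent back to the model as a tool message before the final answer is generated.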
+
Refer to our dedicated documentation for [implementing Retrieval-Augmented Generation (RAG) with LangChain and Scaleway Generative APIs](/tutorials/how-to-implement-rag-generativeapis/)
## LlamaIndex (advanced RAG applications)
-LlamaIndex is an open-source framework for building Large Language Models (LLMs) based applications, especially optimizing RAG (Retrieval Augmented Generation) pipelines.
+LlamaIndex is an open-source framework for building applications based on Large Language Models (LLMs), with a particular focus on optimizing Retrieval-Augmented Generation (RAG) pipelines.
1. Install the required dependencies to use the LlamaIndex framework with Scaleway's Generative APIs:
```bash
pip install llama-index-llms-openai-like
@@ -197,7 +256,7 @@ Chatbox AI is a powerful AI client and smart assistant, compatible with Scaleway
## Bolt.diy (code generation)
-Bolt.diy is a software enabling users to create web applications from the prompt.
+Bolt.diy is a tool that enables users to create web applications from a prompt.
1. Install and launch Bolt.diy locally. Follow the setup instructions provided in the [Bolt.diy GitHub repository](https://github.com/stackblitz-labs/bolt.diy?tab=readme-ov-file#setup).
2. Once Bolt.diy is running, open the interface in your web browser.
@@ -206,9 +265,13 @@ Bolt.diy is a software enabling users to create web applications from the prompt
5. Click **Local Providers** to add a new external provider configuration.
6. Toggle the switch next to **OpenAILike** to enable it. Then, enter the Scaleway API endpoint: `https://api.scaleway.ai/v1` as the base URL.
7. In Bolt's main menu, select `OpenAILike` and input your **Scaleway Secret Key** as the `OpenAILike API Key`.
-8. Select one of the supported models from Scaleway Generative APIs. For best results with Bolt.diy, which requires a significant amount of output tokens (8000 by default), start with the `llama-3.1-8b-instruct` model.
+8. Select one of the supported models from Scaleway Generative APIs. For best results with Bolt.diy, which requires a large number of output tokens (8,000 by default), start with the `gemma-3-27b-it` model.
9. Enter your prompt in the Bolt.diy interface to see your application being generated.
+
+    Only models with a maximum output of at least 8,000 tokens are supported. Refer to the [list of Generative APIs models](/generative-apis/reference-content/supported-models/#chat-models) for more information.
+
 Alternatively, you can also set up your Scaleway Secret Key by renaming `.env.example` to `.env`, adding the corresponding environment variable values, and restarting Bolt.diy:
```bash
OPENAI_LIKE_API_BASE_URL=https://api.scaleway.ai/v1