Commit 193625c

docs(genapi): add a langchain integration and update bolt.diy integration using GenAPIs (#4911)
1 parent a9aeed3 commit 193625c

File tree

1 file changed: +66, -3 lines

pages/generative-apis/reference-content/integrating-generative-apis-with-popular-tools.mdx

Lines changed: 66 additions & 3 deletions
@@ -71,13 +71,72 @@ print(response.choices[0].message.content)
LangChain is a popular library for building AI applications. Scaleway's Generative APIs support LangChain for both inference and embeddings.

### Python

#### Function calling

1. Run the following commands to install LangChain and its dependencies:
   ```bash
   pip install 'langchain>=0.3.24'
   pip install 'langchain-core>=0.3.55'
   pip install 'langchain-openai>=0.3.14'
   pip install 'langchain-text-splitters>=0.3.8'
   ```
2. Create a file named `tools.py` and paste the code below into it to import and create the example tools:
   ```python
   from langchain_core.messages import HumanMessage
   from langchain.chat_models import init_chat_model
   from langchain_core.tools import tool


   @tool
   def add(a: int, b: int) -> int:
       """Adds a and b."""
       return a + b


   @tool
   def multiply(a: int, b: int) -> int:
       """Multiplies a and b."""
       return a * b


   tools = [add, multiply]
   ```
3. Configure the `init_chat_model` function to use Scaleway's Generative APIs, and bind the tools to the model (the underlying OpenAI-compatible client reads your Scaleway secret key from the `OPENAI_API_KEY` environment variable):
   ```python
   llm = init_chat_model("mistral-small-3.1-24b-instruct-2503", model_provider="openai", base_url="https://api.scaleway.ai/v1")
   llm_with_tools = llm.bind_tools(tools)  # expose the tools to the model
   ```
4. Use the `llm_with_tools` object and the `tools` list to generate a response to your query with the following code:
   ```python
   query = "What is 3 * 12?"
   # You can also try the following query:
   # query = "What is 42 + 4?"

   messages = [HumanMessage(query)]  # Initialize the messages list with the user's query

   ai_msg = llm_with_tools.invoke(messages)  # Generate a response to the query
   messages.append(ai_msg)  # Append the response to the messages list

   for tool_call in ai_msg.tool_calls:
       selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]  # Select the appropriate tool by name
       tool_msg = selected_tool.invoke(tool_call)  # Invoke the selected tool with the tool call
       messages.append(tool_msg)  # Append the tool's response to the messages list

   print(llm_with_tools.invoke(messages).content)  # Print the content of the final response
   ```
5. Run `tools.py`:
   ```bash
   $ python tools.py
   The result of 3 * 12 is 36.
   ```
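The tool-dispatch loop in the steps above can be sketched without LangChain at all: a tool call is just a name plus arguments, routed through a dictionary of functions. The snippet below is a simplified stand-in, not LangChain's API; the hard-coded tool-call list mimics what the model returns for the example query.

```python
# Minimal stand-in for the dispatch pattern used in tools.py above.
# No LangChain here: tool calls are modeled as plain dicts, and the
# "model response" is hard-coded, so this only illustrates the control flow.

def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b

TOOLS = {"add": add, "multiply": multiply}

def run_tool_calls(tool_calls):
    """Dispatch each tool call to the matching function, as in step 4."""
    results = []
    for call in tool_calls:
        tool = TOOLS[call["name"].lower()]    # select the tool by name
        results.append(tool(**call["args"]))  # invoke it with the parsed arguments
    return results

# A tool-call list like the one the model produces for "What is 3 * 12?"
fake_tool_calls = [{"name": "multiply", "args": {"a": 3, "b": 12}}]
print(run_tool_calls(fake_tool_calls))  # [36]
```

In the real flow, LangChain parses `ai_msg.tool_calls` from the model's response and each tool result is appended back to the conversation before the final generation.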
<Message type="tip">
  Refer to our dedicated documentation for [implementing Retrieval-Augmented Generation (RAG) with LangChain and Scaleway Generative APIs](/tutorials/how-to-implement-rag-generativeapis/).
</Message>

## LlamaIndex (advanced RAG applications)

LlamaIndex is an open-source framework for building Large Language Model (LLM)-based applications, especially for optimizing RAG (Retrieval-Augmented Generation) pipelines.

1. Install the required dependencies to use the LlamaIndex framework with Scaleway's Generative APIs:
   ```bash
   pip install llama-index-llms-openai-like
   ```
@@ -197,7 +256,7 @@ Chatbox AI is a powerful AI client and smart assistant, compatible with Scaleway

## Bolt.diy (code generation)

Bolt.diy is a tool that enables users to create web applications from a prompt.

1. Install and launch Bolt.diy locally. Follow the setup instructions provided in the [Bolt.diy GitHub repository](https://github.com/stackblitz-labs/bolt.diy?tab=readme-ov-file#setup).
2. Once Bolt.diy is running, open the interface in your web browser.
@@ -206,9 +265,13 @@ Bolt.diy is a software enabling users to create web applications from the prompt
5. Click **Local Providers** to add a new external provider configuration.
6. Toggle the switch next to **OpenAILike** to enable it. Then, enter the Scaleway API endpoint `https://api.scaleway.ai/v1` as the base URL.
7. In Bolt's main menu, select `OpenAILike` and enter your **Scaleway secret key** as the `OpenAILike API Key`.
8. Select one of the supported models from Scaleway Generative APIs. For best results with Bolt.diy, which requires a significant number of output tokens (8,000 by default), start with the `gemma-3-27b-it` model.
9. Enter your prompt in the Bolt.diy interface to see your application being generated.

<Message type="important">
  Only models with a maximum output of at least 8,000 tokens are supported. Refer to the [list of Generative APIs models](/generative-apis/reference-content/supported-models/#chat-models) for more information.
</Message>

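Given that constraint, a quick way to shortlist candidate models is to filter a table of models by their maximum output tokens. The model names and limits below are illustrative placeholders, not the authoritative list; check the supported-models page for real values.

```python
# Hypothetical shortlist helper: keep only models whose maximum output
# tokens meet Bolt.diy's default budget. The entries are placeholders.

MIN_OUTPUT_TOKENS = 8000  # Bolt.diy's default output-token requirement

models = {
    "model-a": 8192,   # placeholder name and limit
    "model-b": 4096,   # too small for Bolt.diy's default
    "model-c": 16384,  # placeholder name and limit
}

def bolt_compatible(models: dict, minimum: int = MIN_OUTPUT_TOKENS) -> list:
    """Return model names whose max output tokens meet the minimum."""
    return sorted(name for name, max_out in models.items() if max_out >= minimum)

print(bolt_compatible(models))  # ['model-a', 'model-c']
```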
Alternatively, you can set up your Scaleway secret key by renaming `.env.example` to `.env`, adding the corresponding environment variable values, and restarting Bolt.diy:
```bash
OPENAI_LIKE_API_BASE_URL=https://api.scaleway.ai/v1
```
