Commit b1f0fe7

feat(genapi): add examples
1 parent ece78c3 commit b1f0fe7

2 files changed: 182 additions, 0 deletions


menu/navigation.json

Lines changed: 4 additions & 0 deletions
```diff
@@ -964,6 +964,10 @@
       {
         "label": "Adding AI to IntelliJ IDEA using Continue",
         "slug": "adding-ai-to-intellij-using-continue"
+      },
+      {
+        "label": "Integrating Generative APIs with popular AI tools",
+        "slug": "integrating-generative-apis-with-popular-tools"
       }
     ],
     "label": "Additional Content",
```
Lines changed: 178 additions & 0 deletions
@@ -0,0 +1,178 @@
---
meta:
  title: Integrating Scaleway Generative APIs with popular AI tools
  description: Learn how to integrate Scaleway's Generative APIs with popular AI tools.
content:
  h1: Integrating Scaleway Generative APIs with popular AI tools
  paragraph: Learn how to integrate Scaleway's Generative APIs with popular AI tools.
tags: generative-apis, ai, language-models
validation_date: 2025-02-18
posted_date: 2025-02-18
---

Scaleway’s Generative APIs provide easy integration with various AI frameworks and tools. This guide outlines the configuration steps needed to integrate Scaleway's models into different environments.
## OpenAI-compatible libraries
Scaleway Generative APIs follow OpenAI’s API structure, making integration straightforward.

### Configuration
Set the API key and base URL on an OpenAI client (the legacy `openai.ChatCompletion` interface was removed in `openai` v1; use the client object instead):

```python
from openai import OpenAI

client = OpenAI(
    api_key="<API secret key>",
    base_url="https://api.scaleway.ai/v1",
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "Tell me a joke about AI"}],
)

print(response.choices[0].message.content)
```
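Rather than hardcoding the secret key in source files, you can read it from the environment. A minimal sketch, assuming a `SCW_SECRET_KEY` variable (the name is illustrative, not an official convention):

```python
import os

def load_api_key(var: str = "SCW_SECRET_KEY") -> str:
    """Fetch the API secret key from the environment so it never
    lands in source control. The variable name is illustrative."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before running this script")
    return key

# Any OpenAI-compatible client above can then be constructed with
# api_key=load_api_key() instead of a hardcoded string.
```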
## LangChain (RAG & LLM applications)

LangChain supports Scaleway models for both inference and embeddings.

### Configuration
1. Install required dependencies:

   ```bash
   pip install langchain langchain_openai langchain_postgres psycopg2
   ```

2. Set up the API connection:

   ```python
   from langchain_openai import OpenAIEmbeddings, ChatOpenAI
   import os

   os.environ["OPENAI_API_KEY"] = "<API secret key>"
   os.environ["OPENAI_API_BASE"] = "https://api.scaleway.ai/v1"

   llm = ChatOpenAI(model="llama-3.1-8b-instruct")
   embeddings = OpenAIEmbeddings(model="bge-multilingual-gemma2")
   ```

3. Use a vector store for retrieval:

   ```python
   from langchain_postgres import PGVector

   connection_string = "postgresql+psycopg2://user:password@host:port/dbname"
   vector_store = PGVector(connection=connection_string, embeddings=embeddings)
   ```
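Documents are typically split into overlapping chunks before being embedded and added to the vector store, so that context spanning a boundary is not lost. A minimal fixed-size chunking sketch (LangChain's own text splitters offer richer options; the sizes here are illustrative):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap by `overlap`
    characters, so neighboring chunks share boundary context."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Each chunk would then be embedded and inserted, e.g. with
# vector_store.add_texts(chunk_text(document_text)).
```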
## LlamaIndex (document indexing & retrieval)

LlamaIndex enables easy document retrieval using Scaleway’s models.

### Configuration
1. Install dependencies:

   ```bash
   pip install llama-index
   ```

2. Set up the embedding model:

   ```python
   from llama_index.embeddings.openai import OpenAIEmbedding

   embed_model = OpenAIEmbedding(
       api_key="<API secret key>",
       api_base="https://api.scaleway.ai/v1",
       model="bge-multilingual-gemma2",
   )
   ```

3. Index and query documents (in llama-index 0.10 and later, core classes are imported from `llama_index.core`):

   ```python
   from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

   documents = SimpleDirectoryReader("data").load_data()
   index = VectorStoreIndex.from_documents(documents, embed_model=embed_model)
   query_engine = index.as_query_engine()

   response = query_engine.query("Summarize this document")
   print(response)
   ```
## Continue Dev (AI coding assistance)

Continue Dev allows configuring Scaleway models for code completion.

### Configuration
Modify Continue’s `config.json` (in the `.continue` directory of your home folder) to add Scaleway’s API:

```json
{
  "models": [
    {
      "title": "Qwen2.5-Coder-32B-Instruct",
      "provider": "scaleway",
      "model": "qwen2.5-coder-32b-instruct",
      "apiKey": "<API secret key>"
    }
  ],
  "embeddingsProvider": {
    "provider": "scaleway",
    "model": "bge-multilingual-gemma2",
    "apiKey": "<API secret key>"
  }
}
```
---

## Hugging Face (`huggingface_hub` integration)

Hugging Face’s `transformers` pipelines load and run models locally, so they cannot call a remote endpoint directly. To reach Scaleway-hosted models from the Hugging Face ecosystem, use the `InferenceClient` from `huggingface_hub`, which can target OpenAI-compatible endpoints.

### Configuration
1. Install dependencies:

   ```bash
   pip install huggingface_hub
   ```

2. Point the client at Scaleway’s endpoint:

   ```python
   from huggingface_hub import InferenceClient

   client = InferenceClient(
       base_url="https://api.scaleway.ai/v1",
       api_key="<API secret key>",
   )

   response = client.chat_completion(
       model="llama-3.1-8b-instruct",
       messages=[{"role": "user", "content": "Write a short poem about the ocean"}],
   )
   print(response.choices[0].message.content)
   ```
---

## API clients & custom integrations
You can interact with Scaleway’s Generative APIs directly using any HTTP client.

### cURL example
```bash
curl https://api.scaleway.ai/v1/chat/completions \
  -H "Authorization: Bearer <API secret key>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "What is quantum computing?"}]
  }'
```

### Python example
```python
import requests

headers = {
    "Authorization": "Bearer <API secret key>",
    "Content-Type": "application/json",
}

data = {
    "model": "llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Explain black holes"}],
}

response = requests.post(
    "https://api.scaleway.ai/v1/chat/completions",
    json=data,
    headers=headers,
)
print(response.json()["choices"][0]["message"]["content"])
```
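Responses from the chat completions endpoint follow OpenAI’s schema, as the examples above show. A small helper for pulling the assistant’s reply out of the decoded JSON, with defensive error handling (the sample payload below is illustrative, not a real API response):

```python
def extract_reply(payload: dict) -> str:
    """Return the assistant message content from an OpenAI-style
    chat completion response, or raise if the shape is unexpected."""
    try:
        return payload["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError) as exc:
        raise ValueError(f"Unexpected response shape: {payload!r}") from exc

# Illustrative payload mirroring the fields used in the examples above:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Black holes are..."}}
    ]
}
print(extract_reply(sample))
```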
