diff --git a/pages/generative-apis/reference-content/adding-ai-to-vscode-using-continue.mdx b/pages/generative-apis/reference-content/adding-ai-to-vscode-using-continue.mdx
index 898dd6bf8c..44d5ac8fe4 100644
--- a/pages/generative-apis/reference-content/adding-ai-to-vscode-using-continue.mdx
+++ b/pages/generative-apis/reference-content/adding-ai-to-vscode-using-continue.mdx
@@ -30,7 +30,25 @@ code --install-extension continue.continue
### Configure Continue to use Scaleway’s Generative APIs
-To link Continue with Scaleway's Generative APIs, you need to configure a settings file:
+#### Configure Continue through graphical interface
+
+To link Continue with Scaleway's Generative APIs, you can use Continue's built-in menus in VS Code.
+
+- Select Continue in the left menu.
+- In the prompt section, click the **Select model** dropdown, then **Add Chat model**.
+- Select **Scaleway** as the provider.
+- Select the model you want to use (we recommend `Qwen 2.5 Coder 32b` to get started).
+- Enter your **Scaleway Secret Key**. To get started, we recommend using a Secret Key that has access to your `default` Scaleway Project.
+
+These actions automatically edit your `config.json` file. To edit it manually, see [Configure Continue through configuration file](#configure-continue-through-configuration-file).
+
+
+  Embeddings and autocomplete models are not yet supported through the graphical interface configuration. To enable them, you need to edit the configuration manually; see [Configure Continue through configuration file](#configure-continue-through-configuration-file).
+
+
+#### Configure Continue through configuration file
+
+Alternatively, you can link Continue with Scaleway's Generative APIs by editing a configuration file directly:
- Create a `config.json` file inside your `.continue` directory.
- Add the following configuration to enable Scaleway's Generative API:
@@ -43,7 +61,18 @@ To link Continue with Scaleway's Generative APIs, you need to configure a settin
"provider": "scaleway",
"apiKey": "###SCW_SECRET_KEY###"
}
- ]
+ ],
+ "embeddingsProvider": {
+ "model": "bge-multilingual-gemma2",
+ "provider": "scaleway",
+ "apiKey": "###SCW_SECRET_KEY###"
+ },
+ "tabAutocompleteModel": {
+ "model": "qwen2.5-coder-32b",
+ "title": "Qwen2.5 Coder Autocomplete",
+ "provider": "scaleway",
+ "apiKey": "###SCW_SECRET_KEY###"
+ }
}
```
- Save the file at the correct location:
@@ -52,6 +81,7 @@ To link Continue with Scaleway's Generative APIs, you need to configure a settin
For more details on configuring `config.json`, refer to the [official Continue documentation](https://docs.continue.dev/reference).
+  If you want to limit access to a specific Scaleway Project, add the field `"apiBase": "https://api.scaleway.ai/###PROJECT_ID###/v1/"` to each model entry (i.e. `models`, `embeddingsProvider`, and `tabAutocompleteModel`), since the default URL `https://api.scaleway.ai/v1/` can only be used with the `default` Project.
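+
+  For example, the `tabAutocompleteModel` entry from the configuration above could be scoped to a specific Project as follows (a sketch; replace the placeholders with your own Project ID and secret key):
+
+  ```json
+  "tabAutocompleteModel": {
+    "model": "qwen2.5-coder-32b",
+    "title": "Qwen2.5 Coder Autocomplete",
+    "provider": "scaleway",
+    "apiKey": "###SCW_SECRET_KEY###",
+    "apiBase": "https://api.scaleway.ai/###PROJECT_ID###/v1/"
+  }
+  ```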
### Activate Continue in VS Code
@@ -80,7 +110,7 @@ Use the "Continue: Generate Code" command to generate boilerplate code, function
### Intelligent code completion
-Continue enhances your coding workflow with AI-driven code completion. Simply start typing, and the AI model will predict and complete your code.
+Continue enhances your coding workflow with AI-driven code completion. Simply start typing, and the AI model will suggest code edits that you can accept by pressing `Tab`.
### Automated code refactoring
@@ -103,4 +133,4 @@ Refactoring is essential for maintaining clean and efficient code. Use the *"Con
## Conclusion
-By integrating Continue with Scaleway’s Generative APIs, you unlock AI-powered coding capabilities that enhance productivity, automate repetitive tasks, and improve code quality.
\ No newline at end of file
+By integrating Continue with Scaleway’s Generative APIs, you unlock AI-powered coding capabilities that enhance productivity, automate repetitive tasks, and improve code quality.
diff --git a/pages/generative-apis/reference-content/integrating-generative-apis-with-popular-tools.mdx b/pages/generative-apis/reference-content/integrating-generative-apis-with-popular-tools.mdx
index 25ab4fe564..ad2bbb1758 100644
--- a/pages/generative-apis/reference-content/integrating-generative-apis-with-popular-tools.mdx
+++ b/pages/generative-apis/reference-content/integrating-generative-apis-with-popular-tools.mdx
@@ -19,53 +19,60 @@ The following table compares AI tools and libraries supported by Scaleway's Gene
| Tool/Library | Description | Use cases | Integration effort |
| --- | --- | --- | --- |
-| [OpenAI](#openai-compatible-libraries) | Popular AI library for natural language processing | Text generation, language translation, text summarization | Low |
+| [OpenAI client](#openai-client-libraries) | Popular AI library for natural language processing | Text generation, language translation, text summarization | Low |
| [LangChain](#langchain-rag-and-llm-applications) | Library for building AI applications | Inference, embeddings, document indexing and retrieval | Medium |
| [LlamaIndex](#llamaindex-document-indexing-and-retrieval) | Library for indexing and retrieving documents using AI models | Document indexing and retrieval, question answering | Medium |
| [Continue Dev](#continue-dev-ai-coding-assistance) | Library for AI-powered coding assistance | Code completion, code review | Low |
| [Transformers (Hugging Face)](#transformers-hugging-face-integration) | Library for pre-trained models for natural language processing | Text generation, language translation, text summarization | Medium |
-| [cURL/Python](#api-clients-and-custom-integrations) | Direct API clients for custom integrations | Custom applications, data processing | High |
+| [cURL/Python](#custom-http-integrations) | Direct HTTP API calls for custom integrations | Custom applications, data processing | High |
The integration effort is subjective and may vary depending on the specific use case and requirements.
-## OpenAI-compatible libraries
+## OpenAI client libraries
Scaleway Generative APIs follow OpenAI's API structure, making integration straightforward. To get started, you'll need to install the OpenAI library and set up your API key.
### Configuration
-To use the OpenAI library with Scaleway's Generative APIs, you'll need to set the API key and base URL in your OpenAI-compatible client:
+To use the OpenAI client library with Scaleway's Generative APIs, you'll need to install the required dependencies:
+```bash
+pip install openai
+```
+
+Then you'll need to set the API key and base URL in your OpenAI-compatible client:
```python
-import openai
-openai.api_key = ""
-openai.api_base = "https://api.scaleway.ai/v1"
-response = openai.ChatCompletion.create(
- model="llama-3.1-8b-instruct",
- messages=[{"role": "user", "content": "Tell me a joke about AI"}]
+from openai import OpenAI
+client = OpenAI(
+ base_url="https://api.scaleway.ai/v1",
+ api_key=""
)
-print(response["choices"][0]["message"]["content"])
```
Make sure to replace `` with your actual API key.
-### Using OpenAI for text generation
+### Using the OpenAI client for text generation
-To use OpenAI for text generation, you can create a `ChatCompletion` object and call the `create` method:
+To use the OpenAI client for text generation, call the `create` method on `client.chat.completions`:
```python
-response = openai.ChatCompletion.create(
+response = client.chat.completions.create(
model="llama-3.1-8b-instruct",
messages=[{"role": "user", "content": "Tell me a joke about AI"}]
)
-print(response["choices"][0]["message"]["content"])
+print(response.choices[0].message.content)
```
## LangChain (RAG & LLM applications)
LangChain is a popular library for building AI applications. Scaleway's Generative APIs support LangChain for both inference and embeddings.
+
+  Refer to our dedicated documentation for:
+  - [Implementing Retrieval-Augmented Generation (RAG) with LangChain and Scaleway Generative APIs](/tutorials/how-to-implement-rag-generativeapis/)
+
+
### Configuration
To use LangChain with Scaleway's Generative APIs, you'll need to install the required dependencies:
@@ -168,6 +175,12 @@ To use Continue Dev with Scaleway's Generative APIs, you'll need to modify the `
"provider": "scaleway",
"model": "bge-multilingual-gemma2",
"apiKey": ""
+ },
+ "tabAutocompleteModel": {
+ "model": "qwen2.5-coder-32b",
+ "title": "Qwen2.5 Coder Autocomplete",
+ "provider": "scaleway",
+ "apiKey": ""
}
}
```
@@ -202,7 +215,7 @@ To use Hugging Face for text generation, you can call the `generator` function:
```python
print(generator("Write a short poem about the ocean"))
```
-## API clients and custom integrations
+## Custom HTTP integrations
You can interact with Scaleway's Generative APIs directly using any HTTP client.
@@ -222,9 +235,14 @@ curl https://api.scaleway.ai/v1/chat/completions \
Make sure to replace `` with your actual API key.
-### Python example
+### Python HTTP example
-To use Python with Scaleway's Generative APIs, you can use the following code:
+To send HTTP requests to Scaleway's Generative APIs from Python, you'll need to install the `requests` library:
+```bash
+pip install requests
+```
+
+Then, you can use the following code:
```python
import requests
headers = {
@@ -240,4 +258,4 @@ print(response.json()["choices"][0]["message"]["content"])
```
Make sure to replace `` with your actual API key.
-
\ No newline at end of file
+