@@ -62,14 +62,25 @@ To link Continue with Scaleway’s Generative APIs, you need to configure the se
3. Add the following configuration:
```json
{
  "models": [
    {
      "model": "qwen2.5-coder-32b-instruct",
      "title": "Qwen2.5 Coder",
      "provider": "scaleway",
      "apiKey": "###SCW_SECRET_KEY###"
    }
  ],
  "embeddingsProvider": {
    "model": "bge-multilingual-gemma2",
    "provider": "scaleway",
    "apiKey": "###SCW_SECRET_KEY###"
  },
  "tabAutocompleteModel": {
    "model": "qwen2.5-coder-32b",
    "title": "Qwen2.5 Coder Autocomplete",
    "provider": "scaleway",
    "apiKey": "###SCW_SECRET_KEY###"
  }
}
```
4. Save the file and restart IntelliJ IDEA.
@@ -88,4 +99,19 @@ After configuring the API, activate Continue in IntelliJ IDEA:

<Message type="important">
Enabling tab completion **may lead to higher token consumption** as the model generates predictions for every keystroke. Be mindful of your API usage and adjust settings accordingly to avoid unexpected costs. For more information, refer to the [official Continue documentation](https://docs.continue.dev/reference#tabautocompleteoptions).
</Message>
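
If autocomplete proves too costly, you can tune it through the `tabAutocompleteOptions` block in the same `config.json`. The snippet below is only a sketch: the option names and values shown (`debounceDelay` in milliseconds and `maxPromptTokens`) should be verified against the Continue reference linked above.
```json
{
  "tabAutocompleteOptions": {
    "debounceDelay": 500,
    "maxPromptTokens": 1024
  }
}
```
A longer debounce delay means completions are requested less often as you type, and a lower prompt-token limit reduces the amount of context sent with each request.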

### Going further

You can add additional parameters to configure your model's behaviour by editing `config.json`.
For instance, you can add the following `systemMessage` value, which is sent to the LLM as a `"role":"system"` (or `"role":"developer"`) message, to get less verbose answers:
```json
{
  "models": [
    {
      "model": "...",
      "systemMessage": "You are an expert software developer. You give concise responses."
    }
  ]
}
```
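
Other per-model parameters can be adjusted in the same block. As a sketch (this assumes the `completionOptions` fields documented in the Continue configuration reference; verify the exact names there), you could lower the sampling temperature and cap the response length for the chat model configured earlier:
```json
{
  "models": [
    {
      "model": "qwen2.5-coder-32b-instruct",
      "title": "Qwen2.5 Coder",
      "provider": "scaleway",
      "apiKey": "###SCW_SECRET_KEY###",
      "completionOptions": {
        "temperature": 0.2,
        "maxTokens": 512
      }
    }
  ]
}
```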
@@ -96,4 +96,19 @@ After configuring the API, open VS Code and activate Continue:

<Message type="important">
Enabling tab completion **may lead to higher token consumption** as the model generates predictions for every keystroke. Be mindful of your API usage and adjust settings accordingly to avoid unexpected costs. For more information, refer to the [official Continue documentation](https://docs.continue.dev/reference#tabautocompleteoptions).
</Message>

### Going further

You can add additional parameters to configure your model's behaviour by editing `config.json`.
For instance, you can add the following `systemMessage` value, which is sent to the LLM as a `"role":"system"` (or `"role":"developer"`) message, to get less verbose answers:
```json
{
  "models": [
    {
      "model": "...",
      "systemMessage": "You are an expert software developer. You give concise responses."
    }
  ]
}
```