Any model served through Scaleway Generative APIs is limited by:
- Tokens per minute
- Queries per minute

Base limits apply once you have registered a valid payment method. These limits are increased automatically if you also verify your identity.

<Message type="tip">
If you created a Scaleway account but did not register a valid payment method, stricter limits apply to keep usage within the Free Tier.
</Message>
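When a request exceeds one of these per-minute limits, the API rejects it (typically with an HTTP 429 status) rather than queueing it. A minimal retry sketch with exponential backoff and jitter, assuming a 429 surfaces as an exception in your client code; `RateLimitError` and `flaky_request` are illustrative stand-ins, not part of any Scaleway SDK:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the error your client raises on HTTP 429."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff and jitter on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Sleep base, 2x base, 4x base, ... plus jitter before retrying.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))

# Example: a fake client call that is rate limited twice, then succeeds.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return "chat completion"

print(with_backoff(flaky_request, base_delay=0.01))  # retries, then succeeds
```

Spacing requests evenly across the minute, rather than bursting, also reduces how often you hit the limit in the first place.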

## How can I increase the rate limits?

We actively monitor usage and will improve rates based on feedback.
If you need to increase your rate limits:
- [Verify your identity](https://www.scaleway.com/en/docs/account/how-to/verify-identity/) to increase your rate limits automatically, as described below.
- [Contact our support team](https://console.scaleway.com/support/create), providing details on the model used and your specific use case, for a further increase.
Note that for volume increases of 5x or 10x, we highly recommend dedicated deployments with [Managed Inference](https://console.scaleway.com/inference/deployments), which provides the same features and full API compatibility.

### Chat models

| Model string | Additional steps required | Requests per minute | Total tokens per minute |
|-----------------|-----------------|-----------------|-----------------|
| `llama-3.1-8b-instruct` | None | 300 | 200K |
| `llama-3.1-70b-instruct` | None | 300 | 200K |
| `llama-3.3-70b-instruct` | None | 300 | 200K |
| `llama-3.3-70b-instruct` | Identity verified | 600 | 400K |
| `mistral-nemo-instruct-2407`| None | 300 | 200K |
| `pixtral-12b-2409`| None | 300 | 200K |
| `qwen2.5-32b-instruct`| None | 300 | 200K |
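Both per-minute limits apply simultaneously: whichever is reached first throttles your traffic. A quick sketch of which limit binds for a given average request size, using the base chat-model figures above (300 requests and 200K tokens per minute); the helper function is illustrative:

```python
def max_requests_per_minute(tokens_per_minute_limit, avg_tokens_per_request,
                            requests_per_minute_limit):
    """Effective request throughput once both limits are accounted for."""
    by_tokens = tokens_per_minute_limit // avg_tokens_per_request
    return min(by_tokens, requests_per_minute_limit)

# At 1,000 tokens per request, the token budget binds first: 200 requests/min.
print(max_requests_per_minute(200_000, 1_000, 300))  # 200

# At 100 tokens per request, the request cap binds instead: 300 requests/min.
print(max_requests_per_minute(200_000, 100, 300))  # 300
```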

### Embedding models
