
Commit 7e901cb

fix(docs): doc review (#5307)
1 parent 4568253 commit 7e901cb

3 files changed: +10 additions, -10 deletions


pages/generative-apis/troubleshooting/fixing-common-issues.mdx

Lines changed: 8 additions & 8 deletions
@@ -3,7 +3,7 @@ title: Fixing common issues with Generative APIs
 description: This page lists common issues that you may encounter while using Scaleway's Generative APIs, their causes and recommended solutions.
 tags: generative-apis ai-data common-issues
 dates:
-  validation: 2025-01-16
+  validation: 2025-07-21
   posted: 2025-01-16
 ---

@@ -32,7 +32,7 @@ Below are common issues that you may encounter when using Generative APIs, their
 - You can store your content in a file with the `.json` extension (eg. named `file.json`), and open it with an IDE such as VSCode or Zed. Syntax errors should display if there are any.
 - You can copy your content in a JSON formatter tool or linter available online, that will identify errors.
 - Usually, most common errors include:
-  - Missing or unecessary quotes `"`, `'` or commas `,` on properties name and string values.
+  - Missing or unnecessary quotes `"`, `'` or commas `,` on property names and string values.
   - Special characters that are not escaped, such as line break `\n` or backslash `\\`

 ## 403: Forbidden - Insufficient permissions to access the resource
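
As a side note on the JSON troubleshooting guidance in the hunk above: Python's built-in `json` module gives the same line-and-column diagnostics as an online linter. A minimal sketch, assuming the request body was saved as `file.json` (the file name used in the example above):

```python
import json

try:
    with open("file.json", encoding="utf-8") as f:
        json.load(f)
    print("Valid JSON")
except json.JSONDecodeError as err:
    # err.lineno and err.colno point at the first offending character,
    # e.g. a missing comma or an unescaped line break.
    print(f"Invalid JSON at line {err.lineno}, column {err.colno}: {err.msg}")
```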
@@ -66,7 +66,7 @@ Below are common issues that you may encounter when using Generative APIs, their
 ## 416: Range Not Satisfiable - max_completion_tokens is limited for this model

 ### Cause
-- You provided `max_completion_tokens` value too high, which is not supported by the model you are using.
+- You provided a value for `max_completion_tokens` which is too high, and not supported by the model you are using.

 ### Solution
 - Remove the `max_completion_tokens` field from your request or client library, or reduce its value below what is [supported by the model](https://www.scaleway.com/en/docs/generative-apis/reference-content/supported-models/).
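
For context on the 416 fix above, a hedged sketch of a request that keeps `max_completion_tokens` low, using the OpenAI-compatible Python client; the base URL, API key, and model name below are placeholders, not values confirmed by this commit:

```python
from openai import OpenAI

# Placeholder endpoint, key, and model; substitute your real values and a model
# listed in the supported-models reference.
client = OpenAI(base_url="https://<generative-apis-endpoint>/v1", api_key="<API_KEY>")

response = client.chat.completions.create(
    model="<model-name>",
    messages=[{"role": "user", "content": "Summarize this ticket in two sentences."}],
    max_completion_tokens=512,  # keep this below the model's documented limit, or omit it
)
print(response.choices[0].message.content)
```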
@@ -80,11 +80,11 @@ Below are common issues that you may encounter when using Generative APIs, their
 ## 429: Too Many Requests - You exceeded your current quota of requests/tokens per minute

 ### Cause
-- You performed too many API requests over a given minute
+- You performed too many API requests within a given minute
 - You consumed too many tokens (input and output) with your API requests over a given minute

 ### Solution
-- Smooth out your API requests rate by limiting the number of API requests you perform over a given minute so that you remain below your [Organization quotas for Generative APIs](/organizations-and-projects/additional-content/organization-quotas/#generative-apis).
+- Smooth out your API requests rate by limiting the number of API requests you perform over a given minute, so that you remain below your [Organization quotas for Generative APIs](/organizations-and-projects/additional-content/organization-quotas/#generative-apis).
 - [Add a payment method](/billing/how-to/add-payment-method/#how-to-add-a-credit-card) and [validate your identity](/account/how-to/verify-identity/) to increase automatically your quotas [based on standard limits](/organizations-and-projects/additional-content/organization-quotas/#generative-apis).
 - Reduce the size of the input or output tokens processed by your API requests.
 - Use [Managed Inference](/managed-inference/), where these quotas do not apply (your throughput will be only limited by the amount of Inference Deployment your provision)
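
To illustrate the "smooth out your API requests rate" advice above, a minimal client-side limiter that spaces calls over a minute; the quota of 60 requests per minute and the `send_request` callable are assumptions made for the sketch:

```python
import time

REQUESTS_PER_MINUTE = 60                    # assumed quota; use your Organization's actual limit
MIN_INTERVAL = 60.0 / REQUESTS_PER_MINUTE
_last_call = 0.0

def throttled(send_request, *args, **kwargs):
    """Sleep just long enough between calls to stay under the per-minute quota."""
    global _last_call
    wait = MIN_INTERVAL - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.monotonic()
    return send_request(*args, **kwargs)
```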
@@ -97,7 +97,7 @@ Below are common issues that you may encounter when using Generative APIs, their

 ### Solution
 - Smooth out your API requests rate by limiting the number of API requests you perform at the same time (eg. requests which did not receive a complete response and are still opened) so that you remain below your [Organization quotas for Generative APIs](/organizations-and-projects/additional-content/organization-quotas/#generative-apis).
-- Use [Managed Inference](/managed-inference/), where concurrent request limit do not apply. Note that exceeding the number of concurrent requests your Inference Deployment can handle may impact performance metrics.
+- Use [Managed Inference](/managed-inference/), where concurrent request limit do not apply. Note that exceeding the number of concurrent requests your Inference deployment can handle may impact performance metrics.


 ## 504: Gateway Timeout
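
Relating to the concurrent-requests guidance in the hunk above (not to the 504 section that follows): a semaphore is the usual way to cap in-flight requests. A minimal asyncio sketch, where `call_api` and the limit of 8 are illustrative assumptions rather than documented values:

```python
import asyncio

MAX_CONCURRENT = 8                      # example value, not a documented quota
semaphore = asyncio.Semaphore(MAX_CONCURRENT)

async def call_api(prompt: str) -> str:
    await asyncio.sleep(0.1)            # stand-in for a real asynchronous API request
    return f"response to {prompt!r}"

async def limited_call(prompt: str) -> str:
    async with semaphore:               # at most MAX_CONCURRENT requests are in flight
        return await call_api(prompt)

async def main():
    prompts = [f"question {i}" for i in range(50)]
    results = await asyncio.gather(*(limited_call(p) for p in prompts))
    print(len(results), "responses received")

asyncio.run(main())
```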
@@ -117,7 +117,7 @@ For queries where the model enters an infinite loop (more frequent when using **
 - Ensure the `top_p` parameter is not set too low (we recommend the default value of `1`).
 - Add a `presence_penalty` value in your request (`0.5` is a good starting value). This option will help the model choose different tokens than the one it is looping on, although it might impact accuracy for some tasks requiring repeated multiple similar outputs.
 - Use more recent models, which are usually more optimized to avoid loops, especially when using structured output.
-- Optimize the system prompt to provide clearer and simpler tasks. Currently, JSON output accuracy still relies on heuristics to constrain models to output only valid JSON tokens, and thus depends on the prompts given. As a counter-example, providing contradictory requirements to a model - such as `Never output JSON` in the system prompt and `response_format` as `json_schema" in the query - may lead to the model never outputting closing JSON brackets `}`.
+- Optimize the system prompt to provide clearer and simpler tasks. Currently, JSON output accuracy still relies on heuristics to constrain models to output only valid JSON tokens, and thus depends on the prompts given. As a counter-example, providing contradictory requirements to a model - such as `Never output JSON` in the system prompt and `response_format` as `json_schema` in the query - may lead to the model never outputting closing JSON brackets `}`.

 ## Structured output (e.g., JSON) is not working correctly

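For the infinite-loop mitigations listed above, a hedged example of setting `top_p` and `presence_penalty` with the OpenAI-compatible Python client; the endpoint and model name are placeholders:

```python
from openai import OpenAI

client = OpenAI(base_url="https://<generative-apis-endpoint>/v1", api_key="<API_KEY>")

response = client.chat.completions.create(
    model="<model-name>",
    messages=[{"role": "user", "content": "List the planets of the Solar System as JSON."}],
    top_p=1,               # keep the default rather than a very low value
    presence_penalty=0.5,  # nudges the model away from tokens it keeps repeating
)
print(response.choices[0].message.content)
```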
@@ -181,7 +181,7 @@ For queries where the model enters an infinite loop (more frequent when using **
 - Counter for **Tokens Processed** or **API Requests** should display a correct value (different from 0)
 - Graph across time should be empty

-## Embeddings vectors cannot be stored in a database or used with a third-party library
+## Embedding vectors cannot be stored in a database or used with a third-party library

 ### Cause
 The embedding model you are using generates vector representations with a fixed dimension number, which is too high for your database or third-party library.
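
As a quick check for the embedding-dimension mismatch described above, you can compare the length of a returned vector with the dimension your database column or index expects; `get_embedding` and the value 1536 below are illustrative assumptions:

```python
def check_dimension(embedding: list[float], expected_dim: int = 1536) -> None:
    """Fail early if the vector will not fit the target column or index."""
    if len(embedding) != expected_dim:
        raise ValueError(
            f"Embedding has {len(embedding)} dimensions, "
            f"but the database or index expects {expected_dim}."
        )

# Example usage (get_embedding is a placeholder for your actual embeddings call):
# check_dimension(get_embedding("hello world"))
```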

pages/load-balancer/concepts.mdx

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: Load Balancers - Concepts
 description: Learn the key concepts of Scaleway Load Balancer - optimize traffic distribution, ensure high availability, and enhance application performance.
 tags: load-balancer load balancer acl backend balancing-rule frontend health-check proxy s3-failover protocol ssl
 dates:
-  validation: 2025-01-13
+  validation: 2025-07-21
 categories:
   - networks
 ---

pages/vpc/reference-content/use-case-basic.mdx

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: VPC use case 1 - Basic infrastructure to leverage VPC isolation
 description: Learn how to set up a basic infrastructure using VPC isolation for secure cloud environments. Step-by-step guidance on leveraging VPCs for optimal network isolation.
 tags: vpc private-network connectivity best-practice use-case infrastructure-diagram
 dates:
-  validation: 2025-01-16
+  validation: 2025-07-21
   posted: 2025-01-16
 categories:
   - network
