
Commit 34e91d8

docs(gen): fix typos and harmonizing content (#4891)
* docs(gen): fix typos and harmonizing content
* Update pages/managed-inference/faq.mdx
1 parent 29b9ef9 commit 34e91d8

File tree

54 files changed: +174 additions, −189 deletions


pages/account/how-to/open-a-support-ticket.mdx

Lines changed: 2 additions & 3 deletions
```diff
@@ -52,7 +52,7 @@ Providing a clear subject and description will help us resolve your issue faster
   Example: “The issue occurs when attempting to start an Instance after applying a configuration update in the Scaleway console.”
 
 - **Expected behavior:** explain what you expected to happen.
-  Example: “The instance should start within 2 minutes without errors.”
+  Example: “The Instance should start within 2 minutes without errors.”
 
 - **Actual behavior:** describe what is happening instead.
   Example: “The Instance remains in "Starting" status for over 10 minutes and then switches to "Error".
@@ -71,7 +71,6 @@ Examples:
 - Screenshot of the network tab of your browser’s Developer Tools (right-click anywhere on the page and select **Inspect**. Go to the **Network tab** in the Developer Tools panel.)
 - Logs
 
-
 <Message type="important">
   If you have lost access to the Scaleway console and want to create a ticket, you must first [follow this procedure](/account/how-to/use-2fa/#how-to-regain-access-to-your-account) to regain access to your account.
-</Message>
+</Message>
```

pages/audit-trail/quickstart.mdx

Lines changed: 1 addition & 4 deletions
```diff
@@ -39,7 +39,4 @@ Refer to the [dedicated documentation page](/audit-trail/how-to/configure-audit-
 
 <Message type="tip">
   If no events display after you use the filter, try switching the region from the **Region** drop-down, or adjusting your search. Find out how to troubleshoot event issues in our [dedicated documentation](/audit-trail/troubleshooting/cannot-see-events/).
-</Message>
-
-
-
+</Message>
```

pages/cockpit/how-to/configure-alerts-for-scw-resources.mdx

Lines changed: 1 addition & 1 deletion
```diff
@@ -52,5 +52,5 @@ This page shows you how to configure alerts for Scaleway resources in Grafana us
 </Message>
 
 <Message type="tip">
-  Find out how to send Cockpit's alert notifications to Slack using a webkook URL in our [dedicated documentation](/tutorials/configure-slack-alerting/).
+  Find out how to send Cockpit's alert notifications to Slack using a webhook URL in our [dedicated documentation](/tutorials/configure-slack-alerting/).
 </Message>
```

pages/cockpit/index.mdx

Lines changed: 1 addition & 1 deletion
```diff
@@ -52,7 +52,7 @@ meta:
 <Grid>
 
 <DefaultCard
-  title="Sending Cockpit's alert notifications to Slack using a webkook URL"
+  title="Sending Cockpit's alert notifications to Slack using a webhook URL"
   url="/tutorials/configure-slack-alerting/"
   label="Read more"
 />
```

pages/environmental-footprint/how-to/track-monthly-footprint.mdx

Lines changed: 1 addition & 1 deletion
```diff
@@ -42,7 +42,7 @@ categories:
   For a detailed description of how the water consumption is calculated, refer to the [Water Consumption section](/environmental-footprint/additional-content/environmental-footprint-calculator/#water-consumption) of the Environmental Footprint calculation breakdown documentation page.
 </Message>
 - **5.** The total water consumption and carbon footprint of each of your Projects.
-- **6.** The total water consumption and carbon footprint per geographical location (Region and Availability Zone)
+- **6.** The total water consumption and carbon footprint per geographical location (region and Availability Zone)
 - **7.** The total water consumption and carbon footprint of each of your products.
 
 For both the carbon emissions, and the water consumption, the power consumption of your active resources is used in the calculation. The way you use your resources has a direct impact on power consumption. Therefore, results may vary greatly from one month to another.
```

pages/generative-apis/troubleshooting/fixing-common-issues.mdx

Lines changed: 17 additions & 17 deletions
```diff
@@ -16,15 +16,15 @@ Below are common issues that you may encounter when using Generative APIs, their
 ## 400: Bad Request - You exceeded maximum context window for this model
 
 ### Cause
-- You provided an input exceeding the maximum context window (also known as context length) for the model you are using.
-- You provided a long input and requested a long input (in `max_completion_tokens` field), which added together, exceed the maximum context window of the model you are using.
+- You provided an input exceeding the maximum context window (also known as context length) for the model you are using.
+- You provided a long input and requested a long input (in `max_completion_tokens` field), which added together, exceeds the maximum context window of the model you are using.
 
 ### Solution
-- Reduce your input size below what is [supported by the model](/generative-apis/reference-content/supported-models/).
+- Reduce your input size below what is [supported by the model](/generative-apis/reference-content/supported-models/).
 - Use a model supporting longer context window values.
 - Use [Managed Inference](/managed-inference/), where the context window can be increased for [several configurations with additional GPU vRAM](/managed-inference/reference-content/supported-models/). For instance, `llama-3.3-70b-instruct` model in `fp8` quantization can be served with:
-  - `15k` tokens context window on `H100` instances
-  - `128k` tokens context window on `H100-2` instances.
+  - `15k` tokens context window on `H100` Instances
+  - `128k` tokens context window on `H100-2` Instances
 
 ## 403: Forbidden - Insufficient permissions to access the resource
 
```
````diff
@@ -46,7 +46,7 @@ Below are common issues that you may encounter when using Generative APIs, their
 - You provided a value for `max_completion_tokens` that is too high and not supported by the model you are using.
 
 ### Solution
-- Remove `max_completion_tokens` field from your request or client library, or reduce its value below what is [supported by the model](https://www.scaleway.com/en/docs/generative-apis/reference-content/supported-models/).
+- Remove `max_completion_tokens` field from your request or client library, or reduce its value below what is [supported by the model](https://www.scaleway.com/en/docs/generative-apis/reference-content/supported-models/).
 - As an example, when using the [init_chat_model from Langchain](https://python.langchain.com/api_reference/_modules/langchain/chat_models/base.html#init_chat_model), you should edit the `max_tokens` value in the following configuration:
 ```python
 llm = init_chat_model("llama-3.3-70b-instruct", max_tokens="8000", model_provider="openai", base_url="https://api.scaleway.ai/v1", temperature=0.7)
````
````diff
@@ -57,16 +57,16 @@ Below are common issues that you may encounter when using Generative APIs, their
 ## 416: Range Not Satisfiable - max_completion_tokens is limited for this model
 
 ### Cause
-- You provided `max_completion_tokens` value too high, that is not supported by the model you are using.
+- You provided `max_completion_tokens` value too high, which is not supported by the model you are using.
 
 ### Solution
-- Remove the `max_completion_tokens` field from your request or client library, or reduce its value below what is [supported by the model](https://www.scaleway.com/en/docs/generative-apis/reference-content/supported-models/).
+- Remove the `max_completion_tokens` field from your request or client library, or reduce its value below what is [supported by the model](https://www.scaleway.com/en/docs/generative-apis/reference-content/supported-models/).
 - As an example, when using the [init_chat_model from Langchain](https://python.langchain.com/api_reference/_modules/langchain/chat_models/base.html#init_chat_model), you should edit the `max_tokens` value in the following configuration:
 ```python
 llm = init_chat_model("llama-3.3-70b-instruct", max_tokens="8000", model_provider="openai", base_url="https://api.scaleway.ai/v1", temperature=0.7)
 ```
 - Use a model supporting a higher `max_completion_tokens` value.
-- Use [Managed Inference](/managed-inference/), where these limits on completion tokens do not apply (your completion tokens amount will still be limited by the maximum context window supported by the model).
+- Use [Managed Inference](/managed-inference/), where these limits on completion tokens do not apply (your completion tokens amount will still be limited by the maximum context window supported by the model).
 
 ## 429: Too Many Requests - You exceeded your current quota of requests/tokens per minute
 
````
```diff
@@ -79,15 +79,15 @@ Below are common issues that you may encounter when using Generative APIs, their
 - [Add a payment method](/billing/how-to/add-payment-method/#how-to-add-a-credit-card) and [validate your identity](/account/how-to/verify-identity/) to increase automatically your quotas [based on standard limits](/organizations-and-projects/additional-content/organization-quotas/#generative-apis).
 - [Ask our support](https://console.scaleway.com/support/tickets/create) to raise your quota.
 - Reduce the size of the input or output tokens processed by your API requests.
-- Use [Managed Inference](/managed-inference/), where these quota do not apply (your throughput will be only limited by the amount of Inference Deployment your provision)
+- Use [Managed Inference](/managed-inference/), where these quotas do not apply (your throughput will be only limited by the amount of Inference Deployment your provision)
 
 ## 429: Too Many Requests - You exceeded your current threshold of concurrent requests
 
 ### Cause
 - You kept too many API requests opened at the same time (number of HTTP sessions opened in parallel)
 
 ### Solution
-- Smooth out your API requests rate by limiting the number of API requests you perform at the same time (eg. requests which did not receive a complete response and are still opened) so that you remain below your [organization quotas for Generative APIs](/organizations-and-projects/additional-content/organization-quotas/#generative-apis).
+- Smooth out your API requests rate by limiting the number of API requests you perform at the same time (eg. requests which did not receive a complete response and are still opened) so that you remain below your [Organization quotas for Generative APIs](/organizations-and-projects/additional-content/organization-quotas/#generative-apis).
 - Use [Managed Inference](/managed-inference/), where concurrent request limit do not apply. Note that exceeding the number of concurrent requests your Inference Deployment can handle may impact performance metrics.
 
 
```
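The "smooth out your API requests rate" advice in the hunk above can be sketched with a semaphore that caps how many requests are open at once. This is a hypothetical helper, not Scaleway code; `call_model` is a stub standing in for a real chat-completion call:

```python
import asyncio

async def call_model(prompt: str) -> str:
    # Stand-in for a real chat-completion request; replace with your client call.
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"

async def run_with_limit(prompts, max_concurrent=4):
    # Cap how many requests are open at the same time, so the number of
    # parallel HTTP sessions stays below the concurrent-request threshold.
    sem = asyncio.Semaphore(max_concurrent)

    async def limited(prompt):
        async with sem:
            return await call_model(prompt)

    # gather() preserves input order in its results.
    return await asyncio.gather(*(limited(p) for p in prompts))

results = asyncio.run(run_with_limit([f"q{i}" for i in range(10)]))
print(len(results))  # 10
```

Tune `max_concurrent` to stay under your quota; the same pattern works with any async HTTP or OpenAI-compatible client.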
````diff
@@ -162,15 +162,15 @@ Below are common issues that you may encounter when using Generative APIs, their
 - Counter for **Tokens Processed** or **API Requests** should display a correct value (different from 0)
 - Graph across time should be empty
 
-## Embeddings vectors cannot be stored in database or used with a third-party library
+## Embeddings vectors cannot be stored in a database or used with a third-party library
 
 ### Cause
 The embedding model you are using generates vector representations with a fixed dimension number, which is too high for your database or third-party library.
 - For example, the embedding model `bge-multilingual-gemma2` generates vector representations with `3584` dimensions. However, when storing vectors using PostgreSQL `pgvector` extensions, indexes (in `hnsw` or `ivvflat` formats) only support up to `2000` dimensions.
 
 ### Solution
-- Use a vector store supporting higher dimensions number, such as [Qdrant](https://www.scaleway.com/en/docs/tutorials/deploying-qdrant-vectordb-kubernetes/).
-- Do not use indexes for vectors or disable them from your third-party library. This may limit performance in vector similarity search for significant volumes.
+- Use a vector store supporting higher dimensions numbers, such as [Qdrant](https://www.scaleway.com/en/docs/tutorials/deploying-qdrant-vectordb-kubernetes/).
+- Do not use indexes for vectors or disable them from your third-party library. This may limit performance in vector similarity search for significant volumes.
 - When using [Langchain PGVector method](https://python.langchain.com/docs/integrations/vectorstores/pgvector/), this method does not create an index by default and should not raise errors.
 - When using the [Mastra](https://mastra.ai/) library with `vectorStoreName: "pgvector"`, specify indexConfig type as `flat` to avoid creating any index on vector dimensions.
 ```typescript
````
````diff
@@ -180,7 +180,7 @@ The embedding model you are using generates vector representations with a fixed
   indexConfig: {"type":"flat"},
 });
 ```
-- Use a model with a lower number of dimensions. Using [Managed Inference](https://console.scaleway.com/inference/deployments), you can deploy for instance the`sentence-t5-xxl` model, which represents vectors with `768` dimensions.
+- Use a model with a lower number of dimensions. Using [Managed Inference](https://console.scaleway.com/inference/deployments), you can deploy for instance the`sentence-t5-xxl` model, which represents vectors with `768` dimensions.
 
 ## Previous messages are not taken into account by the model
 
````
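The dimension limits behind these embedding hunks can be made concrete with a small check. A sketch only: the constant reflects the `2000`-dimension `hnsw`/`ivfflat` index limit cited above, and the function name is invented for illustration:

```python
PGVECTOR_INDEX_MAX_DIMS = 2000  # hnsw/ivfflat index limit mentioned in the diff above

def fits_pgvector_index(dims: int) -> bool:
    """Return True if an hnsw/ivfflat index can be built for this dimension count."""
    return dims <= PGVECTOR_INDEX_MAX_DIMS

print(fits_pgvector_index(3584))  # bge-multilingual-gemma2 -> False
print(fits_pgvector_index(768))   # sentence-t5-xxl -> True
```

When the check fails, the options are exactly those listed in the diff: a higher-dimension vector store, no index (or a `flat` index), or a lower-dimension model.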
````diff
@@ -219,7 +219,7 @@ response = client.chat.completions.create(
 print(response.choices[0].message.content)
 ```
 This snippet will output the model response, which is `4`.
-- When exceeding maximum context window, you should receive a `400 - BadRequestError` detailing context length value you exceeded. In this case, you should reduce the size of the content you send to the API.
+- When exceeding the maximum context window, you should receive a `400 - BadRequestError` detailing the context length value you exceeded. In this case, you should reduce the size of the content you send to the API.
 
 ## Best practices for optimizing model performance
 
````
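The point of this "previous messages" section — the chat API is stateless, so earlier turns must be re-sent in full with each request — can be sketched as follows. The `with_new_turn` helper is illustrative; the resulting list is what would be passed as `messages` to `client.chat.completions.create`:

```python
# Conversation history is kept client-side and re-sent in full on each call.
history = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4"},
]

def with_new_turn(history: list, content: str) -> list:
    # Append the next user turn without mutating the stored history.
    return history + [{"role": "user", "content": content}]

messages = with_new_turn(history, "Multiply the previous result by 3.")
print(len(messages))  # 3
```

Omitting the earlier turns from `messages` is exactly what makes the model appear to "forget" previous answers.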
```diff
@@ -234,4 +234,4 @@ This snippet will output the model response, which is `4`.
 ### Debugging silent errors
 - For cases where no explicit error is returned:
   - Verify all fields in the API request are correctly named and formatted.
-  - Test the request with smaller and simpler inputs to isolate potential issues.
+  - Test the request with smaller and simpler inputs to isolate potential issues.
```

pages/gpu/how-to/use-nvidia-mig-technology.mdx

Lines changed: 3 additions & 3 deletions
```diff
@@ -15,7 +15,7 @@ categories:
 
 <Message type="note">
   * Scaleway offers MIG-compatible GPU Instances such as H100 PCIe GPU Instances
-  * NVIDIA uses the term *GPU instance* to designate a MIG partition of a GPU (MIG= Multi-Instance GPU)
+  * NVIDIA uses the term *GPU instance* to designate an MIG partition of a GPU (MIG= Multi-Instance GPU)
   * To avoid confusion, we will use the term GPU Instance in this document to designate the Scaleway GPU Instance, and *MIG partition* in the context of the MIG feature.
 </Message>
 
@@ -151,10 +151,10 @@ Refer to the official documentation for more information about the supported [MI
 * `-cgi 9,19,19,19`: this flag specifies the MIG partition configuration. The numbers following the flag represent the MIG partitions for each of the four MIG device slices. In this case, there are four slices with configurations 9, 19, 19, and 19 compute instances each. These numbers correspond to the profile IDs retrieved previously. Note that you can use either of the following:
   * Profile ID (e.g. 9, 14, 5)
   * Short name of the profile (e.g. `3g.40gb`)
-  * Full profile name of the instance (e.g. `MIG 3g.40gb`)
+  * Full profile name of the Instance (e.g. `MIG 3g.40gb`)
 * `-C`: this flag automatically creates the corresponding compute instances for the MIG partitions.
 
-The command instructs the `nvidia-smi` tool to set up a MIG configuration where the GPU is divided into four slices, each containing different numbers of MIG partition configurations as specified: an MIG 3g.40gb (Profile ID 9) for the first slice, and an MIG 1g.10gb (Profile ID 19) for each of the remaining three slices.
+The command instructs the `nvidia-smi` tool to set up an MIG configuration where the GPU is divided into four slices, each containing different numbers of MIG partition configurations as specified: an MIG 3g.40gb (Profile ID 9) for the first slice, and an MIG 1g.10gb (Profile ID 19) for each of the remaining three slices.
 
 <Message type="note">
   - Running CUDA workloads on the GPU requires the creation of MIG partitions along with their corresponding compute instances. Just enabling MIG mode on the GPU is not enough to achieve this.
```

pages/gpu/reference-content/gpu-instances-bandwidth-overview.mdx

Lines changed: 1 addition & 2 deletions
```diff
@@ -13,9 +13,8 @@ categories:
 - compute
 ---
 
-
 Scaleway GPU Instances are designed to deliver **high-performance computing** for AI/ML workloads, rendering, scientific simulations, and visualization tasks.
-This guide provides a detailed overview of their **internet and Block Storage bandwidth capabilities** to help you choose the right instance for your GPU-powered workloads.
+This guide provides a detailed overview of their **internet and Block Storage bandwidth capabilities** to help you choose the right Instance for your GPU-powered workloads.
 
 ### Why bandwidth matters for GPU Instances
 
```

pages/instances/api-cli/using-cloud-init.mdx

Lines changed: 2 additions & 4 deletions
```diff
@@ -30,7 +30,7 @@ Cloud-config files are special scripts designed to be run by the cloud-init proc
 
 You can give provisioning instructions to cloud-init using the `cloud-init` key of the `user_data` facility.
 
-For `user_data` to be effective, it has to be added prior to the creation of the instance since `cloud-init` gets activated early in the first phases of the boot process.
+For `user_data` to be effective, it has to be added prior to the creation of the Instance since `cloud-init` gets activated early in the first phases of the boot process.
 
 * **Server ID** refers to the unique identification string of your server. It will be displayed when you create your server. You can also recover it from the list of your servers, by typing `scw instance server list`.
 
```

`````diff
@@ -88,6 +88,4 @@ Subcommands:
 
 ````
 
-For detailed information on cloud-init, refer to the official cloud-init [documentation](http://cloudinit.readthedocs.io/en/latest/index.html).
-
-
+For detailed information on cloud-init, refer to the official cloud-init [documentation](http://cloudinit.readthedocs.io/en/latest/index.html).
`````

pages/instances/api-cli/using-routed-ips.mdx

Lines changed: 2 additions & 4 deletions
````diff
@@ -491,7 +491,7 @@ Then you can create a new Instance using those IPs through the `public_ips` fiel
 ❯ http post $API_URL/servers $HEADERS <payloads/server-data.json
 ```
 <Message type="tip">
-  In order to create Instance you have to add `"routed_ip_enabled": true` to your payload.
+  To create an Instance, you must add `"routed_ip_enabled": true` to your payload.
 </Message>
 </TabsTab>
 <TabsTab label="Response">
````

````diff
@@ -648,7 +648,7 @@ You can use a specific server action to move an existing (legacy network) Instan
 ❯ http post $API_URL/servers/$SERVER_ID/action $HEADERS action=enable_routed_ip
 ```
 <Message type="note">
-  Your instance *will* reboot during this action.
+  Your Instance *will* reboot during this action.
 </Message>
 </TabsTab>
 <TabsTab label="Response">
````

````diff
@@ -1002,5 +1002,3 @@ You can verify if your Instance is enabled for routed IPs through the `/servers`
 ```
 </TabsTab>
 </Tabs>
-
-
````

0 commit comments
