
Commit f2b11c7

docs(ins): rework naming (#5425)
* docs(ins): rework naming
* docs(ins): update content
* feat(ins): ins renaming
* docs(ins): update
* fix(ins): fix wording
* feat(ins): update content
* fix(ins): fix link
* docs(ins): update wording
* Apply suggestions from code review
  Co-authored-by: Jessica <[email protected]>
* Apply suggestions from code review
* Apply suggestions from code review
* feat(ins): update table
* Apply suggestions from code review
  Co-authored-by: Jessica <[email protected]>
* Apply suggestions from code review
  Co-authored-by: Jessica <[email protected]>
* Update pages/instances/reference-content/choosing-instance-type.mdx
  Co-authored-by: Jessica <[email protected]>
* docs(ins): update wording
* docs(ins): update docs
* fix(ins): fix typo
* docs(ins): add slo
* fix(ins): typo
* feat(ins): update wording
* docs(ins): fix typo
* docs(ins): replace links
* docs(ins): update links
* feat(ins): update
* fix(ins): fix link

---------

Co-authored-by: Jessica <[email protected]>
1 parent 2e915d2 · commit f2b11c7


48 files changed, +237 / -317 lines changed


macros/compute/instances.mdx

Lines changed: 1 addition & 1 deletion
@@ -2,4 +2,4 @@
  macro: compute-instances
  ---

- An Instance is a computing unit, either virtual or physical, that provides resources to run your applications on. Currently Scaleway offers the following Instance types: [General Purpose](/instances/reference-content/cost-optimized/#development-instances-and-general-purpose-instances), [Development](/instances/reference-content/cost-optimized/#development-instances-and-general-purpose-instances), [GPU](/instances/concepts/#gpu-instance), [Stardust](/instances/reference-content/learning/#learning-range-stardust-instances) and [Enterprise](/instances/reference-content/production-optimized/#production-optimized-range-ent1-instances).
+ An Instance is a computing unit, either virtual or physical, that provides resources to run your applications on. Currently Scaleway offers the following Instance types: [General Purpose](/instances/reference-content/general-purpose/), [Development](/instances/reference-content/development/), [GPU](/instances/concepts/#gpu-instance), and [Specialized](/instances/reference-content/specialized/).

menu/navigation.json

Lines changed: 4 additions & 8 deletions
@@ -1610,21 +1610,17 @@
      "label": "Instances internet and Block Storage bandwidth overview",
      "slug": "instances-bandwidth-overview"
    },
-   {
-     "label": "The right Instance for learning purposes",
-     "slug": "learning"
-   },
    {
      "label": "The right Instance for development purposes",
-     "slug": "cost-optimized"
+     "slug": "development"
    },
    {
      "label": "The right Instance for production purposes",
-     "slug": "production-optimized"
+     "slug": "general-purpose"
    },
    {
-     "label": "The right Instance for workload purposes",
-     "slug": "workload-optimized"
+     "label": "The right Instance for specialized purposes",
+     "slug": "specialized"
    },
    {
      "label": "Instance OS images and InstantApps",

pages/billing/additional-content/understanding-savings-plans.mdx

Lines changed: 4 additions & 4 deletions
@@ -116,10 +116,10 @@ There is currently one available savings plan type: the Compute savings plan.

  The **Compute savings plan** can be used with the following resources, simultaneously and across all regions:

- - Instances
-   - Cost-Optimized (DEV1, GP1, PLAY2, PRO2)
-   - Production-Optimized (ENT1, POP2)
-   - Workload-Optmized (POP2 HC, POP2-HM, POP2-HN)
+ - Instances
+   - Development
+   - Geneal Purpose
+   - Specialized

  The following resources are **not** covered by the savings plan discount:

pages/gpu/how-to/use-gpu-with-docker.mdx

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ Docker is a platform as a service (PaaS) tool that uses OS-level virtualization

  Unlike virtual machines, containers share the services of a single operating system kernel. This reduces unnecessary overhead and makes them lightweight and portable. Docker containers can run on any computer running macOS, Windows, or Linux, either on-premises or in a public cloud environment, such as Scaleway.

- All [Scaleway GPU Instances](https://www.scaleway.com/en/gpu-instances/) come with prebuilt Docker images which can be launched as soon as you connect to your Instance. Each image provides a different AI environment. When you launch one of these images, you are in your chosen environment within seconds with all your favorite Python packages already installed. Using Docker for your AI projects in this way allows you to ensure that your working environments are both **isolated** and **portable**, since they are in containers that can be easily transferred between machines.
+ All [Scaleway GPU Instances](/gpu/reference-content/choosing-gpu-instance-type/) come with prebuilt Docker images which can be launched as soon as you connect to your Instance. Each image provides a different AI environment. When you launch one of these images, you are in your chosen environment within seconds with all your favorite Python packages already installed. Using Docker for your AI projects in this way allows you to ensure that your working environments are both **isolated** and **portable**, since they are in containers that can be easily transferred between machines.

  You can also run Docker images provided by other sources and use them with your GPU Instance - for instance, you might want to use Docker images provided by NVIDIA, Google, etc. Alternatively, you could also choose to build your own Docker images.
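As context for the page changed above: once the NVIDIA drivers and container toolkit are present, a container is given access to the GPU with Docker's `--gpus` flag. A minimal sketch, assuming the public `nvidia/cuda` image from Docker Hub rather than one of Scaleway's prebuilt AI images (the image tag is illustrative, not taken from the docs):

    # Run a throwaway container with access to all GPUs and print the GPU inventory
    docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

Any of the prebuilt images mentioned in the page can be started the same way, with the image name swapped in.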

pages/gpu/how-to/use-mig-with-kubernetes.mdx

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
  <Requirements />

  - A Scaleway account logged into the [console](https://console.scaleway.com)
- - A [Kubernetes cluster](/kubernetes/quickstart/#how-to-create-a-kubernetes-cluster) with a [GPU Instance](https://www.scaleway.com/en/gpu-instances/) as node
+ - A [Kubernetes cluster](/kubernetes/quickstart/#how-to-create-a-kubernetes-cluster) with a [GPU Instance](/gpu/reference-content/choosing-gpu-instance-type/) as node

  <Message type="tip">
    MIG is fully supported on [Scaleway managed Kubernetes](/kubernetes/quickstart/) clusters (Kapsule and Kosmos).
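
For readers of the MIG guide touched above: once MIG slices are exposed to the cluster by the NVIDIA device plugin, a pod requests a slice through its resource limits. A minimal sketch, assuming the plugin runs with the `mixed` MIG strategy (which advertises resources such as `nvidia.com/mig-1g.10gb`); with the `single` strategy the generic `nvidia.com/gpu` resource is used instead. The pod name and image are hypothetical:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: mig-demo            # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: cuda
        image: nvidia/cuda:12.2.0-base-ubuntu22.04
        command: ["nvidia-smi"]  # prints the MIG device granted to the pod
        resources:
          limits:
            nvidia.com/mig-1g.10gb: 1
    EOF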

pages/gpu/reference-content/choosing-gpu-instance-type.mdx

Lines changed: 3 additions & 3 deletions
@@ -16,7 +16,7 @@ It empowers European AI startups, giving them the tools (without the need for a

  ## How to choose the right GPU Instance type

- Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](https://www.scaleway.com/en/gpu-instances/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
+ Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
  Below, you will find a guide to help you make an informed decision:

  * **Workload requirements:** Identify the nature of your workload. Are you running machine learning, deep learning, high-performance computing (HPC), data analytics, or graphics-intensive applications? Different Instance types are optimized for different types of workloads. For example, the H100 is not designed for graphics rendering. However, other models are. As [stated by Tim Dettmers](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/), “Tensor Cores are most important, followed by the memory bandwidth of a GPU, the cache hierarchy, and only then FLOPS of a GPU.”. For more information, refer to the [NVIDIA GPU portfolio](https://docs.nvidia.com/data-center-gpu/line-card.pdf).

@@ -28,7 +28,7 @@ Below, you will find a guide to help you make an informed decision:
  * **Scaling:** Consider the scalability requirements of your workload. The most efficient way to scale up your workload is by using:
    * Bigger GPU
    * Up to 2 PCIe GPU with [H100 Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) or 8 PCIe GPU with [L4](https://www.scaleway.com/en/l4-gpu-instance/) or [L4OS](https://www.scaleway.com/en/contact-l40s/) Instances.
-   * Or better, an HGX-based server setup with up to 8x NVlink GPUs with [H100-SXM Instances](https://www.scaleway.com/en/gpu-instances/)
+   * Or better, an HGX-based server setup with up to 8x NVlink GPUs with [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/)
    * A [supercomputer architecture](https://www.scaleway.com/en/ai-supercomputers/) for a larger setup for workload-intensive tasks
    * Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 or H100-SXM GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods.
  * **Online resources:** Check for online resources, forums, and community discussions related to the specific GPU type you are considering. This can provide insights into common issues, best practices, and optimizations.

@@ -109,7 +109,7 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
  | What they are not made for | | | | |

  <Message type="note">
-   The service level objective (SLO) for all GPU Instance types (except H100-SXM) is 99.5% availability. [Read the SLA](https://www.scaleway.com/en/virtual-instances/sla/)
+   The service level objective (SLO) for all GPU Instance types (except H100-SXM) is 99.5% availability. [Read the SLA](https://www.scaleway.com/en/virtual-instances/sla/).
  </Message>

  ### Scaleway AI Supercomputer
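
Relating to the MIG point in the hunk above (splitting one H100 into up to 7 partitions): the partitions themselves are created on the Instance with `nvidia-smi`. A rough sketch, with the caveat that profile names and the need for a GPU reset vary by GPU model and driver version:

    # Enable MIG mode on GPU 0 (may require stopping GPU workloads or resetting the GPU)
    sudo nvidia-smi -i 0 -mig 1
    # List the MIG GPU-instance profiles this card supports
    sudo nvidia-smi mig -lgip
    # Create seven 1g.10gb GPU instances and their compute instances (H100 80 GB profile name)
    sudo nvidia-smi mig -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C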

pages/instances/concepts.mdx

Lines changed: 14 additions & 20 deletions
@@ -13,6 +13,10 @@ import Region from '@macros/concepts/region.mdx'
  import Volumes from '@macros/concepts/volumes.mdx'
  import StorageBootOnBlock from '@macros/storage/boot-on-block.mdx'

+ ## ARM Instances
+
+ [ARM Instances](/instances/reference-content/understanding-differences-x86-arm/) are cost-effective and energy-efficient Instances powered by Ampere Altra processors, optimized for AI innovation, real-time applications, and sustainable cloud computing.
+

  ## Availability Zone

@@ -30,13 +34,9 @@ import StorageBootOnBlock from '@macros/storage/boot-on-block.mdx'

  Cloud-init is a multi-distribution package that [provides boot time customization for cloud servers](/instances/how-to/use-boot-modes/#how-to-use-cloud-init). It enables an automatic Instance configuration as it boots into the cloud, turning a generic Ubuntu image into a configured server in a few seconds.

- ## Cost-Optimized Instances
-
- [Cost-Optimized Instances](https://www.scaleway.com/en/cost-optimized-instances/) are production-grade [Instances](#instance) designed for scalable infrastructures. Cost-Optimized Instances support the boot-on-block feature and allow you to launch high-performance services with high-end CPUs.
-
  ## Development Instance

- [Development Instances](https://www.scaleway.com/en/cost-optimized-instances/) are reliable and flexible Instances tuned to host your websites, applications, and development environments.
+ [Development Instances](/instances/reference-content/development/) are reliable and flexible Instances tuned to host your websites, applications, and development environments.

  ## Dynamic IP

@@ -48,9 +48,13 @@ You can choose to give your Instance a dynamic IP address when creating or updat

  Flexible IP addresses are public IP addresses that you can hold independently of any Instance. When you create a Scaleway Instance, by default, its public IP address is also a flexible IP address. Flexible IP addresses can be attached to and detached from any Instances you wish. You can keep a number of flexible IP addresses in your account at any given time. When you delete a flexible IP address, it is disassociated from your account to be used by other users. Find out more with our dedicated documentation on [how to use flexible IP addresses](/instances/how-to/use-flexips/). See also [Dynamic IPs](#dynamic-ip).

+ ## General Purpose Instances
+
+ [General Purpose Instances](/instances/reference-content/general-purpose/) are production-grade [Instances](#instance) designed for scalable infrastructures. Development Instances support the boot-on-block feature and allow you to launch high-performance services with high-end CPUs.
+
  ## GPU Instance

- [GPU Instances](https://www.scaleway.com/en/gpu-instances/) are powerful Instances equipped with dedicated high-end Nvidia graphics processing units. See our [dedicated GPU documentation](/gpu/) for more details.
+ [GPU Instances](/gpu/reference-content/choosing-gpu-instance-type/) are powerful Instances equipped with dedicated high-end Nvidia graphics processing units. See our [dedicated GPU documentation](/gpu/) for more details.

  ## Image

@@ -64,10 +68,6 @@ An Instance is a virtual computing unit that offers resources for running applic

  An InstantApp is an image with a preinstalled application. By choosing an InstantApp when prompted to select an image during the [creation of your Instance](/instances/how-to/create-an-instance/), you choose to install the specified application on your Instance. You can then start using the application immediately.

- ## Learning Instance
-
- [Learning Instances](https://www.scaleway.com/en/stardust-instances/) are the perfect Instances for small workloads and simple applications. You can create up to one Instance per Availability Zone (available in FR-PAR-1 and NL-AMS-1).
-
  ## Local volumes

  <LocalVolumes />

@@ -76,16 +76,6 @@ An InstantApp is an image with a preinstalled application. By choosing an Instan

  Placement groups allow you to run multiple Compute Instances, each on a different physical hypervisor. Placement groups have two operating modes. The first one is called `max_availability`. It ensures that all the Compute Instances that belong to the same cluster will not run on the same underlying hardware. The second one is called `low_latency` and does the opposite, bringing Compute Instances closer together to achieve higher network throughput. [Learn how to use placement groups](/instances/how-to/use-placement-groups/).

- ## Production-Optimized Instances
-
- [Production-Optimized Instances](https://www.scaleway.com/en/production-optimized-instances/) (aka POP2) are compute resources with dedicated resources (RAM and vCPUs). Designed for demanding applications, high-traffic databases, and production workloads.
-
- Three variants of POP2 Instances are available:
- * **POP2**: Production-Optimized Instances with Block Storage.
- * **POP2-HC**: Workload-Optimized Instances, providing a ratio of vCPU:RAM of 1:8.
- * **POP2-HM**: Workload-Optimized Instances, providing a ratio of vCPU:RAM of 1:2.
- * **POP2-HN**: Workload-Optimized Instances, providing a ratio of vCPU:RAM of 1:2 and up to 10 Gbps bandwidth.
-
  ## Power-off mode

  The Power-off mode [shuts down an Instance](/instances/how-to/power-off-instance/) by transferring all data on the local volume of the Instance to a volume store. The physical node is released back to the pool of available machines. The reserved flexible IP of the Instance remains available in the account.

@@ -159,3 +149,7 @@ Tags allow you to organize, sort, filter, and monitor your cloud resources using
  ## Volumes

  <Volumes />
+
+ ## x86 (Intel/AMD) Instances
+
+ [x86 (Intel/AMD) Instances](/instances/reference-content/understanding-differences-x86-arm/) are reliable and high-performance Instances powered by AMD EPYC processors, tailored for development, testing, production workloads, and general-purpose applications.
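
The cloud-init entry in the concepts above describes a mechanism rather than a product range: user data follows the standard `#cloud-config` format and is passed to the Instance at creation time. A generic, hypothetical sketch (the file name and package choice are illustrative, not taken from the Scaleway docs):

    # Write a minimal user-data file that installs and starts nginx on first boot
    cat > user-data.yaml <<'EOF'
    #cloud-config
    package_update: true
    packages:
      - nginx
    runcmd:
      - systemctl enable --now nginx
    EOF
    # Optional: validate it on a machine with a recent cloud-init installed
    cloud-init schema --config-file user-data.yaml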

pages/instances/faq.mdx

Lines changed: 5 additions & 18 deletions
@@ -59,13 +59,13 @@ You can change the storage type and flexible IP after the Instance creation, whi
  * PAR3 prices are shown separately.
  </Message>

- **Learning Instances**
+ **Development Instances**

  | Range | Price for all regions* | Price for PAR3 |
  |-------------------|------------------------|-------------------|
  | STARDUST1-S | €0.0046/hour | Not available |

- **Cost-Optimized Instances**
+ **Development Instances**

  | Range | Price for all regions* | Price for PAR3 |
  |-------------------|------------------------|-------------------|

@@ -92,7 +92,7 @@ You can change the storage type and flexible IP after the Instance creation, whi
  | DEV1-L | €0.0495/hour | Not available |
  | DEV1-XL | €0.0731/hour | Not available |

- **Production-Optimized Instances**
+ **General Purpose Instances**

  | Range | Price for all regions* | Price for PAR3 |
  |-------------------|------------------------|-------------------|

@@ -111,7 +111,7 @@ You can change the storage type and flexible IP after the Instance creation, whi
  | ENT1-XL | €2.35/hour | €3.53/hour |
  | ENT1-2XL | €3.53/hour | €5.29/hour |

- **Production-Optimized Instances with Windows Server operating system**
+ **General Purpose Instances with Windows Server operating system**

  | Range | Price for all regions* | Price for PAR3 |
  |-------------------|------------------------|-------------------|

@@ -121,7 +121,7 @@ You can change the storage type and flexible IP after the Instance creation, whi
  | POP2-16C-64G-WIN | €1.4567/hour | Not available |
  | POP2-32C-128-WIN | €2.9133/hour | Not available |

- **Workload-Optimized Instances**
+ **Specialized Instances**

  | Range | Price for all regions* | Price for PAR3 |
  |-------------------|------------------------|-------------------|

@@ -267,25 +267,12 @@ You are free to bootstrap your own distribution.

  We provide a wide range of different Linux distributions and InstantApps for Instances. Refer to [Scaleway Instance OS images and InstantApps](/instances/reference-content/images-and-instantapps/) for a complete list of all available OSes and InstantApps.

- ### What are the differences between ENT1 and POP2 Instances?
-
- Both ENT1 and POP2 Instance types share the following features:
- - Identical hardware specifications
- - Dedicated vCPU allocation
- - Same pricing structure
- - Accelerated booting process
-
- POP2 Instances provide CPU- and memory-optimized variants tailored to suit your workload requirements more effectively. The primary distinction between ENT1 and POP2 lies in [AMD Secure Encrypted Virtualization (SEV)](https://www.amd.com/fr/developer/sev.html), which is disabled for POP2 Instances.
- By choosing POP2 Instances, you gain access to the latest features, such as the potential for live migration of Instances in the future, ensuring that your infrastructure remains aligned with evolving demands and technological advancements.
- We recommend choosing POP2 Instances for most general workloads unless your specific workload requires features unique to ENT1 Instances.
-
  ### Where are my Instances located?

  Scaleway offers different Instance ranges in all regions: Paris (France), Amsterdam (Netherlands), and Warsaw (Poland).

  Check the [Instances availability guide](/account/reference-content/products-availability/) to discover where each Instance type is available.

-
  ### What makes FR-PAR-2 a sustainable region?

  `FR-PAR-2` is our sustainable and environmentally efficient Availability Zone (AZ) in Paris.
