Merged

30 commits
c7d814e
docs(ins): rework naming
bene2k1 Aug 19, 2025
f078638
docs(ins): update content
bene2k1 Aug 19, 2025
c70b807
feat(ins): ins renaming
bene2k1 Aug 20, 2025
53000af
docs(ins): update
bene2k1 Aug 20, 2025
e60c7e6
Merge branch 'MTA-6390' of https://github.com/scaleway/docs-content i…
bene2k1 Aug 20, 2025
ee607eb
fix(ins): fix wording
bene2k1 Aug 20, 2025
c9b64e6
feat(ins): update content
bene2k1 Aug 20, 2025
e16b2e1
fix(ins): fix link
bene2k1 Aug 20, 2025
462e9cf
docs(ins): update wording
bene2k1 Aug 20, 2025
b06b4cb
Apply suggestions from code review
bene2k1 Aug 20, 2025
bf426aa
Apply suggestions from code review
bene2k1 Aug 20, 2025
6eec14c
Apply suggestions from code review
bene2k1 Aug 20, 2025
ee8fa15
Merge branch 'MTA-6390' of https://github.com/scaleway/docs-content i…
bene2k1 Aug 20, 2025
4eeba21
feat(ins): update table
bene2k1 Aug 20, 2025
67e90bb
Apply suggestions from code review
bene2k1 Aug 20, 2025
83ffc50
Apply suggestions from code review
bene2k1 Aug 20, 2025
aa18fa2
Update pages/instances/reference-content/choosing-instance-type.mdx
bene2k1 Aug 20, 2025
e83f8a1
docs(ins): update wording
bene2k1 Aug 20, 2025
909135b
Merge branch 'MTA-6390' of https://github.com/scaleway/docs-content i…
bene2k1 Aug 20, 2025
d9c3b03
docs(ins): update docs
bene2k1 Aug 20, 2025
4a45c3e
fix(ins): fix typo
bene2k1 Aug 20, 2025
06dae9d
docs(ins): add slo
bene2k1 Aug 26, 2025
a0ba4f0
fix(ins): typo
bene2k1 Aug 26, 2025
bc48c22
feat(ins): update wording
bene2k1 Aug 27, 2025
0463499
docs(ins): fix typo
bene2k1 Aug 27, 2025
d1ff076
docs(ins): replace links
bene2k1 Aug 27, 2025
dda61d8
docs(ins): update links
bene2k1 Aug 27, 2025
9b2ed8f
feat(ins): update
bene2k1 Aug 27, 2025
d3f782b
fix(ins): fix link
bene2k1 Aug 27, 2025
14631b0
Merge branch 'main' into MTA-6390
bene2k1 Aug 28, 2025
2 changes: 1 addition & 1 deletion macros/compute/instances.mdx
@@ -2,4 +2,4 @@
macro: compute-instances
---

An Instance is a computing unit, either virtual or physical, that provides resources to run your applications on. Currently Scaleway offers the following Instance types: [General Purpose](/instances/reference-content/cost-optimized/#development-instances-and-general-purpose-instances), [Development](/instances/reference-content/cost-optimized/#development-instances-and-general-purpose-instances), [GPU](/instances/concepts/#gpu-instance), [Stardust](/instances/reference-content/learning/#learning-range-stardust-instances) and [Enterprise](/instances/reference-content/production-optimized/#production-optimized-range-ent1-instances).
An Instance is a computing unit, either virtual or physical, that provides resources to run your applications on. Currently Scaleway offers the following Instance types: [General Purpose](/instances/reference-content/general-purpose/), [Development](/instances/reference-content/development/), [GPU](/instances/concepts/#gpu-instance), and [Specialized](/instances/reference-content/specialized/).
12 changes: 4 additions & 8 deletions menu/navigation.json
@@ -1610,21 +1610,17 @@
"label": "Instances internet and Block Storage bandwidth overview",
"slug": "instances-bandwidth-overview"
},
{
"label": "The right Instance for learning purposes",
"slug": "learning"
},
{
"label": "The right Instance for development purposes",
"slug": "cost-optimized"
"slug": "development"
},
{
"label": "The right Instance for production purposes",
"slug": "production-optimized"
"slug": "general-purpose"
},
{
"label": "The right Instance for workload purposes",
"slug": "workload-optimized"
"label": "The right Instance for specialized purposes",
"slug": "specialized"
},
{
"label": "Instance OS images and InstantApps",
@@ -116,10 +116,10 @@ There is currently one available savings plan type: the Compute savings plan.

The **Compute savings plan** can be used with the following resources, simultaneously and across all regions:

- Instances 
- Cost-Optimized (DEV1, GP1, PLAY2, PRO2)
- Production-Optimized (ENT1, POP2)
- Workload-Optmized (POP2 HC, POP2-HM, POP2-HN) 
- Instances
- Development
- General Purpose
- Specialized

The following resources are **not** covered by the savings plan discount:

2 changes: 1 addition & 1 deletion pages/gpu/how-to/use-gpu-with-docker.mdx
@@ -13,7 +13,7 @@ Docker is a platform as a service (PaaS) tool that uses OS-level virtualization

Unlike virtual machines, containers share the services of a single operating system kernel. This reduces unnecessary overhead and makes them lightweight and portable. Docker containers can run on any computer running macOS, Windows, or Linux, either on-premises or in a public cloud environment, such as Scaleway.

All [Scaleway GPU Instances](https://www.scaleway.com/en/gpu-instances/) come with prebuilt Docker images which can be launched as soon as you connect to your Instance. Each image provides a different AI environment. When you launch one of these images, you are in your chosen environment within seconds with all your favorite Python packages already installed. Using Docker for your AI projects in this way allows you to ensure that your working environments are both **isolated** and **portable**, since they are in containers that can be easily transferred between machines.
All [Scaleway GPU Instances](/gpu/reference-content/choosing-gpu-instance-type/) come with prebuilt Docker images which can be launched as soon as you connect to your Instance. Each image provides a different AI environment. When you launch one of these images, you are in your chosen environment within seconds with all your favorite Python packages already installed. Using Docker for your AI projects in this way allows you to ensure that your working environments are both **isolated** and **portable**, since they are in containers that can be easily transferred between machines.

You can also run Docker images provided by other sources and use them with your GPU Instance - for instance, you might want to use Docker images provided by NVIDIA, Google, etc. Alternatively, you could also choose to build your own Docker images.
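The GPU passthrough described above can be sketched with two commands. This is a minimal sketch, assuming the NVIDIA Container Toolkit is configured (as it is on Scaleway GPU Instances) and using a public CUDA base image tag that may need adjusting:

```sh
# Verify that containers can see the GPU by running nvidia-smi
# inside a CUDA base image with all GPUs exposed:
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# On a multi-GPU Instance, restrict a container to one specific GPU:
docker run --rm --gpus '"device=0"' nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```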

2 changes: 1 addition & 1 deletion pages/gpu/how-to/use-mig-with-kubernetes.mdx
@@ -24,7 +24,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
<Requirements />

- A Scaleway account logged into the [console](https://console.scaleway.com)
- A [Kubernetes cluster](/kubernetes/quickstart/#how-to-create-a-kubernetes-cluster) with a [GPU Instance](https://www.scaleway.com/en/gpu-instances/) as node
- A [Kubernetes cluster](/kubernetes/quickstart/#how-to-create-a-kubernetes-cluster) with a [GPU Instance](/gpu/reference-content/choosing-gpu-instance-type/) as node

<Message type="tip">
MIG is fully supported on [Scaleway managed Kubernetes](/kubernetes/quickstart/) clusters (Kapsule and Kosmos).
6 changes: 3 additions & 3 deletions pages/gpu/reference-content/choosing-gpu-instance-type.mdx
@@ -16,7 +16,7 @@ It empowers European AI startups, giving them the tools (without the need for a

## How to choose the right GPU Instance type

Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](https://www.scaleway.com/en/gpu-instances/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
Below, you will find a guide to help you make an informed decision:

* **Workload requirements:** Identify the nature of your workload. Are you running machine learning, deep learning, high-performance computing (HPC), data analytics, or graphics-intensive applications? Different Instance types are optimized for different types of workloads. For example, the H100 is not designed for graphics rendering. However, other models are. As [stated by Tim Dettmers](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/), “Tensor Cores are most important, followed by the memory bandwidth of a GPU, the cache hierarchy, and only then FLOPS of a GPU.”. For more information, refer to the [NVIDIA GPU portfolio](https://docs.nvidia.com/data-center-gpu/line-card.pdf).
@@ -28,7 +28,7 @@ Below, you will find a guide to help you make an informed decision:
* **Scaling:** Consider the scalability requirements of your workload. The most efficient way to scale up your workload is by using:
* Bigger GPU
* Up to 2 PCIe GPUs with [H100 Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) or 8 PCIe GPUs with [L4](https://www.scaleway.com/en/l4-gpu-instance/) or [L40S](https://www.scaleway.com/en/contact-l40s/) Instances.
* Or better, an HGX-based server setup with up to 8x NVLink GPUs with [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/)
* Or better, an HGX-based server setup with up to 8x NVlink GPUs with [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/)
* A [supercomputer architecture](https://www.scaleway.com/en/ai-supercomputers/) for a larger setup for workload-intensive tasks
* Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 or H100-SXM GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods.
* **Online resources:** Check for online resources, forums, and community discussions related to the specific GPU type you are considering. This can provide insights into common issues, best practices, and optimizations.
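The MIG-based scaling in the last bullet boils down to pods requesting a MIG slice instead of a whole GPU. As a sketch (the pod name is illustrative, and the resource name `nvidia.com/mig-1g.10gb` is an assumption that depends on the MIG profile you created and on the device plugin's MIG strategy):

```yaml
# Hypothetical pod requesting one MIG slice of an H100 GPU;
# up to seven such pods can share a single physical GPU.
apiVersion: v1
kind: Pod
metadata:
  name: mig-demo
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/mig-1g.10gb: 1
```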
@@ -109,7 +109,7 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
| What they are not made for | | | | |

<Message type="note">
The service level objective (SLO) for all GPU Instance types (except H100-SXM) is 99.5% availability. [Read the SLA](https://www.scaleway.com/en/virtual-instances/sla/)
The service level objective (SLO) for all GPU Instance types (except H100-SXM) is 99.5% availability. [Read the SLA](https://www.scaleway.com/en/virtual-instances/sla/).
</Message>

### Scaleway AI Supercomputer
34 changes: 14 additions & 20 deletions pages/instances/concepts.mdx
@@ -13,6 +13,10 @@ import Region from '@macros/concepts/region.mdx'
import Volumes from '@macros/concepts/volumes.mdx'
import StorageBootOnBlock from '@macros/storage/boot-on-block.mdx'

## ARM Instances

[ARM Instances](/instances/reference-content/understanding-differences-x86-arm/) are cost-effective and energy-efficient Instances powered by Ampere Altra processors, optimized for AI innovation, real-time applications, and sustainable cloud computing.


## Availability Zone

@@ -30,13 +34,9 @@ import StorageBootOnBlock from '@macros/storage/boot-on-block.mdx'

Cloud-init is a multi-distribution package that [provides boot time customization for cloud servers](/instances/how-to/use-boot-modes/#how-to-use-cloud-init). It enables an automatic Instance configuration as it boots into the cloud, turning a generic Ubuntu image into a configured server in a few seconds.
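The boot-time customization described above is driven by a user-data file. A minimal sketch (the user name, key, and package are illustrative): create a user, install nginx, and start it on first boot.

```yaml
#cloud-config
# Hypothetical cloud-init user data passed to an Instance at creation.
users:
  - name: demo
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... demo@example
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
```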

## Cost-Optimized Instances

[Cost-Optimized Instances](https://www.scaleway.com/en/cost-optimized-instances/) are production-grade [Instances](#instance) designed for scalable infrastructures. Cost-Optimized Instances support the boot-on-block feature and allow you to launch high-performance services with high-end CPUs.

## Development Instance

[Development Instances](https://www.scaleway.com/en/cost-optimized-instances/) are reliable and flexible Instances tuned to host your websites, applications, and development environments.
[Development Instances](/instances/reference-content/development/) are reliable and flexible Instances tuned to host your websites, applications, and development environments.

## Dynamic IP

@@ -48,9 +48,13 @@ You can choose to give your Instance a dynamic IP address when creating or updat

Flexible IP addresses are public IP addresses that you can hold independently of any Instance. When you create a Scaleway Instance, by default, its public IP address is also a flexible IP address. Flexible IP addresses can be attached to and detached from any Instances you wish. You can keep a number of flexible IP addresses in your account at any given time. When you delete a flexible IP address, it is disassociated from your account to be used by other users. Find out more with our dedicated documentation on [how to use flexible IP addresses](/instances/how-to/use-flexips/). See also [Dynamic IPs](#dynamic-ip).

## General Purpose Instances

[General Purpose Instances](/instances/reference-content/general-purpose/) are production-grade [Instances](#instance) designed for scalable infrastructures. General Purpose Instances support the boot-on-block feature and allow you to launch high-performance services with high-end CPUs.

## GPU Instance

[GPU Instances](https://www.scaleway.com/en/gpu-instances/) are powerful Instances equipped with dedicated high-end Nvidia graphics processing units. See our [dedicated GPU documentation](/gpu/) for more details.
[GPU Instances](/gpu/reference-content/choosing-gpu-instance-type/) are powerful Instances equipped with dedicated high-end Nvidia graphics processing units. See our [dedicated GPU documentation](/gpu/) for more details.

## Image

@@ -64,10 +68,6 @@ An Instance is a virtual computing unit that offers resources for running applic

An InstantApp is an image with a preinstalled application. By choosing an InstantApp when prompted to select an image during the [creation of your Instance](/instances/how-to/create-an-instance/), you choose to install the specified application on your Instance. You can then start using the application immediately.

## Learning Instance

[Learning Instances](https://www.scaleway.com/en/stardust-instances/) are the perfect Instances for small workloads and simple applications. You can create up to one Instance per Availability Zone (available in FR-PAR-1 and NL-AMS-1).

## Local volumes

<LocalVolumes />
@@ -76,16 +76,6 @@ An InstantApp is an image with a preinstalled application. By choosing an Instan

Placement groups allow you to run multiple Compute Instances, each on a different physical hypervisor. Placement groups have two operating modes. The first one is called `max_availability`. It ensures that all the Compute Instances that belong to the same cluster will not run on the same underlying hardware. The second one is called `low_latency` and does the opposite, bringing Compute Instances closer together to achieve higher network throughput. [Learn how to use placement groups](/instances/how-to/use-placement-groups/).
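The two operating modes above map directly onto placement-group creation. A sketch using the Scaleway CLI (the flag names and values are assumptions; check `scw instance placement-group create --help` for the exact syntax):

```sh
# Create a group that spreads Instances across distinct hypervisors:
scw instance placement-group create name=ha-group \
    policy-type=max_availability policy-mode=enforced

# Reference the group when creating an Instance so it joins the spread
# (the ID below is a placeholder):
scw instance server create type=GP1-S image=ubuntu_jammy \
    placement-group-id=<placement-group-id>
```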

## Production-Optimized Instances

[Production-Optimized Instances](https://www.scaleway.com/en/production-optimized-instances/) (aka POP2) are compute resources with dedicated resources (RAM and vCPUs). Designed for demanding applications, high-traffic databases, and production workloads.

Three variants of POP2 Instances are available:
* **POP2**: Production-Optimized Instances with Block Storage.
* **POP2-HC**: Workload-Optimized Instances, providing a ratio of vCPU:RAM of 1:8.
* **POP2-HM**: Workload-Optimized Instances, providing a ratio of vCPU:RAM of 1:2.
* **POP2-HN**: Workload-Optimized Instances, providing a ratio of vCPU:RAM of 1:2 and up to 10 Gbps bandwidth.

## Power-off mode

The Power-off mode [shuts down an Instance](/instances/how-to/power-off-instance/) by transferring all data on the local volume of the Instance to a volume store. The physical node is released back to the pool of available machines. The reserved flexible IP of the Instance remains available in the account.
@@ -159,3 +149,7 @@ Tags allow you to organize, sort, filter, and monitor your cloud resources using
## Volumes

<Volumes />

## x86 (Intel/AMD) Instances

[x86 (Intel/AMD) Instances](/instances/reference-content/understanding-differences-x86-arm/) are reliable and high-performance Instances powered by AMD EPYC processors, tailored for development, testing, production workloads, and general-purpose applications.
23 changes: 5 additions & 18 deletions pages/instances/faq.mdx
@@ -59,13 +59,13 @@ You can change the storage type and flexible IP after the Instance creation, whi
* PAR3 prices are shown separately.
</Message>

**Learning Instances**
**Development Instances**

| Range | Price for all regions* | Price for PAR3 |
|-------------------|------------------------|-------------------|
| STARDUST1-S | €0.0046/hour | Not available |

**Cost-Optimized Instances**
**Development Instances**

| Range | Price for all regions* | Price for PAR3 |
|-------------------|------------------------|-------------------|
@@ -92,7 +92,7 @@ You can change the storage type and flexible IP after the Instance creation, whi
| DEV1-L | €0.0495/hour | Not available |
| DEV1-XL | €0.0731/hour | Not available |

**Production-Optimized Instances**
**General Purpose Instances**

| Range | Price for all regions* | Price for PAR3 |
|-------------------|------------------------|-------------------|
@@ -111,7 +111,7 @@ You can change the storage type and flexible IP after the Instance creation, whi
| ENT1-XL | €2.35/hour | €3.53/hour |
| ENT1-2XL | €3.53/hour | €5.29/hour |

**Production-Optimized Instances with Windows Server operating system**
**General Purpose Instances with Windows Server operating system**

| Range | Price for all regions* | Price for PAR3 |
|-------------------|------------------------|-------------------|
Expand All @@ -121,7 +121,7 @@ You can change the storage type and flexible IP after the Instance creation, whi
| POP2-16C-64G-WIN | €1.4567/hour | Not available |
| POP2-32C-128-WIN | €2.9133/hour | Not available |

**Workload-Optimized Instances**
**Specialized Instances**

| Range | Price for all regions* | Price for PAR3 |
|-------------------|------------------------|-------------------|
@@ -267,25 +267,12 @@ You are free to bootstrap your own distribution.

We provide a wide range of different Linux distributions and InstantApps for Instances. Refer to [Scaleway Instance OS images and InstantApps](/instances/reference-content/images-and-instantapps/) for a complete list of all available OSes and InstantApps.

### What are the differences between ENT1 and POP2 Instances?

Both ENT1 and POP2 Instance types share the following features:
- Identical hardware specifications
- Dedicated vCPU allocation
- Same pricing structure
- Accelerated booting process

POP2 Instances provide CPU- and memory-optimized variants tailored to suit your workload requirements more effectively. The primary distinction between ENT1 and POP2 lies in [AMD Secure Encrypted Virtualization (SEV)](https://www.amd.com/fr/developer/sev.html), which is disabled for POP2 Instances.
By choosing POP2 Instances, you gain access to the latest features, such as the potential for live migration of Instances in the future, ensuring that your infrastructure remains aligned with evolving demands and technological advancements.
We recommend choosing POP2 Instances for most general workloads unless your specific workload requires features unique to ENT1 Instances.

### Where are my Instances located?

Scaleway offers different Instance ranges in all regions: Paris (France), Amsterdam (Netherlands), and Warsaw (Poland).

Check the [Instances availability guide](/account/reference-content/products-availability/) to discover where each Instance type is available.


### What makes FR-PAR-2 a sustainable region?

`FR-PAR-2` is our sustainable and environmentally efficient Availability Zone (AZ) in Paris.