diff --git a/macros/compute/instances.mdx b/macros/compute/instances.mdx
index 18f165b8a6..7d2bdb7381 100644
--- a/macros/compute/instances.mdx
+++ b/macros/compute/instances.mdx
@@ -2,4 +2,4 @@
macro: compute-instances
---
-An Instance is a computing unit, either virtual or physical, that provides resources to run your applications on. Currently Scaleway offers the following Instance types: [General Purpose](/instances/reference-content/cost-optimized/#development-instances-and-general-purpose-instances), [Development](/instances/reference-content/cost-optimized/#development-instances-and-general-purpose-instances), [GPU](/instances/concepts/#gpu-instance), [Stardust](/instances/reference-content/learning/#learning-range-stardust-instances) and [Enterprise](/instances/reference-content/production-optimized/#production-optimized-range-ent1-instances).
\ No newline at end of file
+An Instance is a computing unit, either virtual or physical, that provides resources to run your applications on. Currently Scaleway offers the following Instance types: [General Purpose](/instances/reference-content/general-purpose/), [Development](/instances/reference-content/development/), [GPU](/instances/concepts/#gpu-instance), and [Specialized](/instances/reference-content/specialized/).
\ No newline at end of file
diff --git a/menu/navigation.json b/menu/navigation.json
index a83d0a2d6d..93b3932a84 100644
--- a/menu/navigation.json
+++ b/menu/navigation.json
@@ -1610,21 +1610,17 @@
"label": "Instances internet and Block Storage bandwidth overview",
"slug": "instances-bandwidth-overview"
},
- {
- "label": "The right Instance for learning purposes",
- "slug": "learning"
- },
{
"label": "The right Instance for development purposes",
- "slug": "cost-optimized"
+ "slug": "development"
},
{
"label": "The right Instance for production purposes",
- "slug": "production-optimized"
+ "slug": "general-purpose"
},
{
- "label": "The right Instance for workload purposes",
- "slug": "workload-optimized"
+ "label": "The right Instance for specialized purposes",
+ "slug": "specialized"
},
{
"label": "Instance OS images and InstantApps",
diff --git a/pages/billing/additional-content/understanding-savings-plans.mdx b/pages/billing/additional-content/understanding-savings-plans.mdx
index bd307db7fa..8a421f0ddf 100644
--- a/pages/billing/additional-content/understanding-savings-plans.mdx
+++ b/pages/billing/additional-content/understanding-savings-plans.mdx
@@ -116,10 +116,10 @@ There is currently one available savings plan type: the Compute savings plan.
The **Compute savings plan** can be used with the following resources, simultaneously and across all regions:
- - Instances
- - Cost-Optimized (DEV1, GP1, PLAY2, PRO2)
- - Production-Optimized (ENT1, POP2)
- - Workload-Optmized (POP2 HC, POP2-HM, POP2-HN)
+ - Instances
+ - Development
+ - General Purpose
+ - Specialized
The following resources are **not** covered by the savings plan discount:
diff --git a/pages/gpu/how-to/use-gpu-with-docker.mdx b/pages/gpu/how-to/use-gpu-with-docker.mdx
index 0461eb5ef6..cbcbd50f5a 100644
--- a/pages/gpu/how-to/use-gpu-with-docker.mdx
+++ b/pages/gpu/how-to/use-gpu-with-docker.mdx
@@ -13,7 +13,7 @@ Docker is a platform as a service (PaaS) tool that uses OS-level virtualization
Unlike virtual machines, containers share the services of a single operating system kernel. This reduces unnecessary overhead and makes them lightweight and portable. Docker containers can run on any computer running macOS, Windows, or Linux, either on-premises or in a public cloud environment, such as Scaleway.
-All [Scaleway GPU Instances](https://www.scaleway.com/en/gpu-instances/) come with prebuilt Docker images which can be launched as soon as you connect to your Instance. Each image provides a different AI environment. When you launch one of these images, you are in your chosen environment within seconds with all your favorite Python packages already installed. Using Docker for your AI projects in this way allows you to ensure that your working environments are both **isolated** and **portable**, since they are in containers that can be easily transferred between machines.
+All [Scaleway GPU Instances](/gpu/reference-content/choosing-gpu-instance-type/) come with prebuilt Docker images which can be launched as soon as you connect to your Instance. Each image provides a different AI environment. When you launch one of these images, you are in your chosen environment within seconds with all your favorite Python packages already installed. Using Docker for your AI projects in this way allows you to ensure that your working environments are both **isolated** and **portable**, since they are in containers that can be easily transferred between machines.
You can also run Docker images provided by other sources and use them with your GPU Instance - for instance, you might want to use Docker images provided by NVIDIA, Google, etc. Alternatively, you could also choose to build your own Docker images.
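+
+As a quick illustration (a minimal sketch, not an exhaustive reference), the following command runs a GPU-enabled container and lists the GPUs it can see. The CUDA image tag is only an example; pick any tag available on Docker Hub that suits your project. It assumes the NVIDIA Container Toolkit is available, as it is on Scaleway's GPU OS images.
+
+```bash
+# Run a throwaway container with access to all GPUs and print the detected devices.
+docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
+```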
diff --git a/pages/gpu/how-to/use-mig-with-kubernetes.mdx b/pages/gpu/how-to/use-mig-with-kubernetes.mdx
index 696c9a1f99..eec6d316a4 100644
--- a/pages/gpu/how-to/use-mig-with-kubernetes.mdx
+++ b/pages/gpu/how-to/use-mig-with-kubernetes.mdx
@@ -24,7 +24,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
- A Scaleway account logged into the [console](https://console.scaleway.com)
-- A [Kubernetes cluster](/kubernetes/quickstart/#how-to-create-a-kubernetes-cluster) with a [GPU Instance](https://www.scaleway.com/en/gpu-instances/) as node
+- A [Kubernetes cluster](/kubernetes/quickstart/#how-to-create-a-kubernetes-cluster) with a [GPU Instance](/gpu/reference-content/choosing-gpu-instance-type/) as node
MIG is fully supported on [Scaleway managed Kubernetes](/kubernetes/quickstart/) clusters (Kapsule and Kosmos).
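+
+As an illustration, once the NVIDIA device plugin exposes MIG partitions on a node, a pod can request a single partition by its resource name. The sketch below assumes a `1g.10gb` profile; adjust the resource name and image tag to your own MIG layout:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mig-example
+spec:
+  restartPolicy: Never
+  containers:
+    - name: cuda
+      image: nvidia/cuda:12.2.0-base-ubuntu22.04 # example image tag
+      command: ["nvidia-smi", "-L"]              # prints the MIG device assigned to this pod
+      resources:
+        limits:
+          nvidia.com/mig-1g.10gb: 1              # example MIG resource name
+```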
diff --git a/pages/gpu/reference-content/choosing-gpu-instance-type.mdx b/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
index ccb7ee0ca8..95805345be 100644
--- a/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
+++ b/pages/gpu/reference-content/choosing-gpu-instance-type.mdx
@@ -16,7 +16,7 @@ It empowers European AI startups, giving them the tools (without the need for a
## How to choose the right GPU Instance type
-Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](https://www.scaleway.com/en/gpu-instances/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
+Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
Below, you will find a guide to help you make an informed decision:
* **Workload requirements:** Identify the nature of your workload. Are you running machine learning, deep learning, high-performance computing (HPC), data analytics, or graphics-intensive applications? Different Instance types are optimized for different types of workloads. For example, the H100 is not designed for graphics rendering. However, other models are. As [stated by Tim Dettmers](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/), “Tensor Cores are most important, followed by the memory bandwidth of a GPU, the cache hierarchy, and only then FLOPS of a GPU.”. For more information, refer to the [NVIDIA GPU portfolio](https://docs.nvidia.com/data-center-gpu/line-card.pdf).
@@ -28,7 +28,7 @@ Below, you will find a guide to help you make an informed decision:
* **Scaling:** Consider the scalability requirements of your workload. The most efficient way to scale up your workload is by using:
* Bigger GPU
* Up to 2 PCIe GPU with [H100 Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) or 8 PCIe GPU with [L4](https://www.scaleway.com/en/l4-gpu-instance/) or [L4OS](https://www.scaleway.com/en/contact-l40s/) Instances.
- * Or better, an HGX-based server setup with up to 8x NVlink GPUs with [H100-SXM Instances](https://www.scaleway.com/en/gpu-instances/)
+ * Or better, an HGX-based server setup with up to 8x NVlink GPUs with [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/)
* A [supercomputer architecture](https://www.scaleway.com/en/ai-supercomputers/) for a larger setup for workload-intensive tasks
* Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 or H100-SXM GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods.
* **Online resources:** Check for online resources, forums, and community discussions related to the specific GPU type you are considering. This can provide insights into common issues, best practices, and optimizations.
@@ -109,7 +109,7 @@ Remember that there is no one-size-fits-all answer, and the right GPU Instance t
| What they are not made for | | | | |
- The service level objective (SLO) for all GPU Instance types (except H100-SXM) is 99.5% availability. [Read the SLA](https://www.scaleway.com/en/virtual-instances/sla/)
+ The service level objective (SLO) for all GPU Instance types (except H100-SXM) is 99.5% availability. [Read the SLA](https://www.scaleway.com/en/virtual-instances/sla/).
### Scaleway AI Supercomputer
diff --git a/pages/instances/concepts.mdx b/pages/instances/concepts.mdx
index 67cbd3487b..09289ab86a 100644
--- a/pages/instances/concepts.mdx
+++ b/pages/instances/concepts.mdx
@@ -13,6 +13,10 @@ import Region from '@macros/concepts/region.mdx'
import Volumes from '@macros/concepts/volumes.mdx'
import StorageBootOnBlock from '@macros/storage/boot-on-block.mdx'
+## ARM Instances
+
+[ARM Instances](/instances/reference-content/understanding-differences-x86-arm/) are cost-effective and energy-efficient Instances powered by Ampere Altra processors, optimized for AI innovation, real-time applications, and sustainable cloud computing.
+
## Availability Zone
@@ -30,13 +34,9 @@ import StorageBootOnBlock from '@macros/storage/boot-on-block.mdx'
Cloud-init is a multi-distribution package that [provides boot time customization for cloud servers](/instances/how-to/use-boot-modes/#how-to-use-cloud-init). It enables an automatic Instance configuration as it boots into the cloud, turning a generic Ubuntu image into a configured server in a few seconds.
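+
+As a minimal sketch, the user data passed at Instance creation could be a `#cloud-config` file like the following, which installs and starts a web server on first boot:
+
+```yaml
+#cloud-config
+package_update: true
+packages:
+  - nginx
+runcmd:
+  - systemctl enable --now nginx
+```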
-## Cost-Optimized Instances
-
-[Cost-Optimized Instances](https://www.scaleway.com/en/cost-optimized-instances/) are production-grade [Instances](#instance) designed for scalable infrastructures. Cost-Optimized Instances support the boot-on-block feature and allow you to launch high-performance services with high-end CPUs.
-
## Development Instance
-[Development Instances](https://www.scaleway.com/en/cost-optimized-instances/) are reliable and flexible Instances tuned to host your websites, applications, and development environments.
+[Development Instances](/instances/reference-content/development/) are reliable and flexible Instances tuned to host your websites, applications, and development environments.
## Dynamic IP
@@ -48,9 +48,13 @@ You can choose to give your Instance a dynamic IP address when creating or updat
Flexible IP addresses are public IP addresses that you can hold independently of any Instance. When you create a Scaleway Instance, by default, its public IP address is also a flexible IP address. Flexible IP addresses can be attached to and detached from any Instances you wish. You can keep a number of flexible IP addresses in your account at any given time. When you delete a flexible IP address, it is disassociated from your account to be used by other users. Find out more with our dedicated documentation on [how to use flexible IP addresses](/instances/how-to/use-flexips/). See also [Dynamic IPs](#dynamic-ip).
+## General Purpose Instances
+
+[General Purpose Instances](/instances/reference-content/general-purpose/) are production-grade [Instances](#instance) designed for scalable infrastructures. General Purpose Instances support the boot-on-block feature and allow you to launch high-performance services with high-end CPUs.
+
## GPU Instance
-[GPU Instances](https://www.scaleway.com/en/gpu-instances/) are powerful Instances equipped with dedicated high-end Nvidia graphics processing units. See our [dedicated GPU documentation](/gpu/) for more details.
+[GPU Instances](/gpu/reference-content/choosing-gpu-instance-type/) are powerful Instances equipped with dedicated high-end Nvidia graphics processing units. See our [dedicated GPU documentation](/gpu/) for more details.
## Image
@@ -64,10 +68,6 @@ An Instance is a virtual computing unit that offers resources for running applic
An InstantApp is an image with a preinstalled application. By choosing an InstantApp when prompted to select an image during the [creation of your Instance](/instances/how-to/create-an-instance/), you choose to install the specified application on your Instance. You can then start using the application immediately.
-## Learning Instance
-
-[Learning Instances](https://www.scaleway.com/en/stardust-instances/) are the perfect Instances for small workloads and simple applications. You can create up to one Instance per Availability Zone (available in FR-PAR-1 and NL-AMS-1).
-
## Local volumes
@@ -76,16 +76,6 @@ An InstantApp is an image with a preinstalled application. By choosing an Instan
Placement groups allow you to run multiple Compute Instances, each on a different physical hypervisor. Placement groups have two operating modes. The first one is called `max_availability`. It ensures that all the Compute Instances that belong to the same cluster will not run on the same underlying hardware. The second one is called `low_latency` and does the opposite, bringing Compute Instances closer together to achieve higher network throughput. [Learn how to use placement groups](/instances/how-to/use-placement-groups/).
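+
+As an illustration, a placement group can be created through the Instance API before Instances are attached to it. The sketch below assumes a valid secret key and Project ID, and uses the `fr-par-1` Availability Zone as an example:
+
+```bash
+curl -X POST \
+  -H "X-Auth-Token: $SCW_SECRET_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{"name": "ha-group", "policy_type": "max_availability", "policy_mode": "enforced", "project": "'"$SCW_PROJECT_ID"'"}' \
+  "https://api.scaleway.com/instance/v1/zones/fr-par-1/placement_groups"
+```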
-## Production-Optimized Instances
-
-[Production-Optimized Instances](https://www.scaleway.com/en/production-optimized-instances/) (aka POP2) are compute resources with dedicated resources (RAM and vCPUs). Designed for demanding applications, high-traffic databases, and production workloads.
-
-Three variants of POP2 Instances are available:
-* **POP2**: Production-Optimized Instances with Block Storage.
-* **POP2-HC**: Workload-Optimized Instances, providing a ratio of vCPU:RAM of 1:8.
-* **POP2-HM**: Workload-Optimized Instances, providing a ratio of vCPU:RAM of 1:2.
-* **POP2-HN**: Workload-Optimized Instances, providing a ratio of vCPU:RAM of 1:2 and up to 10 Gbps bandwidth.
-
## Power-off mode
The Power-off mode [shuts down an Instance](/instances/how-to/power-off-instance/) by transferring all data on the local volume of the Instance to a volume store. The physical node is released back to the pool of available machines. The reserved flexible IP of the Instance remains available in the account.
@@ -159,3 +149,7 @@ Tags allow you to organize, sort, filter, and monitor your cloud resources using
## Volumes
+
+## x86 (Intel/AMD) Instances
+
+[x86 (Intel/AMD) Instances](/instances/reference-content/understanding-differences-x86-arm/) are reliable and high-performance Instances powered by AMD EPYC processors, tailored for development, testing, production workloads, and general-purpose applications.
\ No newline at end of file
diff --git a/pages/instances/faq.mdx b/pages/instances/faq.mdx
index 095cadde45..c9f5a3dc34 100644
--- a/pages/instances/faq.mdx
+++ b/pages/instances/faq.mdx
@@ -59,13 +59,13 @@ You can change the storage type and flexible IP after the Instance creation, whi
* PAR3 prices are shown separately.
-**Learning Instances**
+**Development Instances (Stardust)**
| Range | Price for all regions* | Price for PAR3 |
|-------------------|------------------------|-------------------|
| STARDUST1-S | €0.0046/hour | Not available |
-**Cost-Optimized Instances**
+**Development Instances**
| Range | Price for all regions* | Price for PAR3 |
|-------------------|------------------------|-------------------|
@@ -92,7 +92,7 @@ You can change the storage type and flexible IP after the Instance creation, whi
| DEV1-L | €0.0495/hour | Not available |
| DEV1-XL | €0.0731/hour | Not available |
-**Production-Optimized Instances**
+**General Purpose Instances**
| Range | Price for all regions* | Price for PAR3 |
|-------------------|------------------------|-------------------|
@@ -111,7 +111,7 @@ You can change the storage type and flexible IP after the Instance creation, whi
| ENT1-XL | €2.35/hour | €3.53/hour |
| ENT1-2XL | €3.53/hour | €5.29/hour |
-**Production-Optimized Instances with Windows Server operating system**
+**General Purpose Instances with Windows Server operating system**
| Range | Price for all regions* | Price for PAR3 |
|-------------------|------------------------|-------------------|
@@ -121,7 +121,7 @@ You can change the storage type and flexible IP after the Instance creation, whi
| POP2-16C-64G-WIN | €1.4567/hour | Not available |
| POP2-32C-128-WIN | €2.9133/hour | Not available |
-**Workload-Optimized Instances**
+**Specialized Instances**
| Range | Price for all regions* | Price for PAR3 |
|-------------------|------------------------|-------------------|
@@ -267,25 +267,12 @@ You are free to bootstrap your own distribution.
We provide a wide range of different Linux distributions and InstantApps for Instances. Refer to [Scaleway Instance OS images and InstantApps](/instances/reference-content/images-and-instantapps/) for a complete list of all available OSes and InstantApps.
-### What are the differences between ENT1 and POP2 Instances?
-
-Both ENT1 and POP2 Instance types share the following features:
-- Identical hardware specifications
-- Dedicated vCPU allocation
-- Same pricing structure
-- Accelerated booting process
-
-POP2 Instances provide CPU- and memory-optimized variants tailored to suit your workload requirements more effectively. The primary distinction between ENT1 and POP2 lies in [AMD Secure Encrypted Virtualization (SEV)](https://www.amd.com/fr/developer/sev.html), which is disabled for POP2 Instances.
-By choosing POP2 Instances, you gain access to the latest features, such as the potential for live migration of Instances in the future, ensuring that your infrastructure remains aligned with evolving demands and technological advancements.
-We recommend choosing POP2 Instances for most general workloads unless your specific workload requires features unique to ENT1 Instances.
-
### Where are my Instances located?
Scaleway offers different Instance ranges in all regions: Paris (France), Amsterdam (Netherlands), and Warsaw (Poland).
Check the [Instances availability guide](/account/reference-content/products-availability/) to discover where each Instance type is available.
-
### What makes FR-PAR-2 a sustainable region?
`FR-PAR-2` is our sustainable and environmentally efficient Availability Zone (AZ) in Paris.
diff --git a/pages/instances/how-to/create-an-instance.mdx b/pages/instances/how-to/create-an-instance.mdx
index c86808b82b..073f308ca1 100644
--- a/pages/instances/how-to/create-an-instance.mdx
+++ b/pages/instances/how-to/create-an-instance.mdx
@@ -67,7 +67,7 @@ Select a tab below for instructions on how to create an Instance via either our
2. Click **Create Instance**. The [Instance creation page](https://console.scaleway.com/instance/servers) displays.
3. Complete the following steps:
- Choose an **Availability Zone**, which represents the geographical region where your Instance will be deployed.
- - **Choose a POP2-WIN** Instance type from the **Production-Optimized** range.
+ - **Choose a Windows** Instance type from the **General Purpose** range.
- **Choose a Windows Server image** to run on your Instance.
- **Name your Instance**, or leave the randomly-generated name in place. Optionally, you can add [tags](/instances/concepts/#tags) to help you organize your Instance.
- **Add volumes**, which are storage spaces used by your Instances. A block volume with a default name and 5,000 IOPS is automatically provided for your system volume. You can customize this volume and attach up to 16 local and/or block type volumes as needed.
diff --git a/pages/instances/how-to/migrate-instances.mdx b/pages/instances/how-to/migrate-instances.mdx
index 59cea18f74..92481bdfc2 100644
--- a/pages/instances/how-to/migrate-instances.mdx
+++ b/pages/instances/how-to/migrate-instances.mdx
@@ -9,7 +9,7 @@ dates:
import Requirements from '@macros/iam/requirements.mdx'
-The Scaleway platform makes it very easy to migrate your data from one Instance to another or upgrade your Instance to a more powerful one if your requirements grow. In this how-to, we will upgrade an Instance by migrating from a [DEV1-S](/instances/concepts/#development-instance) Instance to a [GP1-XS](/instances/reference-content/cost-optimized/#development-instances-and-general-purpose-instances) Instance. The new GP1-XS Instance will have the same [flexible IP](/instances/concepts/#flexible-ip) as the original DEV1-S Instance.
+The Scaleway platform makes it very easy to migrate your data from one Instance to another or upgrade your Instance to a more powerful one if your requirements grow.
For more information about choosing the best Instance type to migrate to for your use case, see our [dedicated documentation](/instances/reference-content/choosing-instance-type/).
@@ -30,9 +30,9 @@ Follow the instructions to [create an image](/instances/how-to/create-a-backup/#
2. Click the Instance you created an image of.
3. Click the **Images** tab.
4. Click next to the Instance's image.
-5. Select **Create Instance from image** on the drop-down list. You are redirected to the Instance Creation Wizard, where the image has been preselected for you at step 2.
+5. Select **Create Instance from image** on the drop-down list. You are redirected to the Instance creation wizard, where the image has been preselected for you at step 2.
6. Finish configuring the Instance according to your requirements. Notably:
- - You are free to choose your Instance type. We chose to create a [GP1-XS](/instances/reference-content/cost-optimized/#development-instances-and-general-purpose-instances) Instance, upgrading from the [DEV1-S](/instances/concepts/#development-instance) on which the image was based.
+    - You are free to choose your Instance type, as long as it uses the same CPU architecture as the original Instance (e.g. ARM to ARM, or x86 to x86); you can verify the architecture as shown below.
- Click **Advanced options** and use the toggle to deselect **flexible IP**. This creates a the new Instance without a flexible IP, as we are going to attach the one from the existing Instance in the next step.
7. Click **Create Instance** to finish. The Instance is created and shows in your Instances list.
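+
+If you are unsure which architecture the original Instance uses, a quick check over SSH on that Instance tells you what to match:
+
+```bash
+# x86_64 means an x86 (Intel/AMD) Instance type; aarch64 means an ARM Instance type.
+uname -m
+```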
@@ -42,13 +42,13 @@ Next, we will move the original DEV1-S Instance's [flexible IP address](/instanc
1. Click **CPU & GPU Instances** in the **Compute** section of the side menu. The [Instances page](https://console.scaleway.com/instance/servers) displays.
2. Click the **Flexible IPs** tab.
-3. Click next to the DEV1-S's flexible IP. In the pop-up menu that then displays, click **Switch Instance**.
-4. Select the GP1-XS Instance from the drop-down list, and click **Attach flexible IP to Instance**.
+3. Click next to the Instance's flexible IP. In the pop-up menu that then displays, click **Switch Instance**.
+4. Select the new Instance from the drop-down list, and click **Attach flexible IP to Instance**.
The new Instance is now reachable via the old Instance's flexible IP.
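+
+For scripted migrations, the same switch can be sketched with the Instance API by updating the flexible IP's `server` field. This assumes a valid secret key, the flexible IP ID, the new Instance ID, and the `fr-par-1` zone as an example:
+
+```bash
+curl -X PATCH \
+  -H "X-Auth-Token: $SCW_SECRET_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{"server": "'"$NEW_INSTANCE_ID"'"}' \
+  "https://api.scaleway.com/instance/v1/zones/fr-par-1/ips/$FLEXIBLE_IP_ID"
+```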
## How to delete the old Instance
-Presuming that you no longer need the original DEV1-S Instance, you can [delete it](/instances/how-to/delete-instance/).
+Presuming that you no longer need the original Instance, you can [delete it](/instances/how-to/delete-instance/).
diff --git a/pages/instances/reference-content/choosing-instance-type.mdx b/pages/instances/reference-content/choosing-instance-type.mdx
index 7627f5fc7a..14a162f70d 100644
--- a/pages/instances/reference-content/choosing-instance-type.mdx
+++ b/pages/instances/reference-content/choosing-instance-type.mdx
@@ -4,7 +4,7 @@ description: Find out how to select the ideal Scaleway Instance type for your sp
dates:
validation: 2025-05-15
posted: 2023-02-20
-tags: instance type stardust range vcpu hyperthread core ram bandwidth dedicated shared memory hypervisor vm storage dev1 play2 gp1 pro2 ent1 gpu arm learning development production production-optimized cost-optimized memory-optimized storage-optimized
+tags: instance comparison vcpu ram core
---
Scaleway **CPU & GPU Instances** are virtual machines in the cloud. You can create and manage Instances via our [console](https://console.scaleway.com/), [API](https://www.scaleway.com/en/developers/api/), [CLI](https://www.scaleway.com/en/cli/), or [other developer tools](https://www.scaleway.com/en/developers/). When you [create an Instance](/instances/how-to/create-an-instance/), you must select the **Instance type** you want to create. This page explains the different ranges of Instances available at Scaleway and helps you to choose the best one for your needs.
@@ -13,7 +13,7 @@ Scaleway **CPU & GPU Instances** are virtual machines in the cloud. You can crea
Different Instance types have different prices and are designed for different use cases. They offer different levels of power and performance, based on their **vCPU** (cores), **memory**, **storage**, and **bandwidth**.
-You may not need a super powerful Instance if you just want to play around and do some experiments for personal projects, so a **Learning** Instance could be perfect for you in this case. But if you want to use your Instance to host a business-critical application in production, you need the power and reliability of a **Production-Optimized** Instance, precisely designed to reliably handle this type of demanding workload.
+You may not need a super powerful Instance if you just want to play around and do some experiments for personal projects, so a **Development** Instance could be perfect for you in this case. But if you want to use your Instance to host a business-critical application in production, you need the power and reliability of a **General Purpose** Instance, precisely designed to reliably handle this type of demanding workload.
## Instance technical specifications
@@ -47,35 +47,20 @@ The table below shows the different ranges of Instances at Scaleway and their sp
Use this table to help identify the right Instance range for your use case and computing needs.
-| **Instance range** | [Learning](/instances/reference-content/learning/) | [Cost-Optimized](/instances/reference-content/cost-optimized/) | [Production-Optimized](/instances/reference-content/production-optimized/) | [Workload-Optimized](/instances/reference-content/workload-optimized/) |
-|-----------------------|----------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| **Use cases** | Discovering the Scaleway ecosystem
Hosting personal projects | Scaling your development and testing environment
Hosting CI/CD runners and containerized worker nodes | Hosting production workloads and business-critical applications
Ensuring predictable CPU performance in the face of high traffic | Hosting high-demanding analysis, in-memory calculation, big-data processing, high-performance or cache databases
Designed for high-performance web-serving, video encoding, machine learning, batch processing, CI/CD |
-| **Supported storage** | Resilient Block Storage or Local Storage | Resilient Block Storage or Local Storage, OR Resilient Block Storage only (depending on Instance type) | Resilient Block Storage | Resilient Block Storage |
-| **vCPU** | 1 core | From 1 to 32 cores | From 2 to 96 cores | From 2 to 64 cores |
-| **Shared/Dedicated** | [Shared vCPU](/instances/reference-content/choosing-shared-vs-dedicated-cpus/#shared-vcpu-instances) | [Shared vCPU](/instances/reference-content/choosing-shared-vs-dedicated-cpus/#shared-vcpu-instances) | [Dedicated vCPU](/instances/reference-content/choosing-shared-vs-dedicated-cpus/#dedicated-vcpu-instances) | [Dedicated vCPU](/instances/reference-content/choosing-shared-vs-dedicated-cpus/#dedicated-vcpu-instances) |
-| **RAM** | 1 GiB | From 2 GiB to 128 GiB | From 8 GiB to 384 GiB | From 4 GiB to 512 GiB |
-| **Maximum Bandwidth** | 100 Mbit/s | From 100 Mbps to 6 Gbps | From 400 Mbps to 20 Gbps | From 400 Mbps to 12.8 Gbps |
+| **Instance range** | [Development](/instances/reference-content/development/) | [General Purpose](/instances/reference-content/general-purpose/) | [Compute Optimized](/instances/reference-content/specialized/) | [Memory Optimized](/instances/reference-content/specialized/) |
+|-----------------------|----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **Use cases** | Scaling your development and testing environment<br />Hosting CI/CD runners and containerized worker nodes | Hosting production workloads and business-critical applications<br />Ensuring predictable CPU performance in the face of high traffic | Designed for high-performance web-serving, video encoding, machine learning, batch processing, CI/CD | Hosting high-demanding analysis, in-memory calculation, big-data processing, high-performance or cache databases |
+| **Supported storage** | Resilient Block Storage or Local Storage | Resilient Block Storage | Resilient Block Storage | Resilient Block Storage |
+| **vCPU** | From 1 to 48 cores | From 1 to 64 cores | From 2 to 64 cores | From 2 to 64 cores |
+| **Shared/Dedicated** | [Shared vCPU](/instances/reference-content/choosing-shared-vs-dedicated-cpus/#shared-vcpu-instances) | [Shared vCPU](/instances/reference-content/choosing-shared-vs-dedicated-cpus/#shared-vcpu-instances) or [Dedicated vCPU](/instances/reference-content/choosing-shared-vs-dedicated-cpus/#dedicated-vcpu-instances) | [Dedicated vCPU](/instances/reference-content/choosing-shared-vs-dedicated-cpus/#dedicated-vcpu-instances) | [Dedicated vCPU](/instances/reference-content/choosing-shared-vs-dedicated-cpus/#dedicated-vcpu-instances) |
+| **RAM** | From 2 GiB to 256 GiB | From 2 GiB to 384 GiB | From 4 GiB to 512 GiB | From 4 GiB to 512 GiB |
+| **Maximum Bandwidth** | From 200 Mbps to 10 Gbps | From 100 Mbps to 12.8 Gbps | From 400 Mbps to 12.8 Gbps | From 400 Mbps to 12.8 Gbps |
Refer to the tabs below to compare the different Instance ranges and learn more about their key use cases:
-
- Scaleway Learning Instances are virtual machines tailored for educational use, providing an ideal environment for discovering and experimenting with cloud computing.
- Perfect for hosting small, self-contained applications like a LAMP website, code repository backups, or an internal wiki, these Instances eliminate the need to overpay for minimal personal projects.
-
- Key use cases include:
- * Personal or low-traffic websites
- * Blogs
- * Development and test environments
- * Microservice hosting
- * Repository hosting
-
- Each Instance comes equipped with a single shared vCPU, 1 GB of RAM, and up to 100 Mbps of bandwidth. By adding an IPv4 address and 10 GB of storage, you can get started immediately. You can create up to one Instance per Availability Zone, currently available in FR-PAR-1, NL-AMS-1 and PL-WAW-2.
-
- Learn more about [Learning Instances](/instances/reference-content/learning/)
-
-
- Scaleway Cost-Optimized Instances are highly reliable and affordably priced, making them an ideal choice for businesses looking to balance operational flexibility, cost-efficiency, and performance.
+
+ Scaleway Development Instances are highly reliable and affordably priced, making them an ideal choice for businesses looking to balance operational flexibility, cost-efficiency, and performance.
Designed for lightweight applications, development environments, and low-intensity workloads, these Instances offer a budget-friendly cloud solution.
Key use cases include:
@@ -85,13 +70,13 @@ Refer to the tabs below to compare the different Instance ranges and learn more
* Hosting worker nodes in container ecosystems
* CI/CD runners
- Cost-Optimized Instances provide a cost-efficient yet powerful compute platform suitable for both development and light production use. Equipped with shared AMD EPYC™ vCPUs and ranging from 2 GiB to 128 GiB of RAM, they cater to a wide array of workloads with solid performance.
+ Development Instances provide a cost-efficient yet powerful compute platform suitable for both development and light production use. Equipped with shared AMD EPYC™ vCPUs and ranging from 2 GiB to 128 GiB of RAM, they cater to a wide array of workloads with solid performance.
- Learn more about [Cost-Optimized Instances](/instances/reference-content/cost-optimized/)
+ Learn more about [Development Instances](/instances/reference-content/development/)
-
- Scaleway Production-Optimized Instances are reliable, high-performance Instances with dedicated vCPUs, designed for the most demanding workloads and mission-critical applications. Leveraging the state-of-the-art AMD EPYC™ 7003 series processors, they deliver exceptional performance and production-grade computing.
+
+ Scaleway General Purpose Instances are reliable, high-performance Instances with shared or dedicated vCPUs, designed for the most demanding workloads and mission-critical applications. Leveraging the state-of-the-art AMD EPYC™ 7003 series processors, they deliver exceptional performance and production-grade computing.
Designed for critical production environments and high-traffic applications, these Instances prioritize performance and reliability with an optimal mix of compute, memory, and network resources.
Key use cases include:
@@ -100,12 +85,12 @@ Refer to the tabs below to compare the different Instance ranges and learn more
* Data analytics and business intelligence applications
* Hosting of Software-as-a-Service (SaaS) platforms
- Production-Optimized Instances provide high performance, security, and scalability, enabling organizations to efficiently manage high-traffic applications and build resilient architectures. Features like CPU pinning ensure optimized performance and reliability for your production workloads.
+ General Purpose Instances provide high performance, security, and scalability, enabling organizations to efficiently manage high-traffic applications and build resilient architectures. Features like CPU pinning ensure optimized performance and reliability for your production workloads.
- Learn more about [Production-Optimized Instances](/instances/reference-content/production-optimized/)
+ Learn more about [General Purpose Instances](/instances/reference-content/general-purpose/)
-
- Scaleway Workload-Optimized Instances are designed to handle specific types of workloads efficiently, meeting the demands of applications that require high CPU or memory resources. These Instances are ideal for running compute-intensive or memory-intensive applications such as data analytics, high-performance databases, and machine learning inference.
+
+ Scaleway Specialized Instances are designed to handle specific types of workloads efficiently, meeting the demands of applications that require high CPU or memory resources. These Instances are ideal for running compute-intensive or memory-intensive applications such as data analytics, high-performance databases, and machine learning inference.
Tailored for businesses with fluctuating needs, these Instances allow for optimal use of resources and cost-effectiveness. Two variants are on offer:
@@ -122,9 +107,9 @@ Refer to the tabs below to compare the different Instance ranges and learn more
* Machine learning
* Batch processing
- Workload-Optimized Instances precisely tailor your computing resources to your application’s needs, ensuring you neither overspend nor underperform. Whether managing CPU-intensive data analytics or memory-hungry databases, these Instances adapt to your workloads, providing a balanced, cost-effective, and high-performing cloud computing solution ready to meet your specific business challenges.
+ Specialized Instances precisely tailor your computing resources to your application’s needs, ensuring you neither overspend nor underperform. Whether managing CPU-intensive data analytics or memory-hungry databases, these Instances adapt to your workloads, providing a balanced, cost-effective, and high-performing cloud computing solution ready to meet your specific business challenges.
- Learn more about [Workload-Optimized Instances](/instances/reference-content/workload-optimized/)
+ Learn more about [Specialized Instances](/instances/reference-content/specialized/)
diff --git a/pages/instances/reference-content/choosing-shared-vs-dedicated-cpus.mdx b/pages/instances/reference-content/choosing-shared-vs-dedicated-cpus.mdx
index ccb12232db..a67fb9b673 100644
--- a/pages/instances/reference-content/choosing-shared-vs-dedicated-cpus.mdx
+++ b/pages/instances/reference-content/choosing-shared-vs-dedicated-cpus.mdx
@@ -29,7 +29,7 @@ Understanding the difference between these two techniques is key to making an in
## Shared vCPU Instances
-Shared vCPU Instances, including [Learning](/instances/reference-content/learning/) and [Cost-Optimized](/instances/reference-content/cost-optimized/), are cost-effective virtual machines in which CPU resources are shared among multiple Instances.
+Shared vCPU Instances, including [Development](/instances/reference-content/development/) and some [General Purpose](/instances/reference-content/general-purpose/) Instances, are cost-effective virtual machines in which CPU resources are shared among multiple Instances.
This means that while each Instance gets its own vCPUs, these vCPUs are scheduled on physical cores that are shared across multiple Instances.
@@ -56,7 +56,7 @@ While physical CPU threads are shared between Instances, vCPUs are dedicated to
## Dedicated vCPU Instances
-Dedicated vCPU Instances, including [Production-Optimized](/instances/reference-content/production-optimized/) and [Workload-Optimized](/instances/reference-content/workload-optimized/), provide exclusive access to physical CPU cores.
+Dedicated vCPU Instances, including selected [General Purpose](/instances/reference-content/general-purpose/), [Compute Optimized](/instances/reference-content/specialized/), and [Memory Optimized](/instances/reference-content/specialized/) Instances, provide exclusive access to physical CPU cores.
This ensures consistent and predictable performance at all times. Dedicated vCPU Instances are perfect for applications that require high CPU utilization and low latency.
diff --git a/pages/instances/reference-content/cost-optimized.mdx b/pages/instances/reference-content/cost-optimized.mdx
deleted file mode 100644
index d29bdfe3a6..0000000000
--- a/pages/instances/reference-content/cost-optimized.mdx
+++ /dev/null
@@ -1,83 +0,0 @@
----
-title: The right Instance for development purposes
-description: Find out more about Instances from Scaleway's Cost-Optimized range, such as Development and General Purpose Instances.
-dates:
- validation: 2025-03-19
- posted: 2023-02-22
-tags: instance type development DEV range cost-optimized general-purpose general purpose GP PLAY2 PRO2 play pro
----
-
-An Instance is a virtual machine in the cloud. Scaleway supports several [types of Instances](/instances/reference-content/choosing-instance-type/), each with their own set of resources, unique value propositions, and technical specifications. Each Instance supports the essential operating systems and distributions, as well as customized [Instantapps](/instances/concepts/#instantapp).
-
-## Development Instances and General Purpose Instances
-
-Scaleway's **Cost-Optimized** range includes Development Instances and General Purpose Instances. These Instances provide a balance of compute, memory, and networking resources, and can be used for a wide range of workloads. For example, they are ideally suited for scaling a development and testing environment, but also Content Management Systems (CMS) or microservices. They are also a good default choice if you are not sure which Instance type is best for your application.
-
-See below the technical specifications of Development Instances or General Purposes Instances:
-
-| Range | Cost-Optimized |
-|:--------------------------------|:------------------------------------------------------------------------------|
-| Instance Type | DEV1
GP1 |
-| Availability Zone | PAR1, PAR2, PAR3 (excl. DEV1), AMS1, AMS2, WAW1, WAW2 |
-| Storage | Local or Block |
-| Max. Bandwidth | From 200 to 500 Mbps |
-| CPU Type | DEV1: AMD EPYC 7281 (2,1 GHz) or equivalent
GP1: AMD EPYC 7410P (2 GHz) or equivalent|
-| Resources | Shared vCPUs |
-| Sizing | From 2 to 4 vCPUs
From 2 to 12 GiB RAM |
-| vCPU:RAM ratio | Various
(1:1, 1:2, 1:3) |
-
-## PLAY2 Instances and PRO2 Instances
-
-In the same **Cost-Optimized** range, you will also find PLAY2 and PRO2 Instances. These are the next generation of Development and General Purpose Instances. They present the best price-performance ratio with the most flexible vCPU to RAM ratio, and provide features that target most standard and cloud-native workloads. In other words, these Instances keep costs down while still supporting a wide variety of cloud applications, such as medium-to-high-traffic web servers, medium-sized databases and e-commerce websites.
-
-See below the technical specifications of PLAY2 and PRO2 Instances:
-
-| Range | Cost-Optimized |
-|:--------------------------------|:------------------------------------------------------------------------------|
-| Instance Type | PLAY2
PRO2 |
-| Availability Zone | PAR1, PAR2, PAR3 (excl PLAY2), AMS1, AMS2, AMS3, WAW1, WAW2, WAW3 |
-| Storage | Block |
-| Max. Bandwidth | From 100 Mbps to 6 Gbps |
-| CPU Type | AMD EPYC 7543 (2,8 GHz) |
-| Resources | Shared vCPUs |
-| Sizing | From 1 to 32 vCPUs
From 2 to 128 GiB RAM |
-| vCPU:RAM ratio | Various
(1:2, 1:4) |
-
-## COP-ARM Instances
-
-An innovative option in the **Cost-Optimized** range are COP-ARM Instances, which are powered by ARM CPUs. These Instances mark a significant step in the world of development and general purpose computing. Their ARM architecture is a key feature, offering an excellent price-performance ratio while maintaining various vCPU to RAM configurations. This ARM CPU design is especially efficient for various standard and cloud-native workloads, ensuring cost-effective operations. Ideal for a wide range of cloud applications, COP-ARM Instances are well-suited for managing medium-to-high-traffic web servers, medium-sized databases, and e-commerce platforms, all the while leveraging the unique advantages of the ARM architecture.
-
-The table below displays the technical specifications of COP-ARM Instances:
-
-| Range | Cost-Optimized |
-|:--------------------------------|:------------------------------------------------------------------------------|
-| Instance Type | COPARM1 |
-| Availability Zone | PAR2 |
-| Storage | Block |
-| Max. Bandwidth | From 200 Mbps to 3.2 Gbps |
-| CPU Type | ARM (Ampere Altra Max M128-30) |
-| Resources | Shared vCPUs |
-| Sizing | From 2 to 128 vCPUs
From 8 to 128 GiB RAM |
-| vCPU:RAM ratio | 1:4 |
-
-## Complementary services
-
-To help build and manage your applications, consider complementing your Instance with the following compatible services:
-- [Learn how to back up your Instance](/instances/how-to/create-a-backup/)
-- [Learn how to create snapshots of your Instance for specific volumes](/block-storage/how-to/create-a-snapshot/)
-- [Learn how to migrate your data from one Instance to another](/instances/how-to/migrate-instances/)
-
-## Matching use cases
-
-Try Scaleway Development Instances or General Purpose Instances with the following tutorials:
-
-- [Hosting your own GitHub runner on an Instance](/tutorials/host-github-runner/)
-- [Deploying WordPress with LEMP on Ubuntu Jammy Jellyfish (22.04 LTS)](/tutorials/wordpress-lemp-stack-ubuntu-jammy-jellyfish-22-04/)
-- [Creating your own Minecraft server](/tutorials/setup-minecraft/)
-
-Try Scaleway PLAY2 Instances or PRO2 Instances with the following tutorials:
-
-- [Configuring a Prometheus monitoring server with a Grafana dashboard](/tutorials/prometheus-monitoring-grafana-dashboard/)
-- [Setting up GitLab with a Managed Database for PostgreSQL](/tutorials/configuring-gitlab-scaleway-elements-database/)
-- [Deploying AWStats](/tutorials/deploy-awstats/)
-- [Running web analytics with Plausible on Ubuntu Linux](/tutorials/plausible-analytics-ubuntu/)
diff --git a/pages/instances/reference-content/development.mdx b/pages/instances/reference-content/development.mdx
new file mode 100644
index 0000000000..f7b2fed2e3
--- /dev/null
+++ b/pages/instances/reference-content/development.mdx
@@ -0,0 +1,46 @@
+---
+title: The right Instance for development purposes
+description: Discover Scaleway's development range of Instances, including options like the Stardust Instance, ideal for educational purposes.
+dates:
+ validation: 2025-07-15
+ posted: 2023-02-22
+tags: instance type stardust range development use-case
+---
+
+An Instance is a virtual machine in the cloud. Scaleway supports several [types of Instances](/instances/reference-content/choosing-instance-type/), each with their own set of resources, unique value propositions, and technical specifications. Each Instance supports the essential operating systems and distributions, as well as customized [Instantapps](/instances/concepts/#instantapp).
+
+## Development Instances
+
+Scaleway's **Development** range includes DEV1 and GP1 Instances. These Instances provide a balance of compute, memory, and networking resources, and can be used for a wide range of workloads. For example, they are ideally suited for scaling a development and testing environment, but also Content Management Systems (CMS) or microservices. They are also a good default choice if you are not sure which Instance type is best for your application.
+
+See below the technical specifications of Development Instances:
+
+| Range | Development |
+|:--------------------------------|:------------------------------------------------------------------------------|
+| Instance type | DEV1<br />GP1 |
+| Availability Zone | PAR1, PAR2, PAR3 (excl. DEV1), AMS1, AMS2, WAW1, WAW2 |
+| Storage | Local or Block |
+| Max. bandwidth | From 200 to 500 Mbps |
+| CPU type | DEV1: AMD EPYC 7281 (2,1 GHz) or equivalent<br />GP1: AMD EPYC 7410P (2 GHz) or equivalent |
+| Resources | Shared vCPUs |
+| Sizing | From 2 to 48 vCPUs<br />From 2 to 256 GiB RAM |
+| vCPU:RAM ratio | Various<br />(1:1, 1:2, 1:3) |
+
+## Complementary services
+
+To help build and manage your applications, consider complementing your Instance with the following compatible services:
+- [Learn how to back up your Instance](/instances/how-to/create-a-backup/)
+- [Learn how to create snapshots of your Instance for specific volumes](/block-storage/how-to/create-a-snapshot/)
+- [Learn how to migrate your data from one Instance to another](/instances/how-to/migrate-instances/)
+
+## Matching use cases
+
+Try Scaleway Development Instances with the following tutorials:
+
+- [Hosting your own GitHub runner on an Instance](/tutorials/host-github-runner/)
+- [Deploying WordPress with LEMP on Ubuntu Jammy Jellyfish (22.04 LTS)](/tutorials/wordpress-lemp-stack-ubuntu-jammy-jellyfish-22-04/)
+- [Creating your own Minecraft server](/tutorials/setup-minecraft/)
+- [Configuring a Prometheus monitoring server with a Grafana dashboard](/tutorials/prometheus-monitoring-grafana-dashboard/)
+- [Setting up GitLab with a Managed Database for PostgreSQL](/tutorials/configuring-gitlab-scaleway-elements-database/)
+- [Deploying AWStats](/tutorials/deploy-awstats/)
+- [Running web analytics with Plausible on Ubuntu Linux](/tutorials/plausible-analytics-ubuntu/)
diff --git a/pages/instances/reference-content/production-optimized.mdx b/pages/instances/reference-content/general-purpose.mdx
similarity index 50%
rename from pages/instances/reference-content/production-optimized.mdx
rename to pages/instances/reference-content/general-purpose.mdx
index 53a52052c9..395f7baeed 100644
--- a/pages/instances/reference-content/production-optimized.mdx
+++ b/pages/instances/reference-content/general-purpose.mdx
@@ -1,10 +1,10 @@
---
title: The right Instance for production purposes
-description: Find out more about Instances from Scaleway's Production-Optimized range, such as ENT1 and POP2 Instances.
+description: Find out more about Instances from Scaleway's General Purpose range.
dates:
validation: 2025-03-03
posted: 2023-02-22
-tags: instance type production production-optimized range POP2 ENT1
+tags: instance type production general-purpose range POP2 ENT1
---
An Instance is a virtual machine in the cloud. Scaleway supports several [types of Instances](/instances/reference-content/choosing-instance-type/), each with their own set of resources, unique value propositions, and technical specifications. Each Instance supports the essential operating systems and distributions, as well as customized [Instantapps](/instances/concepts/#instantapp).
@@ -16,15 +16,15 @@ An Instance is a virtual machine in the cloud. Scaleway supports several [types
POP2 Instances are recommended for general workloads, unless your workload specifically needs features unique to ENT1 Instances.
-## Production-Optimized range: ENT1 Instances
+## ENT1 Instances
-Scaleway's **Production-Optimized** range includes Enterprise Instances (ENT1). ENT1 Instances are high-end, dedicated cloud Instances for demanding workloads. They offer the highest consistent performance per core to support real-time applications. In addition, their computing power makes them generally more robust for compute-intensive workloads.
+Scaleway's **General Purpose** range includes Enterprise Instances (ENT1). ENT1 Instances are high-end, dedicated cloud Instances for demanding workloads. They offer the highest consistent performance per core to support real-time applications. In addition, their computing power makes them generally more robust for compute-intensive workloads.
They are best suited for production websites, enterprise applications, high-traffic databases, and any application that requires 100% sustained CPU usage such as monitoring and analytics software. This includes Prometheus and Grafana, gaming sessions, and ad serving.
See below the technical specifications of Enterprise Instances:
-| Range | Production-Optimized |
+| Range | General Purpose |
|:--------------------------------|:------------------------------------------------------------------------------|
| Instance Type | ENT1 |
| Availability Zone | PAR1, PAR2, PAR3, AMS1, AMS2, AMS3, WAW1, WAW2, WAW3 |
@@ -36,15 +36,15 @@ See below the technical specifications of Enterprise Instances:
| Sizing | From 2 to 96 vCPUs
From 8 GiB to 384 GiB RAM |
| vCPU:RAM ratio | 1:4 |
-## Production-Optimized range: POP2 Instances
+## POP2 Instances
-Scaleway's **Production-Optimized** range includes POP2 Instances. POP2 Instances are high-end, dedicated cloud Instances for demanding workloads. They offer the highest consistent performance per core to support real-time applications. In addition, their computing power makes them generally more robust for compute-intensive workloads.
+Scaleway's **General Purpose** range includes POP2 Instances. POP2 Instances are high-end, dedicated cloud Instances for demanding workloads. They offer the highest consistent performance per core to support real-time applications. In addition, their computing power makes them generally more robust for compute-intensive workloads.
They are best suited for production websites, demanding applications, high-traffic databases, and any application that requires 100% sustained CPU usage such as monitoring and analytics software. This includes Prometheus and Grafana, gaming sessions, and ad serving.
-See below the technical specifications of Production-Optimized Instances:
+See below the technical specifications of General Purpose Instances:
-| Range | Production-Optimized |
+| Range | General Purpose |
|:--------------------------------|:------------------------------------------------------------------------------|
| Instance Type | POP2 |
| Availability Zone | PAR1, PAR2, PAR3, AMS1, AMS2, AMS3, WAW2, WAW3 |
@@ -55,14 +55,48 @@ See below the technical specifications of Production-Optimized Instances:
| Sizing | From 2 to 96 vCPUs
From 8 GiB to 384 GiB RAM |
| vCPU:RAM ratio | 1:4 |
+## PLAY2 Instances and PRO2 Instances
+
+In the same **General Purpose** range, you will also find PLAY2 and PRO2 Instances. These are the next generation of DEV1 and GP1 Instances. They offer the best price-performance ratio with the most flexible vCPU to RAM ratio, and provide features that target most standard and cloud-native workloads. In other words, these Instances keep costs down while still supporting a wide variety of cloud applications, such as medium-to-high-traffic web servers, medium-sized databases, and e-commerce websites.
+
+See below the technical specifications of PLAY2 and PRO2 Instances:
+
+| Range | General Purpose |
+|:--------------------------------|:------------------------------------------------------------------------------|
+| Instance type | PLAY2<br />PRO2 |
+| Availability Zone | PAR1, PAR2, PAR3 (excl PLAY2), AMS1, AMS2, AMS3, WAW1, WAW2, WAW3 |
+| Storage | Block |
+| Max. bandwidth | From 100 Mbps to 6 Gbps |
+| CPU type | AMD EPYC 7543 (2,8 GHz) |
+| Resources | Shared vCPUs |
+| Sizing | From 1 to 32 vCPUs<br />From 2 to 128 GiB RAM |
+| vCPU:RAM ratio | Various<br />(1:2, 1:4) |
+
+## COP-ARM Instances
+
+COP-ARM Instances are an innovative option, powered by ARM CPUs. These Instances mark a significant step in the world of development and general-purpose computing. Their ARM architecture is a key feature, offering an excellent price-performance ratio while maintaining various vCPU to RAM configurations. This ARM CPU design is especially efficient for standard and cloud-native workloads, ensuring cost-effective operations. Ideal for a wide range of cloud applications, COP-ARM Instances are well-suited for managing medium-to-high-traffic web servers, medium-sized databases, and e-commerce platforms, all the while leveraging the unique advantages of the ARM architecture.
+
+The table below displays the technical specifications of COP-ARM Instances:
+
+| Range | General Purpose |
+|:--------------------------------|:------------------------------------------------------------------------------|
+| Instance type | COPARM1 |
+| Availability Zone | PAR2 |
+| Storage | Block |
+| Max. bandwidth | From 200 Mbps to 3.2 Gbps |
+| CPU type | ARM (Ampere Altra Max M128-30) |
+| Resources | Shared vCPUs |
+| Sizing | From 2 to 128 vCPUs<br />From 8 to 128 GiB RAM |
+| vCPU:RAM ratio | 1:4 |
+
## Complementary services
-To help build and manage your applications, consider complementing your Production-Optimized Instances with the following compatible services:
+To help build and manage your applications, consider complementing your General Purpose Instances with the following compatible services:
- [Learn how to back up your Instance](/instances/how-to/create-a-backup/)
- [Learn how to create snapshots of your Instance for specific volumes](/block-storage/how-to/create-a-snapshot/)
- [Learn how to migrate your data from one Instance to another](/instances/how-to/migrate-instances/)
## Matching use cases
-Try Scaleway Production-Optimized Instances with the following tutorials:
+Try Scaleway General Purpose Instances with the following tutorials:
- [Setting up a Kubernetes cluster using Rancher on Ubuntu Bionic Beaver](/tutorials/setup-k8s-cluster-rancher/)
- [Deploying a RTMP Streaming Server](/tutorials/rtmp-self-hosted-streaming/)
- [Configuring a Nagios Monitoring System](/tutorials/configure-nagios-monitoring/)
diff --git a/pages/instances/reference-content/instances-datasheet.mdx b/pages/instances/reference-content/instances-datasheet.mdx
index 1eba413f22..de6049c95d 100644
--- a/pages/instances/reference-content/instances-datasheet.mdx
+++ b/pages/instances/reference-content/instances-datasheet.mdx
@@ -28,13 +28,14 @@ This datasheet provides a concise overview of the performance, technical feature
| Resources | Shared vCPUs |
| Sizing | 1 vCPU, 1 GiB RAM |
| vCPU:RAM ratio | 1:1 |
-| SLO | None |
+| [SLO](https://www.scaleway.com/en/virtual-instances/sla/) | None |
-## Development and General Purpose Instances
+
+## Development Instances
See below the technical specifications of Development and General Purpose Instances:
-| Range | Cost-Optimized |
+| Range | Development |
|:--------------------------------|:-------------------------------------------------------------------------------------------|
| Instance type | DEV1<br />GP1 |
| Availability Zone | PAR1, PAR2, PAR3 (excl. DEV1), AMS1, AMS2, WAW1, WAW2 |
@@ -45,13 +46,15 @@ See below the technical specifications of Development and General Purpose Instan
| Resources | Shared vCPUs |
| Sizing | From 2 to 4 vCPUs<br />From 2 to 12 GiB RAM |
| vCPU:RAM ratio | Various<br />(1:1, 1:2, 1:3) |
-| SLO | None |
+| [SLO](https://www.scaleway.com/en/virtual-instances/sla/) | None |
+
-## PLAY2 and PRO2 Instances
+## General Purpose Instances
+### PLAY2 and PRO2 Instances
See below the technical specifications of PLAY2 and PRO2 Instances:
-| Range | Cost-Optimized |
+| Range | General Purpose |
|:--------------------------------|:------------------------------------------------------------------------------|
| Instance type | PLAY2<br />PRO2 |
| Availability Zone | PAR1, PAR2, PAR3 (excl. PLAY2), AMS1, AMS2, AMS3, WAW1, WAW2, WAW3 |
@@ -62,13 +65,14 @@ See below the technical specifications of PLAY2 and PRO2 Instances:
| Resources | Shared vCPUs |
| Sizing | From 1 to 32 vCPUs<br />From 2 to 128 GiB RAM |
| vCPU:RAM ratio | Various<br />(1:2, 1:4) |
-| SLO | None |
+| [SLO](https://www.scaleway.com/en/virtual-instances/sla/) | None |
+
-## COP-ARM Instances
+### COP-ARM Instances
The table below displays the technical specifications of COP-ARM Instances:
-| Range | Cost-Optimized |
+| Range | General Purpose |
|:--------------------------------|:------------------------------------------------------------------------------|
| Instance type | COPARM1 |
| Availability Zone | PAR2 |
@@ -79,13 +83,13 @@ The table below displays the technical specifications of COP-ARM Instances:
| Resources | Shared vCPUs |
| Sizing | From 2 to 128 vCPUs<br />From 8 to 128 GiB RAM |
| vCPU:RAM ratio | 1:4 |
-| SLO | None |
+| [SLO](https://www.scaleway.com/en/virtual-instances/sla/) | None |
-## Enterprise Instances
+### ENT1 Instances
-See below the technical specifications of Enterprise Instances:
+See below the technical specifications of ENT1 Instances:
-| Range | Production-Optimized |
+| Range | General Purpose |
|:--------------------------------|:------------------------------------------------------------------------------|
| Instance type | ENT1 |
| Availability Zone | PAR1, PAR2, PAR3, AMS1, AMS2, AMS3, WAW1, WAW2, WAW3 |
@@ -97,15 +101,16 @@ See below the technical specifications of Enterprise Instances:
| Security feature | Secure Encrypted Virtualization |
| Sizing | From 2 to 96 vCPUs<br />From 8 GiB to 384 GiB RAM |
| vCPU:RAM ratio | 1:4 |
-| SLO | None |
+| [SLO](https://www.scaleway.com/en/virtual-instances/sla/) | None |
+
\* Instances with dedicated vCPU do not share their compute resources with other Instances (1 vCPU = 1 CPU thread dedicated to that Instance). This type of Instance is particularly recommended for running production-grade compute-intensive applications.
-## Production-Optimized Instances
+### POP2 Instances
-See below the technical specifications of Production-Optimized Instances:
+See below the technical specifications of POP2 Instances:
-| Range | Production-Optimized |
+| Range | General Purpose |
|:--------------------------------|:------------------------------------------------------------------------------|
| Instance type | POP2 |
| Availability Zone | PAR1, PAR2, PAR3, AMS1, AMS2, AMS3, WAW2, WAW3 |
@@ -116,18 +121,19 @@ See below the technical specifications of Production-Optimized Instances:
| Resources | Dedicated vCPUs* |
| Sizing | From 2 to 96 vCPUs<br />From 8 GiB to 384 GiB RAM |
| vCPU:RAM ratio | 1:4 |
-| SLO | 99.5% availability |
+| [SLO](https://www.scaleway.com/en/virtual-instances/sla/) | 99.5% availability |
+
\* Instances with dedicated vCPU do not share their compute resources with other Instances (1 vCPU = 1 CPU thread dedicated to that Instance). This type of Instance is particularly recommended for running production-grade compute-intensive applications.
-## Workload-Optimized Instances
+## Specialized Instances
-See below the technical specifications of Workload-Optimized Instances:
+See below the technical specifications of Specialized Instances:
* High-Memory: Designed for RAM-intensive usage and high-memory production applications, these machines provide a higher RAM to vCPU ratio.
* High-CPU: Made for high computing workloads and compute-bound applications, these machines provide a higher vCPU to RAM ratio.
-| Range | Workload-Optimized |
+| Range | Specialized |
|:--------------------------------|:------------------------------------------------------------------------------|
| Instance type | POP2-HM, POP2-HC, POP2-HN |
| Availability Zone | PAR1, PAR2, PAR3, AMS1, AMS2, WAW2, WAW3 |
@@ -139,6 +145,7 @@ See below the technical specifications of Workload-Optimized Instances:
| Security feature | Secure Encrypted Virtualization |
| Sizing | From 2 to 64 dedicated vCPUs<br />From 4 GiB to 512 GiB RAM |
| vCPU:RAM ratio | 1:8 (POP2-HM), 1:2 (POP2-HC and POP2-HN) |
-| SLO | 99.5% availability |
+| [SLO](https://www.scaleway.com/en/virtual-instances/sla/) | 99.5% availability |
+
\* Instances with dedicated vCPU do not share their compute resources with other Instances (1 vCPU = 1 CPU thread dedicated to that Instance). This type of Instance is particularly recommended for running production-grade compute-intensive applications.
\ No newline at end of file
diff --git a/pages/instances/reference-content/learning.mdx b/pages/instances/reference-content/learning.mdx
deleted file mode 100644
index 31123f13a1..0000000000
--- a/pages/instances/reference-content/learning.mdx
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: The right Instance for learning purposes
-description: Discover Scaleway's learning range of Instances, including options like the Stardust Instance, ideal for educational purposes.
-dates:
- validation: 2025-07-15
- posted: 2023-02-22
-tags: instance type stardust range learning use-case
----
-
-An Instance is a virtual machine in the cloud. Scaleway supports several [types of Instances](/instances/reference-content/choosing-instance-type/), each with their own set of resources, unique value propositions, and technical specifications. Each Learning Instance supports the essential operating systems and distributions.
-
-## Learning range: Stardust Instances
-
-Scaleway's **Learning** range includes our Stardust Instances. Stardust Instances are the perfect Instances for small workloads and simple applications. They are built to host small internal applications, staging environments or low-traffic web servers.
-
- See below the technical specifications of Stardust Instances:
-
-| **Range** | Learning |
-|:------------------------|:------------------------------------------|
-| **Instance Type** | STARDUST1-S |
-| **Availability Zone** | PAR1, AMS1 and WAW2 |
-| **Storage** | Local or Block |
-| **Max. Bandwidth** | 100 Mbit/s |
-| **CPU Type** | AMD EPYC 7282 (2,8 GHz) or 7281 (2,1 GHz) |
-| **Resources** | Shared vCPUs |
-| **Sizing** | 1 vCPU, 1 GiB RAM |
-| **vCPU:RAM ratio** | 1:1 |
-| **SLA** | None |
-
-
-## Complementary services
-
-To help build and manage your applications, consider complementing your Stardust Instance with the following compatible services:
-
-- [Learn how to back up your Instance](/instances/how-to/create-a-backup/)
-- [Learn how to create snapshots of your Instance for specific volumes](/block-storage/how-to/create-a-snapshot/)
-- [Learn how to migrate your data from one Instance to another](/instances/how-to/migrate-instances/)
-
-## Matching use cases
-
-Try Scaleway Stardust Instances with the following tutorials:
-
-- [Deploying Strapi with a click on Stardust](/tutorials/strapi/)
-- [Using Bash to display a Christmas tree](/tutorials/bash-christmas-tree/)
-- [Setting a Private mesh VPN with WireGuard](/tutorials/wireguard-mesh-vpn/)
-- [First steps with the Linux command line](/tutorials/first-steps-linux-command-line/)
\ No newline at end of file
diff --git a/pages/instances/reference-content/workload-optimized.mdx b/pages/instances/reference-content/specialized.mdx
similarity index 78%
rename from pages/instances/reference-content/workload-optimized.mdx
rename to pages/instances/reference-content/specialized.mdx
index bfc5ca900e..37165deb3b 100644
--- a/pages/instances/reference-content/workload-optimized.mdx
+++ b/pages/instances/reference-content/specialized.mdx
@@ -1,6 +1,6 @@
---
-title: The right Instance for workload purposes
+title: The right Instance for specialized purposes
-description: Find out more about Instances from Scaleway's Workload-Optimized range, such as POP2-HC, POP2-HN, and POP2-HM.
+description: Find out more about Instances from Scaleway's Specialized range, such as POP2-HC, POP2-HN, and POP2-HM.
dates:
validation: 2025-06-24
posted: 2023-05-11
@@ -9,17 +9,17 @@ tags: instance type workload workload-optimized range POP2-HM POP2-HC POP2-HN
An Instance is a virtual machine in the cloud. Scaleway supports several [types of Instances](/instances/reference-content/choosing-instance-type/), each with their own set of resources, unique value propositions, and technical specifications. Each Instance supports the essential operating systems and distributions, as well as customized [InstantApps](/instances/concepts/#instantapp).
-## Workload-Optimized range: POP2-HM and POP2-HC Instances
+## Specialized range: POP2-HM and POP2-HC Instances
-Scaleway's **Workload-Optimized** range includes POP2-HM (High Memory), POP2-HN (High Network) and POP2-HC (High CPU). Equipped with dedicated vCPUs, these Instances are optimized for workload-intensive applications.
+Scaleway's **Specialized** range includes POP2-HM (High Memory), POP2-HN (High Network) and POP2-HC (High CPU). Equipped with dedicated vCPUs, these Instances are optimized for workload-intensive applications.
* **High-Memory**: Designed for RAM-intensive usage and high-memory production applications, these machines provide the highest RAM to vCPU ratio, making them ideal for demanding analysis, in-memory calculation, big-data processing, and high-performance or cache databases.
* **High-CPU**: Made for high computing workloads and compute-bound applications, these machines provide the highest vCPU to RAM ratio, making them ideal for high-performance web serving, video encoding, machine learning, batch processing, and CI/CD.
-See below the technical specifications of Workload-Optimized Instances:
+See below the technical specifications of Specialized Instances:
-| Range | Workload-Optimized |
+| Range | Specialized |
|:--------------------------------|:------------------------------------------------------------------------------|
| Instance Type | POP2-HM, POP2-HC, POP2-HN |
| Availability Zone | PAR1, PAR2, PAR3, AMS1, AMS2, WAW2, WAW3 |
@@ -31,12 +31,12 @@ See below the technical specifications of Workload-Optimized Instances:
| vCPU:RAM ratio | 1:8 (POP2-HM), 1:2 (POP2-HC and POP2-HN) |
## Complementary services
-To help build and manage your applications, consider complementing your Workload-Optimized Instance with the following compatible services:
+To help build and manage your applications, consider complementing your Specialized Instance with the following compatible services:
- [Learn how to back up your Instance](/instances/how-to/create-a-backup/)
- [Learn how to create snapshots of your Instance for specific volumes](/block-storage/how-to/create-a-snapshot/)
- [Learn how to migrate your data from one Instance to another](/instances/how-to/migrate-instances/)
## Matching use cases
-Try Scaleway Workload-Optimized Instances with the following tutorials:
+Try Scaleway Specialized Instances with the following tutorials:
- [Setting up a MySQL database engine on Ubuntu Linux](/tutorials/setup-mysql/)
- [Project management for technical teams with Focalboard on Ubuntu Instances](/tutorials/focalboard-project-management/)
diff --git a/pages/instances/reference-content/understanding-instance-pricing.mdx b/pages/instances/reference-content/understanding-instance-pricing.mdx
index e8b63f3443..d51463bbf9 100644
--- a/pages/instances/reference-content/understanding-instance-pricing.mdx
+++ b/pages/instances/reference-content/understanding-instance-pricing.mdx
@@ -45,9 +45,9 @@ For cost optimization, Scaleway offers [savings plans](/billing/additional-conte
The **Compute savings plan** applies to the following Instance types across all regions:
-- **Cost-optimized**: DEV1, GP1, PLAY2, PRO2
-- **Production-optimized**: ENT1, POP2
-- **Workload-optimized**: POP2 HC
+- Development Instances
+- General Purpose Instances
+- Specialized Instances
However, savings plans **do not apply** to the following Instance types: H100, RENDER, L40S, L4, COPARM1, START1, X64, and POP-WIN.
diff --git a/pages/kubernetes/api-cli/creating-managing-kubernetes-lifecycle-cliv2.mdx b/pages/kubernetes/api-cli/creating-managing-kubernetes-lifecycle-cliv2.mdx
index 79c97757c7..fa6522be4c 100644
--- a/pages/kubernetes/api-cli/creating-managing-kubernetes-lifecycle-cliv2.mdx
+++ b/pages/kubernetes/api-cli/creating-managing-kubernetes-lifecycle-cliv2.mdx
@@ -166,7 +166,7 @@ Kubeconfig for cluster your-cluster-ID successfully written at /Users/youruserna
A pool is a set of identical nodes. A pool has a name, a size (its current number of nodes), node count limits (minimum and maximum), and a Scaleway Instance type.
- Instance type with insufficient memory are not eligible to become nodes (DEV1-S, PLAY2-PICO, STARDUST)
+ Instance types with insufficient memory (less than 4 GB) are not eligible to become nodes.
Changing these limits increases/decreases the size of a pool. Thus, when autoscaling is enabled, the pool will grow or shrink inside those limits, depending on its load.
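+
+For readers who prefer a concrete starting point, the hedged sketch below adds an autoscaling pool to an existing cluster. The cluster ID, pool name, node type, and size limits are placeholders, and the argument names reflect the current `scw k8s pool create` syntax, so check `scw k8s pool create --help` before running it.
+
+```bash
+# Add an autoscaling pool of PRO2-S nodes to an existing Kapsule cluster.
+# <cluster-id> is a placeholder; autoscaling keeps the pool between min-size and max-size.
+scw k8s pool create \
+  cluster-id=<cluster-id> \
+  name=pool-general-purpose \
+  node-type=PRO2-S \
+  size=3 \
+  min-size=1 \
+  max-size=5 \
+  autoscaling=true \
+  autohealing=true
+```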
diff --git a/pages/kubernetes/how-to/deploy-x86-arm-images.mdx b/pages/kubernetes/how-to/deploy-x86-arm-images.mdx
index 1b245b56ac..a4c9ba2fa9 100644
--- a/pages/kubernetes/how-to/deploy-x86-arm-images.mdx
+++ b/pages/kubernetes/how-to/deploy-x86-arm-images.mdx
@@ -12,7 +12,7 @@ However, Kubernetes provides several mechanisms to manage architectural diversit
## What is ARM architecture, and why is it different from x86?
- - ARM architecture is commonly used in devices like Raspberry Pi, IoT devices, and recent [Cost-Optimized Instances based on ARM](https://www.scaleway.com/en/cost-optimized-instances-based-on-arm/).
+ - ARM architecture is commonly used in devices like Raspberry Pi, IoT devices, and recent [General Purpose Instances based on ARM](https://www.scaleway.com/en/cost-optimized-instances-based-on-arm/).
- Kubernetes clusters may consist of nodes with different architectures, including x86 and ARM.
- Deploying applications across these diverse architectures requires special consideration.
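+
+To see how a mixed-architecture cluster surfaces this information, the hedged sketch below lists each node's CPU architecture using the standard `kubernetes.io/arch` label; pinning a workload to one architecture then only requires a matching `nodeSelector`.
+
+```bash
+# Show the architecture of every node via the standard kubernetes.io/arch label.
+kubectl get nodes -L kubernetes.io/arch
+
+# Typical values: amd64 for x86 pools, arm64 for ARM pools.
+# To pin a Deployment to ARM nodes, add the following to its pod spec:
+#   nodeSelector:
+#     kubernetes.io/arch: arm64
+```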
diff --git a/pages/kubernetes/how-to/use-nvidia-gpu-operator.mdx b/pages/kubernetes/how-to/use-nvidia-gpu-operator.mdx
index a0e03c3509..b5cce82806 100644
--- a/pages/kubernetes/how-to/use-nvidia-gpu-operator.mdx
+++ b/pages/kubernetes/how-to/use-nvidia-gpu-operator.mdx
@@ -10,7 +10,7 @@ import Requirements from '@macros/iam/requirements.mdx'
Kubernetes Kapsule and Kosmos support NVIDIA's official Kubernetes operator for all GPU pools.
-This operator is compatible with all Scaleway [GPU Instance](https://www.scaleway.com/en/gpu-instances/) offers.
+This operator is compatible with all Scaleway [GPU Instance](/gpu/reference-content/choosing-gpu-instance-type/) offers.
The GPU operator is set up for all GPU pools created in Kubernetes Kapsule and Kosmos, providing automated installation of all required software on GPU worker nodes, such as the device plugin, container toolkit, GPU drivers etc. For more information, refer to [the GPU operator overview](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/overview.html).
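+
+As a quick sanity check that the operator has prepared a GPU pool, the hedged sketch below inspects the operator pods and the GPU resources advertised by the device plugin. The `gpu-operator` namespace is an assumption and may differ depending on how the operator is deployed in your cluster.
+
+```bash
+# List the operator components (driver, container toolkit, device plugin, etc.).
+# The gpu-operator namespace is an assumption; adjust it if the operator runs elsewhere.
+kubectl get pods -n gpu-operator
+
+# Check that GPU nodes advertise the nvidia.com/gpu resource set up by the device plugin.
+kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.nvidia\.com/gpu}{"\n"}{end}'
+```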
diff --git a/pages/opensearch/how-to/create-opensearch-deployment.mdx b/pages/opensearch/how-to/create-opensearch-deployment.mdx
index c2f984ff93..c4677f6ba0 100644
--- a/pages/opensearch/how-to/create-opensearch-deployment.mdx
+++ b/pages/opensearch/how-to/create-opensearch-deployment.mdx
@@ -29,7 +29,7 @@ This page explains how to create an OpenSearch deployment using the Scaleway con
5. Select a deployment configuration. You can choose between:
- **High Availability**: ensures fault tolerance in case of node failure, workload distribution across all nodes for improved performance, service continuity during rolling updates.
- - **Standalone**: cost-optimized single-node deployment for testing environments and non-critical, small-scale applications, without redundancy.
+ - **Standalone**: cost-efficient single-node deployment for testing environments and non-critical, small-scale applications, without redundancy.
6. Choose a node type for your deployment. Refer to the [dedicated documentation](/opensearch/reference-content/shared-vs-dedicated-resources/) for more information on shared and dedicated compute resources.
diff --git a/pages/opensearch/quickstart.mdx b/pages/opensearch/quickstart.mdx
index 7a4b9b327c..cb4123f5bc 100644
--- a/pages/opensearch/quickstart.mdx
+++ b/pages/opensearch/quickstart.mdx
@@ -26,7 +26,7 @@ This guide covers the basic steps to set up, log in to, and delete a Cloud Essen
2. Click **+ Create deployment**. A creation form displays.
-3. Select the **Standalone** deployment configuration. Standalone deployments are cost-optimized single-node deployments for testing environments and non-critical, small-scale applications, without redundancy.
+3. Select the **Standalone** deployment configuration. Standalone deployments are cost-efficient single-node deployments for testing environments and non-critical, small-scale applications, without redundancy.
4. Select the **SEARCHDB-SHARED-2C-8G** node type.
diff --git a/tutorials/ark-server/index.mdx b/tutorials/ark-server/index.mdx
index e5531aa19f..944187dd44 100644
--- a/tutorials/ark-server/index.mdx
+++ b/tutorials/ark-server/index.mdx
@@ -49,10 +49,10 @@ In this tutorial, you will learn how to create an ARK server on a [Scaleway Inst
- A copy of [ARK: Survival Evolved](https://store.steampowered.com/app/346110/ARK_Survival_Evolved/) for your local computer
- We recommend you follow this tutorial using a [Cost-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [Development Instance](/instances/reference-content/development/).
-Creating an ARK server can be done in a few steps on a [Scaleway Instance](https://www.scaleway.com/en/cost-optimized-instances/). If you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).
+Creating an ARK server can be done in a few steps on a [Scaleway Instance](/instances/reference-content/development/). If you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).
The ARK: Survival Evolved game server application requires at least 6 GB of RAM to start. Memory requirements increase with the number of connected players and with the mods you activate. We recommend that you use at minimum a **DEV1-L** Instance for smooth gameplay.
diff --git a/tutorials/configuring-gitlab-scaleway-elements-database/index.mdx b/tutorials/configuring-gitlab-scaleway-elements-database/index.mdx
index ce11984b7e..9f2ecc04a9 100644
--- a/tutorials/configuring-gitlab-scaleway-elements-database/index.mdx
+++ b/tutorials/configuring-gitlab-scaleway-elements-database/index.mdx
@@ -17,10 +17,10 @@ import Requirements from '@macros/iam/requirements.mdx'
GitLab serves as an open-core Git repository manager, offering a broad suite of features such as a wiki, issue tracking, and CI/CD pipelines. In the open-core model, the fundamental software functionalities are available under an open-source license, complemented by optional modules.
Numerous major technology companies rely on GitLab to oversee their software development life cycles. Originally crafted in Ruby, the platform now incorporates Go and Vue.js within its technology stack.
-For those seeking a dependable and high-performance hosting solution, Scaleway Cost-Optimized Instances present an ideal choice for your GitLab infrastructure. They provide a robust infrastructure for hosting your GitLab Instance, coupled with a Managed Database for PostgreSQL.
+For those seeking a dependable and high-performance hosting solution, Scaleway General Purpose Instances present an ideal choice for your GitLab infrastructure. They provide a robust infrastructure for hosting your GitLab Instance, coupled with a Managed Database for PostgreSQL.
- We recommend you follow this tutorial using a [Cost-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/create-models-django/index.mdx b/tutorials/create-models-django/index.mdx
index 84b8994826..dd957310c5 100644
--- a/tutorials/create-models-django/index.mdx
+++ b/tutorials/create-models-django/index.mdx
@@ -24,7 +24,7 @@ import Requirements from '@macros/iam/requirements.mdx'
To follow this tutorial, we assume that you completed the [first tutorial on Django installation and configuration](/tutorials/django-ubuntu-focal-fossa/).
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/deploy-awstats/index.mdx b/tutorials/deploy-awstats/index.mdx
index 7dc046f90a..0de1c5ab14 100644
--- a/tutorials/deploy-awstats/index.mdx
+++ b/tutorials/deploy-awstats/index.mdx
@@ -29,7 +29,7 @@ AwStats leverages log file analysis to parse data from a wide range of web serve
- `sudo` privileges or access to the root user
- We recommend you follow this tutorial using a [Cost-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
## Installing Apache
diff --git a/tutorials/deploy-chatwoot-self-care/index.mdx b/tutorials/deploy-chatwoot-self-care/index.mdx
index 749b52edef..f5ca92719b 100644
--- a/tutorials/deploy-chatwoot-self-care/index.mdx
+++ b/tutorials/deploy-chatwoot-self-care/index.mdx
@@ -1,6 +1,6 @@
---
-title: Deploying Chatwoot on Scaleway Production-Optimized Instances
-description: Learn how to deploy the Chatwoot CRM on Scaleway Production-Optimized Instances with this comprehensive guide. Follow our step-by-step instructions to set up and optimize Chatwoot for seamless customer relationship management on Scaleway.
+title: Deploying Chatwoot on Scaleway General Purpose Instances
+description: Learn how to deploy the Chatwoot CRM on Scaleway General Purpose Instances with this comprehensive guide. Follow our step-by-step instructions to set up and optimize Chatwoot for seamless customer relationship management on Scaleway.
tags: chatwoot crm customer-service
products:
- instances
@@ -27,7 +27,7 @@ Chatwoot is designed to enhance customer satisfaction and improve customer suppo
- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- An [SSH key](/organizations-and-projects/how-to/create-ssh-key/)
-- A [Production-Optimized Instance](/instances/how-to/create-an-instance/) with at least 8 GB RAM and 25 GB Block Storage, running on Ubuntu 20.04 LTS or later
+- A [General Purpose Instance](/instances/how-to/create-an-instance/) with at least 8 GB RAM and 25 GB Block Storage, running on Ubuntu 20.04 LTS or later
- Installed [Ruby 3.2.2](/tutorials/ruby-on-rails/) on your Instance
- A (sub-)domain pointed to the Instance's IP address
diff --git a/tutorials/deploy-instances-packer-terraform/index.mdx b/tutorials/deploy-instances-packer-terraform/index.mdx
index dc899a5f85..ef2ea604ef 100644
--- a/tutorials/deploy-instances-packer-terraform/index.mdx
+++ b/tutorials/deploy-instances-packer-terraform/index.mdx
@@ -27,7 +27,7 @@ Both applications are available for Linux, macOS, Windows, FreeBSD, and NetBSD.
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
## Downloading and installing Packer
diff --git a/tutorials/django-ubuntu-focal-fossa/index.mdx b/tutorials/django-ubuntu-focal-fossa/index.mdx
index a3417220bb..6a20e8e6c1 100644
--- a/tutorials/django-ubuntu-focal-fossa/index.mdx
+++ b/tutorials/django-ubuntu-focal-fossa/index.mdx
@@ -1,5 +1,5 @@
---
-title: Setting up a Django Web Framework on a Scaleway Production-Optimized Instance running Ubuntu 20.04 LTS (Focal Fossa)
+title: Setting up a Django Web Framework on a Scaleway General Purpose Instance running Ubuntu 20.04 LTS (Focal Fossa)
description: This page shows how to install Django web framework on Ubuntu 20.04 LTS (Focal Fossa)
products:
- instances
@@ -28,7 +28,7 @@ There are many different ways to install Django on Ubuntu:
In this tutorial, we install Django using `pip` in a virtual environment, as it is the most practical and most flexible way to install without affecting the larger system, along with other per-project customizations and packages.
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/focalboard-project-management/index.mdx b/tutorials/focalboard-project-management/index.mdx
index 0ed484350a..190e408b8c 100644
--- a/tutorials/focalboard-project-management/index.mdx
+++ b/tutorials/focalboard-project-management/index.mdx
@@ -1,6 +1,6 @@
---
title: Project management for technical teams with Focalboard on Ubuntu Instances
-description: This page shows how to set up a Focalboard project-management tool on Cost-Optimized Scaleway Instances running Ubuntu Linux
+description: This page shows how to set up the Focalboard project-management tool on Scaleway General Purpose Instances running Ubuntu Linux
tags: focalboard ubuntu project-management nginx mariadb
hero:
products:
@@ -22,7 +22,7 @@ Focalboard boasts a comprehensive array of functionalities as one of its standou
In short, if you are seeking a robust and budget-friendly project management solution, Focalboard unquestionably merits exploration. Its impressive feature set coupled with a commitment to privacy positions it as a tool capable of fostering team organization and goal attainment.
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/production-optimized/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/host-github-runner/index.mdx b/tutorials/host-github-runner/index.mdx
index 11e090847c..d5a3b4bcd0 100644
--- a/tutorials/host-github-runner/index.mdx
+++ b/tutorials/host-github-runner/index.mdx
@@ -21,10 +21,10 @@ GitHub Actions stands as a versatile tool, simplifying the automation of all you
GitHub offers a limited range of complementary resources for constructing applications through GitHub Actions; however, proficient developers can swiftly encounter these constraints. Teams engaged in professional-grade projects might also want full authority over their build environment. GitHub extends the option to use runners on self-managed instances. A runner is an application that executes tasks from a GitHub Actions workflow.
-In this guide, you will learn how to configure a GitHub Actions runner on a Scaleway Instance, effectively streamlining your project workflows. For typical workloads, opting for a [Cost-Optimized Instance](/instances/reference-content/choosing-instance-type/) is recommended. For resource-intensive workloads, the use of Production-Optimized Instances provides dedicated resources for enhanced performance.
+In this guide, you will learn how to configure a GitHub Actions runner on a Scaleway Instance, effectively streamlining your project workflows. For typical workloads, opting for a [General Purpose Instance with shared resources](/instances/reference-content/choosing-instance-type/) is recommended. For resource-intensive workloads, General Purpose Instances with dedicated resources provide enhanced performance.
- We recommend you follow this tutorial using a [Cost-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/load-testing-vegeta/index.mdx b/tutorials/load-testing-vegeta/index.mdx
index dc8135ffcf..e64dc9d519 100644
--- a/tutorials/load-testing-vegeta/index.mdx
+++ b/tutorials/load-testing-vegeta/index.mdx
@@ -28,7 +28,7 @@ Before transitioning an application to a production environment, load testing he
- Identification of performance constraints attributed to technical specifications of employed Instances
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/migrate-mysql-databases-postgresql-pgloader/index.mdx b/tutorials/migrate-mysql-databases-postgresql-pgloader/index.mdx
index bf1deb9fee..548659fe48 100644
--- a/tutorials/migrate-mysql-databases-postgresql-pgloader/index.mdx
+++ b/tutorials/migrate-mysql-databases-postgresql-pgloader/index.mdx
@@ -17,7 +17,7 @@ pgLoader is an open-source database migration tool developed to simplify the pro
The tool supports migrations from several file types and database engines like [MySQL](https://www.mysql.com/), [MS SQL](https://www.microsoft.com/en-us/sql-server/sql-server-2019) and [SQLite](https://www.sqlite.org/index.html).
-In this tutorial, you learn how to migrate an existing remote MySQL database to a [Database for PostgreSQL](https://www.scaleway.com/en/database/) using pgLoader and an intermediate [Development Instance](https://www.scaleway.com/en/cost-optimized-instances/) running Ubuntu Linux.
+In this tutorial, you learn how to migrate an existing remote MySQL database to a [Database for PostgreSQL](https://www.scaleway.com/en/database/) using pgLoader and an intermediate [Development Instance](/instances/reference-content/development/) running Ubuntu Linux.
diff --git a/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx b/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx
index 7bd9cce0e5..ba9a8a2576 100644
--- a/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx
+++ b/tutorials/migrating-from-another-managed-kubernetes-service-to-scaleway-kapsule/index.mdx
@@ -465,7 +465,7 @@ Conduct functional, performance, and end-to-end testing to verify the applicatio
### Leveraging Scaleway features
-- **Elastic Metal nodes**: For workloads requiring dedicated resources, consider adding [Production-Optimized or Workload-Optimized nodes](/instances/reference-content/choosing-instance-type/) to your cluster.
+- **Elastic Metal nodes**: For workloads requiring dedicated resources, consider adding [General Purpose or Specialized nodes](/instances/reference-content/choosing-instance-type/) to your cluster.
- **Autoscaling**: Use cluster and [pod autoscaling](/kubernetes/concepts/#autoscale) to handle variable workloads efficiently.
- **Private Networking**: Use [VPC and Private Networks](/vpc/quickstart/) for enhanced security.
diff --git a/tutorials/mutli-node-rocket-chat-community-private-network/index.mdx b/tutorials/mutli-node-rocket-chat-community-private-network/index.mdx
index 4700ea35dc..943ece103d 100644
--- a/tutorials/mutli-node-rocket-chat-community-private-network/index.mdx
+++ b/tutorials/mutli-node-rocket-chat-community-private-network/index.mdx
@@ -25,13 +25,13 @@ import image11 from './assets/scaleway-rc_admin.webp'
import Requirements from '@macros/iam/requirements.mdx'
-In this tutorial, you will learn how the Private Network feature can help you to build a distributed [Rocket.Chat](/tutorials/run-messaging-platform-with-rocketchat/) application on [General Purpose](https://www.scaleway.com/en/cost-optimized-instances/) and [Development](https://www.scaleway.com/en/cost-optimized-instances/) Instances using a Private Network to communicate securely between them:
+In this tutorial, you will learn how the Private Network feature can help you to build a distributed [Rocket.Chat](/tutorials/run-messaging-platform-with-rocketchat/) application on [General Purpose](/instances/reference-content/general-purpose/) and [Development](/instances/reference-content/development/) Instances using a Private Network to communicate securely between them:
Private Networks are a LAN-like layer 2 Ethernet network. A new network interface with a unique MAC address is configured on each Instance in a Private Network. You can use this interface to communicate in a secure and isolated network, using private IP addresses of your choice.
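+
+To make that extra network interface concrete, the hedged sketch below shows how you might identify it on an Instance attached to a Private Network. The interface name `ens5` is only an example and depends on the Instance type and image.
+
+```bash
+# List network interfaces in brief form; an Instance attached to a Private Network
+# exposes an additional interface (for example ens5; the exact name varies per image).
+ip -br link show
+
+# Inspect the addresses on that interface once a private IP has been configured.
+ip -br addr show dev ens5
+```
+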
-To reach the goal of this tutorial, you will use four [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/) running **Ubuntu 24.04 Noble Numbat** or later:
+To reach the goal of this tutorial, you will use four [General Purpose Instances](/instances/reference-content/general-purpose/) running **Ubuntu 24.04 Noble Numbat** or later:
- 1 POP2-2C-8G Instance as NGINX proxy frontend that distributes the load on the Rocket.Chat applications
- 1 POP2-8C-32G Instance as MongoDB® host
@@ -39,7 +39,7 @@ To reach the goal of this tutorial, you will use four [Production-Optimized Inst
- A [Private Network](/vpc/quickstart/) between these Instances
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/plausible-analytics-ubuntu/index.mdx b/tutorials/plausible-analytics-ubuntu/index.mdx
index 01a40bcd84..702c4a157f 100644
--- a/tutorials/plausible-analytics-ubuntu/index.mdx
+++ b/tutorials/plausible-analytics-ubuntu/index.mdx
@@ -1,6 +1,6 @@
---
title: Running web analytics with Plausible on Ubuntu Linux
-description: This page shows how to generate web analytics with Plausible on Cost-Optimized Instances running Ubuntu Linux
+description: This page shows how to generate web analytics with Plausible on General Purpose Instances running Ubuntu Linux
tags: plausible ubuntu analytics
hero:
products:
@@ -17,7 +17,7 @@ Plausible Analytics is an open-source web analytics initiative driven by the goa
This tool significantly contributes to the enhancement of site performance, with its analytics script weighing in at less than 1 KB. This is 45 times smaller than the size tag of a typical commercial analytics solution. Plausible Analytics is intentionally designed for self-hosting through Docker, providing you with control over your analytics setup.
- We recommend you follow this tutorial using a [Cost-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/prometheus-monitoring-grafana-dashboard/index.mdx b/tutorials/prometheus-monitoring-grafana-dashboard/index.mdx
index 64ca657d43..a93a54b8b3 100644
--- a/tutorials/prometheus-monitoring-grafana-dashboard/index.mdx
+++ b/tutorials/prometheus-monitoring-grafana-dashboard/index.mdx
@@ -28,7 +28,7 @@ In this tutorial, you will learn how to use a Prometheus Monitoring Instance wit
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
## Preparing your environment
diff --git a/tutorials/rtmp-self-hosted-streaming/index.mdx b/tutorials/rtmp-self-hosted-streaming/index.mdx
index 397e1ca61a..25cf0faf30 100644
--- a/tutorials/rtmp-self-hosted-streaming/index.mdx
+++ b/tutorials/rtmp-self-hosted-streaming/index.mdx
@@ -26,7 +26,7 @@ For individuals craving absolute control over their content, open-source solutio
Using the open-source [RTMP protocol](https://en.wikipedia.org/wiki/Real-Time_Messaging_Protocol) on self-hosted streaming servers, users gain autonomy to manage their content free from external constraints and interruptions.
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/sentry-error-tracking/index.mdx b/tutorials/sentry-error-tracking/index.mdx
index 03e44c3a3c..e8bd928cfa 100644
--- a/tutorials/sentry-error-tracking/index.mdx
+++ b/tutorials/sentry-error-tracking/index.mdx
@@ -25,7 +25,7 @@ Crafted using Python, Sentry employs a client/server architecture that facilitat
With Sentry as your ally, you can streamline your development workflow, enhance your applications, and provide your users with the seamless experience they desire.
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/production-optimized/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
You can find all reports on a dashboard, which makes it easy to triage the problem, how often it occurs, its impact on the user experience, which part of your code causes the problem, and so on.
diff --git a/tutorials/setup-minecraft/index.mdx b/tutorials/setup-minecraft/index.mdx
index e4e299e251..2ecaf3aa2b 100644
--- a/tutorials/setup-minecraft/index.mdx
+++ b/tutorials/setup-minecraft/index.mdx
@@ -53,10 +53,10 @@ The Minecraft server is a Java application and runs perfectly on [Scaleway Insta
- `sudo` privileges or access to the root user
- A copy of the [Minecraft game client](https://www.minecraft.net/en-us/) for your local computer
-Deploying your own Minecraft server can be done in a few easy steps on a [Scaleway Development Instance](https://www.scaleway.com/en/cost-optimized-instances/). In case you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).
+Deploying your own Minecraft server can be done in a few easy steps on a [Scaleway Development Instance](/instances/reference-content/development/). In case you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).
- We recommend you follow this tutorial using a [Cost-Optimized Instance](/instances/reference-content/choosing-instance-type/) with at least 8GB of RAM.
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/) with at least 8GB of RAM.
1. Connect to your Instance using [SSH](/instances/how-to/connect-to-instance/).
diff --git a/tutorials/setup-mongodb-on-ubuntu/index.mdx b/tutorials/setup-mongodb-on-ubuntu/index.mdx
index 5b60570ea9..2a8b5ff59d 100644
--- a/tutorials/setup-mongodb-on-ubuntu/index.mdx
+++ b/tutorials/setup-mongodb-on-ubuntu/index.mdx
@@ -16,10 +16,10 @@ import Requirements from '@macros/iam/requirements.mdx'
Unlike traditional relational databases, MongoDB® does not require users to define an intricate schema before adding data. This flexibility stems from its ability to modify schemas at any point in time. Embracing the NoSQL philosophy, it employs JSON-like documents for data storage, allowing the insertion of diverse and arbitrary data.
-Powerful [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/) comes with the compute and storage capabilities you need to run your MongoDB® Instance smoothly.
+Powerful [General Purpose Instances](/instances/reference-content/general-purpose/) come with the compute and storage capabilities you need to run your MongoDB® Instance smoothly.
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/setup-postfix-ubuntu-bionic/index.mdx b/tutorials/setup-postfix-ubuntu-bionic/index.mdx
index a2be48bc0f..b308b9fb5e 100644
--- a/tutorials/setup-postfix-ubuntu-bionic/index.mdx
+++ b/tutorials/setup-postfix-ubuntu-bionic/index.mdx
@@ -16,7 +16,7 @@ In this tutorial you will learn how to configure a mail server that uses Postfi
You will also learn how to install a Roundcube webmail interface so you can read your emails directly from your browser.
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/wordpress-instantapp/index.mdx b/tutorials/wordpress-instantapp/index.mdx
index d6ed8aa6ec..06268d0272 100644
--- a/tutorials/wordpress-instantapp/index.mdx
+++ b/tutorials/wordpress-instantapp/index.mdx
@@ -30,7 +30,7 @@ This guide demonstrates how to quickly deploy a WordPress application in seconds
- An [SSH key](/organizations-and-projects/how-to/create-ssh-key/)
- We recommend you follow this tutorial using a [Cost-Optimized Instance](/instances/reference-content/cost-optimized/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/wordpress-lemp-stack-ubuntu-jammy-jellyfish-22-04/index.mdx b/tutorials/wordpress-lemp-stack-ubuntu-jammy-jellyfish-22-04/index.mdx
index e08df3042a..1c1789ffda 100644
--- a/tutorials/wordpress-lemp-stack-ubuntu-jammy-jellyfish-22-04/index.mdx
+++ b/tutorials/wordpress-lemp-stack-ubuntu-jammy-jellyfish-22-04/index.mdx
@@ -18,10 +18,10 @@ import Requirements from '@macros/iam/requirements.mdx'
WordPress is a popular and freely accessible open-source tool that offers a seamless means to craft and manage content on your website. With its intuitive interface and user-friendly features, WordPress has garnered extensive adoption, making it an ideal solution for swiftly launching a website. The web front-end it provides ensures effortless administration, simplifying the process even for those lacking technical expertise.
-If you are seeking to install WordPress on a newly established Ubuntu 22.04 LTS Instance, this tutorial is tailor-made for your needs. We will meticulously walk you through the installation steps, employing the LEMP stack (Linux + Nginx - pronounced "engine x" + MySQL + PHP). For the sake of this tutorial, we are choosing Nginx, a robust HTTP server that is efficient in resource usage, resulting in faster page delivery, especially for static content. By opting for a Cost-Optimized Instance configured with LEMP, you will gain access to a robust web server that elevates website performance, thus ensuring a seamless WordPress installation experience.
+If you are seeking to install WordPress on a newly established Ubuntu 22.04 LTS Instance, this tutorial is tailor-made for your needs. We will meticulously walk you through the installation steps, employing the LEMP stack (Linux + Nginx - pronounced "engine x" + MySQL + PHP). For the sake of this tutorial, we are choosing Nginx, a robust HTTP server that is efficient in resource usage, resulting in faster page delivery, especially for static content. By opting for a General Purpose Instance configured with LEMP, you will gain access to a robust web server that elevates website performance, thus ensuring a seamless WordPress installation experience.
- We recommend you follow this tutorial using a [Cost-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/zabbix-monitoring/index.mdx b/tutorials/zabbix-monitoring/index.mdx
index ec79f31004..81aa075899 100644
--- a/tutorials/zabbix-monitoring/index.mdx
+++ b/tutorials/zabbix-monitoring/index.mdx
@@ -29,10 +29,10 @@ Zabbix is a powerful open-source software that offers real-time monitoring for s
This monitoring tool enables users to determine the health status of their IT infrastructure and analyze data over time. The insights provided by Zabbix can be used to plan upgrades to the infrastructure when requirements grow.
-In this tutorial, we will use two Scaleway [Production-Optimized](/instances/reference-content/choosing-instance-type/) Ubuntu Jammy Jellyfish (22.04 LTS) Instances. One Instance will be a server hosting the Zabbix application, while the other will be a client being monitored. Following this tutorial will teach you how to set up Zabbix and take control of your IT infrastructure.
+In this tutorial, we will use two Scaleway [General Purpose](/instances/reference-content/choosing-instance-type/) Ubuntu Jammy Jellyfish (22.04 LTS) Instances. One Instance will be a server hosting the Zabbix application, while the other will be a client being monitored. Following this tutorial will teach you how to set up Zabbix and take control of your IT infrastructure.
- We recommend you follow this tutorial using a [Production-Optimized Instance](/instances/reference-content/choosing-instance-type/).
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
diff --git a/tutorials/zammad-ticketing/index.mdx b/tutorials/zammad-ticketing/index.mdx
index 4aa8f7776e..ca74a80e65 100644
--- a/tutorials/zammad-ticketing/index.mdx
+++ b/tutorials/zammad-ticketing/index.mdx
@@ -26,7 +26,7 @@ Zammad is an open-source helpdesk system that allows you to oversee customer int
This tutorial will guide you through the process of installing Zammad on a Scaleway Instance operating on **Ubuntu 20.04 LTS (Focal Fossa)**. Furthermore, you will receive a brief orientation of the application.
- We recommend you follow this tutorial using a [Cost-Optimized Instance](/instances/reference-content/choosing-instance-type/). If you are installing Zammad on Ubuntu 22.04 and up, avoid using an Instance with ARM architecure to follow this tutorial, as the package manager used upon installation is not compatible with the ARM architecture.
+ We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/). If you are installing Zammad on Ubuntu 22.04 and up, avoid using an Instance with ARM architecture, as the package manager used during installation is not compatible with the ARM architecture.