From 06ee37542c373904f8be40dc7b1cca288f59aba3 Mon Sep 17 00:00:00 2001 From: Benedikt Rollik Date: Tue, 21 Oct 2025 16:03:15 +0200 Subject: [PATCH 1/8] docs(gpu): migrate h100 pcie --- .../gpu/reference-content/migration-h100.mdx | 107 ++++++++++++++++++ 1 file changed, 107 insertions(+) create mode 100644 pages/gpu/reference-content/migration-h100.mdx diff --git a/pages/gpu/reference-content/migration-h100.mdx b/pages/gpu/reference-content/migration-h100.mdx new file mode 100644 index 0000000000..7591f28829 --- /dev/null +++ b/pages/gpu/reference-content/migration-h100.mdx @@ -0,0 +1,107 @@ +--- +title: Migrating from H100-2-80G to H100-SXM-2-80G +description: Learn how to migrating from H100-2-80G to H100-SXM-2-80G GPU Instances. +tags: gpu nvidia +dates: + validation: 2025-10-21 + posted: 2025-10-21 +--- + +Scaleway is optimizing its H100 GPU Instance portfolio to improve long-term availability and provide better performance for all users. + +## Current situation + +Below is an overview of the current status of each instance type: + +| Instance type | Availability status | Notes | +| ------------------ | ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | +| H100-1-80G | Low stock | No additional GPUs can be added at this time. | +| H100-2-80G | Frequently out of stock | Supply remains unstable, and shortages are expected to continue. | +| H100-SXM-2-80G | Good availability | This Instance type can scale further and is ideal for multi-GPU workloads, offering NVLink connectivity and superior memory bandwidth. | + +In summary, while the single- and dual-GPU PCIe instances (H100-1-80G and H100-2-80G) are experiencing supply constraints, the H100-SXM-2-80G remains available in good quantity and is the recommended option for users requiring scalable performance and high-bandwidth interconnects. 
+ +We recommend users to migrate their workload from PCIe-based GPU Instances to SXM GPU Instances for improvements in performance and fure-proof access to GPUs. As H100 PCIe-variants becomes increasingly scarce, migrating ensures uninterrupted access to H100-class compute. + +## Benefits of the migration + +There are two primary scenarios: migrating **Kubernetes (Kapsule)** workloads or **standalone** workloads. + + + Always ensure that your **data is backed up** before performing any operations that could affect it. + + +### Migrating Kubernetes workloads (Kubernetes Kapsule) + +If you are using Kapsule, follow these steps to move existing workloads to nodes powered by `H100-SXM-2-80G`. + + + The Kubernetes autoscaler may get stuck if it tries to scale up a node pool with out-of-stock. We recommend switching to `H100-SXM-2-80G` proactively to avoid disruptions. + + + +#### Step-by-step +1. Create a new node pool using `H100-SXM-2-80G` GPU Instances. + +2. Run `kubectl get nodes` to check that the new nodes are in a `Ready` state. +3. Cordon the nodes in the old node pool to prevent new Pods from being scheduled there. For each node, run: `kubectl cordon <node-name>` + + You can use a selector on the pool name label to cordon or drain multiple nodes at the same time if your app allows it (ex. `kubectl cordon -l k8s.scaleway.com/pool-name=mypoolname`) + +4. Drain the nodes to evict the Pods gracefully. + - For each node, run: `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data` + - The `--ignore-daemonsets` flag is used because daemon sets manage Pods across all nodes and will automatically reschedule them. + - The `--delete-emptydir-data` flag is necessary if your Pods use emptyDir volumes, but use this option carefully as it will delete the data stored in these volumes. + - Refer to the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) for further information. +5.
Run `kubectl get pods -o wide` after draining, to verify that the Pods have been rescheduled to the new node pool. +6. Delete the old node pool. + + + For further information, refer to our dedicated documentation [How to migrate existing workloads to a new Kapsule node pool](/kubernetes/how-to/manage-node-pools/#how-to-migrate-existing-workloads-to-a-new-kubernets-kapsule-node-pool). + + +### Migrating a standalone Instance + +For standalone GPU instances, you can recreate your environment using a `H100-SXM-2-80G` GPU Instance using the CLI, the API, or the Scaleway console. + +#### Quick Start (CLI example): +1. Stop the Instance. + ``` + scw instance server stop <instance-id> zone=<zone> + ``` + Replace `<zone>` with the Availability Zone of your Instance. For example, if your Instance is located in Paris-1, the zone would be `fr-par-1`. Replace `<instance-id>` with the ID of your Instance. + + You can find the ID of your Instance on it's overview page in the Scaleway console or using the CLI by running the following command: `scw instance server list`. + + +2. Update the commercial type of the Instance + ``` + scw instance server update <instance-id> commercial-type=H100-SXM-2-80G zone=<zone> + ``` + Replace `<instance-id>` with the UUID of your Instance and `<zone>` with the Availability Zone of your GPU Instance. + +3. Power on the Instance. + ``` + scw instance server start <instance-id> zone=<zone> + ``` +For further information, refer to the [Instance CLI documentation](https://github.com/scaleway/scaleway-cli/blob/master/docs/commands/instance.md). + + + You can also migrate your GPU Instances via the [API](https://www.scaleway.com/en/docs/instances/api-cli/migrating-instances/) or the [Scaleway console](/instances/how-to/migrate-instances/). + + +## FAQ + +#### Are PCIe-based H100 being discontinued? +H100 PCIe-based GPU Instances are not End-of-Life (EOL), but due to limited availability, we recommend migrating to `H100-SXM-2-80G` to avoid future disruptions. + +#### Is H100-SXM-2-80G compatible with my current setup?
+Yes — it runs the same CUDA toolchain and supports standard frameworks (PyTorch, TensorFlow, etc.). However, verify that your workload does not require large system RAM or NVMe scratch space. + +#### Why is H100-SXM better for multi-GPU? +Because of *NVLink*, which enables near-shared-memory speeds between GPUs. In contrast, PCIe-based instances like H100-2-80G have slower interconnects that can bottleneck training. Learn more: [Understanding NVIDIA NVLink](https://www.scaleway.com/en/docs/gpu/reference-content/understanding-nvidia-nvlink/) + +#### What if my workload needs more CPU or RAM? +Let us know via [support ticket we’re evaluating options for compute-optimized configurations to complement our GPU offerings. + +- \ No newline at end of file From d7206ae30e42992cb1af5c277d021b466ea51df0 Mon Sep 17 00:00:00 2001 From: Benedikt Rollik Date: Thu, 23 Oct 2025 12:54:53 +0200 Subject: [PATCH 2/8] docs(gpu): update content --- .../gpu/reference-content/migration-h100.mdx | 23 ++++++------------- 1 file changed, 7 insertions(+), 16 deletions(-) diff --git a/pages/gpu/reference-content/migration-h100.mdx b/pages/gpu/reference-content/migration-h100.mdx index 7591f28829..adfca96b3e 100644 --- a/pages/gpu/reference-content/migration-h100.mdx +++ b/pages/gpu/reference-content/migration-h100.mdx @@ -9,18 +9,6 @@ dates: Scaleway is optimizing its H100 GPU Instance portfolio to improve long-term availability and provide better performance for all users. -## Current situation - -Below is an overview of the current status of each instance type: - -| Instance type | Availability status | Notes | -| ------------------ | ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | -| H100-1-80G | Low stock | No additional GPUs can be added at this time. | -| H100-2-80G | Frequently out of stock | Supply remains unstable, and shortages are expected to continue. 
| -| H100-SXM-2-80G | Good availability | This Instance type can scale further and is ideal for multi-GPU workloads, offering NVLink connectivity and superior memory bandwidth. | - -In summary, while the single- and dual-GPU PCIe instances (H100-1-80G and H100-2-80G) are experiencing supply constraints, the H100-SXM-2-80G remains available in good quantity and is the recommended option for users requiring scalable performance and high-bandwidth interconnects. - We recommend users to migrate their workload from PCIe-based GPU Instances to SXM GPU Instances for improvements in performance and fure-proof access to GPUs. As H100 PCIe-variants becomes increasingly scarce, migrating ensures uninterrupted access to H100-class compute. ## Benefits of the migration @@ -28,7 +16,7 @@ We recommend users to migrate their workload from PCIe-based GPU Instances to SX There are two primary scenarios: migrating **Kubernetes (Kapsule)** workloads or **standalone** workloads. - Always ensure that your **data is backed up** before performing any operations that could affect it. + Always ensure that your **data is backed up** before performing any operations that could affect it. Keep in mind that **Scratch Storage** is ephemere and does not survive once the Instance is stopped: doing a full stop/start cycle will **erase the scratch data**. However, doing a simple reboot or using the stop in place function will keep the data. ### Migrating Kubernetes workloads (Kubernetes Kapsule) @@ -96,12 +84,15 @@ For further information, refer to the [Instance CLI documentation](https://githu H100 PCIe-based GPU Instances are not End-of-Life (EOL), but due to limited availability, we recommend migrating to `H100-SXM-2-80G` to avoid future disruptions. #### Is H100-SXM-2-80G compatible with my current setup? -Yes — it runs the same CUDA toolchain and supports standard frameworks (PyTorch, TensorFlow, etc.). However, verify that your workload does not require large system RAM or NVMe scratch space. 
+Yes — it runs the same CUDA toolchain and supports standard frameworks (PyTorch, TensorFlow, etc.). No changes in your code base are required when upgrading to a SXM-based GPU Instance. #### Why is H100-SXM better for multi-GPU? -Because of *NVLink*, which enables near-shared-memory speeds between GPUs. In contrast, PCIe-based instances like H100-2-80G have slower interconnects that can bottleneck training. Learn more: [Understanding NVIDIA NVLink](https://www.scaleway.com/en/docs/gpu/reference-content/understanding-nvidia-nvlink/) +The NVIDIA H100-SXM outperforms the H100-PCIe in multi-GPU configurations due to its superior interconnect and higher power capacity. +It leverages fourth-generation NVLink and NVSwitch, providing up to 900 GB/s of bidirectional bandwidth for rapid GPU-to-GPU communication, compared to the H100-PCIe's 128 GB/s via PCIe Gen 5, which creates bottlenecks in demanding workloads like large-scale AI training and HPC. +Additionally, the H100-SXM’s 700W TDP enables higher clock speeds and sustained performance, while the H100-PCIe’s 300-350W TDP limits its throughput. +For high-communication, multi-GPU tasks, the H100-SXM is the optimal choice, while the H100-PCIe suits less intensive applications with greater flexibility. #### What if my workload needs more CPU or RAM? -Let us know via [support ticket we’re evaluating options for compute-optimized configurations to complement our GPU offerings. +Let us know via [support ticket](https://console.scaleway.com/support/tickets/create) what your specific requoirements are. Currently we are evaluating options for compute-optimized configurations to complement our GPU offerings. 
- \ No newline at end of file From 9d262d981f03ddbf4b299635dc8a88ceebfeb9bf Mon Sep 17 00:00:00 2001 From: Benedikt Rollik Date: Tue, 4 Nov 2025 11:22:12 +0100 Subject: [PATCH 3/8] docs(gpu): update content --- pages/gpu/reference-content/migration-h100.mdx | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/pages/gpu/reference-content/migration-h100.mdx b/pages/gpu/reference-content/migration-h100.mdx index adfca96b3e..7fd2fa22ff 100644 --- a/pages/gpu/reference-content/migration-h100.mdx +++ b/pages/gpu/reference-content/migration-h100.mdx @@ -3,13 +3,13 @@ title: Migrating from H100-2-80G to H100-SXM-2-80G description: Learn how to migrating from H100-2-80G to H100-SXM-2-80G GPU Instances. tags: gpu nvidia dates: - validation: 2025-10-21 - posted: 2025-10-21 + validation: 2025-11-04 + posted: 2025-11-04 --- Scaleway is optimizing its H100 GPU Instance portfolio to improve long-term availability and provide better performance for all users. -We recommend users to migrate their workload from PCIe-based GPU Instances to SXM GPU Instances for improvements in performance and fure-proof access to GPUs. As H100 PCIe-variants becomes increasingly scarce, migrating ensures uninterrupted access to H100-class compute. +For optimal availability and performance, we recommend switching from **H100-2-80G** to the next generation **H100-SXM-2-80G** GPU Instance. This latest generation has more stock, improved NVLink, better and faster VRAM. ## Benefits of the migration @@ -21,10 +21,10 @@ There are two primary scenarios: migrating **Kubernetes (Kapsule)** workloads or ### Migrating Kubernetes workloads (Kubernetes Kapsule) -If you are using Kapsule, follow these steps to move existing workloads to nodes powered by `H100-SXM-2-80G`. +If you are using Kapsule, follow these steps to move existing workloads to nodes powered by `H100-SXM-2-80G` GPUs. - The Kubernetes autoscaler may get stuck if it tries to scale up a node pool with out-of-stock. 
We recommend switching to `H100-SXM-2-80G` proactively to avoid disruptions. + The Kubernetes autoscaler may get stuck if it tries to scale up a node pool with out-of-stock Instances. We recommend switching to `H100-SXM-2-80G` GPU Instances proactively to avoid disruptions. From d07c1164fab3cfcd54e0392fb6dcc8d999cbf8cb Mon Sep 17 00:00:00 2001 From: Benedikt Rollik Date: Thu, 6 Nov 2025 10:01:19 +0100 Subject: [PATCH 4/8] Apply suggestions from code review Co-authored-by: Rowena Jones <36301604+RoRoJ@users.noreply.github.com> --- pages/gpu/reference-content/migration-h100.mdx | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/pages/gpu/reference-content/migration-h100.mdx b/pages/gpu/reference-content/migration-h100.mdx index 7fd2fa22ff..d54f546a23 100644 --- a/pages/gpu/reference-content/migration-h100.mdx +++ b/pages/gpu/reference-content/migration-h100.mdx @@ -16,7 +16,7 @@ For optimal availability and performance, we recommend switching from **H100-2-8 There are two primary scenarios: migrating **Kubernetes (Kapsule)** workloads or **standalone** workloads. - Always ensure that your **data is backed up** before performing any operations that could affect it. Keep in mind that **Scratch Storage** is ephemere and does not survive once the Instance is stopped: doing a full stop/start cycle will **erase the scratch data**. However, doing a simple reboot or using the stop in place function will keep the data. + Always ensure that your **data is backed up** before performing any operations that could affect it. Keep in mind that **scratch storage** is ephemeral and does not survive once the Instance is stopped: doing a full stop/start cycle will **erase the scratch data**. However, doing a simple reboot or using the **stop in place** function will keep the data. 
### Migrating Kubernetes workloads (Kubernetes Kapsule) @@ -80,7 +80,7 @@ For further information, refer to the [Instance CLI documentation](https://githu ## FAQ -#### Are PCIe-based H100 being discontinued? +#### Are PCIe-based H100s being discontinued? H100 PCIe-based GPU Instances are not End-of-Life (EOL), but due to limited availability, we recommend migrating to `H100-SXM-2-80G` to avoid future disruptions. #### Is H100-SXM-2-80G compatible with my current setup? @@ -94,5 +94,3 @@ For high-communication, multi-GPU tasks, the H100-SXM is the optimal choice, whi #### What if my workload needs more CPU or RAM? Let us know via [support ticket](https://console.scaleway.com/support/tickets/create) what your specific requoirements are. Currently we are evaluating options for compute-optimized configurations to complement our GPU offerings. - -- \ No newline at end of file From e88be1c7daf8eae3f056d24d8b221d148eaa8b0f Mon Sep 17 00:00:00 2001 From: Benedikt Rollik Date: Tue, 18 Nov 2025 09:50:20 +0100 Subject: [PATCH 5/8] docs(gpu): wording --- pages/gpu/reference-content/migration-h100.mdx | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/pages/gpu/reference-content/migration-h100.mdx b/pages/gpu/reference-content/migration-h100.mdx index d54f546a23..46b75050ae 100644 --- a/pages/gpu/reference-content/migration-h100.mdx +++ b/pages/gpu/reference-content/migration-h100.mdx @@ -16,7 +16,7 @@ For optimal availability and performance, we recommend switching from **H100-2-8 There are two primary scenarios: migrating **Kubernetes (Kapsule)** workloads or **standalone** workloads. - Always ensure that your **data is backed up** before performing any operations that could affect it. Keep in mind that **scratch storage** is ephemeral and does not survive once the Instance is stopped: doing a full stop/start cycle will **erase the scratch data**. However, doing a simple reboot or using the **stop in place** function will keep the data. 
+ Always make sure your **data is backed up** before performing any operation that could affect it. Remember that **scratch storage** is ephemeral and will not persist after an Instance is fully stopped. A full stop/start cycle, such as during an Instance server migration, will **erase all scratch data**. However, outside of server-type migrations, a simple reboot or using **stop in place** will preserve the data stored on the Instance’s scratch storage. ### Migrating Kubernetes workloads (Kubernetes Kapsule) @@ -86,11 +86,9 @@ H100 PCIe-based GPU Instances are not End-of-Life (EOL), but due to limited avai #### Is H100-SXM-2-80G compatible with my current setup? Yes — it runs the same CUDA toolchain and supports standard frameworks (PyTorch, TensorFlow, etc.). No changes in your code base are required when upgrading to a SXM-based GPU Instance. -#### Why is H100-SXM better for multi-GPU? -The NVIDIA H100-SXM outperforms the H100-PCIe in multi-GPU configurations due to its superior interconnect and higher power capacity. -It leverages fourth-generation NVLink and NVSwitch, providing up to 900 GB/s of bidirectional bandwidth for rapid GPU-to-GPU communication, compared to the H100-PCIe's 128 GB/s via PCIe Gen 5, which creates bottlenecks in demanding workloads like large-scale AI training and HPC. -Additionally, the H100-SXM’s 700W TDP enables higher clock speeds and sustained performance, while the H100-PCIe’s 300-350W TDP limits its throughput. -For high-communication, multi-GPU tasks, the H100-SXM is the optimal choice, while the H100-PCIe suits less intensive applications with greater flexibility. +#### Why is the H100-SXM better for multi-GPU workloads? -#### What if my workload needs more CPU or RAM? -Let us know via [support ticket](https://console.scaleway.com/support/tickets/create) what your specific requoirements are. Currently we are evaluating options for compute-optimized configurations to complement our GPU offerings.
+The NVIDIA H100-SXM outperforms the H100-PCIe in multi-GPU configurations primarily due to its higher interconnect bandwidth and greater power capacity. It uses fourth-generation NVLink and NVSwitch, delivering up to **900 GB/s of bidirectional bandwidth** for fast GPU-to-GPU communication. In contrast, the H100-PCIe is limited to a **theoretical maximum of 128 GB/s** via PCIe Gen 5, which becomes a bottleneck in communication-heavy workloads such as large-scale AI training and HPC. +The H100-SXM also provides **HBM3 memory** with up to **3.35 TB/s of bandwidth**, compared to **2 TB/s** with the H100-PCIe’s HBM2e, improving performance in memory-bound tasks. +Additionally, the H100-SXM’s **700W TDP** allows higher sustained clock speeds and throughput, while the H100-PCIe’s **300–350W TDP** imposes stricter performance limits. +Overall, the H100-SXM is the optimal choice for high-communication, multi-GPU workloads, whereas the H100-PCIe offers more flexibility for less communication-intensive applications. From c2ecd76a2cb4e49f4615fbc56c6576e6844efa7e Mon Sep 17 00:00:00 2001 From: Benedikt Rollik Date: Tue, 18 Nov 2025 10:05:33 +0100 Subject: [PATCH 6/8] Update pages/gpu/reference-content/migration-h100.mdx --- pages/gpu/reference-content/migration-h100.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/gpu/reference-content/migration-h100.mdx b/pages/gpu/reference-content/migration-h100.mdx index 46b75050ae..584eb75fe5 100644 --- a/pages/gpu/reference-content/migration-h100.mdx +++ b/pages/gpu/reference-content/migration-h100.mdx @@ -9,7 +9,7 @@ dates: Scaleway is optimizing its H100 GPU Instance portfolio to improve long-term availability and provide better performance for all users. -For optimal availability and performance, we recommend switching from **H100-2-80G** to the next generation **H100-SXM-2-80G** GPU Instance. This latest generation has more stock, improved NVLink, better and faster VRAM.
+For optimal availability and performance, we recommend switching from **H100-2-80G** to the improved **H100-SXM-2-80G** GPU Instance. This latest generation has more stock, improved NVLink, better and faster VRAM. ## Benefits of the migration From f0267a2636d07b5078e1cb7e8cf2de4d33152455 Mon Sep 17 00:00:00 2001 From: Benedikt Rollik Date: Tue, 18 Nov 2025 10:43:27 +0100 Subject: [PATCH 7/8] Update pages/gpu/reference-content/migration-h100.mdx Co-authored-by: ldecarvalho-doc <82805470+ldecarvalho-doc@users.noreply.github.com> --- pages/gpu/reference-content/migration-h100.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/gpu/reference-content/migration-h100.mdx b/pages/gpu/reference-content/migration-h100.mdx index 584eb75fe5..2594dd0326 100644 --- a/pages/gpu/reference-content/migration-h100.mdx +++ b/pages/gpu/reference-content/migration-h100.mdx @@ -59,7 +59,7 @@ For standalone GPU instances, you can recreate your environment using a `H100-SX ``` Replace `<zone>` with the Availability Zone of your Instance. For example, if your Instance is located in Paris-1, the zone would be `fr-par-1`. Replace `<instance-id>` with the ID of your Instance. - You can find the ID of your Instance on it's overview page in the Scaleway console or using the CLI by running the following command: `scw instance server list`. + You can find the ID of your Instance on its overview page in the Scaleway console or using the CLI by running the following command: `scw instance server list`. 2.
Update the commercial type of the Instance From 4b42e3d1f334aa6c3bf63d76371a13f82ae19765 Mon Sep 17 00:00:00 2001 From: Benedikt Rollik Date: Tue, 18 Nov 2025 11:27:59 +0100 Subject: [PATCH 8/8] Apply suggestions from code review MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Néda <87707325+nerda-codes@users.noreply.github.com> --- pages/gpu/reference-content/migration-h100.mdx | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/pages/gpu/reference-content/migration-h100.mdx b/pages/gpu/reference-content/migration-h100.mdx index 2594dd0326..df1ce474f3 100644 --- a/pages/gpu/reference-content/migration-h100.mdx +++ b/pages/gpu/reference-content/migration-h100.mdx @@ -1,6 +1,6 @@ --- title: Migrating from H100-2-80G to H100-SXM-2-80G -description: Learn how to migrating from H100-2-80G to H100-SXM-2-80G GPU Instances. +description: Learn how to migrate from H100-2-80G to H100-SXM-2-80G GPU Instances. tags: gpu nvidia dates: validation: 2025-11-04 @@ -9,7 +9,7 @@ dates: Scaleway is optimizing its H100 GPU Instance portfolio to improve long-term availability and provide better performance for all users. -For optimal availability and performance, we recommend switching from **H100-2-80G** to the improved **H100-SXM-2-80G** GPU Instance. This latest generation has more stock, improved NVLink, better and faster VRAM. +For optimal availability and performance, we recommend switching from **H100-2-80G** to the improved **H100-SXM-2-80G** GPU Instance. This latest generation has more stock, improved NVLink, and better and faster VRAM. ## Benefits of the migration @@ -45,7 +45,7 @@ If you are using Kapsule, follow these steps to move existing workloads to nodes 6. Delete the old node pool. 
- For further information, refer to our dedicated documentation [How to migrate existing workloads to a new Kapsule node pool](/kubernetes/how-to/manage-node-pools/#how-to-migrate-existing-workloads-to-a-new-kubernets-kapsule-node-pool). + For further information, refer to our dedicated documentation: [How to migrate existing workloads to a new Kapsule node pool](/kubernetes/how-to/manage-node-pools/#how-to-migrate-existing-workloads-to-a-new-kubernets-kapsule-node-pool). ### Migrating a standalone Instance @@ -59,7 +59,7 @@ For standalone GPU instances, you can recreate your environment using a `H100-SX ``` Replace `<zone>` with the Availability Zone of your Instance. For example, if your Instance is located in Paris-1, the zone would be `fr-par-1`. Replace `<instance-id>` with the ID of your Instance. - You can find the ID of your Instance on its overview page in the Scaleway console or using the CLI by running the following command: `scw instance server list`. + You can find the ID of your Instance on its overview page in the Scaleway console or by running the following CLI command: `scw instance server list`. -2. Update the commercial type of the Instance +2. Update the commercial type of the Instance. ``` scw instance server update <instance-id> commercial-type=H100-SXM-2-80G zone=<zone> ```
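The stop, update, and start steps above are easy to script when several Instances have to move. The sketch below only assembles the three `scw` invocations shown in this section; `migration_commands` is a hypothetical helper, and actually running each command (and waiting for the Instance to reach the expected state between steps) is left to the operator.

```python
# Illustrative sketch: builds the stop/update/start command sequence from the
# steps above. It assembles command strings only; it does not execute them or
# poll the Instance state between steps.
def migration_commands(instance_id: str, zone: str, target_type: str = "H100-SXM-2-80G") -> list[str]:
    base = "scw instance server"
    return [
        f"{base} stop {instance_id} zone={zone}",                                  # 1. stop the Instance
        f"{base} update {instance_id} commercial-type={target_type} zone={zone}",  # 2. change the commercial type
        f"{base} start {instance_id} zone={zone}",                                 # 3. power it back on
    ]

for cmd in migration_commands("<instance-id>", "fr-par-2"):
    print(cmd)
```

Feeding the output to a shell one Instance at a time, and checking `scw instance server get` between steps, keeps the migration observable and easy to abort.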