pages/gpu/how-to/use-gpu-with-docker.mdx (1 addition, 1 deletion)

@@ -13,7 +13,7 @@ Docker is a platform as a service (PaaS) tool that uses OS-level virtualization
Unlike virtual machines, containers share the services of a single operating system kernel. This reduces unnecessary overhead and makes them lightweight and portable. Docker containers can run on any computer running macOS, Windows, or Linux, either on-premises or in a public cloud environment, such as Scaleway.
- All [Scaleway GPU Instances](https://www.scaleway.com/en/gpu-instances/) come with prebuilt Docker images which can be launched as soon as you connect to your Instance. Each image provides a different AI environment. When you launch one of these images, you are in your chosen environment within seconds with all your favorite Python packages already installed. Using Docker for your AI projects in this way allows you to ensure that your working environments are both **isolated** and **portable**, since they are in containers that can be easily transferred between machines.
+ All [Scaleway GPU Instances](/gpu/reference-content/choosing-gpu-instance-type/) come with prebuilt Docker images which can be launched as soon as you connect to your Instance. Each image provides a different AI environment. When you launch one of these images, you are in your chosen environment within seconds with all your favorite Python packages already installed. Using Docker for your AI projects in this way allows you to ensure that your working environments are both **isolated** and **portable**, since they are in containers that can be easily transferred between machines.
You can also run Docker images provided by other sources and use them with your GPU Instance - for instance, you might want to use Docker images provided by NVIDIA, Google, etc. Alternatively, you could also choose to build your own Docker images.
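As a sketch of the workflow this page describes — running an NVIDIA-provided image on a GPU Instance with GPU access — a typical invocation might look like the following. The image tag is an assumption for illustration, not taken from the documentation; pick one matching your driver version.

```shell
# Sketch: run an NVIDIA CUDA image with all GPUs exposed via the
# NVIDIA Container Toolkit, and confirm device visibility with nvidia-smi.
# The image tag is an assumption; adjust it to your CUDA/driver setup.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

`--gpus all` passes every GPU on the Instance through to the container; running `nvidia-smi` inside it is a quick check that the devices are visible.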
pages/gpu/how-to/use-mig-with-kubernetes.mdx (1 addition, 1 deletion)
@@ -24,7 +24,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
<Requirements />
- A Scaleway account logged into the [console](https://console.scaleway.com)
- - A [Kubernetes cluster](/kubernetes/quickstart/#how-to-create-a-kubernetes-cluster) with a [GPU Instance](https://www.scaleway.com/en/gpu-instances/) as node
+ - A [Kubernetes cluster](/kubernetes/quickstart/#how-to-create-a-kubernetes-cluster) with a [GPU Instance](/gpu/reference-content/choosing-gpu-instance-type/) as node
<Message type="tip">
MIG is fully supported on [Scaleway managed Kubernetes](/kubernetes/quickstart/) clusters (Kapsule and Kosmos).
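Once MIG is enabled on a node and the device plugin advertises the slices, a pod consumes one by requesting it as an extended resource. A minimal sketch follows; the resource name `nvidia.com/mig-1g.10gb` and the image tag are assumptions — actual MIG profile names depend on the GPU model and the operator's MIG strategy.

```yaml
# Sketch: pod requesting a single MIG slice as an extended resource.
# The resource name below is an assumption; list the node's allocatable
# resources (kubectl describe node) to find the profiles actually exposed.
apiVersion: v1
kind: Pod
metadata:
  name: mig-demo
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/mig-1g.10gb: 1
```

The scheduler places the pod only on a node with a free slice of that profile, and the container sees just its partition.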
pages/gpu/reference-content/choosing-gpu-instance-type.mdx (2 additions, 2 deletions)
@@ -16,7 +16,7 @@ It empowers European AI startups, giving them the tools (without the need for a
## How to choose the right GPU Instance type
- Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](https://www.scaleway.com/en/gpu-instances/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
+ Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
Below, you will find a guide to help you make an informed decision:
* **Workload requirements:** Identify the nature of your workload. Are you running machine learning, deep learning, high-performance computing (HPC), data analytics, or graphics-intensive applications? Different Instance types are optimized for different types of workloads. For example, the H100 is not designed for graphics rendering, but other models are. As [stated by Tim Dettmers](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/), “Tensor Cores are most important, followed by the memory bandwidth of a GPU, the cache hierarchy, and only then FLOPS of a GPU.” For more information, refer to the [NVIDIA GPU portfolio](https://docs.nvidia.com/data-center-gpu/line-card.pdf).
@@ -28,7 +28,7 @@ Below, you will find a guide to help you make an informed decision:
* **Scaling:** Consider the scalability requirements of your workload. The most efficient way to scale up your workload is by using:
* Bigger GPU
    * Up to 2 PCIe GPU with [H100 Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) or 8 PCIe GPU with [L4](https://www.scaleway.com/en/l4-gpu-instance/) or [L40S](https://www.scaleway.com/en/contact-l40s/) Instances.
- * Or better, an HGX-based server setup with up to 8x NVlink GPUs with [H100-SXM Instances](https://www.scaleway.com/en/gpu-instances/)
+ * Or better, an HGX-based server setup with up to 8x NVlink GPUs with [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/)
* A [supercomputer architecture](https://www.scaleway.com/en/ai-supercomputers/) for a larger setup for workload-intensive tasks
* Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 or H100-SXM GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods.
* **Online resources:** Check for online resources, forums, and community discussions related to the specific GPU type you are considering. This can provide insights into common issues, best practices, and optimizations.
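The seven-way MIG split mentioned in the scaling list above is created with `nvidia-smi`. A hedged sketch of the commands, assuming GPU 0; the profile ID used here is only an example — list the valid profiles for your GPU model first rather than assuming one:

```shell
# Sketch: partition a single GPU into seven 1g MIG instances.
sudo nvidia-smi -i 0 -mig 1    # enable MIG mode on GPU 0 (may need a reset)
sudo nvidia-smi mig -lgip      # list the GPU instance profiles actually available
# Create seven GPU instances and their compute instances (-C).
# Profile ID 19 is an example value; use an ID reported by -lgip.
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C
```

Each resulting MIG device can then back one Kubernetes pod, which is what makes the one-H100-instead-of-seven-P100s trade-off described above possible.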
pages/instances/concepts.mdx (4 additions, 4 deletions)
@@ -36,7 +36,7 @@ Cloud-init is a multi-distribution package that [provides boot time customizatio
## Development Instance
- [Development Instances](https://www.scaleway.com/en/cost-optimized-instances/) are reliable and flexible Instances tuned to host your websites, applications, and development environments.
+ [Development Instances](/instances/reference-content/development/) are reliable and flexible Instances tuned to host your websites, applications, and development environments.
## Dynamic IP
@@ -50,11 +50,11 @@ Flexible IP addresses are public IP addresses that you can hold independently of
## General Purpose Instances
- [General Purpose Instances](https://www.scaleway.com/en/general-purpose-instances/) are production-grade [Instances](#instance) designed for scalable infrastructures. Development Instances support the boot-on-block feature and allow you to launch high-performance services with high-end CPUs.
+ [General Purpose Instances](/instances/reference-content/general-purpose/) are production-grade [Instances](#instance) designed for scalable infrastructures. Development Instances support the boot-on-block feature and allow you to launch high-performance services with high-end CPUs.
## GPU Instance
- [GPU Instances](https://www.scaleway.com/en/gpu-instances/) are powerful Instances equipped with dedicated high-end Nvidia graphics processing units. See our [dedicated GPU documentation](/gpu/) for more details.
+ [GPU Instances](/gpu/reference-content/choosing-gpu-instance-type/) are powerful Instances equipped with dedicated high-end Nvidia graphics processing units. See our [dedicated GPU documentation](/gpu/) for more details.
## Image
@@ -152,4 +152,4 @@ Tags allow you to organize, sort, filter, and monitor your cloud resources using
## x86 (Intel/AMD) Instances
- [x86 (Intel/AMD) Instances](https://www.scaleway.com/en/cost-optimized-instances/) are reliable and high-performance Instances powered by AMD EPYC processors, tailored for development, testing, production workloads, and general-purpose applications.
+ [x86 (Intel/AMD) Instances](/instances/reference-content/development/) are reliable and high-performance Instances powered by AMD EPYC processors, tailored for development, testing, production workloads, and general-purpose applications.
pages/kubernetes/how-to/use-nvidia-gpu-operator.mdx (1 addition, 1 deletion)
@@ -10,7 +10,7 @@ import Requirements from '@macros/iam/requirements.mdx'
Kubernetes Kapsule and Kosmos support NVIDIA's official Kubernetes operator for all GPU pools.
- This operator is compatible with all Scaleway [GPU Instance](https://www.scaleway.com/en/gpu-instances/) offers.
+ This operator is compatible with all Scaleway [GPU Instance](/gpu/reference-content/choosing-gpu-instance-type/) offers.
The GPU operator is set up for all GPU pools created in Kubernetes Kapsule and Kosmos, providing automated installation of all required software on GPU worker nodes, such as the device plugin, container toolkit, GPU drivers etc. For more information, refer to [the GPU operator overview](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/overview.html).
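With the operator's device plugin running on the GPU worker nodes, workloads consume whole GPUs through the standard `nvidia.com/gpu` extended resource. A minimal sketch of such a pod spec; the image tag is an assumption:

```yaml
# Sketch: pod requesting one full GPU via the device plugin installed
# by the GPU operator. The image tag is an assumption.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-check
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
```

Because the operator also installs the drivers and container toolkit on the node, no further host configuration is needed for this request to be satisfiable.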
tutorials/ark-server/index.mdx (1 addition, 1 deletion)
@@ -52,7 +52,7 @@ In this tutorial, you will learn how to create an ARK server on a [Scaleway Inst
We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
</Message>
- Creating an ARK server can be done in a few steps on a [Scaleway Instance](https://www.scaleway.com/en/cost-optimized-instances/). If you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).
+ Creating an ARK server can be done in a few steps on a [Scaleway Instance](/instances/reference-content/development/). If you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).
<Message type="note">
The ARK: Survival Evolved game server application requires at least 6 GB of RAM to start. Memory requirements increase as the number of connected players increases, as well depending on the activated mods. We recommend that you use at minimum a **DEV1-L** Instance for smooth gameplay.
tutorials/migrate-mysql-databases-postgresql-pgloader/index.mdx (1 addition, 1 deletion)
@@ -17,7 +17,7 @@ pgLoader is an open-source database migration tool developed to simplify the pro
The tool supports migrations from several file types and database engines like [MySQL](https://www.mysql.com/), [MS SQL](https://www.microsoft.com/en-us/sql-server/sql-server-2019) and [SQLite](https://www.sqlite.org/index.html).
- In this tutorial, you learn how to migrate an existing remote MySQL database to a [Database for PostgreSQL](https://www.scaleway.com/en/database/) using pgLoader and an intermediate [Development Instance](https://www.scaleway.com/en/cost-optimized-instances/) running Ubuntu Linux.
+ In this tutorial, you learn how to migrate an existing remote MySQL database to a [Database for PostgreSQL](https://www.scaleway.com/en/database/) using pgLoader and an intermediate [Development Instance](/instances/reference-content/development/) running Ubuntu Linux.
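The migration described in that tutorial reduces to a single pgLoader invocation taking a source and a target connection string. A sketch, where every credential, hostname, and database name is a placeholder rather than a value from the tutorial:

```shell
# Sketch: one-shot MySQL -> PostgreSQL migration with pgLoader.
# All connection-string values below are placeholders.
pgloader mysql://mysql_user:password@mysql-host/source_db \
         postgresql://pg_user:password@pg-host:5432/target_db
```

pgLoader creates the target schema, converts data types, and streams the rows in one pass; run it from the intermediate Instance so it can reach both databases.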
- In this tutorial, you will learn how the Private Network feature can help you to build a distributed [Rocket.Chat](/tutorials/run-messaging-platform-with-rocketchat/) application on [General Purpose](https://www.scaleway.com/en/cost-optimized-instances/) and [Development](https://www.scaleway.com/en/cost-optimized-instances/) Instances using a Private Network to communicate securely between them:
+ In this tutorial, you will learn how the Private Network feature can help you to build a distributed [Rocket.Chat](/tutorials/run-messaging-platform-with-rocketchat/) application on [General Purpose](/instances/reference-content/development/) and [Development](/instances/reference-content/development/) Instances using a Private Network to communicate securely between them:
tutorials/setup-minecraft/index.mdx (1 addition, 1 deletion)
@@ -53,7 +53,7 @@ The Minecraft server is a Java application and runs perfectly on [Scaleway Insta
- `sudo` privileges or access to the root user
- A copy of the [Minecraft game client](https://www.minecraft.net/en-us/) for your local computer
- Deploying your own Minecraft server can be done in a few easy steps on a [Scaleway Development Instance](https://www.scaleway.com/en/cost-optimized-instances/). In case you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).
+ Deploying your own Minecraft server can be done in a few easy steps on a [Scaleway Development Instance](/instances/reference-content/development/). In case you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).
<Message type="tip">
We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/) with at least 8GB of RAM.
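Once Java and the server jar are installed on the Instance, starting the server is a single command. A sketch matching the 8 GB RAM guidance above; `server.jar` is the default download name and the heap sizes are illustrative assumptions, not values from the tutorial:

```shell
# Sketch: start the Minecraft server headless with an 8 GB maximum heap.
# server.jar is the default jar name; scale -Xmx to your Instance's RAM.
java -Xms2G -Xmx8G -jar server.jar nogui
```

Running this inside `screen` or `tmux`, or as a systemd service, keeps the server alive after you disconnect from SSH.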