
Commit d1ff076

docs(ins): replace links
1 parent 0463499 commit d1ff076

9 files changed, +13 -13 lines changed


pages/gpu/how-to/use-gpu-with-docker.mdx

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ Docker is a platform as a service (PaaS) tool that uses OS-level virtualization

 Unlike virtual machines, containers share the services of a single operating system kernel. This reduces unnecessary overhead and makes them lightweight and portable. Docker containers can run on any computer running macOS, Windows, or Linux, either on-premises or in a public cloud environment, such as Scaleway.

-All [Scaleway GPU Instances](https://www.scaleway.com/en/gpu-instances/) come with prebuilt Docker images which can be launched as soon as you connect to your Instance. Each image provides a different AI environment. When you launch one of these images, you are in your chosen environment within seconds with all your favorite Python packages already installed. Using Docker for your AI projects in this way allows you to ensure that your working environments are both **isolated** and **portable**, since they are in containers that can be easily transferred between machines.
+All [Scaleway GPU Instances](/gpu/reference-content/choosing-gpu-instance-type/) come with prebuilt Docker images which can be launched as soon as you connect to your Instance. Each image provides a different AI environment. When you launch one of these images, you are in your chosen environment within seconds with all your favorite Python packages already installed. Using Docker for your AI projects in this way allows you to ensure that your working environments are both **isolated** and **portable**, since they are in containers that can be easily transferred between machines.

 You can also run Docker images provided by other sources and use them with your GPU Instance - for instance, you might want to use Docker images provided by NVIDIA, Google, etc. Alternatively, you could also choose to build your own Docker images.

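As a quick check for the page touched above: GPU access from a container can be verified with a single command. This is a minimal sketch, assuming Docker and the NVIDIA Container Toolkit are already present (they ship with the prebuilt GPU Instance images); the CUDA image tag is only an example.

```bash
# Minimal sketch: confirm a container can see the GPU.
# Assumes Docker and the NVIDIA Container Toolkit are installed; the image tag is an example.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```
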
pages/gpu/how-to/use-mig-with-kubernetes.mdx

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
 <Requirements />

 - A Scaleway account logged into the [console](https://console.scaleway.com)
-- A [Kubernetes cluster](/kubernetes/quickstart/#how-to-create-a-kubernetes-cluster) with a [GPU Instance](https://www.scaleway.com/en/gpu-instances/) as node
+- A [Kubernetes cluster](/kubernetes/quickstart/#how-to-create-a-kubernetes-cluster) with a [GPU Instance](/gpu/reference-content/choosing-gpu-instance-type/) as node

 <Message type="tip">
   MIG is fully supported on [Scaleway managed Kubernetes](/kubernetes/quickstart/) clusters (Kapsule and Kosmos).

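For orientation on the page above: once MIG partitions are exposed to Kubernetes, a workload requests a slice through an extended resource. The sketch below assumes the device plugin advertises a `nvidia.com/mig-1g.10gb` profile; the actual resource names depend on the MIG strategy and partition layout configured on the node, and the CUDA image is only an example.

```bash
# Illustrative pod requesting one MIG slice; the resource name depends on the
# partition profiles actually advertised by the node.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mig-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["nvidia-smi", "-L"]
      resources:
        limits:
          nvidia.com/mig-1g.10gb: 1
EOF
```
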
pages/gpu/reference-content/choosing-gpu-instance-type.mdx

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@ It empowers European AI startups, giving them the tools (without the need for a

 ## How to choose the right GPU Instance type

-Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](https://www.scaleway.com/en/gpu-instances/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
+Scaleway provides a range of GPU Instance offers, from [GPU RENDER Instances](https://www.scaleway.com/en/gpu-render-instances/) and [H100 SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/) to [custom build clusters](https://www.scaleway.com/en/ai-supercomputers/). There are several factors to consider when choosing the right GPU Instance type to ensure that it meets your performance, budget, and scalability requirements.
 Below, you will find a guide to help you make an informed decision:

 * **Workload requirements:** Identify the nature of your workload. Are you running machine learning, deep learning, high-performance computing (HPC), data analytics, or graphics-intensive applications? Different Instance types are optimized for different types of workloads. For example, the H100 is not designed for graphics rendering. However, other models are. As [stated by Tim Dettmers](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/), “Tensor Cores are most important, followed by the memory bandwidth of a GPU, the cache hierarchy, and only then FLOPS of a GPU.”. For more information, refer to the [NVIDIA GPU portfolio](https://docs.nvidia.com/data-center-gpu/line-card.pdf).
@@ -28,7 +28,7 @@ Below, you will find a guide to help you make an informed decision:
 * **Scaling:** Consider the scalability requirements of your workload. The most efficient way to scale up your workload is by using:
   * Bigger GPU
   * Up to 2 PCIe GPU with [H100 Instances](https://www.scaleway.com/en/h100-pcie-try-it-now/) or 8 PCIe GPU with [L4](https://www.scaleway.com/en/l4-gpu-instance/) or [L4OS](https://www.scaleway.com/en/contact-l40s/) Instances.
-  * Or better, an HGX-based server setup with up to 8x NVlink GPUs with [H100-SXM Instances](https://www.scaleway.com/en/gpu-instances/)
+  * Or better, an HGX-based server setup with up to 8x NVlink GPUs with [H100-SXM Instances](/gpu/reference-content/choosing-gpu-instance-type/)
   * A [supercomputer architecture](https://www.scaleway.com/en/ai-supercomputers/) for a larger setup for workload-intensive tasks
   * Another way to scale your workload is to use [Kubernetes and MIG](/gpu/how-to/use-nvidia-mig-technology/): You can divide a single H100 or H100-SXM GPU into as many as 7 MIG partitions. This means that instead of employing seven P100 GPUs to set up seven K8S pods, you could opt for a single H100 GPU with MIG to effectively deploy all seven K8S pods.
 * **Online resources:** Check for online resources, forums, and community discussions related to the specific GPU type you are considering. This can provide insights into common issues, best practices, and optimizations.

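On the MIG point in the page above: partitioning is driven by `nvidia-smi` on the GPU node itself. A sketch; profile IDs and names vary by GPU model, so list them before creating anything.

```bash
# Enable MIG mode on GPU 0, then list the partition profiles this card supports.
sudo nvidia-smi -i 0 -mig 1
sudo nvidia-smi mig -lgip
# Create up to 7 GPU instances (and their compute instances) from the listed profiles, e.g.:
# sudo nvidia-smi mig -cgi <profile-id>,<profile-id>,... -C
```
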
pages/instances/concepts.mdx

Lines changed: 4 additions & 4 deletions
@@ -36,7 +36,7 @@ Cloud-init is a multi-distribution package that [provides boot time customizatio

 ## Development Instance

-[Development Instances](https://www.scaleway.com/en/cost-optimized-instances/) are reliable and flexible Instances tuned to host your websites, applications, and development environments.
+[Development Instances](/instances/reference-content/development/) are reliable and flexible Instances tuned to host your websites, applications, and development environments.

 ## Dynamic IP

@@ -50,11 +50,11 @@ Flexible IP addresses are public IP addresses that you can hold independently of

 ## General Purpose Instances

-[General Purpose Instances](https://www.scaleway.com/en/general-purpose-instances/) are production-grade [Instances](#instance) designed for scalable infrastructures. Development Instances support the boot-on-block feature and allow you to launch high-performance services with high-end CPUs.
+[General Purpose Instances](/instances/reference-content/general-purpose/) are production-grade [Instances](#instance) designed for scalable infrastructures. Development Instances support the boot-on-block feature and allow you to launch high-performance services with high-end CPUs.

 ## GPU Instance

-[GPU Instances](https://www.scaleway.com/en/gpu-instances/) are powerful Instances equipped with dedicated high-end Nvidia graphics processing units. See our [dedicated GPU documentation](/gpu/) for more details.
+[GPU Instances](/gpu/reference-content/choosing-gpu-instance-type/) are powerful Instances equipped with dedicated high-end Nvidia graphics processing units. See our [dedicated GPU documentation](/gpu/) for more details.

 ## Image

@@ -152,4 +152,4 @@ Tags allow you to organize, sort, filter, and monitor your cloud resources using

 ## x86 (Intel/AMD) Instances

-[x86 (Intel/AMD) Instances](https://www.scaleway.com/en/cost-optimized-instances/) are reliable and high-performance Instances powered by AMD EPYC processors, tailored for development, testing, production workloads, and general-purpose applications.
+[x86 (Intel/AMD) Instances](/instances/reference-content/development/) are reliable and high-performance Instances powered by AMD EPYC processors, tailored for development, testing, production workloads, and general-purpose applications.

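The Instance families named in this glossary correspond to commercial types in the Scaleway CLI. A small sketch, assuming a configured scw v2 CLI; the grep pattern picks out a few example families and is not an exhaustive mapping.

```bash
# List available commercial types and filter a few of the families mentioned above.
# Assumes a configured scw v2 CLI; the pattern is only an example.
scw instance server-type list | grep -E 'DEV1|GP1|H100'
```
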
pages/kubernetes/how-to/use-nvidia-gpu-operator.mdx

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ import Requirements from '@macros/iam/requirements.mdx'


 Kubernetes Kapsule and Kosmos support NVIDIA's official Kubernetes operator for all GPU pools.
-This operator is compatible with all Scaleway [GPU Instance](https://www.scaleway.com/en/gpu-instances/) offers.
+This operator is compatible with all Scaleway [GPU Instance](/gpu/reference-content/choosing-gpu-instance-type/) offers.

 The GPU operator is set up for all GPU pools created in Kubernetes Kapsule and Kosmos, providing automated installation of all required software on GPU worker nodes, such as the device plugin, container toolkit, GPU drivers etc. For more information, refer to [the GPU operator overview](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/overview.html).

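A quick way to see the operator's effect described above: once its device plugin is running, GPU capacity appears as an allocatable resource on the nodes. A sketch; the column names are arbitrary, and the escaped dot in the JSONPath is required by `custom-columns`.

```bash
# Show how many GPUs each node advertises once the operator's device plugin is up.
kubectl get nodes -o custom-columns='NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
```
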
tutorials/ark-server/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ In this tutorial, you will learn how to create an ARK server on a [Scaleway Inst
   We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/).
 </Message>

-Creating an ARK server can be done in a few steps on a [Scaleway Instance](https://www.scaleway.com/en/cost-optimized-instances/). If you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).
+Creating an ARK server can be done in a few steps on a [Scaleway Instance](/instances/reference-content/development/). If you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).

 <Message type="note">
   The ARK: Survival Evolved game server application requires at least 6 GB of RAM to start. Memory requirements increase as the number of connected players increases, as well depending on the activated mods. We recommend that you use at minimum a **DEV1-L** Instance for smooth gameplay.

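For orientation on the tutorial above, the dedicated server itself is fetched through SteamCMD. A sketch: the install path is arbitrary, and 376030 is the commonly cited Steam app ID for the ARK: Survival Evolved dedicated server, so verify it before relying on it.

```bash
# Fetch or update the ARK dedicated server files with SteamCMD.
# /opt/ark is an arbitrary path; 376030 is the commonly cited dedicated-server app ID (verify it).
steamcmd +force_install_dir /opt/ark +login anonymous +app_update 376030 validate +quit
```
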
tutorials/migrate-mysql-databases-postgresql-pgloader/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ pgLoader is an open-source database migration tool developed to simplify the pro

 The tool supports migrations from several file types and database engines like [MySQL](https://www.mysql.com/), [MS SQL](https://www.microsoft.com/en-us/sql-server/sql-server-2019) and [SQLite](https://www.sqlite.org/index.html).

-In this tutorial, you learn how to migrate an existing remote MySQL database to a [Database for PostgreSQL](https://www.scaleway.com/en/database/) using pgLoader and an intermediate [Development Instance](https://www.scaleway.com/en/cost-optimized-instances/) running Ubuntu Linux.
+In this tutorial, you learn how to migrate an existing remote MySQL database to a [Database for PostgreSQL](https://www.scaleway.com/en/database/) using pgLoader and an intermediate [Development Instance](/instances/reference-content/development/) running Ubuntu Linux.

 <Requirements />

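For context on the tutorial above, the core of the migration is a single pgLoader invocation that takes source and target connection strings. Hosts, credentials, and database names below are placeholders.

```bash
# One-shot MySQL -> PostgreSQL migration; all connection details are placeholders.
pgloader mysql://app_user:password@mysql-host/source_db \
         postgresql://db_user:password@pg-host:5432/target_db
```
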
tutorials/mutli-node-rocket-chat-community-private-network/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ import image11 from './assets/scaleway-rc_admin.webp'
 import Requirements from '@macros/iam/requirements.mdx'


-In this tutorial, you will learn how the Private Network feature can help you to build a distributed [Rocket.Chat](/tutorials/run-messaging-platform-with-rocketchat/) application on [General Purpose](https://www.scaleway.com/en/cost-optimized-instances/) and [Development](https://www.scaleway.com/en/cost-optimized-instances/) Instances using a Private Network to communicate securely between them:
+In this tutorial, you will learn how the Private Network feature can help you to build a distributed [Rocket.Chat](/tutorials/run-messaging-platform-with-rocketchat/) application on [General Purpose](/instances/reference-content/development/) and [Development](/instances/reference-content/development/) Instances using a Private Network to communicate securely between them:

 <Lightbox image={image} alt="" />

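For readers of the tutorial above, the Instances are joined over a Private Network, which can be set up from the console or the CLI. A CLI sketch, assuming the scw v2 CLI; both IDs are placeholders, and the attach command is repeated per Instance.

```bash
# Create a Private Network, then attach an Instance to it through a private NIC.
# IDs are placeholders; repeat the second command for each Instance in the setup.
scw vpc private-network create name=rocketchat-net
scw instance private-nic create server-id=<instance-id> private-network-id=<private-network-id>
```
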
tutorials/setup-minecraft/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ The Minecraft server is a Java application and runs perfectly on [Scaleway Insta
 - `sudo` privileges or access to the root user
 - A copy of the [Minecraft game client](https://www.minecraft.net/en-us/) for your local computer

-Deploying your own Minecraft server can be done in a few easy steps on a [Scaleway Development Instance](https://www.scaleway.com/en/cost-optimized-instances/). In case you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).
+Deploying your own Minecraft server can be done in a few easy steps on a [Scaleway Development Instance](/instances/reference-content/development/). In case you do not have an Instance yet, start by [deploying your first Instance](/instances/how-to/create-an-instance/).

 <Message type="tip">
   We recommend you follow this tutorial using a [General Purpose Instance](/instances/reference-content/general-purpose/) with at least 8GB of RAM.

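As a quick reference for the tutorial above, the server runs as a plain Java process once the server jar is in place. The heap sizes and jar name below are examples; size the heap to the Instance's RAM and accept the EULA on first run.

```bash
# Start the Minecraft server headless with a fixed heap; adjust -Xms/-Xmx to your Instance.
java -Xms6G -Xmx6G -jar server.jar nogui
```
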