content:
categories:
  - compute
dates:
  validation: 2025-04-30
  posted: 2025-04-18
tags: instance, cpu, vcpu, shared, dedicated
---

When deploying a Scaleway Instance, selecting the appropriate CPU provisioning is key to optimizing performance and cost.

All Instances are deployed on large, powerful physical servers powered by the latest datacenter-grade CPUs. A hypervisor splits each large physical machine into smaller virtual units.

The hypervisor manages the allocation and scheduling of physical resources such as CPU and RAM. While RAM is always dedicated to each Instance, two types of CPU provisioning are available: **shared** and **dedicated** vCPUs.

Understanding the difference between these two techniques is key to making an informed decision about the best Instance for your application.

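From inside a running Instance, the vCPUs allocated by the hypervisor appear as ordinary logical CPUs to the guest operating system. As a minimal illustration (not part of the Scaleway documentation itself), the following standard-library Python sketch, assuming a Linux guest, shows what an Instance sees:

```python
import os

# Number of logical CPUs (vCPUs) the hypervisor exposes to this Instance.
vcpus = os.cpu_count()
print(f"vCPUs allocated: {vcpus}")

# On Linux x86 guests, /proc/cpuinfo also reports the underlying CPU model
# for each vCPU (the "model name" field may be absent on some architectures).
models = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("model name"):
            models.add(line.split(":", 1)[1].strip())
if models:
    print("CPU model(s):", ", ".join(sorted(models)))
```

Note that the guest sees the same vCPU count whether the provisioning is shared or dedicated; the difference lies in how the hypervisor schedules those vCPUs onto physical cores.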
## Comparison of shared and dedicated vCPU Instances

| Feature | Shared vCPU Instances | Dedicated vCPU Instances |
|---|---|---|
| CPU access | Physical cores shared across multiple Instances | Exclusive access to physical CPU cores |
| Isolation | Strong virtual isolation, no data sharing between Instances | Full physical resource isolation |
| Performance consistency | Variable – depends on other workloads on the host | High – consistent and predictable performance |
| Resource contention risk | Possible during peak usage | None |
| Latency sensitivity | Not suitable for latency-sensitive apps | Ideal for latency-critical applications |
| Cost | Lower | Higher |
| Use case | Dev/staging, personal projects, blogs, low-traffic sites | Production apps, eCommerce, CI/CD, ML, real-time processing |
| Best for | Non-critical or experimental workloads | Business-critical, latency-sensitive or high-performance workloads |

## Shared vCPU Instances

Shared vCPU Instances, including [Learning](/instances/reference-content/learning/) and [Cost-Optimized](/instances/reference-content/cost-optimized/), are cost-effective virtual machines in which CPU resources are shared among multiple Instances.

This means that while each Instance gets its own vCPUs, these vCPUs are scheduled on physical cores that are shared across multiple Instances.

As a result, Instances share physical CPU time, and during peak demand from other Instances on the same host, your workloads might temporarily slow down due to CPU contention (also known as "CPU steal").

While physical CPU threads are shared between Instances, vCPUs are dedicated to each Instance, and no data can be shared or accessed between Instances through this setup.

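CPU steal can be observed from inside the guest: on Linux, the kernel exposes cumulative steal time (time a vCPU spent ready to run while the hypervisor was running other guests) as the eighth field of the `cpu` line in `/proc/stat`, and tools such as `top` and `vmstat` report it as `%st`/`st`. As an illustrative sketch (not part of the official documentation), the following samples steal as a fraction of total CPU time over one second:

```python
import time

def cpu_times():
    # First line of /proc/stat:
    # "cpu user nice system idle iowait irq softirq steal guest guest_nice"
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    steal = fields[7] if len(fields) > 7 else 0  # steal is the 8th field
    return steal, sum(fields)

s1, t1 = cpu_times()
time.sleep(1)
s2, t2 = cpu_times()

# Percentage of CPU time stolen by the hypervisor during the interval.
steal_pct = 100 * (s2 - s1) / max(t2 - t1, 1)
print(f"CPU steal over the last second: {steal_pct:.2f}%")
```

A persistently high steal percentage on a shared vCPU Instance indicates contention from neighboring workloads; on a dedicated vCPU Instance it should stay at or near zero.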
### Typical use cases

- Development and staging environments
- Small and non-critical production environments
- Low to medium-traffic websites
- Personal blogs and forums
- Applications tolerant to occasional performance variability
- Worker nodes in container orchestration clusters such as Kubernetes
- Experimental or proof-of-concept projects
- Small-scale applications with limited traffic

### Summary

- Shared vCPU Instances provide an affordable solution for non-critical workloads.
- CPU performance is less predictable and may fluctuate depending on neighboring workloads ("noisy neighbors").
- During peak usage, your workloads might experience temporary slowdowns due to CPU steal.

## Dedicated vCPU Instances

Dedicated vCPU Instances, including [Production-Optimized](/instances/reference-content/production-optimized/) and [Workload-Optimized](/instances/reference-content/workload-optimized/), provide exclusive access to physical CPU cores.

This ensures consistent and predictable performance at all times. Dedicated vCPU Instances are perfect for applications that require high CPU utilization and low latency.

### Typical use cases

- Production applications with high CPU demands
- eCommerce platforms and business-critical services
- Game servers requiring low latency
- CI/CD pipelines
- Audio and video transcoding
- Machine learning and scientific computing
- Real-time data processing and analytics
- High-traffic websites and applications

### Summary

- Dedicated vCPU allocation ensures consistent and predictable performance.
- No risk of performance degradation due to neighboring workloads.
- Dedicated vCPU Instances are more expensive than shared vCPU Instances but offer guaranteed CPU performance.

## Choosing the best configuration

Choose **shared vCPU** Instances if:
- You are running non-critical or experimental workloads
- Budget is a priority over performance consistency

Choose **dedicated vCPU** Instances if:
- Your application requires stable, predictable CPU performance
- You are in a production environment with strict performance requirements

Consider your needs and workload requirements to choose the best vCPU provisioning option for your Scaleway Instance.

For more details about available Instance types, refer to [Choosing the best Scaleway Instance type for your workload](/instances/reference-content/choosing-instance-type/).