Today, given the relatively low compute performance of the aging NVIDIA K80 GPU platform compared to VM series featuring newer GPUs, a popular use case for the NC-series is real-time inference and analytics workloads, where an accelerated VM must be available in a steady state to serve requests from applications as they arrive. In these cases, the volume or batch size of requests may be insufficient to benefit from more performant GPUs. NC VMs are also popular with developers and students learning about, developing for, or experimenting with GPU acceleration, who need an inexpensive cloud-based CUDA deployment target to iterate on that doesn’t need to perform at production levels.
In general, NC-Series customers should consider moving directly across from NC sizes to [NC T4 v3](./nct4-v3-series.md) sizes, Azure’s new GPU-accelerated platform for light workloads powered by NVIDIA Tesla T4 GPUs.
| Current VM Size | Target VM Size | Difference in Specification |
|---|---|---|
| Standard_NC6 <br> Standard_NC6_Promo | Standard_NC4as_T4_v3 <br>or<br>Standard_NC8as_T4_v3 | CPU: Intel Haswell vs AMD Rome<br>GPU count: 1 (same)<br>GPU generation: NVIDIA Kepler vs. Turing (+2 generations, ~2x FP32 FLOPs)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 4 (-2) or 8 (+2)<br>Memory GiB: 16 (-40) or 56 (same)<br>Temp Storage (SSD) GiB: 180 (-160) or 360 (+20)<br>Max data disks: 8 (-4) or 16 (+4)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) |
| Standard_NC24r<br>Standard_NC24r_Promo | Standard_NC64as_T4_v3* | CPU: Intel Haswell vs AMD Rome<br>GPU count: 4 (same)<br>GPU generation: NVIDIA Kepler vs. Turing (+2 generations, ~2x FP32 FLOPs)<br>GPU memory (GiB per GPU): 16 (+4)<br>vCPU: 64 (+40)<br>Memory GiB: 440 (+216)<br>Temp Storage (SSD) GiB: 2880 (+1440)<br>Max data disks: 32 (-32)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+)<br>InfiniBand interconnect: No |
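
Once a target size from the table above is available in your region, the move itself is typically a deallocate-and-resize operation on the existing VM. The following is a minimal sketch using the Azure SDK for Python (the `azure-identity` and `azure-mgmt-compute` packages); the subscription ID, resource group, VM name, and target size are placeholders you would substitute for your own environment.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholders: substitute your own subscription, resource group, VM name, and target size.
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id="<subscription-id>")
resource_group, vm_name = "<resource-group>", "<vm-name>"
target_size = "Standard_NC4as_T4_v3"

# Deallocate first; moving to a different hardware family generally requires it.
compute.virtual_machines.begin_deallocate(resource_group, vm_name).result()

# Update the size on the VM model, push the change, then start the VM again.
vm = compute.virtual_machines.get(resource_group, vm_name)
vm.hardware_profile.vm_size = target_size
compute.virtual_machines.begin_create_or_update(resource_group, vm_name, vm).result()
compute.virtual_machines.begin_start(resource_group, vm_name).result()
```

After the VM starts on the new size, validate or reinstall the NVIDIA driver and CUDA stack, since the GPU generation changes from Kepler to Turing.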
### NC v2-Series VMs featuring NVIDIA Tesla P100 GPUs
The NC v2-series virtual machines are a flagship platform originally designed for AI and Deep Learning workloads. They offer excellent performance for Deep Learning training, with per-GPU performance roughly 2x that of the original NC-Series, and are powered by NVIDIA Tesla P100 GPUs and Intel Xeon E5-2690 v4 (Broadwell) CPUs. Like the NC- and ND-Series, the NC v2-Series offers a configuration with a secondary low-latency, high-throughput network through RDMA and InfiniBand connectivity, so you can run large-scale training jobs spanning many GPUs.
In general, NCv2-Series customers should consider moving directly across to [NC A100 v4](./nc-a100-v4-series.md) sizes, Azure’s new GPU-accelerated platform powered by NVIDIA Ampere A100 PCIe GPUs.
| Current VM Size | Target VM Size | Difference in Specification |
|---|---|---|
| Standard_NC6s_v2 | Standard_NC24ads_A100_v4 | CPU: Intel Broadwell vs AMD Milan<br>GPU count: 1 (same)<br>GPU generation: NVIDIA Pascal vs. Ampere (+2 generations)<br>GPU memory (GiB per GPU): 80 (+64)<br>vCPU: 24 (+18)<br>Memory GiB: 220 (+108)<br>Temp Storage (SSD) GiB: 1123 (+387)<br>Max data disks: 12 (same)<br>Accelerated Networking: Yes (+)<br>Premium Storage: Yes (+) |
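
Before committing to a resize, it can also help to confirm that the target size is offered in the VM's region and isn't restricted for your subscription. A small sketch along the same lines, again using `azure-mgmt-compute` (region and size are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id="<subscription-id>")

region = "<region>"                       # region you plan to deploy in
target_size = "Standard_NC24ads_A100_v4"  # size you are migrating to

# List compute SKUs offered in the region and report any restrictions on the target size.
for sku in compute.resource_skus.list(filter=f"location eq '{region}'"):
    if sku.resource_type == "virtualMachines" and sku.name == target_size:
        reasons = [r.reason_code for r in (sku.restrictions or [])]
        print(f"{sku.name} is offered in {region}; restrictions: {reasons or 'none'}")
```

If the size is restricted or absent, request quota for the target family or choose another region before migrating.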