The [azurehpc repo](https://github.com/Azure/azurehpc) contains many examples of:
- Setting up and running [applications](https://github.com/Azure/azurehpc/tree/master/apps) optimally.
- Configuration of [file systems, and clusters](https://github.com/Azure/azurehpc/tree/master/examples).
- [Tutorials](https://github.com/Azure/azurehpc/tree/master/tutorials) on how to get started easily with some common application workflows.

## Optimally scaling MPI
The following suggestions apply for optimal application scaling efficiency, performance, and consistency:

- For smaller scale jobs (< 256 K connections) use the option:

  ```bash
  UCX_TLS=rc,sm
  ```

- For larger scale jobs (> 256 K connections) use the option:

  ```bash
  UCX_TLS=dc,sm
  ```

- To calculate the number of connections for your MPI job, use the following formula (a combined sketch follows this list):

  ```bash
  Max Connections = (processes per node) x (number of nodes per job) x (number of nodes per job)
  ```
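
Putting these pieces together, the following is a minimal sketch (not from the article) that computes the connection count from the formula above for a hypothetical job and exports the matching `UCX_TLS` value; the `PPN` and `NODES` values are placeholders to replace with your own job layout.

```bash
# Hypothetical helper: pick a UCX transport from the estimated connection count.
# PPN and NODES are placeholders; adjust them to match your job.
PPN=120      # MPI processes per node
NODES=64     # nodes in the job

CONNECTIONS=$(( PPN * NODES * NODES ))

if [ "$CONNECTIONS" -lt $(( 256 * 1024 )) ]; then
    export UCX_TLS=rc,sm    # smaller scale: reliable connected + shared memory
else
    export UCX_TLS=dc,sm    # larger scale: dynamically connected + shared memory
fi

echo "Estimated connections: $CONNECTIONS, UCX_TLS=$UCX_TLS"
```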
## Adaptive Routing
Adaptive Routing (AR) allows Azure Virtual Machines (VMs) running EDR and HDR InfiniBand to automatically detect and avoid network congestion by dynamically selecting optimal network paths. As a result, AR offers improved latency and bandwidth on the InfiniBand network, which in turn drives higher performance and scaling efficiency. For more information, see the [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/adaptive-routing-on-azure-hpc/ba-p/1205217).

## Process pinning
- Pin processes to cores using a sequential pinning approach (as opposed to an autobalance approach).
- Binding by Numa/Core/HwThread is better than default binding.
- For hybrid parallel applications (OpenMP+MPI), use 4 threads and 1 MPI rank per [CCX](/azure/virtual-machines/hb-series-overview) on HB and HBv2 VM sizes (an example launch line follows this list).
- For pure MPI applications, experiment with 1-4 MPI ranks per CCX for optimal performance on HB and HBv2 VM sizes.
- Some applications with extreme sensitivity to memory bandwidth may benefit from using a reduced number of cores per CCX. For these applications, using three or two cores per CCX may reduce memory bandwidth contention and yield higher real-world performance or more consistent scalability. In particular, MPI `Allreduce` may benefit from this approach.
- For larger scale runs, it's recommended to use UD or hybrid RC+UD transports. Many MPI libraries/runtime libraries do this internally (such as UCX or MVAPICH2). Check your transport configurations for large-scale runs.

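As an illustration of the pinning suggestions above, the following sketch assumes Open MPI/HPC-X on a single HBv2 node (120 cores, 30 CCXs of four cores each); the rank count, thread count, and application name are placeholders, and the flag spellings differ for other MPI libraries such as Intel MPI or MVAPICH2.

```bash
# Hybrid OpenMP+MPI pinning sketch for one HBv2 node, assuming Open MPI/HPC-X.
# Each CCX shares an L3 cache, so mapping one rank per L3 cache approximates
# one MPI rank per CCX with four OpenMP threads pinned inside it.
export OMP_NUM_THREADS=4      # 4 OpenMP threads per MPI rank
export OMP_PLACES=cores       # place threads on physical cores
export OMP_PROC_BIND=close    # keep a rank's threads on adjacent cores

# 30 ranks = 1 rank per CCX on a 120-core HBv2 VM; pe=4 reserves 4 cores per rank.
# ./my_hpc_app is a placeholder application name.
mpirun -np 30 --map-by ppr:1:l3cache:pe=4 --report-bindings ./my_hpc_app
```

The `--report-bindings` output is a quick way to confirm that ranks and threads land sequentially within each CCX before launching a full-scale run.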
## Compiling applications
<br>
<details>
### FLANG
The FLANG compiler is a recent addition to the AOCC suite (added April 2018) and is currently in prerelease for developers to download and test. Based on Fortran 2008, AMD extends the GitHub version of FLANG (https://github.com/flang-compiler/flang). The FLANG compiler supports all Clang compiler options and a number of additional FLANG-specific compiler options.
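
For illustration only, a minimal Fortran compile line with FLANG might look like the following; the Clang-style optimization and target flags, and the source file name, are assumptions for this sketch rather than options documented in this article.

```bash
# Hypothetical FLANG compile line; my_solver.f90 is a placeholder source file.
# -march=znver1 targets the Zen architecture used by HB-series VMs.
flang -O3 -march=znver1 -o my_solver my_solver.f90
```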
For HPC workloads, AMD recommends GCC compiler 7.3 or newer. Older versions, such as 4.8.5 included with RHEL/CentOS 7.4, aren't recommended. GCC 7.3 and newer delivers higher performance on HPL, HPCG, and DGEMM tests.
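
As a minimal sketch (assuming GCC 7.3 or newer and a placeholder source file), a compile line targeting the Zen-based HB-series processors might look like:

```bash
# Hypothetical compile line with a recent GCC on an HB-series (Zen) VM.
# my_hpc_app.c is a placeholder; HBv2 (Zen 2) users with GCC 9+ can try
# -march=znver2 instead of -march=znver1.
gcc -O3 -march=znver1 -fopenmp -o my_hpc_app my_hpc_app.c
```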